Text Material Preview
NCP-US Nutanix Certified Professional – Unified Storage (NCP-US) v6 exam dumps questions are the best material for testing all the related Nutanix exam topics. By using the NCP-US exam dumps questions and practicing your skills, you can increase your confidence and your chances of passing the NCP-US exam.

Features of Dumpsinfo’s products:
Instant Download
Free Update in 3 Months
Money Back Guarantee
PDF and Software
24/7 Customer Support

Besides, Dumpsinfo also provides unlimited access. You can get all Dumpsinfo files at the lowest price.

Nutanix Certified Professional – Unified Storage (NCP-US) v6 NCP-US free dumps questions are available below for you to study.
Full version: NCP-US Exam Dumps Questions
https://www.dumpsinfo.com/unlimited-access/
https://www.dumpsinfo.com/exam/ncp-us

1. Refer to the exhibit.
When creating an Object Store, an administrator receives an error, as shown in the exhibit.
How should the administrator resolve the error?
A. Select a different address.
B. Provide the required IP addresses in CIDR format.
C. Select a different subnet.
D. Provide the required IP addresses in a list format.
Answer: B
Explanation:
According to Nutanix Support & Insights, "The IP address consumption for the objects infra network is as follows: ... From DHCP Pool: Each Worker VM requires one IP Address." Therefore, the administrator should provide a range of IP addresses in CIDR notation (for example, 10.0.0.0/24) that can be used by the Objects worker VMs.
https://portal.nutanix.com/page/documents/solutions/details?targetId=NVD-2151-Unified-Storage:nutanix-objects.html

2. An administrator needs to deploy a new Linux log collector package which creates a directory for each monitored item. The logs will be analyzed by a Windows application.
Which action should the administrator take to provide the best performance and simplicity?
A. Create an Objects bucket with versioning enabled.
B. Assign a Volumes vDisk to the Linux collector VM.
C. Create a Files distributed share with multi-protocol access.
D. Configure File Analytics to analyze the collected logs.
Answer: C
Explanation:
Creating a Files distributed share with multi-protocol access allows easy and efficient transfer of log files between the Linux collector package and the Windows application. This provides simplicity in terms of configuration and management, and also ensures that the log files are stored in a distributed and highly available manner, making them easy to access when required. Files is a software-defined, scale-out file storage solution that provides a single namespace for unstructured data. Files supports both SMB and NFS protocols, which means the same share can be accessed from both Linux and Windows machines. This provides the best performance and simplicity for the log collector package and the analysis application.

3. An administrator has deployed a new backup software suite and needs to meet the following requirements:
A. use S3-compatible storage.
B. provide one-year retention.
C. protect from deletions or overwrites for one year.
D. meet regulatory requirements.
Answer: A
Explanation:
✑ The administrator should use Nutanix Objects as the backup target.
✑ Nutanix Objects is an S3-compatible storage solution that provides one-year retention and protection from deletions or overwrites for one year using WORM (Write Once Read Many) policies.
✑ WORM policies can help meet regulatory requirements such as SEC 17a-4(f), FINRA 4511(c), CFTC 1.31(c)-(d), and Rule 204-2 of the Investment Advisers Act of 1940.
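Because Objects exposes an S3-compatible API, the WORM behavior described above can be sketched with a generic S3 client. This is a minimal illustration under stated assumptions, not the documented Objects procedure (WORM is normally configured when the bucket is created); the boto3 call, endpoint URL, and bucket name in the comments are hypothetical:

```python
def worm_retention_config(years=1):
    """Build an S3 Object Lock configuration enforcing WORM retention.

    COMPLIANCE mode prevents any user from deleting or overwriting
    object versions until the retention period expires.
    """
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {
                "Mode": "COMPLIANCE",
                "Years": years,
            }
        },
    }

# Applying it with boto3 against an S3-compatible endpoint
# (endpoint URL, bucket name, and credentials are hypothetical):
# import boto3
# s3 = boto3.client("s3", endpoint_url="https://objects.example.com")
# s3.put_object_lock_configuration(
#     Bucket="backup-bucket",
#     ObjectLockConfiguration=worm_retention_config(years=1),
# )

print(worm_retention_config()["Rule"]["DefaultRetention"])
# → {'Mode': 'COMPLIANCE', 'Years': 1}
```

COMPLIANCE mode (rather than GOVERNANCE) is what maps to the "protect from deletions or overwrites" requirement, since even privileged users cannot shorten or remove the retention period.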
https://www.nutanix.com/sg/support-services/training-certification/certifications/certification-details-nutanix-certified-professional-unified-storage-v6
https://next.nutanix.com/education-blog-153/nutanix-unified-storage-v6-5-training-now-available-a-special-cert-offer-41673

4. An administrator is managing a Nutanix cluster used for Citrix full clone virtual desktops. Capacity Runway is in a warning state, indicating storage resources may be fully utilized within 90 days.
Which action can be performed to increase current capacity?
A. Enable deduplication at the container level.
B. Power off over-provisioned VMs.
C. Analyze inefficient VMs and delete the inactive ones.
D. Update resources for constrained VMs.
Answer: A
Explanation:
Deduplication is a data reduction technique that eliminates duplicate blocks of data and reduces the amount of storage space required. Full clone desktops contain many identical blocks, so by enabling deduplication at the container level, the administrator can optimize storage efficiency and extend the Capacity Runway of the Nutanix cluster.

5. Which feature ensures that a host failure's impact on a Files cluster will be minimal?
A. VM-VM anti-affinity rules
B. VM-VM affinity rules
C. VM-Host anti-affinity rules
D. VM-Host affinity rules
Answer: C
Explanation:
These rules ensure that VMs are distributed across different hosts in a cluster, so that if one host fails, the impact on the VMs and their data will be minimal. Nutanix Files also supports DFS-N (Distributed File System - Namespaces), which allows multiple file servers hosting the same data to support a common folder and provide site affinity for users.

6. During a recent audit, the auditors discovered several shares that were unencrypted. To remediate the audit item, the administrator enabled Encrypt SMB3 Messages on the accounting, finance, and facilities shares. After encryption was enabled, several users reported that they are no longer able to access the shares.
What is causing this issue?
A. The users are accessing the shares from Windows 8 desktops.
B. Advanced Encryption Standard 128 & 256 are disabled in Windows 7.
C. Advanced Encryption Standard 128 & 256 are disabled in Linux or Mac OS.
D. The users are accessing the shares from Linux desktops.
Answer: D
Explanation:
According to Encryption-Files | Nutanix Community, SMB3 message encryption is a feature that encrypts messages on the file server side and decrypts them on the client side. However, clients that do not support encryption (Linux, Mac OS, Windows 7) cannot access a share with encryption enabled. According to Nutanix Support & Insights, Nutanix Files supports SMB3 encryption for SMB3 client-server traffic, which means that only clients that support the SMB3 protocol can access encrypted shares. Therefore, if the users are accessing the shares from Linux desktops, they will not be able to access them, because those clients do not support SMB3 encryption.
https://portal.nutanix.com/page/documents/solutions/details?targetId=NVD-2151-Unified-Storage:client-server-traffic-encryption.html

7. An administrator has deployed an SMB v3 Files cluster, but needs to make the cluster able to support multi-protocol access.
Which protocol will be native?
A. NFS
B. S3
C. SMB
D. CIFS
Answer: C
Explanation:
According to the Nutanix Files SMB Migration Guide, Nutanix Files supports SMB protocol versions 2 and 3. SMB is a native protocol for Nutanix Files that allows file sharing between Windows clients and servers. Because the cluster was deployed with SMB shares, SMB remains the native protocol when multi-protocol access is enabled.

8. An existing Objects bucket was created for backups with these parameters:
✑ WORM policy of three years
✑ Versioning policy of two years
✑ Lifecycle policy of two years
The customer reports that the cluster is nearly full due to backups created during a recent crypto locker attack. The customer would like to automatically delete backups older than one year to free up space in the cluster.
How should the administrator change settings within Objects?
A. Modify the existing bucket lifecycle policy from two years to one year.
B. Create a new bucket with a lifecycle policy of one year.
C. Create a new bucket with a WORM policy of two years.
D. Modify the existing bucket WORM policy from three years to one year.
Answer: A
Explanation:
According to Nutanix documentation on unified storage (NCP-US) v6, to automatically delete backups older than one year, an administrator should modify the existing bucket lifecycle policy to set the expiration period to one year. Lifecycle policies enable administrators to automate the transition of objects to different storage classes and to expire them altogether. In this scenario, shortening the expiration period of the existing bucket lifecycle policy to one year ensures that backups older than one year are automatically deleted, freeing up space in the cluster.
https://portal.nutanix.com/page/documents/details?targetId=Objects-v3_1:v31-lifecycle-policies-rule-c.html

9. An administrator needs to upgrade all the File Analytics deployments in air-gapped environments. During the preparation stage, the administrator downloads the lcm_file_analytics_3.2.0.tar.gz bundle and transfers it to the dark site web server. Upon performing the LCM inventory and trying to upgrade File Analytics, the administrator does not see the new version in the Software tab.
Which additional file should the administrator transfer to the web server in order for the inventory to work?
A. Nutanix_compatibility_bundle.tar.gz
B. Nutanix_file_server_4.1.0.3.tar.gz
C. Lcm_fsm_x.x.x.x.tar.gz
D. lcm_file_manager_x.x.x.tar.gz
Answer: D
Explanation:
This bundle contains the LCM file manager component that is required for the LCM inventory to work in dark site mode.
✑ Download the lcm_file_analytics_x.x.x.tar.gz bundle from the Nutanix portal, which contains the File Analytics software binaries.
✑ Download the lcm_file_manager_x.x.x.tar.gz bundle from the Nutanix portal, which contains the LCM file manager component that handles file operations such as upload, download, and delete.
✑ Transfer both bundles to a dark site web server that is accessible by LCM.
✑ Configure LCM settings in Prism Element to use dark site mode and specify the URL of the dark site web server.
✑ Perform the LCM inventory, which scans for available software bundles on the dark site web server.
✑ Upgrade File Analytics using LCM, which downloads and installs the File Analytics software binaries from the lcm_file_analytics_x.x.x.tar.gz bundle.
https://portal.nutanix.com/page/documents/details?targetId=File-Analytics-v3_2:File-Analytics-v3_2

10. An administrator is implementing a storage solution with these requirements:
✑ Is easily searchable
✑ Natively supports disaster recovery
✑ Access to each item needs to be fast
✑ Can scale to petabytes of data
✑ Users are granted access after authentication
✑ User data is isolated, but could be shared
How should the administrator satisfy these requirements?
A. Deploy Objects with AD integration.
B. Use a Files distributed share with ABE.
C. Implement Volumes with CHAP.
D. Configure Calm with an application per user.
Answer: A
Explanation:
Objects provides fast access to each item using an S3-compatible API, and items can be easily searched using metadata tags or third-party tools. Objects also natively supports disaster recovery using replication policies, and can scale to petabytes of data while using erasure coding to reduce storage overhead. Users can be granted access after authentication using AD integration, which simplifies identity management. User data can be isolated but still shared using buckets, which are logical containers for objects that can have different policies applied.

11. A CIO has been reviewing the corporate BCDR plan.
In this review, the CIO has noticed that the Files deployments are replicated using the built-in Files DR capabilities that are configured out of the box. Upon further investigation, the CIO has identified that there is no granular share replication between the Files deployments, and has asked the administrator to implement a granular Files share recovery model.
Which Files capability should the administrator configure in order to be able to fail over only certain shares?
A. Data Lens
B. Smart DR
C. Smart Tiering
D. NC2 on AWS
Answer: B
Explanation:
Smart DR enables granular recovery of individual file shares in Nutanix Files by replicating data at the share level. This allows more fine-grained control over the failover process and ensures that only critical data is recovered in the event of a disaster or outage. Files Smart DR is a feature that allows replication between Files instances, either on-premises or running on Nutanix Cloud Clusters on AWS. With Files Smart DR, you can configure granular share replication, which means you can select which shares to replicate and which ones to exclude. Therefore, the correct answer is B, Smart DR. Files Smart DR also supports replicating snapshots between the source share and its target, which can help with data recovery and compliance. Additionally, Files Smart DR is the mechanism by which Files supports disaster recovery in the Nutanix Xi cloud.

12. An administrator is building a new application server and has decided to use post-process compression for the file server and inline compression for the database components.
The current environment has these characteristics:
✑ Two Volume Groups named VG1 and VG2.
✑ A Storage Container named SC1 with inline compression.
✑ A Storage Container named SC2 with post-process compression.
Which action should the administrator take to most efficiently satisfy this requirement?
A. Within VG1, create one vDisk in SC1 and one vDisk in SC2.
B. Within SC1, create one vDisk in VG1 and within SC2, create one vDisk in VG2.
C. Within SC1, create one vDisk in VG1 and one vDisk in VG2.
D. Within VG1, create one vDisk in SC1 and within VG2, create one vDisk in SC2.
Answer: D
Explanation:
Volume Groups (VGs) are collections of vDisks that can be attached to VMs. vDisks are virtual disks that reside on Storage Containers, and Storage Containers are logical entities that provide storage services with different compression options. To use post-process compression for the file server and inline compression for the database components, create two vDisks on different Storage Containers with different compression options, then attach those vDisks to different Volume Groups.
https://next.nutanix.com/volumes-block-storage-171/configuration-for-volumes-vdisks-40537
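The lifecycle change in question 8 can likewise be expressed through the S3 API that Objects exposes. This is a minimal sketch, assuming an S3-compatible lifecycle rule; the rule ID, empty prefix, and the commented boto3 call are illustrative assumptions, since Objects lifecycle policies are normally managed from the Objects UI:

```python
def one_year_expiration_policy(rule_id="expire-old-backups"):
    """Build an S3 lifecycle configuration that expires objects
    365 days after creation, so backups older than one year are
    deleted automatically.
    """
    return {
        "Rules": [
            {
                "ID": rule_id,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix: apply to every object
                "Expiration": {"Days": 365},
            }
        ]
    }

# Applying it with boto3 against an S3-compatible endpoint
# (endpoint URL and bucket name are hypothetical):
# import boto3
# s3 = boto3.client("s3", endpoint_url="https://objects.example.com")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="backup-bucket",
#     LifecycleConfiguration=one_year_expiration_policy(),
# )

print(one_year_expiration_policy()["Rules"][0]["Expiration"])  # → {'Days': 365}
```

Note that this mirrors option A of question 8: the expiration window on the existing bucket is shortened, rather than a new bucket being created.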