
Platform Administration Guide

NOS 3.5 | 24-Sep-2013


Notice

Copyright

Copyright 2013 Nutanix, Inc.

Nutanix, Inc.
1740 Technology Drive, Suite 400
San Jose, CA 95110

All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

Conventions

Convention Description

variable_value The action depends on a value that is unique to your environment.

ncli> command The commands are executed in the Nutanix nCLI.

user@host$ command The commands are executed as a non-privileged user (such as nutanix) in the system shell.

root@host# command The commands are executed as the root user in the hypervisor host (vSphere or KVM) shell.

output The information is displayed as output from a command or in a log file.

Default Cluster Credentials

Interface Target Username Password

Nutanix web console Nutanix Controller VM admin admin

vSphere client ESXi host root nutanix/4u

SSH client or console ESXi host root nutanix/4u

SSH client or console KVM host root nutanix/4u

SSH client Nutanix Controller VM nutanix nutanix/4u

IPMI web interface or ipmitool Nutanix node ADMIN ADMIN

IPMI web interface or ipmitool Nutanix node (NX-3000) admin admin
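As an illustration, the IPMI credentials above can be checked from a workstation with ipmitool; the target address below is hypothetical:

user@host$ ipmitool -I lanplus -H 10.1.1.21 -U ADMIN -P ADMIN chassis status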

Version

Last modified: September 24, 2013 (2013-09-24-13:28 GMT-7)


Contents

Part I: NOS .......... 6

1: Cluster Management .......... 7
    To Start a Nutanix Cluster .......... 7
    To Stop a Cluster .......... 7
    To Destroy a Cluster .......... 8
    To Create Clusters from a Multiblock Cluster .......... 9
    Disaster Protection .......... 12

2: Password Management .......... 15
    To Change the Controller VM Password .......... 15
    To Change the ESXi Host Password .......... 16
    To Change the KVM Host Password .......... 17
    To Change the IPMI Password .......... 18

3: Alerts .......... 19
    Cluster .......... 19
    Controller VM .......... 22
    Guest VM .......... 24
    Hardware .......... 26
    Storage .......... 30

4: IP Address Configuration .......... 33
    To Reconfigure the Cluster .......... 33
    To Prepare to Reconfigure the Cluster .......... 34
    Remote Console IP Address Configuration .......... 35
    To Configure Host Networking .......... 38
    To Configure Host Networking (KVM) .......... 39
    To Update the ESXi Host Password in vCenter .......... 40
    To Change the Controller VM IP Addresses .......... 40
    To Change a Controller VM IP Address (manual) .......... 41
    To Complete Cluster Reconfiguration .......... 42

5: Field Installation .......... 44
    NOS Installer Reference .......... 44
    To Image a Node .......... 44

Part II: vSphere .......... 47

6: vCenter Configuration .......... 48
    To Use an Existing vCenter Server .......... 48

7: VM Management .......... 55
    Migrating a VM to Another Cluster .......... 55
    vStorage APIs for Array Integration .......... 57
    Migrating vDisks to NFS .......... 58

8: Node Management .......... 62
    To Shut Down a Node in a Cluster .......... 62
    To Start a Node in a Cluster .......... 63
    To Restart a Node .......... 64
    To Patch ESXi Hosts in a Cluster .......... 65
    Removing a Node .......... 65

9: Storage Replication Adapter for Site Recovery Manager .......... 68
    To Configure the Nutanix Cluster for SRA Replication .......... 69
    To Configure SRA Replication on the SRM Servers .......... 70

Part III: KVM .......... 72

10: Kernel-based Virtual Machine (KVM) Architecture .......... 73
    Storage Overview .......... 73
    VM Commands .......... 74

11: VM Management Commands .......... 75
    virt_attach_disk.py .......... 76
    virt_check_disks.py .......... 77
    virt_clone.py .......... 79
    virt_detach_disk.py .......... 80
    virt_eject_cdrom.py .......... 81
    virt_insert_cdrom.py .......... 82
    virt_install.py .......... 83
    virt_kill.py .......... 85
    virt_kill_snapshot.py .......... 86
    virt_list_disks.py .......... 86
    virt_migrate.py .......... 87
    virt_multiclone.py .......... 88
    virt_snapshot.py .......... 89
    nfs_ls.py .......... 90

Part IV: Hardware .......... 93

12: Node Order .......... 94

13: System Specifications .......... 98
    NX-1000 Series System Specifications .......... 98
    NX-2000 System Specifications .......... 100


    NX-3000 System Specifications .......... 103
    NX-3050 System Specifications .......... 105
    NX-6000 Series System Specifications .......... 108


Part I: NOS


1: Cluster Management

Although each host in a Nutanix cluster runs a hypervisor independent of other hosts in the cluster, some operations affect the entire cluster.

To Start a Nutanix Cluster

1. Log on to any Controller VM in the cluster with SSH.

2. Start the Nutanix cluster.

nutanix@cvm$ cluster start

If the cluster starts properly, output similar to the following is displayed for each node in the cluster:

CVM: 172.16.8.167 Up, ZeusLeader
    Zeus                 UP  [3148, 3161, 3162, 3163, 3170, 3180]
    Scavenger            UP  [3333, 3345, 3346, 11997]
    ConnectionSplicer    UP  [3379, 3392]
    Hyperint             UP  [3394, 3407, 3408, 3429, 3440, 3447]
    Medusa               UP  [3488, 3501, 3502, 3523, 3569]
    DynamicRingChanger   UP  [4592, 4609, 4610, 4640]
    Pithos               UP  [4613, 4625, 4626, 4678]
    Stargate             UP  [4628, 4647, 4648, 4709]
    Cerebro              UP  [4890, 4903, 4904, 4979]
    Chronos              UP  [4906, 4918, 4919, 4968]
    Curator              UP  [4922, 4934, 4935, 5064]
    Prism                UP  [4939, 4951, 4952, 4978]
    AlertManager         UP  [4954, 4966, 4967, 5022]
    StatsAggregator      UP  [5017, 5039, 5040, 5091]
    SysStatCollector     UP  [5046, 5061, 5062, 5098]

What to do next. After you have verified that the cluster is running, you can start guest VMs.
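For example, a quick way to verify that every service is up on every Controller VM is the cluster status command (a minimal check using the same cluster script as above):

nutanix@cvm$ cluster status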

To Stop a Cluster

Before you begin. Shut down all guest virtual machines, including vCenter if it is running on the cluster. Do not shut down Nutanix Controller VMs.

Note: This procedure stops all services provided by guest virtual machines, the Nutanix cluster, and the hypervisor host.

1. Log on to a running Controller VM in the cluster with SSH.

2. Stop the Nutanix cluster.

nutanix@cvm$ cluster stop

Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster.

CVM: 172.16.8.191 Up, ZeusLeader
    Zeus                 UP   [3167, 3180, 3181, 3182, 3191, 3201]


    Scavenger            UP   [3334, 3351, 3352, 3353]
    ConnectionSplicer    DOWN []
    Hyperint             DOWN []
    Medusa               DOWN []
    DynamicRingChanger   DOWN []
    Pithos               DOWN []
    Stargate             DOWN []
    Cerebro              DOWN []
    Chronos              DOWN []
    Curator              DOWN []
    Prism                DOWN []
    AlertManager         DOWN []
    StatsAggregator      DOWN []
    SysStatCollector     DOWN []

To Destroy a Cluster

Destroying a cluster resets all nodes in the cluster to the factory configuration. All cluster configuration and guest VM data is unrecoverable after destroying the cluster.

1. Log on to any Controller VM in the cluster with SSH.

2. Stop the Nutanix cluster.

nutanix@cvm$ cluster stop

Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster.

CVM: 172.16.8.191 Up, ZeusLeader
    Zeus                 UP   [3167, 3180, 3181, 3182, 3191, 3201]
    Scavenger            UP   [3334, 3351, 3352, 3353]
    ConnectionSplicer    DOWN []
    Hyperint             DOWN []
    Medusa               DOWN []
    DynamicRingChanger   DOWN []
    Pithos               DOWN []
    Stargate             DOWN []
    Cerebro              DOWN []
    Chronos              DOWN []
    Curator              DOWN []
    Prism                DOWN []
    AlertManager         DOWN []
    StatsAggregator      DOWN []
    SysStatCollector     DOWN []

3. If the nodes in the cluster have Intel PCIe-SSD drives, ensure they are mapped properly.

Check if the node has an Intel PCIe-SSD drive.

nutanix@cvm$ lsscsi | grep 'SSD 910'

→ If no items are listed, the node does not have an Intel PCIe-SSD drive and you can proceed to the next step.

→ If two items are listed, the node does have an Intel PCIe-SSD drive.

If the node has an Intel PCIe-SSD drive, check if it is mapped correctly.

nutanix@cvm$ cat /proc/partitions | grep dm

→ If two items are listed, the drive is mapped correctly and you can proceed.
→ If no items are listed, the drive is not mapped correctly. Start and then stop the cluster before proceeding.


Perform this check on every Controller VM in the cluster.
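As an illustration only, on a node that does have an Intel PCIe-SSD drive the two checks might return output resembling the following (device names, model strings, and sizes are hypothetical):

nutanix@cvm$ lsscsi | grep 'SSD 910'
[1:0:0:0]    disk    INTEL    SSD 910 200GB    a411  /dev/sdg
[1:0:1:0]    disk    INTEL    SSD 910 200GB    a411  /dev/sdh
nutanix@cvm$ cat /proc/partitions | grep dm
 253        0  195310464 dm-0
 253        1  195310464 dm-1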

4. Destroy the cluster.

Caution: Performing this operation deletes all cluster and guest VM data in the cluster.

nutanix@cvm$ cluster -s cvm_ip_addr destroy
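For example, if the Controller VM you are logged on to has the IP address 172.16.8.191:

nutanix@cvm$ cluster -s 172.16.8.191 destroy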

To Create Clusters from a Multiblock Cluster

The minimum size for a cluster is three nodes.

1. Remove nodes from the existing cluster.

→ If you want to preserve data on the existing cluster, remove nodes by following To Remove a Node from a Cluster on page 65.

→ If you want multiple new clusters, destroy the existing cluster by following To Destroy a Cluster on page 8.

2. Create one or more new clusters by following To Configure the Cluster on page 10.

Product Mixing Restrictions

While a Nutanix cluster can include different products, there are some restrictions.

Caution: Do not configure a cluster that violates any of the following rules.

Compatibility Matrix

            NX-1000  NX-2000  NX-2050  NX-3000  NX-3050  NX-6000

NX-1000 (1)    •        •        •        •        •        •
NX-2000        •        •        •        •        •
NX-2050        •        •        •        •        •        •
NX-3000        •        •        •        •        •        •
NX-3050        •        •        •        •        •        •
NX-6000 (2)    •                 •        •       • (3)     •

1. NX-1000 nodes can be mixed with other products in the same cluster only when they are running 10 GbE networking; they cannot be mixed when running 1 GbE networking. If NX-1000 nodes are using the 1 GbE interface, the maximum cluster size is 8 nodes. If the nodes are using the 10 GbE interface, the cluster has no limits other than the maximum supported cluster size that applies to all products.

2. NX-6000 nodes cannot be mixed with NX-2000 nodes in the same cluster.
3. Because it has a larger flash tier, the NX-3050 is recommended over other products for mixing with NX-6000.

• Any combination of NX-2000, NX-2050, NX-3000, and NX-3050 nodes can be mixed in the same cluster.


• All nodes in a cluster must be the same hypervisor type (ESXi or KVM).
• All Controller VMs in a cluster must have the same NOS version.
• Mixed Nutanix clusters comprising NX-2000 nodes and other products are supported as specified above. However, because the NX-2000 processor architecture differs from other models, vSphere does not support enhanced/live vMotion of VMs from one type of node to another unless Enhanced vMotion Compatibility (EVC) is enabled. For more information about EVC, see the vSphere 5 documentation and the following VMware knowledge base articles:

• Enhanced vMotion Compatibility (EVC) processor support [1003212]
• EVC and CPU Compatibility FAQ [1005764]

To Configure the Cluster

Before you begin.

• Confirm that the system you are using to configure the cluster meets the following requirements:

• IPv6 link-local enabled.
• Windows 7, Vista, or Mac OS.
• (Windows only) Bonjour installed (included with iTunes or downloadable from http://support.apple.com/kb/DL999).

• Determine the IPv6 service of any Controller VM in the cluster.

IPv6 service names are uniquely generated at the factory and have the following form (note the final period):

NTNX-block_serial_number-node_location-CVM.local.

On the right side of the block toward the front is a label that has the block_serial_number (for example, 12AM3K520060). The node_location is a number 1-4 for NX-3000, a letter A-D for NX-1000/NX-2000/NX-3050, or a letter A-B for NX-6000.

If you need to confirm whether IPv6 link-local is enabled on the network, or if you do not have access to the node serial number, see the Nutanix support knowledge base for alternative methods.


1. Open a web browser.

Nutanix recommends using Internet Explorer 9 for Windows and Safari for Mac OS.

Note: Internet Explorer requires protected mode to be disabled. Go to Tools > Internet Options > Security, clear the Enable Protected Mode check box, and restart the browser.

2. Navigate to http://cvm_host_name:2100/cluster_init.html.

Replace cvm_host_name with the IPv6 service name of any Controller VM that will be added to the cluster.

Following is an example URL to access the cluster creation page on a Controller VM:

http://NTNX-12AM3K520060-1-CVM.local.:2100/cluster_init.html

If the cluster_init.html page is blank, then the Controller VM is already part of a cluster. Connect to a Controller VM that is not part of a cluster.

3. Type a meaningful value in the Cluster Name field.

This value is appended to all automated communication between the cluster and Nutanix support. It should include the customer's name and, if necessary, a modifier that differentiates this cluster from any other clusters that the customer might have.

Note: This entity has the following naming restrictions:

• The maximum length is 75 characters.
• Allowed characters are uppercase and lowercase standard Latin letters (A-Z and a-z), decimal digits (0-9), dots (.), hyphens (-), and underscores (_).

4. Type the appropriate DNS and NTP addresses in the respective fields.

5. Type the appropriate subnet masks in the Subnet Mask row.

6. Type the appropriate default gateway IP addresses in the Default Gateway row.

7. Select the check box next to each node that you want to add to the cluster.


All unconfigured nodes on the current network are presented on this web page. If you will be configuring multiple clusters, be sure that you only select the nodes that should be part of the current cluster.

8. Provide an IP address for all components in the cluster.

Note: The unconfigured nodes are not listed according to their position in the block. Ensure that you assign the intended IP address to each node.

9. Click Create.

Wait until the Log Messages section of the page reports that the cluster has been successfully configured.

Output similar to the following indicates successful cluster configuration.

Configuring IP addresses on node 12AM2K420010/A...
Configuring IP addresses on node 12AM2K420010/B...
Configuring IP addresses on node 12AM2K420010/C...
Configuring IP addresses on node 12AM2K420010/D...
Configuring Zeus on node 12AM2K420010/A...
Configuring Zeus on node 12AM2K420010/B...
Configuring Zeus on node 12AM2K420010/C...
Configuring Zeus on node 12AM2K420010/D...
Initializing cluster...
Cluster successfully initialized!
Initializing the cluster DNS and NTP servers...
Successfully updated the cluster NTP and DNS server list

10. Log on to any Controller VM in the cluster with SSH.

11. Start the Nutanix cluster.

nutanix@cvm$ cluster start

If the cluster starts properly, output similar to the following is displayed for each node in the cluster:

CVM: 172.16.8.167 Up, ZeusLeader
    Zeus                 UP  [3148, 3161, 3162, 3163, 3170, 3180]
    Scavenger            UP  [3333, 3345, 3346, 11997]
    ConnectionSplicer    UP  [3379, 3392]
    Hyperint             UP  [3394, 3407, 3408, 3429, 3440, 3447]
    Medusa               UP  [3488, 3501, 3502, 3523, 3569]
    DynamicRingChanger   UP  [4592, 4609, 4610, 4640]
    Pithos               UP  [4613, 4625, 4626, 4678]
    Stargate             UP  [4628, 4647, 4648, 4709]
    Cerebro              UP  [4890, 4903, 4904, 4979]
    Chronos              UP  [4906, 4918, 4919, 4968]
    Curator              UP  [4922, 4934, 4935, 5064]
    Prism                UP  [4939, 4951, 4952, 4978]
    AlertManager         UP  [4954, 4966, 4967, 5022]
    StatsAggregator      UP  [5017, 5039, 5040, 5091]
    SysStatCollector     UP  [5046, 5061, 5062, 5098]

Disaster Protection

After VM protection is configured in the web console, managing snapshots and failing from one site to another are accomplished with the nCLI.


To Manage VM Snapshots

You can manage VM snapshots, including restoration, with these nCLI commands.

• Check status of replication.

ncli> pd list-replication-status

• List snapshots.

ncli> pd list-snapshots name="pd_name"

• Restore VMs from backup.

ncli> pd rollback-vms name="pd_name" vm-names="vm_ids" snap-id="snapshot_id" path-prefix="folder_name"

• Replace vm_ids with a comma-separated list of VM IDs as given in vm list.
• Replace snapshot_id with a snapshot ID as given by pd list-snapshots.
• Replace folder_name with the name you want to give the VM folder on the datastore, which will be created if it does not exist.

The VM is restored to the container where the snapshot resides. If you used a DAS-SATA-only container for replication, after restoring the VM move it to a container suitable for active workloads with storage vMotion.

• Restore NFS files from backup.

ncli> pd rollback-nfs-files name="pd_name" files="nfs_files" snap-id="snapshot_id"

• Replace nfs_files with a comma-separated list of NFS files to restore.
• Replace snapshot_id with a snapshot ID as given by pd list-snapshots.

If you want to replace the existing file, include replace-nfs-files=true.

• Remove snapshots.

ncli> pd rm-snapshot name="pd_name" snap-ids="snapshot_ids"

Replace snapshot_ids with a comma-separated list of snapshot IDs as given by pd list-snapshots.
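As an illustrative sequence using the commands above (the protection domain name, VM names, file path, and snapshot IDs below are hypothetical):

ncli> pd list-snapshots name="pd-finance"
ncli> pd rollback-vms name="pd-finance" vm-names="vm-app01,vm-app02" snap-id="2045" path-prefix="restored-vms"
ncli> pd rollback-nfs-files name="pd-finance" files="/ctr1/vm-app01/vm-app01.vmx" snap-id="2045"
ncli> pd rm-snapshot name="pd-finance" snap-ids="2040,2041"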

To Fail from one Site to Another

Disaster failover

Connect to the backup site and activate it.

ncli> pd activate name="pd_name"

This operation does the following:

1. Restores all VM files from the last fully replicated snapshot.
2. Registers the VMs on the recovery site.
3. Marks the failover site protection domain as active.

Planned failover

Connect to the primary site and specify the failover site to migrate to.

ncli> pd migrate name="pd_name" remote-site="remote_site_name2"

This operation does the following:

1. Creates and replicates a snapshot of the protection domain.


2. Shuts down VMs on the local site.
3. Creates and replicates another snapshot of the protection domain.
4. Unregisters all VMs and removes their associated files.
5. Marks the local site protection domain as inactive.
6. Restores all VM files from the last snapshot and registers them on the remote site.
7. Marks the remote site protection domain as active.
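For example, using hypothetical protection domain and remote site names, a disaster failover is run on the backup site and a planned failover on the primary site:

ncli> pd activate name="pd-finance"
ncli> pd migrate name="pd-finance" remote-site="dr-site"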


2: Password Management

You can change the passwords of the following cluster components:

• Nutanix management interfaces
• Nutanix Controller VMs
• Hypervisor software
• Node hardware (management port)

Requirements

• You know the IP address of the component that you want to modify.
• You know the current password of the component you want to modify.

The default passwords of all components are provided in Default Cluster Credentials on page 2.

• You have selected a password that has 8 or more characters and at least one of each of the following:

• Upper-case letters
• Lower-case letters
• Numerals
• Symbols

To Change the Controller VM Password

Perform these steps on every Controller VM in the cluster.

Warning: The nutanix user must have the same password on all Controller VMs.

1. Log on to the Controller VM with SSH.

2. Change the nutanix user password.

nutanix@cvm$ passwd

3. Respond to the prompts, providing the current and new nutanix user password.

Changing password for nutanix.
Old Password:
New password:
Retype new password:
Password changed.

Note: The password must meet the following complexity requirements:

• At least 9 characters long
• At least 2 lowercase characters
• At least 2 uppercase characters
• At least 2 numbers
• At least 2 special characters


To Change the ESXi Host Password

The cluster software needs to be able to log into each host as root to perform standard cluster operations, such as mounting a new NFS datastore or querying the status of VMs in the cluster. Therefore, after changing the ESXi root password it is critical to update the cluster configuration with the new password.

Tip: Although it is not required for the root user to have the same password on all hosts, doing so will make cluster management and support much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

1. Change the root password of all hosts.

Perform these steps on every ESXi host in the cluster.

a. Log on to the ESXi host with SSH.

b. Change the root password.

root@esx# passwd root

c. Respond to the prompts, providing the current and new root password.

Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.

2. Update the root user password for all hosts in the Zeus configuration.

Warning: If you do not perform this step, the web console will no longer show correct statistics and alerts, and other cluster operations will fail.

a. Log on to any Controller VM in the cluster with SSH.

b. Find the host IDs.

nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|Hypervisor Key'

Note the host ID for each hypervisor host.

c. Update the hypervisor host password.

nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=host_addr password='host_password'
nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id hypervisor-password='host_password'

• Replace host_addr with the IP address of the hypervisor host.
• Replace host_id with a host ID you determined in the preceding step.
• Replace host_password with the root password on the corresponding hypervisor host.

Perform this step for every hypervisor host in the cluster.
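For example, assuming a hypervisor host at 172.16.8.51 with host ID 7 whose new root password is 'New.Passw0rd' (all values hypothetical):

nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=172.16.8.51 password='New.Passw0rd'
nutanix@cvm$ ncli -p 'admin_password' host edit id=7 hypervisor-password='New.Passw0rd'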

3. Update the ESXi host password.

a. Log on to vCenter with the vSphere client.

b. Right-click the host with the changed password and select Disconnect.

c. Right-click the host and select Connect.


d. Enter the new password and complete the Add Host Wizard.

If reconnecting the host fails, remove it from the cluster and add it again.

To Change the KVM Host Password

The cluster software needs to be able to log into each host as root to perform standard cluster operations, such as mounting a new NFS datastore or querying the status of VMs in the cluster. Therefore, after changing the KVM root password it is critical to update the cluster configuration with the new password.

Tip: Although it is not required for the root user to have the same password on all hosts, doing so will make cluster management and support much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

1. Change the root password of all hosts.

Perform these steps on every KVM host in the cluster.

a. Log on to the KVM host with SSH.

b. Change the root password.

root@kvm# passwd root

c. Respond to the prompts, providing the current and new root password.

Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.

2. Update the root user password for all hosts in the Zeus configuration.

Warning: If you do not perform this step, the web console will no longer show correct statistics and alerts, and other cluster operations will fail.

a. Log on to any Controller VM in the cluster with SSH.

b. Find the host IDs.

nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|Hypervisor Key'

Note the host ID for each hypervisor host.

c. Update the hypervisor host password.

nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=host_addr password='host_password'
nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id hypervisor-password='host_password'

• Replace host_addr with the IP address of the hypervisor host.
• Replace host_id with a host ID you determined in the preceding step.
• Replace host_password with the root password on the corresponding hypervisor host.

Perform this step for every hypervisor host in the cluster.


To Change the IPMI Password

The cluster software needs to be able to log into the management interface on each host to perform certain operations, such as reading hardware alerts. Therefore, after changing the IPMI password it is critical to update the cluster configuration with the new password.

Tip: Although it is not required for the administrative user to have the same password on all hosts, doing so will make cluster management much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

1. Change the administrative user password of all IPMI hosts.

Product Administrative user

NX-1000, NX-3050, NX-6000 ADMIN

NX-3000 admin

NX-2000 ADMIN

Perform these steps on every IPMI host in the cluster.

a. Sign in to the IPMI web interface as the administrative user.

b. Click Configuration.

c. Click Users.

d. Select the administrative user and then click Modify User.

e. Type the new password in both text fields and then click Modify.

f. Click OK to close the confirmation window.

2. Update the administrative user password for all hosts in the Zeus configuration.

a. Log on to any Controller VM in the cluster with SSH.

b. Generate a list of all hosts in the cluster.

nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|IPMI Address'

Note the host ID of each entry in the list.

c. Update the IPMI password.

nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id ipmi-password='ipmi_password'

• Replace host_id with a host ID you determined in the preceding step.
• Replace ipmi_password with the administrative user password on the corresponding IPMI host.

Perform this step for every IPMI host in the cluster.
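For example, assuming host ID 7 whose IPMI administrative password was changed to 'New.IpmiPass1' (values hypothetical):

nutanix@cvm$ ncli -p 'admin_password' host edit id=7 ipmi-password='New.IpmiPass1'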


3: Alerts

This section lists all the NOS alerts with cause and resolution, sorted by category.

• Cluster
• Controller VM
• Guest VM
• Hardware
• Storage

Cluster

CassandraDetachedFromRing [A1055]

Message Cassandra on CVM ip_address is now detached from ring due to reason.

Cause Either a metadata drive has failed, the node was down for an extended period of time, or an unexpected subsystem fault was encountered, so the node was removed from the metadata store.

Resolution If the metadata drive has failed, replace the metadata drive as soon as possible. Refer to the Nutanix documentation for instructions. If the node was down for an extended period of time and is now running, add it back to the metadata store with the "host enable-metadata-store" nCLI command. Otherwise, contact Nutanix support.

Severity kCritical

CassandraMarkedToBeDetached [A1054]

Message Cassandra on CVM ip_address is marked to be detached from ring due to reason.

Cause Either a metadata drive has failed, the node was down for an extended period of time, or an unexpected subsystem fault was encountered, so the node is marked to be removed from the metadata store.

Resolution If the metadata drive has failed, replace the metadata drive as soon as possible. Refer to the Nutanix documentation for instructions. If the node was down for an extended period of time and is now running, add it back to the metadata store with the "host enable-metadata-store" nCLI command. Otherwise, contact Nutanix support.

Severity kCritical

DuplicateRemoteClusterId [A1038]

Message Remote cluster 'remote_name' is disabled because the name conflicts with remote cluster 'conflicting_remote_name'.

Page 20: Platform administration guide-nos_v3_5

| Platform Administration Guide | NOS 3.5 | 20

Cause Two remote sites with different names or different IP addresses have the same cluster ID. This can happen in two cases: (a) a remote cluster is added twice under two different names (through different IP addresses), or (b) two clusters have the same cluster ID.

Resolution In case (a), remove the duplicate remote site. In case (b), verify that both clusters have the same cluster ID and contact Nutanix support.

Severity kWarning

JumboFramesDisabled [A1062]

Message Jumbo frames could not be enabled on the iface interface in the last three attempts.

Cause Jumbo frames could not be enabled in the controller VMs.

Resolution Ensure that the 10-Gig network switch has jumbo-frames enabled.

Severity kCritical

NetworkDisconnect [A1041]

Message IPMI interface target_ip is not reachable from Controller VM source_ip in the last six attempts.

Cause The IPMI interface is down or there is a network connectivity issue.

Resolution Ensure that the IPMI interface is functioning and that physical networking, VLANs, and virtual switches are configured correctly.

Severity kWarning

NetworkDisconnect [A1006]

Message Hypervisor target_ip is not reachable from Controller VM source_ip in the last six attempts.

Cause The hypervisor host is down or there is a network connectivity issue.

Resolution Ensure that the hypervisor host is running and that physical networking, VLANs, and virtual switches are configured correctly.

Severity kCritical

NetworkDisconnect [A1048]

Message Controller VM svm_ip with network address svm_subnet is in a different network than the Hypervisor hypervisor_ip, which is in the network hypervisor_subnet.

Cause The Controller VM and the hypervisor are not on the same subnet.

Resolution Reconfigure the cluster. Either move the Controller VMs to the same subnet as the hypervisor hosts or move the hypervisor hosts to the same subnet as the Controller VMs.

Page 21: Platform administration guide-nos_v3_5

| Platform Administration Guide | NOS 3.5 | 21

Severity kCritical

NetworkDisconnect [A1040]

Message Hypervisor target_ip is not reachable from Controller VM source_ip in the last three attempts.

Cause The hypervisor host is down or there is a network connectivity issue.

Resolution Ensure that the hypervisor host is running and that physical networking, VLANs, and virtual switches are configured correctly.

Severity kCritical

RemoteSupportEnabled [A1051]

Message Daily reminder that remote support tunnel to Nutanix HQ is enabled on this cluster.

Cause Nutanix support staff are able to access the cluster to assist with any issue.

Resolution No action is necessary.

Severity kInfo

TimeDifferenceHigh [A1017]

Message Wall clock time has drifted by more than time_difference_limit_secs seconds between the Controller VMs lower_time_ip and higher_time_ip.

Cause The cluster does not have NTP servers configured or they are not reachable.

Resolution Ensure that the cluster has NTP servers configured and that the NTP servers are reachable from all Controller VMs.

Severity kWarning

ZeusConfigMismatch [A1008]

Message IPMI IP address on Controller VM svm_ip_address was updated from zeus_ip_address to invalid_ip_address without following the Nutanix IP Reconfiguration procedure.

Cause The IP address configured in the cluster does not match the actual setting of the IPMI interface.

Resolution Follow the IP address change procedure in the Nutanix documentation.

Severity kCritical

Page 22: Platform administration guide-nos_v3_5

| Platform Administration Guide | NOS 3.5 | 22

ZeusConfigMismatch [A1009]

Message IP address of Controller VM zeus_ip_address has been updated to invalid_ip_address. The Controller VM will not be part of the cluster once the change comes into effect, unless the Zeus configuration is updated.

Cause The IP address configured in the cluster does not match the actual setting of the Controller VM.

Resolution Follow the IP address change procedure in the Nutanix documentation.

Severity kCritical

ZeusConfigMismatch [A1029]

Message Hypervisor IP address on Controller VM svm_ip_address was updated from zeus_ip_address to invalid_ip_address without following the Nutanix IP Reconfiguration procedure.

Cause The IP address configured in the cluster does not match the actual setting of the hypervisor.

Resolution Follow the IP address change procedure in the Nutanix documentation.

Severity kCritical

Controller VM

CVMNICSpeedLow [A1058]

Message Controller VM service_vm_external_ip is not running on 10 Gbps network interface. This will degrade the system performance.

Cause The Controller VM is not configured to use the 10 Gbps NIC or is configured to share load with a slower NIC.

Resolution Connect the Controller VM to 10 Gbps NICs only.

Severity kWarning

CVMRAMUsageHigh [A1056]

Message Main memory usage in Controller VM ip_address is high in the last 20 minutes. free_memory_kb KB of memory is free.

Cause The RAM usage on the Controller VM has been high.

Resolution Contact Nutanix Support for diagnosis. RAM on the Controller VM may need to be increased.

Severity kCritical

Page 23: Platform administration guide-nos_v3_5

| Platform Administration Guide | NOS 3.5 | 23

CVMRebooted [A1024]

Message Controller VM ip_address has been rebooted.

Cause Various

Resolution If the Controller VM was restarted intentionally, no action is necessary. If it restarted by itself, contact Nutanix support.

Severity kCritical

IPMIError [A1050]

Message Controller VM ip_address is unable to fetch IPMI SDR repository.

Cause The IPMI interface is down or there is a network connectivity issue.

Resolution Ensure that the IPMI interface is functioning and that physical networking, VLANs, and virtual switches are configured correctly.

Severity kCritical

KernelMemoryUsageHigh [A1034]

Message Controller VM ip_address's kernel memory usage is higher than expected.

Cause Various

Resolution Contact Nutanix support.

Severity kCritical

NetworkDisconnect [A1001]

Message Controller VM target_ip is not reachable from Controller VM source_ip in thelast six attempts.

Cause The Controller VM is down or there is a network connectivity issue.

Resolution If the Controller VM does not respond to ping, turn it on. Ensure that physical networking, VLANs, and virtual switches are configured correctly.

Severity kCritical

NetworkDisconnect [A1011]

Message Controller VM target_ip is not reachable from Controller VM source_ip in thelast three attempts.

Cause The Controller VM is down or there is a network connectivity issue.

Resolution Ensure that the Controller VM is running and that physical networking, VLANs, and virtual switches are configured correctly.

Severity kCritical

Page 24: Platform administration guide-nos_v3_5

| Platform Administration Guide | NOS 3.5 | 24

NodeInMaintenanceMode [A1013]

Message Controller VM ip_address is put in maintenance mode due to reason.

Cause Node removal has been initiated.

Resolution No action is necessary.

Severity kInfo

ServicesRestartingFrequently [A1032]

Message There have been 10 or more cluster services restarts within 15 minutes.

Cause This alert usually indicates that the Controller VM was restarted, but there could be other causes.

Resolution If this alert occurs once or infrequently, no action is necessary. If it is frequent, contact Nutanix support.

Severity kCritical

StargateTemporarilyDown [A1030]

Message Stargate on Controller VM ip_address is down for downtime seconds.

Cause Various

Resolution Contact Nutanix support.

Severity kCritical

Guest VM

ProtectedVmNotFound [A1010]

Message Unable to locate VM with name 'vm_name' and internal ID 'vm_id' in protection domain 'protection_domain_name'.

Cause The VM was deleted.

Resolution Remove the VM from the protection domain.

Severity kWarning

ProtectionDomainActivation [A1043]

Message Unable to make protection domain 'protection_domain_name' active on remote site 'remote_name' due to 'reason'.

Cause Various

Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support.

Page 25: Platform administration guide-nos_v3_5

| Platform Administration Guide | NOS 3.5 | 25

Severity kCritical

ProtectionDomainChangeModeFailure [A1060]

Message Protection domain protection_domain_name activate/deactivate failed. reason

Cause Protection domain cannot be activated or migrated.

Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support.

Severity kCritical

ProtectionDomainReplicationExpired [A1003]

Message Protection domain protection_domain_name replication to the remote site remote_name has expired before it is started.

Cause Replication is taking too long to complete before the snapshots expire.

Resolution Review replication schedules taking into account bandwidth and overall load on systems. Confirm retention time on replicated snapshots.

Severity kWarning

ProtectionDomainReplicationFailure [A1015]

Message Protection domain protection_domain_name replication to remote site remote_name failed. reason

Cause Various

Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support.

Severity kCritical

ProtectionDomainSnapshotFailure [A1064]

Message Protection domain protection_domain_name snapshot snapshot_id failed. reason

Cause Protection domain cannot be snapshotted.

Resolution Make sure all VMs and files are available.

Severity kCritical

VMAutoStartDisabled [A1057]

Message Virtual Machine auto start is disabled on the hypervisor of Controller VM service_vm_external_ip

Page 26: Platform administration guide-nos_v3_5

| Platform Administration Guide | NOS 3.5 | 26

Cause Auto start of the Controller VM is disabled.

Resolution Enable auto start of the Controller VM as recommended by Nutanix. If auto start is intentionally disabled, no action is necessary.

Severity kInfo

VMLimitExceeded [A1053]

Message The number of virtual machines on node node_serial is vm_count, which is above the limit vm_limit.

Cause The node is running more virtual machines than the hardware can support.

Resolution Shut down VMs or move them to other nodes in the cluster.

Severity kCritical

VmActionError [A1033]

Message Failed to action VM with name 'vm_name' and internal ID 'vm_id' due to reason

Cause A VM could not be restored because of a hypervisor error, or could not be deleted because it is still in use.

Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support.

Severity kCritical

VmRegistrationError [A1002]

Message Failed to register VM using name 'vm_name' with the hypervisor due to reason

Cause An error on the hypervisor.

Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support.

Severity kCritical

Hardware

CPUTemperatureHigh [A1049]

Message Temperature of CPU cpu_id exceeded temperatureC on Controller VM ip_address

Cause The device is overheating to the point of imminent failure.

Resolution Ensure that the fans in the block are functioning properly and that the environment is cool enough.

Severity kCritical


DiskBad [A1044]

Message Disk disk_position on node node_position of block block_position is marked offline due to IO errors. Serial number of the disk is disk_serial in node node_serial of block block_serial.

Cause The drive has failed.

Resolution Replace the failed drive. Refer to the Nutanix documentation for instructions.

Severity kCritical

FanSpeedLow [A1020]

Message Speed of fan fan_id exceeded fan_rpm RPM on Controller VM ip_address.

Cause The device is overheating to the point of imminent failure.

Resolution Ensure that the fans in the block are functioning properly and that the environment is cool enough.

Severity kCritical

FanSpeedLow [A1045]

Message Fan fan_id has stopped on Controller VM ip_address.

Cause A fan has failed.

Resolution Replace the fan as soon as possible. Refer to the Nutanix documentation for instructions.

Severity kCritical

FusionIOTemperatureHigh [A1016]

Message Fusion-io drive device temperature exceeded temperatureC on Controller VM ip_address

Cause The device is overheating.

Resolution Ensure that the fans in the block are functioning properly and that the environment is cool enough.

Severity kWarning

FusionIOTemperatureHigh [A1047]

Message Fusion-io drive device temperature exceeded temperatureC on Controller VM ip_address

Cause The device is overheating to the point of imminent failure.


Resolution Ensure that the fans in the block are functioning properly and that the environment is cool enough.

Severity kCritical

FusionIOWearHigh [A1014]

Message Fusion-io drive die failure has occurred in Controller VM svm_ip and most of the Fusion-io drives have worn out beyond 1.2PB of writes.

Cause The drives are approaching the maximum write endurance and are beginning to fail.

Resolution Replace the drives as soon as possible. Refer to the Nutanix documentation for instructions.

Severity kCritical

FusionIOWearHigh [A1026]

Message Fusion-io drive die failures have occurred in Controller VMs svm_ip_list.

Cause The drive is failing.

Resolution Replace the drive as soon as possible. Refer to the Nutanix documentation for instructions.

Severity kCritical

HardwareClockFailure [A1059]

Message Hardware clock in node node_serial has failed.

Cause The RTC clock on the host has failed or the RTC battery has died.

Resolution Replace the node. Refer to the Nutanix documentation for instructions.

Severity kCritical

IntelSSDTemperatureHigh [A1028]

Message Intel 910 SSD device device temperature exceeded temperatureC on the Controller VM ip_address.

Cause The device is overheating.

Resolution Ensure that the fans in the block are functioning properly and that the environment is cool enough.

Severity kWarning


IntelSSDTemperatureHigh [A1007]

Message Intel 910 SSD device device temperature exceeded temperatureC on the Controller VM ip_address.

Cause The device is overheating to the point of imminent failure.

Resolution Ensure that the fans in the block are functioning properly and that the environment is cool enough.

Severity kCritical

IntelSSDWearHigh [A1035]

Message Intel 910 SSD device device on the Controller VM ip_address has worn out beyond 6.5PB of writes.

Cause The drive is approaching the maximum write endurance.

Resolution Consider replacing the drive.

Severity kWarning

IntelSSDWearHigh [A1042]

Message Intel 910 SSD device device on the Controller VM ip_address has worn out beyond 7PB of writes.

Cause The drive is close to the maximum write endurance and failure is imminent.

Resolution Replace the drive as soon as possible. Refer to the Nutanix documentation for instructions.

Severity kCritical

PowerSupplyDown [A1046]

Message power_source power source is down on block block_position.

Cause The power supply has failed.

Resolution Replace the power supply as soon as possible. Refer to the Nutanix documentation for instructions.

Severity kCritical

RAMFault [A1052]

Message DIMM fault detected on Controller VM ip_address. The node is running with current_memory_gb GB whereas installed_memory_gb GB was installed.

Cause A DIMM has failed.


Resolution Replace the failed DIMM as soon as possible. Refer to the Nutanix documentation for instructions.

Severity kCritical

RAMTemperatureHigh [A1022]

Message Temperature of DIMM dimm_id for CPU cpu_id exceeded temperatureC on Controller VM ip_address

Cause The device is overheating to the point of imminent failure.

Resolution Ensure that the fans in the block are functioning properly and that the environment is cool enough.

Severity kCritical

SystemTemperatureHigh [A1012]

Message System temperature exceeded temperatureC on Controller VM ip_address

Cause The node is overheating to the point of imminent failure.

Resolution Ensure that the fans in the block are functioning properly and that the environment is cool enough.

Severity kCritical

Storage

DiskInodeUsageHigh [A1018]

Message Inode usage for one or more disks on Controller VM ip_address has exceeded 75%.

Cause The filesystem contains too many files.

Resolution Delete unneeded data or add nodes to the cluster.

Severity kWarning

DiskInodeUsageHigh [A1027]

Message Inode usage for one or more disks on Controller VM ip_address has exceeded 90%.

Cause The filesystem contains too many files.

Resolution Delete unneeded data or add nodes to the cluster.

Severity kCritical


DiskSpaceUsageHigh [A1031]

Message Disk space usage for one or more disks on Controller VM ip_address has exceeded warn_limit%.

Cause Too much data is stored on the node.

Resolution Delete unneeded data or add nodes to the cluster.

Severity kWarning

DiskSpaceUsageHigh [A1005]

Message Disk space usage for one or more disks on Controller VM ip_address has exceeded critical_limit%.

Cause Too much data is stored on the node.

Resolution Delete unneeded data or add nodes to the cluster.

Severity kCritical

FusionIOReserveLow [A1023]

Message Fusion-io drive device reserves are down to reserve% on Controller VM ip_address.

Cause The drive is beginning to fail.

Resolution Consider replacing the drive.

Severity kWarning

FusionIOReserveLow [A1039]

Message Fusion-io drive device reserves are down to reserve% on Controller VM ip_address.

Cause The drive is failing.

Resolution Replace the drive as soon as possible. Refer to the Nutanix documentation for instructions.

Severity kCritical

SpaceReservationViolated [A1021]

Message Space reservation configured on vdisk vdisk_name belonging to container id container_id could not be honored due to insufficient disk space resulting from a possible disk or node failure.

Cause A drive or a node has failed, and the space reservations on the cluster can no longer be met.


Resolution Change space reservations to total less than 90% of the available storage, and replace the drive or node as soon as possible. Refer to the Nutanix documentation for instructions.

Severity kWarning

VDiskBlockMapUsageHigh [A1061]

Message Too many snapshots have been allocated in the system. This may cause perceivable performance degradation.

Cause Too many vdisks or snapshots are present in the system.

Resolution Remove unneeded snapshots and vdisks. If using remote replication, try to lower the frequency of taking snapshots. If you cannot resolve the error, contact Nutanix support.

Severity kInfo


4: IP Address Configuration

NOS includes a web-based configuration tool that automates changing the Controller VM IP addresses and configures the cluster to use the new IP addresses. Other cluster components must be modified manually.

Requirements

The web-based configuration tool requires that IPv6 link-local be enabled on the subnet. If IPv6 link-local is not available, you must configure the Controller VM IP addresses and the cluster manually. The web-based configuration tool also requires that the Controller VMs be able to communicate with each other.

All Controller VMs and hypervisor hosts must be on the same subnet. If the IPMI interfaces are connected, Nutanix recommends that they be on the same subnet as the Controller VMs and hypervisor hosts.

Guest VMs can be on a different subnet.

To Reconfigure the Cluster

Warning: If you are reassigning a Controller VM IP address to another Controller VM, you must perform this complete procedure twice: once to assign intermediate IP addresses and again to assign the desired IP addresses.

For example, if Controller VM A has IP address 172.16.0.11 and Controller VM B has IP address 172.16.0.10 and you want to swap them, you would need to reconfigure them with different IP addresses (such as 172.16.0.100 and 172.16.0.101) before changing them to the IP addresses in use initially.

1. Place the cluster in reconfiguration mode by following To Prepare to Reconfigure the Cluster on page 34.

2. Configure the IPMI IP addresses by following the procedure for your hardware model.

→ To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) on page 35
→ To Configure the Remote Console IP Address (NX-3000) on page 35
→ To Configure the Remote Console IP Address (NX-2000) on page 36


Alternatively, you can set the IPMI IP address using a command-line utility by following To Configure the Remote Console IP Address (command line) on page 37.

3. Configure networking on the node by following the hypervisor-specific procedure.

→ vSphere: To Configure Host Networking on page 38
→ KVM: To Configure Host Networking (KVM) on page 39

4. (vSphere only) Update the ESXi host IP addresses in vCenter by following To Update the ESXi Host Password in vCenter on page 40.

5. Configure the Controller VM IP addresses.

→ If IPv6 is enabled on the subnet, follow To Change the Controller VM IP Addresses on page 40.
→ If IPv6 is not enabled on the subnet, follow To Change a Controller VM IP Address (manual) on page 41 for each Controller VM in the cluster.

6. Complete cluster reconfiguration by following To Complete Cluster Reconfiguration on page 42.

To Prepare to Reconfigure the Cluster

1. Log on to any Controller VM in the cluster with SSH.

2. Stop the Nutanix cluster.

nutanix@cvm$ cluster stop

Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster.

CVM: 172.16.8.191 Up, ZeusLeader
    Zeus                 UP   [3167, 3180, 3181, 3182, 3191, 3201]
    Scavenger            UP   [3334, 3351, 3352, 3353]
    ConnectionSplicer    DOWN []
    Hyperint             DOWN []
    Medusa               DOWN []
    DynamicRingChanger   DOWN []
    Pithos               DOWN []
    Stargate             DOWN []
    Cerebro              DOWN []
    Chronos              DOWN []
    Curator              DOWN []
    Prism                DOWN []
    AlertManager         DOWN []
    StatsAggregator      DOWN []
    SysStatCollector     DOWN []

3. Put the cluster in reconfiguration mode.

nutanix@cvm$ cluster reconfig

Type y to confirm the reconfiguration.

Wait until the cluster successfully enters reconfiguration mode, as shown in the following example.

INFO cluster:185 Restarted Genesis on 172.16.8.189.
INFO cluster:185 Restarted Genesis on 172.16.8.188.
INFO cluster:185 Restarted Genesis on 172.16.8.191.
INFO cluster:185 Restarted Genesis on 172.16.8.190.
INFO cluster:864 Success!


Remote Console IP Address Configuration

The Intelligent Platform Management Interface (IPMI) is a standardized interface used to manage a host and monitor its operation. To enable remote access to the console of each host, you must configure the IPMI settings within BIOS.

The Nutanix cluster provides a Java application to remotely view the console of each node, or host server. You can use this console to configure additional IP addresses in the cluster.

The procedure for configuring the remote console IP address is slightly different for each hardware platform.

To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000)

1. Connect a keyboard and monitor to a node in the Nutanix block.

2. Restart the node and press Delete to enter the BIOS setup utility. You will have a limited amount of time to enter BIOS before the host completes the restart process.

3. Press the right arrow key to select the IPMI tab.

4. Press the down arrow key until BMC network configuration is highlighted and then press Enter.

5. Select Configuration Address source and press Enter.

6. Select Static and press Enter.

7. Assign the Station IP address, Subnet mask, and Router IP address.

8. Review the BIOS settings and press F4 to save the configuration changes and exit the BIOS setup utility. The node restarts.

To Configure the Remote Console IP Address (NX-3000)

1. Connect a keyboard and monitor to a node in the Nutanix block.


2. Restart the node and press Delete to enter the BIOS setup utility. You will have a limited amount of time to enter BIOS before the host completes the restart process.

3. Press the right arrow key to select the Server Mgmt tab.

4. Press the down arrow key until BMC network configuration is highlighted and then press Enter.

5. Select Configuration source and press Enter.

6. Select Static on next reset and press Enter.

7. Assign the Station IP address, Subnet mask, and Router IP address.

8. Press F10 to save the configuration changes.

9. Review the settings and then press Enter. The node restarts.

To Configure the Remote Console IP Address (NX-2000)

1. Connect a keyboard and monitor to a node in the Nutanix block.

2. Restart the node and press Delete to enter the BIOS setup utility. You will have a limited amount of time to enter BIOS before the host completes the restart process.

3. Press the right arrow key to select the Advanced tab.

4. Press the down arrow key until IPMI Configuration is highlighted and then press Enter.

5. Select Set LAN Configuration and press Enter.

6. Select Static to assign an IP address, subnet mask, and gateway address.


7. Press F10 to save the configuration changes.

8. Review the settings and then press Enter.

9. Restart the node.

To Configure the Remote Console IP Address (command line)

You can configure the management interface from the hypervisor host on the same node.

Perform these steps once from each hypervisor host in the cluster where the management network configuration needs to be changed.

1. Log on to the hypervisor host with SSH or the IPMI remote console.

2. Set the networking parameters.

root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway

root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway

3. Show current settings.

root@esx# /ipmitool -v -U ADMIN -P ADMIN lan print 1

root@kvm# ipmitool -v -U ADMIN -P ADMIN lan print 1

Confirm that the parameters are set to the correct values.


To Configure Host Networking

You can access the ESXi console either through IPMI or by attaching a keyboard and monitor to the node.

1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.

2. Press the down arrow key until Configure Management Network is highlighted and then press Enter.

3. Select Network Adapters and press Enter.

4. Ensure that the connected network adapters are selected.

If they are not selected, press Space to select them and press Enter to return to the previous screen.

5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press Enter. In the dialog box, provide the VLAN ID and press Enter.

6. Select IP Configuration and press Enter.

7. If necessary, highlight the Set static IP address and network configuration option and press Space to update the setting.

8. Provide values for the IP Address, Subnet Mask, and Default Gateway fields based on your environment and then press Enter.

9. Select DNS Configuration and press Enter.

10. If necessary, highlight the Use the following DNS server addresses and hostname option and press Space to update the setting.

11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your environment and then press Enter.

12. Press Esc and then Y to apply all changes and restart the management network.

13. Select Test Management Network and press Enter.

14. Press Enter to start the network ping test.

15. Verify that the default gateway and DNS servers reported by the ping test match those that you specified earlier in the procedure and then press Enter.

Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP addresses are configured.


Press Enter to close the test window.

16. Press Esc to log out.

To Configure Host Networking (KVM)

You can access the hypervisor host console either through IPMI or by attaching a keyboard and monitor to the node.

1. Log on to the host as root.

2. Open the network interface configuration file.

root@kvm# vi /etc/sysconfig/network-scripts/ifcfg-br0

3. Press A to edit values in the file.

4. Update entries for netmask, gateway, and address.

The block should look like this:

ONBOOT="yes" NM_CONTROLLED="no" NETMASK="subnet_mask" IPADDR="host_ip_addr" DEVICE="eth0" TYPE="ethernet" GATEWAY="gateway_ip_addr" BOOTPROTO="none"

• Replace host_ip_addr with the IP address for the hypervisor host.
• Replace subnet_mask with the subnet mask for host_ip_addr.
• Replace gateway_ip_addr with the gateway address for host_ip_addr.
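For illustration only, a filled-in version of this block using assumed example addresses (adjust every value to match your environment) might look like the following.

ONBOOT="yes"
NM_CONTROLLED="no"
NETMASK="255.255.255.0"
IPADDR="172.16.8.50"
DEVICE="eth0"
TYPE="ethernet"
GATEWAY="172.16.8.1"
BOOTPROTO="none"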

5. Press Esc.

6. Type :wq and press Enter to save your changes.

7. Open the name services configuration file.

root@kvm# vi /etc/resolv.conf

8. Update the values for the nameserver parameter then save and close the file.

9. Restart networking.

root@kvm# /etc/init.d/network restart


To Update the ESXi Host Password in vCenter

1. Log on to vCenter with the vSphere client.

2. Right-click the host with the changed password and select Disconnect.

3. Right-click the host and select Connect.

4. Enter the new password and complete the Add Host Wizard.

If reconnecting the host fails, remove it from the cluster and add it again.

To Change the Controller VM IP Addresses

Before you begin.

• Confirm that the system you are using to configure the cluster meets the following requirements:

• IPv6 link-local enabled.
• Windows 7, Vista, or MacOS.
• (Windows only) Bonjour installed (included with iTunes or downloadable from http://support.apple.com/kb/DL999).

• Determine the IPv6 service of any Controller VM in the cluster.

IPv6 service names are uniquely generated at the factory and have the following form (note the final period):

NTNX-block_serial_number-node_location-CVM.local.

On the right side of the block toward the front is a label that has the block_serial_number (for example, 12AM3K520060). The node_location is a number 1-4 for NX-3000, a letter A-D for NX-1000/NX-2000/NX-3050, or a letter A-B for NX-6000.

If IPv6 link-local is not enabled on the subnet, reconfigure the cluster manually.

If you need to confirm whether IPv6 link-local is enabled on the network, or if you cannot access the node serial number, see the Nutanix support knowledge base for alternative methods.
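As an illustration, using the example block serial number above and assuming node position A, you could confirm from a MacOS workstation on the same subnet that the IPv6 service resolves by pinging it. The exact command and name-resolution behavior vary by operating system.

user@host$ ping6 NTNX-12AM3K520060-A-CVM.local.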


Warning: If you are reassigning a Controller VM IP address to another Controller VM, you must perform this complete procedure twice: once to assign intermediate IP addresses and again to assign the desired IP addresses.

For example, if Controller VM A has IP address 172.16.0.11 and Controller VM B has IP address 172.16.0.10 and you want to swap them, you would need to reconfigure them with different IP addresses (such as 172.16.0.100 and 172.16.0.101) before changing them to the IP addresses in use initially.

The cluster must be stopped and in reconfiguration mode before changing the Controller VM IP addresses.

1. Open a web browser.

Nutanix recommends using Internet Explorer 9 for Windows and Safari for Mac OS.

Note: Internet Explorer requires protected mode to be disabled. Go to Tools > Internet Options > Security, clear the Enable Protected Mode check box, and restart the browser.

2. Go to http://cvm_ip_addr:2100/ip_reconfig.html

Replace cvm_ip_addr with the name of the IPv6 service of any Controller VM that will be added to the cluster.

3. Update one or more cells on the IP Reconfiguration page.

Ensure that all components satisfy the cluster subnet requirements. See Subnet Requirements.

4. Click Reconfigure.

5. Wait until the Log Messages section of the page reports that the cluster has been successfully reconfigured, as shown in the following example.

Configuring IP addresses on node S10264822116570/A...Success!
Configuring IP addresses on node S10264822116570/C...Success!
Configuring IP addresses on node S10264822116570/B...Success!
Configuring IP addresses on node S10264822116570/D...Success!
Configuring Zeus on node S10264822116570/A...
Configuring Zeus on node S10264822116570/C...
Configuring Zeus on node S10264822116570/B...
Configuring Zeus on node S10264822116570/D...
Reconfiguration successful!

The IP address reconfiguration will disconnect any SSH sessions to cluster components. The cluster is taken out of reconfiguration mode.

To Change a Controller VM IP Address (manual)

1. Log on to the hypervisor host with SSH or the IPMI remote console.

2. Log on to the Controller VM with SSH.

root@host# ssh [email protected]

Enter the Controller VM nutanix password.

3. Restart genesis.

nutanix@cvm$ genesis restart

If the restart is successful, output similar to the following is displayed:


Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]

4. Change the network interface configuration.

a. Open the network interface configuration file.

nutanix@cvm$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0

Enter the nutanix password.

b. Press A to edit values in the file.

c. Update entries for netmask, gateway, and address.

The block should look like this:

ONBOOT="yes" NM_CONTROLLED="no" NETMASK="subnet_mask" IPADDR="cvm_ip_addr" DEVICE="eth0" TYPE="ethernet" GATEWAY="gateway_ip_addr" BOOTPROTO="none"

• Replace cvm_ip_addr with the IP address for the Controller VM.
• Replace subnet_mask with the subnet mask for cvm_ip_addr.
• Replace gateway_ip_addr with the gateway address for cvm_ip_addr.

d. Press Esc.

e. Type :wq and press Enter to save your changes.

5. Update the Zeus configuration.

a. Open the host configuration file.

nutanix@cvm$ sudo vi /etc/hosts

b. Press A to edit values in the file.

c. Update the zk1, zk2, and zk3 host entries to match the changed Controller VM IP addresses (an example follows this step).

d. Press Esc.

e. Type :wq and press Enter to save your changes.
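For illustration, the zk entries in /etc/hosts are plain hostname-to-address mappings. After the update they might look like the following; the IP addresses shown are assumed examples and must match the new Controller VM IP addresses in your cluster.

172.16.8.188    zk1
172.16.8.189    zk2
172.16.8.190    zk3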

6. Restart the virtual machine.

nutanix@cvm$ sudo reboot

Enter the nutanix password if prompted.

To Complete Cluster Reconfiguration

1. If you changed the IP addresses manually, take the cluster out of reconfiguration mode.

Perform these steps for every Controller VM in the cluster.

a. Log on to the Controller VM with SSH.


b. Take the Controller VM out of reconfiguration mode.

nutanix@cvm$ rm ~/.node_reconfigure

c. Restart genesis.

nutanix@cvm$ genesis restart

If the restart is successful, output similar to the following is displayed:

Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]

2. Log on to any Controller VM in the cluster with SSH.

3. Start the Nutanix cluster.

nutanix@cvm$ cluster start

If the cluster starts properly, output similar to the following is displayed for each node in the cluster:

CVM: 172.16.8.167 Up, ZeusLeader
    Zeus                 UP    [3148, 3161, 3162, 3163, 3170, 3180]
    Scavenger            UP    [3333, 3345, 3346, 11997]
    ConnectionSplicer    UP    [3379, 3392]
    Hyperint             UP    [3394, 3407, 3408, 3429, 3440, 3447]
    Medusa               UP    [3488, 3501, 3502, 3523, 3569]
    DynamicRingChanger   UP    [4592, 4609, 4610, 4640]
    Pithos               UP    [4613, 4625, 4626, 4678]
    Stargate             UP    [4628, 4647, 4648, 4709]
    Cerebro              UP    [4890, 4903, 4904, 4979]
    Chronos              UP    [4906, 4918, 4919, 4968]
    Curator              UP    [4922, 4934, 4935, 5064]
    Prism                UP    [4939, 4951, 4952, 4978]
    AlertManager         UP    [4954, 4966, 4967, 5022]
    StatsAggregator      UP    [5017, 5039, 5040, 5091]
    SysStatCollector     UP    [5046, 5061, 5062, 5098]


5: Field Installation

You can reimage a Nutanix node with the Phoenix ISO. This process installs the hypervisor and the Nutanix Controller VM.

Note: Phoenix usage is restricted to Nutanix sales engineers, support engineers, and authorizedpartners.

Phoenix can be used to cleanly install systems for POCs or to switch hypervisors.

NOS Installer Reference

Installation Options

Component          Option

Hypervisor         Clean Install Hypervisor: To install the selected hypervisor as part of complete reimaging.

Controller VM      Clean Install SVM: To install the Controller VM as part of complete reimaging or Controller VM boot drive replacement.

                   Repair SVM: To retain Controller VM configuration.

                   Note: Do not use this option except under guidance from Nutanix support.

Supported Products and Hypervisors

Product ESX 5.0U2 & 5.1U1 KVM Hyper-V

NX-1000 •

NX-2000 •

NX-2050 •

NX-3000 • •

NX-3050 • • •

NX-6050/NX-6070 •

To Image a Node

Before you begin.

• Download the Phoenix ISO to a workstation with access to the IPMI interface on the node that you want to reimage.


• Gather the following required pieces of information: Block ID, Cluster ID, and Node Serial Number. These items are assigned by Nutanix, and you must use the correct values.

This procedure describes how to image a node from an ISO on a workstation.

Repeat this procedure once for every node that you want to reimage.

1. Sign in to the IPMI web console.

2. Attach the ISO to the node.

a. Go to Remote Control and click Launch Console.

Accept any security warnings to start the console.

b. In the console, click Media > Virtual Media Wizard.

c. Click Browse next to ISO Image and select the ISO file.

d. Click Connect CD/DVD.

e. Go to Remote Control > Power Control.

f. Select Reset Server and click Perform Action. The host restarts from the ISO.

3. In the boot menu, select Installer and press Enter. If previous values for these parameters are detected on the node, they will be displayed.

4. Enter the required information.

→ If all previous values are displayed and you want to use them, press Y.
→ If some or all of the previous values are not displayed, enter the required values.

a. Block ID: Enter the unique block identifier assigned by Nutanix.

b. Model: Enter the product number.

c. Node Serial: Enter the unique node identifier assigned by Nutanix.

d. Cluster ID: Enter the unique cluster identifier assigned by Nutanix.

e. Node Position: Enter 1, 2, 3, or 4 for NX-3000; A, B, C, or D for all other 4-node blocks.

Warning: If you are imaging all nodes in a block, ensure that the Block ID is the same for all nodes and that the Node Serial Number and Node Position are different.


5. Select both Clean Install Hypervisor and Clean Install SVM then select Start.

Installation begins and takes about 20 minutes.

6. In the Virtual Media window, click Disconnect next to CD Media.

7. In the IPMI console, go to Remote Control > Power Control.

8. Select Reset Server and click Perform Action. The node restarts with the new image. After the node starts, additional configuration tasks run and then the host restarts again. During this time, the host name is installing-please-be-patient. Wait approximately 20 minutes until this stage completes before accessing the node.

Warning: Do not restart the host until the configuration is complete.

What to do next. Add the node to a cluster.


Part II: vSphere


6: vCenter Configuration

VMware vCenter enables the centralized management of multiple ESXi hosts. The Nutanix cluster in vCenter must be configured according to Nutanix best practices.

While most customers prefer to use an existing vCenter, Nutanix provides a vCenter OVF, which is on the Controller VMs in /home/nutanix/data/images/vcenter. You can deploy the OVF using the standard procedures for vSphere.

To Use an Existing vCenter Server

1. Shut down the Nutanix vCenter VM.

2. Create a new cluster entity within the existing vCenter inventory and configure its settings based on Nutanix best practices by following To Create a Nutanix Cluster in vCenter on page 48.

3. Add the Nutanix hosts to this new cluster by following To Add a Nutanix Node to vCenter on page 51.

To Create a Nutanix Cluster in vCenter

1. Log on to vCenter with the vSphere client.

2. If you want the Nutanix cluster to be in its own datacenter or if there is no datacenter, click File > New > Datacenter and type a meaningful name for the datacenter, such as NTNX-DC. Otherwise, proceed to the next step.

You can also create the Nutanix cluster within an existing datacenter.

3. Right-click the datacenter node and select New Cluster.

4. Type a meaningful name for the cluster in the Name field, such as NTNX-Cluster.

5. Select the Turn on vSphere HA check box and click Next.

6. Select Admission Control > Enable.

7. Select Admission Control Policy > Percentage of cluster resources reserved as failover spare capacity, enter the percentage appropriate for the number of Nutanix nodes in the cluster, and then click Next.

Hosts (N+1)  Percentage    Hosts (N+2)  Percentage    Hosts (N+3)  Percentage    Hosts (N+4)  Percentage

1            N/A           9            23%           17           18%           25           16%

2            N/A           10           20%           18           17%           26           15%

3            33%           11           18%           19           16%           27           15%

4            25%           12           17%           20           15%           28           14%

5            20%           13           15%           21           14%           29           14%

6            18%           14           14%           22           14%           30           13%

7            15%           15           13%           23           13%           31           13%

8            13%           16           13%           24           13%           32           13%

8. Click Next on the following three pages to accept the default values.

• Virtual Machine Options
• VM monitoring
• VMware EVC

9. Verify that Store the swapfile in the same directory as the virtual machine (recommended) is selected and click Next.

10. Review the settings and then click Finish.

11. Add all Nutanix nodes to the vCenter cluster inventory.

See To Add a Nutanix Node to vCenter on page 51.

12. Right-click the Nutanix cluster node and select Edit Settings.

13. If vSphere HA and DRS are not enabled, select them on the Cluster Features page. Otherwise, proceed to the next step.

Note: vSphere HA and DRS must be configured even if the customer does not plan to use the features. The settings will be preserved within the vSphere cluster configuration, so if the customer later decides to enable the feature, it will be pre-configured based on Nutanix best practices.

14. Configure vSphere HA.

a. Select vSphere HA > Virtual Machine Options.

b. Change the VM restart priority of all Controller VMs to Disabled.


Tip: Controller VMs include the phrase CVM in their names. It may be necessary to expand the Virtual Machine column to view the entire VM name.

c. Change the Host Isolation Response setting of all Controller VMs to Leave Powered On.

d. Select vSphere HA > VM Monitoring

e. Change the VM Monitoring setting for all Controller VMs to Disabled.

f. Select vSphere HA > Datastore Heartbeating.

g. Click Select only from my preferred datastores and select the Nutanix datastore (NTNX-NFS).

h. If the cluster does not use vSphere HA, disable it on the Cluster Features page. Otherwise, proceed to the next step.

15. Configure vSphere DRS.

a. Select vSphere DRS > Virtual Machine Options.

b. Change the Automation Level setting of all Controller VMs to Disabled.


c. Select vSphere DRS > Power Management.

d. Confirm that Off is selected as the default power management for the cluster.

e. If the cluster does not use vSphere DRS, disable it on the Cluster Features page. Otherwise, proceed to the next step.

16. Click OK to close the cluster settings window.

To Add a Nutanix Node to vCenter

The cluster must be configured according to Nutanix specifications given in vSphere Cluster Settings on page 53.

Tip: Refer to Default Cluster Credentials on page 2 for the default credentials of all cluster components.

1. Log on to vCenter with the vSphere client.

2. Right-click the cluster and select Add Host.

3. Type the IP address of the ESXi host in the Host field.

4. Enter the ESXi host logon credentials in the Username and Password fields.

5. Click Next.

If a security or duplicate management alert appears, click Yes.

6. Review the Host Summary page and click Next.

7. Select a license to assign to the ESXi host and click Next.

8. Ensure that the Enable Lockdown Mode check box is left unselected and click Next.

Lockdown mode is not supported.

9. Click Finish.

10. Select the ESXi host and click the Configuration tab.

11. Configure DNS servers.

a. Click DNS and Routing > Properties.

b. Select Use the following DNS server address.


c. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and click OK.

12. Configure NTP servers.

a. Click Time Configuration > Properties > Options > NTP Settings > Add.

b. Type the NTP server address.

Add multiple NTP servers if required.

c. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows.

d. Click Time Configuration > Properties > Options > General.

e. Select Start automatically under Startup Policy.

f. Click Start

g. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows.

13. Click Storage and confirm that NFS datastores are mounted.

14. Set the Controller VM to start automatically when the ESXi host is powered on.

a. Click the Configuration tab.

b. Click Virtual Machine Startup/Shutdown in the Software frame.

c. Select the Controller VM and click Properties.

d. Ensure that the Allow virtual machines to start and stop automatically with the system check box is selected.

e. If the Controller VM is listed in Manual Startup, click Move Up to move the Controller VM into the Automatic Startup section.


f. Click OK.

15. (NX-2000 only) Click Host Cache Configuration and confirm that the host cache is stored on the local datastore.

If it is not correct, click Properties to update the location.

vSphere Cluster Settings

Certain vSphere cluster settings are required for Nutanix clusters.

vSphere HA and DRS must be configured even if the customer does not plan to use the feature. The settings will be preserved within the vSphere cluster configuration, so if the customer later decides to enable the feature, it will be pre-configured based on Nutanix best practices.

vSphere HA Settings

Enable host monitoring

Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster.

Set the VM Restart Priority of all Controller VMs to Disabled.

Set the Host Isolation Response of all Controller VMs to Leave Powered On.

Disable VM Monitoring for all Controller VMs.

Enable Datastore Heartbeating by clicking Select only from my preferred datastores and choosing the Nutanix NFS datastore.

vSphere DRS Settings

Disable automation on all Controller VMs.


Leave power management disabled (set to Off).

Other Cluster Settings

Store VM swapfiles in the same directory as the virtual machine.

(NX-2000 only) Store host cache on the local datastore.

Failover Reservation Percentages

Hosts (N+1)  Percentage    Hosts (N+2)  Percentage    Hosts (N+3)  Percentage    Hosts (N+4)  Percentage

1            N/A           9            23%           17           18%           25           16%

2            N/A           10           20%           18           17%           26           15%

3            33%           11           18%           19           16%           27           15%

4            25%           12           17%           20           15%           28           14%

5            20%           13           15%           21           14%           29           14%

6            18%           14           14%           22           14%           30           13%

7            15%           15           13%           23           13%           31           13%

8            13%           16           13%           24           13%           32           13%


7: VM Management

Migrating a VM to Another Cluster

You can live migrate a VM to an ESXi host in a Nutanix cluster. Usually this is done in the following cases:

• Migrate VMs from an existing storage platform to Nutanix.
• Keep VMs running during a disruptive upgrade or other downtime of the Nutanix cluster.

In migrating VMs between vSphere clusters, the source host and NFS datastore are the ones presently running the VM. The target host and NFS datastore are the ones where the VM will run after migration. The target ESXi host and datastore must be part of a Nutanix cluster.

To accomplish this migration, you have to mount the NFS datastores from the target on the source. After the migration is complete, you should unmount the datastores and block access.

To Migrate a VM to Another Cluster

Before you begin. Both the source host and the target host must be in the same vSphere cluster. Allow NFS access to NDFS by adding the source host and target host to a whitelist, as described in To Configure a Filesystem Whitelist.

To migrate a VM back to the source from the target, perform this same procedure with the target as the new source and the source as the new target.

1. Sign in to the Nutanix web console.

2. Log on to vCenter with the vSphere client.

3. Mount the target NFS datastore on the source host and on the target host.

You can mount NFS datastores in the vSphere client by clicking Add Storage on the Configuration > Storage screen for a host.


Note: Due to a limitation with VMware vSphere, a temporary name and the IP address of a Controller VM must be used to mount the target NFS datastore on both the source host and the target host for this procedure.

Parameter         Value

Server            IP address of the Controller VM on the target ESXi host

Folder            Name of the container that has the target NFS datastore (typically /nfs-ctr)

Datastore Name    A temporary name for the NFS datastore (e.g., Temp-NTNX-NFS)

a. Select the source host and go to Configuration > Storage.

b. Click Add Storage and mount the target NFS datastore.

c. Select the target host and go to Configuration > Storage.

d. Click Add Storage and mount the target NFS datastore.

4. Change the VM datastore and host.

Do this for each VM that you want to live migrate to the target.

a. Right-click the VM and select Migrate.


b. Select Change datastore and click Next.

c. Select the temporary datastore and click Next then Finish. The VM storage is moved to the temporary datastore on the target host.

d. Right-click the VM and select Migrate.

e. Select Change host and click Next.

f. Select the target host and click Next.

g. Ensure that High priority is selected and click Next then Finish. The VM keeps running as it moves to the target host.

h. Right-click the VM and select Migrate.

i. Select Change datastore and click Next.

j. Select the target datastore and click Next then Finish. The VM storage is moved to the target datastore on the target host.

5. Unmount the datastores in the vSphere client.

Warning: Do not unmount the NFS datastore with the IP address 192.168.5.2.

a. Select the source host and go to Configuration > Storage.

b. Right-click the temporary datastore and select Unmount.

c. Select the target host and go to Configuration > Storage.

d. Right-click the temporary datastore and select Unmount.

What to do next. NDFS is not intended to be used as a general-use NFS server. Once the migration is complete, disable NFS access by removing the source host and target host from the whitelist, as described in To Configure a Filesystem Whitelist.

vStorage APIs for Array Integration

To improve the vSphere cloning process, Nutanix provides a vStorage APIs for Array Integration (VAAI) plugin. This plugin is installed by default during the Nutanix factory process.

Without the Nutanix VAAI plugin, the process of creating a full clone takes a significant amount of time because all the data that comprises a VM is duplicated. This duplication also results in an increase in storage consumption.

The Nutanix VAAI plugin efficiently makes full clones without reserving space for the clone. Read requests for blocks that are shared between parent and clone are sent to the original vDisk that was created for the parent VM. As the clone VM writes new blocks, the Nutanix file system allocates storage for those blocks. This data management occurs completely at the storage layer, so the ESXi host sees a single file with the full capacity that was allocated when the clone was created.
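If you need to confirm whether the plugin is present on a host, one way to check (a sketch; the output format varies by ESXi version) is to list the installed VIBs and look for the plugin name used later in this chapter.

root@esx# esxcli software vib list | grep nfs-vaai-plugin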

To Clone a VM

1. Log on to vCenter with the vSphere client.


2. Right-click the VM and select Clone.

3. Follow the wizard to enter a name for the clone, choose a cluster, and choose a host.

4. Select the datastore that contains the source VM and click Next.

Note: If you choose a datastore other than the one that contains the source VM, the clone operation will use the VMware implementation and not the Nutanix VAAI plugin.

5. If desired, set the guest customization parameters. Otherwise, proceed to the next step.

6. Click Finish.

To Uninstall the VAAI Plugin

Because the VAAI plugin is in the process of certification, the security level is set to allow community-supported plugins. Organizations with strict security policies may need to uninstall the plugin if it was installed during setup.

Perform this procedure on each ESXi host in the Nutanix cluster.

1. Log on to the ESXi host with SSH.

2. Uninstall the plugin.

root@esx# esxcli software vib remove --vibname nfs-vaai-plugin

This command should return the following message:

Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.

3. Disallow community-supported plugins.

root@esx# esxcli software acceptance set --level=PartnerSupported

4. Restart the node by following To Restart a Node on page 64.

Migrating vDisks to NFS

The Nutanix Virtual Computing Platform supports three types of storage for vDisks: VMFS, RDM, and NFS. Nutanix recommends NFS for most situations. You can migrate VMFS and RDM vDisks to NFS.

Before migration, you must have an NFS datastore. You can determine if a datastore is NFS in the vSphere client. NFS datastores have Server and Folder properties (for example, Server: 192.168.5.2, Folder: /ctr-ha). Datastore properties are shown in Datastores and Datastore Clusters > Configuration > Datastore Details in the vSphere client.


To create a datastore, use the Nutanix web console or the datastore create nCLI command.
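For example, a minimal nCLI invocation might look like the following. The datastore and container names are examples, and the parameter names shown are assumptions for this NOS release; verify them with the nCLI help output before running the command.

ncli> datastore create name="NTNX-NFS" ctr-name="ctr-ha"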

The type of vDisk determines the mechanism that you use to migrate it to NFS.

• To migrate VMFS vDisks to NFS, use storage vMotion by following To Migrate VMFS vDisks to NFS on page 59.

This operation takes significant time for each vDisk because the data is physically copied.

• To migrate RDM vDisks to NFS, use the Nutanix migrate2nfs.py utility by following To Migrate RDM vDisks to NFS on page 60.

This operation takes only a small amount of time for each vDisk because data is not physically copied.

To Migrate VMFS vDisks to NFS

Before you begin. Log on to vCenter with the vSphere client.

Perform this procedure for each VM that is supported by a VMFS vDisk. The migration takes a significant amount of time.

1. Right-click the VM and select Migrate.

2. Click Change datastore and click Next.

3. Select the NFS datastore and click Next.

4. Click Finish. The vDisk begins migration. When the migration is complete, the vSphere client Tasks & Events tab shows that the Relocate virtual machine task is completed.


To Migrate RDM vDisks to NFS

The migrate2nfs.py utility is available on Controller VMs to rapidly migrate RDM vDisks to an NFS datastore. This utility has the following restrictions:

• Guest VMs can be migrated only to an NFS datastore that is on the same container where the RDM vDisk resides. For example, if the vDisk is in the ctr-ha container, the NFS datastore must be on the ctr-ha container.

• ESXi has a maximum NFS vDisk size of 2 TB - 512 B. To migrate vDisks to NFS, the partitions must be smaller than this maximum. If you have any vDisks that exceed this maximum, you have to reduce the size in the guest VM before using this mechanism to migrate it. How to reduce the size is different for every operating system.

The following parameters are optional or are not always required.

--truncate_large_rdm_vmdks
    Specify this switch to migrate vDisks larger than the maximum after reducing the size of the partition in the guest operating system.

--filter=pattern
    Specify a pattern with the --batch switch to restrict the vDisks based on the name, for example Win7*. If you do not specify the --filter parameter in batch mode, all RDM vDisks are included.

--server=esxi_ip_addr and --svm_ip=cvm_ip_addr
    Specify the ESXi host and Controller VM IP addresses if you are running the migrate2nfs.py script on a Controller VM different from the node where the vDisk to migrate resides.

1. Log on to any Controller VM in the cluster with SSH.

2. Specify the logon credentials as environment variables.

nutanix@cvm$ export VI_USERNAME=root
nutanix@cvm$ export VI_PASSWORD=esxi_root_password

3. If you want to migrate one vDisk at a time, specify the VMX file.

nutanix@cvm$ migrate2nfs.py /vmfs/volumes/datastore_name/vm_dir/vm_name.vmx nfs_datastore

• Replace datastore_name with the name of the datastore, for example NTNX_datastore.
• Replace vm_dir/vm_name with the directory and the name of the VMX file.
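As an illustration, assuming a hypothetical VM named Win7-01 stored on NTNX_datastore and an NFS datastore named NTNX-NFS (the datastore names come from the examples in this section; the VM name is an assumption), the command would be:

nutanix@cvm$ migrate2nfs.py /vmfs/volumes/NTNX_datastore/Win7-01/Win7-01.vmx NTNX-NFS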

4. If you want to migrate multiple vDisks at the same time, run migrate2nfs.py in batch mode.

Perform these steps for each ESXi host in the cluster.

a. List the VMs that will be migrated.

nutanix@cvm$ migrate2nfs.py --list_only --batch --server=esxi_ip_addr --svm_ip=cvm_ip_addr source_datastore nfs_datastore

• Replace source_datastore with the name of the datastore that contains the VM .vmx file, for example NTNX_datastore.

• Replace nfs_datastore with the name of the NFS datastore, for example NTNX-NFS.

b. Migrate the VMs.

nutanix@cvm$ migrate2nfs.py --batch --server=esxi_ip_addr --svm_ip=cvm_ip_addr source_datastore nfs_datastore

Each VM takes approximately five minutes to migrate.

What to do next. Migrating the vDisks changes the device signature, which causes certain operating systems to mark the disk as offline. How to mark the disk online is different for every operating system.


8: Node Management

A Nutanix cluster is composed of individual nodes, or host servers that run a hypervisor. Each node hosts a Nutanix Controller VM, which coordinates management tasks with the Controller VMs on other nodes.

To Shut Down a Node in a Cluster

Before you begin. Shut down guest VMs, including vCenter and the vMA, that are running on the node, or move them to other nodes in the cluster.

Caution: You can shut down only one node per cluster at a time. If the cluster would have more than one node shut down, shut down the entire cluster.

1. Log on to vCenter (or to the ESXi host if vCenter is not available) with the vSphere client.

2. Right-click the Controller VM and select Power > Shut Down Guest.

Note: Do not Power Off or Reset the Controller VM. Shutting down the Controller VM as a guest ensures that the cluster is aware that the Controller VM is unavailable.

3. Right-click the host and select Enter Maintenance Mode.

4. In the Confirm Maintenance Mode dialog box, uncheck Move powered off and suspended virtual machines to other hosts in the cluster and click Yes. The host is placed in maintenance mode, which prevents VMs from running on the host.

5. Right-click the node and select Shut Down.

Wait until vCenter shows that the host is not responding, which may take several minutes.

If you are logged on to the ESXi host rather than to vCenter, the vSphere client will disconnect when the host shuts down.


To Start a Node in a Cluster

1. If the node is turned off, turn it on by pressing the power button on the front. Otherwise, proceed to the next step.

2. Log on to vCenter (or to the node if vCenter is not running) with the vSphere client.

3. Right-click the ESXi host and select Exit Maintenance Mode.

4. Right-click the Controller VM and select Power > Power on.

Wait approximately 5 minutes for all services to start on the Controller VM.

5. Confirm that cluster services are running on the Controller VM.

nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

Output similar to the following is displayed.

Name              : 10.1.56.197
Status            : Up
Zeus              : up
Scavenger         : up
ConnectionSplicer : up
Hyperint          : up
Medusa            : up
Pithos            : up
Stargate          : up
Cerebro           : up
Chronos           : up
Curator           : up
Prism             : up
AlertManager      : up
StatsAggregator   : up
SysStatCollector  : up

Every service listed should be up.

6. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that all Nutanix datastores are available.

7. Verify that all services are up on all Controller VMs.

nutanix@cvm$ cluster status

If the cluster is running properly, output similar to the following is displayed for each node in the cluster:

CVM: 172.16.8.167 Up, ZeusLeader
    Zeus                 UP    [3148, 3161, 3162, 3163, 3170, 3180]
    Scavenger            UP    [3333, 3345, 3346, 11997]
    ConnectionSplicer    UP    [3379, 3392]
    Hyperint             UP    [3394, 3407, 3408, 3429, 3440, 3447]
    Medusa               UP    [3488, 3501, 3502, 3523, 3569]
    DynamicRingChanger   UP    [4592, 4609, 4610, 4640]
    Pithos               UP    [4613, 4625, 4626, 4678]
    Stargate             UP    [4628, 4647, 4648, 4709]
    Cerebro              UP    [4890, 4903, 4904, 4979]
    Chronos              UP    [4906, 4918, 4919, 4968]
    Curator              UP    [4922, 4934, 4935, 5064]
    Prism                UP    [4939, 4951, 4952, 4978]
    AlertManager         UP    [4954, 4966, 4967, 5022]
    StatsAggregator      UP    [5017, 5039, 5040, 5091]
    SysStatCollector     UP    [5046, 5061, 5062, 5098]

To Restart a Node

Before you begin. Shut down guest VMs, including vCenter and the vMA, that are running on the node, or move them to other nodes in the cluster.

Use the following procedure when you need to restart all Nutanix Complete Blocks in a cluster.

1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the vSphere client.

2. Right-click the Controller VM and select Power > Shut Down Guest.

Note: Do not Power Off or Reset the Controller VM. Shutting down the Controller VM as a guest ensures that the cluster is aware that the Controller VM is unavailable.

3. Right-click the host and select Enter Maintenance Mode.

In the Confirm Maintenance Mode dialog box, uncheck Move powered off and suspended virtual machines to other hosts in the cluster and click Yes.

The host is placed in maintenance mode, which prevents VMs from running on the host.

4. Right-click the node and select Reboot.

Wait until vCenter shows that the host is not responding and then is responding again, which may take several minutes.

If you are logged on to the ESXi host rather than to vCenter, the vSphere client will disconnect when the host shuts down.

5. Right-click the ESXi host and select Exit Maintenance Mode.

6. Right-click the Controller VM and select Power > Power on.

Wait approximately 5 minutes for all services to start on the Controller VM.

7. Log on to the Controller VM with SSH.

8. Confirm that cluster services are running on the Controller VM.

nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

Output similar to the following is displayed.

Name              : 10.1.56.197
Status            : Up
Zeus              : up
Scavenger         : up
ConnectionSplicer : up
Hyperint          : up
Medusa            : up
Pithos            : up
Stargate          : up
Cerebro           : up
Chronos           : up
Curator           : up
Prism             : up
AlertManager      : up
StatsAggregator   : up
SysStatCollector  : up

Every service listed should be up.


9. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that all Nutanix datastores are available.

To Patch ESXi Hosts in a Cluster

Use the following procedure when you need to patch the ESXi hosts in a cluster without service interruption.

Perform the following steps for each ESXi host in the cluster.

1. Shut down the node by following To Shut Down a Node in a Cluster on page 62, including moving guest VMs to a running node in the cluster.

2. Patch the ESXi host using your normal procedures with VMware Update Manager or otherwise.

3. Start the node by following To Start a Node in a Cluster on page 63.

4. Log on to the Controller VM with SSH.

5. Confirm that cluster services are running on the Controller VM.

nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

Output similar to the following is displayed.

Name              : 10.1.56.197
Status            : Up
Zeus              : up
Scavenger         : up
ConnectionSplicer : up
Hyperint          : up
Medusa            : up
Pithos            : up
Stargate          : up
Cerebro           : up
Chronos           : up
Curator           : up
Prism             : up
AlertManager      : up
StatsAggregator   : up
SysStatCollector  : up

Every service listed should be up.

Removing a Node

Before removing a node from a Nutanix cluster, ensure the following statements are true:

• The cluster has at least four nodes at the beginning of the process.
• The cluster will have at least three functional nodes at the conclusion of the process.

When you start planned removal of a node, the node is marked for removal and data is migrated to other nodes in the cluster. After the node is prepared for removal, you can physically remove it from the block.

To Remove a Node from a Cluster

Before you begin.

• Ensure that all nodes that will be part of the cluster after node removal are running.
• Complete any add node operations on the cluster before removing nodes.


• Shut down all guest VMs on the node or migrate them to other nodes in the cluster. Do not shut down the Controller VM.

• Get the IP address and host ID of the host you want to remove using the nCLI host list command.

When you remove a node from a cluster, the cluster must re-replicate data that is stored on the node. Otherwise, the cluster may have only one copy of some data. During node removal, you must wait for this replication to complete, which may take up to 6 hours depending on the volume of data stored on the node.

Note: Removing a node places additional load on the cluster. To avoid impacting services provided by guest VMs, Nutanix recommends that nodes be removed at a time when the cluster is not under peak load.

1. Unmount datastores.

a. Log on to vCenter with the vSphere client.

b. Select the ESXi host on the node to be removed.

c. Go to Configuration > Storage

d. For each NFS and VMFS datastore, right-click the datastore and click Unmount.

2. Log on to the Controller VM on a host that will remain part of the cluster, not to the host that you are removing from the cluster.

3. Check the configuration of the rackable unit (the block).

nutanix@cvm$ ncli rackable-unit list | grep -B 4 "Position vs Host-ID.*:host_id"

Replace host_id with the ID of the node you are removing.

If the node you are removing is the only one listed in the rackable unit, you will need to remove the rackable unit after removing the node. Make a note of the ID of the rackable unit.

4. Migrate Zeus from the node to be removed.

nutanix@cvm$ cluster --migrate_from=cvm_ip_addr --genesis_rpc_timeout_secs=120 migrate_zeus

Replace cvm_ip_addr with the IP address of the Controller VM on the node to be removed.

• If the node to be removed is not a Zeus node, the following message is displayed:

CRITICAL cluster:710 The --migrate_from specified is not a zookeeper node

• If the node to be removed is a Zeus node, a new node is chosen and after about 2 minutes the following message is displayed:

INFO cluster:722 Zeus node new_zeus_node is auto selected as the migration target
INFO cluster:743 Zeus migration completed successfully

In either case, you can proceed with removing the node.

5. Start the host removal process.

nutanix@cvm$ ncli host remove-start id=host_id

Replace host_id with the desired host ID from the output of host list.

The command should return the following message:

Host removal successfully initiated

Data migration begins.


6. Monitor the status of data migration. Check approximately every half hour.

nutanix@cvm$ ncli host get-remove-status

Wait to proceed until data migration has completed. If the node you are removing is down, you do not need to wait for data migration to be reported as complete.

• If Host Status is MARKED_FOR_REMOVAL_BUT_NOT_DETACHABLE, data migration has not completed and you must continue to wait.

• If Host Status is DETACHABLE, data migration has completed and you can proceed to the next step.

7. If the node you removed was the only node listed in the rackable unit, remove the rackable unit from the cluster configuration.

nutanix@cvm$ ncli rackable-unit remove id=rack_unit_id

Replace rack_unit_id with the ID of the rackable unit found in the preceding step.

Confirm that the rackable unit was removed.

nutanix@cvm$ ncli rackable-unit list | grep -A 4 "ID.*rack_unit_id"

If the rackable unit is still shown, contact Nutanix support before proceeding.

8. Log on to the ESXi host with SSH and set the root password to the factory default.

Refer to Default Cluster Credentials on page 2.

Results. The node can be turned off or added to a different cluster.


9: Storage Replication Adapter for Site Recovery Manager

VMware's Site Recovery Manager (SRM) is the market leader in disaster recovery management for virtual applications. It offers run-book automation for disaster recovery and integrates with storage replication from array vendors as well as vSphere host-based replication. SRM depends on vCenter and is licensed separately.

VMware host-based replication copies VMware snapshots from host to host. VMware snapshots suffer from degraded performance on reads to snapshots with a long snapshot hierarchy, and collapsing delta files is extremely slow.

Storage replication adapters (SRA), which are provided by storage vendors, allow SRM to use array-level storage replication. The Nutanix SRA leverages native remote replication, which provides faster and more efficient scale-out replication of Nutanix snapshots. You cannot, however, protect VMs with both the SRA and with the Nutanix native replication (DR). The two protection mechanisms are mutually exclusive.

To use the Nutanix SRA, you must first configure the Nutanix cluster then configure the SRM servers.

SRM Architecture

Both the protected site and the recovery site have a vCenter server that manages protected VMs and an SRM server.

The Nutanix SRA is a set of scripts that must be installed on the SRM servers. The SRA uses the Prism REST API to communicate with Nutanix clusters.

vStores

A vStore is a separate mount point within a container and has its own NFS namespace. This namespace maps to a protection domain. Each vStore is exported as a device through the Nutanix SRA. You must explicitly protect a vStore; it is not protected by SRM otherwise.


The vStore mapping must have the same vStore name at both sites. Because the vStore name is derived from the container name, both sites must have the same container name.

Requirements

• The Controller VM IP addresses must be different on the remote sites. If they are the same, the Discover Array SRA method fails with a duplicate entries message.

• Every vStore that is mapped in a remote site must be protected on one site. If it is not protected on either site, the Discover Device SRA command fails.

• The vSphere datastore name should be the same as the container name.
• The maximum number of VMs per SRA is 25.
• The Nutanix SRA is supported on SRM 5.0 update 2.

To Configure the Nutanix Cluster for SRA Replication

1. On both the protected and recovery site cluster, create a container and datastore for VMs that you want to protect.

Warning: The container name must be the same on both clusters.

2. On both the protected and recovery site cluster, create a placeholder container and datastore for the SRM.

This placeholder datastore can contain VMs but cannot be protected.

3. On both clusters, configure a remote site with a vStore mapping.

ncli> remote-site create name=remote_site_name address-list="cvm_ip_addr_list" vstore-map="ctr_name:ctr_name" { enable-proxy="true" } { max-bandwidth=bandwidth }

• Replace remote_site_name with a name for the remote site. This parameter should be different for the two sites.

• Replace cvm_ip_addr_list with a comma-separated list of Controller VMs in the remote cluster. If you specify only a single Controller VM IP address in the cluster, the replication subsystems will discover the others. If the site is secure and you do not want auto-discovery, specify the IP addresses of all Controller VMs in the remote cluster. If you are using a tunnel, specify the tunnel IP address.

• Replace ctr_name with the name of the container that holds the VMs you want to back up.
• (Optional) Replace bandwidth with the replication bandwidth limit in Kb/s.
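For illustration, assuming a remote site named remote-B, two Controller VM IP addresses in the remote cluster, and a container named ctr-ha at both sites (all values here are assumed examples), the command might look like this:

ncli> remote-site create name=remote-B address-list="172.16.9.11,172.16.9.12" vstore-map="ctr-ha:ctr-ha"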

4. On the source cluster, protect the vStore.

ncli> vstore protect name=ctr_name

Replace ctr_name with the name of the container that holds the VMs that you want to protect.

5. Find the name of the protection domain.

ncli> vstore list name=ctr_name

Replace ctr_name with the name of the container that holds the VMs you want to back up.

Make a note of the name shown in the Protection Domain field.

6. Add a schedule to the protection domain.

ncli> pd set-schedule name="pd_name" interval="snapshot_interval" retention-policy="retention_policy_list" remote-sites="remote_site_name" { remote-factor=replication_factor }

• Replace pd_name with the name of the protection domain for the vStore.
• Replace snapshot_interval with the time in seconds between snapshots.


• Replace retention_policy_list with a comma-separated list of policies that define how many of what interval of snapshots to keep. Each policy consists of a pair interval:quantity. For example, if the snapshot_interval value is 3600 (1 hour), to keep the last five hourly snapshots, specify 1:5. To keep the last three daily snapshots, specify 24:3.

• (Optional) Replace replication_factor with a number that indicates the frequency of replication. For example, if the snapshot interval is one hour and the replication_factor is "12", then a snapshot will be replicated every twelfth occurrence, or twice daily.
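Continuing the example values above (hourly snapshots, keep the last five hourly and the last three daily snapshots, replicate every twelfth snapshot), and assuming a protection domain named ctr-ha-pd and a remote site named remote-B (both assumed names), the command might look like this:

ncli> pd set-schedule name="ctr-ha-pd" interval="3600" retention-policy="1:5,24:3" remote-sites="remote-B" remote-factor=12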

7. Move the VMs to be protected to the new container.

The maximum number of VMs per SRA is 25.

The cluster detects and begins to replicate the VMs within approximately 1 hour.

To Configure SRA Replication on the SRM Servers

Perform these steps on both SRM servers.

1. Log on to the SRM server with Remote Desktop Connection.

2. Download the Nutanix SRA installer from a Controller VM using SCP.

The path to the installer is /home/nutanix/data/installer/version/pkg/Nutanix-SRA-Setup.exe.

3. Double-click the installer and accept the license. The Nutanix SRA is installed.

4. Log on to vCenter with the vSphere client.

5. Go to View > Solutions and Application > Site Recovery.

If this option is not shown, go to Plug-ins > Manage Plug-ins and install the VMware vCenter Site Recovery Manager Extension.

6. Configure the placeholder datastore.

Perform these steps for every site.

a. Click Sites and select the site.

b. Click Placeholder Datastores.

c. Click Configure Placeholder Datastore.

d. Select the placeholder datastore you created on the cluster and click OK.

Warning: Do not select the datastore that contains the VMs to be protected.

7. Configure the array managers.

a. Click Array Managers and select the SRM server.

b. Click Add an Array Manager and enter the required information in the wizard.

c. Display Name: Enter a meaningful name for the array manager.

d. SRA type: Choose Nutanix SRA.

e. IP addresses of CVM 0, CVM 1, and CVM 2: Enter the IP addresses of three Nutanix Controller VMs at the site.


Warning: If the cluster at the site comprises more than one block, choose Controller VMs from different blocks.

f. Nutanix Command Center username and password: Enter the Nutanix management credentials.

8. Select the array manager and enable the discovered array pairs.

The items listed in the Remote Array column are identified by cluster_id:incarnation_id. The incarnation_id is displayed by the Cluster item on the home page of the Nutanix web console.

9. Click the Devices tab and ensure that the devices for enabled array pairs match the vStore mappings and that the direction of replication is correct.


Part III: KVM


10: Kernel-based Virtual Machine (KVM) Architecture

KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux, allowing administrators to run multiple virtual machines running unmodified Linux or Windows operating systems on a single hardware platform. As with other virtualization solutions, each virtual machine has private virtualized hardware, even though the physical hardware is shared.

KVM requires x86 hardware containing virtualization extensions (e.g., Intel VT or AMD-V). Linux kernels as of 2.6.20 include KVM by default. The hypervisor and virtual machines are managed and accessed through libvirt, an open source virtualization API.

Storage Overview

KVM on Nutanix uses iSCSI and NFS for storing VM files.

Figure: KVM Storage Example

iSCSI

Each disk that maps to a VM is defined as a separate iSCSI target. The Nutanix scripts work with libvirtd on the KVM host to create the necessary iSCSI structures in KVM. These structures map to vDisks created in the Nutanix container specified by the administrator. If no container is specified, the script uses the default container name.

Standard KVM requires that disks and VMs be created with separate commands, then joined as an additional step. The virt_install script allows the disk and the VM to be created in the same command line.
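For example (a minimal sketch; the VM name and sizes are illustrative, and the full flag reference appears in virt_install.py on page 83), a VM and its 16 GB disk can be created together:

nutanix@cvm$ virt_install --name example-vm --vcpus 1 --ram 2048 --disk 16 --nic VM-Network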


NFS Datastores

Nutanix containers can be accessed by the KVM host as NFS datastores. NFS datastores are used to manage images that may be used by multiple VMs, such as ISO files. When mapped to a VM, the script maps the file in the NFS datastore to the VM as an iSCSI device, just as it does for virtual disk files.

Images must be specified by an absolute path, as seen from the NFS server. For example, if a datastore named ImageStore exists with a subdirectory called linux, the path required to access this set of files would be /ImageStore/linux. Use the nfs_ls script to browse the datastore from the CVM:

nutanix@cvm$ nfs_ls --long --human_readable /ImageStore/linux
-rw-rw-r-- 1 1000 1000 Dec  7  2012   1.6G CentOS-6.3-x86_64-LiveDVD.iso
-rw-r--r-- 1 1000 1000 Jun 19 08:56 523.0M archlinux-2013.06.01-dual.iso
-rw-rw-r-- 1 1000 1000 Jun  3 19:22 373.0M grml64-full_2013.02.iso
-rw-rw-r-- 1 1000 1000 Nov 29  2012 694.3M ubuntu-12.04.1-amd64.iso

VM Commands

VMs in a Nutanix/KVM environment are controlled by a mix of Nutanix scripts and standard commands.

• Nutanix provides scripts for any operation that directly affects VM storage.
• Non-storage operations are handled by standard libvirt commands (for example, virsh, as sketched below).

This section focuses only on the Nutanix scripts.
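For example (a minimal sketch; the VM name is illustrative, and the virsh connection URI setup described in the next chapter is assumed), routine power operations go through virsh rather than the Nutanix scripts:

nutanix@cvm$ virsh list --all           # list all VMs defined on the host
nutanix@cvm$ virsh start example-vm     # power on a VM
nutanix@cvm$ virsh shutdown example-vm  # request a graceful guest shutdown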


11: VM Management Commands

Nutanix presents VMs as iSCSI targets. VMs can be managed with a set of scripts provided by Nutanix.

To install the VM management utilities for KVM, download the nutanix_kvm RPM and install it with sudo rpm -Uvh as root on every Controller VM in the cluster.

The VM management utilities are installed in /home/nutanix/bin.
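For example (the package file name is illustrative and depends on the release you downloaded):

nutanix@cvm$ sudo rpm -Uvh nutanix_kvm-version.rpm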

You can also use the virsh command from a Controller VM to manage VMs on the KVM host. Before using virsh, set this environment variable:

LIBVIRT_DEFAULT_URI=qemu+ssh://[email protected]/system

Add this line to the nutanix user's ~/.bashrc so that the environment variable is set when you log on.
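A minimal sketch (uri stands for the value shown above; the virsh nodeinfo check is illustrative):

nutanix@cvm$ echo 'export LIBVIRT_DEFAULT_URI=uri' >> ~/.bashrc   # replace uri with the value shown above
nutanix@cvm$ source ~/.bashrc
nutanix@cvm$ virsh nodeinfo   # should now report the KVM host without an explicit connection URI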

VM Lifecycle Operations

Task Command Reference

Create a VM virt_install virt_install.py on page 83

Move a VM from one host to another virt_migrate virt_migrate.py on page 87

Shut down and delete a VM virt_kill virt_kill.py on page 85

Disk Operations

Task Command Reference

Create and attach a disk to a VM virt_attach_disk virt_attach_disk.py on page 76

List disks attached to VMs virt_list_disks virt_list_disks.py on page 86

Verify proper configuration of disk attached to a VM virt_check_disks virt_check_disks.py on page 77

Detach a disk from a VM virt_detach_disk virt_detach_disk.py on page 80

Snapshot Operations

Task Command Reference

Create a snapshot from a VM virt_snapshot virt_snapshot.py on page 89

Create a clone from a snapshot virt_clone virt_clone.py on page 79

Delete a snapshot virt_kill_snapshot virt_kill_snapshot.py on page 86


ISO Management

Task Command Reference

Attach an ISO file to a VM as a CD-ROM device virt_insert_cdrom virt_insert_cdrom.py on page 82

Detach an ISO file from a VM virt_eject_cdrom virt_eject_cdrom.py on page 81

List the contents of an NFS export (for example, ISO images) nfs_ls nfs_ls.py on page 90

virt_attach_disk.py

Usage

nutanix@cvm$ virt_attach_disk --vm <name> --disk <args> [flags]

Attaches a nutanix disk to the specified VM.

The device will be paravirtualized if the --paravirt flag is provided, or if the VM already contains paravirt devices and the --noparavirt flag is not provided.

The target bus is inferred from the --target_dev flag, if specified. Otherwise, a suitable default is chosen. For paravirt devices, a paravirt SCSI bus is preferred over the virtio bus. For non-paravirt devices, a non-paravirt SCSI bus is preferred over the IDE bus.

This command should not be used for inserting CD-ROMs. See virt_insert_cdrom.

Disk argument:

Arguments to --disk should take one of the following forms:

--disk [create:]<size>

The "create" directive creates a new disk. This is the default behavior, if no directive is specified explicitly. The sub-argument is the size of the disk in GB.

--disk clone:(<disk>|<snap>|/<ctr>/<file>)

The "clone" directive creates a fast clone from an existing object. The sub-argument may be an existing disk name, an existing snapshot name, or the absolute path to a file hosted on a Nutanix datastore.

--disk existing:<disk>

The "existing" directive uses an existing disk. Use this mode with extreme caution: there is no safeguard to prevent the user from attaching the same disk to multiple VMs.

Example:

Attach a new 16GB disk to a freebsd VM's SCSI bus.

virt_attach_disk --vm freebsd --disk 16 --target_dev sdb
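Similarly (an illustrative sketch; the VM and disk names are hypothetical), the clone directive attaches a fast clone of an existing disk:

virt_attach_disk --vm freebsd-clone --disk clone:freebsd-disk0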

/usr/local/nutanix/nutanix_kvm/libexec/virt_attach_disk.py

--container  Default container (Default: default)


--disk  Disk device (required) (Default: None)
--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)
--host  Host name (Default: 192.168.5.1)
--paravirt  Use paravirtualized device (Default: None)
--target_dev  Target device (Default: None)
--vm  VM name (required) (Default: None)

virt_check_disks.py

Usage

nutanix@cvm$ virt_check_disks [flags] [vm_names...]

Validates that the nutanix disks attached to the specified VMs are properly defined as libvirt storage pools, that the storage pools point to the appropriate iSCSI targets, and that the backing volumes exist. If no VM is specified, all VMs on the host are validated.

Optionally, this script can also identify nutanix storage pools that are not in use by any defined VM (even VMs that are powered off).

This script runs in dry-run mode by default, in which it only reports issues. If the --fix flag is provided, it also attempts to rectify all problems. This flag should be used with extreme care, and should not be used for disks attached to powered-on VMs.

Note: The --fix mode for this script is heavy-handed, and would benefit from an interactive mode.

Examples:

Identify problems on host 10.3.200.35, including unused storage pools.

virt_check_disks --host 10.3.200.35


Forcibly recreate storage pools for a freebsd VM:

virt_check_disks --fix --force_recreate_pools freebsd

/usr/local/nutanix/nutanix_kvm/libexec/virt_check_disks.py

--check_persistent  Look for non-persistent storage pools (Default: false)
--check_unused  Look for storage pools unused by any VM (Default: false)
--create_pools  Create missing storage pools for VM disks (--fix mode) (Default: true)
--destroy_pools  Destroy storage pools for non-existing disks (--fix mode) (Default: true)
--destroy_unused  Destroy disks backing unused storage pools (--fix mode) (Default: false)
--disable_pool_autostart  Disable storage pool autostart (--fix mode) (Default: true)
--fix  Attempt to fix things (Default: false)
--force_recreate_pools  Forcibly recreate all storage pools (--fix mode) (Default: false)
--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)
--host  Host name (Default: 192.168.5.1)
--start_pools  Start storage pools (--fix mode) (Default: true)


virt_clone.py

Usage

nutanix@cvm$ virt_clone --snapshot_file <file> --target <target> [flags]

Creates clones from a snapshot descriptor.

Each target argument consists of a comma-delimited list of clone names, optionally preceded by a host name and a colon. If the host is omitted, then the local host (192.168.5.1) is assumed. See the examples below.

By default, any unique identifiers (libvirt UUID, ethernet MAC addresses, &c.) are regenerated during this process, so the resulting clone is distinguishable from its antecedent. This behavior may be overridden with the --preserve_ids flag.

Examples:

Spawn a clone named "freebsd-clone" from a gold image.

virt_clone \
  --snapshot_file /snapshot/freebsd-gold.xml \
  --target freebsd-clone

Restore a lost VM from a snapshot and start it.

virt_clone \
  --snapshot_file /snapshot/freebsd-backup.xml \
  --target freebsd \
  --preserve_ids --start

Spawn two clones of a gold image on each host, and start them.

virt_clone \
  --snapshot_file /snapshot/freebsd-gold.xml \
  --target 10.3.200.35:freebsd-10-3-200-35.0,freebsd-10-3-200-35.1 \
  --target 10.3.200.36:freebsd-10-3-200-36.0,freebsd-10-3-200-36.1 \
  --target 10.3.200.37:freebsd-10-3-200-37.0,freebsd-10-3-200-37.1 \
  --target 10.3.200.38:freebsd-10-3-200-38.0,freebsd-10-3-200-38.1 \
  --start

/usr/local/nutanix/nutanix_kvm/libexec/virt_clone.py

--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)
--persistent  Create persistent clones (Default: true)
--preserve_ids  Preserve UUIDs from snapshot (Default: false)


--snapshot_file  Snapshot file (required) (Default: None)
--start  Start clones after creation (Default: false)
--target  Clone target ([<host>:]<clone>[,<clone>...]); repeat this option to specify a list of values (Default: None)

virt_detach_disk.py

Usage

nutanix@cvm$ virt_detach_disk --vm <name> (--disk <name>|--target_dev <dev>) [flags]

Detaches a nutanix disk from the specified VM, and optionally destroys it.

The disk may either be specified by name, or by target devnode.

By default, the disk is permanently destroyed after being detached. The user may wish to override this behavior with the --nodestroy_disk option, allowing the disk to be repurposed (e.g., attached to another VM).

This command should not be used for ejecting CD-ROMs. See virt_eject_cdrom.

Examples:

Remove a disk from a freebsd VM by name.

virt_detach_disk --vm freebsd --disk freebsd-disk0

Remove a disk from the freebsd VM by devnode, but do not destroy the Nutanix disk.

virt_detach_disk --vm freebsd --target_dev sdb --nodestroy_disk

/usr/local/nutanix/nutanix_kvm/libexec/virt_detach_disk.py

--destroy_disk  Destroy disk (Default: true)
--destroy_pool  Destroy libvirt storage pool (Default: true)
--disk  Disk name (Default: None)
--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)


--helpxml  like --help, but generates XML output (Default: false)
--host  Host name (Default: 192.168.5.1)
--target_dev  Target device (Default: None)
--vm  VM name (required) (Default: None)

virt_eject_cdrom.py

Usage

nutanix@cvm$ virt_eject_cdrom --vm <name> [flags]

Ejects a CD-ROM from the VM's CD-ROM device.

If the CD-ROM pointed at a nutanix disk, then by default, the disk will be destroyed after it is ejected. This behavior may be overridden using the --nodestroy_disk flag. Note, however, that CD-ROM images are typically cloned before being inserted, and destroying them is usually the right thing to do.

If the CD-ROM is not a nutanix disk, then the --destroy_disk and --destroy_pool flags have no effect.

If the VM does not have a CD-ROM device, this command will fail.

Example:

Eject a CD-ROM from the CD-ROM device attached as hda.

virt_eject_cdrom --vm freebsd --target_dev hda

/usr/local/nutanix/nutanix_kvm/libexec/virt_eject_cdrom.py

--destroy_disk  Destroy disk (Default: true)
--destroy_pool  Destroy libvirt storage pool (Default: true)
--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)
--host  Host (Default: 192.168.5.1)
--target_dev  CD-ROM device (Default: None)
--vm  VM name (required) (Default: None)

virt_insert_cdrom.py

Usage

nutanix@cvm$ virt_insert_cdrom --vm <name> --cdrom <path> [flags]

Inserts a nutanix disk into the VM's CD-ROM drive.

If the VM has multiple CD-ROM drives, the first is used by default. A specific device may be selected using the --target_dev flag.

If the VM does not have a CD-ROM device, or if the selected CD-ROM device is not empty, this command will fail.

CD-ROM arguments:

Arguments to --cdrom should always be an absolute path to a file hosted on the Nutanix datastore.

Example:

Insert a CD-ROM image into the freebsd VM.

virt_insert_cdrom --vm freebsd --cdrom /ImageStore/freebsd.iso

/usr/local/nutanix/nutanix_kvm/libexec/virt_insert_cdrom.py

--cdrom  CD-ROM image (required) (Default: None)
--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)
--host  Host (Default: 192.168.5.1)


--target_dev  CD-ROM device (Default: None)
--vm  VM name (required) (Default: None)

virt_install.py

Usage

nutanix@cvm$ virt_install --name <name> [flags]

A nutanix-aware wrapper for virt-install. For additional help, consult the virt-install(1) manpage.

Note that, in addition to --name, a boot method must be provided (or inferred from the provided arguments). Any one of --cdrom, --disk, or --pxe is considered a boot method.

CD-ROM arguments:

Arguments to --cdrom should always be an absolute path to a file hosted on the Nutanix datastore.

If no CD-ROM is provided, an empty CD-ROM device will be created by default.

Disk arguments:

Arguments to --disk should take one of the following forms:

--disk [create:]<size>

The "create" directive creates a new disk. This is the default behavior, if no directive is specified explicitly. The sub-argument is the size of the disk in GB.

--disk clone:(<disk>|<snap>|/<ctr>/<file>)

The "clone" directive creates a fast clone from an existing object. The sub-argument may be an existing disk name, an existing snapshot name, or the absolute path to a file hosted on a Nutanix datastore.

--disk existing:<disk>

Uses an existing disk without cloning it. Use this mode with utmost caution: there is no safeguard to prevent the user from attaching the same disk to multiple VMs.

NIC arguments:

Arguments to --nic should be an existing network name (such as "VM-Network").

Examples:

A simple freebsd VM with a CD-ROM, two disks, and a NIC.

virt_install \
  --name freebsd \
  --vcpus 1 \
  --memory 2048 \
  --cdrom /ImageStore/freebsd_installer.iso \
  --disk 16 --disk 16 \
  --nic VM-Network \
  --os_variant freebsd8

Rebuilding a lost VM config, using existing disks.

virt_install \
  --name freebsd \
  --vcpus 1 \
  --memory 2048 \
  --disk existing:freebsd-disk0 \
  --disk existing:freebsd-disk1 \
  --nic VM-Network \
  --os_variant freebsd8

/usr/local/nutanix/nutanix_kvm/libexec/virt_install.py

--cdrom  CD-ROM image; repeat this option to specify a list of values (Default: None)
--container  Nutanix storage container (Default: default)
--cpu  CPU model and features (Default: host)
--description  Description string (Default: None)
--disk  Disk devices; repeat this option to specify a list of values (Default: None)
--empty_cdrom  Create an empty CD-ROM device when a CD-ROM image is not specified (Default: true)
--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)
--host  Host (Default: 192.168.5.1)
--name  VM name (required) (Default: None)


--nic  Network interfaces; repeat this option to specify a list of values (Default: None)
--nic_model  NIC model (ignored for paravirt) (Default: e1000)
--os_type  Operating system type (Default: None)
--os_variant  Operating system variant (Default: None)
--paravirt  Use paravirtualized devices (Default: None)
--passthrough_args  Passthrough arguments to virt-install(1) (Default: None)
--pxe  PXE boot (Default: false)
--ram  RAM in MB (Default: 4096)
--vcpus  Number of CPUs (Default: 2)
--vnc  Use VNC graphics (Default: true)
--vnc_listen  VNC listen address (Default: 0.0.0.0)
--vnc_port  VNC port (Default: None)

virt_kill.py

Usage

nutanix@cvm$ virt_kill [flags] vm_name [vm_names...]

Destroys and undefines the specified VM(s). If the VM has any disks backed by nutanix storage, these may be optionally detached from the host and destroyed.
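Example (an illustrative sketch; the VM name is hypothetical, and the --nodestroy_disks form follows the --[no]flag convention used by these scripts): shut down and delete a VM while keeping its nutanix disks.

virt_kill --nodestroy_disks freebsd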


/usr/local/nutanix/nutanix_kvm/libexec/virt_kill.py

--destroy_disks  Destroy nutanix disks (Default: true)
--destroy_pools  Destroy libvirt storage pools (Default: true)
--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)
--host  Host (Default: 192.168.5.1)

virt_kill_snapshot.py

Usage

nutanix@cvm$ virt_kill_snapshot [flags] <snapshot-file> [...]

Destroys a snapshot descriptor file, as well as the snapshotted disks to which it refers.
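Example (an illustrative sketch; the snapshot file name is hypothetical): destroy a gold-image snapshot and its backing disks.

virt_kill_snapshot /snapshot/freebsd-gold.xml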

/usr/local/nutanix/nutanix_kvm/libexec/virt_kill_snapshot.py

--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)

virt_list_disks.py

Usage

nutanix@cvm$ virt_list_disks [flags] <vm-name> [...]

Lists disks attached to the specified VM(s), or all VMs if none is specified.
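Example (an illustrative sketch; the host and VM name are hypothetical): list the disks attached to a freebsd VM on host 10.3.200.35.

virt_list_disks --host 10.3.200.35 freebsd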


/usr/local/nutanix/nutanix_kvm/libexec/virt_list_disks.py

--helpshow this help

Default: 0

--helpshortshow usage only for this module

Default: 0

--helpxmllike --help, but generates XML output

Default: false

--hostHost

Default: 192.168.5.1

--sizeShow disk capacity

Default: true

virt_migrate.py

Usage

nutanix@cvm$ virt_migrate --vm <name> --destination <host> [flags]

Migrates a VM from one host to another.

If the VM is powered down, the migration will be performed by simply copying the VM descriptor to the specified host. In this case, --bandwidth, --live, --suspend, and --tunnelled are ignored.

If the VM is powered on, the migration will be conducted internally by libvirtd, and all of the flags apply. The --bandwidth flag may be used to put an approximate upper bound on the bandwidth used for the migration.

By default, the VM will be suspended, its entire memory image will be copied over the network to the destination host, and then the VM will resume on the destination host. After a successful migration, the VM is undefined on the source host (--undefinesource) and persistently defined on the destination host (--persistent).

If --live is specified, the VM is not immediately suspended. After the initial memory image is copied over, any pages that were modified during the transfer are copied over as a delta. These deltas are transferred iteratively until they become small enough; then the VM is suspended, and the last delta is transferred. For VMs with a large working set, the --timeout parameter may be used to place an absolute threshold on the amount of time to wait before forcefully suspending the VM. Specifying 0 indicates an unlimited timeout, but this is not recommended.

The --tunnelled option is used to tunnel the data over an SSH channel between the source and destination hosts. This option is most likely to succeed if the destination is running a firewall.

Example:

Attempt to live-migrate a freebsd VM to 10.3.200.36, but fall back to suspended migration after 60 seconds.

virt_migrate --vm freebsd --destination 10.3.200.36 --live --timeout 60


/usr/local/nutanix/nutanix_kvm/libexec/virt_migrate.py

--bandwidth  Maximum bandwidth in MBps (Default: None)
--destination  Destination host (Default: None)
--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)
--live  Live migration (Default: false)
--persistent  Persist VM on destination (Default: true)
--source  Source host (Default: 192.168.5.1)
--suspend  Pause VM on destination (Default: false)
--timeout  Live migration timeout in seconds (Default: 300)
--tunnelled  Migrate over SSH tunnel (Default: true)
--undefinesource  Undefine VM on source (Default: true)
--vm  VM to migrate (Default: None)

virt_multiclone.py


Usage

nutanix@cvm$ virt_multiclone.py [flags]

/home/nutanix/nutanix_kvm/libexec/virt_multiclone.py

--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)
--persistent  Create persistent clones (Default: true)
--preserve_ids  Preserve UUIDs from snapshot (Default: false)
--snapshot_file  Snapshot file (Default: None)
--start  Start clones after creation (Default: false)
--target  Clone target (<host>:<clone>[,<clone>...]); repeat this option to specify a list of values (Default: None)
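An illustrative virt_multiclone sketch (hosts, snapshot file, and clone names are hypothetical; note that, unlike virt_clone, each --target argument includes a host):

virt_multiclone \
  --snapshot_file /snapshot/freebsd-gold.xml \
  --target 10.3.200.35:freebsd-clone0,freebsd-clone1 \
  --target 10.3.200.36:freebsd-clone2,freebsd-clone3 \
  --start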

virt_snapshot.py

Usage

nutanix@cvm$ virt_snapshot [flags]

The snapshot suffix is a strftime format string (though it need not include any strftime conversion characters). The first character of the suffix string should probably be a separator. If not provided, a suitable default encoding the current date and time will be used.

By default, the VM will be suspended during the snapshot if it has more than one disk (not including CD-ROMs). This guarantees a crash-consistent snapshot. The user may override this behavior with the --[no]suspend flag.

Example:

Take a snapshot of a freebsd VM, for use as a gold image, placing the resulting descriptor in /tmp/freebsd-gold.xml.

virt_snapshot --vm freebsd --suffix -gold --directory /tmp


/usr/local/nutanix/nutanix_kvm/libexec/virt_snapshot.py

--allow_overwrite  Overwrite existing snapshot data (Default: false)
--directory  Where to store snapshot XML (Default: None)
--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)
--host  Host name (Default: 192.168.5.1)
--stdout  Write snapshot XML to stdout (Default: false)
--suffix  Snapshot suffix (strftime format) (Default: None)
--suspend  Suspend VM during snapshot (Default: None)
--vm  VM name (required) (Default: None)

nfs_ls.py

Usage

nutanix@cvm$ nfs_ls [flags] [<path> [<path>...]]

Enumerates NFS exports, mounts them, and generates ls-like output for all paths specified as arguments. If no path is specified, this program operates on the root directory of each export.

Note that, by default, this program uses unreserved ports when invoked as a non-root user, which means that it will only be able to browse insecure NFS exports.


/usr/local/nutanix/nutanix_kvm/libexec/nfs_ls.py

--absolute  Always print absolute path (Default: None)
--all  Do not ignore entries starting with . (Default: None)
--atime  List (and sort by) access time (Default: None)
--classify  Append indicator (/@) to entries (Default: None)
--ctime  List (and sort by) create time (Default: None)
--directory  Do not list directory contents (Default: None)
--gid  Group ID (Default: 1000)
--help  show this help (Default: 0)
--helpshort  show usage only for this module (Default: 0)
--helpxml  like --help, but generates XML output (Default: false)
--host  Target host (Default: 192.168.5.2)
--human_readable  Human-readable sizes (Default: None)
--inode  Show the index number of each file (Default: None)
--long  Long listing format (Default: None)
--mtime  List (and sort by) modification time (Default: None)
--recursive  List subdirectories recursively (Default: None)
--reverse  Reverse order when sorting (Default: None)
--sort_size  Sort by file size (Default: None)
--uid  User ID (Default: 1000)
--use_reserved_port  Use a reserved local port (Default: false)


Part IV: Hardware


12: Node Order

Most Nutanix models include blocks that support (up to) four nodes, but some Nutanix models (NX-6000 series) include blocks that support only two nodes. See the appropriate section for your model.

Four-Node Blocks

Nutanix assigns a name to each node in a block.

For NX-2000 and NX-1000/3050, these names are:

• Node A
• Node B
• Node C
• Node D

For NX-3000, these names are:

• Node 1
• Node 2
• Node 3
• Node 4

Physical drives are arranged in the chassis according to this node order, as shown in the following diagram.


The first drive in each node is the SSD drive, not an HDD.

Figure: NX-2000 block

The first drive in each node is the SSD drive, not an HDD.

Figure: NX-3000 block


The first drive in each node contains the Controller VM and metadata. All the other drives are data drives.

Figure: NX-3050 block

Two-Node Blocks

Nutanix assigns a name to each node in a block. These names are:

• Node A
• Node B

Node A physical drives are located in the left side of the chassis while Node B drives are located in the right side, as shown in the following diagram. The bottom two drives in each node are SSDs; the other drives are HDDs.


The first drive in each node contains the Controller VM and metadata. All the other drives are data drives.

Figure: NX-6000 block


13: System Specifications

NX-1000 Series System Specifications

Hardware Components

CPU 8 × Intel Xeon E5-2620 6-core Sandy Bridge @ 2.0 GHz (2 per node)

Storage:
• 16 × hot-swappable 1 TB SATA hard disk drives (HDD) (4 per node)
• 4 × hot-swappable 400 GB solid state drives (SSD) (1 per node)
• 4 × 16 GB SATA DOM (1 per node)

Memory:
• 8 × 240-pin DIMM sockets per node (4 per CPU)
• 8 GB DIMMs: 1600/1333/1066 MHz DDR3 RDIMM, 1.5 V
• 16 × 8 GB = 128 GB or 8 × 8 GB = 64 GB

Network connections:
• 2 × 10 GbE SFP+ per node (both ports on the NIC)
• 2 × 1 GbE BASE-T RJ45 per node
• 1 × 10/100 BASE-T RJ45 per node

Expansion slots 2 (x8) PCIe 3.0 MicroLP (low profile) per node (both slots filled with a NIC and LSI SATA controller)

Fans 4 × 8 cm cooling fans

System Characteristics

Block weight: Standalone: 67.2 lbs. (30.5 kg); Package: 77.2 lbs. (35.0 kg)

Block form factor 2U rack-mount chassis

Height: 3.5" (8.9 cm)

Width: 17.3" (43.8 cm)

Block dimensions

Depth: 26.8" (67.9 cm)

Node weight 7 lbs. (3.2 kg)

Width: 6.8" (17.3 cm)Node dimensions

Length: 22.5" (57.2 cm)


Power and Electrical

Power supplies:
• 2 × redundant, hot-swappable, auto-ranging power supplies
• 80 PLUS Platinum Certified (180 - 240 V only)
• 1000 W Output @ 100-120 V, 12.0-10 A, 50-60 Hz
• 1620 W Output @ 180-240 V, 10.5-8.0 A, 50-60 Hz

Operating Power Consumption:
• 1100 W Output @ 100-140 V, 13.5-9.5 A, 50-60 Hz
• 1400 W Output @ 180-240 V, 9.5-7.0 A, 50-60 Hz

Block power consumption (maximum): 1150 W (128 GB × 4 nodes)

Thermal dissipation (maximum) 3930 BTU/hr (128 GB × 4 nodes)

Operating Environment

Operating temperature 50° to 95° F (10° to 35° C)

Nonoperating temperature -40° to 158° F (-40° to 70° C)

Operating relative humidity 20% to 95% (non-condensing)

Nonoperating relative humidity 5% to 95% (non-condensing)

Field-Replaceable Unit List (NX-1000 Series)

Short reference description.

Image Description Part Number

Bezel, 2U, w/Lock and Key, NX-3050 (also applies to NX-1000 series) X-BEZEL-NX3050
Chassis, NX-3050 w/PSU & Fans (also applies to NX-1000 series) X-CHASSIS-NX3050

HDD, SATA, 1TB X-HDD-SATA-1TB

SSD, SATA, 400GB X-SSD-SATA-400GB

Memory, 16GB, DDR3, RDIMM, Samsung X-MEM-16GB-S


Node, 64GB, NX-1050 X-NODE-64GB-1050

Shipping Packaging, NX-2000 series (also applies to NX-1000 series) X-PKG-NX-2000
Power Supply, 1620W, NX-2000 series (also applies to NX-1000 series) X-PSU-1620-NX2000
Chassis Fan, 80x80x38mm (NX-1000, NX-3050, NX-6000) X-FAN-80-11K-SM
Rail, 2U, NX-2000 series (Also NX-1000, NX-3050, NX-6000 series) X-RAIL-NX2000

Cable, 3m, SFP+ to SFP+ X-CBL-3M-SFP+-SFP+

Cable, 4m, SFP+ to SFP+ X-CBL-4M-SFP+-SFP+

Cable, 5m, SFP+ to SFP+ X-CBL-5M-SFP+-SFP+

NIC, 10 GbE, Dual SFP+, SMC Micro LP X-MEZZ-NIC-10G-SM

NX-2000 System Specifications

Hardware Components

CPU 8 × Intel Xeon 6-core Westmere x5650 @ 2.66 GHz (2 CPUs per node)

Hard drives:
• 20 × hot-swappable 1 TB SATA hard disk drives (HDD)
• 4 × hot-swappable 300 GB solid state drives (SSD)


Memory:
• 12 × 240-pin DIMM sockets per node
• Supports up to 192 GB RAM per node (1333/1066/800 MHz DDR3 RDIMM), 1.5V or 1.35V

Network connections:
• 1 × 10Gb QSFP per node
• 2 × 1Gb BASE-T RJ45 per node
• 1 × 10/100 BASE-T RJ45 per node

Power supplies:
• 2 × redundant, hot-swappable supplies
• 80 PLUS Gold Certified
• 1100W Output @ 100-140V, 13.5-9.5A, 50-60Hz
• 1400W Output @ 180-240V, 9.5-7.0A, 50-60Hz

Expansion slots 1 (x16) PCIe 2.0 (low profile) per node

Fans 4 × 8cm cooling fans

System Characteristics

Block weight: Standalone: 85 lbs. (38.6 kg); Package: 95 lbs. (43 kg)

Block form factor 28" (711 mm) deep 2U rack-mount chassis

Height: 3.5" (89 mm)

Width: 17.2" (437 mm)

Block dimensions

Depth: 28" (711 mm)

Node weight 7 lbs. (3.2 kg)

Width: 6.8" (173 mm)Node dimensions

Length: 22.5" (572 mm)

Power and Electrical

AC input:
• 1100W Output @ 100-140 V, 13.5-9.5A, 50-60Hz
• 1400W Output @ 180-240 V, 9.5-7.0A, 50-60Hz

Block power consumption (maximum):
• 48 GB RAM per node: 1200W
• 192 GB RAM per node: 1350W

Thermal dissipation (maximum):
• 48 GB RAM per node: 4100 BTU/hr
• 192 GB RAM per node: 4600 BTU/hr


Operating Environment

Operating temperature 50° to 95° F (10° to 35° C)

Nonoperating temperature 32° to 110° F (0° to 40° C)

Operating relative humidity 20% to 95% (non-condensing)

Nonoperating relative humidity 5% to 95% (non-condensing)

Field-Replaceable Unit List (NX-2000)

Image Description Part Number

Spare, Bezel, 2U, with lock & keys, NX-2000 Family X-BEZEL-NX2000
Spare, Cable, 3m, QSFP to SFP+ X-CBL-3M-QSFP-SFP+
Spare, Cable, 4m, QSFP to SFP+ X-CBL-4M-QSFP-SFP+
Spare, Cable, 5m, QSFP to SFP+ X-CBL-5M-QSFP-SFP+
Spare, Chassis, Nutanix 2000 series (PSU & Fans only) X-CHASSIS-NX2000
Chassis Fan, Nutanix 2000 series (8x8x3.8cm) X-FAN-NX2000
Spare, SSD, PCI, 320GB, Fusion I/O Nutanix 2000 series X-FUSION-IO-320GB
Spare, HDD, SATA, 1TB, Nutanix 2000 series X-HDD-SATA-1TB
Spare, Memory, 16GB Nutanix 2000 series (qty 2) X-MEM-16GB


Spare, Memory, 8GB Nutanix 2000 series (qty 2) X-MEM-8GB

Spare, Node, 192GB, Nutanix 2000 series X-NODE-192GB-NX2000

Spare, Node, 48GB, Nutanix 2000 series X-NODE-48GB-NX2000

Spare, Node, 96GB, Nutanix 2000 series X-NODE-96GB-NX2000

Spare, Power Supply, 1400W, Nutanix 2000 series X-PSU-1400-NX2000
Spare, Power Supply, 1620W, Nutanix 2000 series X-PSU-1620-NX2000
Rail, 2U, NX-2000 series (Also NX-1000, NX-3050, NX-6000 series) X-RAIL-NX2000
Spare, SSD, SATA, 300GB, Nutanix 2000 series X-SSD-SATA-300GB
Spare Kit, NX-2000 series (2 × HDD, 1 × PSU, 1 × Fan) XC-SPRKT-NX2000

NX-3000 System Specifications

Hardware Components

CPU 8 × Intel Xeon 8-core Sandy Bridge E5-2660 @ 2.2 GHz (2 CPUs per node)

Storage:
• 20 × 1 TB SATA hard disk drive (HDD) (5 per node)
• 4 × 300 GB SATA SSD (1 per node)
• 4 × 400 GB PCIe solid state drive (SSD) (1 per node)
• 4 × 4 GB eUSB module (1 per node)

Memory 64 × DDR3 DIMM socket, 1600 MHz, 1.5 V (Up to 16 DIMMs, 256 GB per node)

Network connections:
• 2 × 10 GbE SFP+ connectors per node (1 x16 lane low-profile mezzanine card, two ports)
• 2 × 1 GbE BASE-T RJ45 ports per node
• 1 × 10/100 BASE-T RJ45 ports per node

Power supplies:
• 2 × redundant, hot-swappable power supplies, 80 PLUS Gold Certified
• 1400 W Output @ 200-240 VAC, 50-60 Hz, 9.6A

Fans 2 × cooling fan assemblies

Expansion slots 1 (x16) PCIe 2.0 (low profile) per node

System Characteristics

Block weight, max configuration: Standalone: 72.8 lbs. (33 kg); Package: 101 lbs. (45.8 kg)

Block form factor 30.5" (774.7 mm) deep 2U rack-mount chassis

Block dimensions: Height: 3.44" (87.37 mm); Width: 17.6" (447 mm); Depth: 30.5" (774.7 mm)

Node weight 6.94 lbs. (3.14 kg)

Node dimensions: Height: 1.5" (38.1 mm); Width: 6.6" (172 mm); Depth: 20.6" (524 mm)

Power and Electrical

AC power input 1400 W Output @ 200-240 VAC, 50-60 Hz, 9.6 A

Block power consumption (maximum):
• 128 GB RAM per node: 1250 W
• 256 GB RAM per node: 1350 W

Thermal dissipation (maximum):
• 128 GB RAM per node: 4265 BTU/hr
• 256 GB RAM per node: 4605 BTU/hr

Operating Environment

Operating temperature 41° to 95° F (5° to 35° C)

Nonoperating temperature -40° to 149° F (-40° to 65° C)

Operating relative humidity 20% to 80% (non-condensing)

Nonoperating relative humidity 5% to 95% (non-condensing)


Field-Replaceable Unit List (NX-3000)

Description Part Number

Spare, Bezel, 2U, w/Lock and Key, NX3 X-BEZEL-NX3

Spare, Cable, 3m, SFP+ to SFP+ X-CBL-3M-SFP+-SFP+

Spare, Cable, 3m, SFP+ to SFP+ with QSA Adapter X-CBL-3M-QSA-SFP+

Spare, Cable, 4m, SFP+ to SFP+ X-CBL-4M-SFP+-SFP+

Spare, Cable, 5m, SFP+ to SFP+ X-CBL-5M-SFP+-SFP+

Spare, Chassis, Quanta S810-X52L, NX3, (PSU & Fans Only) X-CHASSIS-NX3

Spare, Fan Assembly, Left Side, NX-3000 X-FAN-NX3-LEFT

Spare, Fan Assembly, Right Side, NX-3000 X-FAN-NX3-RIGHT

Spare, SSD, PCI, 400GB, Intel 910, NX3 X-INTEL-400GB

Spare, HDD, SATA, 1TB, NX3 Series X-HDD-SATA-NX3-1TB

Spare, Memory, 16GB, NX3 Series X-MEM-NX3-16GB

Spare, Memory, 8GB, NX3 Series X-MEM-NX3-8GB

Spare, Node, 128GB, S810-X52L, NX3 X-NODE-128GB-NX3

Spare, Node, 64GB, S810-X52L, NX3 X-NODE-64GB-NX3

Spare, Node, 256GB, S810-X52L, NX3 X-NODE-256GB-NX3

Spare, Shipping Packaging, NX3 X-PKG-NX3

Spare, Power Supply, 1400W, S810-X52L, NX3 Series X-PSU-1400-NX3

Spare, Rail, 2U, S810-X52L, NX3 Series X-RAIL-NX3

Spare, SSD, SATA, 300GB, NX3 Series X-SSD-SATA-NX3-300GB

Spare, NIC, 10 GbE, Dual Port, Mezz X-NIC-10G-2

NX-3050 System Specifications

Hardware Components

CPU 8 × Intel Xeon E5-2670 8-core Sandy Bridge @ 2.60GHz (2 per node)

Storage:
• 16 × hot-swappable 1 TB SATA hard disk drives (HDD) (4 per node)
• 8 × hot-swappable 400 GB solid state drives (SSD) (2 per node)
• 4 × 16 GB SATA DOM (1 per node)

Memory:
• 16 × 240-pin DIMM sockets per node (8 per CPU)
• 16 GB DIMMs: 1600/1333/1066 MHz DDR3 RDIMM, 1.5V
• 8 × 16 GB = 128 GB or 16 × 16 GB = 256 GB

Network connections:
• 2 × 10Gb SFP+ per node (both ports on the NIC)
• 2 × 1Gb BASE-T RJ45 per node
• 1 × 10/100 BASE-T RJ45 per node

Expansion slots 2 (x8) PCIe 3.0 MicroLP (low profile) per node (both slots filled with a NIC and LSI SATA controller)

Fans 4 × 8 cm cooling fans

System Characteristics

Block weight: Standalone: 67.2 lbs. (30.5 kg); Package: 77.2 lbs. (35.0 kg)

Block form factor 2U rack-mount chassis

Block dimensions: Height: 3.5" (8.9 cm); Width: 17.3" (43.8 cm); Depth: 26.8" (67.9 cm)

Node weight 7 lbs. (3.2 kg)

Node dimensions: Width: 6.8" (17.3 cm); Length: 22.5" (57.2 cm)

Power and Electrical

Power supplies:
• 2 × redundant, hot-swappable, auto-ranging power supplies
• 80 PLUS Platinum Certified (180 - 240V only)
• 1000W Output @ 100-120V, 12.0-10A, 50-60Hz
• 1620W Output @ 180-240V, 10.5-8.0A, 50-60Hz

Operating Power Consumption:
• 1100W Output @ 100-140 V, 13.5-9.5A, 50-60Hz
• 1400W Output @ 180-240 V, 9.5-7.0A, 50-60Hz

Block power consumption (maximum): 1350W (256 GB × 4 nodes)

Thermal dissipation (maximum) 4610 BTU/hr (256 GB × 4 nodes)

Operating Environment

Operating temperature 50° to 95° F (10° to 35° C)

Nonoperating temperature -40° to 158° F (-40° to 70° C)

Operating relative humidity 20% to 95% (non-condensing)


Nonoperating relative humidity 5% to 95% (non-condensing)

Field-Replaceable Unit List (NX-3050)

Image Description Part Number

Bezel, 2U, w/Lock and Key, NX-3050 X-BEZEL-NX3050

Chassis, NX-3050 w/PSU & Fans X-CHASSIS-NX3050

HDD, SATA, 1TB X-HDD-SATA-1TB

SSD, SATA, 400GB X-SSD-SATA-400GB

SSD, SATA, 800GB (NX-3051 only) X-SSD-SATA-800GB

Memory, 16GB X-MEM-16GB-S

Node, 256GB, NX-3050 X-NODE-256GB-3050

Node, 128GB, NX-3050 X-NODE-128GB-3050

Shipping Packaging, NX-2000 series (also applies to NX-3050) X-PKG-NX-2000
Power Supply, 1620W, NX-2000 series (NX-1000, NX-3050, NX-6000) X-PSU-1620-NX2000
Chassis Fan, 80x80x38mm (NX-1000, NX-3050, NX-6000) X-FAN-80-11K-SM
Rail, 2U, NX-2000 series (also applies to NX-3050) X-RAIL-NX2000


Cable, 3m, SFP+ to SFP+ X-CBL-3M-SFP+-SFP+

Cable, 4m, SFP+ to SFP+ X-CBL-4M-SFP+-SFP+

Cable, 5m, SFP+ to SFP+ X-CBL-5M-SFP+-SFP+

NIC, 10 GbE, Dual SFP+, SMC Micro LP X-MEZZ-NIC-10G-SM

Left Ear, NX-2000 series (also applies to NX-3050) X-EAR-LEFT-NX2000
Right Ear, NX-2000 series (also applies to NX-3050) X-EAR-RIGHT-NX2000

NX-6000 Series System Specifications

Hardware Components

CPU 4 × Intel Xeon E5-2690 8-core Sandy Bridge @ 2.90 GHz [NX-6070] or 2.60 GHz [NX-6050] (2 per node)

Storage:
• 8 × hot-swappable 1 TB SATA hard disk drives (HDD) (4 per node)
• 4 × hot-swappable 800 GB [NX-6070] or 400 GB [NX-6050] solid state drives (SSD) (2 per node)
• 2 × 16 GB SATA DOM (1 per node)

Memory:
• 16 × 240-pin DIMM sockets per node (8 per CPU)
• 16 GB DIMMs: 1600/1333/1066 MHz DDR3 RDIMM, 1.5 V
• 16 × 16 GB = 256 GB [NX-6070] or 8 × 16 GB = 128 GB [NX-6050]

Network connections:
• 2 × 10 GbE SFP+ per node (both ports on the NIC)
• 2 × 1 GbE BASE-T RJ45 per node
• 1 × 10/100 BASE-T RJ45 per node

Expansion slots 2 (x8) PCIe 3.0 MicroLP (low profile) per node (both slots filled with a NIC and LSI SATA controller)

Fans 4 × 8 cm cooling fans

System Characteristics

Block weight: Standalone: 64.8 lbs. (29.4 kg); Package: 75.8 lbs. (34.4 kg)

Block form factor 2U rack-mount chassis

Block dimensions: Height: 3.5" (8.9 cm); Width: 17.3" (43.8 cm); Depth: 29.6" (75.2 cm)

Node weight 7.7 lbs. (3.5 kg)

Node dimensions: Width: 6.9" (17.5 cm); Length: 21.5" (54.6 cm)

Power and Electrical

Power supplies:
• 2 × redundant, hot-swappable, auto-ranging power supplies
• 80 PLUS Platinum Certified (180 - 240V only)
• 1000 W Output @ 100-120 V, 12.0-10 A, 50-60 Hz
• 1280 W Output @ 180-240 V, 10.5-8.0 A, 50-60 Hz

Operating Power Consumption:
• 1100 W Output @ 100-140 V, 13.5-9.5 A, 50-60 Hz
• 1400 W Output @ 180-240 V, 9.5-7.0 A, 50-60 Hz

Block power consumption (maximum): 1100 W (256 GB × 2 nodes)

Thermal dissipation (maximum) 3750 BTU/hr (256 GB × 2 nodes)


Operating Environment

Operating temperature 50° to 95° F (10° to 35° C)

Nonoperating temperature -40° to 158° F (-40° to 70° C)

Operating relative humidity 8% to 90% (non-condensing)

Nonoperating relative humidity 5% to 95% (non-condensing)

Field-Replaceable Unit List (NX-6000 Series)

Short reference description.

Image Description Part Number

Bezel, 2U, w/Lock and Key, SM X-BEZEL-SM-2U-2

Chassis, 2U, SM CSE-827HD, w/PSU & Fans X-CHASSIS-SM-2

HDD, SATA, 4TB, 3.5", w/Carrier X-HDD-SATA-4TB-3.5

SSD, SATA, 400GB, 3.5", w/Carrier X-SSD-SATA-400GB-3.5

SSD, SATA, 800GB, 3.5", w/Carrier X-SSD-SATA-800GB-3.5

Memory, 16GB, DDR3, RDIMM, Samsung X-MEM-16GB-S

Node, 256GB, NX-6050 or NX-6070 X-NODE-256GB-[6050|6070]

Node, 128GB, NX-6050 or NX-6070 X-NODE-128GB-[6050|6070]

System Packaging, SM X-PKG-SM-2

Power Supply, 1620W, NX-2000 series (NX-1000, NX-3050, NX-6000) X-PSU-1620-NX2000


Fan, 80x80x38mm, 11K, SM X-FAN-80-11K-SM

NX-6000 Node Fan (60x60x38mm, 13K, SM) X-FAN-60-13K-SM
Rail, 2U, NX-2000 series (also applies to NX-6000 series) X-RAIL-NX2000

Cable, 3m, SFP+ to SFP+ X-CBL-3M-SFP+-SFP+

Cable, 4m, SFP+ to SFP+ X-CBL-4M-SFP+-SFP+

Cable, 5m, SFP+ to SFP+ X-CBL-5M-SFP+-SFP+

NIC, 10 GbE, Dual SFP+, SMC Micro LP X-MEZZ-NIC-10G-SM