
NetApp HCI 1.7P1

Deployment Guide

February 2020 | 215-14523_2020-02_en-us | [email protected]


Contents

NetApp HCI overview
  About this guide
Requirements for NetApp HCI deployment
  Network and switch requirements
  Network ports used by NetApp HCI
  Network cable requirements
  Network configuration
    Required network segments
    Network configuration and cabling options
  IP address requirements
  Configuring LACP for optimal storage performance
  Configuring IPMI access for each node
    Manually assigning the IPMI port IP address
    Changing the default IPMI password for H410C and H410S nodes
    Changing the default IPMI password for H610C, H615C, and H610S nodes
  DNS and timekeeping requirements
  Environmental requirements
  Protection Domains
  Final preparations
Deploying NetApp HCI
  Accessing the NetApp Deployment Engine
    Accessing the NetApp Deployment Engine in environments without DHCP
    Accessing the NetApp Deployment Engine in environments with DHCP
    Manually assigning IP addresses to access the NetApp Deployment Engine
  Starting your deployment
  VMware vSphere configuration
    Configuring a new VMware vSphere environment
    Joining an existing VMware vSphere deployment
  Configuring credentials
  Selecting a network topology
  Inventory selection and node compatibility
    Selecting inventory
  Configuring network settings
  Reviewing and deploying the configuration
Post-deployment tasks
  Supported networking changes
  Keeping VMware vSphere up to date
  Installing GPU drivers for GPU-enabled compute nodes
Accessing NetApp Hybrid Cloud Control
Upgrading the entire NetApp HCI installation
  Migrating from management node 10.x to version 11.x
  Upgrading Element software to version 11.7 on the storage cluster
    Determining the Element version on the management node
    Using HealthTools to run pre-upgrade health checks
    Upgrading Element software
    Upgrading Element software at dark sites
  Upgrading the management node to version 11.7 from version 11.5
  Upgrading the management node to version 11.7 from version 11.3
  Upgrading the management node to version 11.7 from version 11.0 or 11.1
  Updating management services with NetApp Hybrid Cloud Control
  Updating compute node firmware
  Updating compute node drivers
Enabling cloud services on NetApp HCI
Expanding your NetApp HCI installation
  Expanding NetApp HCI storage resources
  Expanding NetApp HCI compute resources
  Expanding NetApp HCI storage and compute resources at the same time
Monitoring your NetApp HCI installation
Powering your NetApp HCI system off or on
  Powering off compute resources for a NetApp HCI system
  Powering off storage resources for a NetApp HCI system
  Powering on storage resources for a NetApp HCI system
  Powering on compute resources for a NetApp HCI system
Contacting NetApp Support
Where to find product documentation and other information
Copyright
Trademark
How to send comments about documentation and receive update notifications


NetApp HCI overview

NetApp HCI provides both storage and compute resources, combining them to build a VMware vSphere environment backed by the capabilities of NetApp Element software.

After successful deployment of NetApp HCI, you manage the system from the VMware vSphere Web Client. The NetApp Element Plug-in for vCenter Server (also referred to as the vCenter Plug-in, or VCP) is used to manage storage resources, including datastores, volumes, Quality of Service, storage cluster components, and data protection. Compute nodes appear as ESXi hosts and you can manage them in vSphere.

NetApp HCI events and faults are reported in vSphere as system alarms. The management node monitors the compute nodes, and appears as a virtual machine in the cluster after deployment. All storage and compute monitoring data can be collected by NetApp SolidFire Active IQ, which performs real-time monitoring of NetApp HCI.

About this guide

This guide introduces NetApp HCI and its architecture, and covers the information needed to prepare your environment for a deployment as well as the steps needed to complete deployment. It also gives best practices for network and switch configuration to enable you to get the most out of your system. You can use this guide while you quickly deploy and configure your fully racked and powered NetApp HCI installation using the NetApp Deployment Engine.

The following diagram outlines the basic steps you need to follow to deploy and configure NetApp HCI.


To skip to a specific step, use the following links:

• Requirements for NetApp HCI deployment

• Installing NetApp HCI hardware

• Deploying NetApp HCI

• Post-deployment tasks

• Accessing NetApp Hybrid Cloud Control


Requirements for NetApp HCI deployment

NetApp HCI has specific physical and network requirements for proper operation in your datacenter. Ensure that you implement the following requirements and recommendations before you begin deployment.

Before you receive your NetApp HCI hardware, ensure that you complete the checklist items in the pre-deployment workbook from NetApp Professional Services. This document contains a comprehensive list of tasks you need to complete to prepare your network and environment for a successful NetApp HCI deployment.

If you intend to enable cloud services later, see NetApp HCI cloud services information.

NetApp Documentation: NetApp HCI Services

Network and switch requirements

The switches you use for NetApp HCI require specific configuration to ensure a successful deployment. See your switch documentation for specific instructions on implementing each of the following requirements for your environment.

NetApp HCI deployment requires at least three network segments, one for each of the following types of traffic:

• Management

• VMware vMotion

• Storage/Data

Depending on the NetApp H-Series compute and storage node models and the planned cabling configuration, you can physically separate these networks using separate switches or logically separate them using VLANs. For most deployments, however, you need to logically separate these networks (and any other additional virtual machine networks) using VLANs.

Compute and storage nodes need to be able to communicate before, during, and after deployment. If you are implementing separate management networks for storage and compute nodes, ensure that these management networks have network routes between them. These networks must have gateways assigned, and there must be a route between the gateways. Ensure that each new node has a gateway assigned to facilitate communication between nodes and management networks.

NetApp HCI has the following switch requirements:

• All switch ports connected to NetApp HCI nodes must be configured as spanning tree edge ports.

◦ On Cisco switches, depending on the switch model, software version, and port type, you can do this with one of the following commands:

⁃ spanning-tree port type edge

⁃ spanning-tree port type edge trunk

⁃ spanning-tree portfast

⁃ spanning-tree portfast trunk

◦ On Mellanox switches, you can do this with the spanning-tree port type edge command.


• NetApp HCI nodes have redundant ports for all network functions except out-of-band management. For the best resiliency, divide these ports across two switches with redundant uplinks to either a traditional hierarchical architecture or a layer 2 spine-and-leaf architecture.

• The switches handling storage, virtual machine, and vMotion traffic must support speeds of at least 10GbE per port (up to 25GbE per port is supported).

• The switches handling management traffic must support speeds of at least 1GbE per port.

• You must configure jumbo frames on the switch ports handling storage and vMotion traffic. Hosts must be able to send 9000 byte packets end-to-end for a successful installation.

• Round-trip network latency between all storage and compute nodes should not exceed 2ms.

All NetApp HCI nodes provide additional out-of-band management capabilities via a dedicated management port. NetApp H300S, H300E, H500S, H500E, H700S, H700E, and H410C nodes also allow for IPMI access via Port A. As a best practice, you should ease remote management of NetApp HCI by configuring out-of-band management for all nodes in your environment.

Note: The management network of the management node needs a configuration that includes a gateway address to be able to communicate with NetApp Cloud Services. Enabling NetApp cloud services will fail if a valid gateway address is not in place.

Network ports used by NetApp HCI

You might need to allow the following ports through your datacenter's edge firewall so that you can manage the system remotely and allow clients outside of your datacenter to connect to resources. Some of these ports might not be required, depending on how you use the system. All ports are TCP unless stated otherwise, and should be open bidirectionally.

The following abbreviations are used in the table:

• MIP: Management IP address, a per-node address

• SIP: Storage IP address, a per-node address

• MVIP: Management virtual IP address

• SVIP: Storage virtual IP address

Source | Destination | Port | Description
iSCSI clients | Storage cluster MVIP | 443 | (Optional) UI and API access
iSCSI clients | Storage cluster SVIP | 3260 | Client iSCSI communications
iSCSI clients | Storage node SIP | 3260 | Client iSCSI communications
Management node | sfsupport.solidfire.com | 22 | Reverse SSH tunnel for support access
Management node | Storage node MIP | 22 | SSH access for support
Management node | DNS servers | 53 TCP/UDP | DNS lookup
Management node | Storage node MIP | 442 | UI and API access to storage node and Element software upgrades
Management node | Online software repository (https://repo.netapp.com/bintray/api/package and https://netapp-downloads.bintray.com) | 443 | Management node service upgrades
Management node | monitoring.solidfire.com | 443 | Storage cluster reporting to Active IQ
Management node | Storage cluster MVIP | 443 | UI and API access to storage node and Element software upgrades
Management node | connect.pub.nks.cloud | 443 | Provides secure communications between NKS Cloud Providers and the hosted NKS Service; for example, when NKS is deployed on NetApp HCI or VMware, on-premises traffic makes use of this Northbound MTLS secured channel
Management node | api.nks.netapp.io | 443 | Facilitates initial deployment-time registration of on-premises "regions"
Management node | repo.netapp.com | 443 | Provides access to components necessary to install/update on-premises deployments
34.208.181.140, 34.217.162.31, 54.187.65.159, 18.236.231.155 | Management node | 443 | HTTPS (Kubernetes cluster security)
34.208.181.140, 34.217.162.31, 54.187.65.159, 18.236.231.155 | Management node | 6443 | Kubernetes API (Kubernetes cluster security)
34.208.181.140, 34.217.162.31, 54.187.65.159, 18.236.231.155 | Management node | 12443 | Proxy to dashboard (Kubernetes cluster security)
34.208.181.140, 34.217.162.31, 54.187.65.159, 18.236.231.155 | Management node | 22 | Kubernetes upgrades and other tasks (Kubernetes cluster security)
Management node | amazonaws.com | 443 | Dispatch tunnel
SNMP server | Storage cluster MVIP | 161 UDP | SNMP polling
SNMP server | Storage node MIP | 161 UDP | SNMP polling
Storage node MIP | DNS servers | 53 TCP/UDP | DNS lookup
Storage node MIP | Management node | 80 | Element software upgrades
Storage node MIP | S3/Swift endpoint | 80 | (Optional) HTTP communication to S3/Swift endpoint for backup and recovery
Storage node MIP | NTP server | 123 UDP | NTP
Storage node MIP | Management node | 162 UDP | (Optional) SNMP traps
Storage node MIP | SNMP server | 162 UDP | (Optional) SNMP traps
Storage node MIP | LDAP server | 389 TCP/UDP | (Optional) LDAP lookup
Storage node MIP | Remote storage cluster MVIP | 443 | Remote replication cluster pairing communication
Storage node MIP | Remote storage node MIP | 443 | Remote replication cluster pairing communication
Storage node MIP | S3/Swift endpoint | 443 | (Optional) HTTPS communication to S3/Swift endpoint for backup and recovery
Storage node MIP | Management node | 10514 TCP/UDP, 514 TCP/UDP | Syslog forwarding
Storage node MIP | Syslog server | 10514 TCP/UDP, 514 TCP/UDP | Syslog forwarding
Storage node MIP | LDAPS server | 636 TCP/UDP | LDAPS lookup
Storage node MIP | Remote storage node MIP | 2181 | Intercluster communication for remote replication
Storage node SIP | S3/Swift endpoint | 80 | (Optional) HTTP communication to S3/Swift endpoint for backup and recovery
Storage node SIP | S3/Swift endpoint | 443 | (Optional) HTTPS communication to S3/Swift endpoint for backup and recovery
Storage node SIP | Remote storage node SIP | 2181 | Intercluster communication for remote replication
Storage node SIP | Storage node SIP | 3260 | Internode iSCSI
Storage node SIP | Remote storage node SIP | 4000 through 4020 | Remote replication node-to-node data transfer
Storage node SIP | Compute node SIP | 442 | Compute node API, configuration and validation, and access to software inventory
System administrator PC | Storage node MIP | 80 | (NetApp HCI only) Landing page of NetApp Deployment Engine
System administrator PC | Management node | 442 | HTTPS UI access to management node
System administrator PC | Storage node MIP | 442 | HTTPS UI and API access to storage node; (NetApp HCI only) configuration and deployment monitoring in NetApp Deployment Engine
System administrator PC | Management node | 443 | HTTPS UI and API access to management node
System administrator PC | Storage cluster MVIP | 443 | HTTPS UI and API access to storage cluster
System administrator PC | Storage node MIP | 443 | HTTPS storage cluster creation, post-deployment UI access to storage cluster
vCenter Server | Storage cluster MVIP | 443 | vCenter Plug-in API access
vCenter Server | Management node | 8443 | (Optional) vCenter Plug-in QoSSIOC service
vCenter Server | Storage cluster MVIP | 8444 | vCenter VASA provider access (VVols only)
vCenter Server | Management node | 9443 | vCenter Plug-in registration; the port can be closed after registration is complete
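A quick way to spot firewall gaps before deployment is to test a handful of these ports from a management workstation. The following is a minimal sketch (not part of the official procedure) using Python's standard socket module; the addresses are placeholders for the MVIP, management node, and SVIP planned for your environment, and a failed check can also simply mean the target service is not running yet.

import socket

# Hypothetical addresses; replace with the values planned for your deployment.
CHECKS = [
    ("10.100.0.50", 443, "Storage cluster MVIP - HTTPS UI/API"),
    ("10.100.0.60", 442, "Management node - HTTPS UI"),
    ("10.105.0.50", 3260, "Storage cluster SVIP - iSCSI"),
]

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port, label in CHECKS:
    state = "open" if tcp_port_open(host, port) else "unreachable or closed"
    print(f"{label}: {host}:{port} is {state}")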

Network cable requirements

You can use the following guidelines to ensure that you have enough of the right type of network cables for the size of your deployment. For RJ45 ports, you must use Cat 5e or Cat 6 rated cables.

• Two-cable compute node configuration: Each compute node must be connected to a 10/25GbE network via two SFP+/SFP28 interfaces (one additional Cat 5e/6 cable is optional for out-of-band management).

• Six-cable compute node configuration: Each compute node must be connected to a 10/25GbE network via four SFP+/SFP28 interfaces and to a 1/10GbE network via two Cat 5e/6 cables (one additional Cat 5e/6 cable is optional for out-of-band management).

• Each storage node must be connected to a 10/25GbE network via two SFP+/SFP28 interfaces and to a 1/10GbE network via two Cat 5e/6 cables (one additional Cat 5e/6 cable is optional for out-of-band management).


• Ensure the network cables you use to connect the NetApp HCI system to your network are long enough to comfortably reach your switches.

For example, a deployment containing four storage nodes and three compute nodes (using the six-cable configuration) requires the following number of network cables:

• (14) Cat 5e/6 cables with RJ45 connectors (plus seven cables for IPMI traffic, if desired)

• (20) Twinax cables with SFP28/SFP+ connectors

This is due to the following reasons:

• Four storage nodes require eight (8) Cat 5e/6 cables and eight (8) Twinax cables.

• Three compute nodes using the six-cable configuration require six (6) Cat 5e/6 cables and twelve (12) Twinax cables.
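The arithmetic above generalizes easily. The sketch below (an illustration, not from the guide) computes cable counts for the six-cable compute node and four-cable storage node configuration, using the per-node counts stated in this section; the node quantities are example inputs.

def cable_counts(storage_nodes, compute_nodes, include_ipmi=True):
    # Per-node counts from this section: storage nodes use 2 RJ45 + 2 Twinax cables,
    # six-cable compute nodes use 2 RJ45 + 4 Twinax, plus one optional IPMI RJ45 per node.
    rj45 = storage_nodes * 2 + compute_nodes * 2
    twinax = storage_nodes * 2 + compute_nodes * 4
    ipmi = (storage_nodes + compute_nodes) if include_ipmi else 0
    return rj45, twinax, ipmi

rj45, twinax, ipmi = cable_counts(storage_nodes=4, compute_nodes=3)
print(f"Cat 5e/6 (RJ45): {rj45}, Twinax (SFP28/SFP+): {twinax}, optional IPMI RJ45: {ipmi}")
# For four storage and three compute nodes this prints 14, 20, and 7, matching the example above.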

Network configuration

NetApp HCI can use several different network cabling and VLAN configurations. It is important to plan your network configuration to ensure a successful deployment.

Required network segments

NetApp HCI requires a minimum of three network segments: management, storage, and virtualization traffic (which includes virtual machines and vMotion traffic). You can also separate virtual machine and vMotion traffic. These network segments usually exist as logically separated VLANs in the NetApp HCI network infrastructure.

How compute and storage nodes connect to these networks depends on how you design the network and cable the nodes. The sample network illustrations in this guide assume the following networks:

Network name | VLAN ID
Management | 100
Storage | 105
vMotion | 107
Virtual machines | 200, 201

For automatic discovery and configuration of your NetApp HCI nodes in the NetApp Deployment Engine, you must have a network segment that is available as an untagged or native VLAN on all switch ports that are used for the SFP+/SFP28 interfaces on the nodes. This will provide layer 2 communication between all nodes for discovery and deployment. Without a native VLAN, you must configure the SFP+/SFP28 interfaces of all nodes manually with a VLAN and IPv4 address to be discoverable. In the network configuration examples in this document, the management network (VLAN ID 100) is used for this purpose.

The NetApp Deployment Engine enables you to quickly configure networks for compute and storage nodes during the initial deployment. You can place certain built-in management components such as vCenter and the management node on their own network segment if needed. These network segments require routing to allow vCenter and the management node to communicate with storage and compute management networks. In most deployments those components use the same management network (in this example, VLAN ID 100).

Note: You configure virtual machine networks using vCenter. The default virtual machine network (port group "VM_Network") in NetApp HCI deployments is configured without a VLAN ID. If you plan to use multiple tagged virtual machine networks (VLAN IDs 200 and 201 in the example above), be sure to include them in the initial network planning.

Network configuration and cabling options

You can use a two-cable network configuration for the H300E, H500E, H700E, and H410C compute nodes, simplifying cable routing. This configuration uses two SFP+/SFP28 interfaces plus an optional (but recommended) RJ45 interface for IPMI communication. These nodes can also use a six-cable configuration with two RJ45 and four SFP28/SFP+ interfaces.

All storage nodes support a network topology that uses four network ports; this topology uses ports A through D on H300S, H500S, H700S, and H610S nodes. Compute nodes support three network topologies:

Configuration option | Cabling and description
Option A | Two-cable configuration using ports D and E (two SFP28/SFP+ interfaces) on H300E, H500E, H700E, and H410C nodes
Option B | Six-cable configuration using ports A through F (two RJ45 and four SFP28/SFP+ interfaces) on H300E, H500E, H700E, and H410C nodes
Option C | Cable configuration similar to option B, but using native VLANs (or "access ports") on the switch for the management, storage, and vMotion networks

Nodes that do not have the correct number of connected cables cannot participate in the deployment. For example, you cannot deploy a compute node in a six-cable configuration if it only has ports D and E connected.

Note: You can adjust the NetApp HCI network configuration after deployment to meet infrastructure needs. However, when you expand NetApp HCI resources, remember that new nodes must have the same cable configuration as the existing compute and storage nodes.

Related references

Supported networking changes on page 36

Configuration option A: Two cables for compute nodes

The NetApp H300E, H500E, H700E, H410C, and H610C compute nodes support using two network cables for connectivity to all NetApp HCI networks. This configuration requires that the storage, vMotion, and any virtual machine networks use VLAN tagging. All compute and storage nodes must use the same VLAN ID scheme. This configuration uses vSphere Distributed Switches, which require VMware vSphere Enterprise Plus licensing.

VLAN configuration

As a best practice, you should configure the required network segments on all switch ports that the nodes are using. For example:

Network name | VLAN ID | Switch port configuration
Management | 100 | Native
Storage | 105 | Tagged
vMotion | 107 | Tagged
Virtual machines | 200, 201 | Tagged


The following illustration shows the recommended cabling configuration for two-cable H300E, H500E, H700E, and H410C compute nodes and four-cable H300S, H500S, H700S, and H410S storage nodes. All switch ports in this example share the same configuration.

(Illustration: two-cable compute node and four-cable storage node port cabling. Legend: Management, Storage, vMotion, All Networks.)

Example switch commands

You can use the following example commands to configure all switch ports used for NetApp HCI nodes. These commands are based on a Cisco configuration, but might require only small changes to apply to Mellanox switches. See your switch documentation for the specific commands you need to implement this configuration. Replace the interface name, description, and VLANs with the values for your environment.

interface {interface name, such as EthernetX/Y or GigabitEthernetX/Y/Z}
description {desired description, such as NetApp-HCI-NodeX-PortY}
mtu 9216
switchport mode trunk
switchport trunk native vlan 100
switchport trunk allowed vlan 105,107,200,201
spanning-tree port type edge trunk

Note: Some switches might require inclusion of the native VLAN in the allowed VLAN list. See the documentation for your specific switch model and software version.

Configuration option B: Six cables for compute nodes

As a secondary network configuration option, the H300E, H500E, H700E, and H410C compute nodes support using six network cables for connectivity to all NetApp HCI networks. This configuration requires that the storage, vMotion, and any virtual machine networks use VLAN tagging. You can use this configuration with vSphere Standard Switches or vSphere Distributed Switches (which require VMware vSphere Enterprise Plus licensing).

VLAN configuration

When you deploy compute nodes using six cables and storage nodes using four cables, as a best practice, you should configure the required network segments on all switch ports that the nodes are using. For example:


Network name | VLAN ID | Switch port configuration
Management | 100 | Native
Storage | 105 | Tagged
vMotion | 107 | Tagged
Virtual machines | 200, 201 | Tagged

The following illustration shows the recommended cabling configuration for six-cable compute nodes and four-cable storage nodes. All switch ports in this example share the same configuration.

(Illustration: six-cable compute node and four-cable storage node port cabling. Legend: Management, Storage, vMotion, All Networks.)

Example switch commands

You can use the following example commands to configure all switch ports used for NetApp HCI nodes. These commands are based on a Cisco configuration, but might require only small changes to apply to Mellanox switches. See your switch documentation for the specific commands you need to implement this configuration. Replace the interface name, description, and VLANs with the values for your environment.

interface {interface name, such as EthernetX/Y or GigabitEthernetX/Y/Z}
description {desired description, such as NetApp-HCI-NodeX-PortY}
mtu 9216
switchport mode trunk
switchport trunk native vlan 100
switchport trunk allowed vlan 105,107,200,201
spanning-tree port type edge trunk

Note: Some switches might require inclusion of the native VLAN in the allowed VLAN list. See the documentation for your specific switch model and software version.

Configuration option C: Six cables for compute nodes with native VLANs

You can deploy NetApp HCI without using tagged VLANs for storage and virtualization traffic, and instead rely on the switch configuration to separate the network segments. You can use this configuration with vSphere Standard Switches or vSphere Distributed Switches (which require VMware vSphere Enterprise Plus licensing).

VLAN configuration

This topology option uses the following VLAN configuration:

Node ports used | Network name | VLAN ID | Connected switch port configuration
Ports A and B on compute and storage nodes | Management | 100 | Native
Ports D and E on compute nodes | Storage | 105 | Native
Ports C and D on storage nodes | Storage | 105 | Native
Ports C and F on compute nodes | vMotion | 107 | Native
Ports C and F on compute nodes | Virtual machines | 200, 201 | Tagged

Caution: Deploying this configuration requires careful switch port configuration. Configuration errors in this network topology can result in deployment problems that are difficult to diagnose.

The following illustration shows the network configuration overview for this topology option. In the example, individual switch ports are configured with the appropriate network segment as the native network.

(Illustration: network configuration overview for the native VLAN topology, showing compute node and storage node port connections. Legend: Management, Storage, vMotion, All Networks.)

Example switch commands

You can use the following example switch commands to configure switch ports used for the NetApp HCI nodes. These commands are based on a Cisco configuration, but might require only minimal changes to apply to Mellanox switches. See your switch documentation for the specific commands you need to implement this configuration.

16 | NetApp HCI Deployment Guide

Page 17: NetApp HCI 1docs.netapp.com/hci/topic/com.netapp.doc.hci-ude-17P1...2018/09/01  · • NetApp HCI nodes have redundant ports for all network functions except out-of-band management

You can use the following example commands to configure the switch ports used for the management network. Replace the interface name, description, and VLANs with the values for your configuration.

interface {interface name, such as EthernetX/Y or GigabitEthernetX/Y/Z}
description {desired description, such as NetApp-HCI-NodeX-PortA|B}
switchport access vlan 100
spanning-tree port type edge

You can use the following example commands to configure the switch ports used for the storage network. Replace the interface name, description, and VLANs with the values for your configuration.

interface {interface name, such as EthernetX/Y or GigabitEthernetX/Y/Z}
description {desired description, such as NetApp-HCI-NodeX-PortC|D}
mtu 9216
switchport access vlan 105
spanning-tree port type edge

You can use the following example commands to configure the switch ports used for the vMotion and virtual machines network. Replace the interface name, description, and VLANs with the values for your configuration.

interface {interface name, such as EthernetX/Y or GigabitEthernetX/Y/Z}
description {desired description, such as NetApp-HCI-NodeX-PortC|F}
mtu 9216
switchport mode trunk
switchport trunk native vlan 107
switchport trunk allowed vlan 200,201
spanning-tree port type edge trunk

Note: Some switches might require inclusion of the native VLAN in the allowed VLAN list. See the documentation for your specific switch model and software version.

IP address requirements

NetApp HCI has specific IP address requirements that depend on the size of your deployment. Note that by default the initial IP addresses you assign to each node before using the NetApp Deployment Engine to deploy the system are temporary and cannot be reused. You need to set aside a second permanent set of unused IP addresses that you can assign during final deployment.

Number of IP addresses needed per NetApp HCI deployment

It is best if the storage network and the management network each use separate contiguous ranges of IP addresses. Use the following table to determine how many IP addresses you need for your NetApp HCI deployment:

System component | Management network IP addresses needed | Storage network IP addresses needed | vMotion network IP addresses needed | Total IP addresses needed per component
Compute node | 1 | 2 | 1 | 4
Storage node | 1 | 1 | - | 2
Storage cluster | 1 | 1 | - | 2
VMware vCenter | 1 | - | - | 1
Management node | 1 | 1 | - | 2
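For planning purposes, the per-component counts in this table can be rolled up with a short script. The sketch below is illustrative only; it counts the permanent addresses from the table (not the temporary initial addresses mentioned above), and the node quantities are example inputs.

# (management, storage, vMotion) IP addresses needed per component, from the table above.
PER_COMPONENT = {
    "compute node": (1, 2, 1),
    "storage node": (1, 1, 0),
    "storage cluster": (1, 1, 0),
    "VMware vCenter": (1, 0, 0),
    "management node": (1, 1, 0),
}

def ip_requirements(compute_nodes, storage_nodes):
    quantities = {
        "compute node": compute_nodes,
        "storage node": storage_nodes,
        "storage cluster": 1,
        "VMware vCenter": 1,
        "management node": 1,
    }
    totals = [0, 0, 0]
    for component, qty in quantities.items():
        for i, per_unit in enumerate(PER_COMPONENT[component]):
            totals[i] += per_unit * qty
    return totals

mgmt, storage, vmotion = ip_requirements(compute_nodes=3, storage_nodes=4)
print(f"Management: {mgmt}, Storage: {storage}, vMotion: {vmotion}")
# For three compute and four storage nodes this prints 10 management, 12 storage, and 3 vMotion addresses.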

IP addresses reserved by NetApp HCI

NetApp HCI reserves the following IP address ranges for system components. When planning your network, avoid using these IP addresses:

IP address range | Description
10.0.0.0/24 | Docker overlay network
10.0.1.0/24 | Docker overlay network
10.255.0.0/16 | Docker swarm ingress network
169.254.100.1/22 | Docker bridge network
169.254.104.0/22 | Docker bridge network
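Before finalizing your IP plan, it can be worth confirming that none of the planned addresses fall inside these reserved ranges. A sketch using Python's standard ipaddress module follows; the planned addresses shown are placeholders.

import ipaddress

# Reserved ranges from the table above. strict=False is used because 169.254.100.1/22
# is written with host bits set.
RESERVED = [ipaddress.ip_network(net, strict=False) for net in (
    "10.0.0.0/24", "10.0.1.0/24", "10.255.0.0/16",
    "169.254.100.1/22", "169.254.104.0/22",
)]

def conflicts(planned_addresses):
    """Yield (address, reserved_network) pairs for planned addresses inside a reserved range."""
    for addr in planned_addresses:
        ip = ipaddress.ip_address(addr)
        for net in RESERVED:
            if ip in net:
                yield addr, net

planned = ["10.100.0.50", "10.0.0.12", "192.168.50.4"]  # placeholder plan
for addr, net in conflicts(planned):
    print(f"WARNING: {addr} falls inside reserved range {net}")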

Configuring LACP for optimal storage performance

For optimal NetApp HCI storage cluster performance, you should configure Link Aggregation Control Protocol (LACP) on the switch ports used for each of the storage nodes.

Before you begin

• You have configured the switch ports connected to the 10/25GbE interfaces of NetApp HCI storage nodes as LACP port channels.

• You have set the LACP timers on the switches handling storage traffic to “fast mode (1s)” for optimal failover detection time. During deployment, the Bond1G interfaces on all storage nodes are automatically configured for active/passive mode.

• You have configured Cisco Virtual Port Channel (vPC) or the equivalent switch stacking technology for the switches handling the storage network. Switch stacking technology eases configuration of LACP and port channels, and provides a loop-free topology between switches and the 10/25GbE ports on the storage nodes.

Steps

1. Follow your switch vendor's recommendations for enabling LACP on the switch ports used for NetApp H-series storage nodes.

2. Change the bond mode on all storage nodes to LACP in the on-node user interface (also known as the terminal user interface, or TUI) before you deploy NetApp HCI.


Configuring IPMI access for each node

You can configure Intelligent Platform Management Interface (IPMI) access for each node by assigning each IPMI port an IP address, and changing the default IPMI administrator password as soon as you have remote IPMI access to the node.

Related tasks

Changing the default IPMI password for H410C and H410S nodes on page 20

Manually assigning the IPMI port IP address

Dynamic Host Configuration Protocol (DHCP) is enabled for the IPMI port of each NetApp HCI node. If your IPMI network does not use DHCP, you can manually assign a static IPv4 address to the IPMI port.

Before you begin

You have a keyboard, video, mouse (KVM) device, or a monitor and keyboard, that you can use to access the BIOS of each node.

About this task

Use the arrow keys to navigate in the BIOS. Select a tab or option by pressing Enter. Go back to previous screens by pressing ESC.

Steps

1. Power on the node.

2. Upon booting, enter the BIOS by pressing the Del key.

3. Select the IPMI tab.

4. Select BMC Network Configuration and press Enter.

5. Choose Yes and press Enter.

6. Select Configuration Address Source and press Enter.

7. Choose Static and press Enter.

8. Select Station IP address and enter a new IP address for the IPMI port. Press Enter when finished.

9. Select Subnet mask and enter a new subnet mask for the IPMI port. Press Enter when finished.

10. Select Gateway IP address and enter a new gateway IP address for the IPMI port. Press Enter when finished.

11. Connect one end of an Ethernet cable to the IPMI port and the other end to a switch.

The IPMI port for this node is ready to use.

12. Repeat this procedure for any other NetApp HCI nodes with IPMI ports that are not configured.


Changing the default IPMI password for H410C and H410S nodes

You should change the default password for the IPMI administrator account on each compute and storage node as soon as you configure the IPMI network port.

Before you begin

You have configured the IPMI IP address for each compute and storage node.

Steps

1. Open a web browser on a computer that can reach the IPMI network and browse to the IPMI IP address for the node.

2. Enter the user name ADMIN and password ADMIN in the login prompt.

3. Upon logging in, click the Configuration tab.

4. Click Users.

5. Select the ADMIN user and click Modify User.

6. Select the Change Password check box.

7. Enter a new password in the Password and Confirm Password fields.

8. Click Modify, and then click OK.

9. Repeat this procedure for any other NetApp HCI H410C and H410S nodes with default IPMI passwords.

Related tasks

Configuring IPMI access for each node on page 19

Changing the default IPMI password for H610C, H615C, and H610S nodes

You should change the default password for the IPMI administrator account on each compute and storage node as soon as you configure the IPMI network port.

Before you begin

You have configured the IPMI IP address for each compute and storage node.

Steps

1. Open a web browser on a computer that can reach the IPMI network and browse to the IPMI IP address for the node.

2. Enter the user name root and password calvin in the login prompt.

3. Upon logging in, click the menu navigation icon at the top left of the page to open the sidebar drawer.

4. Click Settings.

5. Click User Management.

6. Select the Administrator user from the list.

7. Enable the Change Password check box.


8. Enter a new, strong password in the Password and Confirm Password fields.

9. Click Save at the bottom of the page.

10. Repeat this procedure for any other NetApp HCI H610C, H615C, or H610S nodes with default IPMI passwords.

DNS and timekeeping requirements

Before deployment, you need to prepare Domain Name System (DNS) records for your NetApp HCI system and gather NTP server information. NetApp HCI requires a DNS server with the correct DNS entries and an NTP server for a successful deployment.

Make the following DNS and timeserver preparations before deploying NetApp HCI:

• Create any needed DNS entries for hosts (such as individual compute or storage nodes) and document how the host entries map to the respective IP addresses. During deployment, you will need to assign a prefix to your storage cluster that will be applied to each host; to avoid confusion, keep your DNS naming plans in mind when choosing a prefix.

• If you are deploying NetApp HCI with a new VMware vSphere installation using a fully qualified domain name, you must create one Pointer (PTR) record and one Address (A) record for vCenter Server on any DNS servers in use before deployment (a lookup check is sketched after this list).

• If you are deploying NetApp HCI with a new vSphere installation using only IP addresses, you do not need to create new DNS records for vCenter.

• NetApp HCI requires a valid NTP server for timekeeping. You can use a publicly available time server if you do not have one in your environment.
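The following pre-flight sketch (an assumption-labeled illustration, not an official step) uses only the Python standard library to confirm that the planned vCenter Server name has working forward (A) and reverse (PTR) records and that an NTP server responds. The host name and NTP server are placeholders for your environment.

import socket
import struct
import time

VCENTER_FQDN = "vcenter.example.com"   # placeholder: your planned vCenter FQDN
NTP_SERVER = "pool.ntp.org"            # placeholder: your NTP server

# Forward (A) and reverse (PTR) lookups for the vCenter name.
address = socket.gethostbyname(VCENTER_FQDN)
reverse_name, _, _ = socket.gethostbyaddr(address)
print(f"A record: {VCENTER_FQDN} -> {address}")
print(f"PTR record: {address} -> {reverse_name}")

# Minimal SNTP query: 48-byte request with mode 3 (client), then read the transmit timestamp.
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01
request = b"\x1b" + 47 * b"\x00"
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, (NTP_SERVER, 123))
    response, _ = sock.recvfrom(48)
server_seconds = struct.unpack("!I", response[40:44])[0] - NTP_EPOCH_OFFSET
print(f"NTP server time: {time.ctime(server_seconds)} (local time: {time.ctime()})")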

Environmental requirements

Ensure that the power for the rack used to install NetApp HCI is supplied by AC power outlets, and that your datacenter provides adequate cooling for the size of your NetApp HCI installation.

For detailed capabilities of each component of NetApp HCI, see the NetApp HCI datasheet.

Note: The H410C compute node operates only on high-line voltage (200-240 VAC). You must ensure that the power requirements are met when you add H410C nodes to an existing NetApp HCI installation.

Protection Domains

NetApp Element software supports Protection Domains functionality, which optimizes data layout on storage nodes for the best data availability. To use this feature, you should split storage capacity evenly across three or more NetApp H-series chassis for optimal storage reliability. In this scenario, the storage cluster automatically enables Protection Domains.

Related information

Managing storage with Element software


Final preparations

You must complete the following final items before you deploy NetApp HCI.

• Gather all relevant information about your network, current or planned VMware infrastructure, and planned user credentials.

• Rack, cable, and power on the NetApp HCI installation.

• Ensure that storage capacity is split evenly across all chassis containing storage nodes. This ensures the best storage reliability possible.


Deploying NetApp HCI

The NetApp Deployment Engine enables you to quickly deploy NetApp HCI. During deployment, you can let the NetApp Deployment Engine automatically set many of the networking configuration details for you. After deployment, NetApp HCI will be ready to serve highly available compute and storage resources in a production environment.

Before you begin

You have ensured that all compute and storage nodes that will be part of the initial deployment are running the same versions of Element software (for storage nodes) and Bootstrap OS (for compute nodes).

If you are joining an existing VMware vSphere environment, you have completely logged out of vCenter Server.

Steps

1. Accessing the NetApp Deployment Engine on page 23

2. Starting your deployment on page 26

3. VMware vSphere configuration on page 27

4. Configuring credentials on page 28

5. Selecting a network topology on page 29

6. Inventory selection and node compatibility on page 30

7. Configuring network settings on page 31

8. Reviewing and deploying the configuration on page 34

Accessing the NetApp Deployment Engine

To deploy NetApp HCI, you need to access the NetApp Deployment Engine on one of the NetApp H-Series storage nodes via the IPv4 address assigned to the Bond1G interface, which is the logical interface that combines ports A and B for storage nodes. This storage node becomes the controlling storage node for the deployment process. Depending on your environment, you need to either configure the IPv4 address or retrieve it from one of the storage nodes.

Note: You can only access the NetApp Deployment Engine using the Bond1G interface of a storage node. Using the Bond10G interface, the logical interface that combines ports C and D for storage nodes, is not supported.

Use the method that best describes your network environment to access the NetApp Deployment Engine.

Scenario | Method
You do not have DHCP in your environment | Accessing the NetApp Deployment Engine in environments without DHCP
You have DHCP in your environment | Accessing the NetApp Deployment Engine in environments with DHCP
You want to assign all IP addresses manually | Manually assigning IP addresses to access the NetApp Deployment Engine


Accessing the NetApp Deployment Engine in environments without DHCP

When DHCP is not in use on the network, you need to set a static IPv4 address on the Bond1G interface of one of the storage nodes (also known as a controlling storage node) that you will use to access the NetApp Deployment Engine. The NetApp Deployment Engine on the controlling storage node will discover and communicate with other compute and storage nodes using IPv4 addresses that have been auto-configured on the Bond10G interfaces of all nodes. You should use this method unless your network has special requirements.

Before you begin

• You or your network administrator have completed the tasks in the Installation and Setup Instructions document.

• You have physical access to the NetApp HCI nodes.

• All of the NetApp HCI nodes are powered on.

• DHCP is not enabled for the NetApp HCI networks and the NetApp HCI nodes have not obtained IP addresses from DHCP servers.

• The NetApp HCI management network is configured as the native VLAN on the Bond1G and Bond10G interfaces of all nodes.

Steps

1. Plug a KVM into the back of one of the NetApp HCI storage nodes (this node will become the controlling storage node).

2. Configure the IP address, subnet mask, and gateway address for Bond1G in the user interface. You can also configure a VLAN ID for the Bond1G network if needed.

Note: You cannot reuse this IPv4 address later during deployment with the NetApp Deployment Engine.

3. Open a web browser on a computer that can access the NetApp HCI management network.

4. Browse to the IP address you assigned to the controlling storage node. For example:

http://<Bond1G IP address>

This takes you to the NetApp Deployment Engine user interface.
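As an optional sanity check (an assumption, not a documented step), you can confirm from the same computer that the controlling storage node is answering on HTTP before opening the browser; the address below is a placeholder for the Bond1G address you assigned.

from urllib.request import urlopen
from urllib.error import URLError

BOND1G_IP = "10.100.0.240"  # placeholder: the static address assigned to the controlling node

try:
    with urlopen(f"http://{BOND1G_IP}/", timeout=10) as response:
        print(f"NetApp Deployment Engine responded with HTTP {response.status}")
except URLError as exc:
    print(f"No response from {BOND1G_IP}: {exc}")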

Related information

NetApp HCI Compute and Storage Nodes Installation and Setup Instructions

Accessing the NetApp Deployment Engine in environments with DHCP

In environments where servers automatically acquire IPv4 configuration from DHCP, you can access the NetApp Deployment Engine using the IPv4 address assigned to the Bond1G interface on one of the storage nodes. You can use a USB stick to retrieve the IPv4 address from one of the storage nodes. The NetApp Deployment Engine will automatically discover other compute and storage nodes that use DHCP-assigned IPv4 addresses. You should not use this method unless your network has special requirements.

Before you begin

• You or your network administrator have completed the tasks in the Installation and Setup Instructions document.

• You have physical access to the NetApp HCI nodes.

• All of the NetApp HCI nodes are powered on.

• DHCP is enabled on the NetApp HCI management and storage networks.

• The DHCP address pool is large enough to accommodate two IPv4 addresses per NetApp HCI node.

Note: For the NetApp HCI deployment to succeed, all nodes in the deployment must either have DHCP-acquired or auto-configured IPv4 addresses (you cannot mix IPv4 address assignment methods).

About this task

If DHCP is in use only for the storage network (Bond10G interfaces), you should use the steps outlined in Accessing the NetApp Deployment Engine in environments without DHCP on page 24 to access the NetApp Deployment Engine.

Steps

1. Wait several minutes for the nodes to request IP addresses.

2. Choose a storage node and insert a USB stick into the node. Leave it in for at least five seconds.

3. Remove the USB stick, and insert it into your computer.

4. Open the readme.html file. This takes you to the NetApp Deployment Engine user interface.

Related information

NetApp HCI Compute and Storage Nodes Installation and Setup Instructions

Manually assigning IP addresses to access the NetApp Deployment Engine

You can manually assign static IPv4 addresses to the Bond1G and Bond10G interfaces on all NetApp HCI nodes to access the NetApp Deployment Engine and deploy NetApp HCI. You should not use this method unless your network has special requirements.

Before you begin

Attention: All IP addresses you assign manually before using the NetApp Deployment Engine to deploy the system are temporary and cannot be reused. If you choose to manually assign IP addresses, you need to set aside a second permanent set of unused IP addresses that you can assign during final deployment.

• You or your network administrator have completed the tasks in the Installation and Setup Instructions document.

• You have physical access to the NetApp HCI nodes.

• All of the NetApp HCI nodes are powered on.


• DHCP is not enabled for the NetApp HCI networks and the NetApp HCI nodes have not obtained IP addresses from DHCP servers.

About this task

In this configuration, compute and storage nodes will use static IPv4 addresses to discover and communicate with other nodes during deployment. This configuration is not recommended.

Steps

1. Plug a KVM into the back of one of the NetApp HCI storage nodes (this node will become the controlling storage node).

2. Configure the IP address, subnet mask, and gateway address for Bond1G and Bond10G in the user interface. You can also configure a VLAN ID for each network if needed.

3. Repeat step 2 for the remaining storage and compute nodes.

4. Open a web browser on a computer that can access the NetApp HCI management network.

5. Browse to the Bond1G IP address you assigned to the controlling storage node. For example:

http://<Bond1G IP address>

This takes you to the NetApp Deployment Engine user interface.

Related information

NetApp HCI Compute and Storage Nodes Installation and Setup Instructions

Starting your deployment

Before continuing with your NetApp HCI deployment, you need to read and understand the end user license agreements.

Steps

1. On the Welcome to NetApp HCI page, click Get Started.

2. On the Prerequisites page, perform the following actions:

a. Ensure each prerequisite is met, and click each associated checkbox to confirm.

b. Click Continue.

3. On the End User Licenses page, perform the following actions:

a. Read the NetApp End User License Agreement.

b. If you accept the terms, click I accept at the bottom of the agreement text.

c. Read the VMware End User License Agreement.

d. If you accept the terms, click I accept at the bottom of the agreement text.

e. Click Continue.


VMware vSphere configuration

NetApp HCI uses the vCenter Server and ESXi components of VMware vSphere. vCenter Server is used to manage and monitor the VMware ESXi hypervisor installed on each compute node. You can install and configure a new vSphere deployment, which also installs the NetApp Element Plug-in for vCenter Server, or you can join and extend an existing vSphere deployment.

Be aware of the following caveats when you use the NetApp Deployment Engine to install a new vSphere deployment:

• The NetApp Deployment Engine installs the new vCenter Server Appliance with the Small deployment size option.

• The vCenter Server license is a temporary evaluation license. For continued operation after the evaluation period, you need to obtain a new license key from VMware and add it to the vCenter Server license inventory.

Note: If your vSphere inventory configuration uses a folder to store the NetApp HCI cluster within the vCenter datacenter, some operations, such as expanding NetApp HCI compute resources, will fail. Ensure that the NetApp HCI cluster is directly under the datacenter in the vSphere web client inventory tree, and is not stored in a folder. See the NetApp Knowledgebase article for more information.

If you install a new vCenter Server, you can install a vSphere standard switch or a vSphere distributed switch (VDS) during network configuration. A VDS enables simplified, centralized management of virtual machine network configuration after NetApp HCI deployment. Cloud data services functionality on NetApp HCI requires a VDS; vSphere standard switches are not supported for cloud data services.

Related tasks

Configuring a new VMware vSphere environment on page 27

Related information

Using VCP to manage NetApp HCI

NetApp Knowledgebase article: Possible errors when expanding compute nodes

Configuring a new VMware vSphere environment

You can deploy a new vSphere environment as part of the NetApp HCI installation process by providing some of the network information that vSphere should use. Note that if you configure vSphere using an IP address, the address cannot be changed after installation.

Before you begin

You have obtained the network information for the planned vSphere environment.

Steps

1. Click Configure a new vSphere deployment.

2. Select which version of vSphere the system should install during deployment.

3. Configure the new vSphere environment using one of the following options:


Option Steps

Use a domain name (recommended).

a. Click Configure Using a Fully Qualified Domain Name.

b. Enter the vCenter Server domain name in the vCenter Server Fully Qualified Domain Name field.

c. Enter the DNS server IP address in the DNS Server IP Address field.

d. Click Continue.

Use an IP address. a. Click Configure Using an IP Address.

b. Click Continue.

Joining an existing VMware vSphere deployment

Configure NetApp HCI to take advantage of an existing vSphere deployment by providing the vCenter Server network information and credentials.

Before you begin

• If you are joining an existing vSphere 6.7 deployment, make sure vCenter Server is running version 6.7 Update 1.

• If you are joining an existing vSphere 6.5 deployment, make sure vCenter Server is running version 6.5 Update 2 or later.

• Obtain the network details and administrator credentials for your existing vSphere deployment.

About this task

If you join multiple vCenter Server systems that are connected using vCenter Linked Mode, NetApp HCI only recognizes one of the vCenter Server systems.

Steps

1. Click Join and extend an existing vSphere deployment.

2. Enter the domain name or IP address in the vCenter Server Domain Name or IP address field.

If you enter a domain name, you also need to enter the IP address of an active DNS server in the DNS Server IP Address field that appears.

3. Enter the credentials of a vSphere administrator in the User Name and Password fields.

4. Click Continue.

Configuring credentials

During deployment, you define a common set of credentials to be used for the newly deployed VMware vSphere environment, the NetApp HCI compute and storage resources, and the management node. If you are joining an existing vSphere environment, these credentials are not applied to the existing vCenter Server.

About this task

Remember the following points about the credentials you set in the NetApp HCI Deployment Engine (an example follows the list):


• To log in to a vSphere environment installed as part of deployment, use the same user name that you enter on this page with the following suffix:

@vsphere.local

• To log in to ESXi after deployment, use root as the user name and the same password you enter on this page.
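For example, assuming you entered the hypothetical user name hciadmin during deployment, you would log in to vCenter as hciadmin@vsphere.local, and you could log in to an ESXi host over SSH (if SSH is enabled on that host) as follows:

ssh root@<ESXi_host_IP>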

Steps

1. On the Credentials page, enter a user name in the User Name field.

2. Enter a password in the Password field.

The password must conform to the password criteria visible in the Password must contain box.

3. Confirm the password in the Re-enter Password field.

4. Click Continue.

Selecting a network topology

When cabling NetApp HCI nodes, you have the option of using different network cable configurations depending on your needs. For each compute node, you can use all six network ports, with different types of traffic assigned to each pair of ports, or you can use two ports with all types of traffic assigned to the ports. Storage nodes use the standard four-cable configuration. Your choice affects which compute nodes are selectable in the inventory.

Before you begin

If you choose the two-cable network topology for compute nodes, consider the following requirements:

• You have a VMware vSphere Enterprise Plus license ready to apply after deployment is complete.

• You have verified that the configuration of your network and network switches is correct.

• VLAN tagging is required for storage and vMotion networks for all compute and storage nodes.

Steps

1. On the Network Topology page, select a compute node topology that fits the way you installed compute nodes for NetApp HCI:

• 6 Cable Option: The six-cable option provides dedicated ports for each type of traffic (management, virtual machine, and storage). You can optionally enable vSphere Distributed Switch (VDS). Enabling VDS configures a distributed switch, enabling simplified, centralized management of virtual machine network configuration after NetApp HCI deployment is complete. If you enable it, you must have a vSphere Enterprise Plus license ready to apply after deployment.

• 2 Cable Option: The two-cable option combines management, virtual machine, and storage traffic on two bonded ports. This cabling option requires VDS, and automatically enables it. You must have a vSphere Enterprise Plus license ready to apply after deployment.

2. Some cabling options display multiple back panel views of different types of node hardware. Cycle through the back panel views to see how to connect the network cables for that specific node model and cabling option.

3. When finished, click Continue.


Related concepts

Network configuration and cabling options on page 13

Inventory selection and node compatibility

When choosing nodes for your deployment, some restrictions apply to the node configurations you can combine in the same deployment.

Storage node compatibility

NetApp HCI supports storage nodes and drives with SED (self-encrypting drive) and FIPS 140-2 drive encryption capability. When deploying or expanding NetApp HCI, you can mix nodes with different reported levels of encryption, but NetApp HCI only supports the more basic form of encryption in this situation. For example, if you mix a storage node that is FIPS encryption capable with nodes that only support SED encryption, SED encryption is supported with this configuration, but FIPS drive encryption is not.

Note: Adding storage nodes capable of FIPS drive encryption to the storage cluster does not enable the FIPS drive encryption feature. After deployment or expansion with FIPS-capable nodes is complete, you need to manually enable FIPS drive encryption. See the Element Software User Guide for instructions.

All storage nodes must run the same minor version of Element software to be compatible in the same deployment. For example, you cannot mix a storage node running Element 11.3.1 with other storage nodes running Element 11.5.

Note: Depending on node hardware configuration, H410S storage nodes might appear in the inventory list labeled as H300S, H500S, or H700S storage nodes.

Compute node compatibility

Compute nodes must meet the following requirements to be selectable as inventory:

• The CPU generations in all compute nodes must match for proper VMware vMotion functionality. After you select a compute node from the inventory, you cannot select compute nodes with different CPU generations.

• You cannot intermix compute nodes with GPU-enabled compute nodes in the same compute cluster. If you select a GPU-enabled compute node, CPU-only compute nodes become unselectable, and vice versa.

• The software version running on the compute node must match the major and minor version of the NetApp Deployment Engine hosting the deployment. If this is not the case, you need to reimage the compute node using the RTFI process. See the NetApp Knowledgebase articles regarding RTFI for instructions.

• The compute node must have the cabling configuration you selected on the Network Topology page to be selectable in the Compute Nodes list.

• The network cabling configurations for compute nodes of the same model must match within a single compute cluster.

Selecting inventory

On the Inventory page, the NetApp Deployment Engine automatically detects available compute and storage nodes, enabling you to select and add all NetApp HCI resources to the deployment. If a node does not meet the requirements for deployment, it is not selectable and problems are indicated as errors. You can position your cursor over the error in the node's row to see an explanation. When choosing node inventory on the Inventory page, the storage node that is hosting the NetApp Deployment Engine is automatically selected, and you cannot deselect it.

Before you begin

Jumbo frames must be enabled for proper inventory detection. If no nodes or only a subset of nodes appear in the inventory, verify that the switch ports used for NetApp HCI nodes (all SFP+/SFP28 interfaces) are configured with jumbo frames.
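For example, on many data center switches you can enable jumbo frames by raising the interface MTU on the node-facing ports. The following is a minimal sketch in Cisco NX-OS style syntax; the interface range is hypothetical, and the exact commands vary by switch vendor and model:

switch(config)# interface ethernet 1/1-8
switch(config-if-range)# mtu 9216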

Steps

1. On the Inventory page, view the list of available nodes.

If the system cannot detect any inventory, it displays an error. Correct the error before continuing.

If your system uses DHCP for IP address assignment, the storage and compute resources might not appear in the inventory immediately.

2. Optional: If a resource does not appear in the inventory immediately, or if you address an error and need to refresh the inventory, click Refresh Inventory. You might need to refresh the inventory multiple times.

3. Optional: To filter the inventory on node attributes, such as node type:

a. Click Filter in the header of the Compute Nodes or Storage Nodes lists.

b. Choose criteria from the drop-down lists.

c. Below the drop-down lists, enter information to satisfy the criteria.

d. Click Add Filter.

e. Clear individual filters by clicking X next to an active filter, or clear all filters by clicking X above the list of filters.

4. Select all compute nodes that shipped with your system from the Compute Nodes list.

You need to select at least two compute nodes to proceed with deployment.

5. Select all storage nodes that shipped with your system from the Storage Nodes list.

You need to select at least four storage nodes to proceed with deployment.

6. Optional: If a storage node selection box is flagged, that storage node exceeds 33% of the total storage cluster capacity. Perform one of the following steps:

• Clear the selection box for the flagged storage node.

• Select additional storage nodes to more equally distribute the storage cluster capacity between nodes.

7. Click Continue.

Related information

How to RTFI using a USB key (login required)

How to RTFI using the IPMI management console (login required)

Configuring network settings

NetApp HCI provides a network settings page with an easy form to simplify network configuration. When you complete the easy form, NetApp HCI automatically populates much of the rest of the information on the network settings page. You can then enter final network settings and verify that the network configuration is correct before proceeding. You do not need to complete the form in its entirety.

Before you begin

• You have obtained the following information:

◦ The planned naming prefix for the hosts and storage cluster

◦ All planned subnet mask, starting IP address, default gateway, and VLAN IDs for the management, iSCSI, and vMotion networks

◦ The IP address, default gateway, VLAN IDs, and subnet mask information for any planned VMware vCenter deployment

◦ The Network Time Protocol (NTP) server address for NetApp HCI

◦ The DNS server IP address information for NetApp HCI

• If you are deploying a vSphere Distributed Switch, you have a vSphere Enterprise Plus license ready to apply after deployment is complete.

• If you assigned VLAN IDs to node ports during terminal user interface (TUI) configuration, you have configured those ports with the same VLAN ID during network configuration. You do not need to configure tagged host ports as access ports or native VLANs on the connected switch ports.

• You have verified that your network switch configuration is correct. Incorrect switch configurations (such as incorrect VLANs or MTU size) will cause deployment errors.

About this task

If you selected the two-cable network topology for compute nodes, you must use VLAN IDs for the vMotion and storage networks for all compute and storage nodes in the deployment (VLAN IDs are optional for the management networks).

Note: In environments that require host-side VLAN tagging before deployment, if you have configured VLAN IDs on compute and storage nodes so they are discoverable by the NetApp Deployment Engine, ensure you use the correct VLANs when configuring network settings in the NetApp Deployment Engine.

Steps

1. Optional: To disable live validation of network information you enter on this page, toggle the Live network validation switch to Off.

2. In the Infrastructure Services section of the Network Settings page, enter the DNS and NTP server information for NetApp HCI in the following fields:

Field Description

DNS Server IP Address 1 The IP address of the primary DNS server for NetApp HCI. If you specified a DNS server on the vCenter Configuration page, this field is populated and read-only.

DNS Server IP Address 2 (Optional) An optional IP address of a secondary DNS server for NetApp HCI.

NTP Server Address 1 The IP address or fully qualified domain name of the primary NTP server for this infrastructure.


NTP Server Address 2 (Optional) An optional IP address or fully qualified domain name of the secondary NTP server for this infrastructure.

3. Click To save time, launch the easy form to enter fewer network settings.

The Network Settings Easy Form dialog box appears.

4. Enter a naming prefix in the Naming Prefix field.

The naming prefix is applied to the name of each host and the name of the storage cluster. Naming prefixes have the following characteristics:

• Must start with a letter

• Can contain letters, numbers, and hyphens

• Cannot exceed 55 characters

5. Choose one of the following options for assigning VLAN IDs.

At any time when using the form, click Clear next to a row to clear input from a row of fields.

Important: When you assign VLAN IDs, you are configuring VLAN tags that NetApp HCI will apply to the network traffic. You do not need to enter your native VLAN as a VLAN ID; to use the native VLAN for a network, leave the appropriate field empty.

Option Steps

Assign VLAN IDs a. Select Yes for the Will you assign VLAN IDs option.

b. In the VLAN ID column, enter a VLAN tag to use for each type of network traffic you want to assign to a VLAN.

Both compute vMotion traffic and iSCSI traffic must use an unshared VLAN ID.

c. Click Next.

d. In the Subnet column, enter subnet definitions in CIDR format for each type of network traffic in each network; for example, 192.168.1.0/24.

e. In the Default Gateway column, enter the IP address of the default gateway for each type of network traffic in each network.

f. In the Starting IP column, enter the first usable IP address for each network subnet in each network.

Do not assign VLAN IDs a. Select No for the Will you assign VLAN IDs radio button.

b. In the Subnet column, enter subnet definitions in CIDR format for each type of network traffic in each network; for example, 192.168.1.0/24.

c. In the Default Gateway column, enter the IP address of the default gateway for each type of network traffic in each network.

d. In the Starting IP column, enter the first usable IP address for each type of network traffic in each network.

6. Click Apply to Network Settings.


7. Click Yes to confirm.

This populates the Network Settings page with the settings you entered in the easy form. NetApp HCI validates the IP addresses you entered. You can disable this validation with the Disable Live Network Validation button.

8. Verify that the automatically populated data is correct.

9. Click Continue.

Reviewing and deploying the configuration

You can review the information you provided before beginning deployment. You can also correct any incorrect or incomplete information before you proceed.

About this task

Note: During deployment, the management node installation process creates volumes with names beginning with NetApp-HCI- in the Element storage cluster, and a SolidFire account beginning with the name "tenant_". Do not delete these volumes or accounts; doing so will cause a loss in management functionality.

Steps

1. Optional: Click the icon to download installation information in CSV format.

You can save this file and refer to it later for configuration information.

2. Expand each section and review the information. To expand all sections at once, click Expand All.

3. Optional: To make changes to information in any displayed section:

a. Click Edit in the corresponding section.

b. Make the necessary changes.

c. Click Continue until you reach the Review page. Your previous settings are saved on each page.

d. Repeat steps 2 and 3 to make any other necessary changes.

4. If you do not want to send cluster statistics and support information to NetApp-hosted SolidFire Active IQ servers, clear the final checkbox.

This disables real-time health and diagnostic monitoring for NetApp HCI. Disabling this feature removes the ability for NetApp to proactively support and monitor NetApp HCI to detect and resolve problems before production is affected.

5. If all information is correct, click Start Deployment.

A dialog box appears. In the event of network connectivity issues or power loss during the final setup process, or if your browser session is lost, you can copy the URL displayed in the dialog and use it to browse to the final setup progress page.

6. Review the information in the dialog and click Copy to Clipboard to copy the URL to your clipboard.

7. Save the URL to a text file on your computer.

8. When you are ready to proceed with deployment, click OK.


Deployment begins and a progress page is displayed. Do not close the browser window or navigate away from the progress page until deployment is complete. If your browser session is lost for any reason, you can browse to the URL you copied earlier (and accept any security warnings that appear) to regain access to the final setup progress page.

Note: If the deployment fails, save any error message text and contact NetApp Support.

After you finish

• Ensure that you can use the NetApp HCI management application by updating management services.

• Begin using NetApp HCI by clicking Launch vSphere.

Note: For NetApp HCI installations using vSphere 6.7, this link launches the HTML5 vSphere web interface. For installations using vSphere 6.5, this link launches the Adobe Flash vSphere web interface.


Post-deployment tasks

Depending on your choices during the deployment process, you need to complete some final tasks before your NetApp HCI system is ready for production use, such as updating firmware and drivers and migrating to an existing vSphere instance (if needed).

Supported networking changes

After you deploy NetApp HCI, you can make limited changes to the default networking configuration. However, certain settings are required for smooth operation and proper network detection. Changing these settings will cause unexpected behavior, and might prevent you from expanding compute and storage resources.

After you deploy your system, you can make the following changes to the default network configuration in VMware vSphere as dictated by your network requirements (an illustrative command example follows the list):

• Change vSwitch names

• Change port group names

• Add and remove additional port groups

• Change the vmnic interface failover order for any additional port groups you have added
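For example, adding or removing a port group on a vSphere standard switch can be done from the ESXi command line. The following is an illustration only, with hypothetical port group and vSwitch names; for port groups on a vSphere Distributed Switch, make the equivalent change in the vSphere client:

esxcli network vswitch standard portgroup add --portgroup-name=VM_Network_2 --vswitch-name=vSwitch0

esxcli network vswitch standard portgroup remove --portgroup-name=VM_Network_2 --vswitch-name=vSwitch0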

H300E, H500E, H700E and H410C compute nodes

NetApp HCI expects the following network configuration for H300E, H500E, H700E, and H410C nodes.

The following is a six-interface configuration with VMware vSphere Distributed Switching (VDS). This configuration is only supported when used with VMware vSphere Distributed Switches, and requires VMware vSphere Enterprise Plus licensing.

Network function vmkernel vmnic (physical interface)

Management vmk0 vmnic2 (Port A), vmnic3 (Port B)

iSCSI-A vmk1 vmnic5 (Port E)

iSCSI-B vmk2 vmnic1 (Port D)

vMotion vmk3 vmnic4 (Port C), vmnic0 (Port F)

The following is a six-interface configuration with VMware vSphere Standard Switching (VSS). This configuration uses VMware vSphere Standard Switches (VSS).

Network function vmkernel vmnic (physical interface)

Management vmk0 vmnic2 (Port A), vmnic3 (Port B)

iSCSI-A vmk2 vmnic1 (Port E)

iSCSI-B vmk3 vmnic5 (Port D)

vMotion vmk1 vmnic4 (Port C), vmnic0 (Port F)

The following is a two-interface configuration. This configuration is only supported when used with VMware vSphere Distributed Switches (VDS), and requires VMware vSphere Enterprise Plus licensing.


Network function vmkernel vmnic (physical interface)

Management vmk0 vmnic1 (Port D), vmnic5 (Port E)

iSCSI-A vmk1 vmnic5 (Port E)

iSCSI-B vmk2 vmnic1 (Port D)

vMotion vmk3 vmnic1 (Port D), vmnic5 (Port E)

H610C compute nodes

NetApp HCI expects the following network configuration for H610C nodes.

This configuration is only supported when used with VMware vSphere Distributed Switches (VDS), and requires VMware vSphere Enterprise Plus licensing.

Note: Ports A and B are unused on the H610C.

Network function vmkernel vmnic (physical interface)

Management vmk0 vmnic2 (Port C), vmnic3 (Port D)

iSCSI-A vmk1 vmnic3 (Port D)

iSCSI-B vmk2 vmnic2 (Port C)

vMotion vmk3 vmnic2 (Port C), vmnic3 (Port D)

H615C compute nodes

NetApp HCI expects the following network configuration for H615C nodes.

This configuration is only supported when used with VMware vSphere Distributed Switches (VDS), and requires VMware vSphere Enterprise Plus licensing.

Network function vmkernel vmnic (physical interface)

Management vmk0 vmnic0 (Port A), vmnic1 (Port B)

iSCSI-A vmk1 vmnic1 (Port B)

iSCSI-B vmk2 vmnic0 (Port A)

vMotion vmk3 vmnic0 (Port A), vmnic1 (Port B)

Keeping VMware vSphere up to date

After deploying NetApp HCI, you should use VMware vSphere Update Manager to apply the latest security patches for the version of VMware vSphere used with NetApp HCI.

Use the Interoperability Matrix Tool to ensure that all versions of software are compatible. See the VMware vSphere Update Manager documentation for more information.

Installing GPU drivers for GPU-enabled compute nodes

Compute nodes with NVIDIA graphics processing units (GPUs), like the NetApp H610C, need the NVIDIA software drivers installed in VMware ESXi so that they can take advantage of the increased processing power. After deploying compute nodes with GPUs, you need to perform these steps on each GPU-enabled compute node to install the GPU drivers in ESXi.

Steps

1. Open a browser and browse to the NVIDIA licensing portal at the following URL:

https://nvid.nvidia.com/dashboard/

2. Download one of the following driver packages to your computer, depending on your environment:

vSphere version Driver package

vSphere 6.0 NVIDIA-GRID-vSphere-6.0-390.94-390.96-392.05.zip

vSphere 6.5 NVIDIA-GRID-vSphere-6.5-410.92-410.91-412.16.zip

vSphere 6.7 NVIDIA-GRID-vSphere-6.7-410.92-410.91-412.16.zip

3. Extract the driver package on your computer.

The resulting .VIB file is the uncompressed driver file.

4. Copy the .VIB driver file from your computer to ESXi running on the compute node. The following example commands for each version assume that the driver is located in the $HOME/NVIDIA/ESX6.x/ directory on the management host. The SCP utility is readily available in most Linux distributions, or available as a downloadable utility for all versions of Windows:

Option Description

ESXi 6.0 scp $HOME/NVIDIA/ESX6.0/NVIDIA**.vib root@<ESXi_IP_ADDR>:/.

ESXi 6.5 scp $HOME/NVIDIA/ESX6.5/NVIDIA**.vib root@<ESXi_IP_ADDR>:/.

ESXi 6.7 scp $HOME/NVIDIA/ESX6.7/NVIDIA**.vib root@<ESXi_IP_ADDR>:/.

5. Use the following steps to log in as root to the ESXi host and install the NVIDIA vGPU manager in ESXi.

a. Run the following command to log in to the ESXi host as the root user:

ssh root@<ESXi_IP_ADDRESS>

b. Run the following command to verify that no NVIDIA GPU drivers are currently installed:

nvidia-smi

This command should return the message "nvidia-smi: not found".

c. Run the following commands to enable maintenance mode on the host and install the NVIDIA vGPU Manager from the VIB file:

esxcli system maintenanceMode set --enable true

esxcli software vib install -v /NVIDIA**.vib

You should see the message "Operation finished successfully".


d. Run the following command and verify that all eight GPU drivers are listed in the command output:

nvidia-smi

e. Run the following command to verify that the NVIDIA vGPU package was installed and loaded correctly:

vmkload_mod -l | grep nvidia

The command should return output similar to the following: nvidia 816 13808

f. Run the following command to reboot the host:

reboot -f

g. Run the following command to exit maintenance mode:

esxcli system maintenanceMode set --enable false

6. Repeat steps 4-6 for any other newly deployed compute nodes with NVIDIA GPUs.

7. Perform the following tasks using the instructions in the NVIDIA documentation site:

a. Install the NVIDIA license server.

b. Configure the virtual machine guests for NVIDIA vGPU software.

c. If you are using vGPU-enabled desktops in a virtual desktop infrastructure (VDI) context, configure VMware Horizon View for NVIDIA vGPU software.

Related information

NVIDIA GRID license server installation instructions

NVIDIA vGPU driver installation instructions for virtual machine guests

NVIDIA vGPU Deployment Guide for VMware Horizon 7.5 on VMware vSphere 6.7

Accessing NetApp Hybrid Cloud Control

NetApp Hybrid Cloud Control enables you to manage NetApp HCI. You can upgrade management services, expand and monitor your installation, and enable NetApp Cloud Services. You log in to NetApp Hybrid Cloud Control by browsing to the IP address of the management node.

Before you begin

You have upgraded your management services to at least version 2.1.326. NetApp Hybrid Cloud Control is not available in earlier service bundle versions. For information about the current service bundle version, see the Management Services Release Notes.
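If you prefer to confirm the installed bundle version from a shell before logging in, one option is to query the management node services API. The endpoint shown here is an assumption based on the management node API root described later in this guide; verify it against the management services documentation for your bundle version:

curl -k https://<ManagementNodeIP>/mnode/about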

Steps

1. Open a web browser and browse to the IP address of the management node. For example:

https://<ManagementNodeIP>

2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials.

The NetApp Hybrid Cloud Control interface appears.


Upgrading the entire NetApp HCI installation

You can keep the entire NetApp HCI installation up-to-date after deployment by upgrading individual NetApp HCI software components. These components include management services, management node, Element software, firmware, drivers, and the Element Plug-in for vCenter Server.

About this task

This process describes the recommended sequence for upgrading your NetApp HCI installation. Some tasks might not be required, depending on your configuration.

Steps

1. (For upgrades from management node 10.x only) If you have a management node 10.x, migrate your configuration to management node 11.1.

Migrating from management node 10.x to version 11.x

2. Upgrade your Element software version to 11.7.

Upgrading Element software to version 11.7 on the storage cluster

3. Upgrade your management node to version 11.7.

• Upgrading the management node to version 11.7 from version 11.5

• Upgrading the management node to version 11.7 from version 11.3

• Upgrading the management node to version 11.7 from version 11.0 or 11.1

Note: This task also includes updating your vCenter plug-in registration and adding a vCenter asset to the management node known assets.

4. (If not already completed as a prerequisite in step 3) Update management services from Hybrid Cloud Control.

Updating management services with NetApp Hybrid Cloud Control

5. Update your compute node firmware.

Updating compute node firmware

6. Update your compute node drivers.

Updating compute node drivers

Migrating from management node 10.x to version 11.x

If you have a management node at version 10.x, you cannot upgrade from 10.x to 11.x. You can instead use this migration procedure to copy over the configuration from 10.x to a newly deployed 11.1 management node. If your management node is currently at 11.0 or higher, you should skip this procedure. You need management node 11.0 or 11.1 and the latest HealthTools to upgrade Element software from 10.3+ through 11.x.

Steps

1. From the VMware vSphere interface, deploy the management node 11.1 OVA and power it on.

2. Open the management node VM console, which brings up the terminal user interface (TUI). Use the TUI to create a new administrator ID and assign a password.


3. In the management node TUI, log in to the management node with the new ID and password and validate that it works.

4. From the vCenter or management node TUI, get the management node 11.1 IP address and browse to the IP address on port 9443 to open the management node UI.

https://<mNode 11.1 IP address>:9443

5. In vSphere, select NetApp Element Configuration > mNode Settings. In older versions, the top-level menu is NetApp SolidFire Configuration.

6. Click Actions > Clear.

7. To confirm, click Yes. The mNode Status field should report Not Configured.

Note: When you go to the mNode Settings tab for the first time, the mNode Status field might display as Not Configured instead of the expected UP; you might not be able to choose Actions > Clear. Refresh the browser. The mNode Status field will eventually display UP.

8. Log out of vSphere.

9. In a web browser, open the management node registration utility (https://<mNode 11.1 IP address>:9443) and select QoSSIOC Service Management.

10. Set the new QoSSIOC password.

Note: The default password is solidfire. This password is required to set the new password.

11. Click the vCenter Plug-in Registration tab.

12. Select Update Plug-in.

13. Enter required values. When you are finished, click UPDATE.

14. Log in to vSphere and select NetApp Element Configuration > mNode Settings.

15. Click Actions > Configure.

16. Provide the management node IP address, management node user ID (the user name is admin), password that you set on the QoSSIOC Service Management tab of the registration utility, and vCenter user ID and password.

In vSphere, the mNode Settings tab should display the mNode Status as UP, which indicates management node 11.1 is registered to vCenter.

17. From the management node registration utility (https://<mNode 11.1 IP address>:9443), restart the SIOC service from QoSSIOC Service Management.

18. Wait for one minute and check the NetApp Element Configuration > mNode Settings tab. This should display the mNode Status as UP.

If the status is DOWN, check the permissions for /sf/packages/sioc/app.properties. The file should have read, write, and execute permissions for the file owner. The correct permissions should appear as follows: -rwx------

19. After the SIOC process starts and vCenter displays mNode Status as UP, check the logs for the sf-hci-nma service on the management node. There should be no error messages.


20. (For management node 11.1 only) SSH into the management node version 11.1 with root privileges and start the NMA service with the following commands:

# systemctl enable /sf/packages/nma/systemd/sf-hci-nma.service

# systemctl start sf-hci-nma

21. Perform actions from vCenter to remove a drive, add a drive or reboot nodes. This triggers storage alerts which should be reported in vCenter. If this is working, NMA system alerts are functioning as expected.

22. If ONTAP Select is configured in vCenter, configure ONTAP Select alerts in NMA by copying the .ots.properties file from the previous management node to the management node version 11.1 /sf/packages/nma/conf/.ots.properties file, and restart the NMA service using the following command: systemctl restart sf-hci-nma.

23. Verify that ONTAP Select is working by viewing the logs with the following command: journalctl -f | grep -i ots.

24. Configure AIQ by doing the following:

a. SSH in to the management node version 11.1 and go to the /sf/packages/collector directory.

b. Run the following command:

sudo ./manage-collector.py --set-username netapp --set-password --set-mvip <MVIP>

c. Enter the management node UI password when prompted.

d. Run the following commands:

./manage-collector.py --get-all

sudo systemctl restart sfcollector

e. Verify sfcollector logs to confirm it is working.

25. In vSphere, the NetApp Element Configuration > mNode Settings tab should display the mNode Status as UP.

26. Verify NMA is reporting system alerts and ONTAP Select alerts.

27. If everything is working as expected, shut down and delete management node 10.x VM.

Upgrading Element software to version 11.7 on the storage cluster

You must use the HealthTools suite of tools to upgrade storage node software. You must use management node 11.0, 11.1, or later to use the latest HealthTools suite of tools.

Related information

KB: HealthTools Improvement and Resolved Issues

HealthTools downloads page


Determining the Element version on the management node

Before you upgrade the NetApp Element software on your cluster, you must ensure that you have management node 11.0, 11.1, or later.

Steps

1. To check the version of software running on your management node, log in to your management node.

2. View the Element version number in the login banner.

login as: <username>
SolidFire Element 11.0.0.781

Using HealthTools to run pre-upgrade health checks

The HealthTools suite includes the components required for the software upgrade process: sfupdate-healthtools, sfupgradecheck, and sfinstall.

Note: The commands noted here require escalated privileges to run. Either preface commands with sudo or escalate your user to root privileges.
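For example, to run the HealthTools version check with escalated privileges:

sudo sfupdate-healthtools -v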

Checking the installed version of HealthTools

You can run the sfupdate-healthtools command to check the version of HealthTools installed on the management node. This will compare the currently installed version with the newest available version.

Before you begin

The management node in your environment is running version 11.0, 11.1, or later.

Steps

1. To view the installed version, in the command line enter the following command:

sfupdate-healthtools -v

A sample of the output is as follows:

Currently installed version of HealthTools: 2018.09.01.130

2. To view the latest available version, in the command line enter the following command:

sfupdate-healthtools -l

A sample of the output is as follows:

Latest available version of HealthTools: 2018.09.01.130
The latest version of HealthTools can be downloaded from:
https://mysupport.netapp.com/NOW/cgi-bin/software/


Checking the installed version of HealthTools on dark sites

For management nodes isolated from external Internet access, the verification check for the latest version of HealthTools always fails. You must manually verify if the latest version available is the same as the installed version.

Before you begin

• You have a management node on a dark site.

• When you run sfupdate-healthtools -l, you are unable to verify the latest available version of HealthTools.

Steps

1. Log in to the NetApp Support Site to download the latest HealthTools software.

https://mysupport.netapp.com/products/p/healthtools.html

2. Download a JSON file from the NetApp Support Site on a computer that is not the management node and rename it to metadata.json.

https://library.netapp.com/ecm/ecm_get_file/ECMLP2840740

3. Run the sfupdate-healthtools command:

sfupdate-healthtools <path-to-healthtools-package>

4. Run the sfupdate-healthtools -v command to check the installed version:

sfupdate-healthtools -v

5. Run the sfupdate-healthtools -l command to check the latest version against the metadata JSON file:

sfupdate-healthtools -l --metadata=<path-to-metadata-json>

Example

sfupdate-healthtools -l --metadata=/tmp/metadata.json

Updating HealthTools

You can use the sfupdate-healthtools command to update the version of HealthTools that is installed on the management node.

Steps

1. Log in to the NetApp Support Site at https://mysupport.netapp.com/products/p/healthtools.html.

2. Download the latest version of HealthTools to a computer that is not the management node.

3. Copy the installation file to the management node.

You can do this using, for example, SCP.

4. Run the sfupdate-healthtools <path to install file> command to install the new software.


Example

See the following sample input command:

sfupdate-healthtools /tmp/solidfire-healthtools-2018.09.01.130.tgz

A sample of the output is as follows:

Checking key signature for file /tmp/solidfire-healthtools-2018.09.01.130/components.tgz
installing command sfupdate-healthtools
Restarting on version 2018.09.01.130
sfupdate-healthtools /sf/bin/sfupdate-healthtools -r 2018.09.01.130
installing command sfupgradecheck
installing command sfinstall
installing command sfresetupgrade

5. Run the sfupdate-healthtools -v command to check the installed version.

Example

A sample of the output is as follows:

Currently installed version of HealthTools: 2018.09.01.130

Checking for upgrade readiness

You can verify that the cluster is ready to be upgraded by using the sfupgradecheck command. This command verifies information such as pending nodes, disk space, and cluster faults. If your management node is at a dark site, the upgrade readiness check needs the metadata JSON file to run successfully.

About this task

This procedure describes how to address upgrade checks with any of the following results:

• Running the sfupgradecheck command does not give an error. Your cluster is upgrade ready.

• Checks within the sfupgradecheck tool fail with an error message. Your cluster is not upgrade ready and additional steps are required.

• Your upgrade check fails with an error message that HealthTools is out-of-date.

• Your upgrade check fails because your management node is on a dark site.

Step

1. Run the sfupgradecheck command.

sfupgradecheck -u <cluster-user-name> -p <cluster-password> MVIP

Note: For passwords that contain special characters, add a backslash (\) before each special character. For example, mypass!@1 should be entered as mypass\!\@1.

See the following sample input command:

sfupgradecheck -u admin -p admin 10.117.78.244

A sample output is as follows. In this result, no errors appear and you are ready to upgrade.

check_pending_nodes:
Test Description: Verify no pending nodes in cluster
More information: https://kb.netapp.com/support/s/article/ka11A0000008ltOQAQ/pending-nodes
check_cluster_faults:
Test Description: Report any cluster faults
check_root_disk_space:
Test Description: Verify node root directory has at least 12 GBs of available disk space
Passed node IDs: 1, 2, 3
More information: https://kb.netapp.com/support/s/article/ka11A0000008ltTQAQ/SolidFire-Disk-space-error
check_mnode_connectivity:
Test Description: Verify storage nodes can communicate with management node
Passed node IDs: 1, 2, 3
More information: https://kb.netapp.com/support/s/article/ka11A0000008ltYQAQ/mNode-connectivity
check_files:
Test Description: Verify options file exists
Passed node IDs: 1, 2, 3
check_cores:
Test Description: Verify no core or dump files exists
Passed node IDs: 1, 2, 3
check_upload_speed:
Test Description: Measure the upload speed between the storage node and the management node
Node ID: 1 Upload speed: 90063.90 KBs/sec
Node ID: 3 Upload speed: 106511.44 KBs/sec
Node ID: 2 Upload speed: 85038.75 KBs/sec

If there are errors, additional actions are required:

• Your cluster is not upgrade ready. Review the sfupgradecheck error message.

A sample output is as follows:

The following tests failed:
check_root_disk_space:
Test Description: Verify node root directory has at least 12 GBs of available disk space
Severity: ERROR
Failed node IDs: 2
Remedy: Remove unneeded files from root drive
More information: https://kb.netapp.com/support/s/article/ka11A0000008ltTQAQ/SolidFire-Disk-space-error
check_pending_nodes:
Test Description: Verify no pending nodes in cluster
More information: https://kb.netapp.com/support/s/article/ka11A0000008ltOQAQ/pending-nodes
check_cluster_faults:
Test Description: Report any cluster faults
check_root_disk_space:
Test Description: Verify node root directory has at least 12 GBs of available disk space
Passed node IDs: 1, 3
More information: https://kb.netapp.com/support/s/article/ka11A0000008ltTQAQ/SolidFire-Disk-space-error
check_mnode_connectivity:
Test Description: Verify storage nodes can communicate with management node
Passed node IDs: 1, 2, 3
More information: https://kb.netapp.com/support/s/article/ka11A0000008ltYQAQ/mNode-connectivity
check_files:
Test Description: Verify options file exists
Passed node IDs: 1, 2, 3
check_cores:
Test Description: Verify no core or dump files exists
Passed node IDs: 1, 2, 3
check_upload_speed:
Test Description: Measure the upload speed between the storage node and the management node
Node ID: 1 Upload speed: 86518.82 KBs/sec
Node ID: 3 Upload speed: 84112.79 KBs/sec
Node ID: 2 Upload speed: 93498.94 KBs/sec

In this example, node 2 is low on disk space. You can find more information in the knowledge base article listed in the error message.

• HealthTools is out-of-date. Review the error message and note that the upgrade check fails.

Sample output is as follows:

sfupgradecheck failed: HealthTools is out of date:
installed version: 2018.02.01.200
latest version: 2018.03.05.901.
The latest version of the HealthTools can be downloaded from: https://mysupport.netapp.com/NOW/cgi-bin/software/
Or rerun with the -n option


When this happens, you must follow the steps described in the output.

• Your management node is on a dark site. Review the message and note that the upgrade check fails.

A sample output is as follows:

sfupgradecheck failed: Unable to verify latest available version of healthtools.

a. When this happens, get and save the metadata.json file on a computer that is not the management node and run the following command:

sfupgradecheck -l --metadata=<path-to-metadata-json>

For details, see additional upgrade information for dark sites.

Upgrading Element software at dark sites on page 50

b. Verify that the HealthTools suite is up-to-date by running the following command:

sfupgradecheck -u <cluster-user-name> -p <cluster-password> MVIP

Upgrading Element software

To upgrade the NetApp Element software, you must use the sfinstall file included in the HealthTools suite of tools.

Before you begin

Attention: Before you upgrade Element software, check upgrade path information for the Element version you are upgrading to and verify that the upgrade path is valid.

NetApp Knowledgebase Answer 1088254: Upgrade matrix for storage clusters running NetApp Element Software

• The management node in your environment is running version 11.0, 11.1 or later.

• You have the latest version of HealthTools.

• You verified that the cluster is ready to be upgraded with the command sfupgradecheck.

Steps

1. For NetApp HCI systems, go to the NetApp HCI software download page. For standalone SolidFire storage systems, go to the Element software download page.

2. Select the correct software release and download the latest storage node image to a computer that is not the management node.

3. Copy the ISO file to the management node in an accessible location like /tmp.

You can do this using, for example, SCP.

Note: When you upload the ISO file, make sure that the name of the file does not change, otherwise later steps will fail.

4. Optional: Download the ISO from the management node to the cluster nodes before the upgrade.

This step reduces the upgrade time by pre-staging the ISO on the storage nodes and running additional internal checks to ensure that the cluster is in a good state to be upgraded. Performing this operation will not put the cluster into "upgrade" mode or restrict any of the cluster operations.

sfinstall -u <cluster_username> -p <cluster_password> <MVIP> <path-to-install-file-ISO> --stage


Note: For passwords that contain special characters, add a backslash (\) before each special character. For example, mypass!@1 should be entered as mypass\!\@1. Alternatively, omit the password from the command line to allow sfinstall to prompt for the information.

Example

See the following sample input:

sfinstall 10.117.0.244 -u admin -p admin /tmp/solidfire-rtfi-sodium-11.0.0.345.iso --stage

The output for the sample shows that sfinstall attempts to verify if a newer version of sfinstall is available:

sfinstall 10.117.0.244 -u admin -p admin /tmp/solidfire-rtfi-sodium-11.0.0.345.iso
2018-10-01 16:52:15: Newer version of sfinstall available. This version: 2018.09.01.130, latest version: 2018.06.05.901.
The latest version of the HealthTools can be downloaded from: https://mysupport.netapp.com/NOW/cgi-bin/software/ or rerun with --skip-version-check

See the following sample excerpt from a successful pre-stage operation:

Note: When staging completes, the upgrade event message will display Storage Node Upgrade Staging Successful.

flabv0004 ~ # sfinstall -u admin -p admin 10.117.0.87 solidfire-rtfi-sodium-patch3-11.3.0.14171.iso --stage
2019-04-03 13:19:58: sfinstall Release Version: 2019.01.01.49 Management Node Platform: Ember Revision: 26b042c3e15a Build date: 2019-03-12 18:45
2019-04-03 13:19:58: Checking connectivity to MVIP 10.117.0.87
2019-04-03 13:19:58: Checking connectivity to node 10.117.0.86
2019-04-03 13:19:58: Checking connectivity to node 10.117.0.87
...
2019-04-03 13:19:58: Successfully connected to cluster and all nodes
...
2019-04-03 13:20:00: Do you want to continue? ['Yes', 'No']: Yes
...
2019-04-03 13:20:55: Staging install pack on cluster nodes
2019-04-03 13:20:55: newVersion: 11.3.0.14171
2019-04-03 13:21:01: nodeToStage: nlabp2814, nlabp2815, nlabp2816, nlabp2813
2019-04-03 13:21:02: Staging Node nlabp2815 mip=[10.117.0.87] nodeID=[2] (1 of 4 nodes)
2019-04-03 13:21:02: Node Upgrade serving image at http://10.117.0.204/rtfi/solidfire-rtfi-sodium-patch3-11.3.0.14171/filesystem.squashfs
...
2019-04-03 13:25:40: Staging finished. Repeat the upgrade command without the --stage option to start the upgrade.

The staged ISOs will be automatically deleted after the upgrade completes. However, if the upgrade has not started and needs to be rescheduled, ISOs can be manually de-staged using the command:

sfinstall -u <cluster_username> -p <cluster_password> <MVIP> --destage

Note: After the upgrade has started, the de-stage option is no longer available.

5. Start the upgrade with the sfinstall command and the path to the ISO file:

sfinstall -u <cluster_username> -p <cluster_password> <MVIP> <path-to-install-file-ISO>

Example

See the following sample input command:

sfinstall 10.117.0.244 -u admin -p admin /tmp/solidfire-rtfi-sodium-11.0.0.345.iso


The output for the sample shows that sfinstall attempts to verify if a newer version of sfinstall is available:

sfinstall 10.117.0.244 -u admin -p admin /tmp/solidfire-rtfi-sodium-11.0.0.345.iso
2018-10-01 16:52:15: Newer version of sfinstall available. This version: 2018.09.01.130, latest version: 2018.06.05.901.
The latest version of the HealthTools can be downloaded from: https://mysupport.netapp.com/NOW/cgi-bin/software/ or rerun with --skip-version-check

See the following sample excerpt from a successful upgrade. Upgrade events can be used to monitor the progress of the upgrade.

# sfinstall 10.117.0.161 -u admin -p admin solidfire-rtfi-sodium-11.0.0.761.iso
2018-10-11 18:28
Checking connectivity to MVIP 10.117.0.161
Checking connectivity to node 10.117.0.23
Checking connectivity to node 10.117.0.24
...
Successfully connected to cluster and all nodes

###################################################################
You are about to start a new upgrade
10.117.0.161
10.3.0.161
solidfire-rtfi-sodium-11.0.0.761.iso
Nodes:
10.117.0.23 nlabp1023 SF3010 10.3.0.161
10.117.0.24 nlabp1025 SF3010 10.3.0.161
10.117.0.26 nlabp1027 SF3010 10.3.0.161
10.117.0.28 nlabp1028 SF3010 10.3.0.161
###################################################################

Do you want to continue? ['Yes', 'No']: yes
...
Watching for new network faults. Existing fault IDs are set([]).
Checking for legacy network interface names that need renaming
Upgrading from 10.3.0.161 to 11.0.0.761 upgrade method=rtfi
Waiting 300 seconds for cluster faults to clear
Waiting for caches to fall below threshold
...
Installing mip=[10.117.0.23] nodeID=[1] (1 of 4 nodes)
Starting to move primaries.
Loading volume list
Moving primary slice=[7] away from mip[10.117.0.23] nodeID[1] ssid[11] to new ssid[15]
Moving primary slice=[12] away from mip[10.117.0.23] nodeID[1] ssid[11] to new ssid[15]
...
Installing mip=[10.117.114.24] nodeID=[2] (2 of 4 nodes)
Starting to move primaries.
Loading volume list
Moving primary slice=[5] away from mip[10.117.114.24] nodeID[2] ssid[7] to new ssid[11]
...
Install of solidfire-rtfi-sodium-11.0.0.761 complete.
Removing old software
No staged builds present on nodeID=[1]
No staged builds present on nodeID=[2]
...
Starting light cluster block service check

Related tasks

Migrating from management node 10.x to version 11.x on page 40

What happens when an upgrade fails

In the case of a software upgrade failure, you can pause the upgrade.

Attention: You should only pause an upgrade with Ctrl-C. This allows the system to clean itself up.

When sfinstall waits for cluster faults to clear and if any failure causes the faults to remain, sfinstall will not proceed to the next node.

• You should stop sfinstall with Ctrl+C.

• Contact NetApp Support to assist with the failure investigation.


• Resume the upgrade with the same sfinstall command.

When an upgrade is paused using Ctrl+C, if the upgrade is currently upgrading a node, you will see alist of options:

Wait

Allow the currently upgrading node to finish before resetting the cluster constants.

Continue

Continue the upgrade, which cancels the pause.

Abort

Reset the cluster constants and abort the upgrade immediately.

Note: Aborting the cluster upgrade while a node is being updated might result in the drives being ungracefully removed from the node. If the drives are ungracefully removed, adding the drives back during an upgrade will require manual intervention by NetApp Support. The node might be taking longer to do firmware updates or post-update syncing activities. If the upgrade progress seems stalled, please contact NetApp Support for assistance.

Upgrading Element software at dark sites

You must use the HealthTools suite of tools to update NetApp Element software at a dark site.

Before you begin

Before you upgrade Element software, check upgrade path information for the Element version you are upgrading to and verify that the upgrade path is valid.

• The management node in your environment is running version 11.0, 11.1, or later.

• You have the latest version of HealthTools downloaded to the management node.

• Your management node is not connected to the Internet.

Steps

1. For NetApp HCI systems, go to the NetApp HCI software download page. For standalone SolidFire storage systems, go to the Element software download page.

2. Select the correct software release and download the latest storage node image to a computer that is not the management node.

3. Download a JSON file from the NetApp Support Site on a computer that is not the management node and rename it to metadata.json.

https://library.netapp.com/ecm/ecm_get_file/ECMLP2840740

4. Copy the ISO file to the management node in an accessible location like /tmp.

You can do this using, for example, SCP. When you upload the ISO file, make sure that the name of the file does not change, otherwise later steps will fail.

5. Run the sfupdate-healthtools command:

sfupdate-healthtools <path-to-healthtools-package>

6. Run the sfupdate-healthtools -v command to check the installed version:

sfupdate-healthtools -v


7. Run the sfupdate-healthtools -l command to check the latest version against the metadata JSON file:

sfupdate-healthtools -l --metadata=<path-to-metadata-json>

8. Run the sfupgradecheck command to ensure that the cluster is ready:

sudo sfupgradecheck -u <cluster_username> -p <cluster_password> MVIP --metadata=<path-to-metadata-json>

9. Run the sfinstall command with the path to the ISO file and the metadata json file:

sfinstall -u <cluster_username> -p <cluster_password> <MVIP> <path-to-install-file-ISO> --metadata=<path-to-metadata-json-file>

See the following sample input command:

sfinstall -u admin -p admin 10.117.78.244 /tmp/solidfire-rtfi-11.3.0.345.iso --metadata=/tmp/metadata.json

Note: (Optional) You can add the --stage flag to the sfinstall command to pre-stage the upgrade in advance. See Upgrading Element software on page 47 for details about staging.

Related tasks

Migrating from management node 10.x to version 11.x on page 40

Upgrading the management node to version 11.7 from version 11.5

You can perform an in-place upgrade of the management node from version 11.5 to version 11.7 without needing to provision a new management node virtual machine.

Before you begin

• Storage nodes are running Element 11.3 or later.

Note: Use the latest HealthTools to upgrade Element software.

• The management node you are intending to upgrade is version 11.5 and uses IPv4 networking. The management node version 11.7 does not support IPv6.

Tip: To check the version of your management node, log in to your management node and view the Element version number in the login banner.

• You have updated your management services bundle to the latest version (2.1.368 or later) using one of these options:

◦ Hybrid Cloud Control (HCC): https://<ManagementNodeIP>

Note: HCC is available with management services bundle 2.1.326. For NetApp HCI, the HCC option is only available if you have performed the one-time upgrade from NetApp Deployment Engine: https://<StorageNodeMIP>:442/nde

◦ Management node API: https://<ManagementNodeIP>/mnode

Note: After authenticating, use the mNode API PUT /services/update/latest to update services.

• You have configured an additional network adapter (if required) using the instructions for configuring a storage NIC (eth1) in the management node user guide for your product.


Note: Persistent volumes might require an additional network adapter if eth0 is not able to be routed to the SVIP. Configure a new network adapter on the iSCSI storage network to allow the configuration of persistent volumes.

• You have logged in to the management node virtual machine using SSH or console access.

• You have downloaded the management node ISO for NetApp HCI or Element software from the NetApp Support Site to the management node virtual machine.

Note: The name of the ISO is similar to solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso

• You have checked the integrity of the download by running md5sum on the downloaded file and compared the output to what is available on the NetApp Support Site for NetApp HCI or Element software, as in the following example:

sudo md5sum -b <path to iso>/solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso

Steps

1. Mount the management node ISO image and copy the contents to the file system using the following commands:

sudo mkdir -p /upgrade

sudo mount <solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso> /mnt

sudo cp -r /mnt/* /upgrade

2. Change to the home directory, and unmount the ISO file from /mnt:

sudo umount /mnt

3. Delete the ISO to conserve space on the management node:

sudo rm <path to iso>/solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso

4. On an 11.5 (11.5.0.63) management node, run the following command to upgrade your management node OS version. The script retains all necessary configuration files after the upgrade, such as Active IQ collector and proxy settings.

sudo /sf/rtfi/bin/sfrtfi_inplace file:///upgrade/casper/filesystem.squashfs sf_upgrade=1

The management node reboots with a new OS after the upgrade process completes.

5. On the 11.7 management node, run the redeploy-mnode script to retain previous management services configuration settings:


Note: The script retains previous management services configuration, including configuration from the Active IQ collector service, controllers (vCenters), or proxy, depending on your settings.

sudo /sf/packages/mnode/redeploy-mnode -mu <mnode user>

Upgrading the management node to version 11.7 from version 11.3

You can perform an in-place upgrade of the management node from version 11.3 to version 11.7 without needing to provision a new management node virtual machine.

Before you begin

• Storage nodes are running Element 11.3 or later.

Note: Use the latest HealthTools to upgrade Element software.

• The management node you are intending to upgrade is version 11.3 and uses IPv4 networking. The management node version 11.7 does not support IPv6.

Tip: To check the version of your management node, log in to your management node and view the Element version number in the login banner.

• You have updated your management services bundle to the latest version (2.1.368 or later) using one of these options:

◦ Hybrid Cloud Control (HCC): https://<ManagementNodeIP>

Note: HCC is available with management services bundle 2.1.326. For NetApp HCI, the HCC option is only available if you have performed the one-time upgrade from NetApp Deployment Engine: https://<StorageNodeMIP>:442/nde

◦ Management node API: https://<ManagementNodeIP>/mnode

Note: After authenticating, use the mNode API PUT /services/update/latest to update services.

• You have configured an additional network adapter (if required) using the instructions for configuring a storage NIC (eth1) in the management node user guide for your product.

Note: Persistent volumes might require an additional network adapter if eth0 is not able to be routed to the SVIP. Configure a new network adapter on the iSCSI storage network to allow the configuration of persistent volumes.

• You have logged in to the management node virtual machine using SSH or console access.

• You have downloaded the management node ISO for NetApp HCI or Element software from the NetApp Support Site to the management node virtual machine.

Note: The name of the ISO is similar to solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso


• You have checked the integrity of the download by running md5sum on the downloaded file and compared the output to what is available on the NetApp Support Site for NetApp HCI or Element software, as in the following example:

sudo md5sum -b <path to iso>/solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso

Steps

1. Mount the management node ISO image and copy the contents to the file system using the following commands:

sudo mkdir -p /upgrade

sudo mount <solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso> /mnt

sudo cp -r /mnt/* /upgrade

2. Change to the home directory, and unmount the ISO file from /mnt:

sudo umount /mnt

3. Delete the ISO to conserve space on the management node:

sudo rm <path to iso>/solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso

4. On an 11.3 (11.3.0.14235) management node, run the following command to upgrade your management node OS version. The script retains all necessary configuration files after the upgrade, such as Active IQ collector and proxy settings.

sudo /sf/rtfi/bin/sfrtfi_inplace file:///upgrade/casper/filesystem.squashfs sf_upgrade=1

The management node reboots with a new OS after the upgrade process completes.

5. On the 11.7 management node, run the redeploy-mnode script to retain previous management services configuration settings:

Note: The script retains previous management services configuration, including configuration from the Active IQ collector service, controllers (vCenters), or proxy, depending on your settings.

sudo /sf/packages/mnode/redeploy-mnode -mu <mnode user>

Related tasks

Updating management services with NetApp Hybrid Cloud Control on page 58

Upgrading Element software on page 47

Migrating from management node 10.x to version 11.x on page 40


Upgrading the management node to version 11.7 from version 11.0 or 11.1

You can perform an in-place upgrade of the management node from version 11.0 or 11.1 to version 11.7 without needing to provision a new management node virtual machine.

Before you begin

• Storage nodes are running Element 11.3 or later.

Note: Use the latest HealthTools to upgrade Element software.

• The management node you are intending to upgrade is version 11.0 or 11.1 and uses IPv4 networking. The management node version 11.7 does not support IPv6.

Tip: To check the version of your management node, log in to your management node and view the Element version number in the login banner.

Note: For management node 11.0, the VM memory needs to be manually increased to 12GB.

• You have configured an additional network adapter (if required) using the instructions for configuring a storage NIC (eth1) in the management node user guide for your product.

Note: Persistent volumes might require an additional network adapter if eth0 is not able to be routed to the SVIP. Configure a new network adapter on the iSCSI storage network to allow the configuration of persistent volumes.

• You have logged in to the management node virtual machine using SSH or console access.

• You have downloaded the management node ISO for NetApp HCI or Element software from the NetApp Support Site to the management node virtual machine.

Note: The name of the ISO is similar to solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso

• You have checked the integrity of the download by running md5sum on the downloaded file and compared the output to what is available on the NetApp Support Site for NetApp HCI or Element software, as in the following example:

sudo md5sum -b <path to iso>/solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso

Steps

1. Mount the management node ISO image and copy the contents to the file system using the following commands:

sudo mkdir -p /upgrade

sudo mount solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso /mnt

sudo cp -r /mnt/* /upgrade


2. Change to the home directory, and unmount the ISO file from /mnt:

sudo umount /mnt

3. Delete the ISO to conserve space on the management node:

sudo rm <path to iso>/solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso

4. Run one of the following scripts with options to upgrade your management node OS version. Only run the script that is appropriate for your version. Each script retains all necessary configuration files after the upgrade, such as Active IQ collector and proxy settings.

• On an 11.1 (11.1.0.73) management node, run the following command:

sudo /sf/rtfi/bin/sfrtfi_inplace file:///upgrade/casper/filesystem.squashfs sf_upgrade=1 sf_keep_paths="/sf/packages/solidfire-sioc-4.2.3.2288 /sf/packages/solidfire-nma-1.4.10/conf /sf/packages/sioc /sf/packages/nma"

• On an 11.1 (11.1.0.72) management node, run the following command:

sudo /sf/rtfi/bin/sfrtfi_inplace file:///upgrade/casper/filesystem.squashfs sf_upgrade=1 sf_keep_paths="/sf/packages/solidfire-sioc-4.2.1.2281 /sf/packages/solidfire-nma-1.4.10/conf /sf/packages/sioc /sf/packages/nma"

• On an 11.0 (11.0.0.781) management node, run the following command:

sudo /sf/rtfi/bin/sfrtfi_inplace file:///upgrade/casper/filesystem.squashfs sf_upgrade=1 sf_keep_paths="/sf/packages/solidfire-sioc-4.2.0.2253 /sf/packages/solidfire-nma-1.4.8/conf /sf/packages/sioc /sf/packages/nma"

The management node reboots with a new OS after the upgrade process completes.

5. On the 11.7 management node, run the upgrade-mnode script to retain previous configuration settings.

Note: If you are migrating from an 11.0 or 11.1 management node, the script copies the Active IQ collector to the new configuration format.

• For a single storage cluster managed by an existing management node 11.0 or 11.1 with persistent volumes:

sudo /sf/packages/mnode/upgrade-mnode -mu <mnode user> -pv <true - persistent volume> -pva <persistent volume account name - storage volume account>

• For a single storage cluster managed by an existing management node 11.0 or 11.1 with no persistent volumes:

sudo /sf/packages/mnode/upgrade-mnode -mu <mnode user>


• For multiple storage clusters managed by an existing management node 11.0 or 11.1 with persistent volumes:

sudo /sf/packages/mnode/upgrade-mnode -mu <mnode user> -pv <true - persistent volume> -pva <persistent volume account name - storage volume account> -pvm <persistent volumes mvip>

• For multiple storage clusters managed by an existing management node 11.0 or 11.1 with no persistent volumes (the -pvm flag is used to provide one of the cluster's MVIP addresses):

sudo /sf/packages/mnode/upgrade-mnode -mu <mnode user> -pvm <mvip for persistent volumes>

6. (For all NetApp HCI installations and SolidFire stand-alone storage installations with NetApp Element Plug-in for vCenter Server) Update the vCenter Plug-in on the 11.7 management node:

a. Log out of the vSphere Web Client.

Note: The web client will not recognize updates made during this process to your vCenter Plug-in if you do not log out.

b. Browse to the registration utility (https://<ManagementNodeIP>:9443).

c. Click the vCenter Plug-in Registration tab.

d. Within Manage vCenter Plug-in, select Update Plug-in.

e. Update the vCenter address, vCenter administrator user name, and vCenter administratorpassword.

f. Click Update.

g. Log in to the vSphere Web Client and verify that the plug-in information has been updated by browsing to Home > NetApp Element Configuration > About.

Note: Logging in to the vSphere Web Client after updating the registration installs the new plug-in updates and creates a new database.

You should see the following version details or details of a more recent version:

• NetApp Element Plug-in Version: 4.3.0

• NetApp Element Plug-in Build Number: 233

7. Use the mNode API to add assets:

a. Using a browser, go to the storage MVIP and log in.

This action causes the certificate to be accepted for the next step.

b. Using a browser, go to https://<ManagementNodeIP>/mnode.

c. Add a vCenter controller asset to the management node known assets for HCI monitoring (NetApp HCI installations only) and Hybrid Cloud Control (HCC):

d. Click Authorize and enter your MVIP user name and password credentials. Close the pop-up window.

e. Run GET /assets to pull the base asset ID needed to add the vCenter/controller asset.

f. Run POST /assets/{ASSET_ID}/controllers to add a controller asset with vCenter credentials.


g. For NetApp HCI or to access cloud services options in HCC, add a compute asset to the management node known assets:

Attention: You must perform this step or Cloud Services options will not be available from HCC.

h. Click Authorize and enter your MVIP user name and password credentials. Close the pop-up window.

i. Run GET /assets to pull the base asset ID needed to add the compute asset.

j. Run POST /assets/{asset_id}/compute-nodes to add a compute asset with credentials for the compute asset. The type is ESXi Host.

Updating management services with NetApp Hybrid Cloud Control

Using NetApp Hybrid Cloud Control, you can update your NetApp management services. Management service bundles provide functionality and fixes to your installation outside of major releases.

Before you begin

• You are running management node 11.3 or later.

• You have upgraded your management services to at least version 2.1.326. NetApp Hybrid Cloud Control upgrades are not available in earlier service bundle versions.

About this task

For a list of available services for each service bundle version, see the Management Services Release Notes.

Steps

1. Open a web browser and browse to the IP address of the management node:

https://<ManagementNodeIP>

2. Log in to NetApp Hybrid Cloud Control by providing the storage cluster administrator credentials.

3. Click Upgrade near the top right of the interface.

4. On the Upgrades page, select the Management Services tab.

The Management Services tab shows the current and available versions of management services software.

Note: If your installation cannot access the internet, only the current software version is shown.

5. Do one of the following:

• Your installation can access the internet: If a management services upgrade is available, click Begin Upgrade.

• Your installation cannot access the internet:

a. Follow the instructions on the page to download and save a management services upgrade package on your computer.

b. Click Browse to locate the package you saved and upload it.

The upgrade begins, and you can see the upgrade status on this page.

Related information

Working with the management node

KB: Management services release notes

Updating compute node firmware

For any H-Series compute node, you can update the firmware for hardware components such as the BMC, BIOS, and NIC using the RTFI image while leaving the ESXi installation and other configuration data in place. After the update, the compute node boots into ESXi and works as before, retaining the configuration.

Before you begin

See the firmware and driver matrix for your hardware in NetApp KB article 1088658 (login required).

About this task

In production environments, only update the firmware on one compute node at a time.

As an alternative to using the USB thumb drive method described in this procedure, you can mount the compute node RTFI image on the compute node using the Virtual CD/DVD option in the Virtual Console in the Baseboard Management Controller (BMC) interface. The BMC method takes considerably longer than the USB thumb drive method. Ensure your workstation or server has the necessary network bandwidth and that your browser session with the BMC does not time out.

Steps

1. Browse to the NetApp HCI software downloads page and click the download link for the correct version of NetApp HCI.

2. Accept the End User License Agreement.

3. Under the Compute and Storage Nodes heading, download the NetApp HCI compute node image.

4. Write the raw contents of the compute node RTFI image to a USB thumb drive with at least 32 GB capacity (using dd or Etcher).
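Note: If you use the dd method on a Linux workstation, the command is similar to the following minimal sketch. The image file name and the USB device path (/dev/sdX) are placeholders for your environment; double-check the device path before running the command, because writing to the wrong device destroys its contents.

sudo dd if=<compute-node-RTFI-image> of=/dev/sdX bs=1M status=progress && sync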

5. Place the compute node in maintenance mode using VMware vCenter, and evacuate all virtual machines from the host.

Note: If VMware Distributed Resource Scheduler (DRS) is enabled on the cluster (this is the default in NetApp HCI installations), virtual machines will automatically be migrated to other nodes in the cluster.

6. Insert the USB thumb drive into a USB port on the compute node and reboot the compute node using VMware vCenter.


7. During the compute node POST cycle, press F11 to open the Boot Manager. You may need to press F11 multiple times in quick succession. You can perform this operation by connecting a video/keyboard or by using the console in the BMC.

8. Select One Shot > USB Flash Drive from the menu that appears. If the USB thumb drive does not appear in the menu, verify that USB Flash Drive is part of the legacy boot order in the BIOS of the system.

9. Press Enter to boot the system from the USB thumb drive. The firmware flash process begins.

After firmware flashing is complete and the node reboots, it might take a few minutes for ESXi to start.

10. After the reboot is complete, exit maintenance mode on the updated compute node using vCenter.

11. Remove the USB flash drive from the updated compute node.

12. Repeat this task for other compute nodes in your ESXi cluster until all compute nodes are updated.

Updating compute node drivers

For any H-Series compute node, you can update the drivers used on the nodes using VMware Update Manager.

Before you begin

See the firmware and driver matrix for your hardware in NetApp KB article 1088658 (login required).

About this task

Only perform one of these update operations at a time.

Steps

1. Browse to the NetApp HCI software downloads page and click the download link for the correct version of NetApp HCI.

2. Accept the End User License Agreement.

3. Under the Driver Packages for VMware ESXi heading, download the driver package for your node type and ESXi version.

4. Extract the downloaded driver bundle on your local computer.

Note: The NetApp driver bundle includes one or more VMware Offline Bundle ZIP files; do not extract these ZIP files.

5. After upgrading the firmware on the compute nodes, go to VMware Update Manager in vCenter.

6. Import the driver offline bundle file for the compute nodes into the Patch Repository.

7. Create a new host baseline for the compute node.

8. Choose Host Extension for Name and Type and select all imported driver packages to be included in the new baseline.

9. In the Hosts and Clusters menu in vCenter, select the cluster with the compute nodes you would like to update and navigate to the Update Manager tab.


10. Click Remediate and then select the newly created host baseline. Ensure that the drivers included in the baseline are selected.

11. Proceed through the wizard to the Host Remediation Options and ensure that the Do Not Change VM Power State option is selected to keep virtual machines online during the driver update.

Note: If VMware Distributed Resource Scheduler (DRS) is enabled on the cluster (this is the default in NetApp HCI installations), virtual machines will automatically be migrated to other nodes in the cluster.

12. Proceed to the Ready to Complete page in the wizard and click Finish.

The drivers for all compute nodes in the cluster are updated one node at a time while virtual machines stay online.


Enabling cloud services on NetApp HCI

THIS DOCUMENTATION IS PROVIDED AS A TECHNOLOGY PREVIEW. With NetApp HCI, a hybrid cloud infrastructure that serves as a deployment engine supporting NetApp Kubernetes Service, you can set up a cloud native development pipeline quickly and easily. NetApp HCI along with NetApp Kubernetes Service enables you to deploy your applications as you would with a public cloud provider—all in a self-service mode without IT involvement. This enables you to treat your datacenter as a deployable region of your multi-cloud.

About this task

Some companies do not want to or cannot store and process their data in a public cloud. They want to store their data in a private cloud.

Cloud services on NetApp HCI enables you to:

• Deploy applications across public cloud providers and your private cloud.

• Interact with NetApp HCI as you would with any other supported cloud provider.

• Port the apps and data across a hybrid cloud architecture regardless of cloud location without compromising service-level agreements.

• Spin up cloud-native applications yourself without involving IT.

See NetApp HCI cloud services information.

NetApp Documentation: NetApp HCI Services


Expanding your NetApp HCI installation

After deployment, you can expand NetApp HCI storage and compute resources either separately or at the same time. After installing the node in the NetApp HCI chassis, you use NetApp Hybrid Cloud Control to configure NetApp HCI to utilize the new resources. NetApp HCI detects the existing network configuration and offers you configuration options within the existing networks and VLANs, if any.

Note that NetApp HCI uses VMware Enhanced vMotion Compatibility (EVC) to ensure vMotion functionality when there are compute nodes with different CPU generations in the vSphere cluster. When EVC is required for expansion, NetApp HCI enables it automatically whenever possible. In the following situations, you might need to manually change EVC settings in the vSphere client to complete expansion:

• The existing compute nodes have a newer CPU generation than the compute nodes you are trying to add.

• The controlling vCenter instance does not support the required EVC level.

• The compute nodes you are trying to add have an older CPU generation than the EVC setting of the controlling vCenter instance.

Note: When expanding NetApp HCI compute or storage resources in the NetApp Deployment Engine, you should connect to the vCenter instance that manages your existing NetApp HCI compute nodes.

Expanding NetApp HCI storage resources

After deployment, you can expand and configure NetApp HCI storage resources using NetApp Hybrid Cloud Control.

Before you begin

• You have free and unused IPv4 addresses on the same network segment as existing nodes (each new node must be installed on the same network as existing nodes of its type).

• You have one of the following types of SolidFire storage cluster accounts:

◦ The native Administrator account that was created during initial deployment

◦ A custom user account with Cluster Admin, Drives, Volumes, and Nodes permissions

• You have performed the following actions with each new node:

◦ Installed the new node in the NetApp HCI chassis by following the installation and setup instructions available in the NetApp HCI Documentation Center

◦ Cabled and powered on the new node

• You have the management IPv4 address of an already installed storage node. You can find the IP address in the NetApp Element Management > Cluster > Nodes tab of the NetApp Element Plug-in for vCenter Server.

• Each new node uses the same network topology and cabling as the existing storage or compute clusters.

Tip: When expanding storage resources, storage capacity should be split evenly across all chassis for the best reliability.


Steps

1. Open a web browser and browse to the IP address of the management node. For example:

https://<ManagementNodeIP>

2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials.

3. Click Expand at the top right corner of the interface.

The browser opens the NetApp Deployment Engine.

4. Log in to the NetApp Deployment Engine by providing the NetApp HCI storage clusteradministrator credentials.

5. On the Welcome page, select No.

6. Click Continue.

7. On the Available Inventory page, select the storage nodes you want to add to the existing NetApp HCI installation.

8. Click Continue.

9. On the Network Settings page, some of the network information has been detected from the initial deployment. Each new storage node is listed by serial number, and you need to assign new network information to it. For each new storage node, perform the following steps:

a. If NetApp HCI detected a naming prefix, copy it from the Detected Naming Prefix field, and insert it as the prefix for the new unique hostname you add in the Hostname field.

b. In the Management IP Address field, enter a management IP address for the new storage node that is within the management network subnet.

c. In the Storage (iSCSI) IP Address field, enter an iSCSI IP address for the new storage node that is within the iSCSI network subnet.

d. Click Continue.

Note: NetApp HCI might take some time to validate the IP addresses you enter. The Continue button becomes available when IP address validation is complete.

10. On the Review page in the Network Settings section, new nodes are shown in bold text. If you need to make changes to information in any section, perform the following steps:

a. Click Edit for that section.

b. When finished making changes, click Continue on any subsequent pages to return to the Review page.

11. Optional: If you do not want to send cluster statistics and support information to NetApp-hosted Active IQ servers, clear the final checkbox.

This disables real-time health and diagnostic monitoring for NetApp HCI. Disabling this feature removes the ability for NetApp to proactively support and monitor NetApp HCI to detect and resolve problems before production is affected.

12. Click Add Nodes.

You can monitor the progress while NetApp HCI adds and configures the resources.

13. Optional: Verify that any new storage nodes are visible in the VMware vSphere Web Client.


Expanding NetApp HCI compute resources

After deployment, you can expand and configure NetApp HCI compute resources using NetApp Hybrid Cloud Control.

Before you begin

• Ensure that the vSphere instance NetApp HCI is using has vSphere Enterprise Plus licensing if you are expanding a deployment with Virtual Distributed Switches.

• Ensure that none of the vCenter or vSphere instances in use with NetApp HCI have expired licenses.

• Ensure that you have free and unused IPv4 addresses on the same network segment as existing nodes (each new node must be installed on the same network as existing nodes of its type).

• Ensure that you have the vCenter administrator account credentials ready.

• Ensure that you have performed the following actions with each new node:

◦ Installed the new node in the NetApp HCI chassis by following the installation and setup instructions available in the NetApp HCI Documentation Center

◦ Cabled and powered on the new node

• Ensure that each new node uses the same network topology and cabling as the existing storage or compute clusters.

Steps

1. Open a web browser and browse to the IP address of the management node. For example:

https://<ManagementNodeIP>

2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials.

3. In the Expand Installation pane, click Expand.

The browser opens the NetApp Deployment Engine.

4. Log in to the NetApp Deployment Engine by providing the NetApp HCI storage cluster administrator credentials.

5. On the Welcome page, select Yes.

6. On the End User License page, perform the following actions:

a. Read the VMware End User License Agreement.

b. If you accept the terms, click I accept at the end of the agreement text.

7. Click Continue.

8. On the vCenter page, perform the following steps:

a. Enter an FQDN or IP address and administrator credentials for the vCenter instance associated with your NetApp HCI installation.

b. Click Continue.


c. Select an existing vSphere datacenter to which to add the new compute nodes, or select Create New Datacenter to add the new compute nodes to a new datacenter.

Note: If you select Create New Datacenter, the Cluster field is automatically populated.

d. If you selected an existing datacenter, select a vSphere cluster with which the new compute nodes should be associated.

Note:

If NetApp HCI cannot recognize the network settings of the cluster you have selected for expansion, ensure that the vmkernel and vmnic mapping for the management, storage, and vMotion networks are set to the deployment defaults. See Supported networking changes for more information.

e. Click Continue.

9. On the ESXi Credentials page, enter an ESXi root password for the compute node or nodes you are adding.

You should use the same password that was created during the initial NetApp HCI deployment.

10. Click Continue.

11. If you created a new vSphere datacenter cluster, on the Network Topology page, select a network topology to match the new compute nodes you are adding.

Note: You can only select the two-cable option if your compute nodes are using the two-cable topology and the existing NetApp HCI deployment is configured with VLAN IDs.

12. On the Available Inventory page, select the nodes you want to add to the existing NetApp HCI installation.

Important: For some compute nodes, you might need to enable EVC at the highest level your vCenter version supports before you can add them to your installation. You need to use the vSphere client to enable EVC for these compute nodes. After you enable it, refresh the Inventory page and try adding the compute nodes again.

13. Click Continue.

14. Optional: If you created a new vSphere datacenter cluster, on the Network Settings page, import network information from an existing NetApp HCI deployment by selecting the Copy Setting from an Existing Cluster checkbox.

This populates the default gateway and subnet information for each network.

15. On the Network Settings page, some of the network information has been detected from the initial deployment. Each new compute node is listed by serial number, and you need to assign new network information to it. For each new compute node, perform the following steps:

a. If NetApp HCI detected a naming prefix, copy it from the Detected Naming Prefix field, and insert it as the prefix for the new unique hostname you add in the Hostname field.

b. In the Management IP Address field, enter a management IP address for the compute node that is within the management network subnet.

c. In the vMotion IP Address field, enter a vMotion IP address for the compute node that is within the vMotion network subnet.

d. In the iSCSI A - IP Address field, enter an IP address for the first iSCSI port of the compute node that is within the iSCSI network subnet.


e. In the iSCSI B - IP Address field, enter an IP address for the second iSCSI port of the compute node that is within the iSCSI network subnet.

16. Click Continue.

17. On the Review page in the Network Settings section, new nodes are shown in bold text. If you need to make changes to information in any section, perform the following steps:

a. Click Edit for that section.

b. When finished making changes, click Continue on any subsequent pages to return to the Review page.

18. Optional: If you do not want to send cluster statistics and support information to NetApp-hosted SolidFire Active IQ servers, clear the final checkbox.

This disables real-time health and diagnostic monitoring for NetApp HCI. Disabling this feature removes the ability for NetApp to proactively support and monitor NetApp HCI to detect and resolve problems before production is affected.

19. Click Add Nodes.

You can monitor the progress while NetApp HCI adds and configures the resources.

20. Optional: Verify that any new compute nodes are visible in vCenter.

Related references

Supported networking changes on page 36

Related information

NetApp HCI Compute and Storage Nodes Installation and Setup Instructions

VMware Knowledge Base: Enhanced vMotion Compatibility (EVC) processor support

Expanding NetApp HCI storage and compute resources at the same time

After deployment, you can expand and configure NetApp HCI compute and storage resources at the same time using NetApp Hybrid Cloud Control.

Before you begin

• Ensure that the vSphere instance NetApp HCI is using has vSphere Enterprise Plus licensing if you are expanding a deployment with Virtual Distributed Switches.

• Ensure that none of the vCenter or vSphere instances in use with NetApp HCI have expired licenses.

• Ensure that you have free and unused IPv4 addresses on the same network segment as existing nodes (each new node must be installed on the same network as existing nodes of its type).

• Ensure that you have the vCenter administrator account credentials ready.

• Ensure that you have one of the following types of SolidFire storage cluster accounts:

◦ The native Administrator account that was created during initial deployment

◦ A custom user account with Cluster Admin, Drives, Volumes, and Nodes permissions

• Ensure that you have performed the following actions with each new node:


◦ Installed the new node in the NetApp HCI chassis by following the installation and setup instructions available in the NetApp HCI Documentation Center

◦ Cabled and powered on the new node

• Ensure that each new node uses the same network topology and cabling as the existing storage or compute clusters.

About this task

Note:

• You can intermix the H410C compute node with existing NetApp HCI compute and storage nodes in the same chassis and cluster.

• You cannot intermix compute nodes and GPU-enabled compute nodes in the same cluster. If you select a GPU-enabled compute node, CPU-only compute nodes become unselectable, and vice versa.

• If you are adding compute nodes with CPU generations that are different than the CPU generation of the existing compute nodes and Enhanced vMotion Compatibility (EVC) is disabled on the controlling vCenter instance, you must enable EVC before proceeding. This ensures vMotion functionality after expansion is complete.

Steps

1. Open a web browser and browse to the IP address of the management node. For example:

https://<ManagementNodeIP>

2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials.

3. In the Expand Installation pane, click Expand.

The browser opens the NetApp Deployment Engine.

4. Log in to the NetApp Deployment Engine by providing the NetApp HCI storage cluster administrator credentials.

5. On the Welcome page, select Yes.

6. On the End User License page, perform the following actions:

a. Read the VMware End User License Agreement.

b. If you accept the terms, click I accept at the end of the agreement text.

7. Click Continue.

8. On the vCenter page, perform the following steps:

a. Enter an FQDN or IP address and administrator credentials for the vCenter instance associated with your NetApp HCI installation.

b. Click Continue.

c. Select an existing vSphere datacenter to which to add the new compute nodes, or select Create New Datacenter to add the new compute nodes to a new datacenter.

Note: If you select Create New Datacenter, the Cluster field is automatically populated.


d. If you selected an existing datacenter, select a vSphere cluster with which the new compute nodes should be associated.

Note:

If NetApp HCI cannot recognize the network settings of the cluster you have selected for expansion, ensure that the vmkernel and vmnic mapping for the management, storage, and vMotion networks are set to the deployment defaults. See Supported networking changes for more information.

e. Click Continue.

9. On the ESXi Credentials page, enter an ESXi root password for the compute node or nodes you are adding.

You should use the same password that was created during the initial NetApp HCI deployment.

10. Click Continue.

11. If you created a new vSphere datacenter cluster, on the Network Topology page, select a network topology to match the new compute nodes you are adding.

Note: You can only select the two-cable option if your compute nodes are using the two-cable topology and the existing NetApp HCI deployment is configured with VLAN IDs.

12. On the Available Inventory page, select the storage and compute nodes you want to add to the existing NetApp HCI installation.

Important: For some compute nodes, you might need to enable EVC at the highest level your vCenter version supports before you can add them to your installation. You need to use the vSphere client to enable EVC for these compute nodes. After you enable it, refresh the Inventory page and try adding the compute nodes again.

13. Click Continue.

14. Optional: If you created a new vSphere datacenter cluster, on the Network Settings page, import network information from an existing NetApp HCI deployment by selecting the Copy Setting from an Existing Cluster checkbox.

This populates the default gateway and subnet information for each network.

15. On the Network Settings page, some of the network information has been detected from the initial deployment. Each new node is listed by serial number, and you need to assign new network information to it. For each node listed, perform the following steps:

a. If NetApp HCI detected a naming prefix, copy it from the Detected Naming Prefix field, and insert it as the prefix for the new unique hostname you add in the Hostname field.

b. In the Management IP Address field, enter a management IP address for the node that is within the management network subnet.

c. In the vMotion IP Address field, enter a vMotion IP address for the node that is within the vMotion network subnet.

d. In the iSCSI A - IP Address field, enter an IP address for the first iSCSI port of the node that is within the iSCSI network subnet.

e. In the iSCSI B - IP Address field, enter an IP address for the second iSCSI port of the node that is within the iSCSI network subnet.

16. Click Continue.


17. On the Review page in the Network Settings section, new nodes are shown in bold text. If you need to make changes to information in any section, perform the following steps:

a. Click Edit for that section.

b. When finished making changes, click Continue on any subsequent pages to return to the Review page.

18. Optional: If you do not want to send cluster statistics and support information to NetApp-hosted SolidFire Active IQ servers, clear the final checkbox.

This disables real-time health and diagnostic monitoring for NetApp HCI. Disabling this feature removes the ability for NetApp to proactively support and monitor NetApp HCI to detect and resolve problems before production is affected.

19. Click Add Nodes.

You can monitor the progress while NetApp HCI adds and configures the resources.

20. Optional: Verify that the new nodes are visible in vCenter (for compute nodes) or the Element Plug-in for vCenter Server (for storage nodes).

Related references

Supported networking changes on page 36

Related information

NetApp HCI Compute and Storage Nodes Installation and Setup Instructions

VMware Knowledge Base: Enhanced vMotion Compatibility (EVC) processor support


Monitoring your NetApp HCI installation

By default, NetApp HCI sends performance and alert statistics to the NetApp SolidFire Active IQ service. As part of your normal support contract, NetApp Support monitors this data and alerts you to any performance bottlenecks or potential system issues. You need to create a NetApp Support account if you do not already have one (even if you have an existing NetApp SolidFire Active IQ account) so that you can take advantage of this service.

Steps

1. Open a web browser and browse to the IP address of the management node. For example:

https://<ManagementNodeIP>

2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials.

3. Select View Active IQ from the upper right menu icon.

The browser opens the NetApp SolidFire Active IQ interface at https://activeiq.solidfire.com.

4. Do one of the following:

• You do not have a NetApp Support account (or you currently have an existing NetApp SolidFire Active IQ account):

a. Click Sign up now. The NetApp Support account registration pages appear.

b. Follow the instructions on these pages to register for a NetApp Support account.

c. After your account is activated, reopen the https://activeiq.solidfire.com login page and proceed with the steps in the following option.

• You have an existing NetApp Support account (or have just created an account):

a. On the login page that appears, log in with your NetApp Support credentials.

b. When you are logged in, the Dashboard appears and you can monitor your NetApp HCI installation usage statistics.

c. See the online help documentation within the NetApp SolidFire Active IQ interface.

5. After you log in and can see the Active IQ interface, verify that the NetApp HCI compute and storage nodes are reporting telemetry correctly to Active IQ:

a. If you have more than one NetApp HCI installation, click Select a Cluster and choose the correct cluster from the list.

b. In the left-hand navigation pane, click Nodes.

c. Click Storage or Compute to see a list of all storage or compute nodes that are properly reporting data to Active IQ from this installation.

6. If a node or nodes are missing from the list, contact NetApp Support.


Powering your NetApp HCI system off or on

You can power off or power on your NetApp HCI system if you have a scheduled outage, need to perform hardware maintenance, or need to expand the system. Use the following tasks to power off or power on your NetApp HCI system as required.

You might want to power off your HCI system under a number of different circumstances, such as:

• Scheduled outages

• Chassis fan replacements

• Chassis fan firmware upgrades

• Cluster expansion

The following is a high-level overview of the tasks you will perform to power off a NetApp HCI system:

• Power off all virtual machines except the VMware vCenter server (vCSA).

• Power off all ESXi servers except the one hosting the vCSA.

• Power off the vCSA.

• Power off the NetApp HCI storage system.

The following is a high-level overview of the tasks you will perform to power on a NetApp HCI system:

• Power on all physical storage nodes.

• Power on all physical compute nodes.

• Power on the vCSA.

• Verify the system and power on additional virtual machines.

Powering off compute resources for a NetApp HCI system

To power off NetApp HCI compute resources, you need to power off individual ESXi hosts as well as the vCenter Server Appliance in a certain order.

Steps

1. Log in to the vCenter instance controlling the NetApp HCI system and determine the ESXi machine hosting the vCenter Server Virtual Appliance (vCSA).

2. After you have determined the ESXi host running the vCSA, power down all virtual machines other than the vCSA as follows:

a. Select a virtual machine, right-click and select Power > Shut Down Guest OS.

3. Power off all ESXi hosts that are not the ESXi host running the vCSA.

4. Power off the vCSA. At this point, your vCenter session will fail because the vCSA disconnects during the power-off process.

5. At this point, all virtual machines should be shut down and only one ESXi host is still powered on. Log in to the ESXi host.


6. After you log in to the ESXi host, verify that all virtual machines on the host are powered off. If any of the virtual machines are powered on, ensure that you power them off.

7. After you power off all the virtual machines, shut down the ESXi host. This should disconnect the last of any iSCSI sessions open to the NetApp HCI storage cluster.

Powering off storage resources for a NetApp HCI system

When you power off storage resources for NetApp HCI, you need to use the Shutdown Element API method to properly halt the storage nodes.

Steps

After you power off the compute resources, you use a web browser to shut down all the nodes of the NetApp HCI storage cluster.

1. Log in to the storage cluster and verify that the iSCSI session count is zero. Also, verify that you are connected to the correct MVIP.

2. Navigate to Cluster > Nodes > Active, and record the node IDs for all of the active nodes in the cluster.

3. To power off the NetApp HCI storage cluster, open a web browser and use the following URL to invoke the power off and halt procedure, where MVIP is the management IP of the NetApp HCI storage system and nodes=[] includes the node IDs that you recorded in step 2.

https://{MVIP}/json-rpc/1.0?method=Shutdown&nodes=[1,2,3,4]&option=halt
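Note: As an alternative to a browser, you can issue the same request from a command line with curl. This is a minimal sketch that assumes a cluster administrator named admin; the -k option accepts the cluster's self-signed certificate, -g keeps curl from interpreting the brackets in the node list, and curl prompts for the password instead of requiring it on the command line:

curl -k -g -u admin 'https://<MVIP>/json-rpc/1.0?method=Shutdown&nodes=[1,2,3,4]&option=halt'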

4. Enter the cluster administrator user name and password.

5. Validate that the API call returned successfully. Ensure that all cluster nodes are listed under the "successful" section of the API result.

You have successfully powered off all the NetApp HCI components.

Powering on storage resources for a NetApp HCI system

You can power on NetApp HCI after the scheduled outage is complete.

Steps

1. Power on all the storage nodes using either the physical power button or the BMC.

2. If using the BMC, log in to each node and navigate to Remote Control > Power Control > Power On Server.

3. When all the storage nodes are online, log in to the NetApp HCI storage system and verify that all nodes are operational.


Powering on compute resources for a NetApp HCI system

You can power on compute resources for a NetApp HCI system after the scheduled outage is complete.

Steps

1. Power on compute nodes using the same steps you performed for powering on the storage nodes.

2. When all the compute nodes are operational, log in to the ESXi host that was running the vCSA.

3. Log in to the compute host and verify that it sees all the NetApp HCI datastores. For a typical NetApp HCI system, you should see all the ESXi local datastores and at least the following shared datastores: NetApp-HCI-Datastore-[01,02]
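Note: If the ESXi shell or SSH is enabled on the host, one way to confirm the mounted datastores is the following command, which lists every filesystem the host currently sees; this is an optional check in addition to the vSphere client view described above:

esxcli storage filesystem list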

4. Assuming all storage is accessible, power on the vCSA and any other required virtual machines as follows:

a. In the navigator, select all the virtual machines that you want to power on, and click the Power on button.

5. After you power on the virtual machines, wait for approximately 5 minutes and then use a web browser to navigate to the IP address or FQDN of the vCSA appliance. If you do not wait long enough, you will get a message stating that the vSphere Client web server is initializing.

6. After the vSphere Client initializes, log in and verify that all ESXi hosts and virtual machines are online.

After you complete these steps, NetApp HCI should be fully online.


Contacting NetApp Support

If you need help with or have questions or comments about NetApp products, contact NetApp Support.

• Web:

mysupport.netapp.com

• Phone:

◦ 888.4.NETAPP (888.463.8277) (US and Canada)

◦ 00.800.44.638277 (EMEA/Europe)

◦ +800.800.80.800 (Asia/Pacific)


Where to find product documentation and other information

You can learn more about using and managing NetApp HCI and SolidFire all-flash storage from the resources available in the Documentation Centers and Resources pages for both products.

In the Documentation Centers, you can also find information about hardware installation and maintenance, additional content resources available, links to known issues and resolved issues, and the latest release notes. On the Resources pages, you can find links to data sheets, technical reports, white papers, and videos.

• NetApp HCI Documentation Center

• NetApp HCI Resources page

• SolidFire and Element 11.7 Documentation Center

• SolidFire and Element 11.5 Documentation Center

• SolidFire and Element 11.3 Documentation Center

• SolidFire Resources page


Copyright

Copyright © 2020 NetApp, Inc. All rights reserved. Printed in the U.S.

No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.

Data contained herein pertains to a commercial item (as defined in FAR 2.101) and is proprietary to NetApp, Inc. The U.S. Government has a non-exclusive, non-transferrable, non-sublicensable, worldwide, limited irrevocable license to use the Data only in connection with and in support of the U.S. Government contract under which the Data was delivered. Except as provided herein, the Data may not be used, disclosed, reproduced, modified, performed, or displayed without the prior written approval of NetApp, Inc. United States Government license rights for the Department of Defense are limited to those rights identified in DFARS clause 252.227-7015(b).


Trademark

NETAPP, the NETAPP logo, and the marks listed on the NetApp Trademarks page are trademarks of NetApp, Inc. Other company and product names may be trademarks of their respective owners.

http://www.netapp.com/us/legal/netapptmlist.aspx


How to send comments about documentation and receive update notifications

You can help us to improve the quality of our documentation by sending us your feedback. You can receive automatic notification when production-level (GA/FCS) documentation is initially released or important changes are made to existing production-level documents.

If you have suggestions for improving this document, send us your comments by email.

[email protected]

To help us direct your comments to the correct division, include in the subject line the product name, version, and operating system.

If you want to be notified automatically when production-level documentation is released or important changes are made to existing production-level documents, follow Twitter account @NetAppDoc.

You can also contact us in the following ways:

• NetApp, Inc., 1395 Crossman Ave., Sunnyvale, CA 94089 U.S.

• Telephone: +1 (408) 822-6000

• Fax: +1 (408) 822-4501

• Support telephone: +1 (888) 463-8277
