

Technical Report

NetApp Clustered Data ONTAP 8.3.x and 8.2.x An Introduction

Jay Goldfinch, NetApp

November 2015 | TR-3982

Abstract

This technical report is an introduction to the architecture and key customer benefits of the NetApp® clustered Data ONTAP® 8.3.x and 8.2.x operating system.


TABLE OF CONTENTS

NetApp Clustered Data ONTAP: Overview

Physical Cluster Components
  Nodes
  HA pairs
  Drives, RAID groups, and Aggregates
  Network Ports
  Clusters

Logical Cluster Components
  Storage Virtual Machines
  Logical Interfaces (LIFs)
  Flexible Volumes
  LUNs
  NAS
  SAN

Key Features
  Manageability
  Multiprotocol Unified Architecture
  Storage Efficiency
  Data Protection and Business Continuity
  Storage QoS
  Infinite Volume
  Intelligent Scale-Out Storage
  Nondisruptive Operations

Summary

Resources

LIST OF FIGURES

Figure 1) A Data ONTAP cluster consisting of FAS controllers in a mix of all-flash, hybrid, and capacity configurations. A dedicated, redundant 10 Gigabit Ethernet interconnect (top) connects the controllers.

Figure 2) A small Data ONTAP cluster with three storage virtual machines (SVMs). Clients and hosts connect to a storage virtual machine, rather than directly to the storage arrays that host the SVMs. Each SVM has its own volumes, LUNs, network connectivity (LIFs), and authentication.

Figure 3) A six-node Data ONTAP cluster with one storage virtual machine. Clients and hosts can access data on any node from any logical interface. Flexible volumes, LUNs, and LIFs can move nondisruptively, so the SVM can grow as the cluster scales out.

Figure 4) A single-node cluster consisting of a FAS8000 series controller running clustered Data ONTAP. This FAS8000 controller has one shelf of solid-state disks and another shelf of capacity drives. The controller has redundant connections to its storage shelves.

Figure 5) An HA pair of FAS controllers provides redundancy. In most cases, controllers in an HA pair reside in the same chassis with redundant power supplies and passive interconnect circuitry. This visualization splits the nodes apart to illustrate the HA interconnect and redundant disk connectivity.

Figure 6) A Flash Pool aggregate consisting of high-capacity SATA drives in one storage shelf (grey), and solid-state drives in another storage shelf (green). In this example, the SATA drives are grouped together in two RAID groups with six data drives and two parity drives each. The solid-state drives are grouped together in one RAID group with six data drives and one parity drive.

Figure 7) A two-node cluster with two NAS data ports highlighted. Even though these two ports reside on different physical nodes, the ports are on the same VLAN, and therefore provide the same connectivity to clients or hosts.

Figure 8) A clustered Data ONTAP system consisting of three HA pairs. The client- and host-facing networks may include SAN, NAS, or both. The cluster interconnect is dedicated, dual-fabric 10 Gigabit Ethernet. The management network provides administrative access to the cluster. Disk shelves and HA interconnects are omitted for clarity.

Figure 9) A NAS LIF with IP address 192.168.1.1. The LIF is not permanently bound to a specific physical port. If the two ports shown are in the same VLAN, an administrator can move the LIF to either port.

Figure 10) A Flash Pool aggregate containing 8 flexible volumes belonging to 3 distinct storage virtual machines (represented by burgundy, teal, and purple). The volumes are logically isolated. Each storage virtual machine can only access its own volumes. Each of these volumes can be moved to a different aggregate while the data inside it is being accessed.

Figure 11) A LUN move operation in progress. The SAN host is accessing the LUN in a new location on the right-hand side. All write operations occur there. The contents of the LUN are pulled over from the original location on the left-hand side on a scheduled basis and as read requests are made.

Figure 12) A NAS namespace. Dotted lines represent junctions to child volumes. Solid lines represent folders and directories inside a flexible volume. The path to a file or directory in a namespace remains the same, even if the volume containing that file or directory is moved to a new physical location inside the cluster.

Figure 13) ALUA MPIO. SAN hosts use the most direct path to a LUN. In this depiction, if the LUN or its containing volume on the right moved to a node in the HA pair on the left, the SAN host would begin accessing the LUN through the more direct path on the left.


NetApp Clustered Data ONTAP: Overview

Clustered Data ONTAP is enterprise-capable, unified scale-out storage. It is the basis for virtualized, shared storage infrastructures. Clustered ONTAP is architected for nondisruptive operations, storage and operational efficiency, and scalability over the lifetime of the system.

An ONTAP cluster typically consists of fabric-attached storage (FAS) controllers: computers optimized to run the clustered Data ONTAP operating system. The controllers provide network ports that clients and hosts use to access storage. These controllers are also connected to each other using a dedicated, redundant 10 Gigabit Ethernet interconnect. The interconnect allows the controllers to act as a single cluster. Data is stored on shelves attached to the controllers. The drive bays in these shelves may contain hard disks, flash media, or both.

Figure 1) A Data ONTAP cluster consisting of FAS controllers in a mix of all-flash, hybrid, and capacity configurations. A dedicated, redundant 10 gigabit Ethernet interconnect (top) connects the controllers.

A cluster provides hardware resources, but clients and hosts access storage in clustered ONTAP through storage virtual machines (SVMs). SVMs exist natively inside of clustered ONTAP. They define the storage available to the clients and hosts. SVMs define authentication, network access to the storage in the form of logical interfaces (LIFs), and the storage itself, in the form of SAN LUNs or NAS volumes.

Clients and hosts are aware of SVMs, but may be unaware of the underlying cluster. The cluster provides the physical resources the SVMs need in order to serve data. The clients and hosts connect to an SVM, rather than to a physical storage array.

Like compute virtual machines, SVMs decouple services from hardware. Unlike compute virtual machines, a single SVM may use the network ports and storage of many controllers, enabling scale-out. One controller’s physical network ports and physical storage may also be shared by many SVMs, enabling multitenancy.

Figure 2) A small Data ONTAP cluster with three storage virtual machines (SVMs). Clients and hosts connect to a storage virtual machine, rather than directly to the storage arrays that host the SVMs. Each SVM has its own volumes, LUNs, network connectivity (LIFs), and authentication.


A single cluster may contain multiple storage virtual machines (SVMs) targeted for various use cases, including server and desktop virtualization, large NAS content repositories, general-purpose file services, and enterprise applications. SVMs may also be used to separate different organizational departments or tenants.

The components of an SVM are not permanently tied to any specific piece of hardware in the cluster. An SVM’s volumes, LUNs, and logical interfaces can move to different physical locations inside the cluster while maintaining the same logical location to clients and hosts. Even as physical storage and network access move to new locations inside the cluster, clients can continue accessing data in those volumes or LUNs, using those logical interfaces.

Figure 3) A six-node Data ONTAP cluster with one storage virtual machine. Clients and hosts can access data on any node from any logical interface. Flexible volumes, LUNs, and LIFs can move nondisruptively, so the SVM can grow as the cluster scales out.

This allows a cluster to continue serving data as physical storage controllers are added or removed from it. It also enables workload rebalancing, and native, nondisruptive migration of storage services to different media types, like flash, spinning media, or hybrid configurations.

With clustered ONTAP, you can add flash capacity when you need performance, add high-density drives when you need raw capacity, or both. You can scale up when you need a higher-end storage array, or scale out horizontally when you need to distribute a workload. All of these operations can be performed while clients and hosts continue accessing their data.

You can also scale compute in a public cloud using NetApp Private Storage (physical FAS systems next to a public cloud data center) or Cloud ONTAP (clustered ONTAP running in a virtual machine inside a public cloud), while maintaining control of your data.


Clustered ONTAP uses NetApp’s Write Anywhere File Layout (WAFL), which delivers storage and operational efficiency technologies like fast, storage-efficient Snapshot copies; thin provisioning; volume, LUN, and file cloning; and deduplication. Most storage efficiency features are available regardless of the underlying media type.

Clustered ONTAP natively supports all-flash and hybrid configurations. Hybrid configurations offer performance acceleration using flash along with dense spinning media for capacity. In a hybrid configuration, active data is moved into and out of flash without administrative intervention. Fast inline compression is supported in all-flash and hybrid configurations beginning with clustered ONTAP 8.3.1. Inline deduplication is supported in all-flash and hybrid configurations beginning with clustered ONTAP 8.3.2.

WAFL accelerates write operations using nonvolatile memory inside the storage controller, in conjunction with optimized file layout on the underlying storage media.

Clustered ONTAP supports a wide range of options for data protection and business continuity. Synchronous (instantaneous) mirroring to another data center is supported with MetroCluster. MetroCluster offers a simple procedure for switching over to an alternate site in the event of a catastrophic incident.

Asynchronous (periodic) mirroring to another data center is supported with SnapMirror. SnapMirror allows the mirrored site to become the primary site during a DR event, while allowing a switch back to the original primary site once the event is over. Starting with clustered ONTAP 8.3.1, a Storage Virtual Machine’s configuration, as well as its data, can be mirrored to another site using SnapMirror for Storage Virtual Machines.

Long term archives of Snapshot copies can be stored on a secondary storage system in the same or a different cluster using SnapVault. SnapVault allows the use of inexpensive media on the secondary system. SnapVault also allows end users to restore their own data without intervention from the storage administrator.

Clustered ONTAP offers application integration, allowing for application-aware backup and recovery workflows. It offers integration with hypervisors such as VMware ESX and Microsoft Hyper-V.

Clustered ONTAP supports both SAN (block) and NAS (file) protocols. Most of the same features are available regardless of the protocol in use.

This paper is an overview of clustered Data ONTAP, including its architecture and core capabilities.

Physical Cluster Components

A cluster is typically composed of physical hardware: controllers with attached storage (solid state drives, spinning media, or both; or a third-party storage array when FlexArray is used), network interface cards, and, optionally, PCI-based flash cards (Flash Cache). Together, all of these components create a physical resource pool.

This physical resource pool is visible to cluster administrators but not to the applications and hosts that use the cluster. The storage virtual machines (SVMs) in the cluster use these resources to serve data to clients and hosts.

This section describes the most important physical components of a cluster: nodes, HA pairs, aggregates, network ports, and the cluster itself.

Nodes

Storage controllers are presented and managed as cluster nodes, or instances of clustered ONTAP. Nodes have network connectivity and storage. The terms “node” and “controller” are sometimes used interchangeably, but “node” more frequently means a controller, its storage, and the instance of clustered ONTAP running on it.

The NetApp clustered Data ONTAP operating system can be deployed in a wide range of configurations, from entry-level systems to enterprise-class systems to virtualized nodes running in a public cloud. All of these configurations run the same operating system and offer almost all of the same data management functionality. They have the same user interface and are managed with the same tools.

Figure 4) A single-node cluster consisting of a FAS8000 series controller running clustered Data ONTAP. This FAS8000 controller has one shelf of solid-state disks and another shelf of capacity drives. The controller has redundant connections to its storage shelves.

HA pairs

Although single-node clusters are supported, clustered ONTAP is normally deployed on HA pairs of FAS storage controllers.

An HA pair of FAS controllers provides redundancy so that storage services can be taken over by the partner node during maintenance events, or in the event of a controller hardware failure. In most cases, controllers in an HA pair reside in the same chassis with redundant power supplies and passive interconnect circuitry. HA pairs always consist of like FAS models.

An HA pair includes an interconnect between the two nodes. This interconnect is used to mirror write operations to the partner’s nonvolatile memory. When a write request is made to either of the two nodes in an HA pair, the write request is logged on both nodes before a response is sent back to the client or the host.

Each controller in an HA pair also has connections to the other controller’s storage media. These connections are used to provide continued access to the data if one of the two nodes needs to take over the storage services normally provided by the other node.

Clustered ONTAP allows the controllers in an HA pair to be replaced using Aggregate Relocate (ARL). ARL allows one node in an HA pair to take over ownership of the storage of the other node, even if HA is disabled. Once the network connections and storage have been moved to one of the two nodes, the other can be replaced, and the process can be repeated for the remaining original node.

Controllers in an HA pair may provide redundant network connections. However, NetApp clustered Data ONTAP systems may consist of more than two nodes. In larger clusters, redundant network connectivity may be provided by either partner node, other nodes in the cluster, or both.
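As a hedged illustration, a planned takeover and an aggregate relocation might look like the following from the cluster shell. The node and aggregate names (cluster1-01, cluster1-02, aggr1) are hypothetical, and available options vary by release; lines beginning with # are annotations rather than shell input.

# Planned takeover: the partner temporarily serves this node's storage
storage failover takeover -ofnode cluster1-01

# Return storage services to the original node after maintenance
storage failover giveback -ofnode cluster1-01

# Aggregate Relocate (ARL): shift ownership of an aggregate to the HA partner
storage aggregate relocation start -node cluster1-01 -destination cluster1-02 -aggregate-list aggr1
storage aggregate relocation show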

Figure 5) An HA pair of FAS controllers provides redundancy. In most cases, controllers in an HA pair reside in the same chassis with redundant power supplies and passive interconnect circuitry. This visualization splits the nodes apart to illustrate the HA interconnect and redundant disk connectivity.


Drives, RAID groups, and Aggregates

The physical drives managed by nodes are grouped together in RAID groups. Clustered Data ONTAP offers a choice of RAID technologies: dual-parity RAID, sometimes called NetApp RAID DP, and RAID-4. Drives in a RAID group typically have the same size and type.

In most cases, NetApp recommends the use of RAID DP because it can survive two simultaneous disk failures per RAID group. This means that in the event of a drive failure, data is still protected with another parity drive.

RAID groups are grouped together to form an aggregate. An aggregate can consist of solid-state drives (SSDs), spinning media, or both.

An aggregate that consists of both SSDs and spinning media is called a “Flash Pool aggregate.” Flash Pool aggregates allow data that is in use to reside on flash media, and data that has not been accessed in months or years to reside on less expensive spinning media.

The flash portion of a Flash Pool aggregate acts as a cache, rather than a separate storage tier. Movement of data into and out of the flash portion of a Flash Pool aggregate is handled by the system itself. It does not require administrator intervention.

An aggregate can be thought of as a large pool of raw, RAID-protected physical storage. SVMs use the raw storage in an aggregate to store data for clients and hosts.

Clustered ONTAP 8.3 provides Advanced Drive Partitioning (ADP), which allows a single physical disk to be shared by multiple aggregates. NetApp supports ADP for All-Flash FAS, entry platforms, and NetApp Flash Pool configurations.
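For illustration, a RAID DP aggregate might be created and inspected from the cluster shell as sketched below. The node, aggregate, and disk-count values are hypothetical; consult the product documentation for the options supported by your release.

# Create an aggregate of 18 drives using dual-parity RAID on one node
storage aggregate create -aggregate aggr_data1 -node cluster1-01 -diskcount 18 -raidtype raid_dp

# Verify the RAID group layout and usable capacity
storage aggregate show -aggregate aggr_data1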

Figure 6) A Flash Pool aggregate consisting of high-capacity SATA drives in one storage shelf (grey), and solid-state drives in another storage shelf (green). In this example, the SATA drives are grouped together in two RAID groups with six data drives and two parity drives each. The solid-state drives are grouped together in one RAID group with six data drives and one parity drive.

Network Ports

Network ports provide the physical connectivity required to serve data to NAS clients and SAN hosts.

In the case of NAS, network ports are usually physical links to an Ethernet switch. All the ports in the cluster that are connected to a specific VLAN on the Ethernet switch have the same physical connectivity to the cluster.

In the case of Fibre Channel, network ports are connected to NPIV-enabled (N_Port ID Virtualization) ports on a Fibre Channel switch. Physical connectivity to the storage with Fibre Channel is managed through zoning on the switch, rather than VLANs, but the concept is similar.

A network port can be thought of as a raw, physical path to clients and hosts that SVMs can use to serve data.

Figure 7) A two-node cluster with two NAS data ports highlighted. Even though these two ports reside on different physical nodes, the ports are on the same VLAN, and therefore provide the same connectivity to clients or hosts.


Clusters

A cluster consists of a group of nodes connected to each other with a private, dedicated, dual-fabric 10 Gigabit Ethernet interconnect. Two-node clusters can optionally be configured without switches, using point-to-point connections for the cluster interconnect.

Management of the cluster is often performed through a management network. Cluster management traffic can be placed on a separate physical network to provide increased security.

A cluster can consist of up to 8 nodes if SAN is enabled, or up to 24 nodes in a NAS-only configuration. If a cluster has more than one node, all the nodes in the cluster must be in HA pairs. Single-node clusters are the only odd-numbered cluster size NetApp supports.

Although an HA pair consists of like controllers, clusters of four or more nodes may consist of varying FAS or All Flash FAS models and varying configurations. All-flash configurations can reside in the same cluster with hybrid configurations, capacity-only configurations using only SATA drives, and clustered ONTAP systems that front-end third-party storage arrays with FlexArray technology.

Together, the nodes in the cluster, their client- and host-facing network ports (which may reside in different network segments), and their attached storage aggregates form a single resource pool.

Figure 8) A clustered Data ONTAP system consisting of three HA pairs. The client- and host-facing networks may include SAN, NAS, or both. The cluster interconnect is dedicated, dual-fabric 10 Gigabit Ethernet. The management network provides administrative access to the cluster. Disk shelves and HA interconnects are omitted for clarity.

Logical Cluster Components

Storage Virtual Machines

The cluster provides connectivity and RAID-protected storage, but clients and hosts do not mount network ports or raw storage. Clients and hosts require IP addresses, WWPNs, NAS volumes, SMB (CIFS) shares, NFS exports, and LUNs.

Storage virtual machines (SVMs, sometimes called “vservers”) define these client and host-facing entities, and use the hardware of the cluster to deliver the storage services.

An SVM is what users connect to when they access data. The components of an SVM are not permanently bound to any piece of physical hardware. This allows an SVM to continue serving data as physical components of a cluster are added and removed.

The separation of physical hardware from storage services allows storage services to continue as all the physical components of a cluster are incrementally replaced.


Each SVM may have its own authentication, its own storage, its own network segments, its own users, and its own administrators. A single SVM can provide SAN, NAS, or both. A single SVM can use physical storage or network connectivity on any cluster node, enabling scale-out. New SVMs can be provisioned on demand, without deploying additional hardware.
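A minimal provisioning sketch follows, assuming an existing aggregate named aggr_data1; the SVM and volume names are hypothetical, and 8.2 releases may require additional name-service parameters.

# Create an SVM with a UNIX-style root volume on an existing aggregate
vserver create -vserver vs_projects -rootvolume vs_projects_root -aggregate aggr_data1 -rootvolume-security-style unix

# Confirm the new SVM
vserver show -vserver vs_projects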

Logical Interfaces (LIFs)

Connectivity to SVMs is provided through logical interfaces (LIFs). A LIF has an IP address or World Wide Port Name used by a client or host to connect to an SVM.

A LIF is hosted on a physical port. An SVM can have LIFs on any cluster node. Clients and hosts can access data regardless of the physical location of the data in the cluster; the cluster uses its interconnect to route traffic to the appropriate location, no matter where a request arrives.

LIFs virtualize IP addresses or WWPNs, rather than permanently mapping IP addresses and WWPNs to NIC and HBA ports.

Each SVM requires its own dedicated set of LIFs. A LIF defined for NAS access can be migrated to another port on the same or a different controller to rebalance client performance, or to evacuate all resources on a controller for hardware lifecycle operations.
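The sketch below shows, with hypothetical names and addresses, how a NAS LIF might be created on a home port and later migrated nondisruptively; exact parameters vary by release.

# Create a NAS data LIF for the SVM on node cluster1-01, port e0c
network interface create -vserver vs_projects -lif nas_lif1 -role data -data-protocol nfs -home-node cluster1-01 -home-port e0c -address 192.168.1.1 -netmask 255.255.255.0

# Migrate the LIF to a port on another node; client connections continue
network interface migrate -vserver vs_projects -lif nas_lif1 -destination-node cluster1-02 -destination-port e0c

# Send the LIF back to its home port when convenient
network interface revert -vserver vs_projects -lif nas_lif1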

Figure 9) A NAS LIF with IP address 192.168.1.1. The LIF is not permanently bound to a specific physical port. If the two ports shown are in the same VLAN, an administrator can move the LIF to either port.

Flexible Volumes

SVMs store data for clients and hosts in flexible volumes. Aggregates provide the raw storage required by flexible volumes.

Flexible volumes are logical containers that contain data used by applications. This can include NAS data or SAN LUNs. NetApp recommends that NAS data and SAN LUNs not be mixed in the same flexible volume, because each may have different backup requirements.

Each flexible volume can have up to 255 NetApp Snapshot copies. Snapshot copies are a powerful element of clustered Data ONTAP, and they are the basis of most NetApp replication technologies. When a Snapshot copy of a flexible volume is taken, a read-only copy of the data in the volume at that point in time is created. That means that application administrators can restore LUNs using the Snapshot copy, and end users can restore their own files.


Snapshot copies are high performance. When writes are made to a flexible volume that has an older Snapshot copy, the new writes are made to free space on the underlying storage. This means that the old contents do not have to be moved to a new location. The old contents stay in place, so the system continues to perform quickly, even if there are many Snapshot copies on the system.
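For example, a Snapshot copy can be created on demand and, if necessary, used to roll a volume back, as in this hedged sketch (the SVM, volume, and Snapshot names are hypothetical):

# Create a point-in-time Snapshot copy of a volume
volume snapshot create -vserver vs_projects -volume vol_db -snapshot before_upgrade

# Restore the entire volume to that point in time
volume snapshot restore -vserver vs_projects -volume vol_db -snapshot before_upgrade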

Flexible volumes can be mirrored, archived, or nondisruptively moved to other aggregates. Nondisruptive data mobility, sometimes called volume move or NetApp DataMotion for Volumes, is a key advantage of clustered Data ONTAP.

Just as LIF migration between network ports allows client access to move to different nodes, DataMotion for Volumes allows clients and hosts to continue accessing data as that data is moved to other cluster nodes.

DataMotion for Volumes allows a cluster to continue serving data as physical storage controllers are added or removed from it. It also enables workload rebalancing and nondisruptive migration of storage services to different media types, such as flash, spinning media, and Flash Pool aggregates.

No matter where a volume goes, it keeps its identity. That means that its Snapshot copies, its replication relationships, its deduplication, and other characteristics of the flexible volume remain the same.
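A volume move is typically a two-command operation from the cluster shell, sketched here with hypothetical volume and aggregate names:

# Move a volume to an aggregate on another node while clients stay connected
volume move start -vserver vs_projects -volume vol_db -destination-aggregate aggr_flash1

# Track the transfer and cutover progress
volume move show -vserver vs_projects -volume vol_db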

Figure 10) A Flash Pool aggregate containing 8 flexible volumes, belonging to 3 distinct storage virtual machines (represented by burgundy, teal, and purple). The volumes are logically isolated. Each storage virtual machine can only access its own volumes. Each of these volumes can be moved to a different aggregate while the data inside it is being accessed.

LUNs

SVMs store data for SAN hosts in LUNs. Applications requiring block data services through protocols such as iSCSI, Fibre Channel, and Fibre Channel over Ethernet use SAN LUNs.

LUNs reside in flexible volumes. Just as with the containing flexible volumes, LUNs are not permanently tied to a storage aggregate. When a LUN’s containing flexible volume is moved, access to that LUN continues uninterrupted.

In clustered ONTAP 8.3, a LUN can also nondisruptively move to a different flexible volume with LUN move, or DataMotion for LUNs. DataMotion for LUNs offers near-instantaneous cutover to a destination volume.

When DataMotion for LUNs is used, write requests from the host are serviced immediately by the destination system, while read requests are completed by retrieving the data from the source system until the copy is finished. This immediately reduces I/O load on the source system.
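A hedged sketch of DataMotion for LUNs from the cluster shell, with hypothetical SVM and volume paths:

# Move a LUN to a different flexible volume; host I/O continues throughout
lun move start -vserver vs_projects -source-path /vol/vol_db/lun1 -destination-path /vol/vol_db2/lun1

# Monitor the background copy from the source to the destination
lun move show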


Figure 11) A LUN move operation in progress. The SAN host is accessing the LUN in a new location on the right-hand side. All write operations occur there. The contents of the LUN are pulled over from the original location on the left-hand side on a scheduled basis and as read requests are made.

NAS

An SVM can act as a virtualized NAS server that provides file access with NFS, SMB (CIFS), or both concurrently. Each NAS-enabled SVM offers a single namespace. The namespace provides client access to storage on many cluster nodes via a single NFS mount or SMB (CIFS) share.

A namespace can contain many flexible volumes. To NAS clients, each volume appears as a folder or subdirectory inside another volume. Each flexible volume may also have its own folders and subdirectories that are not linked to any other volume.

Flexible volumes are linked to one another inside a namespace using junctions, which are roughly analogous to Unix mount points. Clients can mount at or beneath the root of the namespace. Clients see only the volumes that are mounted below their access point.

In some scale-out environments, a single SVM may have a large namespace using all the controllers in a cluster. In environments with many SVMs, each SVM will have its own namespace.

Client access is controlled using CIFS share permissions, file and directory permissions, and export policies. Export policies are analogous to the access control entries in a typical Unix /etc/exports file, and are set on a per-volume basis. Export policies should be disabled for CIFS access and in Kerberos-authenticated Unix environments, because the centralized authentication in those environments is typically sufficient.

Flexible volumes can be added to an SVM’s namespace at any time. Newly added volumes are immediately available to the clients, with no remount required for visibility to the new storage. The flexible volumes inside an SVM’s namespace can be moved between controllers and media types nondisruptively, eliminating the outages and operational complexity associated with environments that rely exclusively on automounters and DFS.
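The sketch below illustrates junctions with hypothetical names: a new volume is created directly at a junction path, and an existing volume is then mounted beneath it.

# Create a volume and junction it into the namespace at /users
volume create -vserver vs_projects -volume users -aggregate aggr_data1 -size 500g -junction-path /users

# Mount an existing volume beneath it; clients see it as a subdirectory
volume mount -vserver vs_projects -volume app_data -junction-path /users/app_data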


Figure 12) A NAS namespace. Dotted lines represent junctions to child volumes. Solid lines represent folders and directories inside a flexible volume. The path to a file or directory in a namespace remains the same, even if the volume containing that file or directory is moved to a new physical location inside the cluster.

Namespaces in large SVMs are sometimes nested, with child volumes, parent volumes, grandparent volumes, and ancestor volumes. Small namespaces are more often flat, with individual volumes mounted at the top of the namespace. In a flat namespace, flexible volumes have locations like “/users”, “/VMs”, and “/app_data”.

SAN

An SVM can serve data with SAN protocols such as Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI.

Because the physical cluster consists of multiple, interconnected controllers, there may be multiple logical paths to any individual LUN in an SVM. These paths are managed by multipath I/O (MPIO) on the SAN host. Asymmetric Logical Unit Access (ALUA), an industry-standard protocol for identifying optimized paths to a LUN, is also used by the hosts.

If the optimized path to any LUN changes because the LUN is moved using DataMotion for LUNs, or its containing volume is moved with DataMotion for volumes, this is automatically recognized by the host. The host will update its path information, and choose a new, optimized path. If the optimized path becomes unavailable, the host’s ALUA-enabled MPIO stack can nondisruptively switch to any other available path.

Paths to a LUN can be added to or removed from any node in a cluster, and, following a rescan, a host’s ALUA-enabled multipath stack will automatically switch to the most direct path nondisruptively. Paths to a LUN can also be reduced using selective LUN mapping in clustered ONTAP 8.3, which eases path management.


Figure 13) ALUA MPIO. SAN hosts use the most direct path to a LUN. In this depiction, if the LUN or its containing volume on the right moved to a node in the HA pair on the left, the SAN host would begin accessing the LUN through the more direct path on the left.

Key Features

Manageability

Different methods are available for managing and configuring clustered Data ONTAP, depending on user or application requirements. Supported methods include:

NetApp OnCommand® System Manager is a simple graphical user interface designed for administering a clustered ONTAP system. It provides the ability to provision storage virtual machines, volumes, LUNs, network connectivity, and replication relationships. It can be used to manage local users and groups, and administrative roles. It is available as an on-box user interface in clustered Data ONTAP 8.3.

System Setup is configuration software designed to help storage administrators get a NetApp clustered Data ONTAP system up and running with minimal effort. It provides a wizard interface to configure a cluster according to best practices, performs a number of checks to make sure that everything is wired correctly, and checks for common mistakes to help you know that the cluster is healthy and ready for production.

OnCommand Unified Manager provides a single dashboard to confirm the health of your NetApp® clustered Data ONTAP® storage: availability, capacity, performance, and data-protection relationships. OnCommand Unified Manager provides operational efficiency at scale.


OnCommand Workflow Automation enables automation of repeatable manual storage-management processes, enabling storage self-service and standards enforcement. OnCommand Workflow Automation uses a graphical interface and does not require the use of a scripting language.

OnCommand Performance Manager enables data storage performance troubleshooting, problem isolation, and concrete solutions to performance issues based on system analysis.

Command Line. A command-line interface is available for cluster and storage virtual machine administrators. Commands are organized in a hierarchy. Tab completion is provided, and each command can act as a wizard, prompting the administrator for additional input parameters as required. (A brief cluster shell sketch follows this list.)

Software Development Kits (SDKs) provide the infrastructure to invoke Data ONTAP® APIs, OnCommand® Unified Manager APIs, and web services APIs for the OnCommand Unified Manager Core Package on a server. SDK Core API libraries are available for C and C++, Java™, Perl®, C#, VB.NET, PowerShell®, Python™, and Ruby.

Application and hypervisor plugins are available, including Virtual Storage Console for VMware® vSphere™, Citrix XenServer, and Red Hat Enterprise Virtualization; the NetApp® Snap Creator™ Framework to standardize and simplify backup, restore, and DR in any environment; NetApp SnapDrive to automate storage provisioning tasks; and SnapManager for application-aware configuration, backup, and restore.

NetApp® OnCommand Cloud Manager software provides a simplified management interface for Cloud ONTAP and NetApp Private Storage (NPS) for Cloud solutions.

OnCommand Insight provides a multi-vendor view of performance metrics, including application performance, datastore performance, virtual machine performance, and storage infrastructure performance.

AutoSupport is dial-home functionality that allows NetApp to help you proactively manage your system and diagnose conditions on it. Depending on your service contract, AutoSupport allows NetApp to begin the process of shipping an RMA before you even open a case.
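As a brief, hedged illustration of the command hierarchy described in the Command Line entry above (the object names shown are hypothetical):

# Commands are grouped into directories such as storage, volume, and network
cluster1::> storage aggregate show
cluster1::> volume show -vserver vs_projects

# Running a command without its required parameters starts a wizard
# that prompts for them one at a time
cluster1::> volume create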

Multiprotocol Unified Architecture

A multiprotocol unified architecture provides the ability to support multiple data access protocols concurrently in the same storage system over a whole range of different controller and disk storage types. The supported protocols in clustered Data ONTAP are:

NFS v3, v4, and v4.1, including pNFS

SMB 1, 2, 2.1, and 3, including support for nondisruptive failover in Microsoft® Hyper-V™ environments and Microsoft SQL Server®

iSCSI

Fibre Channel

FCoE

Data replication and storage efficiency features are supported across all protocols in clustered Data ONTAP.

Storage Efficiency

Storage efficiency built into clustered Data ONTAP offers substantial space savings, allowing more data to be stored at lower cost. (A brief configuration sketch follows the feature list below.)

Thin provisioning is the most efficient way to provision storage, because although the clients see the total storage space assigned to them, the storage is not preallocated up front. When a volume or LUN is created by using thin provisioning, space on the storage system is not used until it is consumed. The space remains unused until data is written to the LUN or the volume, at which time only enough space to store the data is used. Unused storage is shared across all volumes, and the volumes can grow and shrink on demand.

NetApp FlexClone® technology allows for near-zero-space, exact, writable virtual copies of datasets, including volumes, files, and LUNs. It offers rapid, space-efficient creation of additional data copies ideally suited for disaster recovery testing and test and development environments.

Deduplication removes duplicate data blocks in primary and secondary storage, storing only unique blocks. This results in storage space and cost savings. Deduplication runs on a customizable schedule. Inline deduplication is available beginning with clustered Data ONTAP 8.3.2.

Inline Compression was introduced for primary workloads such as database and desktop virtualization with clustered ONTAP 8.3.1. Inline compression is on by default in the All-Flash FAS product family starting with 8.3.1.

Virtual Storage Tiering allows hot data that is frequently accessed to be transparently stored on flash. Clustered Data ONTAP offers two varieties of virtual storage tiering: NetApp Flash Cache is a PCI-e based read cache inside nodes that make up the cluster, and NetApp Flash Pool is a storage aggregate-level cache used to improve performance of both reads and writes.
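The following sketch, with hypothetical names, shows thin provisioning and deduplication being applied to a volume; option names can differ between releases.

# Thin provisioning: clients see 10TB, but space is consumed only as data is written
volume create -vserver vs_projects -volume thin_vol -aggregate aggr_data1 -size 10t -space-guarantee none

# Enable deduplication on the volume and review its status
volume efficiency on -vserver vs_projects -volume thin_vol
volume efficiency show -vserver vs_projects -volume thin_vol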

Data Protection and Business Continuity

NetApp Snapshot™ copies. These are automatically scheduled point-in-time copies that take up no space and incur no performance overhead when created. Over time, Snapshot copies consume minimal storage space, because only changes to the active file system are written. Individual files and directories can be easily recovered from any Snapshot copy, and the entire volume can be restored back to any Snapshot state in seconds.

Dual Parity RAID can survive two simultaneous disk failures per RAID group. This means that in the event of a drive failure, data is still protected with another parity drive.

NetApp SnapMirror® technology. SnapMirror provides asynchronous replication of volumes, independent of protocol, either within the cluster or to another clustered Data ONTAP system for data protection and disaster recovery. SnapMirror for Storage Virtual Machines, also called Storage Virtual Machine Disaster Recovery (SVM DR), replicates both data and Storage Virtual Machine configuration settings. It is available starting with clustered ONTAP 8.3.1. (A replication sketch follows this list.)

NetApp SnapVault® technology. Volumes can be copied for space-efficient, read-only, disk-to-disk backup either within the cluster or to another clustered Data ONTAP system. SnapVault, when used in conjunction with version independent SnapMirror, allows a single destination volume to serve as both a backup and disaster recovery copy.

NetApp MetroCluster® software addresses the need to provide continuous data availability beyond the data center (or beyond the cluster). MetroCluster is native within the NetApp Data ONTAP operating system. It is a synchronous mirroring relationship between two distinct but identically configured two-node clusters up to 200km apart. Synchronous mirroring with MetroCluster between two single-node clusters is available starting with clustered ONTAP 8.3.1.
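As referenced in the SnapMirror entry above, a volume mirror might be established as in this hedged sketch. The SVM, volume, and aggregate names are hypothetical, and cluster and SVM peering are assumed to already be in place.

# On the destination, create a data protection (DP) volume to receive the mirror
volume create -vserver vs_dr -volume vol_db_dr -aggregate aggr_dr1 -type DP -size 1t

# Define the relationship and run the baseline transfer
snapmirror create -source-path vs_projects:vol_db -destination-path vs_dr:vol_db_dr -type DP -schedule hourly
snapmirror initialize -destination-path vs_dr:vol_db_dr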

Storage QoS

Clustered Data ONTAP provides storage quality of service (QoS) policies on cluster objects. An entire SVM, or a group of volumes or LUNs within an SVM, can be dynamically assigned to a policy group, which specifies a throughput limit, defined in terms of IOPS or MB/sec. This can be used to reactively or proactively throttle rogue workloads and prevent them from affecting the rest of the workloads.

QoS policy groups can also be used by service providers to prevent tenants from affecting each other, as well as to avoid performance degradation of the existing tenants when a new tenant is deployed on the shared infrastructure.
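For example, a policy group with an IOPS ceiling might be created and applied to a volume as sketched below (the names and limits are hypothetical):

# Create a policy group that caps throughput at 5,000 IOPS
qos policy-group create -policy-group pg_tenant_a -vserver vs_projects -max-throughput 5000iops

# Apply the policy group to a tenant's volume
volume modify -vserver vs_projects -volume vol_db -qos-policy-group pg_tenant_a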


Infinite Volume

Infinite Volume is a type of volume, contained in a dedicated SVM, that can scale up to 20PB and store up to 2 billion files. An Infinite Volume can coexist with standard SVMs and supports both NFS and SMB client access. An Infinite Volume is well suited for enterprise NAS content repositories.

Intelligent Scale-Out Storage

With scale-out, as the storage environment grows, additional controllers are added seamlessly to the resource pool residing on a shared storage infrastructure. Scale-out, together with built-in storage virtualization, provides nondisruptive movement of host and client connections, as well as the datastores themselves, anywhere in the resource pool.

With these capabilities, new workloads can be easily deployed and existing workloads can be easily and nondisruptively balanced over the available resources. Technology refreshes such as replacing disk shelves or storage controllers are accomplished while the environment remains online and serving data.

Nondisruptive Operations

Nondisruptive operations (NDO) allow the storage infrastructure to remain up and serving data during maintenance and IT lifecycle operations. The goal of NDO is to eliminate downtime and to allow changes to the system to occur at any time.

Clustered Data ONTAP is highly available by design and can transparently migrate data and client connections throughout the storage cluster.

Data migration may be performed to rebalance capacity usage, to optimize for changing performance requirements, to isolate one or more controllers, to promote data to an all-flash array, to archive data to a capacity configuration, or to execute maintenance or lifecycle operations.

Software updates and configuration changes occur throughout any system’s lifecycle. Additionally, the hardware infrastructure must be added to and replaced, potentially many times. Years after a system was originally commissioned, the data has outlived the hardware, so that little or none of the original hardware may remain. Through NDO capabilities, all of these changes can occur without outages to the applications or attached clients and hosts.


Summary

Clustered ONTAP delivers nondisruptive operations, efficiency, and scalability.

Nondisruptive operations eliminate planned downtime. They enable you to continue serving data throughout your entire tech refresh lifecycle and to move workloads between all-flash, high-capacity, and hybrid tiers without reconfiguring your applications or impacting your users.

Efficiency allows you to make the most of both data storage and your organization. Storage efficiency translates to a reduction in capacity requirements and more efficient use of flash. Cloning technologies also help you optimize your most important resource: the people who depend on infrastructure.

Scalability allows organizations to reduce risk by knowing that the same tools and processes will be available as needs grow, and provides the flexibility to expand what is needed, when it is needed. Flash capacity can be increased for performance, and high-density drive capacity can be increased for archive data. Clustered ONTAP allows you to scale up when you need a higher-end storage array or scale out horizontally when you need to distribute a workload. You can scale compute in the cloud while maintaining control of your data.

Clustered ONTAP provides all this functionality in a single platform.

Resources

Clustered Data ONTAP Product Documentation

NetApp Documentation Library

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer’s responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.

© 2015 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, DataMotion, Data ONTAP, Flash Cache, Flash Pool, FlexCache, FlexClone, Manage ONTAP, OnCommand, RAID-DP, SnapMirror, Snapshot, and SnapVault are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Microsoft is a registered trademark and Hyper-V is a trademark of Microsoft Corporation. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-3982-0915

Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines the product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer's installation in accordance with published specifications.