The FlashArray User's Guide

Purity Version v3.2

Copyright © 2011, 2012, 2013

Table of Contents

About the Guide
    Organization of the Guide
    A Note on Format and Content
    Please Help Us Improve Our Documentation
Introduction to the FlashArray Architecture
    Modular Hardware Architecture
    High Availability
    Software for Solid State Storage
    FlashArray Advantages
I. An Overview of FlashArray Hardware and Software
    1. FlashArray Hardware
        Scalable, Highly-Available Configurations
            High Availability
            Scaling
        FlashArray Hardware Module Details
            Controllers
            Storage Shelves
        Intra-Array Connectivity
            Controller to Drive Connectivity
            Inter-Controller Connectivity
        External Connectivity
            Host Connectivity
            Administrative Network Connectivity
        FlashArray End-to-End Resiliency
            Device and Data Resiliency
            Array-Level Resiliency
    2. The Purity Operating Environment
        Flash Memory: Like Disk, But Different
            Solid State Drives versus Disks
            Disk and SSD Failure Modes
        Purity Operating Environment Goals
            Designing for Flash
            Purity Architectural Highlights
        Data Virtualization
            The Virtualization Map
            Using Virtualization
        Storage Layout
            Storage Layout Structures
            Segment Allocation
            Populating Segments and Scheduling Writes
            RAID-3D
            RAID-3D Flexibility
        Storage Reclamation and Data Reduction
            Storage Reclamation
            Data Reduction
            Data Consolidation
            Data Movement
            Storage Reclamation and Capacity Optimization Priorities and Tradeoffs
        Read and Write Processing
            Write Processing
            Read Processing
        Proactive Troubleshooting
            Information Sources
            Reporting Mechanisms
            Alerts
            The remoteassist Facility
        Summarizing the Purity Operating Environment
II. FlashArray Concepts and Managed Objects
    3. FlashArray Storage Capacity and Utilization
        Array Capacity and Storage Consumption
            FlashArray Storage States
            Reporting Array Capacity and Storage Consumption
        Volume and Snapshot Storage Consumption
            Provisioning
            Data Reduction
            Snapshots and Physical Storage
        Measuring Volume Storage Usage
            Reporting Volume Size and Storage Consumption
        The FlashArray Data Lifecycle
    4. FlashArray Managed Objects
        Physical and Virtual Objects
            Object Naming
        The Principal Virtual Objects
            Common Operations on Virtual Objects
        Volumes
            Managing Volume Size
            Immediate Volume Eradication
            Changes in Volumes' Storage Consumption
            Associated Objects and Attribute Values
        Snapshots
        Hosts and Host Groups
            Host and Host Group Attributes
        Host-Volume Connections
            Private and Shared Connections
            Properties of Shared Connections
            LUN Management
        Managed Hardware Objects
            Hardware Component Naming
            FlashArray Hardware Components
            The Array
            Solid State Drives
            FlashArray Ports
            Exporting Volumes to Hosts
III. Using the Purity GUI to Administer a FlashArray
    5. The Purity GUI
        Accessing the GUI
    6. The Dashboard Tab
        The Capacity Pane
        The Performance Panes
    7. The Storage Tab
        Volume Administrative Tasks
            Creating New Volumes
            Managing Existing Volumes
            Deleting Unneeded Volumes
            Downloading Volume Information
            Volume Detail Views
        Host Group and Host Administrative Tasks
            Creating New Host Groups and Hosts
            Renaming and Deleting Host Groups and Hosts
            The Host Group Details View
            Adding Hosts and Connecting Volumes to a Host Group
            Other Host Group Administrative Tasks
        Host Administrative Tasks
            Creating and Deleting Host Objects
            The Individual Host Detail View
            Host Object Management
        Host-Volume Connection Tasks
            Making Private Host-Volume Connections
            Shared Connections
            Breaking Private Connections
            Breaking Shared Connections
    8. The Analysis Tab
        Analysis Tab Display Control
        Analysis Tab Time Scales
        Analysis Tab Displayed Data
            Capacity View
            Performance View
    9. The System Tab
        The Array ("SYSTEM") Health View
        The Connections View
        The Configuration View
            The Array Information Sub-View
            The Networking Sub-View
            The Support Connectivity Sub-View
            The Alerts Sub-View
            The SNMP Sub-View
            The Array Time Sub-View
        The Users View
    10. The Messages Tab
        Viewing Alert Messages
        Managing Alert Messages
IV. Using the Purity CLI to Administer a FlashArray
    I. Purity CLI Man Pages
        pureadmin
        purealert
        purearray
        purecli
        pureconfig
        puredns
        puredrive
        pureds
        purehgroup
        purehgroup-connect
        purehgroup-list
        purehost
        purehost-connect
        purehost-list
        purehw
        purelicense
        puremonitor
        purenetwork
        pureport
        puresnap
        puresnmp
        purevol
        purevol-list
        purevol-rename
        purevol-setattr
    11. Common CLI Administrative Tasks
        CLI Help
            Top-Level Help
            Command Help
            Subcommand-Level Help
            Man Page Help
        Getting Started
            Creating Volumes
            Creating Hosts
        Connecting Hosts and Volumes
        Resizing Volumes
        Destroying Volumes
            Recovering and Eradicating Destroyed Volumes
            Renaming Volumes
        Ongoing Administration of Hosts
        Monitoring I/O Performance
        Using listobj Output
V. Supplementary Information
    A. Supported Remote Access Packages
    B. The Pure Storage Glossary
    C. References
    D. License and Product Information
        License
        About Panel
        Pure Storage FlashArray Systems and Components
    E. Contacting Pure Storage

List of Figures

1.1. Entry-Level and Highly Available FlashArray Hardware Configurations
1.2. Drive and Interposer in Carrier
1.3. Example of Redundant FlashArray Controller to Storage Shelf Connectivity
1.4. Redundant Host Connections via Separate Fabrics
2.1. The Purity Virtualization Map
2.2. The Purity On-Media Data Layout
2.3. RAID-3D Protection Spheres
2.4. The Purity Write Processing Path
2.5. The Purity Read Processing Path
3.1. FlashArray Physical Storage States
3.2. Data Reduction Example
3.3. Snapshot Space Consumption
3.4. FlashArray Physical Storage Life Cycle
4.1. Changes In Volume Size
4.2. Snapshot Management
5.1. The Purity GUI Login Pane
6.1. Dashboard Display Areas
6.2. Expanded Graph (Latency History Example)
6.3. Capacity Bar Graph
6.4. Point-in-Time I/O Performance
7.1. The Storage Tab (Host Group Detail View)
7.2. Create Volume Buttons
7.3. The Edit Volume Dialog
7.4. Deleting a Volume
7.5. Recovering a Deleted Volume
7.6. Downloading Volume Information
7.7. Volume Detail View
7.8. Sample Host Group and Host View
7.9. Renaming a Host Group and Deleting a Host Object
7.10. Adding Hosts and Volumes to a Host Group
7.11. Adding Hosts and Volumes to a Host Group
7.12. Other Host Group Tasks Performed from the Detail View
7.13. Host-Volume Connection Map for a Host Group
7.14. Host and Volume Tasks Performed from the Host Group Detail View
7.15. Creating a Host Object from the Host Group Add Hosts Dialog
7.16. Deleting a Host from the Host Group and Hosts View
7.17. Deleting a Host from its Host Detail View
7.18. Example Individual Host Detail View
7.19. Renaming a Host Object
7.20. Associating Worldwide Name with a Host Object
7.21. Breaking the Association between a Worldwide Name and a Host Object
7.22. Connecting Multiple Volumes to a Single Host
7.23. Connecting Multiple Hosts to a Volume
7.24. Connecting One or More Volumes to a Host Group
7.25. Breaking a Private Connection from the Individual Volume Detail View
7.26. Breaking a Private Connection from the Individual Host Detail View
7.27. Breaking a Shared Connection from the Individual Host Group Detail View
7.28. Breaking a Shared Connection from the Individual Volume Detail View
8.1. The GUI Analysis Tab
8.2. Analysis Tab Expanded Graph, Pop-up Numeric Display, and Time Scales
8.3. Displaying a Subset of the Zoom Interval
8.4. Moving the Display within the Zoom Interval
8.5. Data Selection—Capacity View
8.6. Data Selection—Performance View
8.7. Analysis Tab Stacked I/O Performance Display
9.1. The System Tab (User View Selected)
9.2. The Array Health View
9.3. The Connections View
9.4. The Array Information Sub-View
9.5. The Networking Sub-View
9.6. The Support Connectivity Sub-View
9.7. The Alerts Sub-View
9.8. The SNMP Sub-View
9.9. The Array Time Sub-View
9.10. The Users View
10.1. The Messages Tab
10.2. Viewing an Informational Alert Message
10.3. Viewing an Alert Error Message
10.4. Managing Alert Messages
10.5. Restoring Deleted Alert Messages

List of Examples

3.1. purearray list --space Example
3.2. Sample purevol list --space CLI Command Output
4.1. Case Insensitivity in Object Names
11.1. Top-Level Help
11.2. Command-Level Help
11.3. Subcommand Help
11.4. man Page Help
11.5. Creating Volumes
11.6. Creating Hosts
11.7. Establishing Host-Volume Connections
11.8. Resizing Volumes
11.9. Destroying Volumes
11.10. Recovering and Eradicating Volumes
11.11. Renaming Volumes
11.12. Administrative Operations on Hosts
11.13. Monitoring I/O Performance
11.14. Using the Output of the listobj Subcommand
11.15. Shell Scripting with the listobj Subcommand
11.16. Building Arguments with the listobj Subcommand


About the Guide

Organization of the Guide

The target audience for this Guide is administrators of Pure Storage Inc. FlashArray™ storage systems.

Part I, “An Overview of FlashArray Hardware and Software”:
    introduces the FlashArray hardware architecture and gives an overview of Purity software concepts.

Part II, “FlashArray Concepts and Managed Objects”:
    defines FlashArray storage capacity measurement and describes the physical and virtual objects managed by a FlashArray administrator.

Part III, “Using the Purity GUI to Administer a FlashArray”:
    describes the use of the browser-based Purity™ graphical user interface (GUI) to administer a FlashArray.

Part IV, “Using the Purity CLI to Administer a FlashArray”:
    describes the Purity command line interface (CLI) and its use in FlashArray administration.

Part V, “Supplementary Information”:
    includes Appendixes that contain supplementary information about Pure Storage products and links to reference material that storage administrators may find useful.

A Note on Format and Content

This Guide describes concepts and usage of the Pure Storage FlashArray from the array administrator's point of view. FlashArray technology is evolving rapidly. As with all advanced information technologies, once basic architecture is in place, implementation develops at different rates in its different facets, each preceded by feature-by-feature detailed design.

This edition of the Guide describes the properties and behavior of arrays that run the v3.2 Release of Purity Software. It may include information about planned capabilities whose external form has been specified at the time of the release. Such material is included to provide users of the v3.2 Release with information for design planning purposes, and is subject to change as new functionality is implemented. Material relating to not-yet-implemented functionality is identified as such in the text.

Please Help Us Improve Our Documentation

Pure Storage Inc.'s goal is to deliver the best performing, most reliable, most cost-effective, easiest to use enterprise storage arrays in the market. Performance, reliability, and cost are objective measures, but ease of use is to a large extent in the eye of the administrator. We are eager to hear and respond to your feedback on how we could make our arrays and our documentation easier to use. To that end, we have established an electronic mailbox at <[email protected]>, to which you can send comments on this Guide, as well as on any other aspect of FlashArray ease of use. We have also enabled the PDF form of the Guide for commenting using the free Adobe Acrobat Reader, and encourage you to insert your comments in a PDF and return it to us via electronic mail to the above address.


Introduction to the FlashArray Architecture

The FlashArray architecture defines a design for scalable enterprise-class storage systems based entirely on flash solid-state drives. FlashArrays that run the v3.2 release of the Purity Operating Environment consist of between one and four interconnected storage shelves [1] that house solid state drives, connected to either one or two controllers that contain the array's processors, host interfaces, and other components. Compared to conventional disk-based array designs, FlashArray all-solid-state arrays typically deliver:

High performance:
    An order of magnitude greater I/O performance density (measured in I/O operations per second per gigabyte of physical storage)

Performance consistency:
    Consistently low latency, free of "spikes" that can afflict disk-based arrays, even those configured with solid state drives

Low operating cost:
    An 80% reduction in operating expense (i.e., 20% of the rack space, power, and cooling required for equivalent disk-based capacity, as well as elimination of most of the routine operations required to administer conventional disk-based arrays).

The FlashArray internal design is a radical departure from conventional disk-based array design, however. FlashArrays exploit the unique properties of solid state storage to deliver both higher performance and greater resiliency than is possible to achieve with disks, and to do so at a comparable or lower effective cost per byte.

Modular Hardware Architecture

Each FlashArray is an integrated collection of modular components that includes:

Controllers:
    One or two controllers [2], connected to each other via a private Infiniband network, and to hosts via one or more Fibre Channel or 10GbE (10 gigabit Ethernet) interconnects

Storage shelves:
    Between one and four drive chassis, each containing redundant SAS connections to controllers

Solid state drives:
    SSDs mounted in each storage shelf. An array's first two shelves contain 22 SSDs, with two bays reserved for NVRAMs; each additional shelf contains 24 SSDs

NVRAM:
    Two NVRAM modules mounted in each of the first two storage shelves.

FlashArrays scale, both in terms of capacity and I/O performance, by component aggregation; the smallest and largest FlashArrays are constructed from the same building blocks.

[1] Definitions for hyperlinked terms are found in the glossary. Only the first mention of such terms is highlighted.
[2] While Pure Storage Inc. expects that highly available configurations with pairs of controllers will ultimately be the norm, FlashArrays consisting of a single controller and a single storage shelf are completely functional.


High Availability

Dual-controller FlashArrays are highly available, with completely redundant components and interconnects:

Power and cooling:
    Each controller and storage shelf is equipped with redundant hot-swappable power and cooling modules.

Component interconnects:
    All paths between controllers and storage shelves are redundant.

Drive connections:
    Each solid state drive is connected to two or more controller SAS ports via separate Serial Attached SCSI (SAS) buses.

Host interconnects:
    A controller's four Fibre Channel or 10GbE ports can be connected to separate storage network fabrics for redundant host connections. (Whether or not a storage network actually provides redundant connections to hosts depends upon its end-to-end topology.)

The Purity Operating Environment software balances I/O across all connections, and automatically utilizes alternate paths in the event of component failure. In addition, the software generates immediate email alert messages to designated addresses when exceptional conditions occur.
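Alert recipients are managed with the purealert CLI command, whose man page appears in Part IV. The sketch below is illustrative only: the exact subcommand and argument syntax should be confirmed against the purealert man page, and the address shown is a placeholder.

    # List the addresses currently designated to receive alert email
    purealert list

    # Designate an additional recipient (hypothetical address)
    purealert create admin@example.com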

Software for Solid State Storage

FlashArrays' uniqueness derives partly from the modular hardware platform, but primarily from the Purity Operating Environment, the operating software that drives them. Purity is specifically designed to utilize solid state storage effectively. Rather than being adapted from disk-oriented designs embodying assumptions such as multi-millisecond access times, fixed RAID geometry, and data recovery by bulk rebuilding, Purity is optimized for solid state storage in four principal ways:

Data layout:
    Purity's flexible data layout takes advantage of the sub-millisecond access times of solid state storage to avoid the rigid layouts that are a necessary consequence of disks' much larger seek time and rotational latency

Capacity utilization:
    Purity micro-provisions physical storage, reduces data to minimize the physical space it occupies, and continuously consolidates both free and occupied space to maximize utilization efficiency as data is stored in and deleted from an array

I/O performance:
    Purity schedules I/O operations globally across an array to use both internal and external bandwidth efficiently. Moreover, the design completely eliminates the time-consuming read-modify-write operations that constrain small write performance in disk-based RAID arrays

Data protection:
    Purity dynamically adjusts the parameters of its RAID-3D multi-dimensional data protection in response to changes in drive error rates and failures, rather than configuring static device groups with the lengthy and disruptive rebuilds they imply.

Because accessing data on solid state storage incurs no seeking or rotational delays, any block of data in a FlashArray can be retrieved as quickly as any other. Purity therefore dispenses with disk-oriented constructs such as data layouts designed to minimize head motion. Instead, it exploits the “all data is equidistant” property of solid state storage to simultaneously optimize utilization, performance, and data reliability.

FlashArray Advantages

For data center managers, the most important FlashArray properties are significantly higher I/O performance density (IOPS per gigabyte) and more efficient utilization of physical storage capacity than can be achieved with disk-based arrays. With FlashArrays, there is no need (and indeed, no mechanism) for "short stroking" (leaving part of a disk's capacity unused) or for segregating data sets on separate devices to avoid I/O interference. All of a FlashArray's capacity is a single homogeneous pool of storage.

Moreover, Purity minimizes intra-device write amplification (the "extra" I/O operations that solid state drives perform to update flash memory) and essentially eliminates the inter-device write amplification that is due largely to the small write penalty incurred by conventional arrays as they maintain RAID protection in random access I/O environments.
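The single pool of storage can be inspected from the CLI. Both commands sketched below are covered in Part II (Examples 3.1 and 3.2, which include sample output):

    # Report overall array capacity and physical storage consumption
    purearray list --space

    # Report each volume's provisioned size and the physical storage it consumes
    purevol list --space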

Part I. An Overview of FlashArray Hardware and Software

Table of Contents

1. FlashArray Hardware
    Scalable, Highly-Available Configurations
        High Availability
        Scaling
    FlashArray Hardware Module Details
        Controllers
        Storage Shelves
    Intra-Array Connectivity
        Controller to Drive Connectivity
        Inter-Controller Connectivity
    External Connectivity
        Host Connectivity
        Administrative Network Connectivity
    FlashArray End-to-End Resiliency
        Device and Data Resiliency
        Array-Level Resiliency
2. The Purity Operating Environment
    Flash Memory: Like Disk, But Different
        Solid State Drives versus Disks
        Disk and SSD Failure Modes
    Purity Operating Environment Goals
        Designing for Flash
        Purity Architectural Highlights
    Data Virtualization
        The Virtualization Map
        Using Virtualization
    Storage Layout
        Storage Layout Structures
        Segment Allocation
        Populating Segments and Scheduling Writes
        RAID-3D
        RAID-3D Flexibility
    Storage Reclamation and Data Reduction
        Storage Reclamation
        Data Reduction
        Data Consolidation
        Data Movement
        Storage Reclamation and Capacity Optimization Priorities and Tradeoffs
    Read and Write Processing
        Write Processing
        Read Processing
    Proactive Troubleshooting
        Information Sources
        Reporting Mechanisms
        Alerts
        The remoteassist Facility
    Summarizing the Purity Operating Environment


Chapter 1. FlashArray Hardware

The FlashArray architecture is designed to simultaneously deliver:

High I/O performance:
    High I/O performance at consistently low latency compared to disk-based arrays, particularly with transactional and multi-application workloads that generate random I/O patterns

High data reliability:
    Best in class data integrity and availability

Low cost:
    Cost per gigabyte of user data stored lower than that of disk-based arrays.

These properties stem from a combination of modular hardware that is configurable for redundancy and a software architecture that specifically takes full advantage of flash-based solid state storage. This chapter presents an overview of the Pure Storage modular hardware platform. Chapter 2, The Purity Operating Environment describes the Purity Operating Environment software.

Scalable, Highly-Available Configurations

FlashArrays can be as simple as a single controller and storage shelf, or can be configured for high availability and/or extended capacity. From smallest to largest, all FlashArrays are constructed from the same controller and storage shelf building blocks.

A FlashArray controller contains the processor and memory complex that runs the Purity software, buffers incoming data, and interfaces to storage shelves, other controllers, and hosts. FlashArray controllers are stateless—all metadata related to the data stored in an array is contained in storage shelf storage. Thus it is possible to replace an array's controller at any time with no data loss.

A storage shelf contains either 22 SSDs and two NVRAM modules (first two shelves) or 24 SSDs (third shelf). The NVRAMs provide non-volatile temporary buffering of incoming data. Each shelf includes two I/O modules (IOMs) that enable two controllers to access SSDs and NVRAMs.

High Availability

A highly available FlashArray consists of:

Controllers:
    Two controllers interconnected by dual point-to-point Infiniband links

Storage shelves:
    Between one and three storage shelves. As delivered by Pure Storage, the first two storage shelves in an array contain 22 SSDs and two NVRAM modules. The third contains 24 SSDs. All storage shelves are internally redundant and connect to both controllers.

During normal operation, both controllers are active in the sense that they present disk-like volumes to hosts via all of their Fibre Channel or iSCSI host ports. [1]

Both controllers have dual connections to all storage shelves, to protect against loss of access to data in the event of a SAS link failure. Host connections can also be made redundant, as long as each controller is connected to them via two independent storage network fabrics.

[1] Administrators can use storage network zoning facilities to limit hosts' access to specific FlashArray ports.
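Zoning requires the worldwide names of the array's host ports. These can be displayed with the pureport CLI command (see its man page in Part IV); the sketch below assumes a list subcommand consistent with the other pure* commands:

    # Display the array's host ports and their worldwide names,
    # for use in storage network zoning
    pureport list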


The dual Infiniband controller interconnects protect against failure of an Infiniband path, and enable seamless failover in the event of a controller failure. Fast failover with no perceptible service interruption makes it possible to perform most maintenance while an array is online.

FlashArrays buffer all incoming data in both NVRAM modules until it has been written to solid state media. NVRAMs are accessible by both controllers. Thus, if a controller fails, the surviving controller can complete or restart in-progress I/O, while continuing to present volumes to hosts without interruption.

Finally, Purity's RAID-3D technology, discussed in Chapter 2, The Purity Operating Environment, protects against data loss due to a minimum of two simultaneous read errors and/or SSD failures, and in many cases, more.

Scaling

Figure 1.1 illustrates how the two basic FlashArray building blocks (controllers and storage shelves) can be configured in both basic and highly-available, scaled-up arrays. The left diagram illustrates an entry-level array consisting of one controller and one shelf. Adding storage shelves increases capacity; adding a second controller makes the array highly available, able to survive failure of any single component, up to and including an entire controller.

Figure 1.1. Entry-Level and Highly Available FlashArray Hardware Configurations

In addition to making an array highly available, adding a second controller increases:

External I/O performance:
    The additional controller adds four host connection ports to the array

Internal performance:
    The additional controller adds a processor complex, cache, and internal I/O bandwidth for accessing solid-state drives.

Whatever its size, a FlashArray presents a single system image that exports all volumes to all connected hosts via all of its Fibre Channel or 10GbE iSCSI ports. With the exception of a few infrequent operations such as firmware upgrades, a FlashArray can be administered from either of its controllers' administrative network ports.
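Volume export is driven entirely by administrative objects, as Chapter 11 illustrates in detail (Examples 11.5 through 11.7). A minimal sketch of the pattern, using hypothetical object names; exact flag spellings should be verified against the purevol and purehost man pages in Part IV:

    # Create a 100-gigabyte volume (name and size are illustrative)
    purevol create --size 100G vol01

    # Create a host object and connect the volume to it; the volume is
    # then presented to that host on all of the array's host ports
    purehost create host01
    purehost connect --vol vol01 host01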


FlashArray Hardware Module Details

Although array-level high availability stems primarily from redundant components and interconnects, the principal FlashArray hardware modules are also internally resilient to most component failures, as the paragraphs that follow describe.

Controllers

Each FlashArray controller is a 2U rack-mounted chassis that houses:

Main board:
    Contains the 12-core processor complex that runs the Purity software, DRAM used to hold Purity code and for data buffering and staging, 7 PCI Express slots containing SAS, Infiniband, and host interface cards, and other internal components and interfaces

Boot drive:
    An SSD that holds two local copies of Purity for booting convenience, as well as log records containing diagnostic and service information

Host interfaces:
    Either (a) two dual-port 8 Gb/s PCI Express Fibre Channel interface cards, or (b) two dual-port 10GbE (iSCSI) cards, that provide the controller's four host ports

Storage shelf interfaces:
    Two dual-port PCI Express SAS interface cards whose 4-lane, 6 Gb/s ports provide a total maximum of 96 Gb/s data transfer capability to storage shelf I/O modules

Inter-controller interfaces:
    A dual-port PCI Express Infiniband interface card, used to interconnect the controllers in a highly-available FlashArray

Administrative interfaces:
    Gigabit Ethernet (GbE) ports, one of which connects to a network through which an array is administered, and video, serial, and USB ports used for initial configuration. Once configured, FlashArrays are administered from network workstations using either the browser-based GUI or the CLI.

Power and cooling:
    Redundant cooling fans and hot-swappable power supplies.

Storage Shelves

Pure Storage ships storage shelves fully populated with either 22 SSDs and two NVRAMs or 24 SSDs, but FlashArrays operate correctly and at full data redundancy (albeit at lower physical capacity) when drives are removed. RAID-3D technology protects segments of data rather than entire SSDs, so it is not necessary for all SSDs in an array to have equal capacities. [2]

Because Purity can support SSDs of different capacities simultaneously, storage capacity can be upgraded non-disruptively, drive-by-drive, while an array is online. When a drive is inserted in a storage shelf bay, Purity determines its operating parameters based on the capacity and model type it reports.

Pure Storage ships drives pre-installed in carriers that, when inserted in a storage shelf bay, connect to both of the shelf's SAS I/O modules, providing redundant access to both controllers in highly available arrays.

[2] SSDs must be supplied by Pure Storage, Inc., however, to guarantee compatibility with Purity.


Administrators can install and remove carriers while an array is online. No tools are required for SSD removal or installation (a Torx driver, supplied by Pure Storage, is required to unlock and lock NVRAMs), nor are any array administrative operations required to accommodate drive removals and additions.
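Although no administrative action is required, the result of a drive addition or removal can be confirmed from the CLI with the puredrive command (documented in Part IV). A hedged sketch, assuming a list subcommand like those of the other pure* commands:

    # Display every drive in the array with its capacity and status,
    # confirming that a newly inserted drive has been recognized
    puredrive list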

Intra-Array Connectivity

All interconnections between FlashArray components are redundant. These include:

Controller-to-controller:
    The two controllers in a highly available array are interconnected by two Infiniband links

Controller-to-drive:
    Each SSD and NVRAM is connected to an array's controller, or to both controllers in a highly available array, by two SAS links.

Controller to Drive Connectivity

Within each FlashArray storage shelf are two I/O modules (IOMs), containing SAS expanders that provide redundant paths between all drives (SSDs and NVRAMs) in the shelf and one or two controllers. Each drive is housed in a carrier, illustrated in Figure 1.2, containing an interposer that connects the drive to both I/O modules, allowing a controller to address any drive via either SAS path.

Figure 1.2. Drive and Interposer in Carrier

SAS links connect the I/O modules in a shelf to PCI Express SAS interface cards mounted in both controller chassis. Figure 1.3, reproduced from a FlashArray model FA-420 Installation Guide, is a schematic example of drive connection redundancy in a highly available array with two storage shelves. Each port on a controller's SAS interface card is part of a loop that connects one of the I/O modules in each storage shelf to it and to a port on a SAS interface card in its partner controller.


Figure 1.3. Example of Redundant FlashArray Controller to Storage Shelf Connectivity

Each drive connects to both of its storage shelf's I/O modules, providing continued connectivity in the event of failure of any component in the drive-to-controller I/O path:

Controller failure: If a controller fails entirely, its partner controller remains fully connected to all storage shelves and the drives within them on both paths

SAS interface card failure: If one of a controller's SAS interface cards fails, the controller remains connected to all storage shelves and drives via its other SAS card (dark blue and light green paths or light blue and dark green paths), and the partner controller remains fully connected to both paths

Drive connection failure: If any one segment of a SAS path is broken, one controller remains fully connected to all drives, and the other remains connected via its alternate path


I/O module failure: If a storage shelf I/O module fails, all drives remain connected to both controllers through the shelf's other I/O module.

Purity detects these failure cases and adjusts its I/O path management automatically. The software logs events that affect availability and generates electronic mail alerts to inform administrators of component failures, so they can be remediated before service is disrupted.

Inter-Controller Connectivity

The dual-path Infiniband network that interconnects an array's controllers is redundant. Highly available arrays running Purity v3.2 include one controller pair; each controller connects directly to its partner's two Infiniband ports. In the future, scalable arrays that include two or more controller pairs will utilize Infiniband switches to provide redundant symmetric communication among all controllers. Should one Infiniband path (link, switch, or interface port) fail, exchange of data and state information continues on the other path.

In addition, each controller's four host (Fibre Channel or iSCSI) ports enable it to connect redundantly to two or more storage network fabrics if they are available. Actual redundancy of host connections is determined by the design of the storage network.

External Connectivity

Similarly, connections between a highly available FlashArray and its client hosts can be configured redundantly to protect against storage network failures.

Host Connectivity

Each FlashArray controller includes two PCI-to-Fibre Channel interface cards, each with two ports that can be connected to a Fibre Channel storage network fabric (or two separate fabrics for completely redundant host-to-array connections), as Figure 1.4 illustrates.

Figure 1.4. Redundant Host Connections via Separate Fabrics

Administrative Network Connectivity

Each FlashArray controller includes two gigabit Ethernet network ports and a serial port, any or all of which can be connected to administrative workstations running virtual terminal emulation packages to monitor and control the array using the Purity CLI. The Ethernet ports also enable browser-based administration using the Purity GUI. An array with multiple controllers can be completely administered from either of these ports on any of its controllers, with the exception of a few component-specific operations such as controller reboots and firmware upgrades.

FlashArray End-to-End Resiliency

While SSDs are more reliable than rotating magnetic disk drives, when they do fail, they fail in ways different from the typical failure modes of rotating magnetic disks. The FlashArray architecture is specifically designed to provide array-wide resiliency while delivering consistent performance from flash-based storage. The FlashArray architecture provides resiliency on three levels:

Array availability: All FlashArray hardware components are redundant, enabling arrays to sustain all single component failures, as well as many multiple-component ones, without impacting availability. In addition, most failed components can be replaced while the array is operating

Data resiliency: Multiple checksums (parity and other) ensure that data written by hosts is returned as written every time. Detection and correction of data errors is particularly important with flash, which has a higher raw bit-error rate than magnetic disk storage. Read and Write Processing describes data error detection and correction in the I/O path.

Performance consistency: The architecture exploits the fast access time of flash storage to maintain consistent high performance as arrays ride through failure and maintenance events.

The sections that follow describe the architecture that enables FlashArrays to continue to deliver high-performing, reliable data access services even when component failures and data errors occur.

Device and Data Resiliency

Each FlashArray storage shelf contains twenty-two SSDs and two NVRAM modules. The SSDs use the SATA protocol to communicate with an interposer that converts the protocol to dual-channel SAS (see Figure 1.3), so that each drive is connected through SAS expanders to both controllers in an HA pair, as Figure 1.4 illustrates.

The NVRAMs use the SAS protocol to communicate with controllers. Using the SAS protocol provides both controllers in an HA pair with direct access to all storage, and moreover, implements the key SCSI persistent group reservation feature, used by Purity to determine which controller “owns” a device at any given instant.

The storage shelves that house SSDs and NVRAMs include redundant fans, power supplies, power distribution modules, and SAS I/O modules (IOMs). A passive mid-plane in the shelf chassis provides redundant connections between all active components.

Drive Identification, Authentication, and Quorum

Unlike disk-based arrays with their fixed RAID group geometries, FlashArray data protection is done at the data object level. As the Purity software continuously accommodates new data and reduces and reoptimizes existing data, it augments each protected data object with metadata that makes every drive self-describing and allows array controllers to be stateless.

In addition, each drive contains a unique signature, identifying it both as a FlashArray drive and as part of a particular array's drive set (called an apartment by Purity). When an array boots, Purity reads this configuration information from each drive. The software requires a quorum, or minimum number of drives with correct signatures, before beginning operation. Incorrectly formatted drives are considered “foreign,” and are rejected.

As a consequence of these features, one can literally power an array off, rearrange the drives, and power it on again without affecting the correctness of the data they contain.
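The boot-time drive authentication and quorum check can be pictured with a short sketch. This is an illustration only, not Purity source; the signature fields, the magic value, and the quorum threshold are assumptions made for the example:

# Illustrative sketch of boot-time drive authentication and quorum.
from dataclasses import dataclass

@dataclass
class DriveSignature:
    vendor_magic: str   # marks the drive as a FlashArray drive
    apartment_id: str   # identifies the drive set it belongs to

def drives_ready_to_boot(signatures, apartment_id, quorum):
    """Return True if enough correctly-signed drives are present."""
    members = 0
    for sig in signatures:
        if sig.vendor_magic != "PURE":      # assumed magic value
            continue                        # "foreign" drive: rejected
        if sig.apartment_id == apartment_id:
            members += 1
    return members >= quorum

sigs = [DriveSignature("PURE", "apt-42")] * 20 + [DriveSignature("OTHER", "x")]
print(drives_ready_to_boot(sigs, "apt-42", quorum=12))   # True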

RAID-3D™

Unlike conventional RAID's drive-level protection, FlashArrays' RAID-3D technology protects individual data objects a few megabytes in size, called segments. Each segment is distributed across a subset of an array's drives.

As data enters a FlashArray, it is checksummed, reduced, copied to NVRAM, and stored in a DRAM segment buffer, along with descriptive metadata. Segment buffers may contain a mixture of newly entering data and data that Purity is reoptimizing (e.g., reducing or consolidating). RAID check data is computed and stored both within each segment buffer and across segment buffers.

When a set of segment buffers fills, Purity computes overarching RAID check data, and writes each buffer to a different SSD. The SSDs for each segment are chosen to equalize both loading and wear. Writes are partly serialized so as to minimize the potential impact of (lengthy) write operations on (much shorter) read operations.

Purity determines the properties of each segment (the number of drives it occupies and its RAID-3D protection parameters) dynamically, based on conditions within the array. As a consequence, each segment may have different placement and protection parameters. At a minimum, however, each segment is protected against loss in the event of concurrent failure of two SSDs that contain data from it.

Recovering from Drive Failures

An important consequence of RAID-3D segment-level data protection is that failure of an SSD is effectively a loss of physical capacity rather than of protection. When an SSD fails, or is removed from an array, Purity determines which segments are affected, and for each one, either reconstructs the missing piece from the surviving pieces, or marks the segment as a high-priority candidate for background storage reclamation and data reduction. In both cases, reconstructed data is distributed among the array's remaining SSDs. Because the RAID-3D minimum protection threshold is two concurrent failures, the array responds identically if a second SSD failure occurs while data from the first failure is still being reconstructed. Once recovery is complete, an additional double failure is sustainable, and so forth, until there is insufficient physical space to reconstruct data.

Reconstruction of a failed SSD's contents is both immediate upon discovery of the failure and automatic. No administrative intervention or dedicated spares are required. Reconstruction time varies, from a few tens of minutes to hours, depending on the amount of data to be reconstructed and the host I/O load on the array. The end result is fully-protected data in an array whose physical storage capacity is reduced by that of the failed drive. When a failed SSD is replaced, the replacement drive becomes part of the pool of available storage. Initially, a replacement SSD has a high probability of being selected for space allocation due to its low occupancy and short time in service.

In larger arrays, SSDs are organized as multiple “zones,” each of which is a separate failure recovery domain as described in the preceding paragraphs. Thus it is possible for a large array to sustain multiple double failures of SSDs in separate zones.
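A hedged sketch may make the segment-level response to an SSD failure concrete. The data structures and the triage rule (reconstruct versus queue for reclamation) are illustrative assumptions, not Purity internals:

# Illustrative sketch: only segments with a piece on the failed drive
# are touched; each is either reconstructed or queued for reclamation.
def handle_drive_failure(segments, failed_drive):
    for seg in segments:
        if failed_drive not in seg["drives"]:
            continue                           # segment unaffected
        if seg["mostly_live"]:
            reconstruct_missing_piece(seg, failed_drive)
        else:
            seg["reclaim_priority"] = "high"   # let reclamation rewrite it

def reconstruct_missing_piece(seg, failed_drive):
    # RAID-3D check data lets the missing column be recomputed from the
    # surviving columns and written to other SSDs (placeholder logic).
    seg["drives"] = [d for d in seg["drives"] if d != failed_drive]

segments = [{"drives": ["ssd0", "ssd3"], "mostly_live": True},
            {"drives": ["ssd1", "ssd2"], "mostly_live": False}]
handle_drive_failure(segments, "ssd3")
print(segments)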

NVRAMs and Resiliency

FlashArrays copy all data written by hosts to two high-performance NVRAM modules immediately upon its entry to the array. Data remains in NVRAM until it has been reduced (compressed and deduplicated) and written to an SSD as part of a segment. When the segment buffer containing a block of data has been written to an SSD, the data is persistent and protected by RAID-3D, so the NVRAM copies are obsoleted.

As soon as it has made two NVRAM copies of an incoming data block, the array sends a write completion notification to the host. Thus, from the host's perspective, write latency is very low.

NVRAMs preserve the integrity of writes that have been acknowledged but whose data has not yet been reduced and written to SSD media, should an array failure (such as a power outage) occur between those two events. When an array restarts after such a failure, it re-processes any host-written data that remains in NVRAM. Thus, data conveyed to a FlashArray by a host write that has been acknowledged is persistent, even if an array failure occurs before it has been stored on SSD media.

FlashArray NVRAMs are capable of retaining data for a minimum of three months with no external power supplied. Each FlashArray storage shelf contains two NVRAMs (in slots 0 and 23). In single-shelf arrays, data from each host write is stored in both. In arrays with two or more shelves, each block of incoming data is written to two NVRAMs, in a rotating pattern designed to equalize load on the devices.

When an NVRAM fails, the array generates an alert, but continues to operate normally (from the host perspective). A single-shelf array with a failed NVRAM is able to recover from a power failure, but would be susceptible to a second NVRAM failure. In a multi-shelf array, Purity generates an alert and flushes the redundant copies of data held by the failed NVRAM. Host write performance may decrease slightly during flushing, but the array continues to function normally, making redundant NVRAM copies of incoming data (unless only one NVRAM is functioning). Should all of an array's NVRAMs fail, the array stops accepting host writes, but continues to process the data it already holds. It writes all data accumulated in segment buffers to SSD storage, and continues to execute hosts' reads.
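The rotating two-NVRAM placement for multi-shelf arrays can be sketched as follows. The round-robin ring here is a stand-in for whatever load-equalizing policy Purity actually applies, and the device names are invented for the example:

# Illustrative sketch of rotating incoming blocks across NVRAM pairs.
from itertools import cycle

class NvramRotator:
    def __init__(self, nvrams):
        self._ring = cycle(nvrams)

    def pick_pair(self):
        """Return two distinct NVRAMs for the next incoming block."""
        return next(self._ring), next(self._ring)

rotator = NvramRotator(["shelf0-nv0", "shelf0-nv23",
                        "shelf1-nv0", "shelf1-nv23"])
print(rotator.pick_pair())   # ('shelf0-nv0', 'shelf0-nv23')
print(rotator.pick_pair())   # ('shelf1-nv0', 'shelf1-nv23')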

Array-Level Resiliency

Controller Resiliency

Each FlashArray controller is internally resilient, with redundant fans, power supplies, power distribution, Fibre Channel (or 10 GbE) host interfaces, SAS and Infiniband intra-array interfaces, and 1 GbE administrative network ports.

In a highly available FlashArray, the two controllers intercommunicate via a redundant Infiniband network. Provided that hosts are connected to both controllers, failure of a controller (or taking a controller out of service for maintenance) need not disrupt host access to data. Current FlashArrays consist of one controller or one controller pair. The architecture, however, accommodates scale-out capacity and performance expansion by interconnection of multiple controller pairs and the storage shelves behind them into a “single system image” array.

Host Connectivity

Each FlashArray controller has 4 host connection ports (8 Gb/s Fibre Channel or 10 GbE iSCSI). In a highly available array, both controllers handle incoming host I/O requests, so all of the array's volumes, seen by hosts as LUNs, are accessible via all array ports (unless access has been restricted by storage network zoning).

For fully redundant host access, each controller should have at least one port connected to each of two separate storage network fabrics, to which hosts should also be connected (see Figure 1.4). Where available on the host side, multipathing should be configured, with round-robin scheduling among the visible array ports. With such a configuration, the FlashArray remains accessible with the best possible performance in the presence of any failure in the host connection path (HBA, host port, switch, cabling, array port, array I/O card).


Controller High Availability

The FlashArray employs redundant active/active storage controllers to ensure high availability of the overall array. The controllers are connected in two ways: by dual Infiniband links, and by dual access to the shelves of SSDs via SAS. The Infiniband links are used to transfer I/O requests and metadata updates between the two controllers. The SSDs are used to determine quorum between the controllers (i.e., health status and role).

Under normal circumstances both controllers in a highly available array receive host I/O requests, but overall workflow within the array is determined by a single primary controller. The two controllers also use the Infiniband network to synchronize state and exchange work items as necessary. When a controller detects a potential failure of its partner, the two use SAS reservation “signatures” written on the SSDs to determine which of them should assume the primary role.

If a controller fails, I/O request execution continues normally. Hosts detect failed I/O requests addressed to the failed controller, usually by timeout, and, if storage network topology and host multipathing configuration permit, re-issue them to the surviving controller. Because both controllers have access to all SSDs and NVRAMs, in-flight I/O requests can either be completed by the surviving controller or reissued by hosts.
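The arbitration of the primary role can be pictured with a minimal sketch. It assumes, purely for illustration, that the controller holding reservation signatures on more SSDs wins; the actual decision rule Purity uses is not described in this guide:

# Illustrative sketch: decide the primary role from drive-held
# reservation signatures rather than the (possibly broken) interconnect.
def elect_primary(my_id, partner_id, drive_reservations):
    """Pick the controller that holds more SSD reservations."""
    mine = sum(1 for owner in drive_reservations.values() if owner == my_id)
    theirs = sum(1 for owner in drive_reservations.values()
                 if owner == partner_id)
    return my_id if mine >= theirs else partner_id

reservations = {"ssd0": "ct0", "ssd1": "ct0", "ssd2": "ct1"}
print(elect_primary("ct0", "ct1", reservations))   # ct0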

Non-Disruptive Software Upgrades

Purity software and firmware can be upgraded while an array is online executing host I/O requests. To apply an upgrade, one controller at a time is powered off, upgraded, and rebooted. (Powering a controller down causes hosts to re-issue I/O requests to the array's other controller.)

When upgrading is complete, Purity starts, and the upgraded controller joins the array (typically a 3-6 minute operation). The second controller is then powered off and upgraded in like manner. As long as hosts have full connectivity to both controllers, there is no service disruption.

Protecting the Integrity of Host-Written Data and Metadata

Because the FlashArray is a highly-virtualized storage system, extensive use of and reliance on metadata are part of the core design. As such, protection and assurance of the integrity of both user data and system-level metadata is of paramount concern. The FlashArray implements a complete end-to-end Data Integrity Fabric, described below, to ensure protection against and recovery from data consistency issues.

User Data Integrity

Purity protects all host-written data with multiple independent checksums to ensure both correct delivery of data and delivery of correct data. As each host-written sector enters an array, Purity computes a hash code that is stored with the (reduced) data and verified each time the data is read. This ensures the correctness of data along the entire software and storage path within the array. (The hash code is also used to identify potential duplicate sectors.)

Purity packs both host-written data and metadata “logs” that describe current data location and content in segment buffers for writing to SSD media. The software computes a checksum over each segment buffer page. Checksums are “salted” with their page addresses, making it possible to verify both data and metadata and virtual page location upon reading. This scheme protects against SSD read errors, both in data and positioning. Moreover, segments are self-describing, so in extreme cases, an array's entire body of metadata can be reconstructed from the contents of its SSD segments.
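The “salting” of page checksums with page addresses can be illustrated in a few lines of Python. CRC32 is an assumption here (the text does not name the algorithm); the point is that a page returned from the wrong location fails verification even if its bytes are internally intact:

# Illustrative sketch of an address-salted page checksum.
import zlib

def salted_checksum(page_bytes: bytes, page_address: int) -> int:
    salt = page_address.to_bytes(8, "little")
    return zlib.crc32(salt + page_bytes)

def verify_page(page_bytes: bytes, page_address: int, stored: int) -> bool:
    return salted_checksum(page_bytes, page_address) == stored

data = b"\x00" * 4096
stored = salted_checksum(data, page_address=1234)
print(verify_page(data, 1234, stored))   # True: right data, right place
print(verify_page(data, 1235, stored))   # False: read from the wrong page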


Chapter 2. The Purity Operating Environment

Externally, a FlashArray resembles a conventional disk array that exports block storage volumes to hosts via front-end host connection ports attached either to Fibre Channel storage network fabrics or directly to the hosts' Fibre Channel interfaces. As with conventional disk-based arrays, hosts communicate with storage network targets, and direct I/O requests to the targets' numbered logical units (LUNs1), each corresponding to a storage volume.

1 The acronym LUN is sometimes used as a synonym for the term volume. This guide generally avoids that usage.

Flash Memory: Like Disk, But Different

FlashArrays, especially the Purity Operating Environment software, are designed specifically to deliver maximum benefit from flash storage. As such, they abandon many of the common principles of disk-based array design, and instead adhere to alternate principles that derive maximum benefit from flash SSDs. The reason for the fundamentally different design is quite simply that flash SSDs behave differently than disks in some important ways.

Solid State Drives versus Disks

The atomic unit for reading and writing data on disks is the 512-byte sector. Disks read and write data, and protect it with ECC, in 512-byte units, so host reads and writes always devolve to multiples of that size.

The analogous unit for flash memory is the flash page. Flash pages, typically 4 or 8 kilobytes in size, are similar to disk sectors in that they are the units in which drives read data, and over which ECC is calculated. Writing data to flash memory differs from writing to a disk, however.

A disk can directly overwrite the data in any individual sector with no effect on other sectors. Flash memory, on the other hand, can only be written if it is in a baseline state. Once data has been written to a flash page, it must be “erased” before being overwritten. Solid state drives erase flash memory in large (~256 to 1,024 kilobyte) units called erase blocks. After erasure, each individual page in an erase block can be written once, but cannot be overwritten until its entire erase block has been erased again.
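A toy model of this write-once-then-erase constraint, with an illustrative page count:

# Toy model: pages may each be written once; rewriting any page
# requires erasing the whole block first. There is no per-page erase.
class EraseBlock:
    def __init__(self, pages=64):
        self.written = [False] * pages

    def write_page(self, i, data):
        if self.written[i]:
            raise RuntimeError("page must be erased before rewrite")
        self.written[i] = True

    def erase(self):
        # Erasing resets every page in the block at once.
        self.written = [False] * len(self.written)

blk = EraseBlock()
blk.write_page(0, b"hello")
blk.erase()                  # required before page 0 can be rewritten
blk.write_page(0, b"world")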

Disks also virtualize the storage they present. Hosts address disk sectors by a dense set of numbers between zero and the disk's capacity. Internally, however, a disk may remap sector numbers to alternate media areas over time. Disks do this mainly to avoid using sectors that they discover to be defective.

Solid state drives virtualize their flash pages as well. Hosts address virtual flash pages using numbers that are analogous to disk drive sector (often called logical block) numbers, but they have no awareness of the pages' locations within the devices.

SSDs' virtualization mechanisms attempt to minimize the effect of block erasure on hosts as far as possible, and in addition, attempt to equalize overwrites across all flash memory in order to maximize drive lifetime. A physical flash page may store data for thousands of different virtual page addresses during an SSD's lifetime.

Block erasure is functionally transparent to hosts, which read and write numbered virtual flash pages just as they would disk sectors. Typically, SSD reads significantly outperform disk reads, because there is no seeking or rotational latency. Writes often encounter unexpectedly large (10-20 times normal) latencies, however, because they trigger large block read-erase-overwrite cycles. These variations are especially noticeable in random workloads, because small writes to random page addresses have a high probability of causing large block erasures. One of Purity's key advantages is that it overcomes SSD write latency variability, and delivers consistently low latency in response to any pattern of host requests.

Disk and SSD Failure Modes

In addition to read errors corrected by drive ECC mechanisms, both magnetic disks and SSDs have three common failure modes:

Complete drive failure: Drive completely unresponsive

Read error: Uncorrectable errors discovered by ECC mis-compares when data is read

Undetected read error: Erroneous data delivered due, for example, to disk head mis-registration when reading or when data was originally written, or to pathological ECC failures that are not detected and reported by drive mechanisms.

With disks, these failures tend to be abrupt; they occur with no or relatively little warning, and once they occur, they are typically permanent. With SSDs, however, complete drive failures are relatively rare. Read errors tend to increase as a function of device age (measured in number of overwrites) and data age (months or years since most recently read), and can often be remedied by refreshing data, provided it can be recovered by RAID rebuilding techniques.

Disk-based arrays protect data against loss due to these failures by storing RAID check data calculated over corresponding data blocks on groups of drives. While they are effective at recovering unreadable data, device-based RAID groups have some limiting side effects:

Hardware requirements: They require dedicated “spare” drives to automate recovery from drive failures

Write amplification: They inherently “amplify” small host writes, because each one requires a “log incoming data, read previous data, calculate check data, write” sequence to maintain synchronization of check data with host-written data

Degraded performance during rebuilds: Application performance can degrade seriously for hours due to I/O interference while the contents of a failed disk are rebuilt on a spare

Configuration inflexibility: Once a RAID group is created, reformatting it (for example, to add or remove a drive, or to change stripe unit parameters) is very time and resource consuming.

Despite these limitations, disk-oriented RAID techniques are appropriate for conventional arrays, because disk failures tend to occur with little notice and be permanent, and because disk seek and rotational latencies effectively require geometrically regular, unchanging on-media data layouts.

SSDs do not incur large up-front latencies when data is accessed, and moreover, their dominant failure mode is localized read errors for which several alternative low-impact recovery techniques can be employed. On the other hand, it is usually appropriate to augment individual SSDs' error detection mechanisms so that RAID-like recovery mechanisms can be employed to recover data from read errors that elude the devices' own detection mechanisms.

Purity Operating Environment Goals

The Purity software architecture addresses the three fundamental challenges in flash array design:

Cost: The per-byte cost of flash memory is considerably greater than that of rotating magnetic disk. Purity keeps effective cost per byte low by reducing (deduplicating and compressing) data extensively, by ultra-efficient “micro-provisioning” of physical storage, and by “hardening” commodity-class SSDs so that they are suitable for use in the data center environment

Data reliability: Purity uses two basic techniques to maximize data reliability. First, it maximizes SSD life by (a) balancing long-term write load across all of an array's SSDs, and (b) using both NVRAM and DRAM as buffers to minimize write amplification. Second, it dynamically adapts RAID-3D protection parameters each time it allocates storage for incoming data, in order to compensate for missing drives, fluctuating SSD error rates, and other changes in conditions within the array

I/O performance: In addition to passing on the benefit of fast SSD access to hosts, Purity uses several mechanisms to keep I/O latency predictably low. For example, the software caches both data and metadata extensively, so most small read requests are satisfied without SSD access. Moreover, SSD writes are scheduled to minimize the probability of a lengthy write blocking short reads. In some situations, the software rebuilds data to satisfy reads, because it determines that to be faster than waiting for a blocking write to complete.

Designing for Flash

Designing an array solely for solid state storage means adhering to different principles than those that govern disk array designs. The most significant ways in which Purity differs from a disk-based array software design are that it:

Reads and writes anywhere: Because every block of data in an all-solid state array can be retrieved in roughly equal time, Purity prioritizes write efficiency and long-term I/O balance well above virtual volume address as it determines where to place incoming data

Maximizes the value of writes: Writing flash memory is “expensive” both because it often implies time-consuming read-erase-overwrite cycles and because it increases flash cell wear, resulting in increased read error rates. Purity structures and schedules SSD writes so as to minimize drives' internal write amplification, and to derive the maximum utility from each one

Reduces data aggressively: Purity uses FlashArray controllers' abundant processing power to reduce the size of data by deduplicating and compressing it as it enters the array, and to increase reduction efficiency throughout its life

Provisions storage precisely: Freed from the need to manage physical storage by disk sector, Purity provisions exactly as much storage as each reduced data block requires. Purity allocates space as data arrives, so each block of SSD storage is typically occupied by data from multiple volumes. There is no space wasted by partially filled fixed-size “chunks”


Adapts to changing error rates: Purity adjusts RAID-3D™ parameters dynamically as SSD error rates and other conditions within an array change over time, and also according to the nature of the data stored in each storage segment. The result is the appropriate level of protection for each segment of solid state storage at the time that data is written in it.

All of these principles would be either inappropriate for or impractical to apply in disk-based arrays. Together, they are the essence of what makes the Purity Operating Environment software “purpose-built for flash.”

Purity Architectural Highlights

Five aspects of the Purity architecture are especially instructive in illuminating how the software is specifically designed for the properties of flash solid state storage:

Data virtualization

Storage layout

RAID-3D data protection

Storage reclamation and capacity optimization

The I/O path traversed by host read and write requests.

The sections that follow describe these aspects of the architecture.

Data Virtualization

A FlashArray exports disk-like storage volumes to hosts. In a host's view, a volume consists of a consecutively numbered set of 512-byte sectors used for storing and retrieving data. As with any disk, the number of sectors is fixed (unless changed by a relatively infrequent administrative operation).

Internally, Purity completely virtualizes its volumes. The software allocates only as much physical storage as is required to hold volume blocks (sequences of sectors) in which hosts have written data. Moreover, it moves data periodically to consolidate it and to reclaim unused storage. Volume blocks' physical locations in an array have no fixed relationship to their virtual sector addresses.

The Virtualization Map

To manage constantly changing virtual-to-physical relationships, Purity maintains a persistent virtualization map, illustrated in Figure 2.1, that relates each virtual sector stored in an array to a physical location. The structure of the map allows Purity to re-size volume blocks and to relocate them anywhere within an array, for example, when it re-compresses or consolidates them.


Figure 2.1. The Purity Virtualization Map

When a host reads data, Purity locates it via the virtualization map. When hosts write data, Purity reduces it, allocates storage and writes it, and updates the virtualization map accordingly. There is no “in-place” overwriting of data. Space that contains data superseded by overwriting is continuously reclaimed for other use by a background task.
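The no-overwrite-in-place behavior can be sketched with a miniature map. The flat dictionary below is an illustrative stand-in for Purity's actual persistent map structure:

# Illustrative sketch: writes never overwrite in place; they repoint
# the map and leave the old location for the background reclaimer.
class VirtualizationMap:
    def __init__(self):
        self._map = {}          # (volume, sector) -> (segment, offset)
        self.superseded = []    # locations awaiting space reclamation

    def write(self, volume, sector, new_location):
        old = self._map.get((volume, sector))
        if old is not None:
            self.superseded.append(old)   # reclaimed later, not now
        self._map[(volume, sector)] = new_location

    def read(self, volume, sector):
        return self._map[(volume, sector)]

vmap = VirtualizationMap()
vmap.write("vol1", 100, ("seg7", 0))
vmap.write("vol1", 100, ("seg9", 512))   # overwrite: map is repointed
print(vmap.read("vol1", 100), vmap.superseded)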

Using Virtualization

Purity takes advantage of volume virtualization in several ways. For example:

Write balancing: The software selects SSDs on which to write data so as to equalize long-term write activity across an array

Space reclamation: When reclaiming space occupied by superseded data, the software moves adjacent data that is still “live” to whatever location in the array is most appropriate at the time

Data placement: When opportunistically reducing already-stored data, the software moves the reduced data to whatever location is most appropriate at the time of optimization.

Purity does not directly overwrite superseded data. Instead, it reduces the superseding data, moves it to write buffers that are currently being populated, and updates the virtualization map with the physical locations to which the buffers are scheduled to be written when filled. The software reclaims the space occupied by overwritten data during an on-going storage reclamation and optimization process.

Storage Layout

The Purity storage layout is designed to meet two main goals:

Cost: Keep effective cost low by utilizing solid state storage as efficiently as possible


Performance: Maximize performance and device lifetime by (a) balancing write load across all devices and (b) minimizing write amplification (implied “extra” writes that occur because of SSD block erasures and RAID parity updates).

Storage Layout Structures

Purity allocates and organizes data in solid state storage in conceptually rectangular structures suggested by Figure 2.2. The fundamental organizational unit is the write unit, an erase block-aligned group of consecutive logical pages2 on a single SSD. Purity reads data in units of one or more logical pages, and writes data in write units that consist of a fixed number of logical pages.

2 The logical page is Purity's atomic unit for reading data. It consists of one or more flash pages (the unit of data to which an SSD applies its internal ECC protection).

Purity allocates storage for writing data in units called segments, spread across a subset of an array's SSDs. Each SSD in a segment's subset contributes a column of adjacent write units to the segment. The software determines RAID-3D parameters individually for each segment it allocates. (See RAID-3D for a discussion of RAID-3D.)

Figure 2.2. The Purity On-Media Data Layout

Segment Allocation

Purity allocates solid state storage in segments consisting of a column of write units on each of several SSDs. To allocate a segment, the software does the following (a sketch follows the list):

Location selection: Selects a subset of the array's SSDs based on criteria that include storage capacity balance, number of active drives in the array, drive read error rates, and the type of data expected to be stored


Protection selection: Determines appropriate RAID-3D protection parameters based on the same criteria

Space reservation: Reserves enough free space for a column of write units on each SSD in the subset

Buffer allocation: Allocates DRAM buffers corresponding to each write stripe in the segment.
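A hedged sketch of these four steps follows. It reduces the selection criteria to a simple sort key; real Purity weighs capacity balance, drive count, error rates, and expected data type, and the thresholds below are assumptions for illustration:

# Illustrative sketch of the four segment-allocation steps.
def allocate_segment(ssds, width, data_kind):
    # 1. Location selection: prefer emptier, healthier drives.
    chosen = sorted(ssds, key=lambda d: (d["occupancy"], d["error_rate"]))
    chosen = chosen[:width]

    # 2. Protection selection: more check data for error-prone subsets
    #    (the 0.01 threshold is an assumption for the example).
    parity = 3 if max(d["error_rate"] for d in chosen) > 0.01 else 2

    # 3. Space reservation: one column of write units per drive.
    for d in chosen:
        d["reserved_columns"] += 1

    # 4. Buffer allocation: a DRAM buffer per write stripe (stubbed).
    buffers = [bytearray(1 << 20) for _ in chosen]

    return {"drives": chosen, "parity": parity,
            "buffers": buffers, "kind": data_kind}

ssds = [{"id": i, "occupancy": i * 0.1, "error_rate": 0.001,
         "reserved_columns": 0} for i in range(10)]
seg = allocate_segment(ssds, width=6, data_kind="dedup-heavy")
print(seg["parity"], [d["id"] for d in seg["drives"]])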

Populating Segments and Scheduling Writes

Purity loads incoming data into segment buffers “horizontally,” one write stripe at a time. Typically, the software populates several write stripe buffers simultaneously, each optimized for a specific type of data, and each with its own RAID-3D parameters. As it moves blocks of reduced data into a buffer, the software mirrors them in NVRAM for persistence in case of power failure, calculates RAID-3D check data, and adds log entries to reflect updates to its virtualization map.

When a write stripe's buffers are fully populated, Purity schedules a write for each of its write units. Purity always flushes entire write stripes for three reasons:

Efficiency: It uses drive bandwidth efficiently, because writes are relatively large

Minimal SSD write amplification: It minimizes SSDs' internal write amplification (read-erase-overwrite cycles), because buffers are aligned with erase blocks and entirely filled with useful data

Minimal array write amplification: It minimizes cross-SSD write amplification, because each write stripe contains one or more complete RAID rows.

Until all write units in a stripe have been successfully written, Purity retains the data they contain in NVRAM. This has two important consequences:

Host responsiveness: If an array controller restarts (e.g., due to external power failure) after part of a write stripe has been written, the software reconstructs the entire write stripe from NVRAM contents and rewrites it. Thus, once a host's write has been acknowledged, data is persistent, even if a controller must restart before it has been completely written to solid state storage

Write staggering: Individual write units can be written in sequence rather than concurrently. Because SSD writes can occupy drives for 10-20 times as long as typical reads, Purity staggers writes to keep as many devices as possible available for reading. Staggering is feasible because data can be retrieved from NVRAM if necessitated by a controller restart (see the sketch below).
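A minimal sketch of staggered stripe flushing: write units are issued a few at a time so that most SSDs stay free for reads, and the stripe's NVRAM copy is released only after every unit has landed. The concurrency limit is an assumption for illustration:

# Illustrative sketch of staggered write-unit flushing.
def flush_stripe(write_units, issue_write, release_nvram, max_inflight=2):
    inflight = []
    for unit in write_units:
        inflight.append(issue_write(unit))    # returns a completion waiter
        if len(inflight) >= max_inflight:
            inflight.pop(0)()                 # wait for the oldest write
    for wait in inflight:
        wait()
    release_nvram()    # safe: the whole stripe is now on SSD media

flush_stripe(
    write_units=["wu0", "wu1", "wu2", "wu3"],
    issue_write=lambda wu: (lambda: print(f"{wu} durable")),
    release_nvram=lambda: print("NVRAM copies freed"),
)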


RAID-3D

Figure 2.3. RAID-3D Protection Spheres

RAID-3D, illustrated conceptually in Figure 2.3, is designed around protection against data loss due to the common failure modes of flash SSDs rather than those of rotating disks.

RAID-3D protects both against drive read errors3 and against failures that affect entire write units and drives. It exploits SSDs' low access times to provide more comprehensive protection with lower impact on performance than is feasible with disk-based arrays. RAID-3D includes three orthogonal tiers of protection that Purity applies to each write stripe:

Tier 1: Uses per-logical-page checksums to detect read errors that are not corrected by drive mechanisms. Tier 1 detection is particularly helpful with latent read errors exposed during rebuilds of failed drive contents

Tier 2: Uses RAID-5 parity calculated over subsets of the write units in a stripe. Recovers from all single write unit read failures and many multiple failures

Tier 3: Uses RAID-6 type check data calculated over an entire write stripe. Recovers from all double failures and many situations involving failure to read more than two write units.

Each tier of RAID-3D protects against progressively more severe error scenarios, and naturally, recovery at each tier takes longer and has a greater impact on processing and I/O resources. Purity attempts to recover from read errors using Tier 2 mechanisms before resorting to those of Tier 3. At all tiers, however, RAID-3D covers the data in a segment, rather than the conventional disk-based array practice of covering an entire group of drives.

3 For example, Purity may rebuild data to satisfy a host read request rather than waiting for a lengthy write to complete so that a read can be scheduled. In disk-based arrays, rebuilding data is usually considered a last resort, to be employed only in cases of hard read failure.
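The escalation order can be sketched as follows; the recovery functions are placeholders, not Purity internals:

# Illustrative sketch of tiered RAID-3D recovery on a read.
def read_with_raid3d(read_page, tier2_recover, tier3_recover, checksum_ok):
    data = read_page()
    if checksum_ok(data):                 # Tier 1: error detection
        return data
    data = tier2_recover()                # Tier 2: RAID-5 parity
    if data is not None and checksum_ok(data):
        return data
    return tier3_recover()                # Tier 3: RAID-6 check data

result = read_with_raid3d(
    read_page=lambda: b"garbled",
    tier2_recover=lambda: b"good page",
    tier3_recover=lambda: b"good page",
    checksum_ok=lambda d: d == b"good page",
)
print(result)   # b'good page', recovered at Tier 2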

RAID-3D Flexibility

Purity determines RAID-3D parameters dynamically each time it allocates a segment of storage for writing, based on conditions within the array and the type of data for which the segment is to be optimized. For example, segments slated to contain a predominance of sectors that represent many duplicates may receive more extensive protection. Dynamically chosen RAID-3D parameters include the size and makeup of Tier 2 and Tier 3 RAID groups and the check data algorithms.

Whatever the cause of a read failure, Purity protects and rebuilds the contents of individual write units, not entire drives. When a drive does fail, the software only rebuilds write units that contain data, either:

On demand: When a host reads volume blocks that map to the failed drive

During storage reclamation: As it reclaims storage and further reduces data

In the background: For write units that are not rebuilt for the reasons above.

Purity minimizes rebuild time by distributing the data rebuilt from a failed drive, and with it the rebuilding I/O load, across the array. The software schedules rebuilding I/O so that active data is rebuilt at higher priority.

Purity does not change data layout when a drive is replaced or added to an array. The new drive's storage becomes part of the pool from which the software allocates segments. Allocation is biased in favor of newly added drives as long as their age and occupancy are low compared to those of other drives in the array.

Storage Reclamation and Data Reduction

Even today, “commodity” SSDs carry a significantly higher cost per gigabyte than disk drives. To achieve effective cost per gigabyte below that of disk-based arrays, Purity reduces data to eliminate redundancy, both when hosts write it and throughout its lifetime, and packs the reduced data in write stripe buffers for writing. If, for example, a 16 kilobyte block written by a host reduces to 7,125 bytes, Purity allocates only that much space in a write stripe for it. If later reduction shrinks it further to 6,800 bytes, the software reduces the storage allocated to it during space reclamation.

Purity continuously executes a background process that optimizes physical storage utilization by performing four closely integrated functions:

Storage reclamation: Reclamation of storage occupied by superseded host-written blocks or blocks trimmed by host command

Data reduction: Reduction of already-stored data by opportunistically deduplicating and compressing it as resources permit

Data consolidation: Consolidation of blocks of data with similar properties

Data refresh: Refreshing of unchanging data to forestall latent SSD bit errors.

These four functions work in concert to maximize the efficiency with which a FlashArray utilizes the physical storage available to it.

Storage Reclamation

Storage reclamation recovers the space occupied by superseded host-written blocks and by blocks trimmed by host command; it operates continuously as part of the background process described above.


Data Reduction

Purity reduces data as it enters an array, eliminating redundancy to conserve physical storage. The software uses three techniques to minimize the amount of physical storage consumed by host-written data:

Pattern elimination: Purity replaces host-written blocks that consist entirely of repeating patterns (for example, zeros, spaces, and others) with compact metadata that describes their contents

De-duplication: For each sector of data that enters an array, Purity computes a hash checksum which it compares against the checksums of already-stored sectors. If it finds a match, the software reads the stored sector and compares it with the new one to eliminate the possibility of “hash collisions.” Purity replaces duplicate blocks with pointers to the single copy of their contents4

Compression: Purity attempts to compress the data in blocks that remain after pattern elimination and deduplication,5 choosing among several well-known compression algorithms that balance compression speed against compactness of the result. The software stores compressed rather than original host-written data in NVRAM, in write unit buffers, and ultimately on solid state storage.

Purity adjusts its deduplication and compression parameters based on available resources and other factors. For example, the software first compares incoming sectors' hash checksums only against those cached in memory, so that it can respond to hosts' write commands as quickly as possible. Similarly, the thoroughness of compression depends on data's age, available processing resources, and the probable payback. For example, the space gained by compressing a block that had already been compressed to 25% of its original size is likely to be negligible, so Purity is unlikely to subject it to more rigorous compression.

4 Both proactive hash collision elimination and sector-by-sector deduplication are possible because of the very fast read access times of SSDs. They would be impracticable with disk-type access latencies. De-duplication at the sector level, rather than based on the much larger blocks commonly used by necessity in disk-based array deduplication, has been shown to increase reduction by as much as an order of magnitude in some cases.
5 Duplicate blocks that are compressible will already have been compressed when the first copy entered the array.
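The three reduction techniques can be pictured end to end in a short sketch. SHA-1 as the hash and zlib as the compressor are assumptions for illustration; the byte-for-byte comparison on a hash match is the collision check described under De-duplication:

# Illustrative sketch of pattern elimination, deduplication, and
# compression applied in order to an incoming sector.
import hashlib, zlib

store = {}          # hash digest -> stored (compressed) bytes

def reduce_sector(sector: bytes):
    # 1. Pattern elimination: a pure repeating pattern needs no storage.
    if sector == sector[:1] * len(sector):
        return ("pattern", sector[:1])

    # 2. Deduplication, with a read-and-compare to rule out collisions.
    digest = hashlib.sha1(sector).digest()
    if digest in store and zlib.decompress(store[digest]) == sector:
        return ("dup", digest)

    # 3. Compression of whatever survives steps 1 and 2.
    store[digest] = zlib.compress(sector)
    return ("stored", digest)

print(reduce_sector(b"\x00" * 512))            # ('pattern', b'\x00')
print(reduce_sector(b"unique data" * 40)[0])   # 'stored'
print(reduce_sector(b"unique data" * 40)[0])   # 'dup'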

Data Consolidation

Purity attempts to consolidate data blocks with similar properties, such as age, duplication level (number of distinct host-written sectors whose contents it represents), and proximity of volume sector addresses, in the expectation that hosts are likely to treat data with similar properties similarly. For example:

Adjacency: Blocks with adjacent volume sector addresses often belong to the same file, directory, database table, or virtual machine image, and as such, tend to be accessed at the same time. Although retrieving non-adjacent blocks from solid state storage is much faster than it is with rotating disks, being able to retrieve many blocks of a host's read request with one SSD read increases net efficiency

High duplication: Blocks that represent many duplicate host-written sectors are likely to “live” for a long time. Consolidating them creates segments that change little or not at all over time, and that are therefore unlikely candidates for storage reclamation, but good candidates for capacity optimization

Low duplication: Conversely, blocks with few (zero or one) duplicates are likely candidates for deletion. Purity consolidates these in segments that therefore become likely candidates for storage reclamation and capacity optimization


Age: Most data is accessed less frequently as it ages. When reclaiming storage, Purity attempts to move blocks to segments with blocks of similar age. Segments containing older blocks are less likely to be modified, and thus are low-priority candidates for storage reclamation.

Data Movement

As Purity consolidates data over time, segments tend to fall into groups according to the likelihood that their contents will change. Purity frequently re-processes segments whose contents are likely to change frequently, but as data ages or becomes highly duplicated, there is less to be gained from attempting to reclaim storage from the segments it occupies.

But data that is stored in flash and is not accessed for long periods (months) is subject to deterioration that manifests itself in the form of read errors when the data is finally retrieved. To counter this eventuality, Purity tracks the “age” (time since last re-processing) of segments, and schedules them for storage reclamation and capacity optimization when they have been untouched for longer than a period of roughly a month (a sketch follows the list). Reprocessing these segments presents the opportunity for:

Further reduction: For example, by deduplication against an array's complete database of hash signatures rather than just those held in cache

RAID adjustment: For example, to increase the level of protection for data that represents many duplicate sectors

Refresh: Rewriting dormant data in different locations to avoid the possibility of future read errors due to media deterioration.
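A minimal sketch of the age-driven scheduling described above, with a 30-day threshold standing in for the “roughly a month” in the text:

# Illustrative sketch: queue segments untouched for about a month.
import time

MONTH_SECONDS = 30 * 24 * 3600

def segments_due_for_reprocessing(segments, now=None):
    now = now or time.time()
    return [s for s in segments
            if now - s["last_reprocessed"] > MONTH_SECONDS]

old = {"id": "seg1", "last_reprocessed": time.time() - 40 * 24 * 3600}
new = {"id": "seg2", "last_reprocessed": time.time()}
print([s["id"] for s in segments_due_for_reprocessing([old, new])])  # ['seg1']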

Storage Reclamation and Capacity Optimization Priorities and Tradeoffs

Purity constantly adjusts its capacity optimization and storage reclamation parameters to balance the utilization of resources (processing, DRAM buffers, and NVRAM), responsiveness to clients, and the likely payback from transformation.

For example, when data first enters an array, Purity performs a cursory reduction, partly aimed at channeling the data to the most appropriate write buffer. Reduction at this stage is cursory for two reasons:

Unknown payback: New data's lifetime may be short. For example, hosts often delete temporary files and overwrite the virtual storage they occupy shortly after creating them. At best, the payback of exhaustive reduction of new data is unknown

Responsiveness to hosts: Initial reduction precedes copying to NVRAM, which in turn gates an array's response to the host's I/O command. To keep host-perceived latency low, the software minimizes the processing that precedes its response to host I/O commands.

Similar balances between goals and resources pervade all Purity processes. For example, the software re-compresses data opportunistically as it moves between segments during storage reclamation:

Probability of payback: Data that has only been compressed to, say, 90% of its nominal size is regarded as a good candidate for more thorough compression. Conversely, the gain from further compressing data that is already at 25% of its nominal size is likely to be negligible


Availability of resources: If processors are busy and NVRAM is filling up faster than data can be flushed, Purity may forego compression so that segments can be flushed sooner, and the NVRAM they occupy can be freed to accommodate more incoming data.

Thus, as it simultaneously absorbs host-written data and optimizes storage utilization by continuous reduction, Purity constantly adjusts its own operating parameters and relative priorities to balance capacity efficiency with the performance it delivers to hosts. Because storage reclamation and capacity optimization are continuous, data that is not optimally reduced when it first enters the array gradually becomes optimized as it ages, both with respect to its size and to its location relative to other data.
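The payback-versus-resources tradeoff can be reduced to a small decision function. The thresholds below are illustrative assumptions, not Purity's actual tuning:

# Illustrative sketch of the recompression decision.
def should_recompress(current_ratio, cpu_busy_frac, nvram_full_frac):
    """current_ratio: stored size / nominal size (e.g. 0.9 = 90%)."""
    if nvram_full_frac > 0.8 or cpu_busy_frac > 0.9:
        return False              # flush and free NVRAM first
    return current_ratio > 0.5    # little left to gain below ~50%

print(should_recompress(0.90, cpu_busy_frac=0.3, nvram_full_frac=0.2))  # True
print(should_recompress(0.25, cpu_busy_frac=0.3, nvram_full_frac=0.2))  # False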

Read and Write Processing

Purity digitally “signs” and reduces data entering an array before persisting it, and re-expands and verifies it before delivering it in response to host read requests. This section describes Purity's write and read processing paths.

Write Processing

Figure 2.4. The Purity Write Processing Path

FlashArray controllers receive hosts' write requests and the accompanying data via a storage network. Purity strips network protocol and computes a hash signature for each sector (step 1 in Figure 2.4). Signatures are associated with sectors for as long as they are stored in the array. They serve two purposes:

De-duplication: Purity compares sectors' signatures to discover duplicates. The software replaces duplicate sectors with pointers to the common stored data

End-to-end data verification: Immediately prior to delivering sectors read by a host, Purity re-computes their signatures and compares them with the stored values to verify that the data being delivered is in fact what was originally written. If verification fails, Purity uses the most appropriate RAID-3D mechanism to reproduce correct data.

As Purity computes hash signatures, it also performs pattern elimination and lightweight compression (step 2). (Long-lived data is reduced more thoroughly during ongoing storage reclamation and data reduction.)


Purity moves reduced data to a write stripe buffer, and copies it to two NVRAMs (step 3), after which it sends ending status to the host that made the write request. All further write path operations occur after hosts perceive that their requests are complete, so write latency is consistently low from the host's perspective.

Data remains in NVRAM until it has been written to solid state storage. If a controller reset (e.g., recovery from power failure) occurs before Purity has written data to solid state storage, the software “replays” it from NVRAM upon restarting. Consequently, once a host receives ending status from an array, the data it wrote is persistent, even if an outage occurs before it has been written to solid state storage.

As it packs data into write unit buffers, Purity again reduces it (step 4) by deduplication and, resources permitting, more thorough compression. The exhaustiveness of reduction at this stage depends primarily upon array activity and the adequacy of available NVRAM to accommodate the incoming write load.

Purity packs reduced host-written data into write unit buffers for writing to solid state storage. As it fills buffers, it calculates RAID-3D check data.

When the buffers that make up a write stripe fill, Purity schedules writes to solid state storage (step 5). The software staggers SSD writes so that as many drives as possible remain available for reading while data is being written. By writing aligned buffers that are multiples of SSD erase block sizes, and that are filled with useful data, the software avoids write amplification within the SSDs. Similarly, because it writes entire RAID stripes, there is also no write amplification at the array level.

When an entire write stripe has been written, Purity frees the NVRAM occupied by the data for further use as temporary persistent storage.

Read Processing

Figure 2.5. The Purity Read Processing Path

Read processing, illustrated graphically in Figure 2.5, is essentially write processing in reverse, with two exceptions:

No NVRAM copy: Purity does not copy data read by hosts to NVRAM, because it can be re-read directly from solid state storage if necessary


NVRAM lookaside: The software “looks aside” to determine whether the data requested by a host is still in NVRAM, and if so, retrieves it immediately from the corresponding DRAM write unit buffers, bypassing the remainder of the read path.

When it receives a host's read command and determines that the requested data is not in its NVRAM, Purity processes the command as follows:

Location: Determines the locations of the requested data from its virtualization map

Reading: Reads the logical pages containing the requested data (step 1), recalculates and validates checksums (step 2) to verify page contents, and performs any necessary RAID recovery

Data extraction: Unpacks the requested volume blocks from their logical page buffers (step 3)

Data expansion: Re-expands (decompresses) the blocks (step 4); for patterned blocks, reconstructs their contents

Data verification: Recalculates and checks each sector's hash signature to verify that the data being delivered is identical to that which the host originally wrote (step 5). Reconstructs correct data if necessary

Data encapsulation: Encapsulates verified blocks in network protocol, and schedules transmission to the host (step 6).

When all data specified in a read request has been transmitted, Purity sends an ending status message to the host that made the read request.
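The read path, including the NVRAM lookaside, can be sketched as a single function with its dependencies injected. All structures here are stand-ins for illustration:

# Illustrative sketch of the read path described above.
def read_sector(volume, sector, nvram_index, vmap, read_pages,
                decompress, verify_signature):
    # NVRAM lookaside: serve recently written data from DRAM buffers.
    buffered = nvram_index.get((volume, sector))
    if buffered is not None:
        return buffered

    location = vmap[(volume, sector)]          # 1. locate via the map
    raw = read_pages(location)                 # 2. read, checksum, RAID
    data = decompress(raw)                     # 3-4. unpack and expand
    if not verify_signature(volume, sector, data):   # 5. verify
        raise IOError("signature mismatch; RAID-3D recovery needed")
    return data                                # 6. ready to encapsulate

data = read_sector(
    "vol1", 7,
    nvram_index={("vol1", 7): b"fresh"},
    vmap={}, read_pages=None, decompress=None, verify_signature=None,
)
print(data)   # b'fresh', served from the lookaside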

Proactive Troubleshooting

Purity supports multiple troubleshooting mechanisms that accommodate a range of data center operating procedures. FlashArray troubleshooting is:

Automated: Arrays detect and report potential problems without human intervention

Proactive: Arrays collect information about operating conditions periodically, on demand, and immediately when significant events occur, so that problems can be remedied before they become critical.

The FlashArray troubleshooting mechanisms generally assume that there will be active participation by Pure Storage Technical Support in maintaining FlashArray “health.” However, for data centers whose operating procedures do not permit outside connections to equipment, troubleshooting reports can be directed to internal email addresses, or displayed on a GUI or CLI console.

Information Sources

FlashArray troubleshooting relies on two basic information sources:

Log: Purity continuously logs a variety of array activities, including performance summaries, hardware and operating status reports, administrative actions, and others


Alerts: For certain array state changes, and events that are potentially significant to array operation, the software generates alert messages and transmits them to one or more user-selectable destinations for remedial action.

Reporting Mechanisms

The phonehome facility is a user-controlled secure direct link between a FlashArray and the Pure Storage Technical Support web site. The link is used to transmit log contents and alert messages to Pure Storage so that when diagnosis or remedial action is required, complete recent history about array performance and significant events is available to support technicians.

If the phonehome facility is enabled by the array administrator, Purity automatically transmits log and alert information directly to Pure Storage Technical Support via a secure network connection. Log contents are transmitted hourly, and stored at the support web site, enabling detection of array performance and error rate trends. Alerts are reported immediately when they occur so that timely action can be taken.

An administrator can enable and disable the phonehome facility at any time. All communication between an array and Pure Storage Technical Support is secure, using either the secure shell (ssh) or secure hypertext transport (https) protocol, selected by the software. A proxy hostname and port may be specified for https communication if required.

In addition, an administrator can send log contents to Pure Storage Technical Support on demand. All log information stored in the array, or information only for the current or previous day, can be sent on demand. Transmission of log information can be status-checked, and cancelled if required.

Alerts

For events that might affect array operation adversely (generally related to component or communication failure), or that may be of informational interest to array administrators (e.g., an array coming online), Purity generates and delivers alert messages.

Purity logs all alerts, and in addition, includes three administrator-controlled mechanisms for immediate delivery:

The phonehome facility: If the phonehome facility is enabled, the software delivers alert messages to Pure Storage Technical Support immediately upon event occurrence. If phonehome is disabled, alerts are logged, and delivered when the facility is next enabled, or alternatively, when the purearray phonehome command or the Send Now button in the Support Connectivity view in the GUI SYSTEM tab is used to cause on-demand delivery

Electronic mail: An administrator can use the purealert create command, or the equivalent buttons in the GUI SYSTEM view, to manage a list of email addresses to which certain alert messages are sent. Individual email addresses can be added to and removed from the list, and transmission of alert messages to specific addresses can be temporarily enabled or disabled without removing them from the list (see the sketch following this list)

SNMP traps: For each alert it generates, Purity can send an SNMP trap message to one or more designated SNMP manager systems. Administrators can specify the network hostnames or IP addresses of managers designated to receive traps. Communication with SNMP managers is secure; the FlashArray administrator can specify the SNMP authentication and encryption parameters to be used when communicating with each designated SNMP manager.
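A minimal sketch of email alert list management follows. Only purealert create is named above; the disable and delete verbs are assumptions based on the verb conventions described in Chapter 4, and the address is illustrative:

purealert create storage-admins@example.com    # add an address to the alert list
purealert disable storage-admins@example.com   # assumed verb: suspend delivery without removing the address
purealert delete storage-admins@example.com    # assumed verb: remove the address from the list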

The remoteassist Facility

In rare instances, the most efficient way to diagnose problems and service an array is direct intervention by a Pure Storage Technical Support representative. Purity's administrator-controlled remoteassist facility enables a remote Pure Storage technician to communicate securely with an array, effectively establishing an administrative session for diagnosis and service.

Remoteassist sessions are controlled by the array administrator, who opens a secure channel between the array and Pure Storage Technical Support, making it possible for a technician to log in to the array. The administrator can check session status and close the channel at any time.

Summarizing the Purity Operating Environment

Purity enables FlashArrays to achieve a demanding set of apparently conflicting goals--exceeding the performance, reliability, and administrative simplicity of disk-based arrays, while delivering usable storage capacity at price points below those of disk-based storage. These goals have been achieved by replacing conventional disk-based array design with an architecture specifically optimized for both the advantages and constraints of flash SSDs. The main design principles that distinguish Purity from array controller software designed for disks are:

Read and write anywhere: With solid-state storage, there is little need for logically adjacent data to be physically adjacent. The penalty for “gather reading” large blocks is so much lower than with disks as to be negligible. When placing data on physical storage, Purity prioritizes write efficiency and long-term I/O balance above location in virtual volume address space

Maximize the “value” of each write: Writing data to flash memory often results in time-consuming read-erase-overwrite cycles that can accelerate device wear-out. Purity structures and schedules SSD writes so that each one updates one or more full erase blocks with new or re-reduced data. Read-modify-write cycles are essentially eliminated and writes are balanced across all of an array's flash storage, optimizing performance while maximizing media lifetime

Provision storage precisely: Freed from the need to allocate in multiples of disk sectors, Purity provisions exactly as much physical storage as each host-written block of data requires after it is reduced. Volumes share the storage in each flash segment as needed, and there is no wastage of physical storage due to allocation in large “chunks” that are only partly filled with data

Reduce data aggressively: Purity uses FlashArrays' abundant processing power to reduce data as it enters the array, and to increase reduction efficiency throughout its life

Adapt to changing read error rates: Purity adjusts its RAID-3D parameters dynamically as SSD utilization, error rates, and other conditions within an array change over time. The result is the appropriate level of protection for both new and mature data in each solid state storage segment.

These principles, which would be either inappropriate or impossible to adhere to in disk-based arrays, couple with new levels of administrative simplicity and efficient problem resolution to make FlashArrays unique in the enterprise storage space. Purity propels flash SSD storage out of specialty niche applications and into the data center cost:performance mainstream.

Part II. FlashArray Concepts and Managed Objects

Table of Contents

3. FlashArray Storage Capacity and Utilization
   Array Capacity and Storage Consumption
      FlashArray Storage States
      Reporting Array Capacity and Storage Consumption
   Volume and Snapshot Storage Consumption
      Provisioning
      Data Reduction
      Snapshots and Physical Storage
   Measuring Volume Storage Usage
      Reporting Volume Size and Storage Consumption
   The FlashArray Data Lifecycle
4. FlashArray Managed Objects
   Physical and Virtual Objects
      Object Naming
   The Principal Virtual Objects
      Common Operations on Virtual Objects
   Volumes
      Managing Volume Size
      Immediate Volume Eradication
      Changes in Volumes' Storage Consumption
      Associated Objects and Attribute Values
   Snapshots
   Hosts and Host Groups
      Host and Host Group Attributes
   Host-Volume Connections
      Private and Shared Connections
      Properties of Shared Connections
      LUN Management
   Managed Hardware Objects
      Hardware Component Naming
      FlashArray Hardware Components
      The Array
      Solid State Drives
      FlashArray Ports
      Exporting Volumes to Hosts

Chapter 3. FlashArray Storage Capacity and Utilization

The two keys to FlashArray cost-effectiveness are highly efficient provisioning and data reduction. One of an array administrator's primary tasks is understanding and managing physical and virtual storage capacity. This chapter describes the ways in which physical storage and virtual capacity are used and measured.

Array Capacity and Storage Consumption

At the array level, administrators monitor physical storage consumption, and manage it by adding storage capacity or relocating data sets when available (unallocated) storage becomes dangerously low.

FlashArray Storage States

The physical storage in a FlashArray can be in one of four states, illustrated in Figure 3.1.

Figure 3.1. FlashArray Physical Storage States

The three states of FlashArray storage that hold data are:

Volume data: Reduced host-written data that is not duplicated elsewhere in the array, and descriptive metadata

"Shared" (deduplicated) data: Data that comprises the contents of two or more sector addresses (in the same or different volumes—FlashArray deduplication is array-wide)

"Stale" (overwritten or deleted) data: Data representing the contents of virtual sectors that have been overwritten or deleted by a host or by an array administrator. Such storage is deallocated and made available for future use by the continuous storage reclamation process, but because the process runs asynchronously in the background, deallocation is not immediate.

The remaining storage in an array is in the fourth state—unallocated and available for storing incoming data.

Reporting Array Capacity and Storage Consumption

The purearray list --space command reports an array's physical storage capacity and the amount of storage occupied by data and metadata, as the output of the sample CLI command in Example 3.1 illustrates.

Example 3.1. purearray list --space Example

pureuser@df-ha-34-35> purearray list --space
Name         Capacity  Data Reduction  System  Shared Space  Volumes  Snapshots  Total
df-ha-34-35  10.05T    5.3 to 1        1.77T   2.78T         2.09T    409.08G    7.04T

In this example:

Capacity: Aggregate physical storage capacity of all SSDs connected to the array

Data Reduction: The ratio of virtual sectors of all volumes to which hosts have written data to the amount of physical storage the data occupies

System: Physical storage occupied by RAID-3D parity and other metadata

Shared Space: Physical storage occupied by shared (deduplicated) data.

Volumes: Physical storage occupied by non-shared data (data that appears in only one volume)

Snapshots: Physical storage occupied by volume snapshots.

Total is the sum of the four categories of occupied physical space. The Volumes view of the GUI Storage tab displays the same information in graphical and text form.
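As a check on the figures in Example 3.1, the Total column is the sum of the four occupied-space categories (409.08G is approximately 0.40T):

1.77T (System) + 2.78T (Shared Space) + 2.09T (Volumes) + 0.40T (Snapshots) ≈ 7.04T (Total)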

Volume and Snapshot Storage Consumption

FlashArrays present disk-like volumes to connected hosts. They also maintain immutable snapshots of volume contents. As with conventional disks, a volume's storage capacity is presented as a set of consecutively numbered 512-byte sectors into which data can be written and from which it can be read. Hosts read and write data in blocks (consecutively numbered sequences of sectors). Purity allocates ("provisions") storage for data written by hosts, and reduces the data before storing it and throughout its lifetime.

Provisioning

The provisioned size of a volume is its capacity as reported to host storage administration tools. As with conventional disks, the size presented by a FlashArray volume is nominally fixed, although it can be increased or decreased by an array administrator (host-side administrative action may be required to recognize volume size changes). To optimize physical storage utilization, however, FlashArray volumes are:

Thin-provisioned: Like conventional arrays that support thin provisioning, FlashArrays do not allocate physical storage for volume sectors that no host has ever written, or for trimmed (expressly deallocated by host or array administrator command) sector addresses

Micro-provisioned: Unlike conventional thin provisioning arrays, FlashArrays allocate only the exact amount of physical storage required by each host-written block after reduction. In FlashArrays, there is no concept of allocating storage in "chunks" of some fixed size.

Data Reduction

The second key to FlashArray cost effectiveness is data reduction—elimination of redundant data by three basic techniques:

Pattern elimination: When Purity detects sequences of incoming sectors whose contents consist entirely of repeating patterns, it stores a description of the pattern and the sectors that contain it rather than the data itself. The software treats zero-filled sectors as if they had been trimmed—no space is allocated for them.

Duplicate elimination: Purity computes a hash value for each incoming sector and attempts to determine whether another sector with the same hash value is stored in the array. If so, the sector is read and compared with the incoming one to avoid the possibility of aliasing. Instead of storing the incoming sector redundantly, Purity stores an additional reference to the single data representation. Purity deduplicates data globally (across an entire array)—if an identical sector is stored in an array, it is a deduplication candidate, regardless of the volume(s) with which it is associated

Compression: Purity attempts to compress the data in incoming sectors, cursorily upon entry, and more exhaustively during its continuous storage reclamation background process

Purity applies these three techniques to data as it enters an array, as well as throughout the data's lifetime.

Data Reduction: An Example

Figure 3.2 shows a hypothetical example of the cumulative effect of FlashArray data reduction on physical storage consumption.

Figure 3.2. Data Reduction Example

In this example, hosts have written data to a total of 1,000 unique sector addresses:

Patterned: 100 blocks contain repeated patterns, for which Purity stores metadata descriptors rather than the actual data

Duplicate: 200 blocks are duplicates of blocks already stored in the array; Purity stores references to these rather than duplicating stored data

Compressed: The remaining 70% of blocks compress to half their host-written size; Purity compresses them before storing (and during continuous storage reclamation).

Thus, the net physical storage consumed by host-written data in this example is 35% of the number of unique volume sector addresses to which hosts have written data.
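The 35% figure follows directly from the three categories above:

100 patterned sectors  -> ~0 physical storage (metadata descriptors only)
200 duplicate sectors  -> ~0 physical storage (references to existing data only)
700 remaining sectors  -> 350 sectors' worth of physical storage (2:1 compression)

350 / 1,000 = 35% of the unique sector addresses written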

The example is hypothetical; each data set reduces differently, and unrelated data stored in an array can influence reduction. Nevertheless, administrators can use the array and volume measures reported by the CLI and GUI to estimate the amount of physical storage likely to be consumed by data sets similar to those already stored in an array.

Snapshots and Physical Storage

FlashArray snapshots occupy physical storage only in proportion to the number of sectors of their source volumes that are overwritten by hosts, as Figure 3.3 illustrates.

Figure 3.3. Snapshot Space Consumption

In this example, two snapshots of a volume, S1 and S2, are taken at times t1 and t2 (t1 prior to t2). If a host writes data to the volume after t1 but before t2, Purity preserves the overwritten sectors' original contents and associates them with S1 (i.e., space accounting charges them to S1). If in the interval between t1 and t2 a host reads sectors from snapshot S1, Purity delivers:

For sectors not modified since t1: Current sector contents associated with the volume

For sectors modified since t1: Preserved volume sector contents associated with S1.

Similarly, if a host writes volume sectors after t2, Purity preserves the overwritten sectors' previous contents and associates them with S2 for space accounting purposes. If a host reads sectors from S2, Purity delivers:

For sectors not modified since t2: Current sector contents associated with the volume

For sectors modified since t2: Preserved volume sector contents associated with S2.

If, however, a host reads sectors from S1 after t2, Purity delivers:

For sectors not modified since t1: Current sector contents associated with the volume

For sectors modified between t1 and t2: Preserved volume sector contents associated with S1

For sectors modified since t2: Preserved volume sector contents associated with S2.

If S1 is destroyed, storage associated with it is reclaimed because there is no longer a need to preserve pre-update content for updates made prior to t2.

If S2 is destroyed, however, storage associated with it is preserved and associated with S1 because the data in it represents pre-update content for sectors updated after t1.

To generalize, for volumes with two or more snapshots:

Destroying the Oldest Snapshot: Space associated with the destroyed snapshot is reclaimed (after the 24-hour eradication delay period has elapsed or after a purevol eradicate command is executed)

Destroying Other Snapshots: Space associated with the destroyed snapshot is associated with the next older snapshot, unless it is already reflected there because the same sector was written both after the next older snapshot and after the destroyed snapshot, in which case it is reclaimed.

Measuring Volume Storage Usage

Because data stored in a FlashArray is virtualized, thin-provisioned, and reduced, volume storage is monitored and managed from two viewpoints:

Host view: Virtual storage capacity ("size") and consumption as seen by hosts' storage administration tools

Array view: Physical storage capacity occupied by data and the metadata that describes and protects it.

Reporting Volume Size and Storage Consumption

FlashArray administrators specify volumes' host-visible capacities ("sizes") when creating them, and may increase and decrease size at any time with the purevol setattr and purevol truncate commands respectively. Purity reports volume size to host administrative tools via storage network protocols. The CLI and GUI report both size and physical storage consumption, as the output from the example purevol list --space CLI command in Example 3.2 illustrates.

Example 3.2. Sample purevol list --space CLI Command Output

pureuser@df-ha-34-35> purevol list --space production-esx-datastore01
Name                        Size  Data Reduction  System   Shared Space  Volume   Snapshots  Total
production-esx-datastore01  3T    3.7 to 1        148.01G  -             484.78G  31.85G     664.64G

The per-volume capacity and physical storage consumption metrics reported by the CLI are:

Size: Provisioned (virtual) size of the volume. The storage capacity that Purity reports to host storage administration tools. The physical storage occupied by a volume is related to data actually written by hosts, not to its provisioned size

Data Reduction: The ratio of virtual sectors in the volume to which hosts have written data to the amount of physical storage the data occupies

System: Physical storage occupied by RAID-3D parity and other metadata

Shared Space: This column is blank except when size and storage consumption of all volumes, including totals, is reported. In that case, the value reported in the (total) row represents the total physical storage occupied by deduplicated data

Volume: Physical storage occupied by volume data that is not duplicated elsewhere in the array

Snapshots: Total physical storage occupied by snapshots of the volume

Total: Sum of values reported in the System, Volume, and Snapshots columns. Total physical storage attributable to the volume.
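In Example 3.2, the Total value can be verified from the individual columns:

148.01G (System) + 484.78G (Volume) + 31.85G (Snapshots) = 664.64G (Total)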

The FlashArray Data Lifecycle

Data stored in a FlashArray undergoes continuous reorganization to improve physical storage utilization and reclaim storage occupied by data that has been superseded by host overwrite or deletion. Figure 3.4 summarizes the FlashArray data life cycle.

Figure 3.4. FlashArray Physical Storage Life Cycle

The steps enumerated in Figure 3.4 are as follows:

Host Write Processing (1): Data written by hosts undergoes initial processing (illustrated graphically in Figure 2.4, “The Purity Write Processing Path”) as it enters an array. The result is data that has undergone initial reduction and been placed in write buffers

Writing to Persistent Storage (2): As write buffers fill, they are written to segments of persistent solid-state storage (illustrated in Figure 2.2, “The Purity On-Media Data Layout”)

Segment Selection (3): A Purity background process continually monitors storage segments for data that has been obsoleted by host overwrites, volume destruction or truncation, or trimming (4). Segments that contain a predominance of obsoleted data become high-priority candidates for storage reclamation and further reduction of the live data in them

Data Reduction (5): As Purity processes segments, it opportunistically deduplicates and compresses the live data in them, using more exhaustive algorithms than those used during initial reduction (1). Reprocessed data is moved to write buffers that are being filled; thus, write buffers generally contain a combination of new data entering the array and data that has been moved from segments being vacated to improve utilization. Moreover, data stored in an array is continually being moved and reprocessed, minimizing the potential for idle data deterioration that is common to commodity flash media

Storage Reclamation (6): As live data is moved from segments, they are returned to the pool of storage available for allocating segments. Purity treats all of an array's solid-state storage as a single homogeneous pool

Reallocation (7): Purity allocates segments of storage from the pool of available solid-state storage as they are required. Typically, the software fills write buffers for multiple segments concurrently. This allows the software to consolidate different types of data (e.g., highly-compressible, highly-duplicated, etc.) so that the most appropriate policies can be applied to them.

Occasionally, continuous data reduction can result in behavior unfamiliar to administrators experienced with conventional arrays. For example, as Purity detects additional duplication and compresses block contents more efficiently, a volume's physical storage occupancy may decrease, even as hosts write more data to it.

Chapter 4. FlashArray Managed Objects

For administrators, the primary benefit of the FlashArray architecture is simplicity—there are no RAID groups to define, no spares to designate, no juggling of data sets to balance load, no emergency drive rebuilds to oversee, and so forth. Nevertheless, administrators should be familiar with the FlashArray objects that are managed and their attributes. This chapter describes the FlashArray managed objects.

Physical and Virtual Objects

Administrators managing FlashArrays control both physical (hardware) and virtual (instantiated by Purity) objects. The managed hardware objects include:

Array: The array and properties that pertain to it as a whole

Drives: SSDs and NVRAMs

Storage and Administrative Networks: Storage network access points (Fibre Channel, iSCSI, and administrative Ethernet ports) and network access properties

Hardware: Hardware components (e.g., power supplies, cooling devices, drive bays, etc.) that are capable of reporting status and, in some cases, responding to commands.

The managed virtual objects in a FlashArray are:

Volumes and snapshots: The disk-like virtual storage devices presented to hosts and snapshots of their contents

Hosts: Objects that contain the storage network addresses necessary for Purity to communicate with hosts

Host Groups (hgroups): Collections of host objects with common connectivity to volumes

SNMP Managers: Parameters required by Purity's built-in SNMP agent to communicate with remote SNMP managers

Alerts: Messages generated by Purity when significant events occur in an array and the recipients to which they are delivered

The FlashArray Administrator: Credentials for the FlashArray administrator. Purity version v3.2 supports one administrative account, called pureuser.

The Purity GUI includes interactive views for managing each type of object. These are described in Part III. The CLI includes a command for each type of managed object. Management operations are expressed as subcommands, verbs that follow command names. Object instances are expressed as arguments, and options specify parameters of the actions. For example, the command:

purevol create --size 500G Vol1

creates a volume called Vol1 with a size (host-visible storage capacity) of 500 gigabytes. In this command:

purevol: Indicates an operation on volume objects

create: Specifies that one or more new volume objects are to be created

--size 500G: Specifies that all volumes created by the command should appear to hosts to contain 500 gigabytes of storage capacity

Vol1: Implicitly specifies the number of volumes to be created (one) and names the new volume (in operations on existing objects, arguments specify the objects to be operated upon).

Object Naming

FlashArray hardware object names are fixed; they encode the objects' positions within an array. The patterns for hardware object names are listed in the purehw(1) man page.

Administrators name volumes, host objects, and host groups when they create them. The names of these objects are used in CLI and GUI commands and displays; they have no significance outside the array. Objects can be renamed at any time by their respective rename subcommands or equivalent GUI actions. For example, the command:

purevol rename Vol1 NewVol1

changes the name of Vol1 to NewVol1. After the command executes, the volume is referred to as NewVol1 in CLI and GUI interactions. Vol1 is no longer recognized.

The name space for virtual objects is object type-specific. For example, it is possible, although hardly a best practice, to create a host, a host group, and a volume all called VOL1.

Object names use Internet domain name (RFC 1035) syntax with minor exceptions:

• The underscore character (_) is permitted in volume names only

• Array names are limited to 56 characters to allow Purity to construct valid fully-qualified domain names (FQDNs) for each controller in a dual-controller array by appending the controller name to the array name

Names consist of 1-63 letters (A-Z and a-z), digits (0-9), and hyphens (and underscore characters for volume names). A name must start with a letter and end with an alphanumeric character.

Object names are case-insensitive on input. For example, vol1, Vol1, VOL1, etc. all represent the same volume; CT0, ct0, and Ct0 represent the same controller, and so forth. Purity displays object names as they were entered in the create or most recent rename subcommand for the object, regardless of the case in which they are entered subsequently. As Example 4.1 illustrates, vol3, VOL3, and vOL3 all refer to the same volume, whose name was entered as Vol3 when it was created or last renamed.

Example 4.1. Case Insensitivity in Object Names

pureuser@MyFlashArray-ct0$ purevol create Vol3 --size 100g
Name  Size  Source  Created               Serial
Vol3  100G  -       2013-05-01 10:49 PDT  292D990D791686930001000A
pureuser@MyFlashArray-ct0$ purevol list vol3
Name  Size  Source  Created               Serial
Vol3  100G  -       2013-05-01 10:49 PDT  292D990D791686930001000A
pureuser@MyFlashArray-ct0$ purevol list VOL3
Name  Size  Source  Created               Serial
Vol3  100G  -       2013-05-01 10:49 PDT  292D990D791686930001000A
pureuser@MyFlashArray-ct0$ purevol list vOL3
Name  Size  Source  Created               Serial
Vol3  100G  -       2013-05-01 10:49 PDT  292D990D791686930001000A

A FlashArray has a name, predominantly for interaction with the administrative network. An array's name can be changed with the purearray rename command. Array names follow the same character set rules as other objects, but are limited to 56 characters. Purity creates individual controller names by appending -CT0 (and -CT1 in dual controller arrays) to the array name. Controller names change automatically when an array name changes; they cannot be manipulated independently.

The Principal Virtual Objects

The fundamental FlashArray virtual objects are the volume, a disk-like storage device, and the host, a container object for storage network addresses (Fibre Channel WWNs or iSCSI IQNs) associated with a computer that reads and writes data in volumes. Host objects and volumes are related by connections established by administrators. A host must be connected to a volume in order for Purity to honor its I/O commands.

In addition to these, Purity implements a host group object for maintaining consistent connections between specified sets of hosts and volumes. Host group connections are:

Consistent: Connecting a volume to a host group guarantees that (a) all hosts associated with the group are connected to the volume, (b) all hosts in the group use the same LUN to address it, and (c) no host in the group has a connection to the volume via a different LUN

Automatic: Associating a host with a host group guarantees that (a) it gets connected to all volumes with connections to the group, (b) it uses the same LUN to address each volume as other hosts associated with the group, and (c) it does not have a private connection to any volume connected to the group.

Administrators typically use host groups to manage storage for collections of related hosts, such as virtual machine or database clusters, that require concurrent access to the same volumes via the same LUN addresses.

Common Operations on Virtual Objects

Five operations are common to host group, host, and volume objects. The operations are expressed as verbs in the CLI, and through interactive dialogs in the GUI. The operations are:

create: Creates and names a new object

delete: Deletes an object. Deleting a volume is unique in that it (a) destroys customer data, and (b) is subject to a 24-hour delay, so the CLI uses the verb destroy, rather than delete, to indicate this action. During the 24-hour eradication pending period, the action can be reversed by the purevol recover command, and the volume restored to active status with its data intact

setattr: Assigns values to object attributes, such as size (volumes), WWNs or IQNs (hosts), and hosts (host groups)

list: Displays information about specified objects

listobj: Creates white space separated lists of objects related to specified objects (e.g., hosts associated with a host group, volumes connected to a specified host, etc.) that can be used in scripts.
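Because listobj output is whitespace-separated, it composes naturally with shell command substitution. A sketch, assuming the commands are invoked from a standard POSIX shell (for example, over an ssh session to the array):

# Snapshot every volume connected to Host1 in a single atomic operation
purevol snap $(purehost listobj --type vol Host1)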

The sections that follow describe the properties of the FlashArray virtual object types.

Volumes

FlashArrays present storage to hosts as disk-like volumes. Like a disk, a volume appears to hosts as a set of consecutively numbered 512-byte sectors in which they can write and from which they can read blocks (sequences of consecutively numbered sectors) of data. Hosts use logical unit numbers (LUNs) to address commands to volumes to which they are connected.

Pure Storage volumes are virtual; array administrators create, resize, and delete (destroy) them at will. Each volume has a name and a size, specified upon creation and changeable at any time via the purevol rename and the purevol setattr and purevol truncate commands or equivalent GUI actions respectively. Volumes' names identify them in Purity administrative operations and displays; hence they must be unique within an array. They have no meaning outside an array (hosts communicate with an array by the storage network addresses of its ports, and address volumes by LUNs associated with them when connections are established).

Managing Volume Size

A volume's size is its storage capacity as perceived by hosts. When a volume is created, its size is specified either in 512-byte sectors, kilobytes (2^10 bytes), megabytes (2^20 bytes), gigabytes (2^30 bytes), or terabytes (2^40 bytes). At any point during its life, a volume's size can be increased or decreased. Figure 4.1 illustrates the effects of increases and reductions in volume size.

To minimize the possibility of clerical errors unintentionally destroying data, two different CLI subcommands are used to increase and decrease a volume's perceived size:

To increase a volume's size: Use the purevol setattr command (callout 1), specifying a size larger than the volume's current size. Virtual sectors are added to the top of the volume's sector address space.

To decrease a volume's size: Use the purevol truncate command (callout 2), specifying a size smaller than the volume's current size. Truncating a volume reduces its largest valid sector number.

Figure 4.1. Changes In Volume Size

Changes in a volume's size take effect immediately. When a volume is truncated, data stored in the truncated sectors is eradicated immediately. This differs from volume destruction: when a volume is destroyed in its entirety, the data it contains remains intact for 24 hours unless specifically eradicated. At any time during that interval, the destroyed volume can be restored, with its data intact, by the purevol recover command (callout 3).

Neither creating a volume nor increasing its size has any measurable effect on an array's physical storage consumption, because only data written by hosts consumes physical storage. Decreasing a volume's size reduces its physical storage consumption to the extent that the range of sectors eliminated is occupied by data.
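A minimal resize session is sketched below. This guide shows the --size option with purevol create; its use with purevol setattr and purevol truncate is an assumption (consult the command's man page for the exact synopsis):

purevol setattr --size 2T Vol1    # grow Vol1; the new size must exceed the current size
purevol truncate --size 1T Vol1   # shrink Vol1; the new size must be below the current size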

Immediate Volume Eradication

Administrators can use the purevol eradicate CLI command to terminate the eradication pending period for destroyed volumes and make storage occupied by their data available for reclamation immediately. (See Figure 7.5 for the equivalent GUI actions.) Once a volume has been eradicated, its data is no longer recoverable.
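The full destroy-recover-eradicate cycle for a volume, using the commands described above (volume name illustrative):

purevol destroy Vol1     # begin the 24-hour eradication pending period
purevol recover Vol1     # reverse the destruction; the volume returns with data intact
purevol destroy Vol1
purevol eradicate Vol1   # terminate the pending period; data becomes unrecoverable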

If an array's free storage drops below an internal threshold, Purity effectively "throttles" writes by introducing a slight delay prior to processing them. Administrators can use the purearray list command with the --space option to monitor the amount of storage that must be reclaimed in order to restore full write performance.

Changes in Volumes' Storage Consumption

When administrators create volumes and connect them to hosts, they are immediately available for I/O. Because they are micro-provisioned, however, volumes occupy almost no physical storage until hosts actually write data to them. (The metadata that describes a volume occupies a minuscule amount of storage.)

Like conventional thin-provisioned volumes, FlashArray volumes occupy more storage as hosts write data to them. Unlike conventional volumes, however, the physical storage occupied by a volume can also decrease, in three ways:

Data reduction: A Purity background task reduces the data stored in an array continuously. Over time, more efficient compression and deduplication tend to reduce the amount of storage occupied by a given body of data.

Trimming: Purity executes hosts' TRIM commands that deallocate previously written volume blocks that are no longer in use (e.g., when file systems delete files and directories and reclaim the storage they occupy)

Zeroing: Purity detects host writes of sectors of zeros and treats zeroed blocks as though they had been trimmed.

Associated Objects and Attribute Values

Volumes are associated with hosts by means of connections between the two. A host-volume connection essentially directs Purity to honor I/O requests from the host addressed to the volume. A volume may be connected to one or more hosts, either individually (private connections) or by virtue of the hosts' association with a host group (shared connections).

To connect a single volume privately to one or more hosts, issue the purehost connect CLI command with the hosts as arguments and the volume specified as an option. For example:

purehost connect --vol Vol1 Host1 Host2 ...

To connect a single host to one or more volumes, again privately, issue the purevol connect CLI command with the volumes as arguments and the host specified as an option. For example:

purevol connect --host Host1 Vol1 Vol2 ...

To determine which hosts are connected to a given volume, either privately or with shared connections, view the volume's detail pane in the GUI Storage Tab, or issue the purevol listobj CLI command, specifying the volume and the --type host option.

To determine which volumes have private connections to a given host, view the host's detail pane in the GUI Storage Tab, or issue the purehost listobj command, specifying the host and the --type vol option.

Similarly, to determine which volumes have shared connections to a given host group, view the host group's detail pane in the GUI Storage Tab, or issue the purehgroup listobj command, specifying the host group and the --type vol option.
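On the command line, the three queries above take the following form (object names illustrative):

purevol listobj --type host Vol1    # hosts connected to Vol1
purehost listobj --type vol Host1   # volumes with private connections to Host1
purehgroup listobj --type vol HG1   # volumes with shared connections to host group HG1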

Snapshots

Purity can create snapshots—point-in-time images of the contents of one or more volumes. Snapshots capture stable volume states for use in backup and data analysis, and can also be useful in recovering from data corruption.

FlashArray snapshots are invisible to hosts—the data they capture cannot be read or written. Internally, they are space-saving—they consume storage only when data on the volumes whose images they capture is overwritten. Figure 4.2 illustrates the CLI commands used to manage snapshots. Each has an equivalent GUI action.

Figure 4.2. Snapshot Management

The purevol snap command creates snapshots of one or more specified volumes. The snapshots taken by a single command or GUI action are atomic—they all represent the same instant in time. Purity names snapshots automatically by appending the word snap and a unique number to the name of the snapped volume (callout 2 in Figure 4.2). (An administrator can use the --suffix option to specify an alternative suffix for snapshot names.)

The conceptual size of a snapshot is that of the snapped volume. If a volume is snapped, resized, and snapped again, the two snapshots have different conceptual sizes. Both, however, consume physical storage only when data in the volume whose contents they capture is overwritten.

To expose the data in a snapshot to hosts, an administrator uses the purevol copy command to (virtually) copy it to an existing (callout 3) or implicitly created (callout 4) new volume. When an existing volume is specified as the target, Purity makes all storage occupied by its contents eligible for reclamation immediately; it cannot be recovered. Moreover, if the size of an existing target volume differs from that of the source snapshot, Purity implicitly resizes the target (callout 3) to match the size of the source. Until data is written to a purevol copy target, it occupies no physical storage; its "contents" consist entirely of references to sectors associated with the source snapshot and its source volume.

The purevol destroy command destroys snapshots. Like volumes, destroyed snapshots undergo a 24-hour eradication pending period during which they can be recovered (purevol recover command) with the data they contain intact (callout 6). The purevol eradicate command can be used to terminate the eradication pending period, and make storage occupied by data associated with the snapshot eligible for reallocation (callouts 5, 7).
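A typical snapshot workflow is sketched below. The resulting snapshot name and the argument order of purevol copy are assumptions for illustration; the commands themselves are those described above:

purevol snap --suffix backup Vol1 Vol2   # atomic snapshots of both volumes
purevol copy Vol1.backup Vol1-restore    # expose the snapshot's contents via a new volume
purevol destroy Vol1.backup              # begin the 24-hour eradication pending period
purevol eradicate Vol1.backup            # reclaim the snapshot's storage immediately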

Hosts and Host Groups

In a FlashArray context, hosts are virtual or physical computers (typically application or database servers) whose Fibre Channel or iSCSI ports are attached to the storage network. Purity recognizes and communicates with hosts by their storage network addresses—Fibre Channel worldwide names (WWNs) or iSCSI qualified names (IQNs) associated with their ports; the storage network routes I/O requests and data between the two. Purity represents hosts as host objects—named data structures that contain lists of WWNs or IQNs. Storage network addresses can be specified when a host object is created, and can be added and removed using the purehost setattr command with --wwnlist, --iqnlist and related options (see the purehost setattr command synopsis). A FlashArray running the Purity v3.2 release supports either Fibre Channel or iSCSI host connections, but not both.

Host groups are collections of host objects for which Purity maintains consistent shared volume connections. A volume can be connected to multiple host groups simultaneously, but a host can be associated with at most one host group at a time. To move a host between host groups, use the purehgroup setattr command with the --remhostlist option to remove it from its original group, and the same command with the --addhostlist option to add it to the new one.
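For example, to move Host1 from host group HG1 to host group HG2 (group names illustrative; the option-before-argument form follows the CLI convention described in this chapter):

purehgroup setattr --remhostlist Host1 HG1   # remove Host1 from its original group
purehgroup setattr --addhostlist Host1 HG2   # add Host1 to its new group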

Host and Host Group Attributes

A Purity host object's attributes are its name and one or more storage network addresses (Fibre Channel WWNs or iSCSI IQNs) by which the array recognizes the host it represents. Administrators use either the purehost create CLI command or the equivalent action on the GUI Storage Tab to create and name host objects, and optionally to associate WWNs or IQNs with them. Purity host names are used only in array administration; they are not visible outside the array.

A host must have at least one associated storage network address in order to be connected to volumes. Storage network addresses can be assigned with the --wwnlist or --iqnlist options in the purehost create command. Addresses can be added and removed, or a list can be replaced in its entirety, by specifying the --addwwnlist, --remwwnlist, and --wwnlist options (or their IQN equivalents) respectively in the purehost setattr command.

Host WWNs and IQNs must be unique in an array; a storage network address cannot be associated with more than one host object.
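A sketch of host object creation and address maintenance, using the options named above; the WWN values are illustrative:

purehost create --wwnlist 21:00:00:24:ff:31:b7:58 Host1        # create Host1 with one port address
purehost setattr --addwwnlist 21:00:00:24:ff:31:b7:59 Host1    # add a second port address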

Similarly, a host group's attributes are its name and one or more host object names. Administrators use the GUI or the purehgroup setattr CLI command with the --hostlist, --addhostlist, and --remhostlist options to associate hosts with, or dissociate them from, a host group.

Host-Volume Connections

In order to read and write data on FlashArray volumes, hosts must also be connected to the volumes. A connection with a volume associates a LUN with the {host, volume} pair. A volume may be connected to any number of hosts, addressed by either the same or different LUNs, but it may only have one connection, using a single LUN, to any given host. By default, Purity assigns valid LUNs when creating connections. These can be overridden by specifying the --lun option with a valid value in the connect command.

Private and Shared Connections

Purity distinguishes between two types of host-volume connections:

Private: Connect a single host to a single volume. A volume may have private connections to any number of hosts; each host may address it by any valid LUN

Shared: Connect all hosts associated with a host group to a single volume. Every host associated with a host group addresses a volume connected to the group by the same LUN.

Private and shared connections are functionally identical: both make it possible for hosts to read and write data on volumes. They differ in how administrators create and delete them:

Private connections: To establish a private connection between a volume and a host, use either the purehost connect or the purevol connect command with the --host option

Shared connections: To simultaneously connect a volume to all hosts associated with a host group, use either the purehgroup connect or the purevol connect command with the --hgroup option.

The purehost connect and purehgroup connect subcommands connect a single volume to one or more hosts or host groups. The purevol connect command connects a single host or host group to one or more volumes. The two function identically; both are included in the CLI for administrator convenience.
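For example, either of the following equivalent commands creates shared connections between Vol1 and every host associated with host group HG1 (the --vol option to purehgroup connect is assumed by analogy with purehost connect):

purehgroup connect --vol Vol1 HG1
purevol connect --hgroup HG1 Vol1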

The GUI Storage Tab includes corresponding capabilities, accessed via buttons that display pop-up dialogs.

A host may have only one connection to a given volume at a time. Administrators must delete any private connections between a host and volumes associated with a host group before associating the host with the host group. Private connections to other volumes are unaffected by association with a host group.

Properties of Shared Connections

Shared connections between a volume and a host group are consistent and automatic in the following respects:

Common LUNs: All hosts associated with a host group address a volume with a shared connection to the group by the same LUN

Automatic connect/disconnect (volumes): Connecting a volume to a host group automatically connects it to all hosts associated with the group; disconnecting a volume from a host group breaks all of its connections with hosts in the group

Automatic connect/disconnect (hosts): Associating a host with a host group automatically connects it to all volumes with shared connections to the group; removing a host from a host group causes all of its shared connections to be broken.

LUN Management

The Purity v3.2 release supports LUNs in the range [1…255]. A host that is not associated with a host group can use any LUNs in that range to address privately connected volumes. Hosts associated with host groups use LUNs [10…255] for shared connections. LUNs [1…9] remain available for those hosts' private connections. Before a host can be associated with a host group, any private connections that use LUNs in the range [10…255] must be disconnected. (They can be re-established using LUNs in the [1…9] range.)

When connecting a volume to a host or host group, an administrator may allow Purity to select the LUN by which hosts will address it. Alternatively, a specific LUN may be selected by specifying the --lun option in the purehost connect or purevol connect command (see the example following this list). Purity does not connect a volume to a host or host group if any of the following occurs:

LUN conflict: The specified LUN is already being used to address another volume

LUN range conflict: The specified LUN is not in the range [1…9] (for private connections to hosts associated with host groups), or in the range [10…255] (for shared connections)

No available LUN: The host or host group is already using all LUNs in the appropriate range to address volumes.
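A sketch of explicit LUN selection with the --lun option (names and LUN values illustrative):

purevol connect --host Host1 --lun 7 Vol1    # private connection; LUN must be in [1…9] if Host1 belongs to a host group
purevol connect --hgroup HG1 --lun 12 Vol2   # shared connection; LUN must be in [10…255]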

Managed Hardware Objects

The purehw CLI command is used to manage a FlashArray's major hardware components. Hardware component management consists of:

Displaying status: Displaying the operational status (healthy, failed, or at risk) of components

Setting component IDs: Specifying a component identifier (storage shelves only in the Purity v3.2 release)

Identifying components: Turning visual identifiers on or off (drives, controllers, and storage shelves only)

Hardware Component Naming

Each hardware component in a FlashArray has a unique name that identifies its location in the array for service purposes. Controller and storage shelf chassis names have the form XXm; names of other components have the form XXm.YYn (examples follow the list), where:

XX: Denotes the type of chassis (CT for controller; SH for storage shelf)

m: Identifies the specific chassis. For controllers, m has a value of 0 or 1. For storage shelves, it is the number assigned to the shelf by the purehw setattr command with the --id option or from the shelf front panel

YY: Denotes the type of component in the chassis (e.g., BAY for drive bay, FAN for cooling device, etc.). Managed component types are listed below

n: Identifies the specific component by its “index” (position within the chassis).
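Some example names constructed from this pattern:

CT0        controller chassis 0
CT1.FC2    Fibre Channel port 2 on controller 1
SH0.BAY5   drive bay 5 in storage shelf 0
SH2.FAN1   cooling fan 1 in storage shelf 2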

FlashArray Hardware Components

The following array hardware components are capable of reporting operational status:

Controller (CTm): Purity reports an overall controller status (e.g., ready for properly functioning controllers) generated from a combination of (a) status of components in the chassis, and (b) software conditions. The manageable components in a controller chassis are:

• Drive Bay (CTm.BAYn): Each controller chassis contains either 12 (FA-300 controllers) or 6 (FA-400 chassis) drive bays, one of which houses an SSD from which the controller boots Purity. The other bays are not normally occupied. Controller drive bays report status

• Ethernet Port (CTm.ETHn): Each controller contains two (FA-300) or four (FA-400) 1GbE Ethernet ports, one of which is used for remote array administration, and may contain two dual-port 10GbE I/O cards (FA-300E and FA-400E). Both types of Ethernet ports report status and link speed

• Fan (CTm.FANn): Each controller contains 10 cooling fans that report status and operating speed. Fan replacement is a service technician operation

• Fibre Channel Port (CTm.FCn): Each Fibre Channel controller contains two dual-port Fibre Channel host bus adapters. Fibre Channel ports report status and link speed

• Infiniband Port (CTm.IBn): (Present in dual-controller arrays only.) The controllers in FA-320(E) and FA-420(E) arrays intercommunicate via redundant Infiniband interfaces located on a PCI Express adapter card. Infiniband ports report status and communication speed

• Power Supply (CTm.PWRn): Each controller contains two power supplies that report status. Power supply replacement is a service technician operation

• SAS Port (CTm.SASn): Each controller contains two dual-port SAS interface cards used to communicate with drives in storage shelves. SAS ports are indexed top-to-bottom, left-to-right, facing the rear of the controller chassis (i.e., CTm.SAS0 is the upper left port, CTm.SAS1 is the lower left port, CTm.SAS2 is the upper right port, and CTm.SAS3 is the lower right port). SAS ports report status and communication speed

• Temperature Sensor (CTm.TMPn): Each controller chassis contains 19 sensors that report temperature at various points within the chassis. These report status and ambient temperature. In steady-state operation, Purity generates alerts when a temperature sensor reports a value outside of the normal operating range.

Storage Shelf (SHm): Purity reports an overall storage shelf status generated from a combination of (a) status of components in the chassis, and (b) software conditions. The components in a storage shelf chassis are:

• Drive Bay (SHm.BAYn): Each storage shelf contains a row of 24 hot-swappable bays, indexed from left to right starting with zero. In an array's first two shelves, bays 1 through 22 contain SSDs used for data storage; bays 0 and 23 contain NVRAMs used to temporarily buffer host-written data. If a third shelf is present, all 24 of its bays are populated with SSDs

• Drive (SHm.DRVn): A carrier in each drive bay contains an SSD or NVRAM, whose index is the bay's index. Drive-carrier assemblies are user-replaceable, but Pure Storage does not support drive carrier disassembly

• Fan (SHm.FANn): Each of a shelf's two power and cooling modules (PCMs) reports cooling fan status and current operating speed (RPM)

• Power Supply (SHm.PWRn): The power supply sub-component within each of a shelf's two power and cooling modules (PCMs) reports its operating status

• SAS Port (SHm.SASn): Two I/O Modules (IOMs) accessed from the rear of the shelf chassis contain the shelf's six SAS ports.
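Status for any of the components above can be displayed by name with purehw list; the argument form shown is an assumption based on the naming pattern described earlier:

purehw list CT0.FAN3    # status of one controller cooling fan
purehw list SH1.BAY12   # status of one shelf drive bay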

The Array

When they are powered on, FlashArray controllers and storage shelves automatically discover each other and self-configure to optimize I/O performance, data integrity, availability, and fault recoverability, all without administrator intervention.

Administration of a FlashArray object consists of managing:

Transmission of Diagnostic Information: Enabling and disabling the automatic transmission of event logs to Pure Storage Technical Support, and email alerts to designated recipients

Remote Access: Enabling, disabling, and testing the phonehome and remoteassist features to provide Pure Storage Technical Support with service information and access to an array

Array Attributes: Specifying and changing array name, proxy server name (for https log transmission), NTP server, email relay host and sender domain, and iSCSI path timeout

Array Status: Monitoring controller status and physical storage occupancy, and responding to failures and low free space conditions.

Solid State Drives

Solid state drives (SSDs) provide the persistent data storage in FlashArrays. Unlike conventional disk arrays, Purity does not organize drives into static RAID groups. One important consequence of this is that administrators do not define RAID groups, designate spares, match drive capacities, or segregate data sets on specific drive groups to balance I/O load.

Administrators use the CLI to display information about selected drives or all drives in an array. The CLI displays drive capacity, state, and, for drives that have failed, the time of failure and the amount of data left to rebuild or, if rebuilding is complete, the time at which it last finished.

FlashArray Ports

FlashArray controllers and storage shelves contain SAS and Infiniband ports through which they connect to each other, and ports through which they connect to the external environment. The ports that connect an array to its environment are:

Host ports: 8Gb/s Fibre Channel ports or 10Gb/s Ethernet ports used with the iSCSI protocol. A FlashArray running the Purity v3.2 release supports either type of host port, but not both. An array's host ports are the target ends of connections to companion initiator ports in hosts.

Administrative port: A 1GbE port (ETH0) that connects each controller to a network. Used to manage a FlashArray remotely via the GUI or CLI from any browser or terminal emulator-capable device.

The ports that interconnect an array's controllers and storage shelves are:

Infiniband: Interconnect the controllers in a dual-controller FA-320 or FA-420 array. FA-320 Infiniband ports operate at 40Gb/s; FA-420 ports operate at 56Gb/s

SAS (Serial-Attached SCSI): 4-lane x 6Gb/s ports that interconnect controllers and SAS expander I/O Modules (IOMs) located in storage shelves. The IOMs connect to the SSDs and NVRAMs in a storage shelf, making them accessible to both controllers via two independent paths.

FlashArray ports are identified by their physical locations in the controller chassis. Each port is on an I/O card that occupies a PCI Express slot in the chassis. Identifying labels are attached to I/O cards, controllers, and storage shelf IOMs during manufacture. The CLI and GUI use these numbers to identify array ports.

Exporting Volumes to Hosts

A FlashArray's host ports are physical objects with no administrator-specifiable attributes (their storage network addresses are permanently assigned by Purity and can be displayed via the pureport list command). In the CLI and GUI, Purity identifies host ports by their locations in the controller chassis (e.g., FC0, FC1, etc. for Fibre Channel; ETH4, ETH5, etc. for 10GbE iSCSI).

Purity makes all volumes in a FlashArray visible to all connected hosts on all of an array's Fibre Channel or iSCSI ports. To restrict the array ports on which hosts can communicate, use storage network zoning.

Part III. Using the Purity GUI to Administer a FlashArray

Table of Contents

5. The Purity GUI
    Accessing the GUI
6. The Dashboard Tab
    The Capacity Pane
    The Performance Panes
7. The Storage Tab
    Volume Administrative Tasks
        Creating New Volumes
        Managing Existing Volumes
        Deleting Unneeded Volumes
        Downloading Volume Information
        Volume Detail Views
    Host Group and Host Administrative Tasks
        Creating New Host Groups and Hosts
        Renaming and Deleting Host Groups and Hosts
        The Host Group Details View
        Adding Hosts and Connecting Volumes to a Host Group
        Other Host Group Administrative Tasks
    Host Administrative Tasks
        Creating and Deleting Host Objects
        The Individual Host Detail View
        Host Object Management
    Host-Volume Connection Tasks
        Making Private Host-Volume Connections
        Shared Connections
        Breaking Private Connections
        Breaking Shared Connections
8. The Analysis Tab
    Analysis Tab Display Control
    Analysis Tab Time Scales
    Analysis Tab Displayed Data
        Capacity View
        Performance View
9. The System Tab
    The Array ("SYSTEM") Health View
    The Connections View
    The Configuration View
        The Array Information Sub-View
        The Networking Sub-View
        The Support Connectivity Sub-View
        The Alerts Sub-View
        The SNMP Sub-View
        The Array Time Sub-View
    The Users View
10. The Messages Tab
    Viewing Alert Messages
    Managing Alert Messages

Chapter 5. The Purity GUI

Accessing the GUI

To administer a FlashArray using the Purity GUI, enter the FlashArray's virtual IP address (specified during installation) in the address bar of one of the major browsers (e.g., Firefox, Chrome, etc.; Appendix A, Supported Remote Access Packages lists the browser versions with which Pure Storage has tested the GUI). If the array has been registered with DNS, its fully-qualified domain name can be entered instead. In either case, the login pane shown in Figure 5.1 is displayed. Enter the administrator username (pureuser in the Purity v3.2 release) and password to log in to the array.

Figure 5.1. The Purity GUI Login Pane

Upon login, the GUI dashboard is displayed. The display contains five tabs:

DASHBOARD:
Displays a running graphical overview of the array's storage capacity, performance, and hardware status

STORAGE:
Displays “drill-down” views of hosts, host-volume connections, and storage consumption. Enables administration of host, host group, and volume objects, and connections between them

ANALYSIS:
Displays historical views of I/O performance (data transfer rate, IOPS, and average latency) for selected sets of volumes

SYSTEM:
Displays a summary of the "health" of array hardware, showing up/down status for major replaceable units, and the most recent alert messages

MESSAGES:
Displays alert messages in the array's log.

Clicking a tab changes the window to a dynamically updated interactive display of the tab's subject matter. Chapter 6, The Dashboard Tab through Chapter 10, The Messages Tab describe the displays in each of the tabs and the actions that can be performed from them.

Chapter 6. The Dashboard Tab

Figure 6.1 illustrates a sample of the Purity dashboard tab as it might appear upon login.

The dashboard window has three panes that summarize the information typically of greatest interest to administrators:

Capacity:
The upper main pane summarizes an array's current virtual (provisioned) and physical storage capacity and consumption

Performance:
The lower main pane contains three graphs of recent history for the three most important array I/O performance metrics: average response time (Latency), I/O operations per second (IOPS), and data transfer rate (Bandwidth)

Alerts:
The pane on the left displays an image that represents the operational status of the array's controller(s) and drives, and lists recent alert messages. The System tab can be used to view the array status in more detail, while the Messages tab is used to view and manage alert and audit trail messages.

The Alerts and Capacity panes always contain the most recent data; Purity updates them every few seconds. The graphs in the Performance pane represent a moving time window: as new data is added on the right, the oldest (leftmost) data points are dropped. The green rectangle in the upper right corner of each graph represents the most recent data sample added.

Figure 6.1. Dashboard Display Areas

Each of the three performance graphs in the main pane can be displayed and hidden by alternate clicks on its title bar. When a graph is hidden, the displayed graphs expand vertically to fill the pane. This allows the history of individual performance metrics to be viewed in greater detail, as Figure 6.2 illustrates for the case of latency.

Figure 6.2. Expanded Graph (Latency History Example)

The Capacity Pane

Figure 6.3. Capacity Bar Graph

The horizontal bar graph in the capacity pane, illustrated in Figure 6.3, summarizes an array's physical and virtual (provisioned) storage capacity and consumption. The pane's title bar lists:

Capacity (total provisioned):
The sum of the current provisioned sizes of all volumes

Data Reduction:
The ratio of the amount of virtual storage to which hosts have written data to the amount of physical storage occupied by that data after reduction

Percentage full:
The percentage of the array's physical storage occupied by reduced data and metadata.

The bar graph represents the array’s total physical storage capacity (the sum of the capacities of all SSDs connected to it), which is displayed numerically to the right of the bar.

The colored areas of the bar represent amounts of physical storage occupied by different types of data and metadata:

System:
Space occupied by metadata (including RAID-3D check data)

Shared Space:
Space occupied by deduplicated data (identical sectors that appear two or more times in a single volume or in two or more volumes)

Volumes:
Space occupied by reduced data sectors that are unique to a single volume

Snapshots:
Space occupied by data preserved as a consequence of hosts writing to a volume with one or more snapshots

Empty Space:
Unused space available for allocation

All of these quantities are rounded to two decimal places in the units in which they are expressed.

If possible, the sum of the space occupied by all types of data and metadata is displayed in the Empty Space section of the graph.

The Performance Panes

The three performance line graphs display an array's recent I/O performance history. A dropdown menu in the lower right corner of the window (illustrated in Figure 6.4) allows the time span represented by the graphs to be set to the most recent 1 hour, 3 hours, 24 hours, 7 days, or 30 days. The three graphs illustrate the array's:

Latency:
Average arrival-to-completion times for hosts' read and write commands. Separate lines represent read latency, write latency, and average amount of data transferred in response to each request.

IOPS:
Host I/O requests per second executed by the array. This metric counts requests per second, regardless of how much or how little data is transferred in each. Separate lines represent reads, writes, and the total of the two.

Bandwidth:
Amount of data transferred per second to and from all hosts. This metric counts data in its expanded form (as originally written by hosts) rather than the reduced form stored in the array, because that is what is transferred over the storage network. Separate lines represent data read, data written, and the total of the two.

The performance graphs display between 1 hour and 30 days of history (less if the array has been operating for less than the selected interval). Rolling the mouse over any of the graphs displays pop-ups containing values for the point in time represented by the mouse position, as shown in Figure 6.4.

Figure 6.4. Point-in-Time I/O Performance

Chapter 7. The Storage Tab

Clicking the STORAGE label from any GUI view displays the STORAGE tab (Figure 7.1). The left side of the tab is a navigation pane used to select a host group, host, or volume object for display. Details about selected objects are displayed in the main pane. For example, Figure 7.1 illustrates the details for HostGroup1.

Figure 7.1. The Storage Tab (Host Group Detail View)

Displaying or hiding objects in the navigation pane is controlled as follows:

Hosts and Volumes group headings:
Clicking the ▶ (or ▼) icon on the heading respectively displays and hides the array's list of hosts or volumes. The main pane displays a table of summary information about the array's hosts or volumes

Individual host group icons:
Clicking the ▶ (or ▼) icon beside a host group icon respectively displays and hides the list of hosts associated with it. The main pane displays a table of summary information about the volumes that have shared connections to the host group

Individual host icons:
Clicking the ▶ (or ▼) icon beside a host icon respectively displays and hides the list of volumes that have connections to it. The main pane displays summary information about the volumes (LUN, provisioned size, and space occupied by data)

Individual volume icons:
Clicking the ▶ (or ▼) icon beside a volume icon respectively displays and hides its list of snapshots. (The ▶ and ▼ icons are not displayed for volumes with no snapshots.) Clicking a volume icon displays the volume's details, including the summary information, connections, distribution of space consumption, and associated snapshots, in the main pane

Individual snapshot icons:
Clicking a snapshot icon displays its details, including serial number, distribution of space consumption, and creation time, in the main pane.

The navigation pane and detail views also contain buttons used to create new objects and manage existing ones.

Volume Administrative Tasks

The main pane in Figure 7.2 shows the view displayed when the ▶ icon on the Volumes heading is clicked. The table contains a row for each volume showing number of hosts with connections (both shared and private), provisioned size, physical storage consumption (VOLUME DATA), data reduction ratio, and serial number (assigned by Purity when the volume is created).

Creating New Volumes

FlashArrays eliminate drive-oriented concepts such as RAID groups and spare drives that are common with disk arrays. Purity treats an array’s entire SSD capacity as a single homogeneous pool from which it allocates storage only when hosts write data to virtual volumes created by administrators. Creating a FlashArray volume, therefore, is as simple as specifying its provisioned size and a name to be used in administrative operations and displays.

Clicking either of the buttons highlighted in Figure 7.2 displays the Create Volume dialog used to create new volumes.

Figure 7.2. Create Volume Buttons

To create a volume from the STORAGE tab view:

1. Click the Volumes heading in the navigation pane to display the Volume Summary

2. Click the button in the main pane to display the Create Volume dialog. Alternatively, click the button in the navigation pane from any STORAGE tab view (both are highlighted in Figure 7.2)

3. Enter a name and a provisioned size (number and units) for the volume

4. Click the button.

Creating a volume creates persistent data structures in the array, but does not allocate any physical storage. (Purity allocates physical storage only when hosts write data.) Volume creation is therefore nearly instantaneous.
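The CLI equivalent is a single command, sketched below with a hypothetical volume name and size (see purevol(1) for the exact syntax supported by this release):

    purevol create --size 500G vol1    # provisions a 500 GB virtual volume; no physical storage is consumed until hosts write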

Managing Existing Volumes

Two management operations change properties of existing volumes:

Renaming:
Changing the name by which Purity identifies a volume in administrative displays and operations

Resizing:
Changing the provisioned size of a volume (its size as it appears to hosts).

To change either of these, roll the mouse over the volume’s name in the main pane, click the icon that appears, and select the Edit command from the menu, as shown in Figure 7.3. This displays the Edit Volume dialog. Enter a new name and/or a new size (number and units), and click the button. The new properties take effect immediately.

Figure 7.3. The Edit Volume Dialog

Deleting Unneeded Volumes

Volumes that are no longer useful can be deleted, obliterating the data they contain and reclaiming the storage occupied by data in them for other purposes.¹ Figure 7.4 illustrates the dialog used to delete a volume.

¹ The CLI subcommand for removing volumes is destroy, rather than delete, to emphasize that volume destruction ultimately destroys data irreversibly. The underlying operations are the same for CLI and GUI, however.

Figure 7.4. Deleting a Volume

To delete a volume from the STORAGE tab view:

1. Roll the mouse over the volume’s name in the Volumes view to display the icon. Click the icon to display the menu shown in Figure 7.4

2. Select Delete to display the Delete Volume dialog. (The example shows a volume that has connections to hosts. For volumes with no connections to hosts or host groups, a shorter warning displays)

3. Click the button in the dialog to delete the volume.

Deleting a volume places it in the eradication pending state for 24 hours. During that time, it can be recovered from the Deleted Volumes view shown in Figure 7.5. To recover a deleted volume during its 24-hour eradication pending period:

1. Roll the mouse over the volume’s name to display the icon. Click the icon to display the menu shown in Figure 7.5

2. Select Recover. The volume re-appears in the Volumes view with its data intact.

Selecting Delete Permanently terminates the eradication pending period and immediately makes the storage occupied by the volume’s data eligible for reallocation. A volume that has been permanently deleted can no longer be recovered.
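The corresponding CLI life cycle can be sketched as follows (hypothetical volume name; as the footnote above notes, the CLI uses destroy rather than delete, and exact option names are described in purevol(1)):

    purevol destroy vol1      # begins the 24-hour eradication pending period
    purevol recover vol1      # recovers the volume, data intact, during that period
    purevol eradicate vol1    # the analog of Delete Permanently; irreversible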

Figure 7.5. Recovering a Deleted Volume

Downloading Volume Information

The button in the upper right corner of the Volumes view main pane (highlighted in Figure 7.6) generates a CSV (comma-separated value) text file containing the information displayed in the pane, and downloads it to the administrative workstation. The file can be opened directly by most spreadsheet applications.

Figure 7.6. Downloading Volume Information

Volume Detail Views

Selecting a volume in the navigation pane displays its details in the main pane, as the example in Figure 7.7 illustrates.

Figure 7.7. Volume Detail View

The pane title bar displays the volume’s name, provisioned size, and data reduction ratio.

The bar graph immediately below represents storage occupied by data unique to the volume (space occupied by deduplicated data is not included). Clicking the pane title bar alternately hides and displays the bar graph.

A table below the bar graph lists host groups and hosts with connections to the volume, and the LUNs they use to address it.

From this view, a volume can be edited, disconnected from host groups and hosts, and deleted. Click the icon in the pane title bar to display the menu and select a command:

Edit:
Change the volume’s name or provisioned size

Disconnect All Hosts:
Break all private connections to the volume (shared connections are not affected)

Delete:
Hide the volume and start its 24-hour eradication pending period.

Commands issued from this view display the same dialogs that appear when they are invoked from the Volumes view and perform the same actions.

Host Group and Host Administrative Tasks

The main pane in Figure 7.8 shows the Hosts view displayed when the ▶ icon on the navigation pane Hosts heading is clicked. The navigation pane displays the names of the array's host groups and hosts. The main pane displays a title bar, an aggregate capacity bar graph, and a table containing the following information for each host group and host:

# HOSTS:
Number of hosts associated with the host group (1 for lines that describe hosts)

# VOLUMES:
For lines representing host groups, number of volumes with shared connections to the host group; for lines representing hosts, number of volumes with private connections to the host

PROVISIONED:
Aggregate virtual size of volumes represented in the VOLUMES column

VOLUME DATA:
Physical storage occupied by reduced data in volumes represented in the VOLUMES column

REDUCTION:
The overall ratio of virtual storage containing host-written data to physical storage occupied, for volumes represented in the VOLUMES column.

Figure 7.8. Sample Host Group and Host View

Creating New Host Groups and Hosts

Host group and host objects can be created from the view shown in Figure 7.8 by:

Selecting the action:
Click the button on the Hosts heading in the navigation pane and select a command from the menu, or click the (host group) or (host) button in the upper right corner of the main pane. A dialog, shown in Figure 7.8, appears

Naming the host or host group:
Enter a name for the host group or host in the dialog

Confirming the action:
Click the button on the dialog.

The host group or host is available immediately but cannot be connected to volumes because no Fibre Channel worldwide names (WWNs) or iSCSI qualified names (IQNs) have been specified.
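In the CLI, creating a host and associating an initiator name with it can be sketched as follows (the host name and WWN are hypothetical; option names are described in purehost(1) and may vary by release):

    purehost create host1                                      # create the host object (no WWNs or IQNs yet)
    purehost setattr --wwnlist 21:00:00:24:ff:32:a1:b0 host1   # associate an initiator WWN with the host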

Renaming and Deleting Host Groups and Hosts

Host group and host objects can be renamed and deleted from the STORAGE tab Hosts view by rolling the mouse over the name of the host group or host, clicking the icon that displays, and selecting either the Edit or Delete command. A dialog similar to one of those shown in Figure 7.9 displays.

Figure 7.9. Renaming a Host Group and Deleting a Host Object

To rename a host or host group, enter a new name and click the button. The change is effective immediately.

To delete a host group or host, click the confirmation button in the dialog. To annul the action, click the cancel button instead.

The Host Group Details View

Clicking a host group’s name in the STORAGE tab navigation pane displays its details, illustrated in Figure 7.10.

Figure 7.10. The Host Group Details View

The view includes three information areas:

Storage:
A title bar containing the host group name, the aggregate provisioned size of volumes with shared connections to it, and the aggregate data reduction ratio, plus a bar graph representing storage consumption. The graph breaks consumption into data written by hosts (Volumes) and a System category that includes check data and metadata. Clicking the title bar alternately hides and displays the bar graph.

Hosts:
Lists the hosts associated with the host group. It includes buttons for adding hosts to and removing them from the host group

Connected Volumes:
Lists volumes with shared connections to the host group, the LUNs used by group member hosts to address them, their provisioned sizes, and the physical space occupied by the data in them.

Adding Hosts and Connecting Volumes to a Host Group

Figure 7.11 illustrates the addition of hosts to a host group and the establishment of shared connections with volumes.

Figure 7.11. Adding Hosts and Volumes to a Host Group

Hosts:
To add hosts to a host group, click the button in the upper right corner of the Hosts area, displaying the Add Hosts to Host Group dialog

Volumes:
To connect volumes to a host group, click the button in the upper right corner of the Connected Volumes area, displaying the Connect Shared Volumes to Host Group dialog.

Both dialogs are illustrated in Figure 7.11.

The left panes of the dialogs respectively list hosts that can be added and volumes that can be connected to the host group. Hosts and volumes that are unavailable for addition or connection to the host group do not appear in the lists:

Unavailable hosts:
Hosts cannot be added to a host group if (a) they have private connections to volumes with shared connections to the group, or (b) they are associated with another host group

Unavailable volumes:
Volumes cannot be connected to a host group if they have private connections to hosts associated with the group.

Unavailable hosts and volumes can be displayed by clicking the links in the lower left corners of the respective dialogs.

To select a host or volume, click the + symbol on its row in the left pane. This moves the row to the right pane. To deselect a host or volume, click the – symbol on its row in the right pane. This moves the row back to the left pane. When satisfied with the selections, click the button to make the additions or connections.

Other Host Group Administrative Tasks

Other host group administrative tasks, illustrated in Figure 7.12, are also performed from the host group details view.

Figure 7.12. Other Host Group Tasks Performed from the Detail View

Clicking the button in the storage area displays a menu of host group tasks.

The host group tasks performed from this view are:

Edit:
Displays the dialog for changing the host group’s name (see Figure 7.9)

Remove All Hosts:
Removes all hosts from the host group

Disconnect All Volumes:
Disconnects all of the host group’s shared connections. Private connections of member hosts are unaffected

Delete Host Group:
Deletes the host group object. Implicitly disconnects any shared connections to volumes and removes all hosts

Connection Map:
Displays an informational dialog that graphically depicts the ports via which hosts in the group and shared volumes communicate.

The first four of these commands display dialogs that summarize the consequences (e.g., connections that would be broken), and require confirmation to perform the actions.

The Connection Map command displays a table, illustrated in Figure 7.13, that shows connectivity between host and array controller ports. The table can be downloaded by clicking the Download CSV button. Clicking the button dismisses the connection map.

Figure 7.13. Host-Volume Connection Map for a Host Group

Host and volume tasks can also be performed from the host group details view. Rolling the mouse over any of the member host or connected volume names displays a button. Clicking the button displays menus of tasks that can be performed on the selected object, as Figure 7.14 illustrates.

Figure 7.14. Host and Volume Tasks Performed from the Host Group Detail View

Host tasks accessible from this view are:

Edit Host:
Displays the dialog for changing the host’s name (similar to the Edit Host Group dialog shown in Figure 7.9). Changing a host’s name has no effect on its host group association

Remove:
Removes the host from the host group. Disconnects all volumes with shared connections from the removed host. The removed host’s private volume connections are not affected

Volume tasks accessible from this view are:

Edit:
Displays the dialog for changing the volume’s name and/or size (shown in Figure 7.3). The changes have no effect on the volume’s connection to the host group

Disconnect:
Breaks the volume's shared connection to the host group. Disconnects all member hosts from the volume. Other shared and private connections to the volume are unaffected

Edit LUN:
Change the logical unit number by which hosts in the host group address the volume. Logical unit numbers for shared connections must be in the [10…255] range. Changing a LUN may result in momentary disconnection from some or all of the group's member hosts. Some action to restore visibility to the volume (e.g., a rescan) may be required of the host.

Host Administrative Tasks

A Purity host is essentially a named collection of one or more Fibre Channel worldwide names (WWNs) or iSCSI Qualified Names (IQNs) that identify a host computer’s initiators. As with other objects, hosts' names are used solely for FlashArray administration; they have no significance outside an array. Purity establishes connections between hosts and volumes to enable the two to communicate with each other and exchange data.

The administrative tasks for host objects are:

Creation and deletion:
Creating new hosts and deleting existing ones that are no longer required

Host address management:
Managing the list of WWNs or IQNs associated with a host

Private connection management:
Establishing and breaking private connections between volumes and hosts.

Creating and Deleting Host Objects

Host objects are created either from the menu displayed from the Hosts heading in the navigation pane (illustrated in Figure 7.8) or from the Add Hosts to Host Group dialog (illustrated below in Figure 7.15). Both display the Create Host dialog, in which a name for the host is entered, and creation of the host is confirmed.

Figure 7.15. Creating a Host Object from the Host Group Add Hosts Dialog

To create a host from the Host Group Add Hosts Dialog:

1. Roll the mouse over the host's row in the main pane to display the button

2. Click the dialog's Create New Host button to display the Create Host dialog

3. Enter a name for the new host and click the button to confirm host creation.

Similarly, existing host objects can be deleted from the host group and host view (Figure 7.16), or alternatively, from the host's own detail view (Figure 7.17).

Figure 7.16. Deleting a Host from the Host Group and Hosts View

To delete a host from the host group and host view:

1. Roll the mouse over the host's row in the main pane to display the button

2. Click the button to display the menu, and select Delete. This displays the Delete Host dialog

3. Click the button to confirm deletion. (Figure 7.16 illustrates deletion of a host with volume connections. Deleting hosts with no connections and no associated WWNs or IQNs displays correspondingly simpler dialogs.)

Figure 7.17. Deleting a Host from its Host Detail View

To delete a host from its own host detail view:

1. Click the button to display the menu, and select Delete Host. This displays the Delete Host dialog

2. Click the button to confirm deletion. (Figure 7.17 illustrates deletion of a host with volume connections. Deleting hosts with no connections and no associated WWNs or IQNs displays correspondingly simpler dialogs.)

The Individual Host Detail View

Clicking a host’s name in the navigation pane displays a detail view for the host. Figure 7.18 shows an example host detail view.

Figure 7.18. Example Individual Host Detail View

The main pane of the view includes three information areas:

Storage:
The top pane contains a bar graph whose length represents the amount of storage occupied by reduced data in volumes that have shared or private connections to the host. The graph breaks storage consumption into user data (blue) and RAID-3D parity (gray)

Connected Volumes:
The pane on the left lists volumes with shared or private connections to the selected host, their provisioned sizes, the storage space occupied by data in them, and the LUNs that the host uses to address them. Volumes with shared connections are addressed by LUNs in the [10…255] range. Hosts with no host group affiliation address privately connected volumes by LUNs in the [1…255] range. Hosts that belong to host groups use LUNs in the [1…9] range for private connections

Host Ports:
The pane on the right lists the WWNs or IQNs of the ports associated with the host object.

Tasks that administer the selected host or volumes associated with it are performed from this view. Rolling the mouse over the host name in the main pane activates the icon (turns it blue). Clicking the blue icon displays the host menu shown in Figure 7.18. Similarly, rolling the mouse over the name of a connected volume displays an icon. Clicking that icon displays a menu of volume-related administrative tasks.

The host-related tasks accessible from the individual host detail view are:

Edit:
Displays the dialog for changing the selected host’s name. Changing a host’s name has no effect on its host group membership or volume connections

Disconnect All Volumes:
Disconnects all of the selected host’s private volume connections. The host’s shared connections are unaffected

Delete Host:
Deletes the selected host object. Disconnects all shared and private connections, and removes worldwide names associated with it from the array’s object database

Connection Map:
Displays an informational dialog that depicts in tabular form the Fibre Channel ports via which the selected host is able to communicate with the array. The dialog is similar to that depicted in Figure 7.13, but shows only connectivity to the selected host. The table can be downloaded in a form suitable for importing into a spreadsheet by clicking the button.

The Disconnect All Volumes and Delete Host commands display dialogs which summarize the consequences of taking the actions (connections broken, configuration information deleted, etc.). Confirmation is required before the actions are taken.

The volume-related tasks accessible from the individual host detail view are:

Edit Volume:
Displays a dialog for changing the volume’s name and/or size. Size changes are immediately visible to all hosts with shared or private connections to the selected volume. The volume name is used only in Purity administrative interactions

Connection Info:
Displays an informational dialog that depicts in tabular form the Fibre Channel ports via which the selected host is able to communicate with the selected volume. The dialog is similar to that depicted in Figure 7.13, but shows only connectivity between the selected host and volume. The table can be downloaded in a form suitable for importing into a spreadsheet by clicking the button.

Host Object Management

Some administrative tasks can be performed from multiple GUI views. For example, a host's name can be changed from any of the following Storage tab views:

Host groups and hosts

Individual host group (for hosts belonging to the host group whose detail pane is displayed)

Individual host details (shown in Figure 7.19)

Individual volume details (for volumes connected to the host to be renamed).

From any of these views:

1. Roll the mouse over the name of the host to be renamed to display the icon

2. Click the icon to display a menu, and select the Edit (or Edit Host) command, depending on the display. Either displays the Edit Host dialog

3. Enter a new name for the host and click the button.

For example, Figure 7.19 illustrates renaming a host from the individual host detail view.

Figure 7.19. Renaming a Host Object

A FlashArray communicates with a host via all of its Fibre Channel ports that have connectivity to the host. The array accepts and responds to commands received on any of its ports from any of the worldwide names associated with a host.

Worldwide names are typically associated with host objects immediately after the objects are created. There is no practical limit to the number of worldwide names that can be associated with a host object.

Associations can be added or removed at any time from the individual host detail view, as Figure 7.20 illustrates.

Figure 7.20. Associating a Worldwide Name with a Host Object

To associate an additional worldwide name with a host object:

1. Click the button in the upper right corner of the Host Ports table to display the Configure Fibre Channel WWNs for Host dialog shown in Figure 7.20. The left pane of the dialog displays the worldwide names of computers whose initiators have “logged in” (performed the Fibre Channel PLOGI operation) to the FlashArray

2. Click the + symbol to select a logged-in initiator’s worldwide name. (To deselect a worldwide name, click the – symbol associated with it in the right pane of the dialog)

3. Initiators whose worldwide names are known, but which have not yet logged in to the FlashArray (e.g., because their hosting computers have not yet been booted), can be added to host objects by entering their worldwide names manually. Click the button to display the Enter WWNs Manually dialog. Enter the initiator’s worldwide name in the text box and click the button

4. When the desired worldwide names appear in the right pane of the Configure Fibre Channel WWNs for Host dialog, click the button to effect the change.

Figure 7.21. Breaking the Association between a Worldwide Name and a Host Object

To break the association between a worldwide name and a host (Figure 7.21):

1. Roll the mouse over the worldwide name to be deleted in the Host Ports table to display the icon

2. Click the icon to display the Edit/Remove menu

3. Click the Remove command, which displays a confirmation dialog

4. Verify that the worldwide name displayed in the dialog is the one to be removed and click the button. Removal is immediate; any communication with the array via the initiator associated with the removed worldwide name is broken off.
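In the CLI, a host's worldwide name list is set as a whole, so removing one name amounts to re-specifying the list without it. A sketch with hypothetical WWNs (see purehost(1)):

    purehost setattr --wwnlist 21:00:00:24:ff:32:a1:b0,21:00:00:24:ff:32:a1:b1 host1   # two WWNs associated
    purehost setattr --wwnlist 21:00:00:24:ff:32:a1:b0 host1                           # the omitted WWN is removed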

Host-Volume Connection Tasks

For a host to read and write data on a FlashArray volume, the two must be connected. Purity only responds to I/O commands from hosts to which the volume addressed by the command is connected; it ignores commands from unconnected hosts.

Hosts can be connected to volumes either privately or, by virtue of association with a host group, via shared connections.

Making Private Host-Volume Connections

Two different GUI operations are used to connect one or more volumes to a single host and one or more hosts to a single volume. Figure 7.22 illustrates the connection of one or more volumes to a single host.

Figure 7.22. Connecting Multiple Volumes to a Single Host

To establish private connections between one or more volumes and a single host:

1. Select the host in the navigation pane. This displays its individual host detail view

2. Click the + button in the upper right corner of the Connected Volumes table title bar to display the Connect Volumes to Host dialog as shown in Figure 7.22. The left pane of the dialog displays a list of volumes eligible to be connected to the selected host. If some volumes are ineligible, click the Some volumes are unavailable link at the bottom of the dialog to display a list of volumes and reasons for unavailability. The most typical reason for unavailability is that a volume is already connected to the selected host

3. Click a + symbol in the left pane of the dialog to select a volume for connection. (Click the – symbol in the right pane to deselect a volume.) There is no practical limit to the number of volumes that may be selected

4. When selection is complete, click the button to make the connections.

Similarly, private connections between a single volume and one or more hosts can be established from the individual volume detail view, as Figure 7.23 illustrates.

Figure 7.23. Connecting Multiple Hosts to a Volume

To establish private connections between a volume and one or more hosts:

1. Select the volume in the navigation pane to display its detail view

2. Click the button in the upper right corner of the Connected Hosts and Host Groups table title bar to display the Connect Hosts dialog as shown in Figure 7.23. The left pane of the dialog displays a list of hosts eligible to be connected to the selected volume. If some hosts are ineligible, click the Some hosts are unavailable link at the bottom of the dialog to display a list of hosts and reasons for unavailability. The most typical reason for unavailability is that a host is already connected to the selected volume

3. Click a + symbol in the left pane of the dialog to select a host for connection. (Click the – symbol in the right pane to deselect a host.) There is no practical limit to the number of hosts that may be selected

4. Click the button to complete the connections.
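A private connection can also be sketched in the CLI (hypothetical names; see purevol(1) and purehost(1) for the exact syntax in this release):

    purevol connect --host host1 vol1    # establish a private connection between host1 and vol1
    purehost list --connect host1        # verify the host's volume connections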

Shared Connections

Shared connections between hosts and volumes are a consequence of hosts’ affiliation with host groups:

Adding a host to a host group:
Adding a host to a host group automatically establishes connections between it and all volumes with shared connections to the host group

Connecting a volume to a host group:
Connecting a volume to a host group automatically establishes connections between it and all hosts affiliated with the host group

Removing a host from a host group:
Removing a host from a host group automatically breaks connections between it and all volumes with shared connections to the host group. The removed host’s private connections are unaffected

Disconnecting a volume from a host group:
Disconnecting a volume from a host group automatically breaks connections between it and all hosts affiliated with the host group. Other shared and private connections to the volume are unaffected.

Shared connections are established from the individual host group view. Figure 7.24 illustrates establishment of a connection between a host group and one or more volumes.

Figure 7.24. Connecting One or More Volumes to a Host Group

To establish shared connections between a host group and one or more volumes:

1. Select the host group in the navigation pane to display its detail view

2. Click the button in the upper right corner of the Connected Volumes table to display the Connect Shared Volumes to Host Group dialog as shown in Figure 7.24. The left pane of the dialog displays a list of volumes eligible to be connected to the selected host group. If some volumes are ineligible, click the Some volumes are unavailable link at the bottom of the dialog to display a list of ineligible volumes and reasons for unavailability. The most typical reason for unavailability is that a volume is already connected to the selected host group or has a private connection to a host affiliated with the host group

3. Click the desired volume names in the left pane of the Connect Shared Volumes to Host Group dialog to select them for connection. (Volumes can be removed from the selection by clicking their names in the right pane)

4. When satisfied with the volume selection, click the button to make the shared connections.
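The CLI analog of a shared connection can be sketched as follows (hypothetical names; the --lun option is an assumption, and if this release supports it, the shared LUN must fall in the [10…255] range, per purehgroup(1)):

    purehgroup connect --vol vol1 hgroup1             # every member host now sees vol1
    purehgroup connect --vol vol2 --lun 12 hgroup1    # request a specific shared LUN (assumed option)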

Breaking Private Connections

Connections between volumes and hosts or host groups can be broken when there is no longer a need for the two to communicate. Private connections between hosts and volumes can be broken from:

Host view:
The individual host detail view for the host to be disconnected

Volume view:
The individual volume detail view for the volume to be disconnected.

Figure 7.25 illustrates breaking a private connection from the individual volume detail view.

Figure 7.25. Breaking a Private Connection from the Individual Volume Detail View

To break a private connection from the individual volume detail view:

1. Roll the mouse over the name of the host whose connection is to be broken to display the icon

2. Click the icon to display the menu and select the Disconnect command to display the confirmation dialog

3. Verify that the correct host and volume are selected and click the button.

Figure 7.26 illustrates breaking a private connection from the individual host detail view.

Figure 7.26. Breaking a Private Connection from the Individual Host Detail View

To break a private connection from the individual host detail view:

1. Roll the mouse over the name of the volume to be disconnected to display the icon

2. Click the icon to display the menu and select the Disconnect command. The confirmation dialog displays

3. Verify that the correct host and volume are selected and click the button.

Breaking Shared Connections

Shared connections between host groups and volumes can be broken from:

Host group view:
The individual host group detail view for the host group to be disconnected

Volume view:
The individual volume detail view for the volume to be disconnected.

Figure 7.27 illustrates breaking a shared connection from the individual host group detail view.

To break a shared connection from the individual host group detail view:

1. Roll the mouse over the name of the volume to be disconnected to display the icon

2. Click the icon to display the menu and select the Disconnect command. The confirmation dialog displays

3. Verify that the correct host group and volume are selected and click the button.

Figure 7.27. Breaking a Shared Connection from the Individual Host Group Detail View

To break a shared connection from the individual volume detail view (Figure 7.28):

1. Roll the mouse over the name of the host group to be disconnected to display the icon

2. Click the icon to display the menu and select the Disconnect command. The confirmation dialog displays

3. Verify that the correct host group and volume are selected and click the button.

Figure 7.28. Breaking a Shared Connection from the Individual Volume Detail View

Chapter 8. The Analysis Tab

The GUI Analysis tab provides a flexible mechanism for viewing an array's I/O performance or storage capacity and consumption history from a variety of viewpoints. Figure 8.1 shows a sample of the Analysis tab, with its three panes outlined.

Figure 8.1. The GUI Analysis Tab

Analysis Tab Display Control

The three panes of the Analysis tab are used as follows:

Content selection:
The CAPACITY and PERFORMANCE title bars in this pane select between displays of storage capacity and consumption history and I/O performance history respectively for selected volumes. Clicking either bar expands a volume selection check list below it and hides the list below the other bar. Checking the box associated with a volume includes information for that volume in the display

Time scale:
Selects the time interval over which I/O performance or storage capacity and consumption history is displayed. The right side of the pane contains a pop-up menu for selecting the interval covered by the graphs. When performance history is displayed, the left side of the pane contains a pop-up menu for selecting the statistics displayed in the graphs

Performance or capacity history (main pane):
This pane displays I/O performance or storage capacity (Host Capacity) and consumption (Array Capacity) history for the selected volumes over the selected time scale.

A FlashArray maintains a rolling 1-year history of I/O performance and storage consumption for all of its volumes. The granularity of historical data coarsens with age: older data points are spaced further apart in time than more recent ones.

The history pane displays aggregated I/O performance or storage capacity and consumption data for selected volumes for the selected time scale. The display format is similar to the Dashboard display in that:

Vertical graph expansion:
Individual graphs can be compressed, causing the remaining graphs to expand vertically to fill the pane. The increased resolution this provides can be helpful when several volumes' I/O performance is displayed

Point-in-time displays:
Rolling the mouse horizontally over any of the graphs displays the numeric values of the individual data points that comprise the graph's curves in pop-up boxes, providing greater precision than can be conveyed in graphical form.

Figure 8.1 illustrates both of these features.

Analysis Tab Time Scales

The Analysis tab supports a range of time scales in which capacity or performance history can be viewed. The overall interval represented by the full width of the graphs is set by clicking one of the seven options at the right of the Zoom dropdown in the time scale pane. This sets the interval represented by the entire width of the graph to one of seven values between three hours and one year. (In Figure 8.2, a 24-hour interval has been selected.)

The slider bar in the time scale pane provides a second, finer level of control over the display interval and granularity. Dragging the buttons at the ends of the bar left or right selects a subset of the overall interval selected in the Zoom dropdown and shown in the history pane. Figure 8.2, for example, shows the bar fully expanded, so the graph represents the full 24-hour interval selected by the dropdown.

Figure 8.2. Analysis Tab Expanded Graph, Pop-up Numeric Display, and Time Scales

By moving the end points of the time scale slider back and forth, a subset of the full Zoom interval can be selected. In Figure 8.3, the left button has been dragged to the right to telescope the display so that the full width of the graph represents approximately 10 minutes.

Figure 8.3. Displaying a Subset of the Zoom Interval

When a zoomed-in subset of the selected interval is displayed, any part of the full interval can be viewed by dragging the center icon on the zoom interval bar left or right to the desired point, as Figure 8.4 illustrates.

Figure 8.4. Moving the Display within the Zoom Interval

Analysis Tab Displayed Data

The Analysis tab can display capacity and performance data for any combination of an array’s volumes. In both cases, the volumes for which data is to be included in the display are selected by checking the boxes adjacent to their names in the Capacity or Performance panes respectively.

Capacity View

Figure 8.5 illustrates the Capacity view. Checking the All Volumes box includes a graph that represents the total for the entire array.

Figure 8.5. Data Selection—Capacity View

The Capacity view presents two graph sets, representing physical storage consumption (Array Capacity graph) and provisioned size (Host Capacity graph) respectively. The graph lines themselves depict storage capacity history over the selected period. In the Host Capacity graph, the height of a volume’s graph line changes each time the volume’s size is increased or decreased. In the Array Capacity graph, line height changes represent changes in physical storage consumed by a volume. Increases may result from more data being written to the volume, from eradication of other volumes containing shared data, or from other causes. Decreases may result from trimming or from increased sharing of deduplicated data with other volumes.

Performance View

Figure 8.6 illustrates the Performance view of the Analysis tab, displayed by clicking the Performance title bar in the left pane of the display. The view displays up to five graph lines of performance history. Clicking a check box when five are already checked has no effect.

Figure 8.6. Data Selection—Performance View

The view includes three graphs that show different performance metrics. Any or all of the graphs can be suppressed and displayed by successive clicks of their title bars. When a graph is suppressed, the displayed graphs expand to fill the entire view. The three graphs display:

Latency:
Short-term average I/O service time for the selected data type. If All Volumes is checked, displays average latency across all volumes for the selected I/O type

IOPS:
I/O operations of the selected type (read, write, or both) per second

Bandwidth:
Bytes of data of the selected type (read, write, or both) transferred per second

The lower left corner of the main pane contains the I/O Type pop-up, from which any of five data types can be selected for display:

Write:
One line per selected volume on each of the IOPS and bandwidth graphs, containing write operations per second and bytes written per second respectively. Two lines per selected volume on the latency graph, containing average write latency and average write I/O size respectively. Vertical scale graduations for average write size are on the right of the graph

Read:
One line per selected volume on each of the IOPS and bandwidth graphs, containing read operations per second and bytes read per second respectively. Two lines per selected volume on the latency graph, containing average read latency and average read I/O size respectively. Vertical scale graduations for average read size are on the right of the graph

Total R+W:
One line per selected volume on each of the IOPS and bandwidth graphs, containing I/O operations per second and bytes transferred per second respectively. Two lines per selected volume on the latency graph, containing average I/O latency and average I/O size respectively. Vertical scale graduations for average I/O size are on the right of the graph

Total & R & W:
Three lines per graph for each selected volume, containing write, read, and total data (average for the latency graph) as above

R & W Stacked:
Two shaded lines per selected volume on each of the IOPS and bandwidth graphs, representing write operations per second with read operations per second stacked above, and bytes written per second with bytes read per second stacked above. Each volume’s data is stacked above that for the previous one, so the topmost line on each of the IOPS and bandwidth graphs represents total I/O for all selected volumes. Three lines per selected volume on the latency graph, representing average read latency, average write latency, and average I/O size respectively. Vertical scale graduations for average I/O size are on the right of the graph.

Figure 8.7 illustrates the R & W Stacked Analysis tab view for two volumes.

Figure 8.7. Analysis Tab Stacked I/O Performance Display

Chapter 9. The System Tab

The System tab is used to view and manage attributes of an array as a whole. The tab includes four views, as Figure 9.1 illustrates. A view is selected by clicking its name in the navigation pane:

Array Health:
A graphical display of up/down status of array hardware components

Connections:
A graphical display of connectivity between host and array Fibre Channel ports

Configuration:
Six sub-views that enable viewing and management of array attributes

Users:
A tabular listing of users that enables viewing and management of user attributes (limited to the built-in pureuser account in the Purity v3.2 Release).

Figure 9.1. The System Tab (User View Selected)

The Array ("SYSTEM") Health ViewFigure 9.2 is an example of the Array Health view of the Array tab. The example is a schematic represen-tation of the hardware components of a FA-320 2-shelf array (Each array configuration is represented bya display that corresponds to the installed hardware).

Figure 9.2. The Array Health View

Each hardware component that reports status is associated with a square icon whose color represents the component's current status:

Green:
Fully functional

Amber:
At risk or out of range (e.g., operating temperature)

Red:
Failed

Rolling the mouse over a component's status icon displays a pop-up containing status and parameter value information. Figure 9.2, for example, illustrates¹ status information for:

Shelf 0 temperature sensors

Shelf 1, Slot 1, Port 3 (the left-most SAS port in I/O module 1)

Shelf 1 Drive Bay 0 (containing an NVRAM module)

For hardware components that can be actively managed (such as drive bays, whose identify LEDs can be made to flash), pop-ups contain command buttons that perform the functions indicated by their captions.

¹ Figure 9.2 shows multiple pop-ups in the interest of conciseness. In the actual GUI, one pop-up at a time is displayed. Pop-ups disappear when the mouse moves away from their status icons.

The Connections View

The Connections view of the System tab, shown in Figure 9.3, has two purposes:

Network topology display:
It illustrates graphically which host ports are able to communicate with which array ports

Worldwide name display:
It displays the Fibre Channel worldwide names (WWNs) or iSCSI qualified names (IQNs) of the four or eight controller ports.

Figure 9.3. The Connections View

Figure 9.3 illustrates a single-controller array connected to one host with four initiator ports. A similar block of table rows would display for each host with connections to an array.

The top data row of the block contains the Purity name of the host object, the status of the host’s ability to communicate with the array (Redundant, Single Controller, or No Connectivity), and an icon representing each array port’s ability to communicate with at least one of the host’s ports.

The top row for a host is followed by one or more rows representing its initiator ports. Each row contains the port's worldwide name, and icons that represent its ability to communicate with each array port. Dark gray icons containing green squares indicate ability to communicate; light gray icons indicate no communication.

The Configuration View

The Configuration view includes six sub-views:

Array Information

Networking

Support Connectivity

Alerts

SNMP

Array (System) Time.

The sub-view names appear in the navigation pane when the Configuration view is selected.

The Array Information Sub-View

The Array Information sub-view, illustrated in Figure 9.4, displays the array's name, its Purity version and revision numbers, and the aggregate physical storage capacity of all configured SSDs.

Figure 9.4. The Array Information Sub-View

An array's name identifies it in alert email messages. It can be changed from this view by deleting or selecting the existing name, entering a new name, and clicking the check mark to confirm the change. Syntax rules for array names are described in purecli(7).

Purity does not register array names with DNS, so if an array's name is changed, it must be re-registered before the array can be addressed by name in browser address bars, ICMP ping commands, etc.
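
The same change can also be made from the CLI with the purearray rename command, described in purearray(1). A minimal sketch, with a hypothetical new name:

purearray rename array-east-01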

The Networking Sub-View

Figure 9.5. The Networking Sub-View

The Networking sub-view, illustrated in Figure 9.5, displays IP configuration parameters for an array's Ethernet ports, both administrative and iSCSI, as well as the IP addresses of any configured DNS servers. Both physical and virtual (floating) port information is displayed.

A port's parameters can be changed by rolling the mouse over its name, and clicking the icon that appears to display the menu illustrated in Figure 9.5. Clicking the icon displays the Edit Port dialog, in which an IP address, netmask, and gateway IP address for the interface can be entered.

Optionally, between one and three DNS server IP addresses can be configured. Clicking the pencil icon that appears when the mouse is rolled over an information row in this section of the display displays the Edit DNS dialog, in which between one and three DNS server addresses can be entered if static mode is selected. If dhcp is selected in this dialog, Purity attempts to locate and configure DNS servers.
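
The equivalent CLI operation is puredns setattr, described in puredns(1). A minimal sketch, with hypothetical server addresses (the order of the list determines query order):

puredns setattr --nameserver 10.100.0.10,10.100.0.11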

The Support Connectivity Sub-View

Figure 9.6. The Support Connectivity Sub-View

Figure 9.6 illustrates the System tab Support Connectivity sub-view. The tasks performed from this view are:

Remote assist status display: Display the current status of (a) the remote assist feature that allows a Pure Storage Technical Support representative to log into an array for diagnosis and troubleshooting and (b) the automatic phonehome feature that transmits log contents to Pure Storage Technical Support every hour

Manage remote support:

Manage the remote assist connection: Enable (Connect) and disable (Disconnect) the remote assist feature

Manage the phonehome feature: Enable (Enable) and disable (Disable) hourly automatic log transmission

Transmit logs: Transmit (Send Now) selected logs to Pure Storage Technical Support on demand

Download logs: Download selected logs to files on the administrative workstation through which the session is being conducted.

Purity deletes old log information periodically. All log information stored in an array can be transmitted or downloaded. Alternatively, transmission and download can be limited to information from the current or previous day.

The Alerts Sub-View

The System tab Alerts sub-view, illustrated in Figure 9.7, is used to manage the list of addresses to which Purity delivers alert notifications, and the attributes of alert message delivery.

Figure 9.7. The Alerts Sub-View

Any valid email address can be designated to receive Purity alert messages. Click the button in the upper-right corner of the view to display the Create Alert User dialog, in which an email address can be entered. Up to 20 alert message recipients can be designated (19 in addition to the built-in [email protected] address).
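
The same designation can be made from the CLI with the purealert create subcommand, described in purealert(1). A sketch with a hypothetical recipient address:

purealert create [email protected]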

This view displays the hostname or IP address of an email relay host if one is configured for the array. Rolling the mouse over the row below the Relay Host heading displays a pencil icon. Click the icon to display the Edit Relay Host dialog, in which the IP address or fully-qualified hostname of an email relay host can be entered.

Purity uses the array name as the alert email sender account name, and the specified sender domain (if any) as the domain. Roll the mouse over the row below the Sender Domain heading to display a pencil icon. Click the icon to display the Edit Sender Domain dialog. Delete or select the sender domain name, enter an alternate name, and click the button to change the alert email sender domain.
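
Both attributes can also be set from the CLI with purearray setattr, described in purearray(1). A sketch with hypothetical values:

purearray setattr --relayhost mail-relay.example.com
purearray setattr --senderdomain example.com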

The SNMP Sub-View

The SNMP sub-view, shown in Figure 9.8, displays the list of SNMP managers with which the array communicates. It is also used to configure SNMP managers and to download the array's MIB to the administrative workstation.

To download the array's MIB to the administrative workstation, click the button in the upper-right corner of the main pane in the view. The browser's download mechanism is used for the download.

To configure the array to send alert messages to an SNMP manager, click the button in the upper-right corner of the main pane to display the Add SNMP Trap Manager dialog (Figure 9.8). Enter a name (to be used by Purity), an IP address for the manager, and a user name recognized by the manager. If DNS is configured for the array, a hostname can be entered in place of an IP address.

Figure 9.8. The SNMP Sub-View

Purity supports SNMP protocol versions v2c and v3. Select the version in use by the manager by clicking the SNMP Version dropdown.

SNMP version 3 supports secure user authorization and message transmission. When SNMP version 3 is selected, the Auth Protocol dropdown is enabled. Select the authorization protocol option used by the manager and enter its passphrase.

Selecting an authorization protocol enables the Privacy Protocol dropdown. If the manager encrypts SNMP messages, select the encryption protocol from the dropdown and enter the manager's privacy passphrase.

When all manager attributes have been entered correctly, click the button to save the manager configuration in Purity's object database.

The Array Time Sub-View

The Array Time sub-view, shown in Figure 9.9, displays the array's current time, and the IP address or fully qualified hostname of a Network Time Protocol (NTP) server with which array time is synchronized if one is configured. With the Purity v3.2 Release, Pure Storage technicians set each array's time zone during installation. An NTP server operated by Pure Storage is typically used for synchronization. To designate an alternate NTP server, enter its IP address or hostname in the text box and click the button to the right to confirm the change.
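
The same change can be made from the CLI with purearray setattr, described in purearray(1). A sketch with a hypothetical server name:

purearray setattr --ntpserver MyNTPServer1.com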

Figure 9.9. The Array Time Sub-View

The Users View

The Users view of the System tab, illustrated in Figure 9.10, is used to manage administrator accounts. The Purity v3.2 Release supports only the built-in pureuser account, which cannot be renamed or deleted.

Figure 9.10. The Users View

Purity provides for either password or public/private key pair user authentication. When an array is installed, the password of the pureuser account is set to pureuser. No public key is provided. Clicking the icon displays a menu that allows a public key to be entered or the password to be changed:

To add a public key: Click the Update Public Key command to display the Set Public Key dialog. Enter a public key and click the button. Up to 8 public keys may be entered, typically by copying and pasting from a key generator.

To replace a password: Click the Set Password command to display the Set Password dialog. Enter the current password, the new password, and new password confirmation, and click the button. A password may consist of between one and 32 keyable characters.

Chapter 10. The Messages Tab

Purity generates alerts when significant events occur within a FlashArray. Alerts are logged and transmitted to Pure Storage Technical Support via the phonehome facility. In addition, alerts may be sent as electronic mail messages to designated addresses, and as SNMP traps to designated SNMP managers.

Alert messages are either informational, or they describe warnings and errors that may require some corrective action. Corrective actions may be user motivated (e.g., removing data from an array whose free space drops below a threshold), or performed in cooperation with Pure Storage Technical Support (e.g., replacement of a failed hardware component). A green vertical bar at the left of the subject column indicates an informational message; yellow, orange, and red bars indicate warnings and errors of increasing severity.

The Purity GUI MESSAGES tab is used to manage alert messages generated by an array. Messages stored in the array's log are displayed as a table in the main pane. The table can be sorted in date, category, and subject order by clicking the respective column headings, as Figure 10.1 indicates.

Figure 10.1. The Messages Tab

In addition, dropdown menus make it possible to filter the display by:

Category: Limits the display to messages pertaining to hardware, software, or the array as a whole, or displays all alerts

Severity: Limits the display to Info (informational), Reminder (periodic repeats of previously-generated warnings), Warning, or Critical alerts. This filter is cumulative; all messages at and below the selected severity are displayed

State: Limits the display to Read, Unread, Read and Unread, or Deleted messages. (Deleted messages are not normally displayed, but remain in the array's log for 30 days after deletion)

Age: Limits the display to messages generated during one of six time intervals (illustrated in Figure 10.1)
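
The same alert history can also be reviewed and pruned from the CLI with the purealert subcommands described in purealert(1), for example:

purealert list --alert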

Viewing Alert Messages

Figure 10.2 illustrates an informational alert message viewed from the Messages tab display. Rolling the mouse over a message subject line highlights it; clicking it displays a popup containing the message body. The message header contains a brief description of the event (e.g., Purity monitoring started in Figure 10.2), and a numeric identifier for use by Pure Storage Technical Support. The body of the message contains further description of the event and indicates a suggested action (None, in the case of informational events).

Figure 10.2. Viewing an Informational Alert Message

Figure 10.3 illustrates a Critical alert message. In this case, the message body contains a suggested action (in this example, contact Pure Storage Technical Support with the message text in hand), and variables that impart more specific information about the event (in this example, the failed controller and its status).

Figure 10.3. Viewing an Alert Error Message

Reading an alert message in this way causes its entry in the table to be displayed in gray (as, for example, in the top line in Figure 10.2), but the message remains accessible, and can be managed along with unread messages.

Managing Alert Messages

The alert messages in a FlashArray's log can also be managed from the MESSAGES tab. Figure 10.4 illustrates alert message management actions.

Figure 10.4. Managing Alert Messages

Rolling the mouse over a message subject line and clicking the icon that appears to the right displays a menu from which the message can be deleted or toggled between the Read (gray) and Unread (blue) states.

Deleted messages are not displayed unless the Deleted filter is applied, but they remain in the log for 30 days after deletion. A deleted message can be restored by applying the Deleted filter to the display, clicking the message's icon, and selecting Restore, as Figure 10.5 illustrates. Messages are restored to the Read state.

Figure 10.5. Restoring Deleted Alert Messages

The four filter menus (Figure 10.1) are used to limit the messages displayed in the main pane by category, severity, state (Read or Unread), and age. Clicking the button in the upper right corner of the main pane (see Figure 10.4) displays a dropdown menu from which all currently displayed messages can be marked either Read or Unread, and from which all messages in the Read state can be deleted. Messages that are not displayed due to filtering are not affected by these commands.

Part IV. Using the Purity CLI to Administer a FlashArray

Table of Contents

I. Purity CLI Man Pages .......................................................... 108
    pureadmin .................................................................... 109
    purealert .................................................................... 111
    purearray .................................................................... 115
    purecli ...................................................................... 121
    pureconfig ................................................................... 128
    puredns ...................................................................... 129
    puredrive .................................................................... 131
    pureds ....................................................................... 134
    purehgroup ................................................................... 140
    purehgroup-connect ........................................................... 145
    purehgroup-list .............................................................. 148
    purehost ..................................................................... 152
    purehost-connect ............................................................. 156
    purehost-list ................................................................ 161
    purehw ....................................................................... 165
    purelicense .................................................................. 171
    puremonitor .................................................................. 181
    purenetwork .................................................................. 183
    pureport ..................................................................... 187
    puresnap ..................................................................... 189
    puresnmp ..................................................................... 198
    purevol ...................................................................... 203
    purevol-list ................................................................. 208
    purevol-rename ............................................................... 213
    purevol-setattr .............................................................. 215
11. Common CLI Administrative Tasks .............................................. 217
    CLI Help ..................................................................... 217
        Top-Level Help ........................................................... 217
        Command Help ............................................................. 218
        Subcommand-Level Help .................................................... 218
        Man Page Help ............................................................ 219
    Getting Started .............................................................. 219
        Creating Volumes ......................................................... 219
        Creating Hosts ........................................................... 220
    Connecting Hosts and Volumes ................................................. 221
    Resizing Volumes ............................................................. 222
    Destroying Volumes ........................................................... 223
        Recovering and Eradicating Destroyed Volumes ............................. 224
        Renaming Volumes ......................................................... 225
    Ongoing Administration of Hosts .............................................. 225
    Monitoring I/O Performance ................................................... 226
    Using listobj Output ......................................................... 227

Purity CLI Man Pages

To administer an array via the CLI, an administrator using a UNIX workstation starts a secure shell (ssh) session; an administrator using a Windows workstation starts a remote terminal emulator such as PuTTY. In both cases, the administrator connects to the array IP address specified during installation. Appendix A, Supported Remote Access Packages contains the list of supported remote terminal access packages.
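
For example, from a UNIX workstation (the array hostname here is hypothetical; substitute the address assigned during installation):

ssh pureuser@myarray.example.com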

The remainder of this chapter describes each of the CLI commands.

Table of Contents

pureadmin ........................................................................ 109
purealert ........................................................................ 111
purearray ........................................................................ 115
purecli .......................................................................... 121
pureconfig ....................................................................... 128
puredns .......................................................................... 129
puredrive ........................................................................ 131
pureds ........................................................................... 134
purehgroup ....................................................................... 140
purehgroup-connect ............................................................... 145
purehgroup-list .................................................................. 148
purehost ......................................................................... 152
purehost-connect ................................................................. 156
purehost-list .................................................................... 161
purehw ........................................................................... 165
purelicense ...................................................................... 171
puremonitor ...................................................................... 181
purenetwork ...................................................................... 183
pureport ......................................................................... 187
puresnap ......................................................................... 189
puresnmp ......................................................................... 198
purevol .......................................................................... 203
purevol-list ..................................................................... 208
purevol-rename ................................................................... 213
purevol-setattr .................................................................. 215

Name

pureadmin — manage security for the pureuser administrative account

Synopsis

pureadmin setattr [-h | --help] [--password] [--publickey]

Options

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

--password
Indicates a request to change the password for the pureuser administrative account.

--publickey
Indicates a request to change the public key for pureuser administrative account access by the workstation account from which the session is being conducted.

Description

The current Purity release supports a single administrative account named pureuser. The account is password-protected, and may alternatively be accessed using a public-private key pair. The latter mechanism is specific to an administrative workstation account; i.e., a public key entered in response to the pureadmin setattr subcommand must correspond to a private key known to the workstation and account under which the session is initiated.

The pureadmin setattr subcommand is unique in the Purity CLI in that it elicits subsidiary prompts for attribute values rather than parsing values entered on the command line:

--password
When the --password option is specified, no value is supplied. The CLI prompts for the "old" password (the password under which the current session is being conducted). If the current password is entered correctly, there are two further prompts: one for initial entry of the new password and one for confirmation. If the responses to the two prompts are identical, the password is changed immediately. All future sessions authenticate using the new password (the current session is not affected). Passwords may be between one and 32 characters in length, and may include any character that can be entered from a US keyboard.

--publickey
When the --publickey option is specified, the CLI prompts for a new public key. A new public key is typically entered by copying a value from a key generation application running in a local window on the administrative workstation and pasting it into the administrative session window. Newly entered public keys are added to the list of keys that may be used to authenticate to the pureuser account. Each public key must correspond to a private key in the account from which a session is being conducted.

Exceptions

None.

Examples

Example 1

pureadmin setattr --password

Indicates a request to change the password for the pureuser administrative account. Elicits a prompt for the "old" (current) password. If the current password is entered correctly, elicits two prompts for the new password (entry and confirmation).

Example 2

pureadmin setattr --publickey

Indicates a request to change the public key for the workstation account from which the session is being conducted. Elicits a prompt for a new public key, which would typically be copied from a key generation tool running in a local window on the administrative workstation.

See Also

n/a

Author

Pure Storage Inc. <[email protected]>

Name

purealert, purealert-create, purealert-delete, purealert-disable, purealert-enable, purealert-list, purealert-listobj, purealert-test — manages alert history and the list of designated email addresses to which Purity sends alert messages when significant events occur in an array

Synopsis

purealert create ADDRESS...

purealert delete --address ADDRESS...

purealert delete --alert ALERT-ID...

purealert disable ADDRESS...

purealert enable ADDRESS...

purealert list [--cli] [ --csv | --nvp ] [--notitle] [ --address | --alert ] [ADDRESS...]

purealert listobj [--csv] [--type address] [ADDRESS...]

purealert test [ADDRESS...]

Arguments

ADDRESS
Any valid electronic mail address.

ALERT-ID
Unique number that identifies an alert. Displayed in the first column of output from the purealert list --alert subcommand.

Options

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

--address
Displays state information for specified email addresses (purealert list) or deletes specified email addresses from the list of those designated to receive alert email messages (purealert delete).

--alert
Displays information about alerts (purealert list) or deletes specified alerts from displayed history (purealert delete).

--type {address} (purealert listobj)
Outputs a whitespace-separated list containing the specified email addresses. If no addresses are specified, contains all addresses designated to receive alert messages, whether enabled or not.

--type (purealert list)
Type of information to be displayed (alert history or designated recipients for email alert messages). If not specified, defaults to designated email recipients.

--cli
Displays specified information in the form of CLI commands that could be issued to assign the current values to the specified attributes. Not meaningful when combined with non-settable attributes.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--notitle
Suppresses generation of an initial line of output containing column titles.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

Description

Purity generates log records called alerts when significant events occur within an array.

The purealert list --alert subcommand displays a history of the alerts generated by Purity. Each alert is numbered with an ALERT-ID, which is displayed in the first column of purealert list --alert output. The purealert delete --alert subcommand deletes alerts from the history as they are dealt with or their impact is understood.

Administrators can designate electronic mail addresses to which Purity will send electronic mail messages when alert-generating events occur.

The purealert create subcommand can designate any valid electronic mail address to receive Purity alert messages. Up to 20 addresses can be designated in an array (19 in addition to the built-in [email protected]). The purealert delete --address subcommand removes email addresses from the designated list.

FlashArray systems are delivered with the address [email protected] pre-designated to receive alerts. This address can be disabled (see below), temporarily suspending the transmission of alert messages to Pure Storage Technical Support, but cannot be deleted.

The purealert test subcommand tests an array's ability to send email to any email address (designated or not). If no email address arguments are specified, test messages are sent to all designated addresses.

Verification of successful test message transmission is done at the destination. The only console response to the purealert test subcommand is the next Purity prompt.

The purealert list subcommand lists any or all designated email addresses and their states (enabled or disabled).

The purealert listobj subcommand creates whitespace-separated (no formatting option specified) or comma-separated (--csv specified) lists of designated email addresses for scripting. If no addresses are specified, produces a list of all designated addresses. The same output list is produced whether or not the --type address option is specified.

The purealert enable and purealert disable subcommands respectively enable and disable the sending of alert messages to one or more designated email addresses. They do not send alert messages themselves. If no email address arguments are specified, the subcommands enable and disable the sending of alert messages to all designated addresses, including the built-in [email protected].

When sending alerts to a designated email address is no longer appropriate, the purealert delete subcommand removes it from the list of designated addresses.

The sending account name for Purity email alert messages is the array name. This name and the sender domain for alert messages can be viewed and altered by the purearray list and purearray setattr subcommands respectively.

The purealert list --cli subcommand displays the CLI command line that would produce the array's current list of designated email addresses in their current states. This list can, for example, be copied and pasted to create an identical email alert configuration in another array, or saved as a backup.

Examples

Example 1

purealert test [email protected]
# verify at the destination that mail was received successfully
purealert create [email protected]
purealert test

Sends a test message to [email protected]. After receipt of the message at the destination has been verified by external means, designates [email protected] to receive Purity alert messages. The second purealert test subcommand sends test messages to all designated addresses, whether they are enabled or disabled.

Example 2

purealert disable [email protected]
purevol connect --host HOST1 VOL1 VOL2
purealert enable [email protected]

Inhibits Purity from sending alert messages to [email protected], so that the account does not receive the messages generated when the private connections between HOST1 and VOL1 and HOST1 and VOL2 are established. Re-enables sending to the [email protected] account after the connections are established, so that the account receives subsequent alert messages.

Example 3

purealert list --alert
# view output alert history
purealert delete --alert 135 136 137 138

Displays the history of alerts generated by the array. After the display is viewed, deletes alerts with the IDs 135, 136, 137, and 138.

See Also

purearray(1)

Author

Pure Storage Inc. <[email protected]>

Name

purearray, purearray-list, purearray-monitor, purearray-rename, purearray-setattr — manages attributes that pertain to the array as a whole

purearray-disable, purearray-enable, purearray-logiostat, purearray-phonehome, purearray-remoteassist — manages support-related functions

purearray-diagnose, purearray-test — performs array diagnostic functions

Synopsis

purearray diagnose [ --email | --log | --phonehome ]

purearray disable {phonehome}

purearray enable {phonehome}

purearray list [ --banner | --controller | --ntpserver | --phonehome | --proxy | --relayhost | --senderdomain | --space ] [--cli] [ --csv | --nvp ] [--notitle]

purearray logiostat [ --status | --stop ] [--interval SECONDS] [--nrep REPEAT-COUNT]

purearray monitor [--csv] [ --historical { 1h | 3h | 24h | 7d | 30d | 90d | 1y } | [--interval SECONDS] [--nrep REPEAT-COUNT] [--size] ] [--notitle]

purearray phonehome { --cancel | --sendall | --sendtoday | --sendyesterday | --status }

purearray remoteassist { --connect | --disconnect | --status }

purearray rename NEW-NAME

purearray setattr { --banner | --ntpserver NTP-SERVER | --proxy HTTPS-PROXY | --relayhost RELAY-HOST | --senderdomain SENDER-DOMAIN }

purearray test {phonehome}

Options

--cancel
Cancels any in-progress transmission of event logs to Pure Storage Technical Support.

--connect
Enables Pure Storage Technical Support to initiate a remoteassist session.

--controller
Displays the current mode of the array's controllers (primary, secondary, or not present for controllers that are powered off). For arrays to which a second controller has never been connected, only a single controller mode is displayed. Once a second controller has been connected, two controllers' modes are always shown.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--disconnect
Terminates an in-progress remoteassist session with Pure Storage Technical Support.

--email
Sends subcommand output to addresses that are designated and enabled to receive email alerts.

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

--historical {1h, 3h, 24h, 7d, 30d, 90d, 1y}
Displays historical performance data at the specified resolution.

--interval SECONDS
Sets the interval (in seconds) to log I/O statistics or performance data. If omitted, the interval defaults to once every second for purearray logiostat or every 5 seconds for purearray monitor.

--log
Sends subcommand output to the array's log.

--notitle
Suppresses generation of an initial line of output containing column titles.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

--nrep REPEAT-COUNT
Sets the number of times to log I/O statistics or performance data. For purearray logiostat, if omitted, logging continues until it is explicitly stopped with purearray logiostat --stop. For purearray monitor, if omitted, --nrep defaults to 1.

--banner
Displays or sets the content to be displayed during login.

--ntpserver [NTP-SERVER]
Displays or sets the hostnames or IP addresses of the NTP servers currently being used by the array to maintain reference time. Up to four NTP servers may be specified in the purearray setattr subcommand.

--phonehome
For purearray diagnose, sends subcommand output to Pure Storage Technical Support via the phonehome channel. For purearray list, displays the current state of the Purity phonehome automatic hourly log transmission facility (enabled or disabled).

--proxy [HTTPS-PROXY]
Displays or sets the proxy host for the phonehome facility when https is the phonehome protocol (the phonehome facility itself determines which protocol to use). The format for the option value in the purearray setattr subcommand is HTTPS://HOSTNAME:PORT, where HOSTNAME is the name of the proxy host and PORT is the TCP/IP port number used by the proxy host.

--relayhost [RELAY-HOST]
Displays or sets the hostname or IP address of the electronic mail relay server currently being used as a forwarding point for email alerts generated by the array.

--sendall
Sends all log information stored in the array to Pure Storage Technical Support via the phonehome channel.

--senderdomain [SENDER-DOMAIN]
Displays or sets the domain name from which Purity sends email alert messages.

--sendtoday
Sends log information from the current day (in the array's time zone) to Pure Storage Technical Support via the phonehome channel.

--sendyesterday
Sends log information from the previous day (in the array's time zone) to Pure Storage Technical Support via the phonehome channel.

--size
Displays the average I/O sizes per operation (read, write, and total).

--space
Displays the amount of physical storage connected to the array and the amount occupied by data.

--status
purearray logiostat --status displays the status of I/O statistics logging. purearray phonehome --status displays the status of in-progress log transmission jobs. purearray remoteassist --status displays the status of the remote assist session.

--stop
Stops logging I/O statistics.

Description

The purearray diagnose subcommand executes a series of CLI subcommands that capture a snapshot of an array's I/O performance, configuration, and hardware status. By default, the output is sent to the console; this can be changed with one of the following options:

--email
Mail the output to all enabled alert message recipients.

--log
Write the output to the array's log.

--phonehome
Transmit the output to Pure Storage Technical Support via the secure channel.

The purearray enable phonehome and purearray disable phonehome commands respectively enable and disable automatic hourly transmission of array logs to Pure Storage Technical Support.

The purearray list command with no option specified displays an array's name and the Purity version and revision numbers. Options can be specified to display controller name(s) and states, the NTP (Network Time Protocol) server name, the state of and proxy host for the phonehome facility, the relay host via which electronic mail is sent, and the sender domain for electronic mail alert messages.

The purearray list --space command displays physical storage capacity, both total configured and occupied by data.

The purearray logiostat command starts and stops logging of I/O statistics. By default, this feature is turned off. When it is turned on, I/O statistics are logged at the specified interval and number of repetitions; these logs are periodically sent to Pure Storage Technical Support (see purearray phonehome).

The purearray monitor command can be used to display instantaneous performance data. The purearray monitor --historical command can be used to display historical performance data at one of the following resolutions: 1 hour, 3 hours, 24 hours, 7 days, 30 days, 90 days, or 1 year. These commands can be used with the --csv and --notitle options to export historical data.

The purearray phonehome subcommand initiates immediate transmission of event logs stored in an array to Pure Storage support. Options determine whether the current day's, the previous day's, or all log information in the array is transmitted. Only one transmission may be in progress at any instant. A transmission in progress can be cancelled by specifying the --cancel option in the subcommand. The phonehome facility automatically selects either the ssh or the https protocol for communicating with Pure Storage support. If a proxy host is required by https, use the purearray setattr --proxy command to specify its hostname and port. Specify the full URL string as described in the OPTIONS section above.

Purity can transmit an array's log contents to Pure Storage Technical Support (a) automatically once per hour, or (b) on demand. Pure Storage Technical Support stores log contents for immediate or later analysis. On-demand log transmission may be complete (--sendall option) or selective (--sendtoday or --sendyesterday options). On-demand transmissions may be status-checked (--status option) or cancelled (--cancel option).

Purity can collect samples of hardware status, monitored performance, and array configuration on demand, and either (a) mail them to enabled alert recipients, (b) write them into the array's log, or (c) transmit them immediately to Pure Storage Technical Support for analysis.

The purearray remoteassist subcommand enables or terminates a detached remote assistance session with Pure Storage Technical Support. When the purearray remoteassist --connect command enables a session, a port ID is displayed. The administrator relays this ID to a Pure Storage Technical Support representative, usually by phone, to enable the technician to connect to the array and perform diagnostic functions. The administrative session from which the purearray remoteassist --connect command is issued is unaffected, and can terminate the session at any time via the purearray remoteassist --disconnect subcommand.

One remote assistance session can be active at a time; executing the purearray remoteassist --connect command while a session is active results in an error message. The purearray remoteassist --status command displays remote assistance status (capability disabled, capability enabled, or session active).

An array's name can be changed by the purearray rename command. Other attributes can be changed by options in the purearray setattr command. To set a null value, use an empty string ("").

Purity can test certain array functionality on demand. The purearray test phonehome command tests an array's ability to communicate with Pure Storage Technical Support by establishing a secure shell or secure http connection and verifying that test messages can be exchanged.

Note on Array Time

The installation technician sets the proper time zone for an array when it is installed. During operation, arrays maintain time synchronization by interacting with a Network Time Protocol (NTP) server (by default, time.purestorage.com). The purearray setattr --ntpserver NTP-SERVER command can be executed to specify an alternate NTP server by IP address or hostname.

Examples

Example 1

purearray enable phonehome

Enables automatic hourly transmission of log contents to the Pure Storage Technical Support web site.

Example 2

purearray list --phonehome

Displays the current state of the phonehome automatic hourly log transmission facility (enabled or disabled).

Example 3

purearray setattr --ntpserver MyNTPServer1.com,MyNTPServer2.com

Assigns the NTP servers MyNTPServer1.com and MyNTPServer2.com as the array's sources for reference time. Supersedes any previous NTP server assignments.

Example 4

purearray setattr --relayhost ""

Sets the relayhost attribute value to null, causing Purity to send alert email messages directly to recipient addresses rather than routing them via a relay (mail forwarding) server.

Example 5

purearray diagnose --phonehome

Takes a snapshot of performance, configuration, and hardware status and transmits it to Pure Storage Technical Support via the phonehome channel.

Example 6

purearray phonehome --sendtoday

Initiates immediate transmission of the current day's event logs to Pure Storage Technical Support via the phonehome network channel.

Example 7

purearray remoteassist --connect

Enables a Pure Storage Technical Support representative to initiate a remoteassist session with the array.

Example 8

purearray monitor --historical 1h --csv

Produces CSV output containing historical performance data for the array at the 1 hour resolution.

Example 9

purearray monitor --nrep 60 --interval 1

Displays 60 seconds' worth of instantaneous performance data for the array.

See Also

purealert(1), puredns(1)

Author

Pure Storage Inc. <[email protected]>

Name

purecli — summarizes Purity CLI general concepts and conventions

Synopsis

purecli

Options

None

Description

Conventions

This and all Purity CLI man pages adhere to the following conventions:

• Constant-width (Courier) font is used for text that is to be entered verbatim (e.g., command and subcommand names such as purehost list).

• Uppercase italic text represents entered text whose content is based on the nature of the subcommand (e.g., --size SIZE, where SIZE represents the value to be entered, such as 100g, etc.)

• The brackets, braces, and parentheses in subcommand synopses are not entered. Brackets ("[ ]") enclose choices of which zero or one may be entered. Braces ("{ }") and parentheses ("( )") enclose choices of which exactly one must be entered. Absence of brackets, braces, and parentheses indicates a required syntactic element.

CLI Command Syntax

The Purity command line interpreter (CLI) implements a single command for each type of managed object in a FlashArray system. Commands have the general form:

pureCOMMAND SUBCOMMAND OPTIONS OBJECT-LIST

The parts of a command are:

pureCOMMAND
Type of FlashArray object to be acted upon, prefixed by "pure". For example, the purevol command acts on Purity virtual storage volumes.

SUBCOMMAND
Action to be performed on specified objects.

OPTIONS
Options that specify attribute values to be assigned or modify the action performed by the subcommand. Some options can be multi-valued (e.g., --wwn, --hostlist, etc.). Multiple option values are specified in comma-separated lists, preceded and followed by white space, with individual values separated by commas, but no spaces.

OBJECT-LIST
List of objects upon which the command is to operate. Some subcommands inherently act on a single object (e.g., purehgroup setattr); others can operate on multiple objects (e.g., purehgroup connect). The latter are indicated by ellipses (...) following the object specification in man page command synopses (e.g., VOL...).

Subcommands that change object state (e.g., create, delete, setattr) require that one or more objects be specified. In passive subcommands such as list, that do not change object state, absence of an OBJECT-LIST is equivalent to specifying all objects of the type. For example, purevol list with no volumes specified displays information about all volumes in an array.
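
For example, two commands documented elsewhere in this guide illustrate the general form, the second with a multi-valued option:

purevol setattr --size 100g VOL1 VOL2
purearray setattr --ntpserver MyNTPServer1.com,MyNTPServer2.com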

Managed Objects

FlashArray administrators manage hardware and network objects and abstract objects instantiated by Purity.

The manageable hardware and network objects are:

array
Manageable attributes of the array as a whole.

DNS service
Data center DNS services used by the array.

drive
Solid-state data storage drives and NVRAM modules.

hardware components
Hardware components such as power supplies, interface ports, etc.

port
Fibre Channel ports that connect an array to the storage network.

network
The Ethernet ports that connect an array to the data center's administrative network.

The abstract objects instantiated by Purity are:

volume
Disk-like virtual storage devices used for storing client data.

host
Fibre Channel or iSCSI-attached "host" computers that use FlashArray data storage and retrieval services.

host group
Groups of hosts and volumes that share the same logical connections (called shared connections) to each other.

snapshot
Immutable point-in-time images of volume contents.

SNMP manager
External computers running SNMP management software to which arrays communicate alerts and responses to SNMP queries.

Connections

In order for a host computer to access a FlashArray virtual volume, the Purity host object that identifies it must have a connection to the volume. Host-volume connections may be:

private
These connect one host to one volume, independent of any other connections. Private connections are typically used for boot volumes and for stand-alone (non-clustered) host applications.

shared
These are a consequence of association with a host group, which defines a consistent set of connections between one or more volumes and one or more hosts. Shared connections are useful for cluster applications in which several related hosts require consistent (same LUN) connectivity to a set of storage volumes.

LUN Assignment

A host-volume connection has three components: a host, a volume, and a logical unit number (LUN) used by the host to address the volume. At the present time, Purity supports LUNs in the [1...255] range.

LUNs in the [1...9] range are reserved for private connections. Private connections to hosts that are not associated with host groups may also use LUNs in the [10...255] range. If a host is associated with a host group, however, LUNs in the [10...255] range are reserved for shared connections. A host with private connections that use LUNs in the [10...255] range cannot be associated with a host group until such private connections have been broken (and optionally re-established using LUNs in the [1...9] range).

If no LUN is specified in a connect subcommand, Purity automatically assigns a LUN in the appropriate range. An administrator may override automatic LUN assignment with the --lun option. Administrator-specified LUNs for private connections must be in the appropriate range ([1...255] for a host not associated with a host group, [1...9] for a host that does have a host group association).
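
As a sketch (the host and volume names are hypothetical; the full connection syntax is described in purevol(1) and purehost(1)), a private connection with an administrator-specified LUN might be established as:

purevol connect --host HOST1 --lun 5 VOL1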

Object Names

Purity object names use the Internet domain name (RFC 1035) character set plus the underscore character (_). The valid characters are letters (A-Z and a-z), digits (0-9), and the hyphen (-) and underscore (_) characters. The first and last characters of a name must be alphanumeric, and a name may not be entirely numeric.

Array names may be 1-56 characters in length; other objects that can be named (host groups, hosts, SNMP managers, and volumes) may be 1-63 characters in length. (Array name length is shorter so that the names of individual controllers, which are assigned by Purity based on the array name, do not exceed the maximum allowable by DNS.)

Names are case-insensitive on input. For example, vol1, Vol1, VOL1, etc. all represent the same volume. Purity displays names in the case in which they were specified in the create or rename subcommand that created the objects, regardless of the case in which they are entered in management commands.

Number Syntax

The Purity CLI requires numbers as input values in several contexts:

Volume Sizes
Specified as an integer, optionally followed by one of the characters S, K, M, G, T, or P, denoting 512-byte sectors, KiB, MiB, GiB, TiB, and PiB respectively (where "Ki" denotes 2^10, "Mi" denotes 2^20, etc.). If no suffix letter is specified, size is assumed to be expressed in sectors. Volumes must be between one megabyte and four petabytes in size. Purity adjusts specified sizes less than one megabyte to one megabyte, and fails commands in which sizes larger than four petabytes are specified. Digit separators are not permitted in size specifications (e.g., 1000g is valid, but 1,000g is not).

Fibre Channel Worldwide Names
Hosts' ("initiators") Fibre Channel worldwide names are specified as 16 hexadecimal digits, either without separators (e.g., 0123456789abcdef) or as eight 2-digit groups separated by colons (e.g., 01:23:45:67:89:ab:cd:ef). The hexadecimal digits a-f may be entered in upper or lower case. Worldwide names associated with FlashArray Fibre Channel ports start with the Pure Storage vendor identifier (24:A9:37). They are assigned during manufacture and cannot be altered.

Hardware Component Names
Each FlashArray hardware component has a name that is unique within an array. Controller and storage shelf chassis names have the form XXm; names of other hardware components have the form XXm.YYn, where:

XX
Denotes the type of chassis (CT for controller; SH for storage shelf) that is or that houses the component.

m
Identifies the specific chassis. For controllers, m has a value of 0 or 1. For storage shelves, it is the number assigned to the shelf, either during initial configuration or by the purehw setattr --id command (see below).

YY
Denotes the type of component (e.g., BAY for drive bay, FAN for cooling device, etc.). The recognized component types are listed below.

n
Identifies the specific component by its index, or relative position within the chassis.

Miscellaneous
A few other options require integer values, for example, --nrep in the puremonitor command. These integers are assumed to be decimal, and are entered without separators (e.g., --nrep 1000 is valid, but --nrep 1,000 is not).

Common CLI Subcommands

Most CLI subcommands are common to some or all object types; others (e.g., purearray phonehome) are object type-specific. Subcommands common to most or all object types are:

create
Creates and names one or more objects of the type indicated by the command. Applicable to abstract objects.

delete
Deletes one or more specified objects of the type indicated by the command. Applicable to abstract objects with the exception of volumes, for which the destroy and eradicate subcommands are used instead.

list
Lists information specified by options about one or more objects of the type indicated by the command. If no objects are specified in a command line, lists the specified information about all objects of the type. Applicable to both abstract and hardware and network objects.

listobj
Creates whitespace-separated lists of objects or attributes related to one or more objects of the type indicated by the command (e.g., purevol listobj --type host creates a list of the hosts to which volumes have connections). Intended primarily to create lists of object and attribute names for scripting.

rename
Changes the Purity name of the specified object. Applicable to abstract objects (host groups, hosts, volumes, and the array itself). Hardware object names cannot be changed.

setattr
Changes the specified attribute values for objects of the specified type. Some attributes (e.g., --hostlist) inherently apply only to a single object; others (e.g., --size) can be set for multiple objects in a single command. Applicable to abstract objects.

Common List Formatting Options

CLI list subcommands display specified attribute values for specified objects on stdout (the administrative console). The attributes displayed are object type-specific. Several display formats are supported for all list subcommands:

no format specified (default)

--cli
Displays specified information in the form of CLI commands that could be issued to assign the current values to the specified attributes. Not meaningful when combined with non-settable attributes.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

--notitle
Suppresses generation of an initial line of output containing column titles.

--total
Follows output lines with a single line containing column totals in columns where they are meaningful. Ignored when --nvp is specified (where permitted).
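
For example, a sketch of listing array space information in a form suitable for spreadsheet import:

purearray list --space --csv --notitle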

CLI Transactional Behavior

Some CLI subcommands can act on multiple objects (e.g., a single purevol setattr --size command can expand multiple volumes to the same size). When a command specifies multiple objects, the behavior is transactional with respect to each object, but not for all objects. For example, in the command

purevol setattr --size 100g VOL1 VOL2

each volume is resized atomically, but it is possible for another command (e.g., one executed by another administrator or by a script) to execute between the two resizings. In the unlikely event that administrator A executed the above command, and administrator B resized VOL2 to 200 gigabytes while Purity was resizing VOL1 to 100 gigabytes, the resize of VOL2 to 100 gigabytes would fail, because the setattr subcommand cannot be used to reduce the size of a volume.

The purevol snap command is an exception to the per-object atomicity rule. Snapshots of all volumes specified in a single purevol snap command are taken atomically and represent the contents of the volumes as of a single instant in time.
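
For example, to capture a mutually consistent point-in-time image of an application that spans several volumes, all of the volumes can be specified in a single command (the volume names are illustrative):

purevol snap DBDATA1 DBDATA2 DBLOG

Because the snapshots are taken atomically, no host write can be reflected in one snapshot but absent from another.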

Online CLI Documentation

Terse descriptions of commands, subcommands, and the effect of options are embedded in the CLI itself. These can be displayed by typing:

COMMAND -h

for commands, or

COMMAND SUBCOMMAND -h

for subcommands.

More extensive CLI help is available in the form of a man page for each CLI command and subcommand. Man page help on commands is displayed by entering:

pureman COMMAND

or

pureman COMMAND-SUBCOMMAND

The command-level man pages are listed in the SEE ALSO section below. Individual subcommand man pages are listed in the commands' pages.

Related subcommands that act on the same object type (e.g., purevol create, purevol destroy, purevol eradicate, purevol recover) are described on the same man page.

Similarly, certain subcommands can be executed on different object types (e.g., purevol connect and purehost connect). These are described on the same man page.
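
For example, the following invocations display progressively more detailed help (purevol and its setattr subcommand are used to illustrate the patterns described above; the purevol-setattr man page name follows the COMMAND-SUBCOMMAND convention and is assumed to be installed):

purevol -h
purevol setattr -h
pureman purevol
pureman purevol-setattr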

Examples

See individual subcommand pages.

See Also

pureadmin(1), purealert(1), purearray(1), pureconfig(1), puredns(1), puredrive(1), pureds(1), purehgroup(1), purehost(1), purehw(1), purelicense(7), puremonitor(1), purenetwork(1), pureport(1), puresnmp(1), purevol(1)

Page 140: Purestorage_UserGuide3_2

purecli

127

Author

Pure Storage Inc. <[email protected]>

Name

pureconfig — displays commands required to reproduce an array's current volume, host, host group, connection, network, alert, and array configuration

Synopsis

pureconfig list

Options

None

Description

Displays an array's current configuration of volumes, hosts, shared and private connections, administrative network parameters, alert email addresses, and array parameters in the form of CLI commands that would be required to reproduce the configuration on a newly-installed or otherwise unconfigured array. (The output does not contain delete or destroy subcommands.)

The output of this command can be captured on the administrative workstation and used as a script to configure a previously unconfigured array identically to the array on which the command is executed.

Executing the pureconfig list command is roughly equivalent to executing the following commands in sequence:

purevol list --cli
purehost list --cli
purehgroup list --cli
purehost list --connect --cli
purehgroup list --connect --cli
purenetwork list --cli
purealert list --cli
purearray list --cli
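
For example, assuming CLI access over ssh as the pureuser administrative account (an assumption; access methods vary by site), one possible workflow captures ARRAY1's configuration to a file on the administrative workstation for later use:

ssh pureuser@ARRAY1 pureconfig list > array1-config.txt

Whether the captured file can then be replayed directly over a new array's administrative session or must be pasted command by command depends on the administrative environment in use.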

See Also

purealert(1), purearray(1), purehgroup-list(1), purehost-list(1), purenetwork(1), purevol-list(1)

Author

Pure Storage Inc. <[email protected]>

Name

puredns — manages an array's DNS attributes

Synopsis

puredns list [ --cli | --csv | --nvp ] [--notitle]

puredns setattr [--domain DOMAIN-NAME] [--nameserver DNS-SERVER-IP-ADDRESS-LIST]

Options

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

--domain
Domain suffix to be appended by the array when doing DNS lookups.

--nameserver
A comma-separated list of up to three DNS server IP addresses. The order of the list is significant. Purity queries DNS servers in the order in which their IP addresses are listed in this option.

--cli
Displays specified information in the form of CLI commands that could be issued to assign the current values to the specified attributes. Not meaningful when combined with non-settable attributes.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--notitle
Suppresses generation of an initial line of output containing column titles.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

Description

Manages DNS attributes for an array's administrative network. Displays and sets DNS parameters (DNS server addresses and the domain suffix for searches). The --domain option sets the domain suffix to be appended to DNS queries. For example,

puredns setattr --domain mydomain.com --nameserver 192.168.0.25

specifies the IP address of the (single) DNS server and causes queries for IP addresses to be satisfied either by the specified hostname or by hostname.mydomain.com.

The list of DNS name server IP addresses specified in the --nameserver option replaces the list of name servers in effect prior to command execution.

The puredns list --cli command displays the CLI command line that would reproduce the array's current DNS configuration. This can, for example, be copied and pasted to create an identical DNS configuration in another array, or saved as a backup.
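
For example, on an array configured as in Example 1 below, the command and its output might look like the following (the output shown is illustrative):

puredns list --cli
puredns setattr --domain mydomain.com --nameserver 192.168.0.125,192.168.2.125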

Examples

Example 1

puredns setattr --domain mydomain.com --nameserver 192.168.0.125,192.168.2.125

Specifies the IP addresses of two DNS servers for Purity to use to resolve hostnames to IP addresses, and the domain suffix mydomain.com for DNS searches.

Example 2

puredns setattr --domain ''

Removes the domain suffix from Purity DNS queries.

Example 3

puredns setattr --nameserver 0.0.0.0

Unassigns DNS server IP addresses (Purity ceases to make DNS queries).

See Also

purearray(1), purenetwork(1)

Author

Pure Storage Inc. <[email protected]>

Name

puredrive — displays information about an array's solid-state drives and NVRAM modules

Synopsis

puredrive list [ --csv | --nvp ] [--notitle] [--total] [DRIVE...]

Arguments

DRIVE
Name of a drive about which information is to be displayed. Includes shelf identifier (e.g., SH0.DRV0 designates the drive in bay #0 of storage shelf #0, which is an NVRAM module).

Options

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--notitle
Suppresses generation of an initial line of output containing column titles.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

--total
Follows output lines with a single line containing column totals in columns where they are meaningful. Ignored when --nvp is specified (where permitted).

Description

Lists information about some or all of an array's solid-state drives (SSDs, used for persistent storage of user data) and NVRAM modules (used as non-volatile write cache). If no drives are specified, lists information about all of an array's SSDs and NVRAM modules.

The information listed for each specified drive includes:

Name
Name by which Purity identifies the drive in administrative operations. Drive names identify physical locations in terms of shelf and bay numbers.

From Bay
Location (in drive name format, indicating shelf and bay) at which a missing drive was installed prior to failure. This column is included in the output display only when there are missing drives for which evacuation has not completed, and contains values only for drives with a status of missing.

Type
SSD (data storage) or NVRAM (write cache).

Status
One of the following:

healthy
Drive is functioning and belongs to the array.

foreign
Drive is a functioning FlashArray drive, but does not belong to the array.

reachable
Drive is functioning, but is not initialized as a FlashArray drive.

failed
Drive may be installed, but does not respond to I/O commands.

missing
Drive is a failed FlashArray drive that has been removed from its bay and whose contents have not yet been completely evacuated. The Name column is blank, and the From Bay column indicates the shelf and bay in which the missing drive was formerly installed.

Capacity
Physical storage capacity of the drive.

Evac Remaining
For a missing drive, the amount of data that remains to be evacuated (reconstructed and stored on other drives).

Last Failure
Time at which a drive became non-responsive.

Last Evac Completed
Time at which evacuation of data from a non-responsive drive completed.

The puredrive list output display shows information both for drives that are physically present in an array's storage shelves, and for missing drives whose data is being evacuated (reconstructed and stored at other locations within the array). Physically present drives are indicated by non-null values in the Name column of the display that indicate their locations. Missing drives are indicated by empty Name column entries, and non-null values indicating their former locations in the From Bay column. Missing drives are indicated in the display only as long as evacuation of their data is in progress.

Examples

Example 1

puredrive list sh0.drv0

Lists the abovementioned information about drive DRV0 in shelf SH0.
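
Example 2

The following illustrative invocation combines options documented above:

puredrive list --csv --total

Lists information about all of the array's drives in comma-separated value form, followed by a line of column totals in columns (such as Capacity) where totals are meaningful.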

See Also

n/a

Author

Pure Storage Inc. <[email protected]>

Name

pureds, pureds-disable, pureds-enable, pureds-list, pureds-setattr, pureds-test — manages FlashArray integration with a directory service

Synopsis

pureds disable [--checkpeer]

pureds enable [--checkpeer]

pureds list [ --cli | --csv | --nvp ] [--notitle] [--groups] [--certificate]

pureds setattr [--admingroup ADMIN-GROUP] [--basedn BASE-DN] [--bindpw] [--binduser BIND-USER] [--certificate] [--auto-fetch] [--groupbase GROUP-BASE] [--readonlygroup READONLY-GROUP] [--uri URI-LIST] [--usergroup USER-GROUP]

pureds test

Options

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

--admingroup ADMIN-GROUP
Common Name (CN) of the directory service group containing users with administrative privileges when managing the FlashArray. The name should be just the Common Name of the group without the "CN=" specifier. Common Names should not exceed 64 characters in length.

--basedn BASE-DN
The base of the Distinguished Name (DN) of the directory service groups. The base should consist of only Domain Components (DCs). This field will populate with a default value when a URI is entered, by parsing domain components from the URI. The base DN should specify "DC=" for each domain component, and multiple DCs should be separated by commas.

--bindpw
Displays a prompt from which the password of the binduser account is entered interactively.

--binduser BIND-USER
Username (often referred to as sAMAccountName or User Logon Name) that can be used to bind to and query the directory. sAMAccountNames must not contain the characters:

" [ ] : ; | = + * ? < > / \ ,

and should not exceed 20 characters in length.

--certificate
Displays a prompt from which PEM formatted (base64 encoded) CA certificate data is entered interactively. This data should include the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" lines and should not exceed 3000 total characters. The certificate can be cleared by entering blank lines at the prompt.

--checkpeer
Used with pureds enable or pureds disable, the checkpeer option toggles server authenticity enforcement with the configured CA certificate. Therefore, this option can only be enabled if certificate data has been provided. If this option is enabled and certificate data is cleared, it will revert back to disabled.

--auto-fetch
Attempts to get CA certificate data from the configured URI, or the first URI in the list if more than one URI is configured. This option is only used with the --certificate option.

--groupbase GROUP-BASE
Specifies where the configured groups are located in the directory tree. This field consists of Organizational Units (OUs) that combine with the base DN attribute and the configured group CNs to complete the full Distinguished Name of the groups. The group base should specify "OU=" for each OU, and multiple OUs should be separated by commas. The order of OUs is important and should get larger in scope from left to right. Each OU should not exceed 64 characters in length.

--readonlygroup READONLY-GROUP
Common Name (CN) of the configured directory service group containing users with read-only privileges on the FlashArray. This name should be just the Common Name of the group without the "CN=" specifier. Common Names should not exceed 64 characters in length.

--uri URI-LIST
A comma-separated list of up to 30 URIs of the directory servers. These must be full URIs including the scheme: ldap:// or ldaps://. The domain names should be resolvable by configured DNS servers (see puredns(1)).

If the scheme of the URIs is ldaps://, SSL is enabled. SSL is either enabled or disabled globally, so the scheme of all supplied URIs must be the same. They must also all have the same domain. If base DN is not configured and a URI is provided, the base DN will automatically default to the Domain Components of the URIs.

Standard ports are assumed (389 for ldap, 636 for ldaps). Non-standard ports can be specified in the URI if they are in use.

--usergroup USER-GROUP
Common Name (CN) of the configured directory service group containing users with generic privileges on the FlashArray. This name should be just the Common Name of the group without the "CN=" specifier. Common Names should not exceed 64 characters in length.

--cli
Displays specified information in the form of CLI commands that could be issued to assign the current values to the specified attributes. Not meaningful when combined with non-settable attributes.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

--notitle
Suppresses generation of an initial line of output containing column titles.

Description

FlashArrays can integrate with an existing directory service to allow multiple users to log in and use the array and to provide role-based access control. Integrating with an existing directory service, such as Microsoft Active Directory, leverages the directory to maintain credentials, group/password policy and handle authentication.

Directory Service User Authentication

Configuring and enabling the Pure Directory Service changes the FlashArray to use the directory when performing user account and permission level searches. If a user is not found locally, the directory servers are queried. There is no need to specify the domain as part of the login process, just the username.

Some directory services do not support anonymous binds and queries, in which case a 'bind' account must be configured to allow the FlashArray to read the directory. It is good practice for this account to not be tied to any actual person and to have different password restrictions, such as password never expires. This should also not be a privileged account, since only read access to the directory is required. One or more URIs are configured to be connected to and queried. Configuring more than one URI provides redundancy if a single directory server is unable to handle directory queries.

Accounts with usernames that conflict with local accounts will not be authenticated against the directory. These account names include, but are not limited to:

• pureuser

• os76

• root

• daemon

• sys

• man

• mail

• news

• proxy

• backup

• nobody

• syslog

• mysql

• ntp

• avahi

• postfix

• sshd

• snmp

Additionally, users with disabled accounts will not have access to the FlashArray.

The Domain Component base of the Distinguished Name, the Organizational Unit base of the configured groups, and the Common Names of the groups themselves build the unique Distinguished Name of the groups. For example, "CN=purereadonly,OU=PureGroups,OU=SAN,OU=IT,OU=US,DC=mycompany,DC=com" is the full, unique name of the configured read-only group "purereadonly" at the group base "OU=PureGroups,OU=SAN,OU=IT,OU=US" and with base DN "DC=mycompany,DC=com".

A user must be a member of at least one configured group to have access to the FlashArray. This means that to enable the Pure Directory Service, at least one group must be configured. For Active Directory, setting a group to be a user's primary group removes the memberOf attribute, which makes filtering by group name impossible. Therefore, configured groups must not be the primary group of the user.

Role-Based Access Control

Role-based access control is achieved by configuring groups in the directory that correspond to different permission levels.

Users in the read-only group have access to execute commands that convey the state of the FlashArray, but cannot alter this state. Generic users have access to execute almost all commands, with the exception of select privileged commands that only users in the admin group have access to. Administrators have access to execute all FlashArray CLI commands, such as managing credentials for other users.

Members of more than one group have privileges corresponding to the most privileged group. In other words, a user who is both a member of the configured read-only group and the admin group will have administrator-level privileges.

To simplify configuration, only a single group base is required when integrating the FlashArray with the directory service. This means that all groups of users who have access to the FlashArray must be in the same place in the directory tree, usually a common Organizational Unit (OU). OUs are typically nested, getting more specific in purpose with each nested OU.

Nested groups, or groups as members of different groups, are supported. Users in groups which are themselves members of the configured groups will also have access to the FlashArray, as if they were a member of the configured group itself.

When a user logs in to the FlashArray, only CLI actions the user has permission to execute will be visible. Similarly, in the GUI, actions the user does not have permission to execute will be grayed out or otherwise disabled. The permission level of an individual user is cached locally to avoid frequent binds and queries against the directory. Cache entries expire after a time limit, at which point the directory is queried again and the cache entry is updated.

Certificates

If the configured directory servers have been issued certificates, the certificate of the issuing certificate authority can be stored on the FlashArray to validate the authenticity of the directory servers. When performing directory queries, the certificate presented by the server is validated using the CA certificate.

Server authenticity is only enforced if checkpeer is enabled. Checkpeer may only be enabled if a CA certificate has been configured.

Only one certificate can be configured at a time, so the same certificate authority should be the issuer of all directory server certificates. The certificate should be PEM formatted (base64 encoded) and should not exceed 3000 characters in total length.

When certificate data with valid syntax is supplied, the certificate trust is checked to determine if the certificate is self-signed or signed by a trusted root certificate authority. If the trust cannot be determined, the certificate data can still be saved, but server authenticity enforcement using the certificate may fail.

As a convenience, the Pure Directory Service can attempt to automatically fetch certificate data from the directory server. If certificate data is successfully retrieved from the server, it undergoes the same trust check as manually entered certificate data; however, if trust cannot be determined, the operation will not continue. If certificate data is successfully retrieved and the CA certificate is trusted, a final prompt to confirm the data is displayed.

pureds Subcommands

The pureds disable subcommand disables the directory service. This will stop any users in the directory from logging in. If used with the --checkpeer option, server authenticity enforcement using a certificate is disabled, but the directory service status remains unchanged.

The pureds enable subcommand enables the directory service. This will allow users in the directory to log in. At a minimum, a URI, base DN, bind user and bind password, and at least one group must be configured before the Pure Directory Service can be enabled. If used with the --checkpeer option, server authenticity enforcement using the configured CA certificate is enabled, but the directory service status remains unchanged. A certificate must be configured before enabling the checkpeer option.

The pureds list subcommand displays the current base configuration. Alternatively, if the --groups option is specified, group configuration consisting of the group names and group base is displayed. If the --certificate option is specified, currently configured CA certificate data is displayed.

The pureds setattr subcommand can be used to set or clear URIs, base DN, bind user, bind password, read-only group, user group, admin group, group base, and certificate data.

The pureds test subcommand tests the current configuration by running a series of tests. This command can be run at any time. Running the command verifies that the URI can be resolved and that Purity can bind to and query the tree using the bind user credentials. It also verifies that it can find all the configured groups, to ensure the Common Names and group base are correctly configured. If checkpeer is enabled, the initial bind and query test is repeated while enforcing server authenticity using the CA certificate. Additionally, the tests to find configured groups also enforce server authenticity.

Examples

Example 1

pureds setattr --uri "ldaps://ad1.mycompany.com,ldaps://ad2.mycompany.com" --basedn "DC=mycompany,DC=com" --binduser ldapreader

Sets the URI to both ad1.mycompany.com and ad2.mycompany.com and the scheme to ldaps:// to enable SSL. This also sets the base DN to be the correct Domain Components and sets the bind user username (or sAMAccountName). If there was no base DN set previously, explicitly setting the base DN is unnecessary.

Example 2

pureds setattr --readonlygroup purereadonly --usergroup pureusers --admingroup pureadmins
pureds setattr --groupbase "OU=PureGroups,OU=SAN,OU=IT,OU=US"

Sets the groups to be the Common Names of the directory groups. Also sets the group base to be the nested Organizational Units where the groups can be found in the tree. Combined with Example 1, the full Distinguished Name of the read-only group would be: "CN=purereadonly,OU=PureGroups,OU=SAN,OU=IT,OU=US,DC=mycompany,DC=com"

Example 3

pureds setattr --bindpw
Enter bind password:
Retype bind password:

Shows the interactive prompt for entering a password for the bind user account. The password is not shown while typing, so a confirmation prompt is presented. If the passwords do not match, no change is made.

Example 4

pureds test
Testing from ct0:
Searching ldaps://ad1.mycompany.com... PASSED
Searching for group CN=purereadonly... PASSED
Searching for group CN=pureusers... PASSED
Searching for group CN=pureadmins... PASSED

Searching ldaps://ad2.mycompany.com... PASSED
Searching for group CN=purereadonly... PASSED
Searching for group CN=pureusers... PASSED
Searching for group CN=pureadmins... PASSED

pureds enable

Shows successful output of testing the current configuration, and enabling the service following the successful test.

See Also

puredns(1)

Author

Pure Storage Inc. <[email protected]>

Name

purehgroup, purehgroup-create, purehgroup-delete, purehgroup-setattr — manage the creation, deletion, and population of Purity host group (hgroup) objects.

Synopsis

purehgroup create [--hostlist HOST-LIST] HGROUP...

purehgroup delete HGROUP...

purehgroup setattr { --addhostlist HOST-LIST | --hostlist HOST-LIST | --remhostlist HOST-LIST } HGROUP

Arguments

HGROUP
Host group name. The purehgroup create command assigns the name to the host group being created. The purehgroup delete and purehgroup setattr commands use host group names to identify the host groups to be operated upon.

Object Names

Purity object names use the Internet domain name (RFC 1035) character set plus the underscore character (_). The valid characters are letters (A-Z and a-z), digits (0-9), and the hyphen (-) and underscore (_) characters. The first and last characters of a name must be alphanumeric, and a name may not be entirely numeric.

Array names may be 1-56 characters in length; other objects that can be named (host groups, hosts, SNMP managers, and volumes) may be 1-63 characters in length. (Array name length is shorter so that the names of individual controllers, which are assigned by Purity based on the array name, do not exceed the maximum allowable by DNS.)

Names are case-insensitive on input. For example, vol1, Vol1, VOL1, etc. all represent the same volume. Purity displays names in the case in which they were specified in the create or rename subcommand that created the objects, regardless of the case in which they are entered in management commands.

Options

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

--addhostlist (purehgroup setattr only)
Comma-separated list of one or more additional hosts to be associated with the host group. Has no effect on hosts already associated with the group.

--hostlist
Comma-separated list of one or more hosts to be associated with the host group. When specified in the purehgroup setattr command, replaces the entire membership of a host group.

--remhostlist (purehgroup setattr only)
Comma-separated list of one or more host objects whose associations with the host group are to be broken.

Description

A Purity host group is an abstraction that implements consistent connections between a set of hosts and one or more volumes. Connections are consistent in the sense that all hosts associated with a host group address a volume connected to the group by the same LUN. Host groups are typically used to provide a common view of storage volumes to the hosts in a clustered application.

When a host is associated with a host group, Purity automatically connects it to all volumes connected to the group.

The hosts associated with a host group may be specified via the --hostlist option when the group is created. Hosts may be added to and removed from a host group at any time by the purehgroup setattr command.

A host may only be associated with one host group at a time, whereas a volume may be connected to multiple host groups as well as to individual hosts simultaneously.

Deleting Host Groups

The purehgroup delete command removes host groups that are no longer required. Before a host group can be deleted, it must be depopulated by the purehgroup setattr --remhostlist command. (Alternatively, the --hostlist '' option may be specified, replacing the host group's complement of hosts with a null set.)

Host Groups and LUNs

When a volume is connected to a host group, it is assigned a LUN in the range [10...255]. All hosts in the group use this LUN to communicate with it. Purity will automatically assign the lowest available LUN in the [10...255] range, or, alternatively, an administrator may specify the --lun option in the purehgroup connect command to designate a specific LUN within the range when connecting a volume to a host group.

LUNs in the range [1...9] are reserved for private host-volume connections. LUNs in the range [10...255] may be specified for private connections to hosts that have no host group association, but a host cannot be associated with a host group if it has private connections in the [10...255] range. Any such connections must be broken and re-established using LUNs in the range [1...9].
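
For example, the following hypothetical sequence (names and LUNs are illustrative) breaks such a private connection, re-establishes it with a LUN in the [1...9] range, and then associates the host with a host group:

purehost disconnect --vol VOL9 HOST9
purehost connect --vol VOL9 --lun 2 HOST9
purehgroup setattr --addhostlist HOST9 HGROUP9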

Purity automatically establishes and breaks hosts' shared connections to volumes as it adds them to and removes them from a host group. No further administrative action is required to manage these connections:

• The purehgroup setattr command with the --addhostlist option automatically connects the new member hosts to all volumes with connections to the group.

• The purehgroup connect command connects a volume to all hosts in the group, assigning the same LUN (either determined automatically or specified by the administrator) to each connection.

Similarly, purehgroup setattr with the --remhostlist option breaks all of the removed hosts' connections to volumes connected to the group (removed hosts' private connections are unaffected). Likewise, purehgroup disconnect breaks connections between all hosts associated with the group and the specified volume.

Volume connections to host groups (rather than individual hosts) are referred to as shared, to distinguish them from private connections between individual hosts and volumes (see purehost-connect(1) for a discussion of private connections).

Shared vs. Private Connections

To summarize the differences between private and shared host-volume connections:

• The purehost connect and purevol connect commands establish private connections, whereas shared connections are established either automatically when a host is associated with a host group or by the purehgroup connect command.

• The purehost disconnect and purevol disconnect commands break private connections, whereas shared connections are broken either automatically when a host is removed from a host group or by the purehgroup disconnect command.

• Private connections are made and broken independently of one another, whereas all shared connections to a volume are broken simultaneously when the volume is disconnected from a host group. Likewise, all of a host's shared connections are broken simultaneously by dissociating the host from its host group.

• Purity may assign different LUNs to the private connections between one volume and two or more hosts, even if the connections are made by the same purehost connect command, whereas all shared connections between a volume and the hosts associated with a host group use the same LUN. (An administrator may force specific LUNs to be used for private connections by specifying the --lun option in each purehost connect and purevol connect command, but assignments are not automatic as they are with shared connections.)

Exceptions

Purity does not create a host group if:

• Another host group with the specified name already exists within the array.

Purity does not delete a host group if:

• Any hosts are associated with the group or any volumes are connected to it.

Purity does not associate a host with a host group if:

• The host is part of another host group. A host may only be associated with one host group at a time.

• The host has a private connection to a volume that uses a LUN in the [10...255] range.

• The host has a private connection to a volume that is associated with the host group.

Purity does not connect a volume to a host group if:

• The volume has a private connection to a host that uses a LUN in the [10...255] range.

• The volume has a private connection to a host that is associated with the host group.

Examples

Example 1

purehost create --wwnlist 0123456789abcde6,0123456789abcde7 HOST6
purehost create --wwnlist 0123456789abcde8,0123456789abcde9 HOST7
purevol create --size 100g VOL1 VOL2 VOL3 VOL4
purehgroup create --hostlist HOST6,HOST7 HGROUP3
purehgroup connect --vol VOL1 HGROUP3
purehgroup connect --vol VOL2 HGROUP3
purehgroup connect --vol VOL3 HGROUP3
purehgroup connect --vol VOL4 --lun 25 HGROUP3

Typical usage of the purehgroup create command. Creates hosts HOST6 and HOST7. Creates 100 gigabyte volumes VOL1, VOL2, VOL3, and VOL4. Creates host group HGROUP3 and associates HOST6 and HOST7 with it. Connects VOL1, VOL2, VOL3, and VOL4 to HGROUP3. Purity assigns the lowest available LUNs in the [10...255] range to volumes VOL1, VOL2, and VOL3. If available, LUN 25 is assigned to VOL4. Both hosts communicate with the volumes via the same LUNs.

Example 2

purehgroup create --hostlist HOST1,HOST2,HOST3 HGROUP1

Creates host group HGROUP1 and associates host objects HOST1, HOST2, and HOST3 with it. No volumes are connected to the group at the time of creation.

Example 3

purehgroup create HGROUP2

... time passes...

purehgroup connect --vol VOL1 --lun 11 HGROUP2

... more time passes...

purehgroup setattr --addhostlist HOST4,HOST5 HGROUP2

Creates host group HGROUP2, but does not associate any hosts with it. At a later time, VOL1 is connected to the group, with LUN 11 assigned to it. Still later, HOST4 and HOST5 are associated with HGROUP2, causing Purity to establish shared connections between them and VOL1, using LUN 11 for communication.

Example 4

purehgroup setattr --hostlist '' HGROUP2
purehgroup delete HGROUP2

Removes all hosts from HGROUP2 and deletes the host group.

See Also

purehgroup-connect(1), purehgroup-list(1)

purehost(1), purevol(1)

Author

Pure Storage Inc. <[email protected]>

Name

purehgroup-connect, purehgroup-disconnect — manage shared connections between host groups and volumes

Synopsis

purehgroup connect --vol VOL [--lun LUN] HGROUP...

purehgroup disconnect --vol VOL HGROUP...

Arguments

HGROUP
Host group with which a shared connection to the specified volume is to be created or broken.

VOL
Name of a volume to be connected to or disconnected from a host group.

Options

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

--lun
Logical unit number (LUN) in the range [10...255] by which hosts associated with the host group are to address the volume. If not specified, Purity assigns the lowest available LUN in the [10...255] range.

--vol
Name of the volume to be connected to or disconnected from the specified host groups. Exactly one volume must be specified.

Description

Makes and breaks shared connections between hosts associated with host groups and volumes.

In order for a host computer to access a FlashArray virtual volume, the Purity host object that identifies it must have a connection to the volume. Host-volume connections may be:

Private
These connect one host to one volume, independent of any other connections. Private connections are typically used for boot volumes and for stand-alone (non-clustered) host applications. The current Purity release supports up to 255 private connections to a host that is not associated with a host group, and up to 9 private connections to a host that has a host group association.

Shared
These are a consequence of a host's association with a host group, which defines a consistent set of connections between one or more volumes and one or more hosts. Shared connections are useful for cluster applications in which several related hosts require consistent (same LUN) connectivity to a set of storage volumes. The current release supports up to 246 shared connections to a host group.

Shared connections are the subject of this man page. Private connections are managed via host objects, and are described in man page purehost-connect(1).

A host may have only one connection to a volume at any instant. Attempts to make a second connection between a host and a volume, private or shared, fail.

The purehgroup disconnect command breaks shared connections that are no longer required. When a shared connection has been broken, its LUN is available for reuse.

LUN Assignment

Any host-volume connection has three components: a host, a volume, and a logical unit number (LUN) used by the host to address the volume. The current Purity release supports LUNs in the [1...255] range.

LUNs in the [1...9] range are reserved for private connections. Private connections to hosts that are not associated with host groups may also use LUNs in the [10...255] range. If a host is associated with a host group, however, LUNs in the [10...255] range are reserved for shared connections to it. A host with private connections that use LUNs in the [10...255] range cannot be associated with a host group until such private connections have been broken (and optionally re-established using LUNs in the [1...9] range).

If no LUN is specified in a purehgroup connect command, Purity automatically assigns a LUN in the [10...255] range to be used by all hosts associated with the group to address the volume. An administrator may override automatic LUN assignment with the --lun option. Administrator-specified LUNs must be in the [10...255] range.

Exceptions

Purity does not establish a connection between a volume and a host group if:

• No LUN in the [10...255] range is available (i.e., all LUNs in the range are in use by other connections).

• An administrator-specified LUN is in use by another shared connection.

• The specified volume is already connected to a specified host group, or has a private connection to a host associated with the host group.

Examples

Example 1

purehgroup connect --vol VOL1 HGROUP1 HGROUP2

Establishes shared connections between VOL1 and the hosts associated with HGROUP1 and those associated with HGROUP2. Purity assigns a LUN to the connections with HGROUP1's hosts and another to those with HGROUP2's hosts; the two LUNs may be the same or different.

Example 2

purehgroup connect --vol VOL2 --lun 15 HGROUP3 HGROUP4

Establishes shared connections between VOL2 and the hosts associated with HGROUP3 and those associated with HGROUP4. If LUN 15 is already being used by shared connections to either of the host groups, no connections are made between that group's hosts and VOL2.

Example 3

purehgroup disconnect --vol VOL6 $(purevol listobj --type hgroup VOL6)

Breaks all shared connections between VOL6 and host groups. (The inner purevol listobj command produces a list of all host groups with shared connections to VOL6.)

Private connections to VOL6 are unaffected.

See Also

purehgroup(1), purehgroup-list(1)

purehost(1), purevol(1)

Author

Pure Storage Inc. <[email protected]>

Name

purehgroup-list, purehgroup-listobj — display host groups' attributes and storage consumption

Synopsis

purehgroup list [ --cli | --csv | --nvp ] [ --connect | --space ] [--notitle] [HGROUP...]

purehgroup listobj [--csv] [ --type { hgroup | host | vol } ] [HGROUP...]

Arguments

HGROUP
Host group for which the information specified by options is to be displayed.

Options

Options that control information displayed:

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

default (no content option specified in purehgroup list command)
Displays names and associated hosts for the specified host groups.

--connect (purehgroup list only)
Displays volumes associated with the specified host groups, and the LUNs used to address them.

--space (purehgroup list only)
Displays the size and space consumption information items listed below for each volume associated with each specified host group.

--type (purehgroup listobj only)
Specifies the type of information about the specified host groups that is to be produced in whitespace-separated format suitable for scripting.

Options that control display format:

--cli
Displays specified information in the form of CLI commands that could be issued to assign the current values to the specified attributes. Not meaningful when combined with non-settable attributes.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--notitle
Suppresses generation of an initial line of output containing column titles.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

Description

The purehgroup list command displays the information indicated by content options for the specified host groups. If no host groups are specified, the display includes the specified information for all host groups.

• If no options are specified, the display lists the hosts associated with each specified host group.

• If the --connect option is specified, the display lists volumes associated with the specified host groups, and the LUNs used to address them.

• If the --space option is specified, the display lists the following information about provisioned (virtual) size and physical storage consumption for each volume associated with the specified host groups:

Size
Size of the volume as perceived by host storage administration tools.

Data Reduction
Ratio of unique volume sectors containing host-written data to the physical storage space currently occupied by the data after reduction.

System
Amount of physical storage space occupied by RAID-3D and other array metadata.

Volumes
Amount of physical storage space currently occupied by host-written data (exclusive of array metadata or snapshots).

Snapshots
Amount of physical storage space currently occupied by data unique to one or more snapshots.

Total
Amount of physical storage space currently occupied by host-written data and the array metadata that describes and protects it.

The purehgroup listobj command creates lists of certain attributes of specified host groups in either whitespace or comma-separated form, suitable for scripting. The command produces one of three types of lists:

--type hgroup (default if --type option not specified)
Produces a list of the specified host group names. If no host group names are specified, the list contains the names of all host groups.

--type host
Produces a list of hosts associated with the specified host groups. If no host groups are specified, the list contains names of all hosts associated with host groups.

--type vol
Produces a list of volumes associated with the specified host groups. If no host group argument is specified, the list contains all volumes that are associated with any host group.

Lists are whitespace-separated by default. Specify the --csv option to produce a comma-separated list.

Exceptions

None.

Examples

Example 1

purehgroup list --connect

For all host groups, displays names of associated volumes, and the logical units used by associated hosts to address them.

Example 2

purehgroup list

Displays the names of all host groups and the hosts associated with them.

Example 3

purehgroup list --space --csv HGROUP1 HGROUP2 HGROUP3

Displays the abovementioned virtual and physical space consumption for volumes associated with each of HGROUP1, HGROUP2, and HGROUP3.

Example 4

purehgroup list --connect $(purevol listobj --type hgroup VOL1)

The inner purevol listobj command produces a list of all host groups with which VOL1 is associated. This list is input to the outer purehgroup list command to display a list of all volumes associated with host groups with which VOL1 has an association.

Example 5

purevol list --space --total $(purehgroup listobj --type vol HGROUP1)

The inner purehgroup listobj command produces a list of the volumes associated with HGROUP1. This list is input to the outer purevol list command to display the space occupied by each of these volumes, as well as the total space occupied by all volumes associated with HGROUP1.

See Also

purehgroup(1), purehgroup-connect(1)

purehost(1), purevol(1)

Author

Pure Storage Inc. <[email protected]>

Name

purehost, purehost-create, purehost-delete, purehost-setattr — manage creation, deletion, and attributes of the Purity host objects used to identify computers ("hosts") that use FlashArray storage services

Synopsis

purehost create [--iqnlist IQN-LIST] [--wwnlist WWN-LIST] HOST...

purehost delete HOST...

purehost setattr { --addiqnlist IQN-LIST | --addwwnlist WWN-LIST | --iqnlist IQN-LIST | --remiqnlist IQN-LIST | --remwwnlist WWN-LIST | --wwnlist WWN-LIST } HOST

Arguments

HOST
Name of a host object. The purehost create command associates the name with the new object. The purehost delete and purehost setattr commands use names to identify the host objects to be operated upon.

Object Names

Purity object names use the Internet domain name (RFC 1035) character set plus the underscore character (_). The valid characters are letters (A-Z and a-z), digits (0-9), and the hyphen (-) and underscore (_) characters. The first and last characters of a name must be alphanumeric, and a name may not be entirely numeric.

Array names may be 1-56 characters in length; other objects that can be named (host groups, hosts, SNMP managers, and volumes) may be 1-63 characters in length. (Array name length is shorter so that the names of individual controllers, which are assigned by Purity based on the array name, do not exceed the maximum allowable by DNS.)

Names are case-insensitive on input. For example, vol1, Vol1, VOL1, etc. all represent the same volume. Purity displays names in the case in which they were specified in the create or rename subcommand that created the objects, regardless of the case in which they are entered in management commands.

Options

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

--addiqnlist (purehost setattr only)
Adds the IQNs in the comma-separated list to those already associated with the specified host.

--addwwnlist (purehost setattr only)
Adds the worldwide names in the comma-separated list to those already associated with the specified host.

--iqnlist
Comma-separated list of one or more iSCSI qualified names (IQNs) to be associated with the specified host. In the purehost setattr command, this option replaces all IQNs previously associated with the specified host with those in the list.

--remiqnlist (purehost setattr only)
Dissociates the IQNs in the comma-separated list from the specified host.

--remwwnlist (purehost setattr only)
Dissociates the worldwide names in the comma-separated list from the specified host.

--wwnlist
Comma-separated list of one or more Fibre Channel worldwide names (WWNs) to be associated with the specified host. In the purehost setattr command, this option replaces all worldwide names previously associated with the specified host with those in the list.

Description

The host object is the abstraction used by Purity to organize the storage network addresses (Fibre Channel worldwide names or iSCSI qualified names in the current release) of client computers and to control communications between clients and volumes. In the current Purity release, the host object's only attributes are lists of one or more Fibre Channel worldwide names (WWNs) or iSCSI qualified names (IQNs) that identify the host's initiators.

The purehost create command creates a host object and may optionally associate one or more Fibre Channel worldwide names or iSCSI IQNs with it. Purity does not support Fibre Channel and iSCSI connections to the same array, but does not preclude association of both WWNs and IQNs with a host object.

Once a host object's worldwide names or IQNs have been specified, a FlashArray administrator can enable communication between it and volumes by establishing connections between the two. (See purehost-connect(1) for private connection syntax and information, and purehgroup-connect(1) for information about shared connections.)

A host's WWNs or IQNs need not be specified when it is created, but it cannot communicate with the FlashArray system until at least one WWN or IQN has been associated with it. The purehost setattr command adds to, removes from, or completely replaces the list of WWNs or IQNs associated with a host.

The purehost delete command removes host objects that are no longer required. Hosts cannot be deleted while they have private connections to volumes or while they are associated with host groups.

Exceptions

Purity does not create a host object if:

• The specified name is already associated with another host object in the array.

• Any of the specified WWNs or IQNs is already associated with an existing host object.

Purity does not delete a host object if:

• The host has private connections to one or more volumes.

• The host is associated with a host group.

Purity does not associate a worldwide name or IQN with a host if:

• The host has private connections to one or more volumes.

• The specified worldwide name or IQN is already associated with another host object.

Page 167: Purestorage_UserGuide3_2

purehost

154

Examples

Example 1

purehost create HOST1

Creates a host object called HOST1. HOST1 cannot be connected to volumes or associated with a host group until at least one worldwide name has been associated with it.

Example 2

purehost create --wwnlist 0123456789abcde1,0123456789abcde2 HOSTFC
purehost create --iqnlist hostiqn.domain.com HOSTISCSI

Creates a host object called HOSTFC and associates WWNs 01:23:45:67:89:ab:cd:e1 and 01:23:45:67:89:ab:cd:e2 with it.

Creates a second host object called HOSTISCSI and associates IQN hostiqn.domain.com with it.

Example 3

purehost setattr --wwnlist 0123456789abcdef,01:23:45:67:89:ab:cd:ee HOST3

Replaces all worldwide names previously associated with HOST3 with the two specified. This example also illustrates the two formats for entering worldwide names.

Example 4

purehost setattr --remwwnlist 01:23:45:67:89:ab:cd:ed HOST4
purehost setattr --addwwnlist 0123456789abcdec,0123456789abcdeb HOST4

Dissociates worldwide name 01:23:45:67:89:ab:cd:ed from HOST4 and replaces it with 01:23:45:67:89:ab:cd:ec and 01:23:45:67:89:ab:cd:eb. Other worldwide names associated with HOST4 are unaffected.

Example 5

purehost delete HOST5

Deletes host object HOST5. HOST5's private connections and host group association (if any) must previously have been broken.

See Also

purehost-connect(1), purehost-list(1), purehgroup(1), purevol(1)

Author

Pure Storage Inc. <[email protected]>

Name

purehost-connect, purehost-disconnect, purevol-connect, purevol-disconnect — manage private connections between hosts and volumes

Synopsis

purehost connect --vol VOL [--lun LUN] HOST...

purehost disconnect --vol VOL HOST...

purevol connect { --host HOST | --hgroup HGROUP } [--lun LUN] VOL...

purevol disconnect { --host HOST | --hgroup HGROUP } VOL...

Arguments

HOST
Name of a host to be connected to or disconnected from a volume.

VOL
Name of a volume to be connected to or disconnected from a host.

Options

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

--hgroup
Host group with which shared connections to the specified volumes are to be made or broken by purevol connect and purevol disconnect commands. Exactly one host group must be specified by this option. The option is mutually exclusive with the --host option.

--host
Host with which private connections to the specified volumes are to be made or broken by purevol connect and purevol disconnect commands. Exactly one host must be specified by this option. The option is mutually exclusive with the --hgroup option.

--lun

(Optional) Logical unit number (LUN) by which specified hosts are to address the specified volume. If this option is specified:

purehost connect Command
The connection fails for any host for which the specified LUN is already in use.

purevol connect Command
Exactly one volume must be specified.

If the --lun option is not specified, Purity assigns the lowest available LUN for each connection it makes, starting with LUN 1 for private connections and LUN 10 for shared connections.

--vol
Volume with which private connections to the specified hosts are to be made or broken by purehost connect and purehost disconnect commands. Exactly one volume must be specified by this option.

Page 170: Purestorage_UserGuide3_2

purehost-connect

157

Description

These commands make and break private connections between hosts and volumes and shared connections between host groups and volumes.

• purehost connect and purehost disconnect manage connections between a single volume and one or more hosts.

• purevol connect and purevol disconnect with the --host option manage private connections between a single host and one or more volumes.

• purevol connect and purevol disconnect with the --hgroup option manage shared connections between a single host group and one or more volumes. (See the purehgroup-connect man page for the alternate mechanism for managing shared connections.)

The two private connect subcommands are functionally identical, as are the two private disconnect subcommands; both pairs are provided for administrator convenience.

FlashArray Host-Volume Connection Paradigms

Host-volume connections may be represented as per-host tables like the one below.

  LUN       Volume
+-------+----------------------+
|   1   | VOL-NAME or "none"   | -+
|   .   |         .            |  |- Private
|   .   |         .            |  |
|   9   |         .            | -+
+-------+----------------------+
|  10   | VOL-NAME or "none"   | -+
|   .   |         .            |  |
|   .   |         .            |  |- Shared
|  255  |         .            | -+
+-------+----------------------+

For a given host, each of the 255 LUNs supported by Purity may be used to address one connected volume, or it may be unused.

As suggested by the table, however, Purity supports two types of host-volume connections:

Private
Connects one volume and one host. Uses either an administrator-specified LUN or the lowest available LUN in the range [1...9] for hosts associated with host groups, and the lowest available LUN in the range [1...255] otherwise.

Private connections are independent of one another. For example, the sequence:

purevol connect --host HOST1 VOL1 VOL2
purehost disconnect --vol VOL1 HOST1

connects HOST1 to VOL1 and HOST1 to VOL2, and then disconnects HOST1 and VOL1, leaving HOST1 connected to VOL2.

Page 171: Purestorage_UserGuide3_2

purehost-connect

158

Shared
Connects a designated set of hosts (a host group) to a designated set of volumes, providing the hosts with a consistent "view" of the volumes. (All associated hosts use the same LUN to address a given associated volume.) Uses LUNs in the range [10...255] for a maximum of 246 shared connections. All hosts and volumes associated with a host group are automatically connected to each other by virtue of their associations with the group.

Shared connections are established by the purevol connect command with the --hgroup option or by the purehgroup connect command. The latter is discussed in the purehgroup-connect man page. For example, the command:

purevol connect --hgroup HGROUP1 VOL1 VOL2

is equivalent to the sequence:

purehgroup connect --vol VOL1 HGROUP1
purehgroup connect --vol VOL2 HGROUP1

Both establish shared connections between the hosts associated with HGROUP1 and VOL1 and VOL2.

A host may have only one connection to a given volume. An attempt to connect a host to a volume to which it already has either a private or a shared connection fails, even if the connection attempt uses the alternate paradigm or specifies a different LUN. To change the LUN associated with a private connection, the connection must first be broken and then recreated by purehost connect or purevol connect. See the purehgroup-connect man page for further discussion of shared connections.

LUN Management

Hosts address I/O commands to volumes via the volumes' logical unit numbers (LUNs). The current Purityrelease supports two disjoint LUN ranges:

[1...9]
Used only for private connections.

[10...255]
Always used for shared connections (see purehgroup(1)). May be used for private connections to hosts that are not associated with host groups.

If the --lun option is not specified in a purehost connect or purevol connect command directed to a host with no host group association, Purity assigns the lowest available LUN in the [1...255] range. For private connections to hosts that are associated with host groups, Purity assigns the lowest available LUN in the [1...9] range.

If multiple hosts are specified in a single purehost connect command, there is no guarantee that the same LUN will be associated with each connection established.

When LUN and host are both specified in a purehost connect or a purevol connect command, exactly one host and one volume must be specified.

A host cannot be associated with a host group while it has private connections to volumes that utilize LUNs in the [10...255] range. Any such connections must be broken and re-established using LUNs in the [1...9] range before the host can be added to a host group.
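As a sketch of the remediation (the names are hypothetical: HOST3 has a private connection to VOL3 at LUN 12 and is to be added to HGROUP1, and LUN 2 is assumed available), the connection is first rebuilt in the [1...9] range:

purehost disconnect --vol VOL3 HOST3
purehost connect --vol VOL3 --lun 2 HOST3
purehgroup setattr --addhost HOST3 HGROUP1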

Exceptions
Purity does not establish a private connection between a host and a volume if:

• No LUN in the appropriate range is available (i.e., not being used by another connection).

• The --lun option is specified, and the specified LUN is not available.

• The --lun option is specified, with a value that is not in the appropriate range (e.g., in the [10...255] range in a connection to a host associated with a host group).

• A specified host already has either a private or a shared connection to a specified volume.

Examples

Example 1

purevol connect --host HOST1 VOL1 VOL2

Establishes private connections between HOST1 and VOL1 and between HOST1 and VOL2. Purity assigns a LUN to each connection. If HOST1 is associated with a host group, the LUNs will be the lowest available in the range [1...9]; if not, Purity will use the lowest available LUNs in the range [1...255].

Example 2

purehost connect --vol VOL3 --lun 4 HOST2
purevol connect --host HOST2 --lun 5 VOL4

Establishes private connections between HOST2 and VOL3 and between HOST2 and VOL4. If LUN 4 or LUN 5 is already in use by another connection to HOST2, the corresponding connection fails.

Example 3

purehgroup setattr --addhost HOST2 HGROUP1
purehost connect --vol VOL5 --lun 1 HOST2
purevol connect --host HOST2 VOL6 VOL7 VOL8 VOL9 VOL10 VOL11 VOL12

Assuming that LUNs 4 and 5 are in use per the preceding example, that LUN 0 is used by a previously made connection (for example, to a boot volume), and that no other LUNs in the range [1...9] are in use by HOST2, this example associates HOST2 with HGROUP1, forcing all private connections to use LUNs in the range [1...9]. The second command establishes a private connection between HOST2 and volume VOL5, using LUN 1. The third command connects VOL6, VOL7, VOL8, VOL9, VOL10, and VOL11, assigning LUNs 2, 3, 6, 7, 8, and 9, respectively. Because no more LUNs in the appropriate range are available, the private connection to VOL12 fails.

See Also
purehost(1), purehost-list(1), purehgroup(1), purevol(1)

Author
Pure Storage Inc. <[email protected]>

Name
purehost-list, purehost-listobj — display information about Purity host objects, host-volume connections, and storage provisioning and consumption.

Synopsis
purehost list [ --all | --connect | --space ] [ --cli | --csv | --nvp ] [--notitle] [ --private | --shared ] [HOST...]

purehost listobj [--csv] [ --type { host | iqn | vol | wwn } ] [HOST...]

Arguments
HOST

Host object for which the information specified by options is to be displayed.

Options
Options that control information displayed:

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

default (no content option specified with purehost list)
Displays associated worldwide names and host groups for the specified hosts.

--all (purehost list only)
Displays all visible attributes of the specified hosts. See the Description section below for a list of attributes displayed.

--connect (purehost list only)
Displays volumes connected to the specified hosts and the LUNs used to address them.

--private (purehost list --connect only)
Restricts the display of volumes connected to specified hosts to those with private connections. Invalid when combined with other options.

--shared (purehost list --connect only)
Restricts the display of volumes connected to specified hosts to those with shared connections. Invalid when combined with other options.

--space (purehost list only)
Displays the size and space consumption information items listed below for each volume connected to a specified host.

--type (purehost listobj only)
Specifies the type of information about specified hosts that is to be produced in whitespace-separated format suitable for scripting.

Options that control display format:

--cli
Displays specified information in the form of CLI commands that could be issued to assign the current values to the specified attributes. Not meaningful when combined with non-settable attributes.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--notitle
Suppresses generation of an initial line of output containing column titles.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

Description
The purehost list command displays the information indicated by content options for the specified hosts. If no hosts are specified, the display includes the specified information for all hosts. The information to be displayed is specified by including one of the following options:

• If no options are specified, displays names, associated worldwide names, and host groups for the specified hosts.

• If the --all option is specified, displays all visible attributes of the specified hosts. The display includes associated worldwide names, host groups, connected volumes and the LUNs used to address them, and the array port worldwide names through which the volumes are visible.

• If the --connect option is specified, displays volumes associated with the specified hosts, and the LUNs used to address them.

• If the --space option is specified, displays the following information about provisioned (virtual) size and physical storage consumption for each volume connected to the specified hosts:

Size
Size of the volume as perceived by host storage administration tools.

Data Reduction
Ratio of unique volume sectors containing host-written data to the physical storage space currently occupied by the data after reduction.

System
Amount of physical storage space occupied by RAID-3D and other array metadata.

Volumes
Amount of physical storage space currently occupied by host-written data (exclusive of array metadata or snapshots).

Snapshots
Amount of physical storage space currently occupied by data unique to one or more snapshots.

Total
Amount of physical storage space currently occupied by host-written data and the array metadata that describes and protects it.

By default, space consumption for all connected volumes, both private and shared, is displayed. The display can be restricted to volumes with private or shared connections by specifying the --private or the --shared option.
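For example, the following command (HOST1 is a hypothetical host name) would restrict the space display to volumes with shared connections to HOST1:

purehost list --space --shared HOST1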

The purehost listobj command produces lists of certain attributes of specified hosts in either whitespace- or comma-separated form, suitable for scripting. The command produces one of four types of lists:

--type host (default if the --type option is not specified)
List contains the specified host names. If no host names are specified, contains the names of all host objects.

--type iqn
List contains the IQNs associated with each specified host. If no hosts are specified, list contains all IQNs (administratively assigned and discovered) known to the array.

--type vol
List contains the volumes connected to the specified hosts. If no hosts are specified, list contains names of all volumes connected to any host. List can be restricted to show only private connections by specifying the --private option.

--type wwn
List contains the worldwide names associated with each specified host. If no hosts are specified, list contains all worldwide names (administratively assigned and discovered) known to the array.

Lists are whitespace-separated by default. Specify the --csv option to produce a comma-separated list.
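As a sketch (HOST1 is a hypothetical host name), the following produces a comma-separated list of the worldwide names associated with HOST1, in a form suitable for consumption by other tools:

purehost listobj --type wwn --csv HOST1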

Exceptions
None.

Examples

Example 1

purehost list --connect

Displays names, connected volumes, and logical units for all hosts. Both private and shared volume connections are displayed.

Example 2

purehost list

Displays names, associated worldwide names, and associated host groups (if any) for all hosts.

Example 3

purehost list --space --private --csv --notitle HOST1 HOST2 HOST3

Displays the virtual and physical space consumption information described above for volumes with private connections to each of HOST1, HOST2, and HOST3, in comma-separated value format with the title line suppressed.

Example 4

purevol list --space $(purehost listobj --type vol HOST1 HOST2)

The inner purehost listobj command produces a whitespace-separated list of the volumes connected to HOST1 and HOST2. The outer purevol list command displays space consumption for the volumes specified in the inner command.

Example 5

purehost listobj --type vol --private HOST1 HOST2

Lists all volumes with private connections to HOST1 and HOST2.

Example 6

purevol list --connect --private $(purehost listobj --type vol HOST1)

The inner purehost listobj command produces a list of the volumes to which HOST1 is connected. The list is input to the purevol list command to display all hosts with private connections to those volumes.

See Also
purehost(1), purehost-connect(1), purehgroup(1), purevol(1)

Author
Pure Storage Inc. <[email protected]>

Name
purehw — displays information about and controls visual identification of FlashArray hardware components

Synopsis
purehw list [--all] [ --csv | --nvp ] [--notitle] [--type COMPONENT-TYPE] [COMPONENT...]

purehw setattr [ --id ID | --identify {off | on} ] COMPONENT...

Arguments
COMPONENT

Hardware component whose information is to be displayed or whose attribute is to be set to the specified value.

Options
-h | --help

Can be used with any command or subcommand to display a brief syntax description.

--all
Includes additional information in the display that is primarily of use to Pure Storage Technical Support representatives.

--id
Integer identifier for the component. Valid for storage shelves, whose front panel LED identifiers are settable by the purehw setattr command (as well as by manipulation of the button on the panel).

--identify
Turns a visual identifier for the component on or off. Valid for drives, storage shelves, and controllers.

--type
Type of component for which information is to be displayed. When this option is specified, information is displayed for all components of the specified type. Valid values for --type are: ct (controller), bay (drive bay), eth (Ethernet port), fan (fan), fc (Fibre Channel port), ib (Infiniband port), pwr (power supply), sas (SAS port), tmp (temperature sensor), sh (storage shelf), and drv (drive).

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--notitle
Suppresses generation of an initial line of output containing column titles.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

Description
Most FlashArray hardware components are capable of reporting their operational status. In addition, controllers, drive bays, and storage shelves can be configured to identify themselves visually by flashing LEDs or by showing numbers in LED displays. The purehw list command displays information about specified hardware components (when no components are specified, information about all components is displayed). The purehw setattr command controls visual identification of specified controllers, storage shelves, and storage shelf drive bays.

Hardware Component Status Information
The purehw list command displays information about array hardware components that are capable of reporting their status. The display is primarily useful for diagnosing hardware-related problems. For each component specified in a command line, the following information is reported.

Name

Each component has a name that is unique within its array. Controller and storage shelf chassis names have the form XXm; names of other components have the form XXm.YYn, where:

XX
Denotes the type of chassis (CT for controller; SH for storage shelf) that is or that houses the component.

m
Identifies the specific chassis. For controllers, m has a value of 0 or 1. For storage shelves, it is the number assigned to the shelf, either during initial configuration or by the purehw setattr --id command (see below).

YY
Denotes the type of component (e.g., BAY for drive bay, FAN for cooling device, etc.). The recognized component types are listed below.

n
Identifies the specific component by its index, or relative position within the chassis.

Status

Each component reports its status as either:

ok
Functioning properly at full capacity.

degraded
Functioning, but not at full capability due to a non-fatal failure.

failed
Installed but not functioning.

not installed
Does not appear to be installed.

Slot

Slot number occupied by the PCI Express card that hosts the component. (Relevant only for ports hosted by PCI Express cards in controller chassis.)

Identify

State of an LED (on or off) used to visually identify the component. (Relevant only for controllers, storage shelves, and the drive bays in storage shelves.)

Speed

Speed at which the component is operating. (Relevant only for fans [RPM] and interface ports [bits/second].)

Temperature

Temperature reported by the component. (Relevant only for temperature sensors.)

FlashArray Hardware Component Types
The purehw list command displays the information below about some or all of the hardware components in an array. If no component names are specified in the command line, information is displayed for all components capable of reporting operational status. If one or more component names (entered as displayed in the "Name" column of the output display) are specified, information is displayed for those components only. If the --type option is present, information for all components of the specified type is displayed.

The --all option augments the display with internal information about components that is primarily useful to Pure Storage Inc. service technicians.

The following FlashArray components are capable of reporting operating status:

Controller (CTm)

Purity reports an overall controller status generated from a combination of (a) the status of components in the chassis, and (b) ambient software conditions. The components in a controller chassis are:

Drive Bay (CTm.BAYn)

Each controller chassis contains drive bays (12 in FA-300 series controllers; four in FA-400 series controllers), one of which houses an SSD from which the controller boots the Purity Operating Environment. The other bays are not normally occupied. Only FA-300 controller drive bays report status.

Ethernet Port (CTm.ETHn)

Each FlashArray controller contains 1-gigabit Ethernet ports (two in FA-300 series controllers; four in FA-400 series controllers), one of which (normally ETH0) is used for remote array administration. FA-3xxE and FA-4xxE (iSCSI-based) FlashArrays contain four 10-gigabit Ethernet ports rather than Fibre Channel ones. Observing from the rear of the controller, 10GbE ports are indexed as follows:

FA-300 Series Ports
Top-to-bottom, left-to-right. CTm.ETH2 is the upper left port, CTm.ETH3 is the lower left port, CTm.ETH4 is the upper right port, and CTm.ETH5 is the lower right port.

FA-400 Series Ports
Left-to-right, top-to-bottom. CTm.ETH4 is the left port in slot 6, CTm.ETH5 is the right port in slot 6, CTm.ETH6 is the left port in slot 7, and CTm.ETH7 is the right port in slot 7.

10GbE ports report status and communication speed.

Fan (CTm.FANn)

Each controller contains cooling fans that report status and operating speed. Fan replacement is a service technician operation.

Fibre Channel Port (CTm.FCn)

Each Fibre Channel-based controller contains two dual-port Fibre Channel host bus adapters. Observing from the rear of the controller, ports are indexed as follows:

FA-300 Series Ports
Top-to-bottom, left-to-right. CTm.FC0 is the upper left port, CTm.FC1 is the lower left port, CTm.FC2 is the upper right port, and CTm.FC3 is the lower right port.

FA-400 Series Ports
Left-to-right, top-to-bottom. CTm.FC0 is the left port in slot 6, CTm.FC1 is the right port in slot 6, CTm.FC2 is the left port in slot 7, and CTm.FC3 is the right port in slot 7.

Fibre Channel ports report status and communication speed.

Infiniband Port (CTm.IBn)

(Used in FA-320 and FA-420 dual-controller arrays only.) The two controllers in an FA-320 or FA-420 array intercommunicate via redundant Infiniband interfaces located on a single PCI Express host bus adapter. Observing from the rear of the controller, Infiniband ports are indexed as follows:

FA-320 Ports
CTm.IB0 is the upper port; CTm.IB1 is the lower.

FA-420 Ports
CTm.IB0 is the left port in slot 4; CTm.IB1 is the right port in slot 4.

Infiniband ports report status and communication speed.

Power Supply (CTm.PWRn)

Each controller contains two power supplies that report status. Power supply replacement is a service technician operation.

SAS Port (CTm.SASn)

Each controller contains two dual-port SAS interface cards that are used to communicate with drives in storage shelves. Observing from the rear of the controller, SAS ports are indexed as follows:

FA-300 Series Ports
Top-to-bottom, left-to-right. CTm.SAS0 is the upper left port, CTm.SAS1 is the lower left port, CTm.SAS2 is the upper right port, and CTm.SAS3 is the lower right port.

FA-400 Series Ports
Left-to-right, top-to-bottom. CTm.SAS0 is the left port in slot 2, CTm.SAS1 is the right port in slot 2, CTm.SAS2 is the left port in slot 3, and CTm.SAS3 is the right port in slot 3.

SAS ports report status and communication speed.

Temperature Sensor (CTm.TMPn)

Each controller chassis contains 19 sensors that report temperature at various points within the chassis. These report status and ambient temperature. In steady-state operation, Purity generates alerts when a temperature sensor reports a value outside of the normal operating range.

Storage Shelf (SHm)

Purity reports an overall storage shelf status generated from a combination of (a) the status of components in the chassis, and (b) software conditions. The components in a storage shelf chassis are:

Drive Bay (SHm.BAYn)

Each storage shelf contains a row of 24 hot-swappable bays, 22 of which house carriers containing SSDs used for data storage and two of which house carriers containing NVRAM modules used as temporary holding areas for host-written data. Bays are indexed from left to right, facing the front of the shelf.

Drive (SHm.DRVn)

Each drive bay holds a carrier containing an SSD or NVRAM module, whose index is the bay's index. Drive replacement is a user service operation.

Fan (SHm.FANn)

The fan sub-component within each of a shelf's two power and cooling modules (PCMs) reports its status and current operating speed (RPM).

Power Supply (SHm.PWRn)

The power supply sub-component within each of a shelf's two power and cooling modules (PCMs) reports its operating status.

SAS Port (SHm.SASn)

Two I/O Modules (IOMs) accessed from the rear of the shelf chassis contain the shelf's six SAS ports.

Visually Identifying Hardware Components
FlashArray controllers, storage shelves, and storage shelf drive bays are all equipped with LEDs that can be illuminated to identify the components definitively, for example, if replacement is required. The purehw setattr command with the --identify option turns a component's identifying LED on or off based on the value specified for the option. The Identify column in purehw list output displays the current state of the identifying LED for each component that has one.

In addition to identifying LEDs, storage shelf control panels contain numeric displays in which numbers can be set to uniquely identify shelves in multi-shelf arrays. Once set by the purehw setattr command with the --id option, a storage shelf's number becomes part of the name of each component in the shelf. For example, the purehw setattr --id 2 SH0 command causes components in the shelf to assume names of the form SH2.COMPONENT-NAME (e.g., SH2.DRV0, SH2.FAN1, etc.) in subsequent commands and displays.
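As a sketch of this renumbering workflow (shelf SH1 and the new identifier 3 are hypothetical), the first command below renumbers the shelf, after which the second lists the array's drives under their new names; the drives formerly reported as SH1.DRVn would then appear as SH3.DRVn:

purehw setattr --id 3 SH1
purehw list --type drv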

Exceptions
Purity does not execute the purehw setattr command for components other than controllers, storage shelves, and storage shelf drive bays.

Examples

Example 1

purehw list SH0.DRV21 SH0.DRV22

Displays information for the SSDs in bays 21 and 22 of storage shelf 0.

Example 2

purehw setattr --identify on SH1.DRV0 SH1.DRV1

Illuminates the identifying LEDs for drive bays 0 and 1 in shelf SH1.

Example 3

purehw list --type fan

Displays information for all cooling fans in the array.

See Also
puredrive(1)

Author
Pure Storage Inc. <[email protected]>

Name
purelicense — End User Agreement (EUA) between Pure Storage Inc. and end users of the company's products

End User Agreement, Version 2.5

Available online at: http://www.purestorage.com/agreements/Pure_enduser_agreement_v2.5.pdf.

IMPORTANT: PLEASE READ THIS END USER AGREEMENT ("AGREEMENT") BEFORE INSTALLING, CLICKING ON THE 'ACCEPT' BUTTON, CONFIGURING AND/OR USING THE PURE STORAGE FLASHARRAY PRODUCT ("PRODUCT"). THIS AGREEMENT APPLIES TO THE PRODUCT THAT YOU OR THE ENTITY THAT YOU REPRESENT ("CUSTOMER") OBTAINED EITHER DIRECTLY FROM PURE STORAGE, INC. ("PURE") OR FROM A PURE AUTHORIZED RESELLER. BY INSTALLING, CONFIGURING AND/OR USING THE PRODUCT IN ANY WAY, YOU REPRESENT AND WARRANT THAT YOU HAVE THE AUTHORITY TO BIND CUSTOMER TO THESE TERMS, AND CUSTOMER IS UNCONDITIONALLY CONSENTING TO BE BOUND BY AND IS BECOMING A PARTY TO THIS AGREEMENT WITH PURE. IN ADDITION, YOU REPRESENT AND WARRANT THAT YOU ARE WHO YOU PURPORT TO BE AND YOU ARE NOT SEEKING ACCESS TO THE PRODUCT FOR THE BENEFIT OF A THIRD PARTY OR TO ENABLE A THIRD PARTY TO BREACH ANY OF THE TERMS OF THIS AGREEMENT, INCLUDING, BUT NOT LIMITED TO, THE RESTRICTIONS IN SECTION 4.1 BELOW. PURE DOES NOT AGREE TO ANY OTHER TERMS, INCLUDING WITHOUT LIMITATION ANY TERMS ON CUSTOMER'S PURCHASE ORDER.

1. PURE STORAGE GUARANTEE PROGRAM.

Unless you have completed a free trial "proof of concept" for the Products, all new Products purchased come with a 30 day "money back" guarantee. You may receive a complete refund for the Products (and applicable support services) if you notify Pure, or the applicable authorized reseller you purchased the Products from, of your intent to elect for a refund within 30 days of receipt, provided that you have performed a good-faith installation of the Product that allows the Product to "phone home" to Pure (share diagnostic and performance data with Pure). Following such notice, you and Pure may mutually agree to a specific time interval during which Pure may attempt to "cure" whatever concerns you have raised. At the end of such time (or immediately if you decide that you do not want to work with Pure to cure) you may decide in good faith to either keep your Products or return such products in like new condition in original packaging and receive a full refund, except as noted below. Pure will pay the cost of shipping for guarantee program returns that are shipped according to Pure's standard RMA procedures. Pure reserves the right to charge refurbishing fees, in Pure's sole discretion, for any Products that are returned damaged. The optional installation service, which constitutes a Pure systems engineer being on-site for one day to assist with physical installation, initial configuration, and basic operational training, is non-refundable. Refunds will take two to four weeks to process following receipt of the returned product by Pure.

2. EVALUATION ONLY PRODUCT TERMS.

2.1. General.

If Customer has not yet purchased the Products, but has obtained them for evaluation purposes ("Evaluation Products"), then the terms and conditions in this Section 2 shall apply and those in Section 3 do not apply. Reference Section 3 for the terms applicable to purchased Products.

2.2. Evaluation Product Delivery.

Pure shall deliver the Evaluation Product to Customer at the address agreed to by the parties. Risk of loss shall pass to Customer upon delivery and Customer shall have and maintain appropriate insurance to cover loss of or damage to the Product. Evaluation Products shall remain Pure's sole and exclusive personal property and Customer shall not encumber, sell or otherwise dispose of the Product without having received prior written authorization from Pure.

2.3. Evaluation License and Term.

Subject to the terms and conditions of this Agreement (excluding its Section 3), Pure hereby provides Customer the right to use the Product (including any software embedded therein) solely for the purposes of evaluating the performance and functionality of the Product and not for storage of production data. Customer agrees to use and evaluate the Product (in accordance with the Product documentation made available by Pure on-line) and report on its operations to Pure, for the period of time specified by Pure in writing, or if no such period is specified then for thirty (30) days from the date of delivery to Customer (the "Evaluation Term"). The license in this Section 2.3 and all of Customer's rights to use the Product will terminate immediately in the event that Customer materially breaches any provision of this Agreement. Upon any such termination, Sections 2.4, 4, 5, 6.2, 7, 8.4, 10, 11, 12, 13 and 14 will survive and Customer shall promptly discontinue all use of the Product.

2.4. Return of Evaluation Product.

At the end of the Evaluation Term or upon earlier termination, if Customer elects not to purchase the Product, then Customer shall (i) promptly contact Pure regarding the return of the Product to obtain an RMA number, packaging instructions and shipping address; and (ii) promptly return the Product to Pure in its original packaging and in accordance with Pure's instructions. Products returned to Pure shall be in good condition, normal wear and tear excepted. If Customer does not have the original packaging, Pure will provide replacement packaging to Customer for a fee of US$100, which Customer shall pay within thirty (30) days of receipt of invoice. Customer shall reimburse Pure for any repair or replacement costs associated with any damage to the Product (other than normal wear and tear) while in Customer's possession or resulting from improper handling or the use of un-approved packaging.

3. PURCHASED PRODUCTS TERMS.

3.1. General.

If Customer has submitted a purchase order for the Product, and such order has been accepted by Pure or its authorized reseller, then the Product will be a purchased Product and is subject to the terms and conditions of this Section 3, and those in Section 2 do not apply. If Customer previously obtained the Product for evaluation and subsequently elected to purchase the Product, then the terms of Section 3 shall supersede those in Section 2, once Customer's purchase order has been accepted by Pure or its authorized reseller.

3.2. Purchased Product Delivery and Acceptance.

Pure shall use its reasonable commercial efforts to ship the Product to the address requested. Title to Products (except Software as defined in Section 3.3) and risk of loss of the Products will pass upon delivery to Customer, FOB Pure's place of shipment. Unless otherwise agreed by Pure in writing, Customer will be responsible for all shipment costs. Products will be deemed accepted upon delivery and Customer hereby waives all right of revocation.

3.3. Software License.

Subject to the terms and conditions of this Agreement and any other limitations that may be expressly stated in the pricing or quote pursuant to which the Product was purchased, Pure grants to Customer a nontransferable, nonexclusive, royalty-free, fully paid, revocable, worldwide license (without the right to sublicense) to use and execute the software provided with or incorporated in the Product (the "Software"), in executable object code format only, and solely to the extent necessary to operate the Product in accordance with the Product documentation made available by Pure on-line.

3.4. Termination of Software License.

The license in Section 3.3 and all of Customer's rights to use the Software will terminate immediately in the event that Customer returns the Product to Pure in exchange for a refund or in the event that Customer materially breaches any provision of this Agreement. Upon any such termination, Sections 2.4, 4, 5, 6.2, 7, 8.4, 10, 11, 12, 13 and 14 will survive and Customer shall promptly discontinue all use of the Software.

4. PRODUCT RESTRICTIONS AND TITLE.

4.1. Restrictions.

Customer agrees that it will not (i) reproduce, modify, distribute, publish, rent, lease, sublicense or assign, disclose, transfer or make available to any third party any portion of the Software (or any related documentation) in any form; (ii) reverse engineer, decompile, or disassemble any portion of the Software, or otherwise attempt to decrypt, extract or derive source code for, or any algorithms or data structures embodied within, the Software or any parts thereof (such as by connecting a data analyzer to internal interfaces within the product); (iii) use the Product, including, but not limited to, running the Software, in order to build a similar or competitive product or service; (iv) transfer, copy or use the Software to or on any other product or device for any purpose; or (v) publish or disclose to any third party any performance or benchmark tests or analyses or other non-public information relating to the Product, the Software or the use thereof, except as may be authorized by Pure in writing. Customer acknowledges and agrees that if any of the foregoing restrictions are breached, the Customer, its affiliates, and its and their development teams, shall be deemed to have had access to Pure's trade secrets and highly valuable confidential information. In the event that Customer or its affiliates also develop competing storage products, Customer acknowledges and agrees that the Customer and its affiliates shall be deemed to have used such highly valuable confidential information in developing its and their storage products and related service offerings. Any future release, update, or other addition to functionality of the Software made available by Pure to Customer, shall be subject to these terms and conditions, unless Pure expressly states otherwise. The Software is copyrighted and protected by the laws of the United States and other countries, and international treaty provisions. Customer shall preserve and shall not remove any copyright or other proprietary notices in the Software, its documentation and all copies thereof.

4.2. Title to Software and Evaluation Products.

Pure and its suppliers shall retain all right, title and interest in the Software and all intellectual property rights therein, including without limitation all patent, trademark, trade name and copyright, whether registered or not registered. For Evaluation Products that are subject to Section 2, Pure and its suppliers retain all right, title and interest to the entire Product. No license or other express or implied rights of any kind are granted or conveyed except for the limited internal license expressly provided above. Any rights not expressly granted by Pure in this Agreement are reserved.

5. THIRD PARTY CODE.
Certain items of software code provided with the Product are subject to "open source" or "free software" licenses ("Third Party Code"), a list of which is available on Pure's website: http://www.purestorage.com/agreements/Pure_third_party_code.pdf.

Such Third Party Code (for example, the Linux operating system) is opaquely embedded within the Product and is not directly accessible by, nor does it interface directly with, Customer's software or infrastructure, so as to avoid any open source licensing incompatibilities with Customer's intellectual property. The Third Party Code is not subject to the terms and conditions of this Agreement, except for Sections 4, 7.4, and 10. Instead, each item of Third Party Code is licensed under the terms of the license that accompanies such Third Party Code. Nothing in this document limits Customer's rights under, or grants Customer rights that supersede, the terms and conditions of any applicable license for the Third Party Code, including any rights to copy, modify, or distribute Third Party Code under the applicable license. If Pure makes modifications to such Third Party Code and if the applicable license requires that such modifications be made available and Pure does not already publish such modifications via the applicable Third Party Code community, then Pure will make its modifications available on its website.

6. PRE-RELEASE SOFTWARE, FEEDBACK, AND CUSTOMER REFERENCES.

6.1. Pre-Release Software.

Pure may periodically make available to Customer a beta or other pre-release version of the Software ("Pre-Release Software"). Use of Pre-Release Software is subject to the terms of Section 2, if Customer has an Evaluation Product, and Section 3, if Customer has purchased the Product. Although Pure intends that the Pre-Release Software will be free of major errors, Customer acknowledges that the Pre-Release Software (i) is not at the level of performance or compatibility of a final, generally available Software offering; (ii) may not operate correctly; and (iii) may be substantially modified prior to it being made commercially available as a Software release. Customer further acknowledges that the Pre-Release Software is not to be used in a production environment or for production data. In consideration of obtaining access to and use of such Pre-Release Software, Customer agrees to notify Pure of any and all problems relating to its use.

6.2. Feedback.

Pure may periodically request that Customer provide, and Customer agrees to provide to Pure, feedback regarding the use, operation, performance, and functionality of the Products, Evaluation Products and Pre-Release Software (collectively, "Feedback"). Such Feedback will include information about operating results, known or suspected bugs, errors or compatibility problems and user-desired features. Customer hereby grants to Pure a perpetual, irrevocable, worldwide, sublicenseable, and royalty-free right to use and otherwise exploit the Feedback in any manner, and such right shall survive any expiration or termination of this Agreement. Pure shall not disclose Customer's name or the name of any Customer employee to a third party in connection with any Feedback.

6.3. Customer References.

Customer agrees to provide references from time to time as reasonably requested by Pure in the form of press releases, blog posts, testimonial videos, case studies, as well as personal references (e.g., telephone or email conversations) with existing or prospective customers or partners of Pure ("References"). The contents of any public-facing References shall be approved by the parties and Pure shall obtain Customer's prior approval for any personal references, such approvals not to be unreasonably withheld. Customer also agrees that Pure may use Customer's name and logo, subject to Customer's then-current trademark usage guidelines, in Pure's marketing materials or communications (including, but not limited to, Pure's website and in Pure's marketing presentations) for the sole purpose of indicating Customer as a user of the Products.

7. EXCLUDED USES.
Customer acknowledges that the Product is not designed or intended for use in life support, life sustaining, nuclear or other applications in which failure of such Products could reasonably be expected to result in personal injury, loss of life or catastrophic property damage (the "Excluded Uses") and Customer agrees (i) not to use the Products in or for any such Excluded Uses; and (ii) to indemnify and hold Pure and its suppliers harmless from and against any claims, losses and damages to the extent arising from such Excluded Uses.

8. PRODUCT WARRANTY.

8.1. Purchased Product Warranty.

Products purchased by Customer are warranted to perform in substantial accordance with the corresponding Pure documentation for a period of one (1) year from the date of shipment by Pure. Pure, at its option, either will repair or replace any defective Product which is returned to Pure at Customer's expense or will refund the purchase price paid to Pure for such Product. Replacement Products will continue to be warranted for the remainder of the applicable warranty term. Repair, replacement, or refund is the sole and exclusive remedy for breach of this warranty and Pure reserves the right for any replacement or repairs to consist, in whole or in part, of new components or refurbished components that are functionally indistinguishable from the original components. This warranty is extended to Customer only and in no event to any other party. This warranty does not cover defects or damages resulting from: (i) use of Products other than in a normal and customary manner in accordance with Pure's documentation; (ii) physical or electronic abuse or misuse, accident, or neglect; or (iii) alterations or repairs made to Products that are not authorized by Pure in writing. Products under warranty that are returned to Pure must be returned in accordance with Pure's RMA process, as further described on Pure's website.

8.2. No Warranty or Maintenance and Support for Evaluation Products.

The warranty provided under Section 8.1 does not apply to Evaluation Products or Pre-Release Software. Pure provides Evaluation Products and Pre-Release Software for evaluation only on an "AS IS" basis, for use by Customer at its own risk. Although Pure does not provide a warranty or maintenance and support for Evaluation Products or Pre-Release Software, Customer should promptly notify Pure of any problems with an Evaluation Product or Pre-Release Software and Pure will use reasonable commercial efforts to assist Customer in resolving such identified problems. Customer agrees that any issues or bugs found in Customer's evaluation of Evaluation Products and Pre-Release Software are not guaranteed by Pure to be fixed.

8.3. Stored Data.

Pure will use reasonable commercial efforts to erase all of the data contained in or stored on any Product that is returned to Pure for repair, whether or not under warranty, or at the end of the Evaluation Term, but Customer acknowledges and agrees that Pure shall have no responsibility for any loss or disclosure of any data that is stored on a Product that is returned to Pure or Pure's supplier as designated by the RMA process or pursuant to Section 2.4.

8.4. Disclaimer.

THE WARRANTY IN SECTION 8.1 FOR PURCHASED PRODUCTS IS GIVEN IN LIEU OF ALL OTHER WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AND PURE AND ITS SUPPLIERS HEREBY DISCLAIM ALL OTHER WARRANTIES RELATING TO THE PRODUCTS AND RELATED SERVICES INCLUDING ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. PURE DOES NOT WARRANT THAT THE OPERATION OF THE PRODUCT WILL BE UNINTERRUPTED OR ERROR FREE. EXCEPT AS EXPRESSLY STATED IN THIS SECTION 8, PURE AND ITS SUPPLIERS PROVIDE THE PRODUCTS (INCLUDING ANY SOFTWARE) ON AN "AS IS" BASIS.

9. MAINTENANCE AND SUPPORT.
During the term for which Customer has ordered and paid for maintenance and support, Pure or its designated supporting resellers or distributors ("Support Partners") will provide the maintenance and support set forth in Exhibit A (Maintenance and Support). As noted in Section 8.2, maintenance and support services are not available for Evaluation Products or Pre-Release Software.

10. INDEMNIFICATION.
Pure will defend at its own expense any action against Customer brought by a third party to the extent that the action is based upon a claim that the Product (including any Evaluation Product and Pre-Release Software) directly infringes any copyrights or U.S. patents issued as of the date of Pure's shipment or misappropriates any trade secrets, and Pure will pay those costs and damages finally awarded against Customer in any such action that are specifically attributable to such claim or those costs and damages agreed to in a monetary settlement of such action. If the Product becomes, or in Pure's opinion is likely to become, the subject of an infringement claim, Pure may, at its option and expense, either (i) procure for Customer the right to continue exercising the rights licensed to Customer in this Agreement; (ii) replace or modify the Product so that it becomes non-infringing and remains functionally equivalent; or (iii) accept return of the Product from Customer and pay to Customer a prorated refund of money paid to Pure for the purchase of such Product, reduced on a straight-line basis over three (3) years from the date of delivery of such Product by Pure. Notwithstanding the foregoing, Pure will have no obligation under this Section 10 or otherwise with respect to any infringement claim based upon (a) any use of the Product that is not in accordance with Pure's documentation; (b) any use of the Product in combination with other products, equipment, software, or data not supplied by Pure if such infringement would not have arisen but for such combination; (c) any use of any release of the Software other than the most current release made available to Customer; or (d) any modification or alteration of the Product by any person other than Pure. This Section 10 states Pure's entire liability and Customer's sole and exclusive remedy for infringement claims and actions. The foregoing obligations are conditioned on Customer notifying Pure promptly in writing of such action, giving Pure sole control of the defense thereof and any related settlement negotiations, and cooperating and, at Pure's reasonable request and expense, assisting in such defense.

11. LIMITATION OF LIABILITY.
TO THE MAXIMUM EXTENT PERMITTED BY LAW, CUSTOMER AGREES THAT NEITHER PURE NOR ITS SUPPLIERS SHALL BE RESPONSIBLE FOR ANY LOSS OR DAMAGE TO CUSTOMER, ITS CUSTOMERS, OR THIRD PARTIES CAUSED BY FAILURE OF PURE TO DELIVER THE PRODUCT, FAILURE OF THE PRODUCT TO FUNCTION, OR FOR LOSS OR INACCURACY OF DATA OR COST OF PROCUREMENT OF SUBSTITUTE GOODS OR TECHNOLOGY. IN NO EVENT WILL PURE OR ITS SUPPLIERS BE LIABLE FOR ANY SPECIAL, CONSEQUENTIAL, EXEMPLARY, INCIDENTAL, OR INDIRECT DAMAGES, INCLUDING LOST PROFITS, IN CONNECTION WITH THE USE OF THE PRODUCT OR OTHER MATERIALS PROVIDED ALONG WITH THE PRODUCT OR IN CONNECTION WITH ANY OTHER CLAIM ARISING FROM THIS AGREEMENT, EVEN IF PURE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. TO THE MAXIMUM EXTENT PERMITTED BY LAW, PURE'S AGGREGATE CUMULATIVE LIABILITY UNDER OR RELATING TO THIS AGREEMENT (I) FOR PURCHASED PRODUCTS, SHALL NOT EXCEED THE AMOUNT PAID BY CUSTOMER FOR THE PRODUCT THAT GAVE RISE TO SUCH CLAIM; AND (II) FOR EVALUATION PRODUCTS AND PRE-RELEASE SOFTWARE, SHALL NOT EXCEED THE AMOUNT OF $5,000.00 US DOLLARS. CUSTOMER AGREES THAT PURE'S SUPPLIERS WILL HAVE NO LIABILITY TO CUSTOMER OF ANY KIND UNDER OR AS A RESULT OF THIS AGREEMENT.

12. CONFIDENTIAL INFORMATION.
"Confidential Information" means any nonpublic information of a party (the "Disclosing Party"), whether disclosed orally or in written or digital media, that is identified as "confidential" or with a similar legend at the time of such disclosure or that the receiving party (the "Receiving Party") knows or should have known is the confidential or proprietary information of the Disclosing Party. Information will not constitute the other party's Confidential Information if it (i) is already known by the Receiving Party without obligation of confidentiality; (ii) is independently developed by the Receiving Party without access to the Disclosing Party's Confidential Information; (iii) is publicly known without breach of this Agreement; or (iv) is lawfully received from a third party without obligation of confidentiality. The Receiving Party shall not use or disclose any Confidential Information except as expressly authorized by this Agreement and shall protect the Disclosing Party's Confidential Information using the same degree of care that it uses with respect to its own confidential information, but in no event with safeguards less than a reasonably prudent business would exercise under similar circumstances. The Receiving Party shall take prompt and appropriate action to prevent unauthorized use or disclosure of the Disclosing Party's Confidential Information. If any Confidential Information must be disclosed to any third party by reason of legal, accounting or regulatory requirements, the Receiving Party shall promptly notify the Disclosing Party of the order or request and permit the Disclosing Party (at its own expense) to seek an appropriate protective order.

13. PRODUCT DIAGNOSTIC REPORTING.
Customer acknowledges that the Product will store certain diagnostic information about the routine operations of the Product (including, without limitation, its performance, data reduction ratios, configuration data, and any hardware faults) and will periodically transmit this diagnostic information to Pure. For clarity, there is no actual user data of Customer that is transmitted or provided to Pure. In addition, if Pure requests more detailed diagnostics, Customer will reasonably cooperate with Pure to enable the insertion of additional hard-drives into the Product so as to capture and transmit to Pure the metadata configuration of the Product's array. Again, for clarity, no actual user data of Customer is transmitted or provided to Pure in this process. Customer will control Pure's physical access to the Product and no interruption of service is required to gather such detailed diagnostics. Customer agrees that Pure has a perpetual, irrevocable, worldwide, sublicenseable, and royalty-free right to use this diagnostic information in any manner and that Customer will not interfere with the collection or transmission of such information to Pure.

14. GENERAL PROVISIONS.

14.1. Governing Law and Venue.

This Agreement will be governed and interpreted by and under the laws of the State of California, without giving effect to any conflicts of laws principles that require the application of the law of a different state. Each party hereby expressly consents to the personal jurisdiction and venue in the state and federal courts in Santa Clara County, California for any lawsuit filed there arising from or related to this Agreement. The United Nations Convention on Contracts for the International Sale of Goods shall not apply to this Agreement.

14.2. Notices.

All notices or other communications required under Sections 9, 10, 11 and 13 of this Agreement shall be in writing and shall be delivered by personal delivery, certified overnight delivery such as Federal Express, or registered mail (return receipt requested) and shall be deemed given upon personal delivery or upon confirmation of receipt. All other notices and communications may be made by email or other applicable method.

14.3. Severability; Waiver.

If any provision of this Agreement is, for any reason, held to be invalid or unenforceable, the other provisions of this Agreement will remain enforceable and the invalid or unenforceable provision will be deemed modified so that it is valid and enforceable to the maximum extent permitted by law. Any waiver or failure to enforce any provision of this Agreement on one occasion will not be deemed a waiver of any other provision or of such provision on any other occasion.

14.4. Export.

The Product, its Software and related technology are subject to U.S. export control laws and may be subject to export or import regulations in other countries. Customer agrees not to export, reexport, or transfer, directly or indirectly, any U.S. technical data acquired from Pure, or any products incorporating such data, in violation of the United States export laws or regulations.

14.5. No Assignment.

This Agreement, and Customer's rights and obligations herein, may not be assigned by Customer without Pure's prior written consent, and any attempted assignment in violation of the foregoing will be null and void.

14.6. U.S. Government End Users.

The Product, its software and related documentation, are "commercial items" as defined in 48 CFR 2.101 and their use is subject to the policies set forth in 48 CFR 12.211, 48 CFR 12.212 and 48 CFR 227.7202, as applicable.

14.7. Force Majeure.

Pure shall not be liable hereunder by reason of any failure or delay in the performance of its obligations under this Agreement on account of strikes, shortages, riots, insurrection, fires, flood, storm, explosions, acts of God, war, governmental action, labor conditions, earthquakes, material shortages or any other cause that is beyond the reasonable control of Pure.

14.8. Entire Agreement; Modification.

This Agreement constitutes the entire agreement between the Customer and Pure and supersedes in its entirety any and all oral or written agreements previously existing between Customer and Pure with respect to the subject matter hereof including, without limitation, any Evaluation Agreement providing for evaluation of the Product. This Agreement may only be amended in a writing signed by duly authorized representatives of the parties.

Exhibit A. Maintenance and Support Terms and Conditions

1. SCOPE OF COVERAGE.

For Purchased Products, during the term for which Customer has ordered and paid for Maintenance and Support (as defined herein), Pure or its Support Partners will provide Product maintenance ("Maintenance") and technical support ("Support") services under this Exhibit for (a) generally available releases only (the services under this Exhibit do not apply to early access, pre-release or beta releases of the Product or its Software or any Products provided solely for evaluation purposes); and (b) those Major Releases (as defined herein) of the Software specifically identified on Pure's website as still under Support. As used herein, an "Error" means any reproducible defect in the Product that causes the Product to not perform in all material respects in accordance with the Product documentation provided on-line by Pure.

2. SERVICES.

Subject to the terms of this Maintenance and Support Exhibit, and so long as Customer is in compliance with the terms and conditions of the Agreement, Pure or its Support Partners will provide the following services:

2.1 Hardware Maintenance.

Pure or its Support Partners will use commercially reasonable efforts to attempt to correct any Errors in the Product's hardware. Pure's Hardware Maintenance services for the Products may include on-site installation of field replaceable units (FRUs) by Pure certified maintenance personnel and/or providing Customer with customer replaceable units (CRUs) for Customer's own installation.

2.2 Software Support.

Pure or its Support Partners will also use commercially reasonable efforts to attempt to correct any Errors in the Product's software. Pure's Software Support services may include bug fixes, emergency patches, workarounds, and new software releases.

(a) Software Releases.

In order to receive Support for a given Major Release, the Product must be updated to the most recent Minor Release applicable to that Major Release. As used herein, a "Major Release" is any version of the Software that is generally denoted by a change in the version number to the left or right of the first decimal (i.e., #.#.1). A "Minor Release" is any version of the Software generally denoted by a change in the version number to the right of the second decimal (i.e., 2.3.#). Major Releases and Minor Releases are collectively referred to as "Releases." Support is only provided by Pure for the current and immediately preceding Major Release of the Software, in each case with its most recent Minor Release, unless Pure elects to provide support for additional Releases as noted on Pure's website.

(b) Access to Releases.

Pure will provide Releases to the Software, as such Releases are made available by Pure for general commercial release and then only to the extent compatible with the Customer's Product hardware. Any Releases of the Software provided shall be subject to the terms and conditions set forth in the Agreement.

3. Technical Support and Service Levels.

Pure or its Support Partners will provide web portal, email and telephone support to up to five designated points of contact of Customer, and will acknowledge and respond to Errors in the Product, in each case in accordance with the Severity Levels defined in the Pure Storage Customer Support Guide, available on-line at: http://www.purestorage.com/agreements/Pure_support_guide.pdf. Additional Customer points of contact for Support may be approved by Pure.

4. No Warranty.

Any deliverables and services provided by Pure pursuant to this Maintenance and Support Exhibit are provided "AS IS" and without any additional warranty, express or implied. Notwithstanding the foregoing, if a Product or Product component is replaced under Maintenance and the original warranty for such Product has not yet expired, such replacement Product shall continue to be warranted for the remaining portion of the original Product warranty pursuant to Section 8.1 of the Agreement.

5. Service Limitations.

The Maintenance and Support Fee does not include, nor will Pure be obligated to provide, services required as a result of: (a) any modification, reconfiguration or maintenance of the Product not performed in accordance with Pure's instructions; (b) any use of the Product in a configuration or on a system that does not meet Pure's minimum standards for such Product, as set forth in the applicable documentation; or (c) any errors or defects in third party software or hardware.

Author
Pure Storage Inc. <[email protected]>

Name
puremonitor — displays I/O performance information for specified volumes

Synopsis
puremonitor [--csv] [--interval SECONDS] [--nrep REPEAT-COUNT] [--size] [--total] [--vollist VOL-LIST]

Options
--csv

Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

--interval SECONDS
Number of seconds between display updates (defaults to 5 seconds).

--nrep REPEAT-COUNT
Number of display updates to produce. If this and the --interval option are both omitted, one display is produced. If this option is omitted, and the --interval option is specified, Purity produces displays at the specified interval until the command is terminated by typing CTRL-C.

--size
Displays the average I/O sizes per operation (read, write, and total).

--total
Follows output lines with a single line containing column totals in columns where they are meaningful. Ignored when --nvp is specified (where permitted).

--vollist [VOL-LIST]
Comma-separated list of volumes for which to display data. If this option is omitted, data for all volumes is displayed.

Description
Note: this command is deprecated and will be removed in the future. Please use purearray monitor and purevol monitor instead.

Produces a display of array I/O performance data containing sections for the specified volumes. The display can be repeated with updated data at specified intervals by supplying values for the --nrep and/or --interval options.

For a given volume, each interval of performance information will be listed as a row that contains various columns, such as bytes per second (read/write), operations per second (read/write), and duration per operation (read/write).
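As a replacement, the purevol monitor command (see purevol-list(1)) accepts similar interval and repetition options. For example (volume names hypothetical):

purevol monitor --interval 10 --nrep 10 VOL1 VOL2

produces ten displays of performance data for VOL1 and VOL2 at 10-second intervals, comparable to a puremonitor invocation with the same options.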

Examples

Example 1


puremonitor --vollist --nrep 10 --interval 10 --total

Produces a display containing a line of performance data for each volume and a line that aggregates the data for all volumes. The display is updated every 10 seconds for 100 seconds. Data in the first display represents average performance since array boot time; data in subsequent displays represents average performance over the most recent 10-second interval.

Example 2

puremonitor --vollist VOL1,VOL2 --nrep 10

Produces a display containing lines of performance data for VOL1 and VOL2. The display is updated every 5 seconds (the default interval) for 50 seconds (10 repetitions).

Example 3

puremonitor --vollist `purehgroup listobj --type vol --csv HG1`

Produces a single display containing a line of performance data for each volume with a shared connection to host group HG1. The inner purehgroup listobj command produces a comma-separated list of the volumes with shared connections to HG1, which becomes the value for the --vollist option.

See Also
purearray(1), purehost(1), pureport(1), purevol(1)

Author
Pure Storage Inc. <[email protected]>


Name
purenetwork, purenetwork-disable, purenetwork-enable, purenetwork-list, purenetwork-setattr — manages the Ethernet interfaces used to connect a FlashArray system to an administrative network
purenetwork-ping, purenetwork-trace — pings remote destinations and traces routes on the administrative network

Synopsis
purenetwork disable ETHERNET-INTERFACE

purenetwork enable ETHERNET-INTERFACE

purenetwork list [--cli] [ --csv | --nvp ] [--notitle]

purenetwork ping [--count PING-COUNT] DESTINATION

purenetwork setattr [--address CIDR-IP-ADDRESS] [--address IP-ADDRESS --netmask NETMASK] [--gateway GATEWAY-IP-ADDRESS] [--mtu MTU-SIZE] ETHERNET-INTERFACE

purenetwork trace DESTINATION

Arguments
DESTINATION

IP address or full hostname of a ping target or of a remote computer to which the network route is to be determined.

ETHERNET-INTERFACE
Name of the Ethernet interface to be operated upon in the form CTx.ETHy, where x denotes the controller (0 or 1) and y denotes the interface (0 or 1). In CLI commands, "CT" and "ETH" are case-insensitive. For example, CT0.ETH0, Ct0.Eth0, CT0.eth0, ct0.eth0, and so forth, all refer to controller CT0's right-hand ETH0 port.

Options
-h | --help

Can be used with any command or subcommand to display a brief syntax description.

--address {CIDR-IP-ADDRESS | IP-ADDRESS}
IPv4 address to be associated with the specified interface. IP addresses may be specified in CIDR format (ddd.ddd.ddd.ddd/dd), or alternatively, the --netmask option may be used along with the --address option to specify the netmask in ddd.ddd.ddd.ddd format.

--count PING-COUNT
Number of ICMP ping messages to send in sequence.

--gateway GATEWAY-IP-ADDRESS
IPv4 address of the gateway through which the specified interface is to communicate with the network (in ddd.ddd.ddd.ddd format).

--mtu MTU-SIZE
Maximum message transfer unit (packet) size for the interface in bytes. Valid values are integers between 1280 and 9216 (inclusive). Defaults to 1500 if not specified.


--netmask NETMASK
Netmask in ddd.ddd.ddd.ddd format. Used in conjunction with --address when the IP address is not specified in CIDR format.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--notitle
Suppresses generation of an initial line of output containing column titles.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

Description
Manages an array's Ethernet interfaces and administrative network connection attributes.

Each FlashArray controller is equipped with two 1-gigabit Ethernet interfaces that connect to a data center network for array administration. The interfaces are called CTx.ETH0 and CTx.ETH1, where x denotes the array controller number (0 for FA-310 models and 0 or 1 for FA-320 models). Physical interface ports are located on controller rear bulkheads, and are labeled ETH1 (left) and ETH0 (right).

The purenetwork enable and purenetwork disable subcommands enable and disable individual Ethernet interfaces. Care should be taken not to disable the interface through which the administrative session is being conducted.

The purenetwork list subcommand displays the visible attributes of specified Ethernet interfaces (or of all interfaces if none is specified explicitly).

The purenetwork ping subcommand is used to determine whether a remote computer can be accessed by the array (provided that the remote computer is ICMP-enabled). The ping target can be specified either by IP address, or if DNS service is available and has been configured for the array, by hostname.

The IP address, netmask, and gateway for each Ethernet interface can be specified individually by the purenetwork setattr command. Ethernet interface IP addresses and the corresponding netmasks are set explicitly (i.e., DHCP is not supported). Netmasks can be specified either in CIDR format, e.g.,

purenetwork setattr --address 192.168.0.25/24 CT0.ETH0

or by specifying the --netmask option, e.g.,

purenetwork setattr --address 192.168.0.25 --netmask 255.255.255.0 CT0.ETH0

Gateways are specified by IP address. To remove a port's gateway specification, specify either a null value for the --gateway option or a zero IP address. For example, either


purenetwork setattr --gateway '' CT0.ETH0

or

purenetwork setattr --gateway 0.0.0.0 CT0.ETH0

unassigns whatever gateway IP address had been associated with CT0.ETH0.
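The MTU can be adjusted with the same subcommand. For example, a sketch assuming a network that supports jumbo frames:

purenetwork setattr --mtu 9000 ct0.eth0

sets the maximum packet size for CT0.ETH0 to 9000 bytes (valid values are 1280-9216).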

The purenetwork trace subcommand traces and displays the route to a remote computer identified by IP address or, if a DNS service is available and configured for the array, by hostname.

Examples

Example 1

purenetwork enable ct0.eth0

Enables controller CT0's administrative network interface ETH0 to communicate with the administrative network.

Example 2

purenetwork list

Lists the status (enabled or disabled), IP address, netmask, gateway IP address, and MAC address for each of the array's administrative network interfaces. For multi-controller arrays, all controllers' interface information is listed.

Example 3

purenetwork setattr ct0.eth0 --address 192.168.0.24 --netmask 255.255.255.0
purenetwork setattr ct0.eth0 --address 192.168.0.24/24

Assigns the IP address 192.168.0.24 to administrative Ethernet interface ct0.eth0. Both commands are equivalent.

Example 4

purenetwork ping --count 100 myhost.mydomain.com

Sends 100 ICMP ping messages to host myhost.mydomain.com and displays the response received for each one.


Example 5

purenetwork trace 192.168.0.1

Displays the route taken by a ping request to network address 192.168.0.1.

See Also
purearray(1)

Author
Pure Storage Inc. <[email protected]>


Name
pureport, pureport-disable, pureport-enable, pureport-list — manages an array's host connection ports

Synopsis
pureport disable

pureport enable

pureport list [--initiator] [ --csv | --nvp ] [--notitle]

Options
-h | --help

Can be used with any command or subcommand to display a brief syntax description.

--initiator
Displays host worldwide names (both those discovered by Purity and those assigned by administrators) and the array ports (targets) on which they are eligible to communicate.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--notitle
Suppresses generation of an initial line of output containing column titles.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

Description
The pureport enable and pureport disable subcommands enable and disable communication with hosts via an array's ports.

The pureport list subcommand with no option specified displays the worldwide names assigned to an array's ports. Array port worldwide names are assigned by Purity when controllers are manufactured, and cannot be changed. When the --initiator option is specified, the display includes all host worldwide names known to the array, either through discovery or through administrator assignment, and the array ports on which they are eligible to communicate.

The pureport list output does not include information about administrative network Ethernet ports; see purenetwork(1).

Examples

Example 1

pureport disable


Disables an array's ports, blocking all communication with hosts.

Example 2

pureport list --initiator --notitle

Displays initiator worldwide names known to the array, both discovered and assigned by administrators during host object creation or by the purehost setattr --wwn subcommand.

See Also
purearray(1), purehost(1), purenetwork(1)

Author
Pure Storage Inc. <[email protected]>


Name
puresnap — snapshot concepts and the purevol subcommands that create and manage snapshots of volumes.

Synopsis

purevol copy SOURCE TARGETVOL

purevol destroy TARGET...

purevol eradicate TARGET...

purevol list [ --cli | --csv | --nvp ] [--notitle] [--pending] [--snap] [--total] [TARGET...]

purevol listobj [--csv] [--pending] [ --type { host | snap | vol } ] [TARGET...]

purevol recover TARGET...

purevol rename OLD-NAME NEW-NAME

purevol snap [--suffix SUFFIX] VOL...

Arguments

SOURCE
Name of a volume or snapshot whose data is copied to TARGETVOL by a purevol copy command.

TARGET
Name of a volume or snapshot that is acted upon by any of the purevol destroy, eradicate, list, listobj, recover, or setattr subcommands.

TARGETVOL
Name of the volume that receives the data copied from SOURCE by the purevol copy command.

VOL
Name of a volume whose contents are frozen as a snapshot by the purevol snap subcommand.

Object Names

Purity object names use the Internet domain name (RFC 1035) character set plus the underscore character (_). The valid characters are letters (A-Z and a-z), digits (0-9), and the hyphen (-) and underscore (_) characters. The first and last characters of a name must be alphanumeric, and a name may not be entirely numeric.

Array names may be 1-56 characters in length; other objects that can be named (host groups, hosts, SNMP managers, and volumes) may be 1-63 characters in length. (Array name length is shorter so that the names of individual controllers, which are assigned by Purity based on the array name, do not exceed the maximum allowable by DNS.)

Names are case-insensitive on input. For example, vol1, Vol1, VOL1, etc. all represent the same volume. Purity displays names in the case in which they were specified in the create or rename subcommand that created the objects, regardless of the case in which they are entered in management commands.
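For illustration (hypothetical names): db_logs-01 and Vol_2013 are valid object names; -staging is not (the first character must be alphanumeric), and 2013 is not (a name may not be entirely numeric).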


Options
-h | --help

Can be used with any command or subcommand to display a brief syntax description.

--snap (purevol list command)
Causes the command to display information about snapshots rather than volumes.

--suffix
Specifies a name suffix for the snapshots created. Specifying this option causes snapshots to have names of the form VOL.SUFFIX rather than the default VOL.snap.NNN form. The names of all snapshots created by a single command that specifies this option have the same suffix.

--type (purevol listobj command)
Type of related object for which purevol listobj produces a list of names, usually for insertion into other commands.

Options that control display format in list and listobj subcommands:

--cli
Displays specified information in the form of CLI commands that could be issued to assign the current values to the specified attributes. Not meaningful when combined with non-settable attributes.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--notitle
Suppresses generation of an initial line of output containing column titles.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

--total
Follows output lines with a single line containing column totals in columns where they are meaningful. Ignored when --nvp is specified (where permitted).

Description

Snapshot Objects

Purity can create snapshots of volumes. A snapshot is an immutable image of a volume's data at an instant in time. An array can support 5,000 concurrent snapshots.

Snapshots cannot be connected to host groups or hosts for writing and reading, but the purevol copy command can copy the contents of a snapshot to a new or existing volume that is host-accessible. Snapshots can be destroyed, eradicated, recovered, and renamed by the respective purevol commands.

Snapshot Naming

The purevol snap command creates snapshots of the volumes specified as arguments. By default, Purity names each snapshot it creates in the form VOL.snap.NNN where:


• VOL is the volume whose contents the snapshot contains.

• snap indicates a snapshot created by the purevol snap subcommand and named by Purity.

• NNN is a unique monotonically increasing decimal number.

Administrators can select an alternate form for snapshot names at creation time by specifying a value for the --suffix option in the purevol snap subcommand. The suffix must conform to volume naming syntax (see the purevol-create man page), although the resulting snapshot names do not, because they contain period characters. When this option is specified, snapshots are named in the form VOL1.SUFFIX, VOL2.SUFFIX, etc., where:

• VOL1, etc., are the names of the snapped volumes.

• SUFFIX is the value of the --suffix option.

Snapshots can be renamed by the purevol rename subcommand. Renaming a snapshot in this way replaces its entire name, not just the suffix. Specified names must conform to volume naming rules for character set and length.

Because the snapshot names Purity creates contain period characters, they do not conform to volume naming rules, so snapshots named by Purity cannot be mistaken for volumes in CLI commands. For snapshots renamed by administrators, it is the administrator's responsibility to avoid potential confusion with volumes.

Creating Snapshots

The purevol snap command creates snapshots of one or more volumes. Each snapshot has a notional size equal to that of the volume from which it is created. When a snapshot is copied to a volume, the size of the volume becomes the snapshot's notional size.

All snapshots created by a single purevol snap subcommand are atomic; they represent the contents of the specified volumes as of a single instant in time.

Creating Volumes by Copying

The purevol copy subcommand copies the contents of a snapshot or volume (SOURCE) to volume TARGETVOL. If TARGETVOL does not exist, the command creates it. The host-visible size of TARGETVOL becomes that of SOURCE, and its contents become those of the volume or snapshot from which it is created.

Targets of a purevol copy subcommand are always host-accessible volumes. Any number of identical volumes can be created from a single snapshot by specifying multiple arguments in a purevol copy command. If an argument of the purevol copy command represents an existing volume, the volume's size is changed to that of SOURCE, and its contents are overwritten with those of SOURCE. Overwriting is immediate and irrevocable. The previous contents of an overwritten volume cannot be retrieved.

Volumes created or overwritten by the purevol copy command have a Source attribute:

Volumes copied from Source Volumes
The Source of the created volume is the volume from which it was copied.

Volumes copied from Snapshots
The Source of the created volume is the source of the snapshot.

Volumes created by the purevol create command have a null value for the Source attribute (reported in CLI output as "-").


Destroying Snapshots

The purevol destroy, eradicate, and recover subcommands accept snapshot as well as volume arguments. As with volumes, destroying a snapshot starts a 24-hour eradication pending interval during which the destruction may be nullified by executing the purevol recover subcommand. A snapshot that has been destroyed can be eradicated at any point during the eradication pending interval by executing the purevol eradicate subcommand. This command terminates the eradication pending period and immediately begins reclamation of physical storage occupied by data "charged" to the snapshot. Once eradication has begun, a snapshot can no longer be recovered.

The Source of a snapshot is ultimately a volume. When a volume is destroyed (and eradicated), all of its snapshots are implicitly destroyed (and eradicated) along with it. Similarly, if a destroyed volume is recovered, all snapshots that were implicitly destroyed along with it are recovered as well. Snapshots of the volume that were destroyed explicitly by separate commands are not recovered by recovering the volume, however.

Volume and Snapshot Space Accounting

The purevol list --space --snap command displays the physical storage space occupied by specified snapshots or snapshots of specified volumes. FlashArray snapshots are inherently space saving in that a snapshot only consumes physical storage when data on its source volume is overwritten. Similarly, a volume created from a snapshot only consumes space when its contents change. Output of the purevol list --snap command displays the amount of physical storage “charged” to specified volumes and their snapshots.

For example, if two snapshots of a volume, s1 and s2, are taken at times t1 and t2 (t1 < t2), space consumption is accounted as follows:

Volume data written before t1
If no snapshots exist, the volume is “charged” for space occupied by host-written data.

Volume data written after t1 but before t2
Snapshot s1 is “charged” for space occupied by the pre-update contents of the updated volume data.

Volume data written after t2
Snapshot s2 is “charged” for space occupied by the pre-update contents of the updated volume data (s1 shares the pre-update contents with s2).

If s1 is destroyed but s2 remains, storage charged to s1 is reclaimed because there is no longer a need to preserve pre-update content for updates made prior to t2. If s2 is destroyed, however, storage charged to it must be preserved (it represents pre-update content for updates made after t1, which must be preserved as long as s1 exists).

To generalize, if a volume has two or more snapshots taken at different times:

Destroying the oldest snapshot
Space charged to the destroyed snapshot is reclaimed (after the 24-hour eradication delay period has elapsed or after a purevol eradicate command is executed).

Destroying a snapshot other than the oldest
Space charged to the destroyed snapshot is charged to the next older snapshot unless it is already reflected in that snapshot (because the same volume block address was written both after the next older snapshot and after the snapshot being destroyed), in which case it is reclaimed.
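The following sketch illustrates this accounting with hypothetical volume and suffix names:

purevol snap --suffix T1 VolA

... hosts overwrite some of VolA's data...

purevol snap --suffix T2 VolA

... hosts overwrite more of VolA's data...

purevol list --space --snap VolA
purevol destroy VolA.T1

Snapshot VolA.T1 (the s1 of the discussion above) is charged for pre-update content overwritten between the two snapshots; VolA.T2 is charged for pre-update content overwritten after it was taken. Because VolA.T1 is the oldest snapshot, destroying it allows the space charged to it to be reclaimed once the eradication pending period lapses or a purevol eradicate command is executed.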

Displaying and Listing Snapshot Information

When the --snap option is specified, the purevol list subcommand displays information about snapshots of specified volumes. Similarly, the purevol listobj subcommand, whose main purpose is to create lists of objects for input to other commands, can output a list of snapshots whose sources are one or more specified volumes.

Snapshots in the purevol list Command

Specifying the --snap option in the purevol list command displays information about specified snapshots rather than volumes. The --snap option may be specified with the default (no other content option), or together with the --pending and --space options. It is not valid with the --all, --connect, --private, or --shared options.

As with volumes, if a purevol list --snap subcommand includes no arguments, the information specified by other options is displayed for all snapshots. When volumes are specified as arguments in conjunction with the --snap option, information for the specified volumes' snapshots is displayed. When snapshots are specified as arguments, information for the specified snapshots is displayed. Volume and snapshot arguments may be specified in the same purevol list --snap subcommand.

A snapshot may not be specified as a purevol list argument unless the --snap option is specified.
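For example (names hypothetical):

purevol list --snap Vol1 Vol2.MONDAY

displays information for all snapshots of Vol1 and for the single snapshot Vol2.MONDAY.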

Snapshots in the purevol listobj Command

Specifying --type snap causes the purevol listobj command to produce a whitespace- or comma-separated list of the snapshots of one or more specified volumes (or of all volumes if no argument is specified).

Exceptions
See purevol(1), purevol-list(1), and purevol-rename(1).

Examples

Example 1: Purity-named and Administrator-named Snapshots

purevol snap Vol1
purevol snap --suffix MONDAY1 Vol1
purevol snap --suffix MONDAY2 Vol1 Vol2

Create snapshot Vol1.snap.1. (Purity chooses the unique number.)
Create snapshot Vol1.MONDAY1, representing Vol1 at time t1.
Create two snapshots:

• Vol1.MONDAY2, representing Vol1 at time t2.

• Vol2.MONDAY2, representing Vol2 at time t2.

Example 2: Using a Snapshot to Recover from Data Corruption

purevol snap --suffix SNAPSHOT Vol1

... applications using Vol1 encounter data corruption...

purevol copy Vol1.SNAPSHOT GoodVol1


... hosts connect to GoodVol1, validate data, and resume...

purevol destroy Vol1.SNAPSHOT
purevol eradicate Vol1.SNAPSHOT

In this example:

Create snapshot Vol1.SNAPSHOT at a time when data is known to be correct. Some time later, applications using Vol1 encounter data corruption.
Create volume GoodVol1 from snapshot Vol1.SNAPSHOT. The volume contains a pre-corruption data image of Vol1. Hosts recover lost information, for example by replaying logs, and continue processing data on GoodVol1.
Destroy snapshot Vol1.SNAPSHOT when it is no longer required.
Start the process of reclaiming physical storage charged to Vol1.SNAPSHOT.

Example 3: Cloning Multiple Volumes from a Snapshot

purevol snap --suffix TUESDAY Vol1 Vol2

... main application continues using Vol1 and Vol2...

purevol copy Vol1.TUESDAY Vol1Backup
purevol copy Vol2.TUESDAY Vol2Backup
purevol connect --host HostBackup Vol1Backup Vol2Backup

... backup starts ...

purevol copy Vol1.TUESDAY Vol1Analytics
purevol connect --host HostAnalytics Vol1Analytics

... analytics starts, using Vol1 image only ...

... backup completes ...

purevol disconnect --host HostBackup Vol1Backup Vol2Backup
purevol destroy Vol1Backup Vol2Backup

... analytics completes ...

purevol disconnect --host HostAnalytics Vol1Analytics
purevol destroy Vol1Analytics

purevol eradicate Vol1Analytics Vol1Backup Vol2Backup

In this example:

Create snapshots Vol1.TUESDAY and Vol2.TUESDAY, representing the data on the respective volumes at a single instant in time. The main application continues to operate using Vol1 and Vol2.
Create accessible volumes Vol1Backup and Vol2Backup.
Connect the two backup volumes to HostBackup, making it possible to back up the data images they contain.
(Assuming that the analytics application only requires data from Vol1.) Create volume Vol1Analytics from the same snapshot used to create Vol1Backup.
Connect Vol1Analytics to HostAnalytics, making it possible to analyze the data image from Vol1.
When the backup application completes, disconnect the two backup volumes from HostBackup and destroy them.
When the analytics application completes, disconnect Vol1Analytics from HostAnalytics and destroy the volume.
Eradicate the three volumes used by the ancillary backup and analytics applications, starting the process of reclaiming the space charged to them.

Example 4: Destroying and Recovering Snapshots and Volumes

purevol snap --suffix FirstSnap Vol1
purevol snap --suffix SecondSnap Vol1
purevol snap --suffix ThirdSnap Vol1
purevol snap --suffix FourthSnap Vol1
purevol destroy Vol1.FirstSnap
purevol eradicate Vol1.FirstSnap
purevol destroy Vol1.SecondSnap
purevol destroy Vol1
purevol recover Vol1
purevol recover Vol1.SecondSnap

In this example:

Create snapshots of Vol1 at four different points in time.
Destroy and eradicate Vol1.FirstSnap. The snapshot is no longer recoverable.
Destroy, but do not eradicate, Vol1.SecondSnap. For 24 hours after the command executes, Vol1.SecondSnap can be recovered.
Destroy, but do not eradicate, Vol1, and implicitly, its two remaining snapshots, Vol1.ThirdSnap and Vol1.FourthSnap.
(Executed within 24 hours of the destroy command.) Recover Vol1 and the snapshots that were implicitly destroyed along with it. This recovers Vol1.ThirdSnap and Vol1.FourthSnap, but not Vol1.SecondSnap (because it was destroyed separately) or Vol1.FirstSnap (because it was destroyed and eradicated separately).
(Executed within 24 hours of the destroy command.) Recover snapshot Vol1.SecondSnap. Because Vol1.FirstSnap was eradicated, it is not possible to recover it with a similar command.

Example 5: Renaming Snapshots

purevol snap --suffix MONDAY Vol1 XXX
purevol snap --suffix TUESDAY Vol1 XXX

... etc., for remaining days of the week...

purevol rename Vol1.SUNDAY ArchiveWEEK01
purevol destroy Vol1.MONDAY Vol1.TUESDAY ...
purevol eradicate Vol1.MONDAY Vol1.TUESDAY ...


In this example:

Create snapshot Vol1.MONDAY, presumably representing a stable application point, such as end of processing day.
Create corresponding snapshots for the remaining days of the week.
Rename snapshot Vol1.SUNDAY to ArchiveWEEK01.
Destroy snapshots Vol1.MONDAY through Vol1.SATURDAY.
Eradicate destroyed snapshots Vol1.MONDAY through Vol1.SATURDAY, starting reclamation of the space charged to them, and, more importantly for the workflow, making the names Vol1.MONDAY, Vol1.TUESDAY, etc., available for reuse in the next weekly cycle.

The procedure can be adapted, for example, to rename every fourth ArchiveWEEKxx as ArchiveMONTHyy.

Example 6: The purevol listobj Command

purevol destroy $(purevol listobj --type snap Vol1)

In this example:

• The purevol listobj (“inner”) command produces a whitespace-separated list of the snapshots of Vol1.

• The purevol destroy (“outer”) command destroys the snapshots enumerated in the inner command.

Example 7: Refreshing Volume Contents from Snapshots and Volumes

purevol snap --suffix SNAPSHOT Vol1

... applications continue to access Vol1...

purevol copy Vol1.SNAPSHOT Vol1IMAGE

... update data format on Vol1IMAGE...

purevol disconnect --host Host1 Vol1
purevol copy Vol1IMAGE Vol1
purevol connect --host Host1 Vol1

... restart applications using updated data format...

... detect problem with updated format...

purevol disconnect --host Host1 Vol1
purevol copy Vol1.SNAPSHOT Vol1
purevol connect --host Host1 Vol1

... resume applications using older data format...

Create snapshot Vol1.SNAPSHOT of volume Vol1.


Create volume Vol1IMAGE. Use it to update the data format for applications that use Vol1.
Momentarily disconnect Host1 from its data, and overwrite that data with data in the updated format from volume Vol1IMAGE. Applications detect problems with the updated data format.
Momentarily disconnect Host1 from its volume, revert the data on Vol1 to the previous format by copying from snapshot Vol1.SNAPSHOT, and reconnect Vol1 to Host1 so that applications can resume processing using older format data.

See Also
purevol(1), purevol-list(1), purevol-rename(1)

Author
Pure Storage Inc. <[email protected]>


Name
puresnmp, puresnmp-create, puresnmp-delete, puresnmp-list, puresnmp-rename, puresnmp-setattr — manages connections to Simple Network Management Protocol (SNMP) managers

Synopsis
puresnmp create [--authpassphrase AUTH-PASSPHRASE] [--authprotocol AUTH-PROTOCOL] [--community COMMUNITY] --host HOST [--privacypassphrase PRIVACY-PASSPHRASE] [--privacyprotocol PRIVACY-PROTOCOL] [--user USER] [--version {v2c | v3}] MANAGER

puresnmp delete MANAGER...

puresnmp list [--cli] [ --csv | --nvp ] [--notitle] [--engineid] [MANAGER...]

puresnmp rename OLD-NAME NEW-NAME

puresnmp setattr [--authpassphrase AUTH-PASSPHRASE] [--authprotocol AUTH-PROTOCOL] [--community COMMUNITY] [--host HOST] [--privacypassphrase PRIVACY-PASSPHRASE] [--privacyprotocol PRIVACY-PROTOCOL] [--version {v2c | v3}] MANAGER

Arguments
MANAGER

Name used by Purity to identify a SNMP Network Management System ("Manager").

Options
-h | --help

Can be used with any command or subcommand to display a brief syntax description.

--authpassphrase AUTH-PASSPHRASE
SNMP v3 only. Passphrase used by Purity to authenticate the array with the specified managers. Required if the authprotocol attribute value is not null; ignored otherwise. 1-32 characters from the set {[A-Z], [a-z], [0-9], _ (underscore), and - (hyphen)}.

--authprotocol AUTH-PROTOCOL
SNMP v3 only. Hash algorithm used to validate the authentication passphrase. Valid values are MD5, SHA, or null (indicated by "" or '').

--community COMMUNITY
SNMP v2c only. Manager community ID under which Purity is to communicate with the specified managers. Required if the value of --version is v2c; ignored otherwise. 1-32 characters from the set {[A-Z], [a-z], [0-9], _ (underscore), and - (hyphen)}.

--engineid
SNMP v3 only. Displays the SNMP v3 engine ID generated by Purity for the array. (Some managers may require the engine ID as a configuration input.)

--host HOST
DNS hostname or IP address of a computer that hosts an SNMP manager to which Purity is to send trap messages when it generates alerts.

--privacypassphrase PRIVACY-PASSPHRASE
SNMP v3 only. Passphrase used to encrypt SNMP messages. 8-63 non-space ASCII characters.


--privacyprotocol PRIVACY-PROTOCOL
SNMP v3 only. Encryption protocol for SNMP messages. Valid values are AES, DES, or null (indicated by "" or '').

--user USER
SNMP v3 only. User ID recognized by the specified SNMP managers which Purity is to use in communications with them. 1-32 characters from the set {[A-Z], [a-z], [0-9], _ (underscore), and - (hyphen)}.

--version
Version of the SNMP protocol to be used by Purity in communications with the specified manager(s). Valid values are v2c (the default) and v3 (case sensitive).

--cli
Displays specified information in the form of CLI commands that could be issued to assign the current values to the specified attributes. Not meaningful when combined with non-settable attributes.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

--notitle
Suppresses generation of an initial line of output containing column titles.

Description
FlashArrays can integrate with SNMP-based data center management frameworks in two ways: through the use of SNMP traps or via a built-in SNMP agent.

SNMP Traps

Purity can be configured to generate and transmit SNMP trap messages to designated SNMP managers running in hosts. The software supports SNMP versions v2c and v3. Each trap generated corresponds to an alert message (see purealert(1)).

Purity generates log records called alerts when significant events occur within an array. These can be transmitted to designated electronic mail addresses and/or sent to designated SNMP managers as trap messages.

The Built-in SNMP Agent

A built-in SNMP agent in Purity responds to SNMP information retrieval requests (Get, Get Next, Get Bulk) made by v1 and v2 SNMP managers in the same SNMP community as the FlashArray. The agent appears as localhost in puresnmp list output. It cannot be deleted or renamed.

SNMP managers communicate with the agent via the standard UDP port 161, which cannot be changed. The agent responds to GET-type requests, returning values for the purePerformance information block, or individual variables within it, depending on the type of request issued. The variables supported are:


pureArrayReadBandwidth     Current array-to-host data transfer rate
pureArrayWriteBandwidth    Current host-to-array data transfer rate
pureArrayReadIOPS          Current read request execution rate
pureArrayWriteIOPS         Current write request execution rate
pureArrayReadLatency       Current average read request latency
pureArrayWriteLatency      Current average write request latency

The FlashArray Management Information Base (MIB) describes the variables for which values can be requested. It can be downloaded by clicking a link on the GUI System tab, and imported into SNMP managers. Presently, the MIB cannot be downloaded via the CLI.

The Purity SNMP agent supports GET-type SNMP requests from managers in the same community that utilize protocol version v2c. The community is specified using the puresnmp setattr command with the --community option and localhost specified as the manager. An array can be removed from a community by specifying the --community option with a value of "" or '' (null).

The Purity SNMP agent supports protocol version v2c. SNMP v3 option values assigned to localhost by the puresnmp setattr command have no effect.
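For example (community name hypothetical), the built-in agent can be placed in, and removed from, a community as follows:

puresnmp setattr --community MyCmty localhost
puresnmp setattr --community '' localhost

The first command makes the agent respond to GET-type requests from v2c managers in community MyCmty; the second removes the array from its community.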

puresnmp Subcommands

In addition to email alert messages, Purity generates and transmits SNMP trap messages to designated (SNMP) managers that support v2c or v3 of the SNMP protocol. Subcommands of the puresnmp command designate managers and configure their communication and security attributes.

The puresnmp create subcommand creates a Purity SNMP manager object that identifies a host (SNMP manager) and specifies the protocol attributes for communicating with it. Transmission of SNMP traps is enabled immediately upon creation of the manager object.

The puresnmp delete subcommand stops communication with the specified managers and deletes the corresponding Purity manager objects.

The puresnmp list subcommand displays the communication and security attributes of the specified manager objects (if no manager objects are specified, displays attributes for all designated managers). Alternatively, if the --engineid option is specified, displays the array's engine ID, which may be required to configure some SNMP managers.

The puresnmp rename subcommand changes the name of the specified SNMP manager object. SNMP manager names are used in Purity administrative commands, and have no external significance.

The puresnmp setattr subcommand changes the hostname, IP address, or SNMP version and corresponding protocol and security attributes of the specified SNMP manager.

SNMP Protocol

Purity supports SNMP versions v2c and v3. The --host and --version option values are used to specify attributes that are common to both versions of the protocol. Each protocol version has slightly different security attributes:

SNMP v2c
A value must be specified for the --community option. Purity sets other security attribute values to null when a SNMP v2c manager is created or when an existing manager's version attribute is changed to v2c.


SNMP v3
A value must be specified for the --user attribute. Any value specified for the --community attribute is ignored. If either the --authprotocol or the --privacyprotocol option is specified, the corresponding passphrase must also be specified.
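For example, a sketch of creating a v3 manager object (hostname, credentials, and manager name are hypothetical):

puresnmp create --host MgmtHost.example.com --version v3 --user SnmpUser --authprotocol SHA --authpassphrase AuthPass01 --privacyprotocol AES --privacypassphrase PrivacyPass99 SNMPMANAGER7

Because both the --authprotocol and --privacyprotocol options are specified, the corresponding passphrases are required.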

Examples

Example 1

puresnmp create --host MyHost.com --community MyCmty SNMPMANAGER1

Creates a SNMP manager object for the SNMP manager running in host MyHost.com. Purity uses SNMP v2c to communicate in SNMP community MyCmty.

Example 2

puresnmp delete SNMPMANAGER2

Stops transmission of any future traps to SNMPMANAGER2 and deletes the manager object from the Purity object database. If SNMPMANAGER2 is recreated at a later time, its attributes must be re-entered.

Example 3

puresnmp list SNMPMANAGER3

Displays the protocol and security attributes for SNMPMANAGER3.

Example 4

puresnmp rename SNMPMANAGER4 SNMPMANAGER5

Changes the name of SNMPMANAGER4 to SNMPMANAGER5. None of the manager object's protocol and security attributes are changed.

Example 5

puresnmp setattr --host 172.169.0.12 SNMPMANAGER6

Changes the IP address associated with SNMPMANAGER6 to 172.169.0.12.

See Also
purearray(1), purealert(1)


Author
Pure Storage Inc. <[email protected]>


Name
purevol, purevol-copy, purevol-create, purevol-destroy, purevol-eradicate, purevol-recover, purevol-snap — manage the creation and destruction of Purity virtual storage volumes and snapshots of their contents, as well as reclamation of physical storage occupied by the data in them

Synopsis
purevol copy SOURCE TARGETVOL

purevol create --size SIZE VOL...

purevol destroy TARGET...

purevol eradicate TARGET...

purevol recover TARGET...

purevol snap [--suffix SUFFIX] VOL...

Arguments
TARGET

Name of a volume or snapshot object operated upon by the purevol destroy, purevol eradicate, and purevol recover subcommands.

VOL
Name of a volume object. The purevol create subcommand associates the name with the new object. In the purevol snap subcommand, denotes the volumes to be snapped.

Object Names

Purity object names use the Internet domain name (RFC 1035) character set plus the underscore character (_). The valid characters are letters (A-Z and a-z), digits (0-9), and the hyphen (-) and underscore (_) characters. The first and last characters of a name must be alphanumeric, and a name may not be entirely numeric.

Array names may be 1-56 characters in length; other objects that can be named (host groups, hosts, SNMP managers, and volumes) may be 1-63 characters in length. (Array name length is shorter so that the names of individual controllers, which are assigned by Purity based on the array name, do not exceed the maximum allowable by DNS.)

Names are case-insensitive on input. For example, vol1, Vol1, VOL1, etc. all represent the same volume. Purity displays names in the case in which they were specified in the create or rename subcommand that created the objects, regardless of the case in which they are entered in management commands.

Options
-h | --help

Can be used with any command or subcommand to display a brief syntax description.

--size SIZE
Virtual volume capacity as perceived by hosts.


--suffix SUFFIX
In the purevol snap subcommand, an optional suffix that is appended to the names of snapped volumes to create snapshot names. If not supplied, Purity constructs snapshot names of the form VOL.snap.NNN, where NNN is a unique number assigned by Purity.

Volume Sizes

Specified as an integer, optionally followed by one of the characters S, K, M, G, T, P, denoting 512-byte sectors, KiB, MiB, GiB, TiB, and PiB respectively, where "Ki" denotes 2^10, "Mi" denotes 2^20, etc. If no suffix letter is specified, size is assumed to be expressed in sectors.

Volumes must be between one megabyte and four petabytes in size. Purity adjusts specified sizes less than one megabyte to one megabyte, and fails commands in which sizes larger than four petabytes are specified.

Digit separators are not permitted in size specifications (e.g., 1000g is valid, but 1,000g is not).
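For example (volume names hypothetical):

purevol create --size 500G VOL_A
purevol create --size 2T VOL_B
purevol create --size 2097152 VOL_C

create a 500 GiB volume, a 2 TiB volume, and, since no suffix letter is specified in the third command, a volume of 2097152 512-byte sectors (1 GiB).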

Description
This page describes management of volumes.

The purevol create command creates one or more Purity virtual storage volumes of the host-visible size specified by the --size option.

Volumes do not consume physical storage until data is actually written to them, so volume creation has no immediate effect on an array's physical storage consumption.

Connecting Volumes to Hosts or Host Groups

Volumes must be connected to hosts in order for the hosts to read and write data on them. The purevol connect command, described on the purehost-connect(1) man page, establishes either private connections between volumes and individual hosts or shared connections between volumes and host groups. The purehgroup and purehost man pages describe alternate mechanisms for making host-volume connections.

Destroying Volumes and Snapshots

When a volume or snapshot is no longer required, it can be destroyed to reclaim the physical storage occupied by its data (purevol destroy command). Destroying a volume implicitly destroys all of its snapshots.

Destroying an individual snapshot only reclaims space that is uniquely charged to the snapshot. Space shared with older snapshots or with the originating volume remains allocated.

Before a volume can be destroyed, its private connections must be disconnected (purevol disconnect command), and its host group association, if any, must be broken (purehgroup setattr command, specifying the --remvol option).

Destruction of volumes and snapshots is not immediate. The purevol destroy command places volumes and snapshots in the eradication pending state, making them inaccessible immediately, but leaving data intact for 24 hours (unless the array runs low on space). At any time during the eradication pending period, the contents of a destroyed volume or snapshot can be recovered intact by the purevol recover command.

(In contrast, storage occupied by data in virtual sectors truncated from a volume by the purevol truncate command is reclaimed immediately.)

When a destroyed volume or snapshot's 24-hour eradication pending period has lapsed (or when array free space becomes dangerously low), Purity reclaims the physical storage occupied by its data during the continuous capacity optimization process.


If the physical storage occupied by a destroyed volume or snapshot's data is required immediately (for example, if a large amount of new data is to be written), an administrator can terminate the eradication pending period and initiate immediate storage reclamation with the purevol eradicate command. The purevol eradicate command only acts on previously destroyed volumes and snapshots.

Once reclamation starts, whether because an eradication pending period has lapsed, because array free space has become dangerously low, or because a purevol eradicate command was executed, a destroyed volume or snapshot's data can no longer be recovered.

Snapshots

The purevol snap command creates point-in-time snapshots of the contents of one or more volumes. FlashArray snapshots make use of Purity deduplication technology; they consume space only when data is written to the volumes on which they are based. An array can support 5,000 concurrent snapshots.

All snapshots taken as a result of a single purevol snap command are atomic. They represent the specified volumes' contents as of a single instant in time.

Snapshots are immutable; they cannot be connected to hosts or host groups, and therefore the data they contain cannot be read or written.

The purevol copy command can be used to create a volume from another volume or from a snapshot, or to replace the contents of an existing volume with those of another volume or a snapshot. Volumes created in this way have a Source attribute whose value is the name of the originating volume.

By default, Purity assigns a unique name to each snapshot it creates by appending the string .snap. and a unique number to the name of the parent volume or snapshot. Assigned snapshot names may be longer than the 63-character allowable maximum for volume names. The purevol rename command can be used to rename a snapshot, but names assigned in this way must conform to the 63-character maximum length and other volume naming rules.

Snapshot Attributes

Snapshots have a size, which cannot be changed. When a snapshot is copied to a new or existing volume, the volume's size becomes that associated with the snapshot. The purevol setattr and purevol truncate commands can be used to respectively increase and decrease the size of volumes, including those created by copying snapshots.

Snapshots and volumes created from them have an additional Source attribute. The value of the attribute is the name of the volume that originated the chain of volumes and snapshots. Volumes created by the purevol create subcommand have a null value for the Source attribute.

Exceptions
• Destroyed volumes cannot be truncated directly. They must be recovered to their original sizes first (provided that eradication has not begun) and then truncated. Truncation is immediate and irrevocable.

• Volumes cannot be eradicated unless (a) they have previously been destroyed, and (b) they are within their 24-hour eradication pending periods.

• The names of destroyed volumes cannot be reused to create other volumes until eradication has begun.

• When a volume is destroyed, all of its existing snapshots are destroyed as well. Similarly, when a volume is recovered, snapshots that were implicitly destroyed as a consequence of the volume's destruction are recovered as well. Snapshots that are destroyed explicitly are not recovered by a purevol recover command specifying the volumes from which they were created.


Examples

Example 1

purevol create --size 100G VOL1 VOL2 VOL3

Creates three volumes called VOL1, VOL2, and VOL3, each with a host-visible capacity of 100 GiB (100 × 2^30 bytes).

Example 2

purevol destroy VOL4

... less than 24 hours later...

purevol recover VOL4

Makes VOL4 inaccessible and starts the 24-hour eradication pending period after which Purity would begin to reclaim the physical storage occupied by the volume's data. During the eradication pending period, the volume is restored to active status with its data intact.

Example 3

purevol create --size 100G VOL5 VOL6

... time passes...

purevol destroy VOL5

... less than 24 hours passes...

purevol eradicate VOL5

Creates 100-gigabyte volumes VOL5 and VOL6. Some time later, destroys VOL5, making it inaccessible to hosts and starting the 24-hour eradication pending period after which Purity reclaims the physical storage occupied by its data. The purevol eradicate command begins reclamation of physical storage occupied by VOL5's data immediately; VOL5 can no longer be recovered after this point.

See Also
purevol-list(1), purevol-rename(1), purevol-setattr(1)

purehgroup(1), purehgroup-connect(1), purehost(1), purehost-connect(1)


puresnap(7)

Author
Pure Storage Inc. <[email protected]>


Name
purevol-list, purevol-listobj, purevol-monitor — display volumes' attributes and information about the virtual and physical storage they consume

Synopsis
purevol list [ --all | --connect | --space ] [ --cli | --csv | --nvp ] [--notitle] [--pending] [ --private | --shared ] [--snap] [--thin-provisioning] [--total] [VOL...]

purevol listobj [--csv] [--pending] [ --type { host | snap | vol } ] [VOL...]

purevol monitor [--csv] [ --historical { 1h | 3h | 24h | 7d | 30d | 90d | 1y } | [--interval SECONDS] [--nrep REPEAT-COUNT] [--size] ] [--notitle] [VOL...]

Arguments
VOL

Volume for which the information specified by options is to be displayed

Options
Options that control information displayed:

-h | --help
Can be used with any command or subcommand to display a brief syntax description.

default (no display content option specified)
Displays names, virtual sizes, and reported serial numbers (created by Purity when volumes are created) for the specified volumes.

--all (purevol list only)
Displays names, virtual sizes, connected hosts and host groups, and host and array worldwide names or IQNs for the specified volumes.

--connect (purevol list only)
Displays names, virtual sizes, connected hosts and host groups for specified volumes.

--interval SECONDS
Sets the interval (in seconds) to log performance data. If omitted, the interval defaults to every 5 seconds.

--nrep REPEAT-COUNT
Sets the number of times to log performance data. If omitted, the repeat count defaults to 1.

--pending
Includes destroyed volumes that are in the eradication pending state in the display. If not specified, these volumes are not shown.

--private (purevol list only)
Used with purevol list --connect to restrict the display to specified volumes' privately connected hosts. Invalid when combined with other options.

--shared (purevol list with --connect option only)
Used with purevol list --connect to restrict the display to specified volumes' shared connections. Invalid when combined with other options.


--size
Displays the average I/O sizes per operation (read, write, and total).

--snap (purevol list only)
Displays information for snapshots of the specified volumes rather than for the volumes themselves.

--space (purevol list only)
Displays size and space consumption information items listed below for each specified volume.

--thin-provisioning (purevol list --space only)
Displays thin-provisioning savings (i.e., the percentage of a volume's blocks that are currently unmapped).

--type (purevol listobj only)
Specifies the type of information (connected hosts, snapshots, or echoed volume names) about specified volumes to be produced in whitespace-separated list format suitable for scripting.

Options that control display format:

--cli
Displays specified information in the form of CLI commands that could be issued to assign the current values to the specified attributes. Not meaningful when combined with non-settable attributes.

--csv
Lists information in comma-separated value format. This format is designed for importation into spreadsheets and for scripting.

--notitle
Suppresses generation of an initial line of output containing column titles.

--nvp
Lists each argument's name and specified information items, one to a line, in the form ITEM-NAME=VALUE. Argument names and information items are displayed flush left. This format is designed both for convenient viewing of what might otherwise be wide listings, and for parsing individual items separated by whitespace for insertion into scripts.

--total
Follows output lines with a single line containing column totals in columns where they are meaningful. Ignored when --nvp is specified (where permitted).

Description
The purevol list command displays the information indicated by content options for each specified volume. If no volumes are specified, the command displays information for all volumes.

• If no options are specified, the command displays virtual size and serial number (generated by Purity when volumes are created) for each specified volume.

• The mutually exclusive --all, --connect, and --space options:

--all
Displays specified volumes' sizes, LUNs, and "paths" to connected hosts (host and corresponding array port WWNs or IQNs).

--connect
Displays the hosts and host groups to which the specified volumes are connected and the LUNs used by each to address them.


--space

Displays the following information about provisioned (virtual) size and physical storage consumption for each specified volume:

Size
Size of the volume as perceived by host storage administration tools.

Data Reduction
Ratio of unique volume sectors containing host-written data to the physical storage space currently occupied by the data after reduction.

System
Amount of physical storage space occupied by RAID-3D and other array metadata.

Shared Space
Amount of physical storage space shared with other volumes and snapshots as a result of data deduplication.

Volume
Amount of physical storage space currently occupied by host-written data (exclusive of array metadata or snapshots).

Snapshots
Amount of physical storage space currently occupied by data unique to one or more snapshots.

Total
Amount of physical storage space currently occupied by host-written data and the array metadata that describes and protects it.

The --pending option may be used in conjunction with other display content options to include specified volumes whose state is eradication pending in the output display. In the absence of this option, destroyed volumes are not displayed.

The --total option is not valid with either of the --all or --connect options because these options can conceivably display more than one output line per volume. Use purevol list --total to determine the aggregate host-visible size of some or all volumes.

The purevol listobj command creates lists of certain attributes of specified volumes in either whitespace- or comma-separated form, suitable for scripting. The command produces the following types of lists:

--type host
List contains hosts to which the specified volumes are connected. If no volumes are specified, list contains names of all hosts connected to any volume.

--type vol (default if --type option not specified)
List contains the specified volume names. If no volume names are specified, contains the names of all volumes. Snapshots are not included in the listing.

The purevol monitor command can be used to display instantaneous performance data for the specified volumes. If no volumes are specified, the command displays information for all volumes.

The purevol monitor --historical command can be used to display historical performance data at one of the following resolutions: 1 hour, 3 hours, 24 hours, 7 days, 30 days, 90 days, or 1 year. These commands can be used with the --csv and --notitle options to export historical data.


Lists are whitespace-separated by default. Specify the --csv option to produce a comma-separated list.

Exceptions
None.

Examples

Example 1

purevol list

Displays names, sizes, and serial numbers of all volumes.

Example 2

purevol list --space --csv --notitle VOL1 VOL2 VOL3

Displays names, sizes, data reduction ratios, and physical storage space occupied by volume data and RAID-3D check data for volumes VOL1, VOL2, and VOL3, in comma-separated value format. The --notitle option suppresses the line of column titles that would ordinarily precede the output.

Example 3

purevol list --connect --shared --nvp VOL4 VOL5

For volumes VOL4 and VOL5, displays names, sizes, connected host groups and their associated hosts, and LUNs used to address the volumes, all in name-value pair (ATTRIBUTE-NAME=ATTRIBUTE-VALUE) format. Only shared connections are included in the display.

Example 4

purevol list --connect --shared VOL1

Displays host groups to which VOL1 has shared connections and the LUNs used by hosts in the groups to address it.

Example 5

purehost listobj --type vol --private $(purevol listobj --type host --private VOL1)

Produces a list of hosts with private connections to VOL1, which is input to the purehost listobj command to display a list of the volumes to which those hosts have private connections.

Example 6

purehost list --connect $(purevol listobj --type host VOL1)

The inner purevol listobj command produces a whitespace-separated list of hosts to which VOL1 is connected. The list becomes input to the outer purehost list command. The result is a display of information about hosts connected to VOL1.

Example 7

purevol destroy VOL2
purevol list --space --pending

Places VOL2 in the eradication pending state for 24 hours. Produces a list of volumes and the space they occupy, including destroyed volume VOL2, whose data continues to occupy space for 24 hours following execution of the purevol destroy command or until the volume is eradicated by the purevol eradicate command.

Example 8

purevol monitor --historical 24h --csv VOL1 VOL2

Produces CSV output containing historical performance data for volumes VOL1 and VOL2 at the 24-hour resolution.

Example 9

purevol monitor --nrep 60 --interval 1 --csv --notitle --size --total

Produces CSV output containing 60 seconds' worth of instantaneous performance data for all volumes, including average I/O sizes and a total row for all volumes.

See Also
purevol(1), purevol-rename(1), purevol-setattr(1)

purehgroup(1), purehgroup-connect(1), purehost(1), purehost-connect(1)

puresnap(7)

Author
Pure Storage Inc. <[email protected]>

Name
purearray-rename, purehgroup-rename, purehost-rename, purevol-rename — change the names by which Purity refers to objects in administrative interactions

Synopsis
purearray rename NEW-NAME

purehgroup rename OLD-NAME NEW-NAME

purehost rename OLD-NAME NEW-NAME

purevol rename OLD-NAME NEW-NAME

Arguments
OLD-NAME

Name of the object to be renamed. (Not applicable in the case of purearray rename).

NEW-NAME

Name by which the object is to be known after the command executes.

Options
-h | --help

Can be used with any command or subcommand to display a brief syntax description.

Description
The purearray rename command changes the name of an array to NEW-NAME.

Other rename subcommands change the names of the objects identified by OLD-NAME to NEW-NAME. Changes are effective immediately; once a rename subcommand executes, OLD-NAME is no longer recognized in CLI or GUI interactions. In the Purity GUI, NEW-NAME appears upon the next refresh of any page containing the name of the specified object.

Object Names

Purity object names use the Internet domain name (RFC 1035) character set plus the underscore character (_). The valid characters are letters (A-Z and a-z), digits (0-9), and the hyphen (-) and underscore (_) characters. The first and last characters of a name must be alphanumeric, and a name may not be entirely numeric.

Array names may be 1-56 characters in length; other objects that can be named (host groups, hosts, SNMP managers, and volumes) may be 1-63 characters in length. (Array name length is shorter so that the names of individual controllers, which are assigned by Purity based on the array name, do not exceed the maximum allowable by DNS.)

Names are case-insensitive on input. For example, vol1, Vol1, VOL1, etc. all represent the same volume. Purity displays names in the case in which they were specified in the create or rename subcommand that created the objects, regardless of the case in which they are entered in management commands.
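Because generated names can easily violate these rules, a script running on an administrative workstation might pre-validate a proposed name before issuing a rename. The following sketch (vol1 and db_vol-01 are hypothetical names; the validation logic merely restates the rules above and is not a Purity feature) checks the length limit, the character set, the first/last-character rule, and the all-numeric prohibition (for array names, substitute 56 for 63):

NAME=db_vol-01
if [ ${#NAME} -le 63 ] \
   && echo "$NAME" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9_-]*[A-Za-z0-9])?$' \
   && ! echo "$NAME" | grep -Eq '^[0-9]+$'
then
    purevol rename vol1 "$NAME"
fi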

Exceptions
None.

Examples

Example 1

purearray rename NEWARRAYNAME

Changes the name of an array to NEWARRAYNAME.

Example 2

purevol rename VOL1 VOL1NEW

Changes the name of VOL1 to VOL1NEW.

See Also
purearray(1), purehgroup(1), purehost(1), purevol(1)

puresnap(7)

Author
Pure Storage Inc. <[email protected]>

Name
purevol-setattr, purevol-truncate — increase and decrease volume sizes

Synopsis
purevol setattr --size NEW-SIZE VOL...

purevol truncate --size NEW-SIZE VOL...

Arguments
VOL

Name of the volume object whose size is to be changed.

Options
-h | --help

Can be used with any command or subcommand to display a brief syntax description.

--size

New virtual size for the specified volumes.

Volume Sizes

Specified as an integer, optionally followed by one of the characters S, K, M, G, T, P, denoting 512-byte sectors, KiB, MiB, GiB, TiB, and PiB respectively, where "Ki" denotes 2^10, "Mi" denotes 2^20, etc. If no suffix letter is specified, the size is assumed to be expressed in sectors.

Volumes must be between one megabyte and four petabytes in size. Purity adjusts specified sizes less than one megabyte to one megabyte, and fails commands in which sizes larger than four petabytes are specified.

Digit separators are not permitted in size specifications (e.g., 1000g is valid, but 1,000g is not).
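As an illustration of the suffix notation, the following specifications (VOL1 is a hypothetical volume name) all request the same one-gibibyte size:

purevol setattr --size 2097152S VOL1
purevol setattr --size 1048576K VOL1
purevol setattr --size 1024M VOL1
purevol setattr --size 1G VOL1

(2,097,152 sectors × 512 bytes = 1,048,576 KiB = 1,024 MiB = 1 GiB.)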

Description
The purevol setattr command increases volumes' provisioned sizes (virtual sizes as perceived by hosts). The purevol truncate command decreases provisioned volume sizes. Changes in volume size are immediate and always affect the highest-numbered sector addresses.

Changes in volume size are visible to connected hosts immediately. When a volume is truncated, any data in truncated virtual sectors is destroyed immediately.

The size of a truncated volume can be increased by the purevol setattr command, but data in truncated sectors cannot be retrieved. When a volume's size is increased, reading data in the new virtual sector addresses before they are written returns zeros.
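A sketch of this behavior (hypothetical volume name and sizes):

purevol truncate --size 50G VOL1
purevol setattr --size 100G VOL1

The truncate destroys any data above the 50-gigabyte boundary; the subsequent setattr restores the original size, but reads of sectors above that boundary return zeros until hosts rewrite them.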

Exceptions
• The purevol setattr command cannot reduce the size of a volume, nor can purevol truncate increase the size of a volume.

• The size of a volume cannot be reduced to less than one megabyte or increased to more than four petabytes. If a --size smaller than one megabyte is specified, Purity changes the volume's size to one megabyte. If a --size greater than four petabytes is specified, the command fails.

Examples

Example 1

purevol setattr --size 100G VOL1 VOL2

Increases the sizes of VOL1 and VOL2 to 100 gigabytes each. If either volume is larger than 100 gigabytes, an error is logged, and that volume's size does not change.

Example 2

purevol setattr --size 1000000000S VOL2

Increases the size of VOL2 to 1,000,000,000 sectors, or 512,000,000,000 bytes (assuming that the volume's current size is less than that amount).

See Also
purevol(1), purevol-list(1), purevol-rename(1)

purehgroup(1), purehgroup-connect(1), purehost(1), purehost-connect(1)

Author
Pure Storage Inc. <[email protected]>

Chapter 11. Common CLI Administrative Tasks

CLI Help

Four varieties of interactive help are available within the Purity CLI:

Top-level
A listing of the commands supported by the Purity CLI

Command
A brief description of the function of a CLI command and its subcommands, if any

Subcommand
A brief description of the function of a CLI subcommand and its arguments and options

man page
The Linux man command, with Purity CLI command names as arguments.

The sections that follow describe these four varieties.

Top-Level Help
The purehelp command lists the commands supported by the Purity CLI, as Example 11.1 illustrates.

Example 11.1. Top-Level Help

pureuser@MyFlashArray-ct0> purehelp
Available commands:
-------------------
pureadmin
purealert
purearray
pureconfig
puredns
puredrive
pureds
purehelp
purehgroup
purehost
purehw
purelogger
pureman
puremonitor
purenetwork
pureport
puresnmp
purevol
exit
logout

Command Help
All CLI commands accept a --help (short form: -h) option, causing them to display the subcommands available with that command. Example 11.2 illustrates command-level help for the purevol command. At this level, the CLI lists the subcommands available with each command and a brief description of each.

Example 11.2. Command-Level Help

pureuser@MyFlashArray-ct0> purevol -h
usage: purevol [-h]
               {connect,copy,create,destroy,disconnect,eradicate,list,listobj,recover,rename,setattr,snap,truncate}
               ...

positional arguments:
  {connect,copy,create,destroy,disconnect,eradicate,list,listobj,recover,rename,setattr,snap,truncate}
    connect             connect one or more volumes to a host
    copy                copy a volume or snapshot to one or more volumes
    create              create one or more volumes
    destroy             destroy one or more volumes or snapshots
    disconnect          disconnect one or more volumes from a host
    eradicate           eradicate one or more volumes or snapshots
    list                display information about volumes or snapshots
    listobj             list objects associated with one or more volumes
    recover             recover one or more destroyed volumes or snapshots
    rename              rename a volume or snapshot
    setattr             set volume attributes (increase size)
    snap                take snapshots of one or more volumes
    truncate            truncate one or more volumes (reduce size)

optional arguments:
  -h, --help            show this help message and exit

Subcommand-Level Help
Similarly, the --help (or -h) option can be used with each subcommand to display the subcommand's syntax and available options. Example 11.3 illustrates subcommand-level help for the purevol create command.

Example 11.3. Subcommand Help

pureuser@MyFlashArray-ct0> purevol create -h
usage: purevol create [-h] --size SIZE VOL ...

positional arguments:
  VOL                   volume name

optional arguments:
  -h, --help            show this help message and exit
  --size SIZE           virtual size as perceived by hosts (e.g., 100M, 10G, 1T)

Man Page Help
More extensive help is available in Linux man pages for each CLI command and subcommand. To view the man page for a command or subcommand, enter pureman COMMAND or pureman COMMAND-SUBCOMMAND (the hyphen is required) respectively, as shown in Example 11.4.

Example 11.4. man Page Help

pureuser@MyFlashArray-ct0> pureman purevol-create
PUREVOL(1)                   Purity CLI Man Pages                   PUREVOL(1)

NAME
       purevol, purevol-copy, purevol-create, purevol-destroy,
       purevol-eradicate, purevol-recover, purevol-snap - manage the creation
       and destruction of Purity virtual storage volumes and snapshots of
       their contents, as well as reclamation of physical storage occupied
       by the data in them

SYNOPSIS
       purevol copy SOURCE TARGETVOL

       purevol create --size SIZE VOL...

       purevol destroy TARGET...

       purevol eradicate TARGET...

       purevol recover TARGET...

       purevol snap VOL...

ARGUMENTS
       TARGET Name of a volume or snapshot object operated upon by the
              purevol destroy, purevol eradicate, and purevol recover
              subcommands.

...

Getting Started
FlashArrays present disk-like volumes to hosts. From the Purity standpoint, volumes and hosts are abstract objects which must be created by an administrator and connected to each other before actual host computers can write and read data on volumes.
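In outline, a typical first session looks like the following sketch (dbvol, dbhost, and the WWN are hypothetical; each step is detailed in the sections that follow):

purevol create --size 500G dbvol
purehost create --wwnlist 0123456789abcdef dbhost
purevol connect --host dbhost dbvol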

Creating Volumes
Once a FlashArray is installed and initially configured, an administrator's typical first task is creation of volumes that hosts will subsequently format and use to store and access data. Example 11.5 illustrates three examples of volume creation using the CLI.

Example 11.5. Creating Volumes

pureuser@MyFlashArray-ct0> purevol create vol1
usage: purevol create [-h] --size SIZE VOL ...
purevol create: error: argument --size is required
pureuser@MyFlashArray-ct0> purevol create vol1 --size 100g
Name  Size  Serial
vol1  100G  7CBFCE0B2660CC3000010001
pureuser@MyFlashArray-ct0> purevol create vol2 vol3 vol4 --size 200g
Name  Size  Serial
vol2  200G  7CBFCE0B2660CC3000010002
vol3  200G  7CBFCE0B2660CC3000010003
vol4  200G  7CBFCE0B2660CC3000010004

The first command fails because no virtual size is specified for the volume. The second creates a volume that hosts will perceive as containing 100 gigabytes of storage, although it occupies almost no physical space until hosts write data to it. The third creates three volumes in a single command. There is no practical limit to the number of volumes that can be created in a single command, but only one --size option can be specified in the command, so all volumes initially have the same virtual size.

The output of the purevol create command lists the name, size, and Purity-generated serial number of each volume created.

Creating Hosts
Internally, Purity represents host computers that use FlashArray storage services as host objects (usually just called "hosts"). A Purity host object has a name and one or more Fibre Channel worldwide names (WWNs) or iSCSI qualified names (IQNs). WWNs and IQNs correspond to initiators, typically host computers or host computer ports with which a FlashArray exchanges commands, data, and messages. Purity represents a host's WWNs and IQNs as a multi-valued attribute, specified via either the --wwnlist (alias --wwn) or the --iqnlist (alias --iqn) option when hosts are created. Fibre Channel WWNs consist of exactly 16 hexadecimal digits, and may be specified either as continuous strings or as eight digit pairs separated by colons. iSCSI IQNs conform to Internet fully-qualified domain name (FQDN) standards. Example 11.6 illustrates four examples of host creation.

Example 11.6. Creating Hosts

pureuser@MyFlashArray-ct0> purehost create host1
Name   WWN
host1  -
pureuser@MyFlashArray-ct0> purehost create --wwnlist 0123456789abcde2 Host2
Name   WWN
Host2  01:23:45:67:89:AB:CD:E2
pureuser@MyFlashArray-ct0> purehost create --wwnlist 0123456789abcde3,01:23:45:67:89:ab:cd:e4 Host3
Name   WWN
Host3  01:23:45:67:89:AB:CD:E3  01:23:45:67:89:AB:CD:E4
pureuser@MyFlashArray-ct0> purehost create --wwn 01:23:45:67:89:ab:cd:e4 Host4
Error on Host4: The specified WWN is already in use.

The first command creates host object host1. At the time of creation, host1 cannot connect to volumes because no WWNs or IQNs are associated with it. The second creates Host2 and assigns WWN 0123456789abcde2 to it; volumes can be connected to Host2 immediately. The third creates Host3 and associates two WWNs with it (typically because the host is equipped with two Fibre Channel ports). This example also illustrates the two forms in which worldwide names can be entered: continuous digit string and colon-separated two-digit groups. The fourth attempts to create Host4 and fails, because the WWN specified is already associated with another host. Because they identify host ports on the storage network, WWNs must be unique within an array.

Connecting Hosts and Volumes
For a host to read and write data on a volume, a connection between the two must be established. A connection is effectively permission for an array to respond to I/O commands from a given set of storage network addresses (host WWNs or IQNs). In SCSI terminology, a connection is an Initiator-Target-LUN (I_T_L) nexus, a completely specified path for message and data exchange. By default, Purity chooses a LUN based on availability at both the initiator (host) and target (array) when a connection is established. Alternatively, an administrator can specify an unused LUN when establishing a connection.

The purehost connect command establishes private connections between a single volume and one or more hosts. The purevol connect command establishes connections between a single host and one or more volumes. The two commands are functionally identical; both are provided for administrative convenience. Example 11.7 illustrates the use of both commands.

Example 11.7. Establishing Host-Volume Connections

pureuser@MyFlashArray-ct0> purehost connect --vol vol1 host1 host2 host3
Name   Vol   LUN
host1  vol1  1
Host2  vol1  1
Host3  vol1  1
pureuser@MyFlashArray-ct0> purevol connect --host host1 vol2 vol3 vol4
Name  Host Group  Host   LUN
vol2  -           host1  2
vol3  -           host1  3
vol4  -           host1  4
pureuser@MyFlashArray-ct0> purevol connect --lun 249 --host host1 vol5
Name  Host Group  Host   LUN
vol5  -           host1  249
pureuser@MyFlashArray-ct0> purehost disconnect --vol vol1 host2
Name   Vol
host2  vol1
pureuser@MyFlashArray-ct0> purevol disconnect --host host3 vol1
Name  Hgroup  Host
vol1  -       host3

The first command connects vol1 to hosts host1, host2, and host3. The second connects host1 to volumes vol2, vol3, and vol4. The third illustrates explicit specification of the LUN to be associated with the connection. Because host1 is not associated with a host group, any of the LUNs in the range [1…255] can be used for private connections to it. If a LUN is specified in a purehost connect or purevol connect command, exactly one host and one volume must be specified. The purehost disconnect and purevol disconnect commands in the example break the private connections between Host2 and vol1, and between Host3 and vol1, respectively. Either can be used to break a private connection between a host and a volume, regardless of how the connection was established. As with the connect subcommands, purehost disconnect can disconnect multiple hosts from a single volume, and purevol disconnect can disconnect multiple volumes from a single host. The two subcommands are provided for administrative convenience.

Whichever way a connection is made, the host administrator can immediately rescan the host's disk devices (if necessary) to discover the logical units represented by newly-connected volumes, create file systems, and mount them for storing and retrieving data.
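The rescan procedure itself is host-specific. As one illustrative sketch for a Linux host with SCSI host bus adapters (paths and commands vary by operating system, distribution, and driver; these are host-side commands, not Purity commands):

# Ask each SCSI HBA to rescan its bus for newly presented logical units
for H in /sys/class/scsi_host/host*; do
    echo "- - -" > $H/scan
done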

Resizing Volumes
FlashArray administrators also perform routine management tasks on objects within an array. Typically the most frequent of these are operations on volumes. You can change the size (host-visible storage capacity) of a volume, disconnect it from hosts, rename it, destroy it, eradicate all or part of the data it contained, and under certain circumstances, restore the contents of a destroyed volume or truncated sector range.

To minimize the possibility of inadvertent errors, the CLI uses different subcommands for increasing and decreasing the host-perceived sizes of volumes. The purevol setattr command increases the sizes of one or more volumes to a given level; the purevol truncate command decreases volumes’ sizes. Both commands can operate on multiple volumes, but purevol setattr can only increase volume sizes, and purevol truncate can only decrease them.

Example 11.8 illustrates the use of these commands to change volumes’ sizes.

Example 11.8. Resizing Volumes

pureuser@MyFlashArray-ct0> purevol list vol1 vol2 vol3 vol4
Name  Size  Serial
vol1  100G  7CBFCE0B2660CC3000010001
vol2  200G  7CBFCE0B2660CC3000010002
vol3  200G  7CBFCE0B2660CC3000010003
vol4  200G  7CBFCE0B2660CC3000010004
pureuser@MyFlashArray-ct0> purevol setattr --size 300g vol1 vol2
Name  Size  Serial
vol1  300G  7CBFCE0B2660CC3000010001
vol2  300G  7CBFCE0B2660CC3000010002
pureuser@MyFlashArray-ct0> purevol truncate --size 100g vol3 vol4
Name  Size  Serial
vol3  100G  7CBFCE0B2660CC3000010003
vol4  100G  7CBFCE0B2660CC3000010004
pureuser@MyFlashArray-ct0> purevol setattr --size 250g vol2 vol3
Name  Size  Serial
vol3  250G  7CBFCE0B2660CC3000010003
Error on vol2: Implicit truncation not permitted.
pureuser@MyFlashArray-ct0> purevol list vol2 vol3
Name  Size  Serial
vol2  300G  7CBFCE0B2660CC3000010002
vol3  250G  7CBFCE0B2660CC3000010003
pureuser@MyFlashArray-ct0> purevol truncate --size 275g vol2 vol3
Name  Size  Serial
vol2  275G  7CBFCE0B2660CC3000010002
Error on vol3: Target size exceeds volume size.
pureuser@MyFlashArray-ct0> purevol list vol2 vol3
Name  Size  Serial
vol2  275G  7CBFCE0B2660CC3000010002
vol3  250G  7CBFCE0B2660CC3000010003

The first resizing command increases the sizes of vol1 and vol2 from 100 gigabytes and 200 gigabytes respectively to 300 gigabytes. The second reduces the sizes of vol3 and vol4 from 200 gigabytes to 100 gigabytes. The third attempts to change the sizes of vol2 and vol3 to 250 gigabytes; the command succeeds for vol3, but fails for vol2 because it is already larger than 250 gigabytes. The fourth attempts to change the sizes of vol2 and vol3 to 275 gigabytes; the command succeeds for vol2, but fails for vol3 because it is already smaller than 275 gigabytes (the “target size”).

Increasing a volume's host-perceived size does not increase its consumption of physical storage. Volumes only consume storage when hosts write data to previously unused volume block addresses or overwrite blocks with less-compressible data.

If the physical storage occupied by data in truncated sectors is urgently required, the eradication pending period can be terminated, and space reclamation initiated immediately, by issuing the purevol eradicate command.

Destroying Volumes
When the data stored in a volume is no longer of interest, the purevol destroy command removes the volume from service so that its data can be obliterated and the storage it occupies reclaimed. Destroyed volumes become invisible to hosts and to most CLI and GUI displays immediately, but their data is not obliterated, nor is the storage it occupies reclaimed, until a 24-hour eradication pending period has elapsed. During this period, a destroyed volume can be recovered with its data intact. The purevol eradicate command terminates the eradication pending period and begins storage reclamation immediately. Once eradication has begun, a volume can no longer be recovered.

Example 11.9 illustrates the use of the purevol destroy command.

Example 11.9. Destroying Volumes

pureuser@MyFlashArray-ct0> purevol list --connect vol1
Name  Size  LUN  Hgroup  Host
vol1  100G  1    -       host1
pureuser@MyFlashArray-ct0> purevol destroy vol1
Error on vol1: Could not destroy volume: volume still connected to host.
pureuser@MyFlashArray-ct0> purevol disconnect --host host1 vol1
Name  Hgroup  Host
vol1  -       host1
pureuser@MyFlashArray-ct0> purevol destroy vol1
Name
vol1
pureuser@MyFlashArray-ct0> purevol list
Name  Size  Serial
vol2  200G  27CA508DA38AF9F400010001
vol3  300G  27CA508DA38AF9F400010002
vol4  300G  27CA508DA38AF9F400010003
pureuser@MyFlashArray-ct0> purevol list --pending
Name  Size  Time Remaining  Source  Created               Serial
vol1  100G  1 day, 0:00:00  -       2013-04-01 15:45 PDT  27CA508DA38AF9F400010000
vol2  100G  -               -       2013-04-01 15:46 PDT  27CA508DA38AF9F400010001
vol3  100G  -               -       2013-04-01 15:47 PDT  27CA508DA38AF9F400010002
vol4  100G  -               -       2013-04-01 15:48 PDT  27CA508DA38AF9F400010003

As background, the first command illustrates that vol1 is connected to host1. The first purevol destroy attempt fails because vol1 still has a connection. After vol1's connection to host1 is broken, the purevol destroy command succeeds. The final two commands illustrate that a volume in the eradication pending state (vol1) is displayed by the purevol list command only when the --pending option is specified.

Recovering and Eradicating Destroyed Volumes
To provide a margin of error for administrators, Purity retains a volume's contents and structure for 24 hours after it is destroyed. The purevol recover command restores a destroyed volume to its size at the time of destruction with its contents intact, as Example 11.10 illustrates.

Example 11.10. Recovering and Eradicating Volumes

pureuser@MyFlashArray-ct0> purevol destroy vol1
Name
vol1
pureuser@MyFlashArray-ct0> purevol list --pending vol1
Name  Size  Source  Time Remaining  Created               Serial
vol1  100G  -       1 day, 0:00:00  2013-04-12 15:45 PDT  27CA508DA38AF9F400010004
pureuser@MyFlashArray-ct0> purevol recover vol1
Name
vol1
pureuser@MyFlashArray-ct0> purevol list vol1
Name  Size  Source  Created               Serial
vol1  100G  -       2013-04-12 15:45 PDT  27CA508DA38AF9F400010004
pureuser@MyFlashArray-ct0> purevol destroy vol2
Name
vol2
pureuser@MyFlashArray-ct0> purevol eradicate vol2
Name
vol2
pureuser@MyFlashArray-ct0> purevol recover vol2
Error on vol2: Volume does not exist.

The purevol recover command restores destroyed volume vol1 (provided that it executes within the volume's eradication pending period). The purevol eradicate command obliterates the data from destroyed volume vol2 and begins reclamation of the storage it had occupied. The final command fails to recover vol2, illustrating that once it is eradicated, a volume can no longer be recovered.

Renaming Volumes

Volume names identify volumes in CLI and GUI commands and displays. Volume names are assigned at creation, and, as with other objects, can be changed at any time by the purevol rename command, as Example 11.11 illustrates. Renaming a volume has no effect on its attributes or connections.

Example 11.11. Renaming Volumes

pureuser@MyFlashArray-ct0> purevol rename vol4 NewVol4
Name
NewVol4
pureuser@MyFlashArray-ct0> purevol list vol4
Error on vol4: Volume does not exist.
pureuser@MyFlashArray-ct0> purevol list NewVol4
Name     Size  Source  Created               Serial
NewVol4  300G  -       2013-04-12 15:48 PDT  27CA508DA38AF9F400010003

The name of vol4 is changed to NewVol4. The purevol list commands that follow illustrate that after renaming, a volume is immediately visible under its new name, and is no longer visible under its former name. All attributes, including the volume serial number, which is visible to hosts, remain unchanged.

Ongoing Administration of Hosts
Purity host administration includes the following tasks:

• Creation, deletion and renaming of host objects

• Connecting hosts to and disconnecting them from volumes

• Changing the Fibre Channel WWNs or iSCSI IQNs associated with a host

• Adding hosts to and removing them from host groups.

Example 11.6 illustrates host creation.

Example 11.7 illustrates both connection of multiple volumes to a single host (purevol connect) and multiple hosts to a single volume (purehost connect).

Example 11.12 illustrates the remaining host operations.
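Note that the referenced examples do not show changing a host's WWNs or IQNs. Assuming that purehost supports a setattr subcommand accepting the same --wwnlist and --iqnlist options used at creation time (an assumption to verify with pureman purehost on your array), such a change might look like:

purehost setattr --wwnlist 0123456789abcde5,0123456789abcde6 Host2

where the WWNs are hypothetical and replace Host2's existing list.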

Example 11.12. Administrative Operations on Hosts

pureuser@MyFlashArray-ct0> purehgroup listobj --type host hgroup1
host1 host2
pureuser@MyFlashArray-ct0> purehost list --connect host1
Name   LUN  Vol   Host Group
host1  1    vol1
pureuser@MyFlashArray-ct0> purehost delete host1
Error on host1: Could not delete host.
pureuser@MyFlashArray-ct0> purehgroup setattr --remhost host1 hgroup1
Name     Hosts
hgroup1  host1
pureuser@MyFlashArray-ct0> purehost delete host1
Error on host1: Could not delete host.
pureuser@MyFlashArray-ct0> purehost disconnect --vol vol1 host1
Name   Vol
host1  vol1
pureuser@MyFlashArray-ct0> purehost delete host1
Name
host1

The first two commands illustrate that host1 is associated with host group hgroup1 and has a private connection to vol1. The first attempt to delete host1 fails because it is associated with a host group. After host1 is removed from hgroup1, the attempt to delete it still fails because it is still connected to volume vol1. Once the connection is broken, deletion of host1 succeeds because it is not associated with a host group and has no private connections to volumes.

Monitoring I/O Performance
The puremonitor command produces periodic displays of I/O performance information for some or all of an array's volumes, as Example 11.13 illustrates.

Example 11.13. Monitoring I/O Performance

pureuser@MyFlashArray-ct0> puremonitor --vollist vol1,vol2 --nrep 3 --interval 1 --total
Name     B/s(read)  B/s(write)  op/s(read)  op/s(write)  us/op(read)  us/op(write)
vol1     0.00       0.00        0           0            0            0
vol2     0.00       0.00        0           0            0            0
(total)  0.00       0.00        0           0            0            0
Name     B/s(read)  B/s(write)  op/s(read)  op/s(write)  us/op(read)  us/op(write)
vol1     0.00       0.00        0           0            0            0
vol2     0.00       0.00        0           0            0            0
(total)  0.00       0.00        0           0            0            0
Name     B/s(read)  B/s(write)  op/s(read)  op/s(write)  us/op(read)  us/op(write)
vol1     0.00       0.00        0           0            0            0
vol2     0.00       0.00        0           0            0            0
(total)  0.00       0.00        0           0            0            0

The --interval option specifies the number of seconds between displays of data. The --nrep option specifies the number of displays that the command produces. The default interval is 5 seconds; the default repetition count is 1.

Displays read and write operations and data transfer performance for volumes vol1 and vol2, with totals. Three sets of results are produced (--nrep 3) at intervals of one second (--interval 1).

Using listobj Output
The shell in which the Purity CLI runs supports command substitution, the embedding of a list-type inner command within an outer command, so that the output of the inner command becomes the argument list for the outer one. The listobj subcommands of the purehost, purehgroup, purevol, and purealert commands are included in the CLI primarily for this purpose.

The output of an embedded command becomes input to the command in which it is embedded. This feature can be used, for example, to operate on selected sets of objects, as Example 11.14 illustrates.

Example 11.14. Using the Output of the listobj Subcommand

pureuser@MyFlashArray-ct0> purehost list --connect host1
Name   LUN  Vol   Host Group
host1  1    vol1  -
host1  2    vol2  -
pureuser@MyFlashArray-ct0> puremonitor --vollist vol1,vol2
Name  B/s(read)  B/s(write)  op/s(read)  op/s(write)  us/op(read)  us/op(write)
vol1  0.00       0.00        0           0            0            0
vol2  0.00       0.00        0           0            0            0
pureuser@MyFlashArray-ct0> puremonitor --vollist $(purehost listobj --csv --type vol host1)
Name  B/s(read)  B/s(write)  op/s(read)  op/s(write)  us/op(read)  us/op(write)
vol1  0.00       0.00        0           0            0            0
vol2  0.00       0.00        0           0            0            0

As background, the first command shows that host1 has private connections to vol1 and vol2. The second displays performance information for vol1 and vol2 by specifying their names explicitly. The third displays performance information for the same two volumes by specifying them, in effect, as "the volumes that have connections to host1." (The --csv option is used in the inner command because puremonitor's --vollist option expects a comma-separated list.)

Administrators can also use shell scripting to construct sequences of commands dynamically. Example 11.15 illustrates one use of scripting.

Example 11.15. Shell Scripting with the listobj Subcommand

pureuser@MyFlashArray-ct0> BASENAME=VOL
pureuser@MyFlashArray-ct0> for I in 100 150 200
> do
> purevol create --size $I\g $BASENAME$I\GB
> purevol connect --host host1 $BASENAME$I\GB
> done
Name      Size  Source  Created               Serial
VOL100GB  100G  -       2013-04-12 15:57 PDT  1B4B44F7DEAFEF5000010010
Name      Host Group  Host   LUN
VOL100GB  -           host1  3
Name      Size  Source  Created               Serial
VOL150GB  150G  -       2013-04-12 15:57 PDT  1B4B44F7DEAFEF5000010011
Name      Host Group  Host   LUN
VOL150GB  -           host1  4
Name      Size  Source  Created               Serial
VOL200GB  200G  -       2013-04-12 15:57 PDT  1B4B44F7DEAFEF5000010012
Name      Host Group  Host   LUN
VOL200GB  -           host1  5

The purevol create command within the loop creates three volumes whose names are keyed to their sizes: VOL100GB has a size of 100 gigabytes, and so on. The purevol connect command, also within the loop, connects each newly-created volume to host1 in turn.

Example 11.16 demonstrates another way to use the listobj subcommand to build an argument list.

Example 11.16. Building Arguments with the listobj Subcommand

pureuser@MyFlashArray-ct0> for VOLNAME in $(purehost listobj --type vol host1)
> do
> purevol disconnect --host host1 $VOLNAME
> purevol truncate --size 80g $VOLNAME
> purevol destroy $VOLNAME
> purevol eradicate $VOLNAME
> done
Name      Host Group  Host
VOL100GB  -           host1
Name      Size  Serial
VOL100GB  80G   1B4B44F7DEAFEF5000010015
Name
VOL100GB
Name
VOL100GB
Name      Host Group  Host
VOL150GB  -           host1
Name      Size  Serial
VOL150GB  80G   1B4B44F7DEAFEF5000010016
Name
VOL150GB
Name
VOL150GB
Name      Host Group  Host
VOL200GB  -           host1
Name      Size  Serial
VOL200GB  80G   1B4B44F7DEAFEF5000010017
Name
VOL200GB
Name
VOL200GB

The purehost listobj command creates a whitespace-separated list of volumes that have private connections to host1. The list becomes the set of objects over which the do loop iterates. The four commands within the loop use the volume names to disconnect, truncate, destroy, and eradicate each of the volumes in succession.
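A variation on this pattern filters the list before acting on it. The following sketch destroys only host1's private volumes whose names begin with TEST (a hypothetical naming convention; it assumes standard utilities such as tr and grep are available in the CLI shell):

for VOLNAME in $(purehost listobj --type vol host1 | tr ' ' '\n' | grep '^TEST')
do
    purevol disconnect --host host1 $VOLNAME
    purevol destroy $VOLNAME
done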

Part V. Supplementary Information

Table of Contents
A. Supported Remote Access Packages
B. The Pure Storage Glossary
C. References
D. License and Product Information
    License
    About Panel
    Pure Storage FlashArray Systems and Components
E. Contacting Pure Storage

Appendix A. Supported Remote Access Packages

The following lists identify the remote terminal access packages (for CLI-based administration) and browsers (for GUI-based administration) that have been successfully tested for use with FlashArray systems.

Remote Terminal Access Packages

The CLI has been tested with the following remote access packages.

• ssh (all common UNIX and Linux distributions)

• PuTTY

Browsers

Pure Storage tests the Purity GUI with the two most recent versions of the following browsers.

• Google Chrome

• Mozilla Firefox

• Microsoft Internet Explorer

• Apple Safari

Appendix B. The Pure Storage Glossary

administrator An individual who administers a FlashArray system. Administrators may be authorized to control an array or may be limited to a monitor role, in which they can observe the array's configuration, state, and performance, but cannot alter its operating parameters.

allocation unit The unit in which Purity manages solid state storage. Each allocation unit consists of several consecutively numbered erase blocks on a solid state drive. Purity allocates storage in groups of allocation units on different drives.

array management port An Ethernet interface in each FlashArray controller that provides administrative access to the array from browser or virtual terminal-equipped workstations.

back-end controller interconnect network The dual-path Infiniband network that interconnects FlashArray controllers.

back-end controller interconnect port A FlashArray back-end controller port used for interconnecting with other controllers in the array.

back-end storage interconnection port A Serial Attached SCSI (SAS) port in a FlashArray controller or storage shelf used to connect controllers to storage shelves and the drives within them.

basic object A manageable object in a FlashArray system. Basic objects may be hardware (controllers, storage shelves, drives, front-end ports) or virtual (volumes, hosts, and host groups).

client, client computer Synonym for host.

client block map A set of persistent data structures in which Purity maintains the correspondence between volume block addresses exported to hosts and physical storage locations of data.

command line interface (CLI) The Purity virtual console-based tool used by administrators to monitor and control FlashArray systems. Administrators launch the Purity CLI via a secure shell (ssh) connection (for Windows administrative workstations, typically through a terminal emulator such as PuTTY).

continuous storage reclamation The Purity process that continually frees lightly occupied allocation units by moving data from them to more densely populated ones. Purity reuses reclaimed allocation units to store host-written data and data consolidated from reclaimed storage.

controller The FlashArray component that contains a processor complex, DRAM cache, front-end and back-end ports, and an M-port. The two controllers in a highly-available array cooperate with each other to provide a failure tolerant, high-performing, single-image storage system.

controller pair Synonym for partner controllers.

data reduction Reduction in the size of a set of data by elimination of duplication. Purity reduces blocks of data written by hosts by eliminating repeating patterns, deduplicating entire blocks against each other, and compressing non-duplicate blocks, continuously throughout their lives in an array.

disk block Synonym for sector. The atomic unit in which magnetic disk drives read, write, and ECC-protect data. Analogous to a flash page in a flash solid state drive.

eradication pending state A state in which an administrator places a volume by executing a purevol destroy command against it. Purity does not export volumes in the eradication pending state; they are invisible to most administrative operations. Volumes remain in the eradication pending state for 24 hours, after which Purity eradicates them, deleting their data and freeing the storage capacity they occupy.

erase block The unit of memory in which flash memory erases data prior to overwriting. Each erase block consists of multiple consecutively addressed flash pages.

export (v.) To present one or more objects to clients. A FlashArray system exports disk-like volumes to its clients (host computers).

Fibre Channel front-end port A FlashArray controller front-end port that implements Fibre Channel physical signaling and low-level protocol.

flash page The atomic unit in which data is read from flash memory.

FlashArray A scalable enterprise-class high-performance data storage system based entirely on solid state storage.

front-end host connection port A FlashArray controller port used to connect host(s) to the array, usually via a storage network. In the v3.2 Release, host connection ports implement either the Fibre Channel Protocol or the iSCSI protocol.

graphical user interface (GUI) A user interface to a computer that represents objects and methods graphically rather than with character strings. The Purity GUI runs within the context of a browser.

group object A Purity construct for managing collections of basic objects (hosts and volumes). Administrators can create group objects for management convenience, particularly in arrays that support large numbers of volumes.

high availability The ability of a system to sustain a major component failure and continue to perform its function. Typically achieved by configuring and integrating redundant components and connections and implementing software that detects and isolates failures, “heals” the functions they affect, and assists in repair, replacement, and reassimilation of components.

host A server or desktop computer connected to a FlashArray system via a storage network that makes use of the array's data storage services. Hosts may be physical or virtual, and are identified to Purity by the port names used to connect them to the FlashArray system.

host capacity Synonym for size (q.v.).

host occupancy The amount of host-visible storage occupied by host-written data; the number of unique volume sector addresses written by hosts (and not trimmed) multiplied by the size of a sector (512 bytes).

host port (H-port) An I/O port in a host used to connect to a FlashArray system, usually via a storage network.

host-written data Data written by a host to one or more volume sector addresses in a FlashArray volume.

Infiniband (IB) An interconnect standard created by server manufacturers. Used primarily to connect servers at high speeds over short distances. FlashArrays use Infiniband to interconnect a highly-available array’s controllers.

initial data reduction Purity’s first-pass reduction of data as it enters an array. Initial data reduction includes elimination of patterned blocks and may also include deduplication and compression of non-duplicates.

interposer A device that (a) converts between the SATA protocol native to FlashArray SSDs and the SAS protocol used to communicate with controllers, and (b) provides dual-channel access to a drive. Interposers are mounted in the drive carriers that hold SSDs.

I/O card A PCI-express interface module in a FlashArray controller containing either SAS ports that connect to storage shelves, Infiniband ports used to interconnect controllers, or Fibre Channel or iSCSI ports that provide access to the external environment.

I/O module A FlashArray storage shelf component containing a SAS expander that fans out an incoming SAS bus to the drives within the shelf and to other shelves downstream from it. Each storage shelf contains two I/O modules, providing redundant access paths to the drives within it.

I/O performance density The number of I/O operations per second of which a storage system is capable divided by the effective data storage capacity of the system. For example, a storage system that presents 10 terabytes of usable storage and is capable of 100,000 IOPS has an I/O performance density of 10,000 IOPS per terabyte.

I/O port A connection point (socket) on an I/O card. I/O ports interconnect array components, and provide access to the external environment for I/O. The I/O ports in storage shelves are SAS ports; those in controllers are back-end SAS or Infiniband ports, or Fibre Channel or iSCSI front-end ports.

IOPS I/O operations per second. The preferred measure of I/O performance with random (non-sequential) I/O workloads.

logical block Synonym for sector.

logical block address (LBA) The mechanism by which logical blocks are addressed in read and write commands to disk drives.

logical block number (LBN) Synonym for logical block address.

logical page The atomic unit in which Purity reads data from solid state storage. Consists of one or more flash pages in which data and metadata are stored. Purity computes and stores a checksum in each logical page, and recalculates it when the page is read to detect read errors that are not discovered by drive mechanisms.

logical page checksum A checksum that Purity calculates and appends to each logical page it writes for the purpose of detecting read errors that are not detected by drive ECC mechanisms.

logical unit Synonym for volume.

logical unit number (LUN) The mechanism for addressing a logical unit or volume.

management port (M-port) A FlashArray controller port used for connecting a management workstation to the array, usually via a data center network.

micro-provisioning Purity's advanced implementation of thin provisioning, in which only the storage required to contain reduced data blocks and the metadata that describes them is allocated.

non-volatile random access memory (NVRAM) Randomly addressable memory whose contents survive power outages intact. FlashArray controllers store host-written data temporarily in NVRAMs mounted in storage shelves prior to writing it to solid state storage, so that power outages and system resets do not result in loss.

object database A persistent database used internally by Purity to maintain records related to both the intrinsic and administrator-defined virtual objects in an array.

occupancy A measure of the physical or host-visible storage occupied by host-written data.

page In this guide, a synonym for flash page (q.v.).

partner controllers Two Pure Storage controllers connected to the same (two or more) storage shelves. Partner controllers maintain synchronized copies of each others’ NVRAM contents so that if one of them should fail, its partner can control all drives and export volumes seamlessly to hosts on its front-end host connection ports.

patterned block A block of data received from a host whose contents consist of a repeated data pattern of between 1 and 8 bytes. Purity does not allocate storage for patterned block data, but records their existence in its internal metadata.

physical occupancy The amount of physical storage occupied by host-written data after reduction by Purity.

port An interface between FlashArray components (controllers and storage shelves), or between controllers and a network or host. Array controllers contain front-end ports for connecting to hosts via storage networks, back-end ports that interconnect the controllers in a highly available array, back-end SAS ports that connect controllers to storage shelves, and GbE administrative ports that connect controllers to a network for administration. Storage shelves contain SAS ports that connect to controllers and interconnect storage shelves with each other.

Purity CLI The CLI used to administer a FlashArray system. The Purity CLI runs on an array; it is accessed via a virtual terminal emulator such as PuTTY.

Purity GUI The browser-based graphical user interface used to administer FlashArray systems.

Purity Operating Environment (Purity) (1) The operating system and array logic that runs in each FlashArray controller. (2) The set of all instances of software that run in a particular array's controllers.

RAID-3D FlashArrays’ dynamic multi-level scheme for protecting against data loss due to uncorrectable read errors and device failures. RAID-3D minimizes the impact of read error recovery, and automatically adjusts protection parameters based on the nature of stored data and conditions within an array.

reduction Synonym for data reduction.

reduction ratio Any of several ratios of the amount of physical storage occupied by data and the data’s host-visible size.

sector The common 512-byte unit in which data is written to, read from, and ECC-protected on disk drives. Pure Storage volumes present virtual sectors of storage in which hosts can read and write data.

segment A Purity data structure consisting of groups of allocation units, each on a separate drive. The segment is the unit in which Purity allocates storage capacity for writing data and metadata, as well as the unit of RAID protection; Purity assigns each segment’s RAID protection scheme at allocation time.

Serial-Attached SCSI (SAS) The industry standard 6 Gb/second interconnect technology that connects storage shelves (and the SSDs within them) to controllers in FlashArray systems.

size The storage capacity of a volume as it appears to hosts' storage administration tools.

snapshot A host-accessible virtual image of the contents of a set of data as they appeared at some point in time. Purity snapshots capture virtual images of volume contents, and appear to hosts as read-only volumes.

solid state drive (SSD) A data storage device based on a persistent solid state memory technology such as flash memory. Solid state drives are the physical data storage elements in FlashArray systems.

storage area network (SAN), storage network A network whose primary purpose is to enable communication between hosts and storage devices or systems. In the v3.2 Release, FlashArray systems support Fibre Channel storage networks.

storage shelf The FlashArray component that houses solid state storage. In a highly available array, each storage shelf connects to a pair of controllers to provide redundant access and performance scaling.

thin provisioning A storage virtualization technique in which allocation of physical storage to volume blocks is deferred until a host writes data to the volumes' block addresses.

trim (v.) Deallocation of storage capacity that holds host-written blocks that a host has indicated, via a SCSI command of the same name, are no longer in use.

virtual space The amount of virtual space in a volume or array that is occupied by host-written data.

volume A disk-like random access virtual storage device that a FlashArray system exports to hosts via a logical unit number (LUN). To a host, a FlashArray volume contains a number of 512-byte sectors in which data can be written and from which it can be read.

write amplification The generation of additional I/O internal to a conventional array or flash SSD as a consequence of a single write request. A write to a RAID group in an array that is smaller than the group's data stripe size results in one or more internally generated log-read-modify-write sequences in order to synchronize RAID check data. A write to a flash SSD may require block erasure, resulting in a read of an erase block to preserve existing data, a block erasure, and an overwrite of the block with a combination of new and existing data. Write amplification is functionally invisible to the writer, but takes time and consumes internal bandwidth.

write buffer A DRAM buffer in which Purity stages data for writing to solid state storage. The size of a write buffer is the same as that of a write stripe unit. Purity writes all buffers in a write stripe in sequence.

Appendix C. References

JESD218A

Solid State Drive (SSD) Requirements and Endurance Test Method [http://www.jedec.org/standards-documents/docs/jesd218a].

[Others to be available.]

Appendix D. License and Product Information

License

See purelicense(7).

About Panel
TBS

Pure Storage FlashArray Systems and Components

The following array versions are offered by Pure Storage Inc.

FlashArray 300-Series:

• 1-controller model: FA-310 (1 or 2 storage shelves)

• 2-controller (HA) model: FA-320 (1 or 2 storage shelves)

FlashArray 400-Series:

• 1-controller model: FA-410 (1 or 2 storage shelves)

• 2-controller (HA) model: FA-420 (1 or 2 storage shelves)

Appendix E. Contacting Pure Storage

Pure Storage is always eager to hear your feedback on our products, sales, and service, and generally on anything that interests you about data storage. You can contact us in any of the ways listed below.

Want to visit us?

Our headquarters are located at:

650 Castro Street, Suite 400
Mountain View, CA 94041
Phone: +1-650-290-6088

Want to talk with our sales department at your convenience?

Send us an email containing your contact information and times convenient for you. A Pure Storage sales representative will contact you.

Want to ask questions or make comments via email?

Email your question or comment to [email protected]. Any subject is fair game; we will route your query to the correct Pure Storage department. You will receive a return email with a response to your query.

Want to contact our support organization?

Log in to the Pure Storage Technical Support website [http://support.purestorage.com/], where registered users can view the status of outstanding cases, view case histories, and browse our knowledge base.

Want to visit our website?

Visit the Pure Storage website [http://www.purestorage.com/] to learn more about our company and products.