7/23/2019 E - NetworkAttachedStorage.pdf
1/65
Network Attached Storage
real testing. real data. real results.
Page 3 of 65
Project Title
Table of Contents
1 Project Executive Summary ............................................ 4
2 Introduction ......................................................... 4
3 Project Overview/Description ......................................... 4
4 Project Goals ........................................................ 13
5 Tests and Evaluations ................................................ 13
6 Evaluation Results ................................................... 17
  6.1 Isilon ........................................................... 17
    6.1.1 System deployment, setup and scalability ..................... 22
    6.1.2 Management ................................................... 24
    6.1.3 Operating System and File system ............................. 25
    6.1.4 Security ..................................................... 26
    6.1.5 Reliability .................................................. 26
    6.1.6 Performance .................................................. 26
  6.2 NetApp ........................................................... 32
    6.2.1 System Deployment, Setup and Scalability ..................... 37
    6.2.2 Management ................................................... 39
    6.2.3 Operating System and File system ............................. 40
    6.2.4 Security ..................................................... 41
    6.2.5 Reliability .................................................. 42
    6.2.6 Performance .................................................. 43
  6.3 BlueArc .......................................................... 48
    6.3.1 System Deployment and Setup .................................. 53
    6.3.2 Management ................................................... 55
    6.3.3 Operating System and File system ............................. 57
    6.3.4 Security ..................................................... 57
    6.3.5 Reliability .................................................. 58
    6.3.6 Performance .................................................. 60
7 Analysis of Deviations from Predictions .............................. 63
  7.1 Isilon ........................................................... 63
  7.2 NetApp ........................................................... 63
  7.3 BlueArc .......................................................... 64
8 Conclusions .......................................................... 65
9 Proposed Next Steps .................................................. 65
1 Project Executive Summary
Network Attached Storage (NAS) is now a mainstay in the storage industry, and it is
continuing to grow. NAS has traditionally been used for general office file sharing. However, now that it is becoming more reliable and scalable, it can support many more uses as a primary storage device in an IT enterprise. Some potential benefits of a NAS are a reduction in operational costs vs. traditional servers with attached storage and a mechanism to share the data. These cost reductions are reported in the ease of deployment, ease of management, and the reduced complexity of maintaining a redundant solution.
This project investigated and evaluated three NAS products from different vendors to gain an understanding of NAS functionality, capabilities, and performance; to create a methodology for evaluating NAS products; and to identify key questions to ask NAS vendors.
2 Introduction
Description of the Company/Organization
The Air Force Research Laboratory (AFRL) Major Shared Resource Center (MSRC) is a high performance computing (HPC) facility committed to providing the resources necessary for Department of Defense (DoD) scientists and engineers to complete their research, development, testing and evaluation projects.

Since its inception in 1996 as part of the DoD High Performance Computing Modernization Program (HPCMP), the AFRL MSRC has supported the warfighter by combining powerful computational resources, secure interconnects, and application software with renowned services, expertise and experience.
3 Project Overview/Description
Project Background
The Air Force Research Laboratory Major Shared Resource Center (AFRL MSRC) provides centralized file sharing capabilities for home directories and some application programs. Known as the High Availability File Server (HAFS), this system is a combination of hardware and software provided by multiple vendors, and it is architected to be redundant. The HAFS will be four years old in FY2008.
The architecture utilizes two Sun V480 servers, an STK D280 Disk Subsystem, Veritas Cluster Server software, and multiple fibre and copper gigabit Ethernet (GigE) connections for Network File System (NFS) mounts to the High Performance Computing (HPC) platforms, user workstations, and other infrastructure systems at the AFRL MSRC. The systems operate in an Active-Passive mode, where half the NFS exports are active on one host, and the other half are active on the other. The HAFS utilizes Sun's QFS file system for user data. Figure 3-1 below conceptually illustrates the HAFS.
Figure 3-1 - HAFS Conceptual Diagram
Sites and Equipment Involved

The DICE site located at the AFRL MSRC at Wright-Patterson AFB in Dayton, OH was used for this project. Each evaluation utilized the equipment located at this facility and additional equipment provided by the vendor. The architectures for each evaluation and all utilized equipment are described below:
Isilon
Isilon Systems Inc. provided a three node IQ3000i cluster to evaluate. Isilon provided the hardware and software shown below in table 3-1. The Data Intensive Computing Environment (DICE) program provided the DICE cluster Spirit (a 32 node Dell Linux Cluster) along with other ancillary nodes within DICE at the AFRL MSRC to conduct the evaluation. Each Isilon IQ3000i node has two 1Gbps copper Ethernet ports. Only one Isilon network connection can be active to a particular LAN network segment. Both Ethernet connections can be connected to different networks and be active at the same time. In our evaluation, we connected one Ethernet port on each Isilon node to the DICE Spirit cluster's internal Netgear network switch and one Isilon Ethernet port on each node to the Force10 switch in DICE. Each Isilon cluster node was connected, by a single InfiniBand cable, to a port on the 8 port InfiniBand switch. The purpose for the InfiniBand connections is inter-cluster communication and data movement between the nodes. Refer to figures 3-2 and 3-3 below for a conceptual diagram of the cluster setup and of the rear communication connections on an Isilon node. In a production environment, it would be advantageous to have two InfiniBand switches with a connection from each node to each switch for redundancy. The Isilon nodes had 3.2GHz Intel Xeon processors. The Isilon software version loaded on the nodes was 4.5.1.
Qty  Part Number               Description
3    OneFS 3000                OneFS Software License: Isilon IQ System Software; includes all items below:
     OneFS-AB                  AutoBalance Software: Automatically balances data onto new Isilon IQ nodes
     OneFS-NFS                 NFS Protocol: NFS file serving protocol for Unix/Linux clients
     OneFS-CIFS                CIFS Protocol: CIFS file serving protocol for MS Windows clients
     OneFS-HTTP                HTTP/WebDAV Protocol: Reading and writing via HTTP/WebDAV file serving protocol
     OneFS-FTP                 FTP Protocol: FTP file serving protocol
     OneFS-NDMP                NDMP Protocol: Backup and restore via NDMP protocol
     OneFS-CMI                 Cluster Management Interface: Web GUI and command line cluster management
3    Enterprise-App-BUNDLE-#1  Single node license for both SnapshotIQ & SmartConnect Advanced (Enterprise Edition)
3    IQ3000i                   IQ 3000i Node: 2U node with 3.52TB capacity, InfiniBand enabled; quantities included:
                               36 x 300 GB SATA-2 disk drives
                               10.56 total raw TBs of capacity
                               3 x 3.2 GHz Intel Xeon processors
                               6 x Gigabit Ethernet front-side connectivity ports
                               6 x InfiniBand intracluster HCA connectivity ports
1    IQSwitch-Flextronics-8    Flextronics InfiniBand 8-port Switch
3    IB-Cable-3                InfiniBand cable, 3 meters in length (minimum 1 per node)
Table 3-1 - Isilon Equipment List
Figure 3-2 - Conceptual Diagram of the Isilon Evaluation Setup
Figure 3-3 - Isilon Node (Front and Rear Views)
NetApp
Network Appliance Inc. (NetApp) provided a two node FAS3070 GX cluster, with 12TB of raw disk space, and the Data ONTAP GX 10.0.2 operating system for the NAS evaluation. Table 3-2 below shows the parts list for the supplied system. The Data Intensive Computing Environment (DICE) program provided the DICE high performance computing (HPC) cluster Spirit (a 32 node Dell Linux Cluster), along with other ancillary nodes within DICE at the AFRL MSRC, to conduct the evaluation.
Each NetApp FAS3070 node had four internal 1Gbps copper Ethernet ports and a PCIe adapter card with four additional 1Gbps copper Ethernet ports. All ports on the NetApp node can be active and provide redundant failover capabilities to other ports. The eight Ethernet ports on each node were configured as follows:

Three ports were configured to provide the data path for NFS traffic from our test cluster. Two of these ports were connected into the internal 1Gbps switch in the HPC cluster. One port was connected to the Force10 E300, in order to provide access to ancillary nodes outside the HPC cluster.

Four ports on each node were used for inter-cluster communication. NetApp said that, as a rule of thumb, the number of inter-cluster communication ports should be equal to the total number of user accessed data ports plus one.

One port on each node was configured for out of band NetApp cluster management.
Refer to figures 3-4 and 3-5 below for a conceptual diagram of the cluster setup and of the rear communication connections on the NetApp nodes.
For storage, NetApp provided six disk trays. Each tray had fourteen 144GB 15K fibre channel disk drives. Each tray had redundant fibre channel connections that provided data access from both FAS3070 nodes.
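As a quick cross-check (ours, not the vendor's), the quoted 12TB of raw disk space is consistent with the tray configuration above:

```shell
# Raw capacity check for the NetApp configuration:
# 6 trays x 14 drives per tray x 144 GB per drive
echo $(( 6 * 14 * 144 ))   # 12096 GB, i.e. roughly the quoted 12TB raw
```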
Item Description Qty
FAS3070-R5 FAS3070 Model, R5 1
FAS3070GX-BNDL-R5 FAS3070GX, OS, SFO, NFS, CIFS, FV-HPO, R5 2
X1941A-R6-C Cable, Cluster 4X, Copper, 5M, -C, R6 2
X6524-R6-C Cable, Optical, Pair, LC/LC, 2M, -C, R6 4
X6530-R6-C Cable, patch, FC SFP to SFP, 0.5M, -C, R6 8
X1049A-R6-C NIC, 4-Port, GbE, Copper, PCIe, -C, R6 2
LOOPS Storage Loops Attached Quantity 2
NAS All down port initiators 1
X5517-R6-C Storage Cabinet, Rail Set, 42U, -C, R6 1
X8773-R6-C Mounting Bracket, Tie-down, Multiple, -C, R6 2
X6539-R6-C SFP, Optical, 4.25Gb, -C, R6 8
MULTIPATH-C Multi-path configuration 1
SW-ONTAPGX-3XXX Data ONTAP GX, FAS3XXX 2
X74015B-ESH4-R5-C DS14MK4 SHLF, AC, 14x144GB, 15K, B, ESH4, -C, R5 6
DOC-3XXX-C Documents 3XXX, -C 1
X8730A-R6-C Storage Equipment Cabinet, NEMA 30A, -C, R6 1
X800-42U-R6-C Cabinet Component Power Cable, -C, R6 1
SW-GX-LICENSEMGR Data ONTAP GX License Manager 2
SW-GX-BASE GX BASE, NFS, SFO Node License 2
SW-GX-CIFS GX CIFS, Node License 2
SW-GX-FV-HPO GX FlexVol HPO, Node License 2
SW-T5C-GX-B-BASE Data ONTAP GX, B, Node License, T5C 2
SW-T5C-GX-B-CIFS GX CIFS, B, Node License, T5C 2
SW-T5C-GX-B-FV-HPO GX FlexVol HPO, B, Node License, T5C 2
Table 3-2 - NetApp Equipment List
Figure 3-4 - Conceptual Diagram of the NetApp Dual FAS3070 GX Evaluation Setup
Figure 3-5 - NetApp FAS3070 Node (Rear View)
BlueArc
BlueArc provided a two-node Titan 2200 NAS with 28TB of raw disk space and version 5.1.1156.06 operating system for the NAS evaluation. Table 3-3 below shows the parts list for the supplied system. The Data Intensive Computing Environment (DICE) program provided the high performance cluster Spirit (a 32 node Dell Linux Cluster), along with other ancillary nodes within DICE at the AFRL MSRC, to conduct the evaluation.
Each Titan 2200 node had six 1Gbps copper Ethernet ports. All ports on the BlueArc node can be active and provide redundant failover capabilities to other ports. The six Ethernet ports on each node were configured as follows:
Four ports were configured to provide the data path for NFS traffic from the testcluster.
One port was connected to the Force10 E300 to allow ancillary nodes outside the HPC cluster's network to access the NFS file systems of the NAS.
Refer to figures 3-6 and 3-7 below for a conceptual diagram of the cluster setup and an image of the rear communication connections on the BlueArc Titan nodes.
For storage, BlueArc provided six disk trays with two disk controllers. Each tray had sixteen 300GB 10K RPM fibre channel disk drives and redundant fibre channel connections that provided data access from both Titan 2200 nodes.
Product  Part Number  Quantity
KIT: Titan Server Model 2500 SX345164 2
Ethernet Cable - 1.0m (3ft) - RJ45 SX222063 1
Ethernet Cable - 1.5m (4.5ft) - RJ45 SX222068 5
Ethernet Cable - 2.0m (6ft) - RJ45 SX222064 4
KIT: Ethernet Switch - 16-port 10/100 Mbps SX345111 1
KIT: SMU Model 200 - Titan SX345204 1
Titan Software License - 1TB Storage Capacity SX435003 1,024
Titan Software License - Cluster Name Space SX530135 2
Titan Software License - Cluster Node SX530082 2
Titan Software License -- File System Rollback SX530138 2
Titan Software License - iSCSI SX530111 2
Titan Software License -- Quick Snapshot Restore SX530137 2
Titan Software License -- Storage Pools SX530136 2
Titan Software License - Unix (NFS) SX530084 2
Titan Software License - Virtual Server Security SX435005 2
Titan Software License - Windows (CIFS) SX530085 2
KIT: DS16EXP Storage Expansion SX345123 4
KIT: RC16TB Storage Controller - 4 Gbps Dual RAID Controller SX345122 2
RC16 Disk Drive Module - FC 300GB 10,000 rpm, 2 Gbps (16pk) SX345147 6
FC Cable (Fiber) - 0.5m (1.5ft) - Multimode Fiber Optic LC Connectors SX222042 14
FC Cable (Fiber) - 1.0m (3ft) - Multimode Fiber Optic LC Connectors SX222043 2
FC Cable (Fiber) - 1.5m (4.5ft) - Multimode Fiber Optic LC Connectors SX222076 1
FC Cable (Fiber) - 2.0m (6ft) - Multimode Fiber Optic LC Connectors SX222044 1
KIT: FC Switch - 4 Gbps 16-port with SFPs, Full Fabric (Brocade200E) SX345120 2
SFP - 1000BaseT Copper SX350004 12
SFP - Multi-mode Fiber - 4 Gbps SX350010 8
XFP - Multi-mode Fiber - 10 Gbps SX350011 4
KIT: APC Storage System Cabinet - 42U 1070mm 208V (US) SX345102 1
Power Cable - 208/220V, 10A, 2m (6ft) SX222020 6
Power Cable Dual "Y" - 208/220V, 10A, 2m (6ft) SX222023 12
Documentation Wallet - Titan (US Sales) SX540010 2
Serial Cable - 2m (6ft) RS232 DE9-Mini Din6 (Engenio) SX222072 1
System Cabinet Layout, Option C - Dual Titan SX542005 1
Table 3-3 - BlueArc Equipment List
Figure 3-6 - Conceptual Diagram of the BlueArc Titan 2200 Evaluation Setup
Figure 3-7 - BlueArc Titan 2200 Node (Rear View)
4 Project Goals
This project will help understand and identify the performance and functionality capabilities of network attached storage. The primary goal is to create a detailed and systematic methodology for evaluating NAS devices by examining some of the NAS technologies available today. The criteria for the methodology and initial evaluations include, but are not limited to: performance, reliability, security, data access, management, scalability, quotas, and support.

This project will also develop detailed questions HPC centers should ask NAS vendors about their technology that go beyond just marketing material.
5 Tests and Evaluations
A. Methodology and Vendor Questions
Important keys to understanding and evaluating the actual functionality, capabilities, and performance characteristics of NAS storage equipment are a detailed methodology and a list of important questions to the vendors that go beyond the marketing and datasheet publications. A methodology includes the initial investigation of technologies, development and execution of a test plan, and follow-up discussions with the vendors once the testing is completed. The following outline shows the methodology followed for this evaluation:
1. Investigation
   a. Vendor Briefings on NAS technologies
      i. Description of the needs of the HPC center
      ii. Presentations on Functionality, Performance, and Roadmap
      iii. In depth discussion of the product concerning the following topics:
         1. General Architecture and Features
         2. Performance
         3. Reliability and Availability
         4. Connectivity and Security
         5. NAS Clients: Number and Configuration
         6. Backup Support
         7. Capacity Planning and Performance Analysis
         8. Automation
         9. Storage Management
         10. Diagnostic Tools
         11. Support and Service Offerings
2. Install Equipment and Execute Test Plan
   a. Install equipment into a test environment that reflects requirements
   b. Execute test plan and record results
   c. The test plan used for this evaluation is shown below:
DICE NAS Test Plan
A. System Deployment and Initial Configuration
   a. Deploy the system to be available through 4 copper Ethernet interfaces to the HPC cluster's internal network.
   b. Evaluate the initial setup and configuration that is delivered and recommended with the solution.
   c. How long did it take?
   d. Were there any major difficulties?
   e. Are there any other ways to set up (i.e. automatic discovery, manual, etc.)?
   f. Document and diagram the system setup.
B. Finalize the general evaluation criteria of the NAS system questions.
C. Evaluate the security of the system
   a. Identify NAS security settings by exploring the system GUI/CLI and manuals, and look for security bulletins on the NAS vendor website and other security related websites.
   b. Remove any unnecessary network services from the NAS.
   c. Can NAS administration privileged functionality be isolated from the general LAN user and be placed behind a dedicated management network?
   d. Have the local security team perform a network based security scan of the NAS devices and document the results.
   e. Determine if the NAS can work within the current AFRL MSRC security environment.
   f. Verify the ability to configure file system directory and file access permissions using UNIX style security.
D. Manage storage within the NAS
E. Determine and document the methods of adding volumes and file systems with the raw storage within the NAS.
F. Document the methods and types of file systems that can be created.
G. Document the redundancy afforded by the NAS.
H. Setup file systems in a manner that would mimic that of the current HAFS in terms of name, directory naming, etc.
I. Setup Users
   a. Setup users and home directories to mimic normal HAFS users.
   b. Verify the ability to setup quotas for the users or directories via a hard quota type mechanism.
   c. Verify the ability to specify UNIX UIDs and group GIDs for a user.
J. Export the home file systems and mount the file system to the DICE cluster nodes.
   a. Export and mount the file systems using mount commands similar to those that would be used on the HAFS.
   b. Document the results.
      1. Export:
      2. share -F nfs -o nosub,nosuid,root=ss1.asc.hpc.mil:ss2.asc.hpc.mil,rw=@10.10.130:@10.10.132:@10.10.134:@10.10.138:@10.10.140:@10.10.142:@10.10.148 -d "HAFS" /qfx1/hafx1
      3. Mount:
      4. fx1v-e2:/qfx1/hafx1 /hafx1 nfs rw,nodev,nosuid,timeo=70,retrans=30,nfsvers=3,bg,intr,tcp,rsize=8192,wsize=8192,soft
K. Test that the file system is accessible through any interface on the system to ensure that the file system is a true global file system. Any file should be accessible through any NAS interface on any NAS head. Document the results.
L. Add additional network connectivity
M. Add an additional network to the NAS system by integrating access from the Force10 switch. Test this connectivity through ancillary nodes in the DICE environment, and the Spirit cluster's Login and Admin nodes.
N. Manage storage within the NAS
O. Determine and document the methods of adding volumes and file systems with the raw storage within the NAS.
P. Document the methods and types of file systems that can be created.
Q. Add capacity to an existing file system.
R. Decrease capacity of an existing file system.
S. Performance Testing
i. IOZone and/or Xdd from cluster nodes
T. Single stream write throughput tests of large file ~10GB
U. Single stream read throughput tests of large file ~10GB
V. Iterate from 1 to 32 simultaneous write throughput tests of large file ~10GB on each node through a round-robin use of the network interfaces into the NAS.
W. Iterate from 1 to 32 simultaneous read throughput tests of large file ~10GB on each node through a round-robin use of the network interfaces into the NAS.
X. Single stream write throughput tests of small file ~1MB
Y. Single stream read throughput tests of small file ~1MB
Z. Iterate from 1 to 32 simultaneous write throughput tests of small file ~1MB on each node through a round-robin use of the network interfaces into the NAS.
AA. Iterate from 1 to 32 simultaneous read throughput tests of small file ~1MB on each node through a round-robin use of the network interfaces into the NAS.
BB. Repeat simultaneous tests with as many tests from User LAN nodes in addition to the tests running on the cluster nodes.
CC. Fault Testing
DD. Disconnect a network cable while it is being utilized for NFS file copies. Verify the copy continues on an alternate network adapter.
EE. Fail a disk drive and reintegrate it to verify no loss of data, and benchmark rebuild times.
FF. Disconnect an internal fibre loop cable while it is being utilized for NFS file copies. Verify the copy continues on the alternate fibre loop.
GG. Fail a NAS head while it is being utilized for NFS file copies. Verify the copy continues on the alternate NAS head.
HH. Remove a power supply while it is being utilized for NFS file copies. Verify the NAS head stays available and the copy continues.
II. Document the redundancy afforded by the NAS.
JJ. Load Balancing and Channel Bonding
KK. Evaluate the available load balancing options with the NAS and, if applicable, test the methods.
LL. Evaluate the availability of channel bonding to aggregate throughput. If available, evaluate with the performance tests above.
MM. Management Tools
NN. Setup and test email alert functionality.
OO. Evaluate system logging and troubleshooting tools incorporated in the NAS.
PP. Evaluate and document the available management tools (GUI, CLI) for the NAS. Evaluate for ease of use, simplicity and functionality.
QQ. Evaluate alert report capabilities and troubleshooting.
RR. Evaluate the performance monitoring capabilities.
SS. Evaluate the documentation and support web site available.
TT. Evaluate software patching and upgrades.
UU. Miscellaneous
VV. Verify that jumbo frames up to 9000 bytes are available.
WW. Verify the NAS supports IPv6.
3. Document Results
   a. Record all results from evaluations
   b. Note deviations from anticipated results
4. Follow up with Vendor
   a. Discuss all results with the vendor
   b. Address questions, concerns, and any recommendations
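As a minimal sketch of the single-stream large-file throughput tests in the plan above (items T and U), a timed write against the NFS-mounted file system gives a rough baseline. The evaluation itself used IOZone and/or Xdd; `dd` stands in here as a simple, self-contained example, and the target directory and file size are placeholders, not values from the report:

```shell
#!/bin/sh
# Rough single-stream write-throughput probe; a stand-in sketch for the
# IOZone/Xdd runs in the plan. In the evaluation, TARGET would be a
# directory on the NFS-mounted NAS file system (e.g. /hafx1) and
# SIZE_MB=10240 for the ~10GB case.
TARGET=${TARGET:-/tmp/nas_probe}
SIZE_MB=${SIZE_MB:-32}
mkdir -p "$TARGET"

start=$(date +%s)
# Write SIZE_MB megabytes in 1MB blocks and flush to stable storage.
dd if=/dev/zero of="$TARGET/stream.dat" bs=1048576 count="$SIZE_MB" conv=fsync 2>/dev/null
end=$(date +%s)
elapsed=$(( end - start ))
[ "$elapsed" -eq 0 ] && elapsed=1   # avoid divide-by-zero on fast local runs

echo "wrote ${SIZE_MB} MB in ${elapsed}s (~$(( SIZE_MB / elapsed )) MB/s)"
rm -f "$TARGET/stream.dat"
```

The simultaneous-stream cases (items V, W, Z, AA) would launch 1 to 32 copies of a probe like this across the cluster nodes, rotating each stream across the NAS network interfaces round-robin.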
6 Evaluation Results
Each of the vendors was provided a 45-60 day window for evaluation of their equipment in the DICE environment. The following information contains the answers to the vendor questions and results from the test plan.
EXPERIMENTS
6.1 Isilon
General
What are the names and model numbers of the storage products? Isilon IQ 1920, 3000 & 6000
What is the general architecture of the NAS? Clustered solution with 3-32 nodes with a global name space across all nodes, active/active
What types of drives are supported by the NAS? (SATA/SAS/FC) SATA
Can the NAS be clustered? Solution is only available in a cluster of 3-32 nodes.
Is there a global file system within the NAS (and, if appropriate, the cluster)? Yes, global FS
Are there any specialty nodes or components available? (accelerators) Accelerator nodes available
What are the optional software features of the NAS?
   SmartConnect - connection load distribution
   SyncIQ - fast replication of files across multiple Isilon IQ clusters over WAN and LAN
   SnapshotIQ - snapshot technology
Performance
What drive sizes and speeds are available in the NAS? 3.5" SATA-2 in 160GB, 300GB and 500GB
What are the throughput capabilities for reads and writes in both Megabyte/s and IO/s? See figure 5-1 below the table
What is the maximum throughput attainable? What is the basis for the numbers? See figure 5-1 below the table
What are the throughput capabilities per node/system? See figure 5-1 below the table
Are there benchmark numbers available? None listed on spec.org or the Storage Performance Council.
Scalability
What is the maximum amount of storage per system? 576TB using a 96-node IQ6000 system
What is the minimum amount of storage per system? Minimum is a 3-node IQ1920 cluster with 5.7TB.
Is clustering available and what type? (active/active, active/passive, etc.) It is a clustered solution. Each node comes with 12 hot swappable drives at the three different sizes depending on the model purchased.
What is the maximum usable capacity (in gigabytes; GB) for a single storage array? NA
Can new disks be added on the fly, or new nodes? Yes, by adding new nodes with 12 disks.
Reliability and Availability
Are there any single points of failure inside the NAS? No, when implemented with two backend switches, a minimum 3-node cluster, and one of the data protection schemes enabled.
How many global spares does your array support? No spares in system
Does the NAS support remote copy? Yes, with optional SyncIQ software
What redundancy is built into the NAS? Multiple nodes, with multiple power supplies and multiple I/O ports
What protects data in cache on a power failure? How does the feature work? NA
What type of RAID or other disk failure protection does your NAS support? FlexProtect-AP file-level striping with support for N+1, N+2, N+3, N+4 and mirroring data protection schemes.
Any other protection mechanisms available for the data? No
If a clustered solution, what is the maximum number of nodes? 96 nodes
Is there port failover for external ports? Yes
Connectivity and Security
External port speeds available? (10/100/1000/10000) 1GigE
Support jumbo frames? (9000 byte) 9000 bytes
Number of external ports available? (2) 1GigE; they cannot be used simultaneously on the same subnet.
Are fiber and/or copper ports for external connections available? Copper only
What is the backend architecture? 1GigE Ethernet or InfiniBand
What NFS versions are supported? NFS v3 (UDP or TCP)
Other than NFS, what other connectivity options are there? CIFS, FTP, HTTP reads/writes
If a clustered solution, is there load balancing between nodes? How is it determined? Yes, SmartConnect Basic is round-robin. SmartConnect Advanced supplies other algorithms such as by CPU utilization and throughput.
Support dual stack IPv4 and IPv6? No, just IPv4
Can external ports be connected to different subnets? Yes
What operating systems are supported? Microsoft Windows, Linux, UNIX, Apple Macintosh
Is there an operating system embedded in your storage array? Yes
Type of OS/Flavor? FreeBSD
Root access? Yes
How are vulnerabilities addressed? NA
Have the products been Evaluation Assurance Level (EAL) evaluated, and what level have they achieved? Isilon is currently with an approved CCE lab and working on an EAL2 Security Target.
What services and ports are used, and which ones are required? NFS, FTP, SSH, Web (port 80); any and all can be disabled.
How are client users authenticated? UserID and password
Is there third party security supported, such as SecurID, Kerberos or a combination of both? Support for Microsoft Active Directory Services (ADS), Network Information Service (NIS) and Lightweight Directory Access Protocol (LDAP)
Does the NAS support the following NFS export options: share -F nfs -o nosub,nosuid,root=ss1.asc.hpc.mil:ss2.asc.hpc.mil,rw=@10.10.130:@10.10.132:@10.10.134:@10.10.138:@10.10.140:@10.10.142:@10.10.148 -d "HAFS" /qfx1/hafx1? No
NAS Clients
What clients are supported? NFS clients or CIFS clients
Any specialty client connectivity software supported? No
Support the following client mount options: fx1v-e2:/qfx1/hafx1 /hafx1 nfs rw,nodev,nosuid,timeo=70,retrans=30,nfsvers=3,bg,intr,tcp,rsize=8192,wsize=8192,soft? Yes
Does the NAS have its own unique file system? Yes, it is called OneFS.
Backups
What backup methods does the NAS support? NDMP
What backup software is supported? NDMP V3 and V4 compliant software
Is there TAR or some other built in backup tool supplied? Tar is there, but it cannot do incrementals.
Capacity Planning and Performance Analysis
What tools are available to determine future capacity needs? NA
Does the NAS have the capability to add capacity on demand? Yes, add a node.
Are there tools to see how the NAS is performing? Historical collection? Yes; history is limited to two weeks.
Automation
Are there tools to automate tasks? Limited to certain tasks
Storage Management
What software and/or hardware is required to deploy the NAS? DNS to configure SmartConnect.
Does the NAS use UNIX UIDs? UNIX group IDs? Yes
Can you import an existing UNIX user file? Not with a command; may be able to merge with our file
Can you import an existing UNIX group file? Not with a command; may be able to merge with our file
Support SSH connectivity for management? Yes
Support Kerberos? No
What connectivity options are there to manage the NAS? Shared Ethernet, serial port
Is there a command line interface (CLI) into the array for management and configuration? Yes
Is there a GUI interface into the array for management and configuration? Yes
If a GUI is provided, is it Web, Java, or X based, and what are the client requirements for accessing? Web with some Java applets
What type of monitoring tools do you have?
GUI provides management and monitoring
capabilities.Does your software work with other networkmanagement frameworks? Nagios, Openview, etc? No
What types of reports can I get from yourmanagement tools? Performance
What documentation is provided for the hardware andsoftware for configuration and management? Web site and pdf documents
Does the NAS Support Quotas? Hard? Soft? Soft only
Can a user view their quota from the client? No
Diagnostic and Debugging Tools
Does the NAS support customer self-diagnostics? Yes, there are logs, status GUI screens, and CLI commands that help identify problems.
How are notifications sent? Email or home phone
Do you have any troubleshooting or diagnostic tools? Built into the GUI
What documentation is provided for the hardware and software troubleshooting? The Isilon User's Guide has a section on system maintenance.
Service and Support
What is the warranty provided for hardware and software? Must purchase a support agreement with the system buy.
How many service employees do you have? Isilon currently has 30 engineers assigned across 3 support tiers. Isilon also has 30 Field System Engineers supporting our customer base directly.
Who provides the field service? Isilon employs a host of options for handling IC deployments. We have programs that allow us to develop the right professional services and support model to accommodate a wide variety of geographies worldwide. If a specific integrator or service provider has been selected by a given IC agency, training and certification would take 2 days.
Do local field service technicians have security clearances? Isilon employs a number of subcontractors that have cleared personnel to support our maintenance efforts.
Where do you stock spare parts? Local UPS depots
Do you stock onsite spare parts? If yes, list how many spares of each type. Normally, Isilon recommends customers purchase and stock one spares kit for every 5 nodes in the cluster. Currently, customer spares kits are drives only.
What are the standard terms of service? Due to the sensitivity and importance of the information stored on the Isilon, each system purchased is required to have a support contract. The Platinum, Gold, or Silver level would be selected, along with the length of support desired: 1, 2, 3, 4, or 5 year contracts.
Platinum - 7x24x365 phone support; same day parts; software updates
Gold - 7x24x365 phone support; next business day parts; software updates
Silver - 5x8 business day phone support; next day parts
If an offering outside of these is desired, custom support plans can be developed.
What is the onsite response time if I have a problem? Isilon Systems offers a complete array of support options that can be tailored to the specific mission requirement. Platinum Support provides same day support, while Gold Support provides next business day support.
Platinum and Gold Support plans provide the following services:
Telephone and e-mail support (24x7x365)
Web-based knowledge base
Email home for critical alerts
Software upgrades
On-site support
Spare parts support
Gold Support provides next business day parts delivery and on-site support. Platinum Support offers customers 4-hour parts delivery (24x7) and on-site support.
How often is software and firmware updated, and who performs the upgrades? Quarterly maintenance releases. We have 1-2 major OS releases per year.
Do software and firmware updates require an outage? Generally, maintenance upgrades do not. Major OS upgrades require a system reboot.
Cost
What are the ongoing costs over the life of your solution? Annual maintenance costs.
Can you provide Department of Defense (DoD) references? Isilon is deployed in a variety of Uniformed Services and Intelligence Agencies including: Asia Pacific Security Studies, PACOM, Army Intelligence, DIA, NGA, Army Corps of Engineers, Navy, and OSD.
Do you support non-recovery of failed drives due to DoD security regulations, and if so, is there any additional maintenance cost? Isilon has specific support part numbers that allow agencies to keep drives and destroy them due to the presence of sensitive data. As with other manufacturers, Isilon does charge additional fees for the non-return option and is competitive from a pricing perspective.
What is the pricing for optional software components? NA
Vendor Information
Who manufactures the storage array of the NAS? Isilon
Number of Storage Units shipped? Approx 8,000 nodes shipped
Number of deployed sites? >500
Table 6-1
Figure 6-1. Isilon's Published Performance Data
6.1.1 System deployment, setup and scalability
The 3-node IQ3000i cluster was very easy to deploy into the DICE environment at the AFRL MSRC. All the disks are located in each node, so the footprint is very small. Each node was 2U high and contained twelve 300GB SATA drives. We needed 6U of cabinet rack space for the 3 nodes and 1U of cabinet space for the InfiniBand switch. Each node required two 220V power sources for the redundant power supplies, which we supplied from the Dell cabinet's PDUs.
The initial setup of the nodes was performed 3 different ways to show the flexibility of the systems. The 1st node was attached to the Netgear switch and made available to the cluster via configuration through its front LCD panel. The 1st node was available to the cluster for NFS mounts within 5 minutes of setting the initial configuration. The 2nd node was powered up, connected to the Netgear switch, and then discovered, configured, and brought online through the CLI of the 1st node. The 3rd node was powered up, connected to the Netgear switch, and then discovered, configured, and brought online through the GUI of the 1st node.
During initial setup and post-setup evaluation testing, all configuration of the system was required to be done by the root user. The Isilon system is based on the FreeBSD operating system, which is a UNIX derivative.
The Isilon cluster includes a software feature called SmartConnect Basic. SmartConnect Basic provides the ability to round-robin the initial inbound Ethernet connections into the cluster. This is done in conjunction with settings on the DNS server and the SmartConnect software on the Isilon cluster. The DNS server is set up to connect to a virtual SmartConnect IP address on the Isilon cluster, query for a connection to use, and relay the information to the client wishing to connect. The initial setup of SmartConnect was very challenging in a Linux-based DNS environment, because the documentation in the user's manual was geared toward Microsoft Windows environments. Through phone conversations with the Isilon support staff, a delegation zone was established on the DNS within the DICE cluster. In addition, we discovered a myriad of helpful how-to and best practices documents on Isilon's support website.
Included with the Isilon cluster was an evaluation of SmartConnect Advanced. The Advanced version of SmartConnect provides other algorithms for load balancing the inbound Ethernet connections into the Isilon cluster. In addition to Round-Robin, the other algorithms are:
CPU Utilization
Aggregate Throughput
Connection Count
The only algorithms tested in our evaluation were Round-Robin and Aggregate Throughput, with the latter predominating.
Additionally, multiple inbound Ethernet connections from multiple nodes can be zoned within the cluster, and each zone can use a different connection algorithm. We tested this functionality by establishing another zone within the Isilon cluster that utilized the 2nd Ethernet port on each node and connected them to the Force10 switch on the user network within DICE. This does raise the question of a potential conflict between two different algorithms running on the same node, such as CPU Utilization and Connection Count balancing. Due to time constraints, we were unable to explore this in detail.
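The connection-balancing policies described above can be illustrated with a small simulation. This is a hypothetical sketch of the general technique, not Isilon's implementation; the node names and metric values are made up.

```python
# Illustrative sketch (not Isilon code): how a SmartConnect-style DNS
# responder might pick which node answers the next client connection.
import itertools

nodes = ["node1", "node2", "node3"]

def round_robin():
    """Cycle through nodes in order, as the Round-Robin policy does."""
    return itertools.cycle(nodes)

def pick_by_metric(metrics):
    """Pick the node with the lowest value of some load metric
    (e.g. connection count or aggregate throughput)."""
    return min(metrics, key=metrics.get)

# Round-robin: successive DNS queries rotate through the pool.
rr = round_robin()
print([next(rr) for _ in range(4)])  # node1, node2, node3, node1

# Metric-based: the least-loaded node answers the next query.
conns = {"node1": 12, "node2": 7, "node3": 9}
print(pick_by_metric(conns))  # node2
```

Either way, the balancing decision is made once per DNS lookup, at connection time; an established NFS mount stays on the node it was handed.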
6.1.2 Management
Isilon has both a web-based Graphical User Interface (GUI) and an SSH-accessible Command Line Interface (CLI). The web-based GUI is very easy to use and fairly well organized. One item to note: to change some system settings, you are sometimes required to traverse many dialogue pages to reach the page requiring the change.
If a system administrator wants to access the GUI or the CLI via a LAN connection, the connection must be made through one of the two Ethernet front-end ports that normal user traffic would flow across. A front-end port could be dedicated as a management port, but that takes away from the available user ports. The web GUI and SSH access can be disabled or enabled only as a whole, on all Ethernet ports, not on a port-by-port basis.
The GUI is used for both configuration and monitoring. You can configure the cluster and any node from the GUI. There are tools to configure and automatically initiate Network Data Management Protocol (NDMP) v3 or v4 based network backups. There are alert configuration and monitoring tools, and there are both real-time and historical performance reporting tools.
The CLI allows both custom Isilon commands and standard BSD UNIX commands to be executed. All Isilon commands start with isi. Additionally, there is a third way to interface with the system, by way of the LCD front panel on each Isilon node. There is functionality in the LCD panel to monitor, configure, start, and stop the node.
The user's guide for the Isilon system has very limited documentation on using the CLI or the node's front LCD panel; the manual is very GUI oriented. The user's guide provides the list of configuration options for all the fields you will encounter, but it does a below-average job of explaining why you would choose a particular option over another.
The Isilon system had quota management, but only soft quotas. That is, if you put a quota on a user, the user only received messages that they had exceeded their quota if they were logged onto the Isilon cluster itself; they would never see such a message on the client side. Since it is a soft quota, the user could continue to grow their data past the soft limit until the limit of the cluster disk space was reached. Additionally, there is no mechanism on the system to restrict the size growth of directories.
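The practical difference between soft and hard quotas can be sketched as follows. This is an illustrative model of the semantics, not Isilon code; the function and units are hypothetical.

```python
# Illustrative sketch: with only soft quotas (as on the evaluated
# Isilon system), a write past the limit still succeeds; the user is
# merely flagged for a warning, which here only surfaces if the user
# happens to be logged into the cluster.

def try_write(used_gb, write_gb, soft_gb, hard_gb=None):
    """Return (allowed, warning) for a proposed write."""
    new_usage = used_gb + write_gb
    if hard_gb is not None and new_usage > hard_gb:
        return False, "hard quota exceeded: write rejected"
    warning = "soft quota exceeded" if new_usage > soft_gb else ""
    return True, warning

# Soft quota only: the write is allowed even though the limit is passed.
print(try_write(used_gb=95, write_gb=10, soft_gb=100))
# A hard quota (not available on the evaluated system) would reject it.
print(try_write(used_gb=95, write_gb=10, soft_gb=100, hard_gb=100))
```

The first call returns an allowed write with a warning; only the second, hard-quota variant actually refuses the write, which is the enforcement the evaluated system lacked.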
While trying to resolve a security problem, Isilon wanted us to upgrade the operating system from 4.5.1 to 4.5.3.81. The steps to initiate the upgrade are to place the update file on the node in the /ifs/data directory and, from the GUI or CLI, select the upgrade screen/command, then select the update file. The upgrade went very smoothly, but it did require a reboot of the cluster.
When starting the cluster from a halted state, it is necessary to power on the nodes at the physical location of each node; there was no way to start the cluster remotely. However, individual nodes can be started and stopped remotely.
6.1.3 Operating System and File system
The operating system used by the Isilon system is FreeBSD. Isilon has added its own code, part of OneFS, for the system to operate as a NAS. The system employs the Apache web server for GUI access. Some of the operating system components were very old versions, which may indicate that the updates provided by Isilon primarily address Isilon's proprietary software only.
The file system is called OneFS. The OneFS software includes the RAID, the file system, and the volume manager. The file system stripes the data across disks and nodes on a file-by-file basis. You can choose your data protection scheme at the cluster, directory, or file level.
If choosing to set the protection level of the data at a cluster level, there are four available parity protection levels to choose from:
FlexProtect +1 - Cluster and data available should a single disk or node fail. The cluster must have a minimum of three nodes to configure this protection scheme.
FlexProtect +2 - Cluster and data available should two disks or two nodes fail. The cluster must have a minimum of five nodes to configure this protection scheme.
FlexProtect +3 - Cluster and data available should three disks or two nodes fail. The cluster must have a minimum of seven nodes to configure this protection scheme.
FlexProtect +4 - Cluster and data available should four disks or two nodes fail. The cluster must have a minimum of nine nodes to configure this protection scheme.
If there is a desire to change the protection level at the directory or file level, this can be done by setting the protection level to FlexProtect Advanced. With this option you can choose the protection schemes mentioned above (FlexProtect +1 through +4) or up to 8x mirroring.
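The node minimums listed above can be captured as a simple lookup. The table below mirrors the text; the helper function is our own sketch, not an Isilon tool.

```python
# FlexProtect +N -> minimum cluster size, per the levels described above.
FLEXPROTECT_MIN_NODES = {1: 3, 2: 5, 3: 7, 4: 9}

def can_configure(level, cluster_nodes):
    """True if the cluster has enough nodes for FlexProtect +level."""
    return cluster_nodes >= FLEXPROTECT_MIN_NODES[level]

# Our three-node evaluation cluster could only run FlexProtect +1.
print([lvl for lvl in FLEXPROTECT_MIN_NODES if can_configure(lvl, 3)])  # [1]
```

This is why all of the parity-protected performance tests later in this section were run at FlexProtect +1: the three-node cluster was too small for any higher level.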
Exporting a directory is simple with the GUI, but there are no advanced NFS options that can be set. The manuals and support site do offer suggested mount options for clients. It is assumed that, since the system relies on FreeBSD to provide the NFS services, changes could be made to the NFS files, but this was not attempted.
6.1.4 Security
User and group management is easy with the GUI. Like the NFS export options, it is very rudimentary. To add a user, you supply the login ID, user real name, new password, home directory, and shell. You can also decide whether to enable or disable the account. There is no way from the GUI or CLI to set the UID for the user. In theory, you could edit the /etc/passwd files on all the nodes, but this was not tested. Adding groups was similar, in that you cannot define the GID of the group. There are no tools to import existing users or groups from a file.
As part of the evaluation, it was necessary for our internal security team to scan the system for vulnerabilities using a common toolset that they use throughout the environment. The scans found multiple vulnerabilities. As a service to Isilon's user community, they are not documented in this paper, but they were shared with Isilon. There are workarounds for the vulnerabilities, but these would result in a less than desired user and management arrangement.
The system provides interfaces to LDAP, NIS, and ADS based authenticationand security systems. None of these options were evaluated.
Both files and directories utilize UNIX style permissions, ownership andgroup settings typical of any UNIX based file system.
6.1.5 Reliability
The Isilon system appears to be very robust in terms of redundancy and reliability against hardware failures. As part of the testing, we ran through Isilon's procedure to fail a disk drive, and data was still available. The failed drive was re-inserted into the node and rebuilt successfully. Under heavy system load, we pulled power supplies and network cables with no alerts or errors observed. We did not test node failure by crashing a node, as Isilon did not provide us with a procedure to do this, and we did not want to risk damaging a unit.
6.1.6 Performance
File transfer performance was equal to or better than the performance shown in figure 6-1. One point to note is that there is no channel bonding capability with Isilon, so there is no way to increase the speed of a single-stream file transfer beyond the 1Gb/sec Ethernet port.
Our performance testing was to verify that the system could sustain acceptable transfer speed while serving many users simultaneously and spreading the load throughout the cluster. Our aggregate tests were done with a single stream coming from each of the 32 HPC cluster nodes simultaneously to create an environment of sustained activity for a longer observation period.
During the throughput tests, it was observed that the GUI became very sluggish to navigate due to the sustained load on the system. The CPU utilization was well over 90% during the tests. The CPU utilization was spread very equally among the nodes when using the Throughput option in SmartConnect.
Aggregate Write Test for Parity Protected Data:
From the HPC cluster, we ran 32 simultaneous writes using IOZone of a 5GB file to a single port on each Isilon node in the Isilon cluster (total of three ports), while using the Max Throughput Isilon SmartConnect zone setting. Results were on the order of 165 MB/sec out of a theoretical maximum throughput rate of 384 MB/sec (43%). The directory being written to was configured with the Isilon FlexProtect +1 setting.
Figure 6-2 shows a historical performance graph available on the Isilon system. One strange anomaly observed was the drastic drop in throughput that is seen in figure 6-2 at the 10:21:20 data point. This anomaly appeared in virtually every sustained write test that we did. We asked Isilon for their explanation of why this was occurring, and we have not received an answer to this question.
Figure 6-2. Summary of aggregate write throughput on parity protected folder
Aggregate Reads Test for Parity Protected Data:
From the HPC cluster, we ran 32 simultaneous reads using IOZone of a 5GB file from a single port on each Isilon node in the Isilon cluster (total of three ports), while using the Max Throughput Isilon SmartConnect zone setting. Results were on the order of 325 MB/sec out of a theoretical maximum throughput rate of 384 MB/sec (85%). The directory being read from was configured with the Isilon FlexProtect +1 setting.
Figure 6-3. Summary of aggregate read throughput on parity protected folder
Aggregate Write Test for Mirror Protected Data:
From the HPC cluster, we ran 32 simultaneous writes using IOZone of a 5GB file to a single port on each Isilon node in the Isilon cluster (total of three ports), while using the Max Throughput Isilon SmartConnect zone setting. Results were on the order of 130 MB/sec out of a theoretical maximum throughput rate of 384 MB/sec (34%). The directory being written to was configured with Isilon 1-copy mirroring. Again, notice the negative throughput spike in the graph.
Figure 6-4. Summary of aggregate write throughput on mirrored folder
Aggregate Reads Test for Mirror Protected Data:
From the HPC cluster, we ran 32 simultaneous reads using IOZone of a 5GB file from a single port on each Isilon node in the Isilon cluster (total of three ports), while using the Max Throughput Isilon SmartConnect zone setting. Results were on the order of 340 MB/sec out of a theoretical maximum throughput rate of 384 MB/sec (88%). The directory being read from was configured with Isilon 1-copy mirroring.
Figure 6-5. Summary of aggregate read throughput on mirrored folder
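The utilization percentages quoted in the four aggregate tests can be cross-checked arithmetically. The short script below is ours, not from the report; it assumes the report's convention of 128 MB/sec per 1 Gb/sec port, which yields the 384 MB/sec theoretical aggregate for three ports, and reproduces the quoted percentages to within a point of rounding.

```python
# Cross-check of the theoretical maximum and utilization figures above.
PORTS = 3
MB_PER_PORT = 1024 // 8  # 128 MB/sec per 1 Gb/sec port (report's convention)

def utilization(measured_mb_per_sec):
    """Measured aggregate throughput as a percentage of theoretical max."""
    return 100.0 * measured_mb_per_sec / (PORTS * MB_PER_PORT)

for label, rate in [("parity write", 165), ("parity read", 325),
                    ("mirror write", 130), ("mirror read", 340)]:
    print(f"{label}: {rate} MB/sec = {utilization(rate):.1f}% of theoretical max")
```

The pattern is clear from the numbers alone: reads reach 85-88% of line rate under either protection scheme, while writes pay a substantial penalty, with mirrored writes (34%) costing more than parity-protected writes (43%).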
6.2 NetApp
Table 6-2 below was developed in order to gather data and to be used to assess the capabilities of a vendor and their NAS offerings. The data was generally available on the vendor's website, in vendor documentation, or through direct questions to the vendor.
General
What are the names and model numbers of the storage products? Data ONTAP GX is supported on the following NetApp products: FAS3040, FAS3050, FAS3070, or FAS6070. (FAS3050 has recently reached EOL for sales.)
What is the general architecture of the NAS? Cluster of 2-24 nodes
What types of drives are supported by the NAS (SATA/SAS/FC)? FC or SATA
Can the NAS be clustered? Yes
Is there a global file system within the NAS (and, if appropriate, the cluster)? Yes
Are there any specialty nodes or components available (accelerators)? No
What software options are included? Global Namespace, Transparent FlexVol move, Integrated automatic RAID manager, Snapshot, e-mail alerts, NIS, DNS, FlexVol, NDMP capability, SSH, CLI, Web GUI, SNMP, or FlexVol High Performance Option
What are the optional software features of the NAS? Storage Failover, Asynchronous Mirroring, and SnapRestore
Performance
What drive sizes and speeds are available in the NAS? FC: 144GB at 10k or 15k RPM, 300GB at 10k or 15k RPM. SATA: 250GB, 500GB, and 750GB, all 7.2k RPM.
What are the throughput capabilities for reads and writes in both Megabytes/s and IO/s?
FAS3040A, two nodes, Data ONTAP 7 OS, 60,038 IOPS, Active-Active Cluster
FAS3070A, two nodes, Data ONTAP 7 OS, 85,615 IOPS, Active-Active Cluster
FAS6070A, two nodes, 7G OS, 136,048 IOPS, Failover Cluster
What is the basis for the numbers? Data above was from the SPEC test shown below.
What are the throughput capabilities per node/system? Throughput not published
Are there benchmark numbers available? SPEC SFS97_R1 benchmark results are at http://www.spec.org/sfs97r1/results/sfs97r1.html.
Scalability
What is the maximum amount of storage per system? 6048TB
What is the minimum amount of storage per system? One shelf would be the minimum, but to get the most out of a FAS3070 controller, two shelves with Fibre Channel 15k RPM drives would be necessary.
Is clustering available, and what type (Active-Active, Active-Passive, etc.)? Active-Active and Active-Passive
What is the maximum usable capacity (in gigabytes; GB) for a single storage array? FAS3040: 336 disk drives per controller; FAS3070: 504 disk drives per controller; FAS6070: 1008 disk drives per controller
Can new disks be added on the fly for new nodes? Yes, if slots are available in the disk tray. A new disk tray requires an outage.
Reliability and Availability
Are there any single points of failure inside the NAS? No, not when implemented as a two-node cluster
How many global spares does the array support? Configurable by site
Does the NAS support remote copy? Yes, through the use of the Disaster Recovery software option and remote hardware
What redundancy is built into the NAS? Redundant hot-swappable controllers, cooling fans, and power supplies
What protects data in the cache on a power failure? How does the feature work? A battery in each node protects the cache for up to 30 days.
What type of RAID, or other disk failure, protection does your NAS support? RAID4 and RAID4 with a second parity drive
Any other protection mechanisms available for the data? Data ONTAP GX provides the capability to stripe data across multiple controllers.
If a clustered solution, what is the maximum number of nodes? 24
Is there port failover for external ports? Yes
Connectivity and Security
External port speeds available (10/100/1000/10000)? 10/100/1000/10000
Support Jumbo Frames (9000 bytes)? Yes
Number of external ports available? Max 32 ports in a two-node FAS3070 GX cluster
Are Fibre and/or Copper ports, for external connections, available? Yes, both are available.
What is the backend architecture? Fibre Channel
What NFS versions are supported? NFS V2/V3 over UDP or TCP/IP
Other than NFS, what other connectivity options are there? Microsoft CIFS
If a clustered solution, is there load balancing between nodes? How is it determined? No
Support Dual Stack IPv4 and IPv6? No, only IPv4
Can external ports be connected to different subnets? Yes
Is there an operating system embedded in your storage array? Yes
Type of OS/Flavor? Berkeley Software Distribution (BSD)
Root access? Yes
How are vulnerabilities addressed? Vulnerabilities are addressed through normal OS upgrades or specific support escalations. Most functions of BSD are turned off, and a firewall is active to minimize any possible threats.
Have the products been Evaluation Assurance Level (EAL) evaluated, and what level have they achieved? Data ONTAP GX has not been EAL certified.
What services and ports are used, and which ones are required? Port 80 for HTTP access and port 22 for SSH
How are client users authenticated? Unique user IDs and passwords
Is there third party security supported, such as SecurID, Kerberos, or a combination of both? Kerberos
Does the NAS support the following NFS export options: share -F nfs -o nosub, nosuid, root=ss1.asc.hpc.mil:ss2.asc.hpc.mil, [email protected]:@10.10.132:@10.10.134:@10.10.138:@10.10.140:@10.10.142:@10.10.148 -d "HAFS" /qfx1/hafx1? Yes
NAS Clients
What clients are supported?
Supported NFS clients are: Apple Mac OS X 10.4, IBM AIX 5.2, IBM AIX 5.3, Linux 2.6, Sun Solaris 8, and Sun Solaris 10.
Supported CIFS clients are: Apple Mac OS X 10.4, MS Windows NT 4.0, MS Windows 2000, MS Windows 2003, and MS Windows XP.
Any specialty client connectivity software supported? No
Support the following Client Mount Options: fx1v-e2:/qfx1/hafx1 /hafx1 nfs rw, nodev, nosuid, timeo=70, retrans=30, nfsvers=3, bg, intr, tcp, rsize=8192, wsize=8192, soft? Yes
Does the NAS have its own unique file system? Yes
Backups
What backup methods does the NAS support? NDMP based backups to FC attached tape drives
What backup software is supported? Symantec NetBackup, IBM Tivoli, CommVault, and BakBone
Is there TAR, or some other built-in backup tool supplied? No
Capacity Planning and Performance Analysis
7/23/2019 E - NetworkAttachedStorage.pdf
35/65
Page 35 of 65
What tools are available to determine future capacity needs? None
Does the NAS have the capability to add capacity on demand? Adding disk shelves would involve an interruption.
Are there tools to see how the NAS is performing? Historical Collection? Real-time statistics are displayable in both the GUI, as stats or graphs, and in the CLI as stats. There is no historical collection of performance statistics.
Automation
Are there tools to automate tasks? The only automation available is for OS-supplied features, such as Snapshots. Jobs cannot be created, such as ones to capture system utilization and send an email.
Storage Management
What software and/or hardware is required to deploy the NAS? Network switches for inter-cluster communications
Does the NAS use UNIX UIDs? UNIX Group IDs? Yes
Can you import an existing UNIX user file? Yes
Can you import an existing UNIX group file? Yes
Support SSH connectivity for management? Yes
Support Kerberos? Yes
What connectivity options are there to manage the NAS? Serial console access is available, and an Ethernet port can be dedicated for management.
Is there a Command Line Interface (CLI) into the array for management and configuration? Yes
Is there a GUI interface into the array for management and configuration? Yes
If a GUI is provided, is it Web, Java, or X-based, and what are the client requirements for accessing? Web with Java for graphing
What types of monitoring tools are available? GUI has real-time Java performance monitoring
Does your software work with other network management frameworks? Nagios, Openview, etc? No; limited SNMP support for up/down stats, no performance stats
What types of reports can one get from the management tools? No reporting
What documentation is provided for the hardware and software for configuration and management? Data ONTAP GX 10.0 Administration Guide and Reference Guide
Does the NAS Support Quotas? Hard? Soft? No
Can a user view their quota from the client? No
Diagnostic and Debugging Tools
Does the NAS support customer self-diagnostics? Yes
How are notifications sent? SNMP and email
Are there any troubleshooting or diagnostic tools? Limited; errors must be reported to support for interpretation
What documentation is provided for the hardware and software troubleshooting? None
Service and Support
What is the warranty provided for hardware and software? Requires a SupportEdge contract
How many service employees are there? NetApp contracts hardware support services, and it has extensive software support.
Who provides the field service? IBM
Do local field service technicians have security clearances? Yes
Where are spare parts stocked? Common parts are stored at UPS parts depots.
Are onsite spare parts stocked? If yes, list how many spares of each type. Not unless contracted
How often is software and firmware updated, and who performs the upgrades? Quarterly fixes with semi-annual scheduled releases
Do software and firmware updates require an outage? Yes
What is the onsite response time if there is a problem? Two hours or four hours onsite with a Premium SupportEdge contract
Cost
What are the ongoing costs over the life of the solution? SupportEdge service contract and GX implementation service
Can Department of Defense (DoD) references be provided? Yes
Is there support for the non-recovery of failed drives due to DoD security regulations, and if so, is there any additional maintenance cost? Yes
Vendor Information
Who manufactures the storage array of the NAS? Third party
Number of Storage Units shipped? Over 250 GX Units
Number of deployed sites? Over 50 GX Sites
Table 6-2
6.2.1 System Deployment, Setup and Scalability
The two-node FAS3070 GX NAS was delivered in a standard 42U server rack. The rack already held the two nodes, along with six disk shelves. Each disk shelf contained fourteen 144GB fibre channel disk drives and had redundant fibre channel cables to the FAS3070 controller pair. The rack was powered by two NEMA L6-30 power cables. NetApp sent an onsite engineer for the initial configuration and training.

The intention was to have an initial setup which allowed the opportunity to expand the available connections to the NAS, for the purpose of evaluating provisions to other networks. The NAS was connected with: two 1Gbps Ethernet connections to the Spirit Cluster's Netgear GSM7248 switch, two 1Gbps Ethernet connections to the Netgear GS116 for NAS inter-cluster communications, and two 1Gbps Ethernet connections to the management LAN. Additionally, serial connections on both of the FAS3070s were utilized for system configuration.
The system, a FAS3070 running Data ONTAP GX, expects certain ports to be configured for particular purposes. Pages 15 through 18 of the Data ONTAP 10.0 Administration Guide show which Ethernet ports should be used and their purpose. For example, the guide showed that inter-cluster communications port1 and port2 should be on ports e0a and e0b, and management should be on port e1a. It also pointed out that the NAS should have a minimum of two networks, one for data/management and another for inter-cluster communications. In the end, the NetApp evaluation cluster was connected to four different networks. The networks were:
Inter-cluster Communications
Administration management
NFS user data access from the HPC cluster
NFS user data access from other systems connected to the DICE general network
There are two ways to configure a new NetApp Data ONTAP GX NAS cluster: the autoconfig option or the manual configure option. The manual configuration method was chosen, which allowed the user NFS and cluster interfaces to be added by the staff. The manual configuration method was recommended by the NetApp engineer, who also recommended pre-wiring the entire NetApp cluster prior to beginning the configuration.

The manual configuration method walks the user through a series of prompts that defines only the basic information for the node, such as: node name, management network, DNS, gateway address, and the administration login. The manual configuration was then repeated for the second node.
After the manual configuration of both nodes was completed, SSH was used to access one of the nodes, via the network management port, to continue the configuration. Chapter two of the NetApp Data ONTAP 10.0 Administration Guide walks through the steps to bring the nodes together into a cluster, which was used to finish the configuration. Data ONTAP GX provides the ability to create virtual servers. Virtual servers allow an organization to share the same physical nodes among many applications. In virtual servers, separate networks, NFS and/or CIFS, Kerberos authentication, and UNIX users can be created in separate groups. Only one virtual server was created for this evaluation.
With the NetApp engineer, two inter-cluster communications Ethernet links and two user NFS data Ethernet links to the DICE HPC cluster's Netgear switch were defined on each node. The FAS3070 nodes support jumbo frame MTUs of 9000 bytes on the Ethernet interfaces, which were configured on several interfaces. Ethernet interfaces in GX are called virtual interfaces, since they have the ability to migrate to different physical ports. When a virtual interface is defined, it is assigned one of three roles. The roles are:
Data: for user-based NFS or CIFS traffic
Management: for managing the GX systems
Cluster: for GX inter-cluster communications
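The 9000-byte jumbo frame support mentioned above mainly reduces per-packet header overhead. A rough sketch of the payload-efficiency difference versus the standard 1500-byte MTU (the header sizes are assumptions for a plain Ethernet/IPv4/TCP stack, not measurements from the evaluation):

```python
# Rough payload-efficiency comparison for standard vs. jumbo Ethernet frames.
# Header sizes below are typical assumptions: 18 bytes Ethernet (header + FCS),
# 20 bytes IPv4, 20 bytes TCP, no options or VLAN tags.
ETH_OVERHEAD = 18
IP_HEADER = 20
TCP_HEADER = 20

def payload_efficiency(mtu):
    """Fraction of each frame that carries payload rather than headers."""
    payload = mtu - IP_HEADER - TCP_HEADER   # TCP payload per frame
    wire_bytes = mtu + ETH_OVERHEAD          # bytes on the wire per frame
    return payload / wire_bytes

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload")
```

Under these assumptions, efficiency rises from roughly 96% at MTU 1500 to over 99% at MTU 9000, which is why jumbo frames were enabled on the data interfaces.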
Three globally available file systems were created for this evaluation. One file system was striped across both nodes, and the other two were defined one per node.
The version of Data ONTAP GX evaluated was GX 10.0.2. There have been several modifications to the GX operating system to automate the initial setup
and configuration. This includes the automatic creation of network failover rules. After the creation of the cluster, the network addresses of a couple of the interfaces configured during the manual configuration were changed. The NetApp engineer had a very difficult time reconfiguring the interfaces with the correct IP settings, due to the failover rules that are automatically created. Additionally, more network configuration problems were discovered during failover testing. The issues, and their resolution, are explained in the reliability section of this paper.
The GX cluster only had two FAS3070 nodes, but the Data ONTAP GX software supports clusters of up to 24 nodes and up to 6 PB of storage. Additionally, the nodes, with their I/O expansion slots, can support FC-attached tape drives, 10 Gbps Ethernet adapters, and both fibre and copper 1 Gbps adapter cards.
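The stated ceilings (24 nodes, 6 PB) can be captured in a small sanity check; the helper below is purely illustrative and not a NetApp tool:

```python
# Sanity-check a planned Data ONTAP GX cluster against the published limits
# mentioned above: at most 24 nodes and 6 PB of total storage.
MAX_NODES = 24
MAX_CAPACITY_TB = 6 * 1000  # 6 PB expressed in TB (decimal units assumed)

def within_gx_limits(nodes, capacity_tb):
    """Return True if the planned cluster fits within the GX limits."""
    return 1 <= nodes <= MAX_NODES and capacity_tb <= MAX_CAPACITY_TB

print(within_gx_limits(2, 10))    # the two-node evaluation cluster fits
print(within_gx_limits(32, 500))  # too many nodes
```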
6.2.2 Management
NetApp has both a web-based Graphical User Interface (GUI) and an SSH-accessible Command Line Interface (CLI). The web-based GUI is easy to use, but it requires a lot of navigation, due to the extensive configuration operations available on the GX system. Adding a new Ethernet interface requires visiting several screens, both for reference and for configuration.
If a system administrator wants to access the GUI, or the CLI via a LAN connection, the connection, by default, can be made from any Ethernet port in the cluster. Since this behavior is not desirable, web services and SSH access were limited to the management ports. There is also the ability to completely disable services, such as the GUI web server or SSH.
The GUI is used for both configuring and monitoring. Both the cluster and any individual node can be configured from the GUI. There are tools to configure and automatically initiate Network Data Management Protocol (NDMP) based network backups, alert configuration and monitoring tools, and real-time performance reporting tools.
The administration CLI provides Data ONTAP GX native commands. It is very much like a UNIX-style prompt, but it can only execute GX-related commands. UNIX-style command line editing can be used, as well as commands with query-type operations for specifying the information one would like to view. In both the CLI and GUI, there are three levels of command access: Admin, Advanced and Diagnostics. Each level provides additional command functionality.
The Data ONTAP User's Guide and the reference manual provide good resources of information for configuring the system. The guides provide a list
of configuration options for all the fields to be encountered, but they do not always explain why a particular option should be chosen over another. NetApp support includes web access to knowledgebase-type articles, but these were of limited value for setup, configuration and usage.
NetApp support was used for several configuration and usage questions. They were very attentive to all questions and problems, were very proactive in returning calls, and would often call hourly if there was no response. Additionally, on two occasions, NetApp support placed follow-up calls to ensure everything was running smoothly.
NetApp Data ONTAP GX does not have a soft or hard user quota management system to limit disk space usage. NetApp has stated they will provide quotas in the future. A workaround to the problem would be to give each user their own GX volume, but there is a limit of 500 volumes per node.
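The volume-per-user workaround therefore caps the number of quota-controlled users at 500 times the node count. A small planning sketch (the reserved-volume figure is an invented example):

```python
# Capacity planning for the volume-per-user quota workaround described above.
# The 500-volumes-per-node limit comes from the text; other figures are
# illustrative assumptions.
VOLUMES_PER_NODE = 500

def max_quota_users(nodes, reserved_volumes=0):
    """Upper bound on users if each user receives a dedicated volume."""
    return nodes * VOLUMES_PER_NODE - reserved_volumes

# Two-node evaluation cluster, keeping 10 volumes for shared/system use:
print(max_quota_users(2, reserved_volumes=10))  # 990
```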
Data ONTAP GX provides no ability to browse the file system structures, monitor space at the directory level, or manage data within volumes using the management GUI or CLI itself. Overall space usage can be monitored with the management tools.
6.2.3 Operating System and File system
The underlying operating system of Data ONTAP GX is a variant of BSD. The GX code provides file system access, NFS configuration, storage configuration, and high availability. While root is able to access the BSD operating system, during this evaluation it was only used to configure firewall rules.
There are two types of file systems, or volumes as GX calls them, which can be created. Both types are considered globally accessible, since they can be accessed, for both reads and writes, from any node in the cluster. The first volume type is homed on one of the nodes in the cluster. This type of volume is owned by a particular node, but any node in the cluster has access to read and write to the volume. It can be configured so that, upon failure of the node owning the volume, it fails over to another node in the cluster.
The second type of volume is called a Striped Volume, or FlexVol High Performance Option. This volume is a collection of node-owned disks that are striped across multiple nodes within a GX cluster, which distributes the reads and writes of the data across multiple controllers.
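The load-spreading idea behind a striped volume can be sketched as round-robin placement of stripe-sized chunks across the participating nodes; this is a simplified illustration, not NetApp's actual on-disk layout:

```python
# Simplified model of a striped volume: successive stripe-sized chunks of a
# file are assigned round-robin across the nodes that back the volume, so
# reads and writes are spread over multiple controllers.
def stripe_placement(file_size, stripe_size, nodes):
    """Return how many stripe chunks of the file each node would serve."""
    chunks = (file_size + stripe_size - 1) // stripe_size  # ceiling division
    placement = {node: 0 for node in nodes}
    for i in range(chunks):
        placement[nodes[i % len(nodes)]] += 1
    return placement

# A 40 MB file striped in 4 MB chunks across the two evaluation nodes:
print(stripe_placement(40 * 2**20, 4 * 2**20, ["netapp1", "netapp2"]))
```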
Volumes are built on what GX refers to as aggregates. These are akin to LUNs in the direct-attached storage realm, where an aggregate is a virtualized collection of physical disks. Additionally, an aggregate defines whether a volume is striped or not.
GX offers two methods for protecting the data in a storage aggregate:
RAID 4: block-level striping of data with parity on a separate parity drive
RAID 4-DP: RAID 4 with two parity drives
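The single-parity scheme underlying RAID 4 is a byte-wise XOR across the data drives: if any one drive is lost, its data is the XOR of the survivors and the parity. A minimal sketch (RAID 4-DP's second, diagonal parity is omitted):

```python
# Sketch of RAID 4 single-parity protection: the parity block is the
# byte-wise XOR of the data blocks, so any one lost block can be rebuilt
# by XOR-ing the survivors with the parity.
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [b"\x0f\xf0", b"\xaa\x55", b"\x12\x34"]
parity = xor_blocks(data)

# Lose one data block, then rebuild it from the others plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```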
Exporting a volume is done by modifying the volume within the CLI or GUI.Mount options are based on the GX Virtual Server Export policies and exportpolicy rules.
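The text does not detail the rule format, but export policy evaluation can be pictured as matching the client's address against an ordered rule list; the field names and first-match semantics below are assumptions for illustration:

```python
# Hypothetical first-match evaluation of virtual-server export policy rules.
# Rule fields and the first-match semantics are assumptions; the text only
# says mount options come from export policies and their rules.
import ipaddress

RULES = [
    # (client network, access, allow root?)
    ("10.1.0.0/24", "rw", True),    # HPC cluster: read-write, root allowed
    ("10.2.0.0/16", "ro", False),   # general network: read-only, root squashed
]

def export_access(client_ip):
    """Return (access, root_allowed) for the first matching rule, else None."""
    addr = ipaddress.ip_address(client_ip)
    for network, access, root_ok in RULES:
        if addr in ipaddress.ip_network(network):
            return access, root_ok
    return None  # no rule matches: mount refused

print(export_access("10.1.0.7"))     # ('rw', True)
print(export_access("192.168.1.5"))  # None
```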
6.2.4 Security
Unique administrator user IDs can be created to administer and monitor the system. These IDs are limited to GX administration duties and have no access to the BSD operating system as root. An ID can be limited to particular access applications, such as SSH only or HTTP. Through the use of access control profiles, a user can be limited to particular commands within GX or given read-only privileges.
The BSD user root is available on the system, but it is not used in day-to-daymanagement of the GX system.
As part of the evaluation, it was necessary for the internal security team to scan the system for vulnerabilities using a common toolset used throughout the environment. Only one finding from the first scan posed a risk, and it was resolved by turning off the relevant service within the GX system.
By default, if the web server GUI and/or SSH is enabled, it is available through all network interfaces. Unsuccessful attempts were made to modify this behavior through the use of firewall policies within the GX software. However, NetApp support was able to help, through the creation of an IPFilter firewall rules file in the BSD operating system, which limited access to the HTTP and SSH protocols.
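For background, IPFilter scans its rule file top to bottom, the last matching rule wins, and a rule marked quick stops the scan immediately. A small Python model of that semantics (the rule set is illustrative, not the file NetApp support created):

```python
# Model of IPFilter's matching semantics: rules are scanned in order, the
# LAST matching rule decides, and a rule marked "quick" short-circuits the
# scan. The rule set below is illustrative only.
def ipfilter_decision(rules, packet_port):
    """Each rule is (action, port_or_None, quick). None matches any port."""
    decision = "block"  # assume a default-deny stance
    for action, port, quick in rules:
        if port is None or port == packet_port:
            decision = action
            if quick:
                break
    return decision

rules = [
    ("block", None, False),  # block everything by default
    ("pass", 22, True),      # allow SSH, stop processing
    ("pass", 80, True),      # allow HTTP, stop processing
]

print(ipfilter_decision(rules, 22))   # pass
print(ipfilter_decision(rules, 443))  # block
```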
The GX system provides interfaces to LDAP, NIS and Kerberos basedauthentication and security systems, which can be used for NFS accessauthentication. None of these options were evaluated.
Both files and directories can be configured to utilize UNIX style permissions,ownership and group settings typical of any UNIX based file system.
There is no ability to browse the user file systems within the FAS3070 nodes themselves. All data file access must be accomplished through the NFS or CIFS file system. CIFS access was not evaluated.
Typical UNIX server NFS mount policies for export security can be employed on GX, such as hostname or IP lists, refusal of superuser access, and the disallowing of suid and device creation, through the use of the virtual server export policy rules.
6.2.5 Reliability
The NetApp FAS3070 systems and storage were on-site for 2.5 months. During this period, there were two hardware failures. The first was a disk drive that was reporting errors. GX provided procedures and commands, which were used to replace the drive.
The second failure was a problem with a module in one of the disk shelves. This failure was noticed by error lights flashing on the disk drive shelf. The light flashes indicated the problem was "OPS Panel Hardware Fault: Call NetApp." There did not appear to be any degradation in the operation of the system. After NetApp support diagnosed the problem, NetApp sent a replacement part for the disk shelf and arranged for IBM to do the replacement. This required shutting down the disk shelf for the replacement, forcing a stoppage of the entire system.
During failover testing, it was discovered that IP addresses were failing over to Ethernet ports not on the same network. After a bit of troubleshooting, it turned out that the default behavior of virtual interface creation is to automatically enable failover and create the failover policy. These automatic settings do not take into account the network to which the ports are connected, so virtual interfaces with a public IP address were failing over to ports on other, private networks. Attempts were made to edit the failover policies and port settings, but due to failover policy priority, the changes could not be made. The errors could probably have been worked around, but in the end it seemed simpler to delete all the virtual interfaces and failover policies and recreate them manually. This included deleting and recreating the ports made during the original initialization and configuration of the system. The problems are believed to be due to the immaturity of the auto-discovery features now included in NetApp Data ONTAP GX 10.0.2. When recreating the interfaces, the failover-policy and use-failover-group options were set to disabled. After the network interfaces were recreated, new failover policies were also created, which allowed the team to define where a port would fail over to. Failover was then enabled on each of the interfaces.
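The fix amounted to enforcing, by hand, that an interface only fails over to ports on its own network. That check can be sketched as follows (port names and subnets are invented, not the evaluation cluster's layout):

```python
# Sanity check for failover policies: a virtual interface should only fail
# over to ports on the same IP network. Port names and subnets are
# illustrative, not the evaluation cluster's actual layout.
import ipaddress

def valid_failover_targets(vif_subnet, candidate_ports):
    """Keep only candidate ports whose address lies inside the VIF's subnet."""
    net = ipaddress.ip_network(vif_subnet)
    return [name for name, addr in candidate_ports.items()
            if ipaddress.ip_address(addr) in net]

ports = {
    "node1:e0c": "10.1.0.11",    # public data network
    "node2:e0c": "10.1.0.12",    # public data network
    "node2:e0a": "192.168.0.2",  # private inter-cluster network
}

print(valid_failover_targets("10.1.0.0/24", ports))
```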
Once the network interfaces were recreated, the failover of cluster ports worked as desired. The interface cables were pulled from the switches while traffic was flowing, and the traffic failed over to the alternate port with little hesitation.
Stopping a node was tested by failing over the storage owned by the node and all the user data interfaces. This worked well too, with only minor hesitation from the FAS3070 controller in assuming ownership of the disks.
One other item to note about failover policies is that there can be multiple failover actions for an interface. This means a policy could state that, should port A fail, port B virtually assumes the IP address of the failed port, and should port B not be available, port C takes over. Additionally, because NetApp uses virtual IP assignments on an Ethernet port, multiple IP addresses can share the same physical port, so no physical port needs to sit in an unused standby mode.
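The A-then-B-then-C behavior described above can be sketched as walking an ordered target list and placing the virtual IP on the first port that is still up (port names and statuses are hypothetical):

```python
# Sketch of a multi-step failover policy: the virtual IP moves to the first
# available port in an ordered target list (port A, then B, then C).
# Port names and the up/down map are hypothetical.
def place_virtual_ip(targets, port_up):
    """Return the first healthy port in the policy's order, or None."""
    for port in targets:
        if port_up.get(port, False):
            return port
    return None  # no port available: the address goes offline

policy = ["portA", "portB", "portC"]
status = {"portA": False, "portB": False, "portC": True}

print(place_virtual_ip(policy, status))  # portC
```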
Although not tested, it is worth mentioning that GX provides disaster recoverymirroring of volumes to remote locations. The documentation states thevolume can be mounted read-only at the remote location, and it can assume anownership role of read-write if desired.
Other than the two hardware issues and the network failover configuration problems, the system seems robust enough for an enterprise application, and there was sufficient redundancy in the hardware, in combination with the GX high availability software.
6.2.6 Performance
The performance testing was meant to verify that the system could sustain acceptable transfer speeds while serving many users simultaneously and spreading the load throughout the cluster. The aggregate tests were scaled from one to 32 nodes, with a single stream coming from each of the 32 HPC cluster nodes. For the performance tests, two non-striped volumes were written to (Volume1 and Volume2). Volume1 was created and homed on Netapp1, and Volume2 was created and homed on Netapp2. Volume1 was mounted twice on each compute node, utilizing one virtual interface on each FAS3070. This was repeated for Volume2, to ensure the file system was globally accessible, even though a particular FAS3070 owned it, and also to verify there was equal performance to the volume no matter which node it was accessed from. A round-robin approach was used, selecting one mount on each compute node, to load-balance the data streams across the four NetApp 1 Gbps Ethernet ports. The benchmark tool IOZone was utilized for this client scaling test.
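The round-robin mount selection amounts to cycling the compute nodes through the four 1 Gbps data interfaces so the 32 streams split evenly, eight per port. A sketch of that assignment (interface names are illustrative):

```python
# Round-robin assignment of compute nodes to NAS data interfaces, mirroring
# the load-balancing scheme used for the IOZone scaling test. Interface
# names are illustrative.
from collections import Counter

INTERFACES = ["netapp1-e0c", "netapp1-e0d", "netapp2-e0c", "netapp2-e0d"]

def assign_mounts(num_clients):
    """Map each client to a data interface, cycling through the list."""
    return {f"compute{n:02d}": INTERFACES[n % len(INTERFACES)]
            for n in range(num_clients)}

mounts = assign_mounts(32)
print(Counter(mounts.values()))  # eight clients per interface
```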
[Chart: NetApp NFS client scaling with 10 GB transfers at a 4096k block size; aggregate throughput in MB/s (0 to 500 scale) versus number of clients (1 to 32), with separate write and read series.]
Figure 6-6 IOZone Client Scaling Throughput Test
In addition to the scaling throughput test with multiple nodes, single-client performance tests were performed to see if the team could saturate a client with NFS data. The team used the XDD benchmark tool to perform tests of buffered reads and writes, as well as direct I/O. In all cases, the NetApp could meet the demands of a single client for reads and writes. In the tests, the team scaled the number of threads on a single client from one to four, utilizing only one mount to the NetApp cluster.
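The buffered-I/O results that follow sit in a narrow 100 to 118 MB/s band regardless of thread count, which is consistent with the client's single 1 Gbps link, rather than the NetApp, being the bottleneck. A back-of-the-envelope model (the per-thread demand and link-efficiency figures are assumptions):

```python
# Back-of-the-envelope model for the single-client plateau: aggregate rate
# is capped by the client's 1 Gbps link regardless of thread count. The
# per-thread demand and the ~94% protocol-efficiency figure are assumptions.
LINK_MB_S = 1000 / 8   # 1 Gbps link expressed in MB/s (decimal)
EFFICIENCY = 0.94      # assumed usable fraction after TCP/NFS overhead

def expected_rate(threads, per_thread_mb_s=120.0):
    """Aggregate rate: thread demand, clipped at usable link bandwidth."""
    return min(threads * per_thread_mb_s, LINK_MB_S * EFFICIENCY)

for t in (1, 2, 4):
    print(f"{t} thread(s): ~{expected_rate(t):.0f} MB/s")
```

Under these assumptions, every thread count lands at the same ~117 MB/s ceiling, matching the flat curves in the buffered-I/O figures.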
[Chart: single NFS client, buffered writes, 40 GB transfer; rate in MB/s (100 to 118 scale) versus block size (4k to 8192k), one series per thread count (1 to 4).]
Figure 6-7 XDD Single Client Thread Scaling for Buffered Writes
[Chart: single NFS client, buffered reads, 40 GB transfer; rate in MB/s (100 to 118 scale) versus block size (4k to 8192k), one series per thread count (1 to 4).]
Figure 6-8 XDD Single Client Thread Scaling for Buffered Reads
[Chart: single NFS client, direct I/O writes, 40 GB transfer; rate in MB/s (0 to 140 scale) versus block size (4k to 8192k), one series per thread count (1 to 4).]
Figure 6-9 XDD Single Client Thread Scaling for Direct I/O Writes
[Chart: single NFS client, direct I/O reads, 40 GB transfer; rate in MB/s (0 to 140 scale) versus block size (4k to 8192k), one series per thread count (1 to 4).]
Figure 6-10 XDD Single Client Thread Scaling for Direct I/O Reads
6.3 BlueArc
Table 6-3 below was developed in order to gather data that could be used to assess the capabilities of a vendor and their NAS offerings. The data was generally available through the vendor's website, vendor documentation, or direct questions to the vendor.
General
What are the names and model numbers ofthe storage products?
Titan 1100, Titan 2100, Titan 2200, Titan 2500, Titan 3100 and Titan 3200
What is the general architecture of the NAS? Cluster of 8 of nodes, except the Titan 1100 (2nodes)
What types of drives are supported by theNAS (SATA/SAS/FC)?
FC or SATA
Can the NAS be clustered? Yes
Is there a global file system within the NAS(and, if appropriate, the cluster)?
Yes
Are there any specialty nodes or componentsavailable (accelerators)?
Separate management unit called a SMU
What software options are included? Snapshots and Quick Restores, IDR, Virtual Volumes, Virtual Servers, Quotas, NDMP, Anti-Virus Support, Storage Pool
What are the optional software features of theNAS?
NFS, CIFS, iSCSI with Multipath, Virtual Server Migration, Data Migrator, Active-Active Clustering, Cluster Name Space, IBR, Synchronous Volume Mirroring, WORM file system, Cluster Read Caching
Performance
What drive sizes and speeds are available inthe NAS?
FC and SATA
What are the throughput capabilities for readsand writes in both Megabyte/s and IO/s?
Titan 2100 single node: 76,060 IOPS; Titan 2200 single node: 98,131 IOPS; Titan 2200 2-node active/active cluster: 192,915 IOPS
What is the basis for the performancenumbers?
Data above was from the SPEC test shown below.
What are the throughput capabilities pernode/system?
Throughput not published.
Are there benchmark numbers available? SPEC SFS97_R1 benchmark results for the Titan 2100 and Titan 2200 are at
http://www.spec.org/sfs97r1/results/sfs97r1.html.
Scalability
What is the maximum amount of storage persystem?
Titan 1100: 128 TB; Titan 2100/2200: 2 PB; Titan 3100/3200: 4 PB
What is the minimum amount of storage persystem?
Is clustering available, and what type (Active-Active and Active-Passive, etc)?
Active-Active and Active-Passive
What is the maximum usable capacity (ingigabytes; GB) for a single storage array?
Titan 1100: 128 TB; Titan 2100/2200: 2 PB; Titan 3100/3200: 4 PB
Can new disks or new nodes be added on the fly?
Reliability and Availability
Are there any single points of failure insideNAS?
No, not when implemented as a two node cluster
How many global spares does the arraysupport?
Depends on the disk subsystem being used.
Does the NAS support remote copy?
What redundancy is built into the NAS? Redundant hot-swappable controllers, cooling fans, and power supplies
What protects data in the cache on a power failure? How does the feature work?
A battery in each node protects the cache for up to 72 hours.
What type of RAID, or other disk failure, protection does your NAS support?
RAID 1, 10, 5, 6
Any other protection mechanisms available forthe data?
If a clustered solution, what is the maximumnumber of nodes?
8 (except Titan 1100 which is 2)
Is there port failover for external ports? Yes
Connectivity and Security
External port speeds available?(10/100/1000/10000)
10/100/1000/10000
Support Jumbo Frames (9000 bytes)? Yes
Number of external ports available? Maximum of 6 1 GbE ports per node, or 2 10 GbE ports per node.
Are Fibre and/or Copper ports, for externalconnections, available?
Yes, both are available.
What is the backend architecture? The inter-cluster communications use 10 Gb/s Ethernet.
What NFS versions are supported? NFS v2/v3 over UDP or TCP/IP
Other than NFS, what other connectivityoptions are there?
FTP, Microsoft CIFS, iSCSI.
If a clustered solution, is there load balancing between nodes? How is it determined?
No; it must be done at the switch with a link aggregation protocol.
Support Dual Stack IPv4 and IPv6? No, only IPv4
Can external ports be connected to differentsubnets?
Yes
Is there an operating system embedded inyour storage array?
Not on the NAS head; the heads are FPGA-based. The Storage Management Unit (SMU) runs CentOS Linux.
Type of OS/Flavor? SMU only: CentOS Linux
Is root access available? Yes, on SMU only
How are vulnerabilities addressed? Vulnerabilities are addressed through normal OS upgrades or specific support escalations.
Have the