Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
A technical overview of HP 3PAR Utility Storage
The world's most agile and efficient Storage Array
Peter Mattei, Senior Storage Consultant
November 2011
Table of contents
The IT Sprawl and how 3PAR can help
HP Storage & SAN Portfolio
Introducing the HP 3PAR Storage Servers
F-Class
T-Class
V-Class
HP 3PAR InForm OS Virtualization Concepts
HP 3PAR InForm Software and Features
Thin Technologies
Full and Virtual Copy
Remote Copy
Dynamic and Adaptive Optimization
Peer Motion
Virtual Domain
Virtual Lock
System Reporter
VMware Integration
Recovery Manager
The IT Sprawl creates challenges for Mission-Critical Infrastructure
Source: HP research
70% of resources captive in operations and maintenance
Business innovation throttled to 30%
Increased Risk
Inefficient and Siloed
Complicated and Inflexible
The world has changed and storage must change with it
Explosive growth & new workloads
Virtualization & automation
Cloud & utility computing
Infrastructure & technology shifts

Customers tell us storage is:
Too complicated to manage
Expensive & hard to scale
Isolated & disconnected
Inefficient & inflexible

Storage needs to be:
Simple
Scalable
Smart
Self-Optimized
HP 3PAR Industry Leadership
3PAR Thin Provisioning: Best new technology in the market; industry-leading technology to maximize storage utilization
3PAR Autonomic Storage Tiering: Automatically optimizes using multiple classes of storage
3PAR Dynamic Optimization: Workload management and load balancing
3PAR Full Mesh Architecture: Advanced shared memory architecture
3PAR Virtual Domains: Multi-tenancy for service providers and private clouds
HP 3PAR History: Constant evolution
Timeline 1999 to 2012:
May 1999: 3PAR founded with 5 employees
December 2000: Bring-up of the Gen1 3PAR ASIC
July 2001: 3PAR secures $100 million in third-round financing
June: 3PAR Utility Storage and Thin Provisioning launch in the US and Japan
September: General availability of the InServ S-Class Storage Server
May: 3PAR introduces Dynamic Optimization and Recovery Manager
August: Introduction of the E-Class midrange Storage Server
September: Introduction of the T-Class with Gen3 ASIC, the first Thin Built In storage array
April: Introduction of the F-Class, the first quad-controller midrange array
November: 3PAR IPO; introduction of Virtual Domains and iSCSI support
September 2010: 3PAR acquired by HP
November: InForm OS v2.3.1 released with many new features
March: Introduction of Adaptive Optimization and Recovery Manager for VMware
August 2011: Introduction of the V-Class with Gen4 ASIC, InForm OS v3.1.1 and Peer Motion
HP 3PAR Leadership: Efficient
HP 3PAR Customers reduce TCO by 50%
Thin: Reduce capacity requirements by at least 50%
Optimized: Tiering balances $/GB and $/IOPS
Green: Reduce power and cooling costs by at least 50%
Tier 0: SSD
Tier 1: FC
Tier 2: Nearline
HP 3PAR Leadership: Autonomic
HP 3PAR customers reduce storage management burden by 90% compared to competitors' arrays
Up Fast: 15 seconds to provision a LUN
Maintain Service Levels: deliver high performance to all applications, even under failure scenarios
Respond to Change Quickly: quickly adapt to the unpredictable
HP 3PAR Leadership: Multi-Tenant
The Tier-1 Storage for Utility Computing
Shared: massive consolidation; storage can be used across many different applications and lines of business
Secure: Virtual Private Array; secure segregation of storage while preserving the benefits of massive parallelism
Resilient: ready for change; sustain and consolidate diverse or changing service levels without compromise
The HP Storage Portfolio
Infrastructure: HP Networking (wired, wireless, data center, security & management); B, C & H-Series FC switches & directors; SAN Connection Portfolio; HP Networking Enterprise Switches
Online: P2000, X1000/X3000, P4000, P6000 EVA, P9500 XP, X9000, X5000, E5000 for Exchange, 3PAR
Nearline: D2D Backup Systems, VLS virtual library systems, ESL tape libraries, EML tape libraries, MSL tape libraries, RDX, tape drives & tape autoloaders
Software: Data Protector, Data Protector Express, Storage Essentials, Storage Array Software, Storage Mirroring, Business Copy, Continuous Access, Cluster Extension
Services: SAN Implementation, Storage Performance Analysis, Entry Data Migration, Data Migration, Installation & Start-up, Proactive 24, Critical Service, Proactive Select, Backup & Recovery, SupportPlus 24, SAN Assessment, Consulting services (Consolidation, Virtualization, SAN Design), Data Protection, Remote Support
HP Storage Array Positioning

P2000 MSA (Storage Consolidation)
Architecture: Dual Controller
Connectivity: SAS, iSCSI, FC
Performance: 30K random read IOPS; 1.5 GB/s sequential reads
Application sweet spot: SMB, enterprise ROBO, consolidation/virtualization, server attach, video surveillance
Capacity: 600GB to 192TB; 6TB average
Key features: Price/performance, controller choice, replication, server attach
OS support: Windows, vSphere, HP-UX, Linux, OVMS, Mac OS X, Solaris, Hyper-V

P4000 LeftHand (Virtual IT)
Architecture: Scale-out Cluster
Connectivity: iSCSI
Performance: 35K random read IOPS; 2.6 GB/s sequential reads
Application sweet spot: SMB, ROBO and enterprise, virtualized incl. VDI, Microsoft apps, BladeSystem SAN (P4800)
Capacity: 7TB to 768TB; 72TB average
Key features: All-inclusive SW, multi-site DR included, virtualization, VM integration, Virtual SAN Appliance
OS support: vSphere, Windows, Linux, HP-UX, Mac OS X, AIX, Solaris, XenServer

P6000 EVA (Application Consolidation)
Architecture: Dual Controller
Connectivity: FC, iSCSI, FCoE
Performance: 55K random read IOPS; 1.7 GB/s sequential reads
Application sweet spot: Enterprise - Microsoft, virtualized, OLTP
Capacity: 2TB to 480TB; 36TB average
Key features: Ease of use and simplicity, integration/compatibility, multi-site failover
OS support: Windows, VMware, HP-UX, Linux, OVMS, Mac OS X, Solaris, AIX

3PAR (Utility Storage)
Architecture: Mesh-Active Cluster
Connectivity: iSCSI, FC, (FCoE)
Performance: > 400K random IOPS; > 10 GB/s sequential reads
Application sweet spot: Enterprise and service provider, utilities, cloud, virtualized environments, OLTP, mixed workloads
Capacity: 5TB to 1600TB; 120TB average
Key features: Multi-tenancy, efficiency (Thin Provisioning), performance, autonomic tiering and management
OS support: vSphere, Windows, Linux, HP-UX, AIX, Solaris

P9500 (Mission Critical Consolidation)
Architecture: Fully Redundant
Connectivity: FC, FCoE
Performance: > 300K random IOPS; > 10 GB/s sequential reads
Application sweet spot: Large enterprise - mission critical with extreme availability, virtualized environments, multi-site DR
Capacity: 10TB to 2000TB; 150TB average
Key features: Constant data availability, heterogeneous virtualization, multi-site disaster recovery, application QoS (APEX), Smart Tiers
OS support: All major OSs including Mainframe and NonStop
B-Series SAN Portfolio: Brocade switch, director, HBA and software family
SN8000B 8-slot: 32 to 384 16Gb FC ports, 2.11Tb ICL bandwidth
SN8000B 4-slot: 32 to 192 16Gb FC ports, 1Tb ICL bandwidth
DC SAN Backbone Director: 32 to 512 8Gb FC ports + 4x 128Gb ICL
DC04 SAN Director: 32 to 256 8Gb FC ports + 4x 64Gb ICL
Director Blades: 16Gb FC 32 & 48 port; 8Gb FC & FICON 16, 32, 48 & 64 port; MP Router 16 FC + 2 IP port; 10/24 FCoE; MP Extension; DC Encryption
SN6000B FC Switch: 24 to 48 16Gb ports
8/80 SAN Switch: 48 to 80 8Gb ports
8/40 SAN Switch: 24 to 40 8Gb ports
8/8 & 8/24 SAN Switch: 8 to 24 8Gb ports
1606 Extension SAN Switch: FC & GbE
HP 400 MP-Router: 16x 4Gb FC + 2 GbE IP ports
Encryption Switch: 32x 8Gb FC ports
HP 2408 CEE ToR Switch: 24x 10Gb CEE + 8x 8Gb FC ports
Integrated 8Gb SAN Switch for HP EVA4400
8Gb SAN Switch for HP c-Class BladeSystem
Host Bus Adapters: 4Gb/s single and dual port HBA; 8Gb/s single and dual port HBA
Data Center Fabric Manager: enhanced capabilities
C-Series SAN Portfolio: Cisco MDS9000 and Nexus 5000 family
FC switches, MDS9000 multiprotocol switches and directors: MDS 9124, MDS 9134, MDS 9124e c-Class switch, SN6000C (MDS 9148), MDS 9222i, MDS 9506, MDS 9509, MDS 9513
MDS9000 blades: Supervisor 2; 1-4Gb FC 12, 24, 48-port; 1-8Gb FC 24 & 48-port; 10Gb FC 4-port; 18/4 IP Storage Services blade; SSM Virtualization blade
Nexus DCE/CEE ToR switches: Nexus 5010 (20 to 28 ports), Nexus 5020 (40 to 56 ports), Nexus Expansion Modules (FC, FC/10Gb Eth, 10Gb Eth)
Management: Cisco Fabric Manager; Fabric Manager Server Package with enhanced capabilities
Embedded OS: Cisco NX-OS
New Hardware: Announcement of 23rd August 2011
New HP 3PAR top models P10000 V400 and V800
Higher performance: 1.5 to 2 times T-Class
New SPC-1 performance world record of 450,213 IOPS
Higher capacities: 2 times T-Class (V400: 800TB; V800: 1600TB)
Higher number of drives: 1.5 times T-Class (V400: 960 disks; V800: 1920 disks)
New faster Gen4 ASIC, now 2 per node
PCIe bus architecture provides higher bandwidth and resilience
8Gb FC ports for higher I/O performance
Chunklet size increased to 1GB to address future higher capacities
T10 DIF for increased data resilience
New InForm OS and Features: Announcement of 23rd August 2011
New InForm OS 3.1.1 for F-, T- and V-Class
64-bit architecture
Remote Copy enhancements: Thin Remote Copy reduces initial copy size; more FC RC links (up to 4, from 2)
Firmware upgrade enhancements: all upgrades are now node by node; RC copy groups can now stay online during FW upgrades
New additional Virtual Domain user roles
More granular 16kB Thin Provisioning space reclamation
VMware enhancements: automated VM space reclamation (T10 compliant); VASA support
Peer Motion for F-, T- and V-Class: allows transparent tiering and data migration between F-, T- and V-Class systems
New license bundles: Thin Suite for F-, T- and V-Class; Optimization Suite for V-Class
HP 3PAR InServ Storage Servers

F200: 2 controller nodes; 0-12 FC host ports; 0-8 optional 1Gb iSCSI ports; 10Gb iSCSI/FCoE n/a; 2 built-in IP Remote Copy ports; 8GB control cache; 12GB data cache; 16-192 disk drives; max capacity 128TB; read throughput 1,300 MB/s; 34,400 IOPS (true backend IOs)
F400: 2-4 controller nodes; 0-24 FC host ports; 0-16 optional 1Gb iSCSI ports; 10Gb iSCSI/FCoE n/a; 2-4 built-in IP Remote Copy ports; 8-16GB control cache; 12-24GB data cache; 16-384 disk drives; max capacity 384TB; read throughput 2,600 MB/s; 76,800 IOPS; SPC-1 benchmark 93,050 IOPS
T400: 2-4 controller nodes; 0-48 FC host ports; 0-16 optional 1Gb iSCSI ports; 10Gb iSCSI/FCoE n/a; 2-4 built-in IP Remote Copy ports; 8-16GB control cache; 24-48GB data cache; 16-640 disk drives; max capacity 400TB; read throughput 3,800 MB/s; 120,000 IOPS
T800: 2-8 controller nodes; 0-96 FC host ports; 0-32 optional 1Gb iSCSI ports; 10Gb iSCSI/FCoE n/a; 2-4 built-in IP Remote Copy ports; 8-32GB control cache; 24-96GB data cache; 16-1,280 disk drives; max capacity 800TB; read throughput 5,600 MB/s; 240,000 IOPS; SPC-1 benchmark 224,990 IOPS
V400: 2-4 controller nodes; 0-96 FC host ports; 0-32 optional 10Gb iSCSI ports 3); 0-32 optional 10Gb FCoE ports 3); 2-4 built-in IP Remote Copy ports; 32-64GB control cache; 64-128GB data cache; 16-960 disk drives; max capacity 800TB; read throughput 6,500 MB/s; 180,000 IOPS
V800: 2-8 controller nodes; 0-192 FC host ports; 0-32 optional 10Gb iSCSI ports 3); 0-32 optional 10Gb FCoE ports 3); 2-4 built-in IP Remote Copy ports; 64-256GB control cache; 128-512GB data cache; 16-1,920 disk drives; max capacity 1600TB; read throughput 13,000 MB/s; 360,000 IOPS; SPC-1 benchmark 450,213 IOPS

Drive types (all models): SSD 100, 200GB 1); FC 15krpm 300, 600GB; NL 7.2krpm 2TB 2)
Same OS, same Management Console, same replication software
1) max. 32 SSD per node pair
2) NL = Nearline = Enterprise SATA
3) Planned 1H2012
Comparison between T- and V-Class

Bus architecture: T-Class PCI-X; P10000 PCIe
CPUs: T-Class 2 x dual-core per node; P10000 2 x quad-core per node
ASIC: T-Class 1 per node; P10000 2 per node
Control cache: T-Class 4GB per node; P10000 V400 16GB per node, V800 32GB per node
Data cache: T-Class 12GB per node; P10000 V400 32GB per node, V800 64GB per node
I/O slots: T-Class 6; P10000 9
FC host ports: T-Class 0-128 at 4Gb/s; P10000 0-192 at 8Gb/s
iSCSI host ports: T-Class 0-32 at 1Gb/s; P10000 0-32 at 10Gb/s*
FCoE host ports: T-Class N/A; P10000 0-32 at 10Gb/s*
Rack options: T-Class 2m HP 3PAR rack; P10000 2m HP 3PAR rack or 3rd-party rack for V400*
Drives: T-Class 16-1280; P10000 16-1920
Max capacity: T-Class 800TB; P10000 1.6PB
T10 DIF: T-Class N/A; P10000 supported
*planned
P10000 3PAR: Bigger, Faster, Better! ...all round
[Chart comparing F-Class, T-Class and P10000 scaling: disk drives (up to 1,920), raw capacity (up to 1,600TB), total cache (up to 768GB), host ports, disk IOPS (up to 360,000) and throughput (up to 13,000 MB/s); depending on the metric the P10000 delivers roughly 1.5x to 6x the previous generations.]
Scalable performance: SPC-1 IOPS HP 3PAR P10000 World Record
For details see: http://www.storageperformance.org/results/benchmark_results_spc1
Scalable performance without high cost: SPC-1 $/IOPS
For details see: http://www.storageperformance.org/results/benchmark_results_spc1
HP 3PAR Four Simple Building Blocks (F-Class, T-Class, V-Class)
Controller Nodes: performance/connectivity building block; CPU, cache and 3PAR ASIC; system management; RAID and thin calculations
Node Mid-Plane: cache-coherent interconnect; completely passive, encased in steel; defines scalability
Drive Chassis: capacity building block; F chassis 3U 16 disks, T & V chassis 4U 40 disks
Service Processor: one 1U SVP per system; only for service and monitoring
HP 3PAR Architectural differentiation: Purpose-built on native virtualization
HP 3PAR Utility Storage (F-, T- & V-Class)
ASIC: Active Mesh, Fast RAID 5/6, Mixed Workload, Zero Detection
3PAR InForm Operating System Software: fine-grained OS; Full Copy, LDAP, Rapid Provisioning, Access Guard; Self-Configuring, Self-Healing, Self-Monitoring, Self-Optimizing, Autonomic Policy Management; Performance Instrumentation, Utilization, Manageability
Additional HP 3PAR Software: Virtual Domains, Virtual Lock, System Reporter, Virtual Copy, Peer Motion, Recovery Managers, Remote Copy, Cluster Extension
F-, T-, V-Class Thin Suite: Thin Provisioning, Thin Conversion, Thin Persistence
V-Class Optimization Suite: Dynamic Optimization, System Tuner, Adaptive Optimization
HP 3PAR ASIC: Hardware-based for performance
Fast RAID 10, 50 & 60; rapid RAID rebuild; integrated XOR engine
Tightly-coupled cluster: high-bandwidth, low-latency interconnect
Mixed workload: independent metadata and data processing
Thin Built In: zero detect
Legacy vs. HP 3PAR Hardware Architecture: Traditional tradeoffs
Traditional modular storage: cost-efficient, but scalability and resiliency limited by the dual-controller design
Traditional monolithic storage: scalable and resilient but costly; does not meet multi-tenant requirements efficiently
HP 3PAR meshed and active: cost-effective, scalable and resilient architecture; meets cloud-computing requirements for efficiency, multi-tenancy and autonomic management
(Diagram legend: host connectivity, disk connectivity, distributed controller functions)
HP 3PAR Hardware Architecture: Scale without tradeoffs
A finely, massively and automatically load-balanced cluster
3PAR InSpire F-Class architecture and 3PAR InSpire T- and V-Class architecture
(Diagram legend: host connectivity, data cache, disk connectivity, passive backplane, 3PAR ASIC)
3PAR Mixed Workload Support: Multi-tenant performance
I/O processing in traditional storage: hosts share a unified processor and/or memory, so control information (metadata) and data contend for the same resources; small IOPS wait for large IOPS to be processed when heavy transaction and heavy throughput workloads are applied together.
I/O processing in a 3PAR controller node: control information is handled by the control processor & memory while data moves through the 3PAR ASIC & memory; control information and data are pathed and processed separately between host interface and disk interface, so heavy transaction and heavy throughput workloads are sustained simultaneously.
3PAR Adaptive Cache: Self-adapting cache, 50 to 100% for reads / 50 to 0% for writes
[Chart: MBs of cache dedicated to writes per node versus % read IOPS from host, shown for host loads of 20K, 30K and 40K IOPS. Measured system: 2-node T800 with 320 15K FC disks and 12GB data cache per node.]
HP 3PAR High Availability: Spare disk drives vs. distributed sparing
Traditional arrays: dedicated spare drive; few-to-one rebuild causes hotspots and long rebuild exposure
3PAR InServ: spare chunklets; many-to-many rebuild runs parallel rebuilds in less time
HP 3PAR High Availability: Guaranteed drive shelf availability
Traditional arrays: shelf-dependent RAID; RAID group members sit in the same shelf, so a shelf failure might mean no access to data
3PAR InServ: shelf-independent RAID; raidlet group members are spread across shelves, so data access is preserved despite a shelf failure
HP 3PAR High Availability: Write-cache re-mirroring
Traditional arrays: with traditional write-cache mirroring, a controller failure means either poor performance due to write-through mode or the risk of write-data loss; the write cache is switched off for data security
3PAR InServ (F400, T400, T800, V400, V800): persistent write-cache mirroring; no write-through mode, so performance stays consistent; the write cache stays on thanks to redistribution; works with 4 and more nodes
HP 3PAR virtualization advantage
Traditional array: each RAID level (RAID1, RAID5, RAID6 sets) requires dedicated disks; dedicated spare disks are required; limited single-LUN performance
HP 3PAR InServ: all RAID levels (R1, R5, R6) can reside on the same physical disks; distributed sparing, no dedicated spare disks; built-in wide striping based on chunklets
HP 3PAR F-Class InServ Components
Controller Nodes (4U): performance and connectivity building block; adapter cards; add non-disruptively; each runs an independent OS instance
Drive Chassis (3U): capacity building block; drive magazines; add non-disruptively; industry-leading density
Full-mesh backplane: post-switch architecture; high performance, tightly coupled; completely passive
Service Processor (1U): remote error detection; supports diagnostics and maintenance; reporting to HP 3PAR Central
Housed in a 3PAR 40U 19-inch cabinet or customer-provided rack
HP 3PAR F-Class Node: Configuration options
One Xeon quad-core 2.33GHz CPU and one 3PAR Gen3 ASIC per node
4GB control cache and 6GB data cache per node
Built-in I/O ports per node: 10/100/1000 Ethernet management port & RS-232; Gigabit Ethernet port for Remote Copy (IP replication); 4x 4Gb/s FC ports (2 built-in FC disk ports, 2 built-in FC disk or host ports)
Optional I/O per node: up to 4 more FC or iSCSI ports (mixable); Slot 0 and Slot 1 each take 2 FC ports for host, disk or FC replication, or 2 GbE iSCSI ports
Preferred slot usage (in order), depending on customer requirements:
Disk connections: Slot 2 (ports 1,2), 0, 1 for higher backend connectivity and performance
Host connections: Slot 2 (ports 3,4), 1, 0 for higher front-end connectivity and performance
RCFC connections: Slot 1 or 0; enables FC-based Remote Copy (first node pair only)
iSCSI connections: Slot 1, 0; adds iSCSI connectivity
HP 3PAR InSpire Architecture: F-Class Controller Node
Cache per node: control cache 4GB (2x 2048MB DIMMs); data cache 6GB (3x 2048MB DIMMs)
SATA: local boot disk
Gen3 ASIC: data movement, XOR RAID processing, built-in Thin Provisioning
I/O per node: 3 PCI-X buses / 2 PCI-X slots and one onboard 4-port FC HBA
Quad-core Xeon 2.33 GHz, high-speed data links, multifunction controller
F-Class DC3 Drive Chassis Configurations: Minimum F-Class configurations (non-daisy-chained and daisy-chained)
Minimum configuration: 2 drive chassis, 16 identical drives; minimum upgrade is 8 drives
F-Class DC3 Drive Chassis Configurations: Maximum 2-node F-Class configurations
Non-daisy-chained: 96 drives
Daisy-chained: 192 drives
Connectivity Options per F-Class Node Pair
(Columns: Ports 0-1 | Ports 2-3 | PCI Slot 1 | PCI Slot 2 | FC host ports | iSCSI ports | Remote Copy FC ports | drive chassis | max disks)
Disk | Host | - | - : 4 FC host ports, 4 drive chassis, 64 disks
Disk | Host | Host | - : 8 FC host ports, 4 drive chassis, 64 disks
Disk | Host | Host | Host : 12 FC host ports, 4 drive chassis, 64 disks
Disk | Host | Host | iSCSI : 8 FC host ports, 4 iSCSI ports, 4 drive chassis, 64 disks
Disk | Host | iSCSI | RCFC : 4 FC host ports, 4 iSCSI ports, 2 RC FC ports, 4 drive chassis, 64 disks
Disk | Host | Disk | - : 4 FC host ports, 8 drive chassis, 128 disks
Disk | Host | Disk | Host : 8 FC host ports, 8 drive chassis, 128 disks
Disk | Host | Disk | iSCSI : 4 FC host ports, 4 iSCSI ports, 8 drive chassis, 128 disks
Disk | Host | Disk | RCFC : 4 FC host ports, 2 RC FC ports, 8 drive chassis, 128 disks
Disk | Host | Disk | Disk : 4 FC host ports, 12 drive chassis, 192 disks
HP 3PAR T-Class InServ Components
Controller Nodes (4U): performance and connectivity building block; adapter cards; add non-disruptively; each runs an independent OS instance
Drive Chassis (4U): capacity building block; drive magazines; add non-disruptively; industry-leading density
Full-mesh backplane: post-switch architecture; high performance, tightly coupled; completely passive
Service Processor (1U): remote error detection; supports diagnostics and maintenance; reporting to HP 3PAR Central
Housed in a 3PAR 40U 19-inch cabinet with built-in cable management
The 3PAR Evolution: Bus to switch to full-mesh progression
3PAR InServ full-mesh backplane: high performance / low latency; passive circuit board; slots for controller nodes; links every controller (full mesh); 1.6 GB/s per link (4 times 4Gb FC); 28 links (T800); single hop
3PAR InServ T800 with 8 nodes: 8 ASICs with 44.8 GB/s bandwidth; 16 Intel dual-core processors; 32GB of control cache; 96GB total data cache; 24 I/O buses totaling 19.2 GB/s of peak I/O bandwidth; 123 GB/s peak memory bandwidth
(Shown: T800 with 8 nodes and 640 disks of 1,280 max)
HP 3PAR T-Class Controller Node
2 to 8 per system, installed in pairs
2 Intel dual-core 2.33 GHz CPUs per node
16GB cache per node: 4GB control / 12GB data
Gen3 ASIC: data movement, ThP & XOR RAID processing
Scalable connectivity per node: 3 PCI-X buses / 6 PCI-X slots
Preferred slot usage (in order): 2 slots for 8 FC disk ports; up to 3 slots for 24 FC host ports; 1 slot with 1 FC port used for Remote Copy (first node pair only); up to 2 slots for 8x 1GbE iSCSI host ports
T-Class node pair rear view: PCI slots 0-5 per node; console port C0; Remote Copy Ethernet port E1; management Ethernet port E0; host FC/iSCSI/RC FC ports and disk FC ports
HP 3PAR InSpire architecture: T-Class Controller Node
Scalable performance per node: 2 to 8 nodes per system
Gen3 ASIC: data movement, XOR RAID processing, built-in Thin Provisioning
2 Intel dual-core 2.33 GHz CPUs for control processing
SATA: local boot disk
Max host-facing adapters: up to 3 (3 FC / 2 iSCSI)
Scalable connectivity per node: 3 PCI-X buses / 6 PCI-X slots
T-Class DC04 Drive Chassis
From 2 to 10 drive magazines; (1+1) redundant power supplies; redundant dual FC paths; redundant dual switches
Each magazine always holds 4 disks of the same drive type
Magazines in a chassis can have different drive types, for example: 3 magazines of FC, 1 magazine of SSD, 6 magazines of SATA
T400 Configuration examples
* The diagram is not intended to show all components in the 2m cabinet, but rather how controllers and drive chassis scale. Controllers and drive chassis are populated from bottom to top.
The minimum configuration is 2 nodes and 4 drive chassis with 2 magazines per chassis. A starting configuration with 600GB drives is therefore 19.2TB of raw storage.
Upgrades are done as columns of magazines down the drive chassis. In this example we added 4 600GB magazines, or 16 drives.
Once we fill up the original 4 drive chassis we have a choice: add 2 more nodes, drive chassis and disks, or just add 4 more drive chassis and some disks. Considerations:
Do I need more IOPS performance? A node pair can drive 320 15K disks or 8 fully loaded chassis. It is virtually impossible to run out of CPU power with so few drives; only SSD drives may hit node IOPS and CPU limits.
Do I need more bandwidth? A node's bandwidth can be reached with much fewer resources; adding nodes increases overall bandwidth.
T400 Configuration examples: How do we grow?
After looking at the performance requirements it is decided that adding capacity to the existing nodes is the best option. This offers a good balance of capacity and performance.
The next upgrade is going to require additional controller nodes, drive chassis and drive magazines. The minimum upgrade allowed is: 2 controller nodes, 4 drive chassis, 8 drive magazines.
Just because you can do something doesn't mean it is a good idea. This upgrade makes the node pairs very unbalanced: over 50,000 IOPS on 2 nodes and 6,400 on the other 2; over 320TB on one node pair and 19TB on the other.
A much cleaner upgrade would be to add a lot more FC capacity. This brings the node IOPS balance much closer (44,800 to 32,000 FC IOPS). There will still be a lot more capacity behind 2 nodes, but the volumes that need more IOPS can be balanced across all FC disks.
Due to power distribution limits in a 3PAR rack you can only have 8 chassis per rack. A T400 with 8 chassis requires 2 full racks and a mostly unfilled 3rd rack.
We decide that the next upgrade should be filling out the first two nodes.
T400 Configuration examples
You'll notice that the T400 has space for 6 drive chassis, but the normal building block is 4 chassis. With a T400 you are allowed to deploy 6 drive chassis on the initial deployment, but this has some important caveats:
Minimum upgrade increments are 6 magazines (24 drives); in this example with 600GB drives that is a minimum upgrade of 14TB.
This is the maximum configuration in one rack: 2 nodes, 6 chassis, 60 magazines, 240 drives.
The next minimum upgrade requires 2 nodes and 6 chassis with 12 magazines of 48 drives.
You can finally fill out the configuration by adding 4 more drive chassis (2 per node).
Important note: to rebalance you probably need a TS engagement.
T800 Fully Configured: 224,000 SPC-1 IOPS
8 nodes, 32 drive chassis, 1,280 drives
768TB raw capacity with 600GB drives
Disk chassis/frames may be up to 100m apart from the controllers (first frame)
T-Class redundant power
Controller nodes and disk chassis (shelves) are powered by (1+1) redundant power supplies. The controller nodes are backed up by a string of two batteries.
HP P10000 3PAR V400 Components
First rack with controllers and disks; expansion rack(s) with disks only
Controller Nodes: performance and connectivity building block; adapter cards; add non-disruptively; each runs an independent OS instance
Drive Chassis (4U): capacity building block; up to 6 in the first rack, 8 in expansion racks; 2 to 10 drive magazines; add non-disruptively; industry-leading density
Full-mesh backplane: post-switch architecture; high performance, tightly coupled; completely passive
Service Processor (1U): remote error detection; supports diagnostics and maintenance; reporting to HP 3PAR Central
HP P10000 3PAR V800 Components
First rack with controllers and disks; expansion rack(s) with disks only
Controller Nodes: performance and connectivity building block; adapter cards; add non-disruptively; each runs an independent OS instance
Drive Chassis (4U): capacity building block; 2 in the first rack, 8 in expansion racks; 2 to 10 drive magazines; add non-disruptively; industry-leading density
Full-mesh backplane: post-switch architecture; high performance, tightly coupled; completely passive
Service Processor (1U): remote error detection; supports diagnostics and maintenance; reporting to HP 3PAR Central
The 3PAR V-Class Evolution: Bus to switch to full-mesh progression
V-Class full-mesh backplane: high performance / low latency; 112 GB/s backplane bandwidth; passive circuit board; slots for controller nodes; links every controller (full mesh); 2.0 GB/s ASIC to ASIC; single hop
Fully configured P10000 3PAR V800: 8 controller nodes; 16 Gen4 ASICs (2 per node); 16 Intel quad-core processors; 256GB of control cache; 512GB total data cache; 136 GB/s peak memory bandwidth
(Shown: max V800 configuration with 8 nodes and 1,920 disks)
HP 3PAR V-Class Controller Node
2 to 8 per system, installed in pairs
2 Intel quad-core CPUs per node
48GB or 96GB cache per node: V400 16GB control / 32GB data; V800 32GB control / 64GB data
2 Gen4 ASICs per node: data movement, ThP & XOR RAID processing
Scalable connectivity per node: 3 PCIe buses / 9 PCIe slots; 4-port 8Gb/s FC adapter; 10Gb/s FCoE ready (post GA); 10Gb/s iSCSI ready (post GA)
Internal SSD drive for the InForm OS and for cache destaging in case of power failure
P10000 3PAR controller rear view: PCI slots 0-8; management Ethernet port E0; Remote Copy Ethernet port RCIP E1; serial ports
PCIe card installation order: drive chassis connections 6, 3, 0; host connections 2, 5, 8, 1, 4, 7; Remote Copy FC connections 1, 4, 2, 3
HP 3PAR InSpire architecture: V-Class Controller Node
Scalable performance per node: 2 to 8 nodes per system
Thin Built In Gen4 ASIC: 2.0 GB/s dedicated ASIC-to-ASIC bandwidth; 112 GB/s total backplane bandwidth; inline fat-to-thin processing in the DMA engine
2 x Intel quad-core processors
V400: 48GB cache (16GB control / 32GB data); V800: 96GB maximum cache (32GB control / 64GB data)
8Gb/s FC host/drive adapters; 10Gb/s FCoE/iSCSI host adapter (planned); warm-plug adapters
Node layout: multifunction controller with PCIe switches and PCIe slots; control (SCSI command path) is handled by the processors, data paths by the two Gen4 ASICs
V-Class P10000 Drive Chassis
From 2 to 10 drive magazines; (1+1) redundant power supplies; redundant dual FC paths; redundant dual switches
Each magazine always holds 4 disks of the same drive type
Magazines in a chassis can have different drive types, for example: 3 magazines of FC, 1 magazine of SSD, 6 magazines of SATA
V400 Configuration Examples: With 2 controllers and 4-drive-chassis increments
Minimum initial configuration: 1 rack, 2 controller nodes, 4 drive chassis, 8 drive magazines (32 disks)
Minimum upgrade: 4 drive magazines (16 disks)
Maximum 2-node configuration: 2 racks, 12 drive chassis, 480 disks
V400 Configuration Examples: With 4 controllers and 4-drive-chassis increments
Maximum configuration: 4 racks, 4 controller nodes, 24 drive chassis, 960 disks
V800 Configuration Examples: With 2 controllers and 4-drive-chassis increments
Minimum initial configuration: 2 racks, 2 controller nodes, 4 drive chassis, 8 drive magazines (32 disks)
Minimum upgrade: 4 drive magazines (16 disks); up to 160 disks in 4 chassis
V800 Configuration Examples: With 4 controllers and 4-drive-chassis increments
Minimum initial configuration: 2 racks, 4 controller nodes, 8 drive chassis, 16 drive magazines (64 disks)
Minimum upgrade: 4 drive magazines (16 disks); up to 320 disks in 8 chassis
V800 Configuration Examples: With 8 controllers and 4-drive-chassis increments
Minimum initial configuration: 3 racks, 8 controller nodes, 16 drive chassis, 32 drive magazines (128 disks)
Minimum upgrade: 4 drive magazines (16 disks); up to 640 disks in 16 chassis
V800 Configuration Examples: Maximum 8-controller configuration, 450,213 SPC-1 IOPS
7 racks, 8 controller nodes, 192 host ports, 768GB cache (256GB control / 512GB data), 48 drive chassis, 1,920 disks
Disk chassis/frames may be up to 100m apart from the controllers (first frame)
HP 3PAR InForm OS Virtualization Concepts
HP 3PAR Virtualization Concept: Example 4-node T400 with 8 drive chassis
Drive chassis are point-to-point connected to controller nodes via the 3PAR mid-plane. In the T-Class this provides cage-level availability: the system can withstand the loss of an entire drive enclosure without losing access to your data.
Nodes are added in pairs for cache redundancy.
An InServ with 4 or more nodes supports Cache Persistence, which enables maintenance windows and upgrades without performance penalties.
HP 3PAR Virtualization Concept: Example 4-node T400 with 8 drive chassis
T-Class drive magazines hold 4 of the very same drives (SSD, FC or SATA; same size and speed). SSD, FC and SATA drive magazines can be mixed.
A minimum configuration has 2 magazines per enclosure.
Each physical drive is divided into chunklets of 256MB on F- and T-Class and 1GB on V-Class.
HP 3PAR Virtualization Concept: Example 4-node T400 with 8 drive chassis
RAID sets (e.g. RAID5 3+1) are built across enclosures and massively striped to form Logical Disks (LD).
LDs are equally allocated to controller nodes.
Logical Disks are bound together to build Virtual Volumes. Each Virtual Volume is automatically wide-striped across chunklets on all disk spindles of the same type, creating a massively parallel system.
Virtual Volumes can then be exported as LUNs to servers.
Why are Chunklets so Important?
Ease of use and drive utilization:
The same drive spindle can service many different LUNs and different RAID types at the same time
Allows the array to be managed by policy, not by administrative planning
Enables easy mobility between physical disks, RAID types and service levels by using Dynamic or Adaptive Optimization
Performance:
Enables wide striping across hundreds of disks
Avoids hot spots
Allows data restriping after disk installations
High availability:
HA Cage protects against a cage (disk tray) failure
HA Magazine protects against magazine failure
Common Provisioning Groups (CPG)
CPGs are policies that define service and availability level by:
Drive type (SSD, FC, SATA)
Number of drives (striping width)
RAID level (R10; R50 2D+1P to 8D+1P; R60 6D+2P or 14D+2P)
Multiple CPGs can be configured and can optionally overlap the same drives, i.e. a system with 200 drives can have one CPG containing all 200 drives and other CPGs with overlapping subsets of these 200 drives.
CPGs have many functions:
They are the policies by which free chunklets are assembled into logical disks
They are a container for existing volumes and are used for reporting
They are the basis for service levels and the optimization products
HP 3PAR Virtualization, the Logical View: The base for autonomic utility storage
Flow: Physical Disks -> Chunklets -> Logical Disks -> CPGs -> Virtual Volumes -> Exported LUNs
Physical disks are divided into chunklets (256MB or 1GB); the majority is used to build Logical Disks (LD), some are used for distributed sparing (3PAR autonomy)
Logical Disks (LD) are collections of raidlets: chunklets arranged as rows of RAID sets (RAID 0, 10, 50, 60); they provide the space for virtual volumes, snapshot and logging disks and are automatically created when required (3PAR autonomy)
Common Provisioning Groups (CPG) are user-created virtual pools of Logical Disks that allocate space to virtual volumes on demand; the CPG defines RAID level, disk type and number, striping pattern etc. (user initiated)
Virtual Volumes (VV) are user-created volumes composed of LDs according to the corresponding CPG policies; they can be fat or thin provisioned; the user exports a VV as a LUN (user initiated)
HP 3PAR Virtualization, the Logical View
[Diagram: chunklets on SSD, FC and Nearline physical disks are autonomically built into RAID1, RAID5 and RAID6 Logical Disks; user-created CPGs (including Adaptive Optimization CPGs) group these LDs; user-created fat and thin provisioned (ThP) Virtual Volumes draw from the CPGs and are exported by the user as LUNs.]
Create CPG(s): Easy and straightforward
In the Create CPG wizard, select and define:
3PAR system
Residing domain (if any)
Disk type: SSD (Solid State Disk), FC (Fibre Channel disk), NL (Near-Line SATA disk)
Disk speed
RAID type
By selecting advanced options, more granular options can be defined:
Availability level
Step size
Preferred chunklets
Dedicated disks
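The same CPG can also be created from the InForm CLI. This is a minimal, illustrative sketch only: the RAID type, set size, HA level and device-type options shown here should be verified against the InForm CLI reference for your release, and the CPG name is an example.

# Create a RAID 5 (3+1) CPG on FC disks with cage-level availability (illustrative syntax)
createcpg -t r5 -ssz 4 -ha cage -p -devtype FC FC_R5_CPG

# Review the new CPG
showcpg FC_R5_CPG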
Create Virtual Volume(s): Easy and straightforward
In the Create Virtual Volume wizard, define:
Virtual Volume name
Size
Provisioning type: fat or thinly provisioned
CPG to be used
Allocation warning
Number of Virtual Volumes
By selecting advanced options, more options can be defined:
Copy space settings
Virtual Volume geometry
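From the InForm CLI the equivalent is a single command. A minimal sketch, assuming the CPG FC_R5_CPG from the earlier example already exists; the volume names and sizes are illustrative and the flags should be checked against the CLI reference.

# Thinly provisioned 500GB virtual volume drawing from FC_R5_CPG (illustrative syntax)
createvv -tpvv FC_R5_CPG vv_oradata 500G

# Fully (fat) provisioned 100GB volume from the same CPG
createvv FC_R5_CPG vv_oralog 100G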
Export Virtual Volume(s): Easy and straightforward
In the Export Virtual Volume wizard, define:
Host or host set to be presented to
Optionally select specific array host ports
Specify LUN ID
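The CLI counterpart is createvlun, which creates the volume-to-host export. A minimal sketch with an example host name and LUN ID; the exact option behaviour should be confirmed in the CLI reference.

# Export vv_oradata as LUN 10 to host esx01 (illustrative syntax)
createvlun vv_oradata 10 esx01

# List the active exports for that host
showvlun -host esx01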
HP 3PAR Autonomic Groups: Simplify provisioning
Traditional storage (individual volumes V1-V10 exported to a cluster of VMware ESX servers):
Initial provisioning of the cluster requires 50 provisioning actions (1 per host-volume relationship)
Adding another host requires 10 provisioning actions (1 per volume)
Adding another volume requires 5 provisioning actions (1 per host)
Autonomic HP 3PAR storage (autonomic host group and autonomic volume group):
Initial provisioning of the cluster: add hosts to the host group, add volumes to the volume group, export the volume group to the host group
Add another host: just add the host to the host group
Add another volume: just add the volume to the volume group; volumes are exported automatically
A CLI sketch of this flow follows below.
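A minimal CLI sketch of host-set and volume-set provisioning, assuming the hosts and volumes already exist on the array; the set, host and volume names are examples and the syntax should be verified against the InForm CLI reference.

# Build an autonomic host group and volume group (illustrative syntax)
createhostset ESX_Cluster esx01 esx02 esx03 esx04 esx05
createvvset   DS_Set vv_ds1 vv_ds2 vv_ds3

# One export covers every host/volume combination in the two sets
createvlun set:DS_Set auto set:ESX_Cluster

# Later growth: new members inherit the existing export automatically
createhostset -add ESX_Cluster esx06
createvvset   -add DS_Set vv_ds4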
HP 3PAR InForm Software and Features
HP 3PAR Software and Licensing: Four license models (consumption based, spindle/magazine based, frame based, free*)
InForm Operating System: Full Copy, Thin Copy Reclamation, RAID MP (Multi-Parity), Autonomic Groups, Rapid Provisioning, Access Guard, LDAP, Scheduler, Host Personas, InForm Administration Tools
InForm Additional Software: Thin Suite (Thin Provisioning, Thin Conversion, Thin Persistence), Optimization Suite (Dynamic Optimization, Adaptive Optimization, System Tuner), Virtual Copy, Remote Copy, Virtual Domains, Virtual Lock, Peer Motion, System Reporter
InForm Host Software: Recovery Manager for Oracle, Recovery Manager for VMware, Recovery Manager for Exchange, Recovery Manager for SQL, Host Explorer, Multi Path IO for IBM AIX, Multi Path IO for Windows 2003, 3PAR Manager for VMware vCenter
* Support fee associated
HP 3PAR Software Support Cost & Capping
Care Pack Support Services for spindle-based licenses are charged by the number of magazines.
Support Services cost increases incrementally until it reaches a predefined threshold/cap and then stays flat, i.e. it will not increase anymore.
Capping threshold by array: F200 11 magazines; F400 13 magazines; T400 / V400 33 magazines; T800 / V800 41 magazines
Capping occurs for each software title per magazine type.
Example for InForm OS on a V800 with 3 years Critical Service:
50 x 600GB disk magazines -> 41 x HA112A3-QQ6 3PAR InForm V800/4x600GB Mag LTU Support
24 x 2TB disk magazines -> 24 x HA112A3-QQ6 3PAR InForm V800/4x2TB Mag LTU Support
24 x 200GB SSD magazines -> 24 x HA112A3-QQ6 3PAR InFrm V800/4x200GB SSD Mag LTU Support
The Thin Suite, Thin Provisioning, Thin Conversion and Thin Persistence do not have any associated support cost.
HP 3PAR Thin Technologies
HP 3PAR Thin Technologies Leadership Overview
Start Thin - Thin Provisioning: buy up to 75% less storage capacity; no pool management or reservations; no professional services; fine capacity allocation units; variable QoS for snapshots
Get Thin - Thin Conversion: reduce tech refresh costs by up to 60%; eliminate the time & complexity of getting thin; open, heterogeneous migrations from any array to 3PAR; service levels preserved during inline conversion
Stay Thin - Thin Persistence: thin deployments stay thin over time; free stranded capacity; automated reclamation for 3PAR offered by Symantec and Oracle; snapshots and remote copies stay thin
HP 3PAR Thin Technologies Leadership Overview
Built-in: HP 3PAR Utility Storage is built from the ground up to support Thin Provisioning (ThP), eliminating the diminished performance and functional limitations that plague bolt-on thin solutions.
In-band: sequences of zeroes are detected by the 3PAR ASIC and not written to disks. Most other vendors' ThP implementations write zeroes to disks; some can reclaim space only as a post-process.
Reservation-less: HP 3PAR ThP draws fine-grained increments from a single free-space reservoir without pre-dedication of any kind. Other vendors' ThP implementations require a separate, pre-dedicated pool for each data service level.
Integrated: API for direct ThP integration in Symantec File System, VMware, Oracle ASM and others.
HP 3PAR Thin Provisioning Start Thin
Dedicate on write only
Traditional array, dedicate on allocation: the physically installed disks must cover the full server-presented capacities/LUNs (the required net array capacities), regardless of how much data is actually written.
HP 3PAR array, dedicate on write only: the physically installed disks only need to cover the data actually written plus free chunklets, while the server-presented capacities/LUNs can be much larger.
HP 3PAR Thin Conversion Get Thin
Thin your online SAN storage by up to 75%
A practical and effective solution to eliminate costs associated with:
Storage arrays and capacity
Software licensing and support
Power, cooling, and floor space
The unique 3PAR Gen3 ASIC with built-in zero detection delivers:
Simplicity and speed: eliminate the time & complexity of getting thin
Choice: open and heterogeneous any-to-3PAR migrations
Preserved service levels: high performance during migrations
(Before/after: blocks of zeroes in the source volumes are detected inline by the ASIC and never consume space on the 3PAR array.)
HP 3PAR Thin Conversion Get Thin
How to get there
1. Defragment the source data
a) If you are going to do a block-level migration via an appliance or host volume manager (mirroring), you should defragment the filesystem prior to zeroing the free space.
b) If you are using filesystem copies to do the migration, the copy will defragment the files as it copies, eliminating the need to defragment the source filesystem.
2. Zero the existing volumes via host tools
a) On Windows use sdelete (a free utility available from Microsoft):
sdelete -c
b) On UNIX/Linux use dd to create files containing zeroes, e.g.
dd if=/dev/zero of=/path/10GB_zerofile bs=128K count=81920
or zero and delete a file directly with shred:
shred -n 0 -z -u /path/file
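For many volumes the zeroing step is easy to script. A minimal sketch for Linux hosts, assuming the filesystems to be migrated are already mounted; the mount points are example values.

#!/bin/sh
# Fill the free space on each mounted filesystem with zeroes, then remove the fill file.
# The zero blocks are detected inline by the 3PAR ASIC during migration and never allocated.
for fs in /data1 /data2 /oracle; do
    dd if=/dev/zero of="$fs/zerofill.tmp" bs=1M || true   # dd stops when the filesystem is full
    rm -f "$fs/zerofill.tmp"
    sync
done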
HP 3PAR Thin Conversion at a Global Bank
No budget for additional storage; the customer had recently had huge layoffs
Moved 271TB from EMC DMX to 3PAR: online/non-disruptive, no professional services, large capacity savings
"The results shown within this document demonstrate a highly efficient migration process which removes the unused storage. No special host software components or professional services are required to utilise this functionality."
[Chart: sample volume migrations on different OSs (Unix with VxVM, ESX with VMotion, Windows with SmartMove), comparing GB used on EMC vs. 3PAR.]
Capacity requirements reduced by more than 50%; $3 million savings in upfront capacity purchases; reduced power & cooling costs
HP 3PAR Thin Persistence Stay Thin
Keep your array thin over time
Non-disruptive and application-transparent re-thinning of thin provisioned volumes
Returns space to thin provisioned volumes and to the free pool for reuse
New with InForm 3.1.1: intelligently reclaims 16KB pages
The unique 3PAR Gen3 ASIC with built-in zero detection delivers:
Simplicity: no special host software required; leverage standard file system tools/scripts to write zero blocks
Preserved service levels: zeroes detected and unmapped at line speed
Integrated, automated reclamation with Symantec and Oracle
HP 3PAR Thin Persistence manual thin reclaim
Remember: deleted files still occupy disk space.
Initial state: LUN 1 and LUN 2 are ThP virtual volumes; Data 1 and Data 2 is actually written data; the rest of the array is free chunklets.
After a while: files deleted by the servers/file systems still occupy space on the storage (unused space inside the LUNs).
Zero out the unused space: on Windows use sdelete*, on UNIX/Linux use a dd script.
Run thin reclamation: compact the CPG and Logical Disks; the freed-up space is returned to the free chunklets.
A command-level sketch of this sequence follows below.
* sdelete is a free utility available from Microsoft
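A minimal sketch of the manual reclaim sequence, assuming a Linux host with the filesystem on LUN 1 mounted at /data and a CPG named FC_R5_CPG; names are examples and the compactcpg behaviour should be checked against the InForm CLI reference.

# On the host: overwrite the free space with zeroes, then remove the fill file
dd if=/dev/zero of=/data/zerofill.tmp bs=1M
rm -f /data/zerofill.tmp
sync

# On the 3PAR array (InForm CLI): return the zeroed space to the free chunklet pool
compactcpg FC_R5_CPG
showcpg FC_R5_CPG   # verify the reclaimed space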
HP 3PAR Thin Persistence Thin API
Partnered with Symantec and jointly developed a Thin API, an industry first: a file system / array communication API (WRITE SAME)
Most elements are now captured as part of the emerging T10 SCSI standard
HP has introduced the API to other operating system vendors (VMware, Microsoft) and offered development support
HP 3PAR Thin Persistence Oracle Integration
ASM with ASRU: increase database miles per gallon
Oracle auto-extend allows customers to save on database capacity with Thin Provisioning
Database capacity can get stranded after writes and deletes
3PAR Thin Persistence and the Oracle ASM Storage Reclamation Utility (ASRU) can reclaim 25% or more stranded capacity: after a tablefile shrink/drop or database drop, or after a new LUN is added to an ASM disk group
How it works: the Oracle ASRU utility compacts the ASM files and writes zeroes to the free space; 3PAR Thin Built In, ASIC-based zero detection then eliminates the free space
On a traditional array the unused space remains and the zeroes are written to disk; on a 3PAR array with Thin Persistence the files are compacted by ASRU, the zeroes are removed and the space is reclaimed
From a DBA perspective: non-disruptive and does not impact storage performance; the ASIC is a huge advantage
HP 3PAR Thin Persistence in VMware Environments
Introduced with vSphere 4.0: VMware VMFS supports three formats for VM disk images: Thin, Thick (ZeroedThick, ZT) and EagerZeroedThick (EZT). VMware recommends EZT for highest performance (more info: http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf). 3PAR Thin Technologies work with and optimize all three formats.
Introduced with vSphere 4.1: vStorage API for Array Integration (VAAI); Thin Technologies enabled by the 3PAR plug-in for VAAI; Thin VMotion uses XCOPY via the plug-in; Active Thin Reclamation uses Write-Same to offload zeroing to the array.
Introduced with vSphere 5.0: automated VM space reclamation; leverages the industry-standard T10 UNMAP command; supported with VMware vSphere 5.0 and InForm OS 3.1.1. A host-side sketch follows below.
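As a hedged illustration of how VMFS reclamation was typically driven from the ESXi host in that era (automated UNMAP in early 5.0 builds was later replaced by a manual trigger), the commands below are examples only; Datastore01 is a placeholder name and the exact tooling depends on the ESXi build.

# Manual space reclaim on a VMFS datastore from the ESXi shell (vSphere 5.0/5.1 era)
cd /vmfs/volumes/Datastore01
vmkfstools -y 60        # reclaim via a temporary balloon file covering 60% of free space

# Later ESXi releases replaced this with:
esxcli storage vmfs unmap -l Datastore01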
Autonomic VMware Space Reclamation
Fine-grained, fast reclaim vs. coarse, slow reclaim
Traditional storage with space reclaim: standard thin provisioning; T10 UNMAP handled at coarse granularity (768kB to 42MB); reclamation is a slow post-process with overhead; in the example 100GB datastore, 20GB of VMDKs left after deletions still consume 40+ GB.
HP 3PAR with Thin Persistence: scalable thin provisioning with ASIC zero detect; T10 UNMAP handled at 16kB granularity; rapid, inline reclamation; the same 20GB of VMDKs post deletions consume only about 20GB.
HP 3PAR Thin Provisioning positioning
Built-in, not bolt-on
No upfront allocation of storage for thin volumes
No performance impact when using thin volumes, unlike competing storage products
No restrictions on where 3PAR thin volumes should be used, unlike many other storage arrays
Allocation size of 16kB, which is much smaller than most ThP implementations
Thin provisioned volumes can be created in under 30 seconds without any disk layout or configuration planning required
Thin volumes are autonomically wide-striped over all drives within that tier of storage
Full and Virtual Copy
HP 3PAR Full Copy Flexible point-in-time copies
3PAR Full Copy (part of the base InForm OS)
Share data quickly and easily
Full physical point-in-time copy of a base volume
Independent of the base volume's RAID and physical layout properties for maximum flexibility
Fast resynchronization capability
Thin Provisioning-aware: full copies can consume the same physical capacity as the Thin Provisioned base volume
HP 3PAR Virtual Copy Snapshot at its best
3PAR Virtual Copy
Integration with Oracle, SQL, Exchange and VMware
A base volume can have 100s of snapshots, but just one CoW (copy-on-write); up to 8,192 snaps per array
Smart: promotable snapshots; individually deletable snapshots; scheduled creation/deletion; consistency groups
Thin: no reservations needed; non-duplicative snapshots; Thin Provisioning aware; variable QoS
Ready: instant readable or writeable snapshots; snapshots of snapshots; control given to the end user for snapshot management; Virtual Lock for retention of read-only snaps
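A minimal CLI sketch of taking and exporting a snapshot, assuming the base volume vv_oradata from the earlier examples; names are illustrative and the options should be checked against the InForm CLI reference.

# Read-only virtual copy (snapshot) of the base volume
createsv -ro snap_oradata_ro vv_oradata

# Read-write virtual copy layered on the read-only one, exported to a test host
createsv snap_oradata_rw snap_oradata_ro
createvlun snap_oradata_rw 20 testhost01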
HP 3PAR Virtual Copy - Snapshot at its best
Base volume and virtual copies can be mapped to different CPGs. This means that they can have different quality of service characteristics; for example, the base volume space can be derived from a RAID 1 CPG on FC disks and the virtual copy space from a RAID 5 CPG on Nearline disks.
The base volume space and the virtual copy space can grow independently without impacting each other (each space has its own allocation warning and limit).
Dynamic Optimization can tune the base volume space and the virtual copy space independently.
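A hedged CLI sketch of assigning different CPGs to the base volume and virtual copy space (CPG and volume names are assumptions):
    # Base volume space from an FC RAID 1 CPG, virtual copy (snapshot) space from a Nearline RAID 5 CPG
    createvv -snp_cpg NL_r5 FC_r1 vol_db 500G
    # Or change the snapshot CPG of an existing volume
    setvv -snp_cpg NL_r5 vol_db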
HP 3PAR Virtual Copy Relationships
The following shows a complex relationship scenario
Creating a Virtual Copy Using The GUI
Right-click and select Create Virtual Copy
InForm GUI View of Virtual Copies
The GUI gives a very easy-to-read graphical view of Virtual Copies:
HP 3PAR Remote Copy
HP 3PAR Remote Copy - Protect and share data
3PAR Remote Copy
Smart
- Initial setup in minutes
- Simple and intuitive commands
- No consulting services
- VMware SRM integration
Complete
- Native IP-based or FC
- No extra copies or infrastructure needed
- Thin provisioning aware
- Thin conversion
- Synchronous, Asynchronous Periodic or Synchronous Long Distance (SLD)
- Mirror between any InServ size or model; many to one, one to many
Diagram: 1:1 configuration - primary and secondary volumes (P/S) mirrored synchronously or asynchronous periodic, in either direction. Synchronous Long Distance 1:2 configuration - the primary mirrors synchronously to a secondary and asynchronous periodic to a tertiary, with a standby link from the secondary to the tertiary.
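A hedged CLI sketch of a 1:1 synchronous setup (target, group and volume names are assumptions; exact syntax may differ by InForm OS release):
    # On the primary array: create a synchronous group replicating to the target array "arrayB"
    creatercopygroup rcg_db arrayB:sync
    # Add a volume pair (primary vol_db mirrors to vol_db_dr on the target)
    admitrcopyvv vol_db rcg_db arrayB:vol_db_dr
    # Start replication for the group
    startrcopygroup rcg_db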
HP 3PAR Remote Copy N:1 Configuration
You can use Remote Copy over IP (RCIP) and/or Fibre Channel (RCFC) connections
InServ Requirements
Maximum supported configuration is 4 to 1
One of the 4 can mirror bi-directionally
Each RC relationship requires dedicated node-pairs; in a 4:1 setup the target site requires 8 node-pairs
Diagram: primary sites A, B and C and primary/target site D all replicate over Remote Copy links to a single target site.
HP 3PAR Remote Copy 1:N Configuration
You can use Remote Copy over IP (RCIP) and/or Fibre Channel (RCFC) connections
InServ Requirements
Maximum supported configuration is 1 to 2
One of the mirrors can be bi-directional
Each RC relationship requires dedicated node-pairs; the primary site requires 4 node-pairs
Diagram: one primary site replicates over Remote Copy links to target sites A and B.
HP 3PAR Remote Copy - Synchronous
Real-time Mirror
- Highest I/O currency
- Lock-step data consistency
Space Efficient
- Thin provisioning aware
Targeted Use
- Campus-wide business continuity
Write sequence (primary volume P, secondary volume S):
Step 1: Host server writes I/Os to primary cache
Step 2: InServ writes I/Os to secondary cache
Step 3: Remote system acknowledges the receipt of the I/O
Step 4: I/O complete signal communicated back to primary host
HP 3PAR Remote Copy
Data integrity
Assured Data Integrity
Single Volume
- All writes to the secondary volume are completed in the same order as they were written on the primary volume
Multi-Volume Consistency Group
- Volumes can be grouped together to maintain write ordering across the set of volumes
- Useful for databases or other applications that make dependent writes to more than one volume
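A hedged sketch of grouping dependent volumes so write ordering is preserved across them (group, target and volume names are assumptions):
    # Database data and log volumes admitted to the same remote copy group
    admitrcopyvv oradata rcg_db arrayB:oradata_dr
    admitrcopyvv oralog  rcg_db arrayB:oralog_dr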
Remote Copy Asynchronous Periodic
Diagram: base volumes and snapshots at the primary site replicate to base volumes and snapshots at the remote site.
Sequence:
1. Initial copy: base volume A is copied to the remote site; snapshot SA marks the replicated state.
2. Resynchronization starts with snapshots: a new snapshot SB is taken and only the B-A delta is copied to the remote site.
3. Upon completion the old snapshot is deleted and the group is ready for the next resynchronization.
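A hedged CLI sketch of controlling the resynchronization interval of an asynchronous periodic group (target name, group name and the period format are assumptions):
    # Resynchronize the group automatically every 15 minutes
    setrcopygroup period 15m arrayB rcg_async
    # Or trigger a delta resynchronization on demand
    syncrcopy rcg_async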
HP 3PAR Remote Copy
Supported Distances and Latencies
Remote Copy Type             Max Supported Distance    Max Supported Latency
Synchronous IP               210 km / 130 miles        1.3 ms
Synchronous FC               210 km / 130 miles        1.3 ms
Asynchronous Periodic IP     N/A                       150 ms round trip
Asynchronous Periodic FC     210 km / 130 miles        1.3 ms
Asynchronous Periodic FCIP   N/A                       60 ms round trip
Cluster Extension for Windows
Clustering solution protecting against server and storage failure
Diagram: a Microsoft Cluster with Cluster Extension stretched across Data Center 1 and Data Center 2 (up to 210 km apart), volumes mirrored with Remote Copy and a file share witness in Data Center 3.
What does it do?
- Manual or automated site-failover for server and storage resources
- Transparent Hyper-V Live Migration between sites
Supported environments:
- Microsoft Windows Server 2003
- Microsoft Windows Server 2008
- HP ProLiant Storage Server
- Up to 210 km (RC supported max)
Requirements:
- 3PAR Disk Arrays
- Remote Copy
- Microsoft Cluster
- Cluster Extension
- Max 20 ms network round-trip delay
See also http://h18006.www1.hp.com/storage/software/ces/index.html
Metrocluster for HP-UX
End-to-end clustering solution to protect against server and storage failure
Diagram: a Serviceguard for HP-UX cluster with HP Metrocluster stretched across Data Center 1 and Data Center 2 (up to 210 km apart), volumes mirrored with Remote Copy and a quorum server in Data Center 3.
What does it do?
- Provides manual or automated site-failover for server and storage resources
Supported environments:
- HP-UX 11i v2 & v3 with Serviceguard
- Up to 210 km (RC supported max)
Requirements:
- HP 3PAR Disk Arrays
- 3PAR Remote Copy
- HP Serviceguard Metrocluster
- Max 200 ms network round-trip delay
See also: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02967683/c02967683.pdf
VMware ESX DR with SRM
Automated ESX Disaster Recovery
Diagram: production site and recovery site, each with HP 3PAR storage, servers, VMware Infrastructure, virtual machines, VirtualCenter and Site Recovery Manager.
What does it do?
- Simplifies DR and increases reliability
- Integrates VMware Infrastructure with HP 3PAR Remote Copy and Virtual Copy
- Makes DR protection a property of the VM, allowing you to pre-program your disaster response
- Enables non-disruptive DR testing
Requirements:
- VMware vSphere
- VMware vCenter
- VMware vCenter Site Recovery Manager
- HP 3PAR Replication Adapter for VMware vCenter Site Recovery Manager
- HP 3PAR Remote Copy Software
- HP 3PAR Virtual Copy Software (for DR failover testing)
Diagram: production LUNs are replicated with Remote Copy to DR LUNs at the recovery site; Virtual Copy provides test LUNs for non-disruptive failover testing.
HP 3PAR Dynamic and Adaptive Optimization
A New Optimization Strategy for SSDs
Flash price decline has enabled SSD as a viable storage tier, but data placement is difficult on a per-LUN basis
Diagram: non-optimized approach - a non-tiered volume/LUN on SSD only; optimized approach for leveraging SSDs - a multi-tiered volume/LUN spanning Tier 0 SSD, Tier 1 FC and Tier 2 NL.
A new way of autonomic data placement and cost/performance optimization is required: HP 3PAR Adaptive Optimization.
HP 3PAR Dynamic and Adaptive Optimization - Manual or Automatic Tiering
Diagram: across Tier 0 SSD, Tier 1 FC and Tier 2 SATA, 3PAR Dynamic Optimization provides autonomic data movement at the volume level, while 3PAR Adaptive Optimization provides autonomic tiering and data movement at the region (sub-volume) level.
HP 3PAR Dynamic Optimization - Storage Tiers
Diagram: performance versus cost per usable TB for the SSD, FC and Nearline tiers; within each tier, RAID 1, RAID 5 (2+1), RAID 5 (3+1), RAID 5 (7+1), RAID 6 (6+2) and RAID 6 (14+2) span the cost/performance range.
In a single command, non-disruptively optimize and adapt cost, performance, efficiency and resiliency.
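Dynamic Optimization is driven from the CLI with tunevv; a hedged sketch (CPG and volume names are assumptions, and option placement is illustrative):
    # Non-disruptively re-layout vol_db from a Nearline RAID 5 CPG onto an FC RAID 5 CPG
    tunevv usr_cpg FC_r5 vol_db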
HP 3PAR Dynamic Optimization Use Cases
Deliver the required service levels for the lowest possible cost throughout the data lifecycle
Diagram: the same 10 TB net capacity can be delivered as RAID 10 on 300 GB FC drives, as RAID 50 (3+1) on 600 GB FC drives (~50% savings) or as RAID 50 (7+1) on 2 TB SATA-class drives (~80% savings).
Diagram: converting 20 TB raw from RAID 10 (10 TB net) to RAID 50 frees 7.5 TB of net capacity on demand.
Accommodate rapid or unexpected application growth on demand by freeing raw capacity.
HP 3PAR Dynamic Optimization at a Customer
Optimize QoS levels with autonomic rebalancing without pre-planning
Charts: chunklet distribution (used vs. free) per physical disk. Before Dynamic Optimization - distribution after 2 disk upgrades, with the new disks largely unused. After Dynamic Optimization - the rebalance spreads used and free chunklets evenly across all physical disks.
How to Use Dynamic Optimization
Performance Example with Dynamic Optimization
Volume tune from RAID 5 (7+1) on SATA to RAID 5 (3+1) on FC 10K
IO density differences across applications
Chart: cumulative access rate % versus cumulative space % per application volume group (ex2k7db_cpg, ex2k7log_cpg, oracle, oracle-stage, oracle1-fc, windows-fc, unix-fc, vmware, vmware2, vmware5, windows). A small fraction of the space serves most of the I/O, and the skew differs significantly between applications.
HP 3PAR Adaptive Optimization
Improve Storage Utilization
Traditional deployment
- Single pool of the same disk drive type, speed, capacity and RAID level
- Number and type of disks are dictated by the max IOPS + capacity requirements
Deployment with HP 3PAR AO
- An AO Virtual Volume draws space from 2 or 3 different tiers/CPGs
- Each tier/CPG can be built on different disk types, RAID level and number of disks
Diagram: with a single pool of high-speed media, the capacity beyond the I/O-intensive region is wasted space; with AO, the same I/O distribution is served from high-, medium- and low-speed media pools matched to the required IOPS and capacity.
HP 3PAR Adaptive Optimization
Improve Storage Utilization
Chart: one tier without Adaptive Optimization (used space in GiB versus access/GiB/min)
- This chart out of System Reporter shows that most of the capacity has very low IO activity
- Adding Nearline disks would lower cost without compromising overall performance
Chart: two tiers with Adaptive Optimization running
- A Nearline tier has been added and Adaptive Optimization enabled
- Adaptive Optimization has moved the least used chunklets to the Nearline tier
HP 3PAR Peer Motion
Beyond Virtualization: Storage Federation
Federation
- The delivery of distributed volume management across a set of self-governing, homogeneous, peer storage arrays
- Pros: less expensive; minimized failure domains; simpler administration
- Cons: no heterogeneous array support
Virtualization
- The delivery of consolidated or distributed volume management through appliances that hierarchically control a set of heterogeneous storage arrays
- Pros: broader, heterogeneous array support
- Cons: more expensive (dual controller layer); additional failure domains; lowest common denominator function; likely additional administration
HP 3PAR Peer Motion - Converged Migration
Traditional block migration approaches (SW- or appliance-based) are complex, time-consuming and risky:
- Downtime
- Pre-planning and SLA risk
- Extra tools
- Complex, post-process thinning
HP 3PAR Peer Motion - 1st non-disruptive DIY migration for Enterprise SAN:
- Simple and fool-proof
- Online or offline
- Non-disruptive
- Any-to-any, with 3PAR Thin Built-In
With Peer Motion, customers can:
- Load balance at will
- Perform tech refresh seamlessly
- Cost-optimize Asset Lifecycle Management
- Lower tech refresh CAPEX (thin-landing)
Peer Motion Migration Phases - Non-disruptive array migration, the steps behind the scene
Initial 3PAR configuration
1. Install new 3PAR array
2. Configure Array Peer Ports on target
3. Create new source-destination zone
4. Configure destination as host on source
5. Export volumes from source to destination
6. Create new destination-host zone
7. Admit source volumes on destination (admitvv)
8. Export destination volumes to host (this adds additional paths to the source)
Peer Motion Migration Phases (continued)
9. Unzone source from host (I/O now flows host - destination - source only)
10. Start data migration (importvv)
Migration has finished:
11. Remove exported source volume
12. Remove destination-source zone
Post migration: the host accesses its volumes through the destination array only.
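A hedged sketch of the commands behind steps 7 to 10, run on the destination array (volume, host and CPG names and the WWN are placeholders; argument order is illustrative):
    # Step 7: admit the volume exported by the source array, identified by its WWN
    admitvv migr_vol:50002AC000010001
    # Step 8: export the admitted volume to the host (adds the new paths)
    createvlun migr_vol 0 dbhost
    # Step 10: pull the data from the source into the destination CPG
    importvv FC_r5 migr_vol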
HP 3PAR Peer Motion Manager - Wizard-based do-it-yourself data migration
Easy and straightforward CLUI
Automated processes: system configuration import, source volume presentation, volume migration, cleanup
Current support: Windows, Linux, Solaris (more to come); no existing snapshots; not part of a replication group
=============================================================================
--- Main Menu ---
Source Array: WWN=2FF70002A000144 SerialNumber=1300324 SystemName=s324
Destination Array: WWN=2FF70002A00017D SerialNumber=1300381 SystemName=s381
-------------- Migration Links/Host --------------
Destination array peer li