HPE 3PAR StoreServ Solid State
Martynas Skripkauskas
MSA Flash 44% FREE

MSA 2040 Gets a Boost!
A simple firmware update for MSA 2040 customers

Performance boost for the MSA 2040 with GL210:
• Download: from hp.com
• Simple: load the new firmware, enjoy more IOPS
• Fast: up to 122,000 IOPS random-read boost when used with SSDs, via streamlined I/O
• Affordable: a no-cost upgrade for all existing and new customers
• Futureproof: the MSA hardware architecture was built to perform and has more under the hood
SSD Boost
HPE MSA 2040 Gets a Performance Boost!
Up to 44% faster reads*, up to 20% faster writes*

How did we do it?
• Analyzed the data path to improve utilization of SSD low-latency access
• Optimized usage of the controller CPU cache for data transactions
• Reduced the number of PCIe transactions needed to complete each I/O request
• New high-performance cache lookup

*Based on a comparison of the GL200 code release with the GL210 code on an MSA 2040 FC with SSDs; random 8K-block workload, average latency of 30 ms or less

GL210 MSA firmware: up to 44% faster reads and 20% faster writes
MSA Storage
Competitive Performance Dynamics
Tested random performance, IOPS (GL210). Note: 40-HDD and 4-SSD configs, not a maximum-benchmark setup; MSA maximum numbers can be found in the QuickSpecs
• HDD random read IOPS is a small, spindle-bound config
• HDD random write shows a HUGE MSA advantage
• IBM/Dell Turbo doesn't do much
• SSD random read IOPS shows a HUGE MSA advantage
• SSD random write IOPS shows a HUGE MSA advantage
• IBM/Dell Turbo is needed; remember the COST! IBM: $6,850; Dell: $6,364
Tested (8K, HDD)       MSA 2040   IBM V3700 Turbo   IBM V3700 Base   Dell MD3820 Turbo   Dell MD3820 Base
Random read (IOPS)     12,119     11,844            11,722           12,096              12,151
Average latency (ms)   21.1       21.6              21.8             21.2                21.1
Random write (IOPS)    11,367     5,481             4,883            3,159               3,148
Average latency (ms)   0.39       0.7               0.8              1.3                 0.4

Configuration: 40 HDD, RAID 10, 8K random; read QD=128, write QD=2
Tested (8K, SSD)       MSA 2040   IBM V3700 Turbo   IBM V3700 Base   Dell MD3820 Turbo   Dell MD3820 Base
Random read (IOPS)     109,490    70,755            43,367           68,782              36,260
Average latency (ms)   2.3        3.6               5.9              3.7                 7.1
Random write (IOPS)    34,774     16,207            10,057           16,335              16,195
Average latency (ms)   3.7        7.8               12.7             7.8                 7.9

Configuration: 4 SSD, RAID 1, 8K random; read QD=128, write QD=64
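One way to read these two tables is to price the optional Turbo performance licenses per IOPS they add. A minimal sketch, using only the SSD random-read figures and the Turbo license prices quoted on these slides (the helper function is ours, not an HPE tool):

```python
# Price per additional random-read IOPS unlocked by a "Turbo" license,
# using the SSD table above (Turbo vs base) and the quoted license costs.
def turbo_cost_per_extra_iops(turbo_iops: int, base_iops: int, price: float) -> float:
    return price / (turbo_iops - base_iops)

ibm = turbo_cost_per_extra_iops(70_755, 43_367, 6_850)    # IBM V3700
dell = turbo_cost_per_extra_iops(68_782, 36_260, 6_364)   # Dell MD3820

print(f"IBM V3700 Turbo:   ${ibm:.2f} per extra read IOPS")   # ≈ $0.25
print(f"Dell MD3820 Turbo: ${dell:.2f} per extra read IOPS")  # ≈ $0.20
```

The MSA 2040 reaches its higher SSD numbers without any extra license, which is the point of the cost call-out above.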
“You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.”
R. Buckminster Fuller
Trends
Growing 5x faster than Moore's Law: a 19x SSD capacity increase in only 30 months
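The "5x faster than Moore's Law" claim can be checked with a little arithmetic. A hedged sketch: the Moore's-Law doubling period below is an assumption on our part, since the slide does not state which formulation it uses:

```python
import math

# The slide's growth claim as arithmetic: a 19x capacity increase in
# 30 months implies a doubling time of 30 / log2(19) ≈ 7 months.
months, factor = 30, 19
ssd_doubling = months / math.log2(factor)

# "5x faster" depends on the Moore's-Law doubling period you assume;
# a ~36-month baseline reproduces the slide's figure. This baseline
# is our assumption, not stated on the slide.
moore_doubling = 36
print(f"SSD doubling time: {ssd_doubling:.1f} months")      # ≈ 7.1 months
print(f"Speedup vs Moore:  {moore_doubling / ssd_doubling:.1f}x")
```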
HDD (Fast Class + NL) vs SSD: 72% of all drive revenue came from flash!
• Capable systems today, comparable with Pure, XtremIO, and others
• #1 in SPC-2 performance against VMAX3, HDS, and XP
• Easier configurations, with 8 drives compared to 16 drives for the 10000
• The 20450 is capable of 1.1 PB raw and nearly 1.8M SSD IOPS in a single rack
SSD Wear-out
Based on measured field telemetry data

[Chart: wear-out telemetry from July 2014 to July 2015, with a projection at the current I/O profile; labels as extracted: ">50%" and "2064"]
Take a confident step into the future with 3D NAND
Why 3D NAND?
• Planar 2D NAND technology is at the limit of its achievable density with current 1X geometries. The next step in density (8 TB and beyond) calls for a new NAND geometry
• Enter 3D NAND, which is essentially multi-level cell flash that stores 3 bits per cell; the key innovation is that the cells are stacked as vertical constructs
• The 3D NAND 3-bit MLCs that we use in our systems offer better performance and endurance at attractive price points compared to 2D MLCs (see the representations below). They are every bit as enterprise-grade as the 2D MLCs we offer today
• It is very important not to confuse these new drives with planar TLC, which is essentially a failed step on the way to 3D NAND. This is also why we choose never to use the term TLC: it would trigger the same behavior seen with cMLC, which led customers to believe consumer-grade SSDs were used
[Charts: page program time, block erase time, and program/erase cycles at 85 °C for planar 2-bit MLC, planar TLC, and 3D NAND 3-bit MLC. Lower time-to-complete values mean better performance; a higher P/E-cycle value means higher endurance and lower power consumption]
Warranty and extended support coverage for SSDs
The basics of what is covered under warranty and support for HPE 3PAR SSDs

Warranty:
• HPE 3PAR offers an unconditional 5-year warranty that covers both wear and electronic failure for all SSDs
• This means that even if a customer does not have TS support on their system, HPE will replace their SSDs for the first 5 years of the life of that SSD

TS Support:
• TS support levels such as PC24 uplift the warranty and add enhancements such as onsite response in some cases. TS support does not cover SSD wear-out, only electronic failures
• This means that, under today's rules, SSDs are not covered for wear after the 5-year warranty, even with an extended TS support contract

What's new?
• The TS team is working on introducing wear-out coverage for years 6 and 7 as part of a support contract
• Targeted for introduction in 1H CY2016. This is still in progress, and there will be separate communications in the coming months with details about the offering
SSD warranty and support: the competitive landscape
• As you can see from the table, most of our biggest competitors require a support contract at all times for SSD wear-out coverage
• The extended coverage being planned will put us on par with the competition
EMC XtremIO: special SSD replacement program: Xpect More Program, a special limited-time offer. Support includes burned-out SSDs: yes, 7 years. Launch date: July 2014. Supplemental, temporary program only. Continuous support coverage: required.

Pure Storage: special SSD replacement program: Forever Flash Program, in the EULA. Support includes burned-out SSDs: yes, unlimited. Launch date: Feb 2013, updated Nov 2014. Supplemental TS and CS, renewed every 6 months. Continuous support coverage: required.

IBM FlashSystem: special SSD replacement program: Flash Wear Guarantee. Support includes burned-out SSDs: yes, 7 years. Launch date: Feb 2015. Only available for AFA with FlashCore (900 and V9000); details otherwise unclear. Continuous support coverage: required.

3PAR SSD: 5-year warranty includes burned-out SSDs; post-warranty excludes burn-out.
SSD Rebuild Times
RAID rebuild times: 8000 series
Per-chunklet rebuild times
[Chart: seconds per chunklet (0 to 16) by workload — none, light (>40% idle), heavy (<25% idle) — for the 8000 series, broken down by SSD (R1), SSD (R5), FC (R1), and FC (R5)]
Using 3.2.2.290. Some heavy-workload data is missing: no suitable systems were available for testing. Load was created using SPC-1 benchmarking tools
Because HPE 3PAR arrays are hardware-accelerated by the 3PAR ASIC, rebuilds happen extremely quickly
This chart shows the per-chunklet rebuild times for 8000 series arrays, broken down by drive type and workload
While it is possible to work drives this hard, that level of activity is usually seen only in POC environments and edge cases
Per-drive rebuild times based on common usage
Broken down by platform

[Chart: time to rebuild, in minutes, for a 50%-allocated 1.92 TB SSD, by RAID level and load (R1/R5 under none/light/heavy load), comparing the 8000 and 20000 platforms. Series values as extracted: 54.4, 68.8, 72, 102.4, 96, 160 and 36.8, 44.8, 52.8, 72, 76.8, 142.4 minutes]
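As a back-of-the-envelope model, per-drive rebuild time is roughly the number of allocated chunklets times the per-chunklet time from the earlier chart. 3PAR chunklets are 1 GB each; the seconds-per-chunklet figure below is illustrative, chosen to land near the low end of the chart, not a measured value:

```python
# Back-of-the-envelope rebuild-time model: allocated chunklets times
# per-chunklet rebuild time. 3PAR chunklets are 1 GB each; the
# 3.4 s/chunklet input is illustrative, not measured.
def rebuild_minutes(drive_tb: float, allocated_fraction: float,
                    sec_per_chunklet: float) -> float:
    chunklets = drive_tb * 1000 * allocated_fraction  # 1 GB chunklets
    return chunklets * sec_per_chunklet / 60

# A 50%-allocated 1.92 TB SSD at 3.4 s/chunklet:
print(f"{rebuild_minutes(1.92, 0.5, 3.4):.1f} minutes")  # → 54.4 minutes
```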
Express Layout (available from 3.2.2 MU2; ETA end of January 2016)
Smaller Systems
Systems with fewer drives incur significant overheads

Because LDs are built by a node from the PDs owned by that node, the maximum set size is limited to the number of drives behind a single node (two shown below)
This limits smaller arrays to smaller set sizes, even though the array has enough PDs to create larger set sizes, which restricts usable capacity

[Diagram: Node0 and Node1 each build an LD as RAID 5 2+1 across the flash drives they own; primary I/O path from the owner node, secondary path from the non-owning node. RAID overhead: 33%]
Express Layout allows smaller configs
Active/Active access to SSDs

Both nodes own every SSD simultaneously, with active I/O being issued on both paths to the drive at the same time
This is enabled on the LD and is used only when the set size would require more than 50% of the drives behind a single node
Chunklets from each drive are assigned to LDs, with each node gaining access to the chunklets it controls

[Diagram: Node0 and Node1 share six flash drives; LDs are built as RAID 5 5+1; primary I/O path from the owner node, secondary path from the non-owning node. RAID overhead: ~17%]
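The two RAID-overhead figures on these slides follow directly from the set sizes. A small sketch:

```python
# RAID 5 capacity overhead is parity capacity over total set size.
def raid5_overhead(data_drives: int, parity_drives: int = 1) -> float:
    return parity_drives / (data_drives + parity_drives)

print(f"RAID 5 2+1: {raid5_overhead(2):.0%} overhead")  # small-array limit
print(f"RAID 5 5+1: {raid5_overhead(5):.0%} overhead")  # with Express Layout
```

Widening the set from 2+1 to 5+1 is what drops the overhead from 33% to roughly 17%.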
3PAR Mesh-Active architecture: getting the most from flash
Different approaches

All-flash vendors: Most all-flash vendors (even most of the scale-out ones) have shelves of disks attached to two controllers. While both controllers are active on the fabric, only one controller "owns" the drives, resulting in lost back-end performance because only one controller can service back-end requests
HPE 3PAR Express Layout: Only HPE 3PAR, with its advanced virtualisation engine and truly active/active access to flash drives powered by Express Layout, can offer an end-to-end active/active solution that drives maximum performance from the flash drives installed in the array
HPE 3PAR 8200 Configurations with Express Layout

Config           Raw TB   Usable TB¹   Street price²   $/GB usable
6 x 3.84TB SSD   23.04    38.40        $84,937         $2.21
8 x 3.84TB SSD   30.72    60.48        $111,041        $1.84
6 x 1.92TB SSD   11.52    19.20        $51,693         $2.69
8 x 1.92TB SSD   15.36    30.24        $66,716         $2.21
6 x 400GB SSD    2.40     4.00         $19,479         $4.87
8 x 400GB SSD    3.20     6.30         $23,764         $3.77

¹ After spare space, overhead, and compaction. Compaction assumption: 3:1, based on telemetry data (median). Average compaction is 4:1.
² System cost, product only (no support), assuming 48% street discount. Only Base OS included.
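The $/GB column is simply street price divided by usable capacity; the 3:1 compaction assumption is already folded into the usable-TB column. Reproducing two rows:

```python
# $/GB usable = street price / usable capacity in GB. The 3:1
# compaction assumption is already baked into the usable-TB figures.
def dollars_per_usable_gb(street_price: float, usable_tb: float) -> float:
    return street_price / (usable_tb * 1000)

print(round(dollars_per_usable_gb(84_937, 38.40), 2))   # 6 x 3.84TB row → 2.21
print(round(dollars_per_usable_gb(111_041, 60.48), 2))  # 8 x 3.84TB row → 1.84
```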
Adaptive Sparing
Extending HPE 3PAR StoreServ distributed sparing for flash media

Flash endurance: Traditional arrays use dedicated spare drives, but this has a negative impact on the endurance of the other drives and reduces potential performance
3PAR virtualization: HPE 3PAR's virtualization technology provides distributed sparing by reserving spare chunklets on each drive in the array, used only during a drive failure
Flash sparing: Flash isn't like other media; leaving some space unused can affect endurance. Adaptive Sparing takes the reserved chunklets and assigns them to the drive

One technology to improve both the performance and the endurance of flash drives
Overprovisioning (OP) space
Low OP space leads to lower costs for customers

[Diagram: three drives, each with 1.6 TB of raw flash: 800 GB user space / 800 GB OP space; 1000 GB user space / 600 GB OP space; 1300 GB user space / 300 GB OP space]

OP space uses: OP space is used by the drive for internal housekeeping, including wear levelling and garbage collection. This is what allows the drive to perform and to endure write cycles
OP space varies: The drives above all have the same amount of raw flash, 1.6 TB; the varying OP space means different usable capacities and cost/performance profiles
Cost/performance: The drive on the left offers the highest performance but at the highest price; the right-most drive offers the lowest cost, but it is challenging to make it perform

HPE 3PAR works closely with the drive manufacturer to produce custom firmware that reduces the drive's fixed OP space. The array then reserves spare space using chunklets
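The three drive layouts above differ only in how the fixed 1.6 TB of raw flash is split between user and OP space. A quick sketch of the OP fraction for each:

```python
# OP space as a fraction of raw flash for the three layouts above,
# all built from the same 1.6 TB (1600 GB) of raw NAND.
def op_fraction(raw_gb: int, user_gb: int) -> float:
    return (raw_gb - user_gb) / raw_gb

for user_gb in (800, 1000, 1300):
    print(f"{user_gb} GB user space -> {op_fraction(1600, user_gb):.1%} OP")
```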
HPE 3PAR StoreServ Adaptive Sparing
A unique sparing algorithm that removes the compromise

[Diagram: drive capacity split into user space, OP space, and spare space]

The spare chunklets are then reassigned to the drive's firmware to extend the OP space further, improving performance and endurance while maintaining a low capacity overhead
The increased OP space gives the drive more capacity for internal housekeeping tasks and, if needed during a drive failure, the system can reclaim the chunklets required to perform sparing
SPC-1 Benchmark
The SPC-1 benchmark

"SPC-1 is a benchmark that consists of a single workload designed to demonstrate the performance of a storage subsystem while performing the typical functions of business-critical applications"*

*http://www.storageperformance.org/results/benchmark_results_spc1_active/
HPE 3PAR StoreServ 8450 SPC-1 result

Metric                       Reported result
SPC-1™ Price-Performance     $0.24 / SPC-1 IOPS™
SPC-1™ IOPS                  545,164
Response time                0.27 to 0.80 ms
Total ASU capacity           5,267 GB
Unused storage ratio         3%
Midrange price               $128,480

SPC-1 AFA price/performance leader; #1 midrange leader in performance
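The headline price-performance number is just the system price divided by the SPC-1 IOPS result:

```python
# SPC-1 Price-Performance is total system price divided by SPC-1 IOPS.
price_usd = 128_480
spc1_iops = 545_164
print(f"${price_usd / spc1_iops:.2f} per SPC-1 IOPS")  # → $0.24
```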
SPC-1 system breakdown
Average system prices*

[Chart: average SPC-1 total system prices for all SPC-1 results / high-end / midrange. Any media type: $784,127 / $1,968,284 / $399,392. All-flash: $693,205 / $1,598,965 / $220,278. HPE 3PAR StoreServ 8450: $128,480, high-end features with industry-leading affordability]

*As of 4 February 2016. IDC price band 5-8: $25k to $249k; IDC price band 9+: $250k+
[Chart: SPC-1 Price-Performance™ ($/SPC-1 IOPS™), top ten external systems as at 27 Feb 2016: $0.24, $0.24, $0.32, $0.37, $0.41, $0.43, $0.54, $0.57, $0.58, $0.77]
[Chart: SPC-1™ total system cost, midrange all-flash systems, as at 27 Feb 2016: $128,480, $148,738, $176,942, $227,062, $486,660, $488,617, $493,346, $708,702]
Performance compared with other all-flash arrays
HPE 3PAR's position in the market

EMC XtremIO, 2 x X-Brick: 13U footprint; performance 300,000 IOPS (70% read, 8 KB); internally benchmarked by EMC
Pure Storage //m70: 11U footprint; performance 300,000 IOPS (100% read, 32 KB); internally benchmarked by Pure
HPE 3PAR StoreServ 8450 4N: 8U footprint; performance 545,000 IOPS (SPC-1 workload); SPC-1-validated performance

https://www.emc.com/collateral/data-sheet/h12451-xtremio-4-system-specifications-ss.pdf
https://www.purestorage.com/content/dam/purestorage/pdf/PureStorage_FlashArraym-Brochure.pdf
What this SPC-1 result means for customers

Fast: This SPC-1 result (545,164 SPC-1 IOPS at 0.80 ms latency) demonstrates that HPE 3PAR StoreServ 8450 has the right architecture to leverage the unique characteristics of flash for customer benefit
Affordable: At $0.24 per SPC-1 IOPS, HPE 3PAR StoreServ 8450 delivers the lowest dollars per IOPS for an enterprise all-flash array, providing superior performance at cost-effective prices
Enterprise class: HPE 3PAR StoreServ provides customers with a scalable storage platform that meets the needs of the most demanding enterprise applications, including OLTP, VDI environments, and business analytics
High performance, low power consumption
HPE 3PAR StoreServ 8450

For HPE 3PAR StoreServ, delivering 545,164 IOPS uses less power than it takes to make an espresso
Power consumption: <1,500 W
SPC-1™ response time-throughput curves
HPE 3PAR StoreServ 7400 vs 8450*

Metric                              7400           8450
SPC-1 IOPS™                         258,078        545,164
SPC-1 Price-Performance™            $0.58          $0.24
Average response time (ms)          0.33 to 0.86   0.27 to 0.80
Total ASU capacity                  1,145 GB       5,267 GB
Protected application utilization   70.46%         81%

*As of 4 February 2016

[Chart: SPC-1™ response time-throughput curves for the 3PAR StoreServ 7400 SSD and 8450, as at 27 Feb 2016; response time 0.0 to 1.0 ms over 0 to 600,000 SPC-1 IOPS™]
HPE StoreOnce: Next-generation data protection solutions for your business
Martynas Skripkauskas, HPE Storage sales specialist
31st March 2016
The Tip of the Iceberg

Primary data: 1x. Backup and archive data: 10x to 50x.

Consider a 1 TB database:
• Daily fulls kept for a week (5)
• Weekly fulls kept for a month (4)
• Monthly fulls kept for a year (12)
• All replicated = 2x
= 42 TB of backup data* (for just 1 TB of primary storage)

*Before deduplication
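The 42 TB figure is straightforward arithmetic on the retention schedule above:

```python
# Retained full copies of a 1 TB database, doubled by replication,
# before any deduplication is applied:
primary_tb = 1
fulls = 5 + 4 + 12        # daily (week) + weekly (month) + monthly (year)
replication_factor = 2
backup_tb = primary_tb * fulls * replication_factor
print(backup_tb)  # → 42
```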
New: Reduce risk with application- and flash-integrated recovery systems and software

Protect:
StoreOnce 3100: up to 5.5 TB usable
StoreOnce 3520: up to 15.5 TB usable
StoreOnce 3540: up to 31.5 TB usable
StoreOnce 5100: up to 216 TB usable

2.7x faster
<$0.05/GB usable capacity at 20:1 deduplication
2x density
41% lower cost vs EMC DD
ONE architecture from software-defined to scale-out, with application, ISV, and flash-array integration
HPE StoreOnce Recovery Manager Central 2.0
Application-managed end-to-end availability and protection

• More application support: fast and affordable end-to-end protection for Microsoft SQL Server databases*
• More protection: copy backups from one StoreOnce appliance to another for disaster recovery*
• More integration: fast and efficient copy of 3PAR snapshots to StoreOnce with Data Protector 9.05

Federating primary and secondary systems for the next generation of data protection

[Diagram: HPE 3PAR StoreServ, protected by RMC Express Protect to a StoreOnce System, with Catalyst Copy to a second StoreOnce System for disaster recovery; application-consistent protection and HPE Data Protector 9.05 integration]

*Available January 2016
Solid State Evolution

June 2013: HP 3PAR 7450; architecture flash optimizations; Storage QoS; 400 GB SSD
Dec 2013: Adaptive Sparing; QoS latency goals; 920 GB SSD
June 2014: Express Writes
Sept 2014: Thin Deduplication; 1.92 TB SSD
June 2015: HPE 3PAR StoreServ 20000; Gen5 ASIC; 3.84 TB SSD
Dec 2015: 3D NAND; 3PAR Flash Acceleration for Oracle
Jan 2016: HPE 3PAR 8000; sub-millisecond QoS latency goals
June 2016: SPC-2 world record with HPE 3PAR AFA
Thank You