
3PAR Presentation 14apr11-2


Page 1: 3PAR Presentation 14apr11-2

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information

contained herein is subject to change without notice. Confidentiality label goes here

3PAR TechCircle

HP Dübendorf, 14 April 2011

• Reto Dorigo

Business Unit Manager Storage

• Serge Bourgnon

3PAR Business Development Manager

• Peter Mattei

Senior Storage Consultant

• Peter Reichmuth

Senior Storage Consultant

Page 2: 3PAR Presentation 14apr11-2


Agenda

09:00 – 09:15  Welcome (Hewlett-Packard Schweiz) – Serge Bourgnon

09:15 – 10:15  HP 3PAR Architecture – Peter Reichmuth, Peter Mattei

10:15 – 10:45  Break

10:45 – 11:45  HP 3PAR Software + Features – Peter Mattei / Peter Reichmuth

11:45 – 12:15  Live Demo – Peter Mattei / Peter Reichmuth

Page 3: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

3PAR background

• Founded by server engineers

• Funded by leading infrastructure providers

• Commercial shipments since 2002

• Initial Public Offering, November 2007

• NYSE: PAR

• Profitable and strong balance sheet

• Expanding presence in US, Canada, Europe, Asia, and Africa

• HP acquisition September 2010

Page 4: 3PAR Presentation 14apr11-2


(Portfolio categories: Software, Services, Online, Nearline)

P2000, X1000, P9500, X9000, EVA, X3000, P4000, 3PAR

Data Protector Express

Storage Essentials

Storage Array Software

Storage Mirroring

Data Protector

Business Copy

Continuous Access

Cluster Extension

SAN Implementation, Storage Performance Analysis, Entry Data Migration, Data Migration, Installation & Start-up

Proactive 24, Critical Service, Proactive Select, Backup & Recovery, SupportPlus 24, SAN Assessment

Consulting services (Consolidation, Virtualization, SAN Design), Data Protection, Remote Support

D2D Backup Systems

ESL tape libraries

VLS virtual library systems

EML tape libraries

MSL tape libraries

RDX, tape drives & tape autoloaders

The HP Storage Portfolio


Infrastructure

ProCurve Wired, Wireless, Data Center, Security & Management

B, C & H Series FC Switches/Directors

SAN Connection Portfolio

ProCurve Enterprise Switches

Page 5: 3PAR Presentation 14apr11-2


Leading the next storage wave – HP StorageWorks Portfolio

Block Level Storage | File Level Storage | Backup/Recovery

Large Enterprise / Federal: P9000 (XP), X9000 (IBRIX), StoreOnce

Cloud / Hosting Service Providers: 3PAR

Corporate: P6000 (EVA)

Mid Size: X3000 (MS WSS), P4000 (LeftHand)

Small/Remote Office, Branch Office: X1000 (MS WSS), P2000 (MSA)

Page 6: 3PAR Presentation 14apr11-2


Architecture for Cloud Services

• Performance and capacity scalability for multiple apps

• Handle diverse and unpredictable workloads

• Security among tenants

• Resilient

• Acceptable service levels with a major component failure

• High utilization with high performance/service levels

• Eliminate capacity reservations

• Allow fat-to-thin volume migrations without disruption or post-processing

• Continual, intelligent re-thinning without disruption

• Fast implementations of low overhead RAID levels

• Autonomic configuration, including for server clusters

• Autonomic capacity provisioning

• Autonomic data movement

• Autonomic performance optimization

• Autonomic storage tiering

Autonomic Management

Thin Technologies

Multi-Tenant Clustering

Page 7: 3PAR Presentation 14apr11-2


Built-In, Not Bolt-On

3PAR LEADS IN ALL 3 CATEGORIES

• Mesh Active, Cache Coherent Cluster

• ASIC-based Mixed Workload

• Virtual Private Array Security

• Tier 1 HA, DR

• Failure-Resistant Performance, QoS

• Reservation-less, Dedicate-on-Write

• Thin Engine and Thin API-based Reclamation

• ASIC-based Zero Detection

• Wide-Striping, sub-Disk RAID

• ASIC-based Fast RAID

• Autonomic Groups

• Autonomic capacity provisioning for thin technologies

• Dynamic Optimization

• System Tuner, Policy Advisor

• Adaptive Optimization

Autonomic Management

Thin Technologies

Multi-Tenant Clustering

Page 8: 3PAR Presentation 14apr11-2


HP 3PAR Industry Leadership

• 3PAR Thin Provisioning – best new technology in the market; industry-leading technology to maximize storage utilization

• 3PAR Autonomic Storage Tiering – automatically optimizes using multiple classes of storage

• 3PAR Dynamic Optimization – workload management and load balancing

• 3PAR Full Mesh Architecture – advanced shared memory architecture

• 3PAR Virtual Domains – multi-tenancy for service providers and private clouds

8

Page 9: 3PAR Presentation 14apr11-2


HP 3PAR InServ Storage Servers

F200 F400 T400 T800

Controller Nodes 2 2 – 4 2 – 4 2 – 8

Fibre Channel Host Ports 0 – 12 | 0 – 24 | 0 – 48 | 0 – 96

Optional iSCSI Host Ports 0 – 8 | 0 – 16 | 0 – 16 | 0 – 32

Built-in Remote Copy Ports 2 | 2 | 2 | 2

GBs Control/Data Cache 8/12 8-16/12-24 8-16/24-48 8-32/24-96

Disk Drives 16 – 192 16 - 384 16 – 640 16 – 1,280

Drive Types (all models) 50GB SSD*, 300/600GB FC and/or 1/2TB NL

Max Capacity 128TB 384TB 400TB 800TB

Throughput / IOPS (from disk) 1,300 MB/s, 46,800 | 2,600 MB/s, 93,600 | 3,800 MB/s, 156,000 | 5,600 MB/s, 312,000

SPC-1 Benchmark Results 93,050 224,990

Same OS, Same Management Console, Same Replication Software

* max. 32 SSD per Node Pair

9

Page 10: 3PAR Presentation 14apr11-2


Array Comparison

Maximum Values EVA8400 3PAR T800 P9500

Internal Disks 324 1280 2048

Internal Capacity TB 194/324 ¹ 800 1226/2040 3

Subsystem Capacity TB 324 800 247‘000

FC Host Ports 8 128/32 ² 192

# of LUNs 2048 NA 65280

Cache GB 22 32+96 512

Sequential Performance Disk GB/s 1.57 6.4 >15

Random Performance Disk IOPS 78’000 >300‘000 >350‘000

Internal Bandwidth GB/s NA 44.8 192

1 600GB FC / 1TB FATA disks
2 optional iSCSI host ports
3 600GB SAS / 1TB Near-SAS disks

Page 11: 3PAR Presentation 14apr11-2


HP 3PAR Scalable Performance: SPC-1 Comparison

(Chart: SPC-1 IOPS™, 0 – 225,000, vs. response time in ms; transaction-intensive applications typically demand response time < 10 ms)

Arrays plotted: IBM DS5300, IBM DS8300 Turbo, HDS USP V / HP XP24000, EMC CLARiiON CX3-40, NetApp FAS3170, HDS AMS 2500, 3PAR InServ F400 (mid range), 3PAR InServ T800 (high end)

Page 12: 3PAR Presentation 14apr11-2


Traditional Tradeoffs – Legacy vs. HP 3PAR Hardware Architecture

• Traditional modular storage: cost-efficient, but scalability and resiliency limited by the dual-controller design

• Traditional monolithic storage (switched backplane): scalable and resilient but costly; does not meet multi-tenant requirements efficiently

• HP 3PAR meshed and active (distributed controller functions): cost-effective, scalable and resilient architecture; meets cloud-computing requirements for efficiency, multi-tenancy and autonomic management

(Diagram labels: host connectivity, cache, disk connectivity)

Page 13: 3PAR Presentation 14apr11-2


HP 3PAR – Four Simple Building Blocks (F200/F400, T400/T800)

• Controller Nodes – performance and connectivity building block: CPU, cache and 3PAR ASIC; system management; RAID and thin calculations

• Node Mid-Plane – cache-coherent interconnect, 1.6 GB/s per node; completely passive, encased in steel; defines scalability

• Drive Chassis – capacity building block: F chassis 3U / 16 disks, T chassis 4U / 40 disks

• Service Processor – one 1U SP per system, for service and monitoring

Page 14: 3PAR Presentation 14apr11-2


HP 3PAR Utility Storage (F-Class, T-Class) – purpose-built on native virtualization; HP 3PAR architectural differentiation

• Gen3 ASIC: Mesh-Active, Fast RAID 5/6, Mixed Workload, Zero Detection

• InForm fine-grained OS: Thin Provisioning, Thin Conversion, Thin Persistence, Virtual Domains, Virtual Lock, System Reporter, Virtual Copy, Remote Copy, Dynamic Optimization, Adaptive Optimization, Recovery Managers

• Benefits: utilization, manageability, performance, instrumentation

• Autonomic policy management: self-configuring, self-optimizing, self-healing, self-monitoring

Page 15: 3PAR Presentation 14apr11-2


I/O Processing: Traditional Storage vs. 3PAR Controller Node

• Traditional storage: a unified processor and/or memory handles both control information (metadata) and data between the host and disk interfaces; small IOPS wait for large IOPS to be processed, so a heavy throughput workload degrades a concurrent heavy transaction workload

• 3PAR controller node: the control processor & memory handle metadata while the 3PAR ASIC & memory move data; control information and data are pathed and processed separately, so heavy throughput and heavy transaction workloads are sustained at the same time

Multi-tenant performance – mixed workload support

Page 16: 3PAR Presentation 14apr11-2


Spare Disk Drives vs. Distributed Sparing – HP 3PAR High Availability

• Traditional arrays: few-to-one rebuild onto a dedicated spare drive – hotspots and long rebuild exposure

• 3PAR InServ: many-to-many rebuild onto spare chunklets – parallel rebuilds in less time
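As a back-of-the-envelope illustration of the many-to-many advantage (a hypothetical model, not HP data): if rebuild time is roughly the failed capacity divided by the aggregate write rate of the drives absorbing the rebuild, spreading the work over spare chunklets on many drives shrinks the exposure window proportionally. All figures below are assumptions for illustration.

```python
# Hypothetical rebuild-time model (not HP code or published figures).

def rebuild_hours(failed_gb, writers, per_drive_mb_s):
    """Hours to rewrite failed_gb when `writers` drives absorb the
    rebuild in parallel, each sustaining per_drive_mb_s of writes."""
    seconds = (failed_gb * 1000) / (writers * per_drive_mb_s)
    return seconds / 3600

# One dedicated hot spare vs. spare chunklets on 100 surviving drives,
# both assuming a modest 30 MB/s of rebuild writes per drive.
traditional = rebuild_hours(600, writers=1, per_drive_mb_s=30)
distributed = rebuild_hours(600, writers=100, per_drive_mb_s=30)

print(f"dedicated spare : {traditional:.1f} h")   # hours, single writer
print(f"spare chunklets : {distributed:.2f} h")   # hours, many-to-many
```

Under these assumed numbers the rebuild window drops by the same factor as the number of participating drives.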

Page 17: 3PAR Presentation 14apr11-2


Guaranteed Drive Shelf Availability – HP 3PAR High Availability

• Traditional arrays: shelf-dependent RAID groups – a shelf failure means no access to data

• 3PAR InServ: shelf-independent RAID (raidlet groups span shelves) – data access is preserved despite a shelf failure

Page 18: 3PAR Presentation 14apr11-2


Write Cache Re-Mirroring – HP 3PAR High Availability

• Traditional arrays: when a controller fails, write-cache mirroring is off and the survivor falls back to write-through mode – poor performance

• 3PAR InServ: Persistent Write-Cache Mirroring – no write-through mode, consistent performance; works with 4 and more nodes (F400, T400, T800)

Page 19: 3PAR Presentation 14apr11-2


HP 3PAR virtualization advantage

Traditional arrays:
• Each RAID level (RAID1, RAID5, RAID6 sets) requires dedicated disks
• Dedicated spare disks required
• Limited single-LUN performance

HP 3PAR InServ:
• All RAID levels (R1, R5, R6) can reside on the same physical disks
• Distributed sparing
• Built-in wide-striping based on chunklets

Page 20: 3PAR Presentation 14apr11-2


HP 3PAR F-Class InServ Components (3PAR 40U 19" cabinet or customer-provided)

• Controller Nodes (4U) – performance and connectivity building block; adapter cards can be added non-disruptively; each node runs an independent OS instance

• 16-slot Drive Chassis (3U) – capacity building block; 4-disk drive magazines, added non-disruptively; industry-leading density

• Full-mesh back-plane – post-switch architecture; high performance, tightly coupled; completely passive

• Service Processor (1U) – remote error detection; supports diagnostics and maintenance; reporting to 3PAR Central

Page 21: 3PAR Presentation 14apr11-2


Configuration Options – HP 3PAR F-Class Node

• 2 built-in FC disk ports

• 2 built-in FC disk or host ports

• Slot 1: optional 2 FC ports for host, disk or FC replication, or 2 GbE iSCSI ports

• Slot 0: optional 2 FC ports for host, disk or FC replication, or 2 GbE iSCSI ports

• GigE management port

• GigE IP replication port

– One Xeon Quad-Core 2.33GHz CPU

– One 3PAR Gen3 ASIC per node

– 4GB Control & 6GB Data Cache per node

– Built-in I/O ports per node

• 10/100/1000 Ethernet port & RS-232

• Gigabit Ethernet port for Remote Copy

• 4 x 4Gb/s FC ports

– Optional I/O per node

• Up to 4 more FC or iSCSI ports (mixable)

– Preferred slot usage (in order), depending on customer requirements:

• Disk connections: slot 0 (ports 1,2), then 0, 1 – higher back-end connectivity and performance

• Host connections: slot 0 (ports 3,4), then 1, 0 – higher front-end connectivity and performance

• RCFC connections: slot 1 or 0 – enables FC-based Remote Copy (first node pair only)

• iSCSI connections: slot 1, 0 – adds iSCSI connectivity

21

Page 22: 3PAR Presentation 14apr11-2


– Cache per node

• Control Cache: 4GB (2 x 2048MB DIMMs)

• Data Cache: 6 GB (3 x 2048MB DIMMs)

– SATA : Local boot disk

– Gen3 ASIC • Data Movement

• XOR RAID Processing

• Built-in Thin Provisioning

– I/O per node

• 3 PCI-X buses / 2 PCI-X slots and one onboard 4-port FC HBA

F-Class Controller Node – HP 3PAR InSpire Architecture

(Diagram: controller node with quad-core 2.33 GHz Xeon, 4GB control cache, 6GB data cache, Gen3 ASIC multifunction controller with high-speed data links, onboard 4-port FC, SATA boot disk, serial and LAN ports)

Page 23: 3PAR Presentation 14apr11-2


F-Class DC3 Drive Chassis

Drive Chassis or “cage” contains 4 drive bays that accommodate:

– 4 drive magazines

– Each magazine holds four disks

– Each disk is individually accessible

Page 24: 3PAR Presentation 14apr11-2


F-Class DC3 Drive Chassis

– Maximum 16 Drives per Drive Chassis

– Must populate 4 drives (a magazine) at a time

– 2 x 4Gb interfaces connected to 2 controller nodes

– Can be daisy-chained to 32 drives per loop, doubling the capacity behind a node pair

Node 0

Node 1

Node 0

Node 1

Non-Daisy Chained

Daisy Chained

– Minimum configuration is 4 drive chassis

– Upgrades must increment by 4 drive chassis

– Must deploy 4 drive magazines* at a time (16 drives) across all 4 drive chassis (1 drive magazine per chassis)

*Drive Magazine = 4 disks

24

Page 25: 3PAR Presentation 14apr11-2


Connectivity Options per F-Class Node Pair

Ports 0-1 | Ports 2-3 | PCI Slot 1 | PCI Slot 2 | FC Host Ports | iSCSI Ports | RCFC Ports | Drive Chassis | Max Disks
Disk | Host | -     | -     |  4 | - | - |  4 |  64
Disk | Host | Host  | -     |  8 | - | - |  4 |  64
Disk | Host | Host  | Host  | 12 | - | - |  4 |  64
Disk | Host | Host  | iSCSI |  8 | 4 | - |  4 |  64
Disk | Host | iSCSI | RCFC  |  4 | 4 | 2 |  4 |  64
Disk | Host | Disk  | -     |  4 | - | - |  8 | 128
Disk | Host | Disk  | Host  |  8 | - | - |  8 | 128
Disk | Host | Disk  | iSCSI |  4 | 4 | - |  8 | 128
Disk | Host | Disk  | RCFC  |  4 | - | 2 |  8 | 128
Disk | Host | Disk  | Disk  |  4 | - | - | 12 | 192

25
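The table above follows a simple pattern, sketched here as an assumed rule of thumb (inferred from the rows, not an official formula): per node pair, each port pair assigned "Host" contributes 4 FC host ports, "iSCSI" contributes 4 iSCSI ports, "RCFC" contributes 2 Remote Copy FC ports, and "Disk" contributes 4 drive chassis of 16 disks each.

```python
# Illustrative reconstruction of the connectivity table's pattern.
# The per-assignment contributions are inferred from the table rows.

def node_pair(*slots):
    """Compute node-pair connectivity from four port-pair assignments."""
    counts = {"Host": 0, "iSCSI": 0, "RCFC": 0, "Disk": 0}
    for s in slots:
        if s:                      # pass None for an empty slot ("-")
            counts[s] += 1
    return {
        "fc_host": 4 * counts["Host"],        # 4 FC host ports per pair
        "iscsi": 4 * counts["iSCSI"],         # 4 iSCSI ports per pair
        "rcfc": 2 * counts["RCFC"],           # 2 RCFC ports per pair
        "max_disks": 16 * 4 * counts["Disk"], # 4 chassis x 16 disks
    }

print(node_pair("Disk", "Host", "Host", "Host"))  # 12 FC host ports, 64 disks
print(node_pair("Disk", "Host", "Disk", "Disk"))  # 4 FC host ports, 192 disks
```

Each row of the table is reproduced by this rule, e.g. Disk/Host/iSCSI/RCFC yields 4 FC host ports, 4 iSCSI ports, 2 RCFC ports and 64 disks.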

Page 26: 3PAR Presentation 14apr11-2


HP 3PAR T-Class InServ Components

• Performance and connectivity building block

− Adapter cards

• Add non-disruptively

• Runs independent OS instance

– Controller Nodes (4U)

• Capacity building block

− Drive Magazines

• Add non-disruptively

• Industry leading density

– Drive Chassis (4U)

– Full-mesh Back-plane

• Post-switch architecture

• High performance, tightly coupled

• Completely passive

3PAR 40U, 19” Cabinet

Built-In Cable Management

– Service Processor (1U)

• Remote error detection

• Supports diagnostics and maintenance

• Reporting to 3PAR Central

26

Page 27: 3PAR Presentation 14apr11-2


The 3PAR Evolution – Bus to Switch to Full Mesh Progression

• 3PAR InServ Full Mesh Backplane

• High Performance / Low Latency

• Passive Circuit Board

• Slots for Controller Nodes

• Links every controller (Full Mesh)

• 1.6 GB/s (4 times 4Gb FC)

• 28 links (T800)

• Single hop

• 3PAR InServ T800 with 8 Nodes

• 8 ASICS with 44.8 GB/s bandwidth

• 16 Intel® Dual-Core processors

• 32 GB of control cache

• 96GB total data cache

• 24 I/O buses, totaling 19.2 GB/s of peak I/O bandwidth

• 123 GB/s peak memory bandwidth

(Pictured: T800 with 8 nodes, 640 disks)

Page 28: 3PAR Presentation 14apr11-2


• 2 to 8 per System – installed in pairs

• 2 Intel Dual-Core 2.33 GHz

• 16GB Cache

• 4GB Control/12GB Data

• Gen3 ASIC

• Data Movement, ThP & XOR RAID Processing

• Scalable connectivity per node: 3 PCI-X buses / 6 PCI-X slots

• Preferred slot usage (in order)

• 2 slots – 8 FC disk ports

• Up to 3 slots – 24 FC Host ports

• 1 slot – 1 FC port used for Remote Copy (first node pair only)

• Up to 2 slots – 8 1GbE iSCSI Host ports

Controller Node(s)

HP 3PAR T-Class Controller Node

T-Class node pair (rear view): PCI slots 0–5 per node, console port C0, Remote Copy Ethernet port E1, management Ethernet port E0, host FC/iSCSI/RCFC ports, disk FC ports

28

Page 29: 3PAR Presentation 14apr11-2


T-Class Controller Node – HP 3PAR InSpire Architecture

• Scalable Performance per Node

• 2 to 8 Nodes per System

• Gen3 ASIC

• Data Movement

• XOR RAID Processing

• Built-in Thin Provisioning

• 2 Intel Dual-Core 2.33 GHz

• Control Processing

• SATA : Local boot disk

• Max host-facing adapters

• Up to 3 (3 FC / 2 iSCSI)

• Scalable Connectivity Per Node

• 3 PCI-X buses/ 6 PCI-X slots

Controller Node(s)

GEN3 ASIC

29

Page 30: 3PAR Presentation 14apr11-2


T-Class DC04 Drive Chassis

• From 2 to 10 Drive Magazines

• (1+1) redundant power supplies

• Redundant dual FC paths

• Redundant dual switches

• Each Magazine always holds 4 disks of the same drive type

• Magazines within a chassis can hold different drive types. For example:

• 3 magazines of FC

• 1 magazine of SSD

• 6 magazines of SATA.

30

Page 31: 3PAR Presentation 14apr11-2


(Rack diagram: a 3PAR service processor and drive chassis populated with 2TB NL and 600GB FC drive magazines)

T400 Configuration examples

– A T400 minimum configuration is

– 2 nodes

– 4 drive chassis with

– 2 magazines per chassis.

– Upgrades are done as columns of magazines down the drive chassis.

31


Page 32: 3PAR Presentation 14apr11-2


T800 Fully Configured – 224’000 SPC IOPS

• 8 Nodes

• 32 Drive Chassis

• 1280 Drives

• 768TB raw capacity with 600GB drives

• 224’000 SPC IOPS

Nodes and Chassis are FC connected and can be up to 100 meters apart

(Rack diagram: eight racks of drive chassis fully populated with 600GB FC drive magazines, plus the 3PAR service processor)

Page 33: 3PAR Presentation 14apr11-2


T-Class redundant power

Controller Nodes and Disk Chassis (shelves) are powered by (1+1) redundant power supplies.

The Controller Nodes are backed up by a string of two batteries.

33

Page 34: 3PAR Presentation 14apr11-2


HP 3PAR InForm OS™ – Virtualization Concepts

Page 35: 3PAR Presentation 14apr11-2


3PAR Mid-Plane

HP 3PAR Virtualization Concept – Example: 4-node T-Class with 8 drive chassis

• Drive chassis are point-to-point connected to controller nodes in the T-Class, providing "cage-level" availability: the loss of an entire drive enclosure can be withstood without losing access to data

• Nodes are added in pairs for cache redundancy

• An InServ with 4 or more nodes supports "Cache Persistence", which enables maintenance windows and upgrades without performance penalties

35

Page 36: 3PAR Presentation 14apr11-2


HP 3PAR Virtualization Concept – Example: 4-node T-Class with 8 drive chassis

• T-Class drive magazines hold 4 identical drives (same type – SSD, FC or SATA – same size and speed)

• SSD, FC and SATA drive magazines can be mixed within a system

• A minimum configuration has 2 magazines per enclosure

• Each physical drive is divided into 256 MB "chunklets"

36

Page 37: 3PAR Presentation 14apr11-2


HP 3PAR Virtualization Concept – Example: 4-node T-Class with 8 drive chassis

• RAID sets will be built across enclosures and massively striped to form Logical Disks (LD)

• LDs are equally allocated to controller nodes

• Logical Disks are bound together to build Virtual Volumes

• Each Virtual Volume is automatically wide-striped across “Chunklets” on all disk spindles of the same type creating a massively parallel system

Virtual Volume

Exported LUN

• Virtual Volumes can now be exported as LUNs to servers

37


Page 38: 3PAR Presentation 14apr11-2


Chunklets – the 3PAR Virtualization Basis

DC = 256 MB Data Chunklet

SC = 256 MB Spare Chunklet

(Diagram: a physical disk divided into data chunklets (DC) and spare chunklets (SC))

• Each physical disk in a 3PAR array is initialized with data and spare Chunklets of 256MB each

• Chunklets are Automatically Grouped by Drive Rotational Speed

Device Type Total # of Chunklets

50GB SSD 185

147GB FC 15K 545

300GB FC 15K 1115

450GB FC 15K 1675

600GB FC 15K 2234

1TB NL 7.2K 3724

2TB NL 7.2K 7225
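The chunklet counts above can be roughly sanity-checked by dividing each drive's decimal marketing capacity by the 256 MiB chunklet size. This is an approximation, not the exact InForm formula; the table's values are slightly lower because some capacity is reserved.

```python
# Rough sanity check of the chunklet table (approximation only).

CHUNKLET = 256 * 2**20  # 256 MiB in bytes

def approx_chunklets(capacity_bytes):
    """Upper-bound chunklet count: whole 256 MiB units in the drive."""
    return capacity_bytes // CHUNKLET

print(approx_chunklets(300 * 10**9))  # ~1117 vs. 1115 in the table
print(approx_chunklets(2 * 10**12))   # ~7450 vs. 7225 in the table
```

The gap between the estimate and the table (e.g. 7450 vs. 7225 for a 2TB NL drive) is the space the array reserves for its own metadata and formatting overhead.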


38

Page 39: 3PAR Presentation 14apr11-2


Why are Chunklets so Important?

Ease of use and drive utilization
• The same drive spindle can service many different LUNs and different RAID types at the same time
• Allows the array to be managed by policy, not by administrative planning
• Enables easy mobility between physical disks, RAID types and service levels by using Dynamic or Adaptive Optimization

Performance
• Enables wide-striping across hundreds of disks
• Avoids hot-spots
• Allows data restriping after disk installations

High availability
• HA Cage – protect against a cage (disk tray) failure
• HA Magazine – protect against magazine failure

Page 40: 3PAR Presentation 14apr11-2


Common Provisioning Groups (CPG)

CPGs are Policies that define Service and Availability level by

• Drive type (SSD, FC, SATA)

• Number of Drives

• RAID level (R10, R50 2D1P to 8D1P, R60 6D2P or 14D2P)

Multiple CPGs can be configured and optionally overlap the same drives

• i.e. a System with 200 drives can have one CPG containing all 200 drives and other CPGs with overlapping subsets of these 200 drives.

CPGs have many functions:

• They are the policies by which free Chunklets are assembled into logical disks

• They are a container for existing volumes and used for reporting

• They are the basis for service levels and our optimization products.

40
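A minimal sketch of the CPG idea, with invented names and numbers (this is not the InForm API): a CPG is a policy that assembles free chunklets of one drive type into RAID sets, and several CPGs can draw from the same, overlapping free pool.

```python
# Illustrative CPG sketch (hypothetical structures, not InForm code).

free = {"FC": 1000, "NL": 4000}  # free chunklets per drive type

def build_ld(cpg, sets):
    """Consume free chunklets for `sets` RAID sets under the CPG policy;
    return the number of usable (data) chunklets produced."""
    need = sets * (cpg["data"] + cpg["parity"])
    if free[cpg["type"]] < need:
        raise RuntimeError("not enough free chunklets")
    free[cpg["type"]] -= need
    return need * cpg["data"] // (cpg["data"] + cpg["parity"])

cpg_r5 = {"type": "FC", "data": 3, "parity": 1}  # R50 3D+1P on FC
cpg_r6 = {"type": "FC", "data": 6, "parity": 2}  # R60 6D+2P, same FC drives

print(build_ld(cpg_r5, sets=10))  # usable chunklets from 40 consumed
print(build_ld(cpg_r6, sets=5))   # draws from the same free FC pool
```

Both CPGs deplete the one shared FC pool, which mirrors the slide's point that multiple CPGs may overlap the same physical drives.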

Page 41: 3PAR Presentation 14apr11-2


HP 3PAR Virtualization – the Logical View

41

Page 42: 3PAR Presentation 14apr11-2


Create CPG(s) – easy and straightforward

– In the “Create CPG” Wizard select and define

• 3PAR System

• Residing Domain (if any)

• Disk type:

− SSD – Solid State Disk

− FC – Fibre Channel disk

− NL – Near-Line SATA disk

• Disk Speed

• RAID Type

– By selecting advanced options more granular options can be defined

• Availability level

• Step size

• Preferred Chunklets

• Dedicated disks

42

Page 43: 3PAR Presentation 14apr11-2


Create Virtual Volume(s) – easy and straightforward

– In the “Create Virtual Volume” Wizard define

• Virtual Volume Name

• Size

• Provisioning type: Fat or Thin

• CPG to be used

• Allocation Warning

• Number of Virtual Volumes

– By selecting advanced options more options can be defined

• Copy Space Settings

• Virtual Volume Geometry

43

Page 44: 3PAR Presentation 14apr11-2


Export Virtual Volume(s) – easy and straightforward

– In the “Export Virtual Volume” Wizard define

• Host or Host Set to be presented to

– Optionally:

• Select specific array host ports

• Specify LUN ID

44

Page 45: 3PAR Presentation 14apr11-2


Simplify Provisioning – HP 3PAR Autonomic Groups

Traditional storage (10 individual volumes V1–V10, cluster of 5 VMware ESX servers):
– Initial provisioning of the cluster requires 50 provisioning actions (1 per host-volume relationship)
– Adding another host requires 10 provisioning actions (1 per volume)
– Adding another volume requires 5 provisioning actions (1 per host)

Autonomic HP 3PAR storage (autonomic host group, autonomic volume group):
– Initial provisioning of the cluster: add hosts to the host group, add volumes to the volume group, export the volume group to the host group
– Add another host: just add the host to the host group
– Add another volume: just add the volume to the volume group
– Volumes are exported automatically

45
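The provisioning-action counts on this slide follow from simple arithmetic (the slide's example is 5 ESX hosts sharing 10 volumes). A small sketch of that arithmetic; the function names are ours for illustration, not 3PAR API calls:

```python
# Hypothetical illustration of the slide's provisioning-action counts for a
# cluster of `hosts` ESX servers sharing `volumes` LUNs.

def traditional_actions(hosts: int, volumes: int) -> dict:
    """Per-relationship exports: every host/volume pair is one action."""
    return {
        "initial": hosts * volumes,   # 1 action per host-volume relationship
        "add_host": volumes,          # a new host must be given every volume
        "add_volume": hosts,          # a new volume must go to every host
    }

def autonomic_actions() -> dict:
    """With host/volume groups, each change is a single membership edit."""
    return {"initial": 3, "add_host": 1, "add_volume": 1}

print(traditional_actions(5, 10))  # {'initial': 50, 'add_host': 10, 'add_volume': 5}
print(autonomic_actions())
```

With groups, the cost of a change no longer scales with cluster size, which is the point of autonomic groups.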

Page 46: 3PAR Presentation 14apr11-2

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information

contained herein is subject to change without notice. Confidentiality label goes here

HP 3PAR InForm Software and Features

Page 47: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Software and Licensing

System Tuner

InForm Operating System

InForm Additional Software

Virtual Copy

Thin Persistence

Thin Conversion

Thin Provisioning Virtual Domains

Dynamic Optimization

LDAP

Virtual Lock

Scheduler
Host Personas

InForm Administration Tools

InForm Host Software

Recovery Manager for Oracle

Host Explorer

Recovery Manager for VMware

Multi Path IO IBM AIX

Recovery Manager for Exchange

Multi Path IO Windows 2003

Recovery Manager for SQL

System Reporter

3PAR Manager for VMware vCenter

3PAR InForm Software

Thin Copy Reclamation

RAID MP (Multi-Parity)
Autonomic Groups

Rapid Provisioning

Access Guard

Remote Copy

Full Copy

Adaptive Optimization

Four License Models: Consumption Based, Spindle Based, Frame Based, Free*

* Support fee associated

47

Page 48: 3PAR Presentation 14apr11-2

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information

contained herein is subject to change without notice. Confidentiality label goes here

HP 3PAR Thin Technologies

Page 49: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Thin Technologies Leadership Overview

Thin Provisioning
– No pool management or reservations

– No professional services

– Fine capacity allocation units

– Variable QoS for snapshots

Thin Deployments Stay Thin Over time

Reduce Tech Refresh Costs by up to 60%

Buy up to 75% less storage capacity

Start Thin Get Thin Stay Thin

Thin Conversion
‣ Eliminate the time & complexity of getting thin

‣ Open, heterogeneous migrations from any array to 3PAR

‣ Service levels preserved during inline conversion

Thin Persistence
‣ Free stranded capacity

‣ Automated reclamation for 3PAR offered by Symantec, Oracle

‣ Snapshots and Remote Copies stay thin

49

Page 50: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Thin Technologies Leadership Overview

• Built-in
− HP 3PAR Utility Storage is built from the ground up to support Thin Provisioning (ThP), eliminating the diminished performance and functional limitations that plague bolt-on thin solutions.

• In-band
− Sequences of zeroes are detected by the 3PAR ASIC and not written to disk. Most other vendors' ThP implementations write zeroes to disk; some can reclaim the space as a post-process.

• Reservation-less
− HP 3PAR ThP draws fine-grained increments from a single free-space reservoir without pre-dedication of any kind. Other vendors' ThP implementations require a separate, pre-dedicated pool for each data service level.

• Integrated
− API for direct ThP integration in Symantec File System, VMware, Oracle ASM and others

50

Page 51: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Thin Provisioning – Start Thin: Dedicate on write only

[Diagram: Traditional Array – dedicate on allocation vs. HP 3PAR Array – dedicate on write only. The traditional array commits physically installed disks to the full server-presented capacities/LUNs up front; the 3PAR array backs the same presented capacities with free chunklets and consumes physical capacity only for actually written data.]

51

Page 52: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Thin Conversion – Get Thin

Thin your online SAN storage up to 75%

A practical and effective solution to eliminate costs associated with:
• Storage arrays and capacity
• Software licensing and support
• Power, cooling, and floor space

The unique 3PAR Gen3 ASIC with built-in zero detection delivers:
• Simplicity and speed – eliminates the time & complexity of getting thin
• Choice – open and heterogeneous any-to-3PAR migrations
• Preserved service levels – high performance during migrations

[Diagram: before/after – the Gen3 ASIC detects zero blocks inline during migration]

52

Page 53: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Thin Conversion – Get Thin: How to get there

1. Defragment source data

a) If you are going to do a block-level migration via an appliance or a host volume manager (mirroring), defragment the filesystem prior to zeroing the free space

b) If you are using filesystem copies to do the migration, the copy will defragment the files as it copies, eliminating the need to defragment the source filesystem

2. Zero existing volumes via host tools

a) On Windows use sdelete -c <drive letter> *

b) On UNIX/Linux use a dd script

* sdelete is a free utility available from Microsoft

53
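The "dd script" step above boils down to filling the free space with zero blocks and then deleting the fill file, so that blocks once holding deleted data now hold zeroes the ASIC can detect. A minimal, portable sketch of that idea (paths and sizes are illustrative; a real host script would write until the filesystem is nearly full):

```python
# Sketch of "zero the free space": write zero blocks into a scratch file,
# then remove it so the zeroes land where deleted data used to be.
import os
import tempfile

BLOCK = 1024 * 1024  # 1 MiB of zeroes per write, like dd bs=1M

def zero_free_space(directory: str, max_blocks: int) -> int:
    """Write zero blocks into a scratch file, then remove it.

    Returns the number of blocks written. On a real host the loop would
    run until the filesystem reports ENOSPC instead of a fixed count.
    """
    path = os.path.join(directory, "zerofill.tmp")
    written = 0
    with open(path, "wb") as f:
        for _ in range(max_blocks):
            f.write(b"\0" * BLOCK)
            written += 1
    os.remove(path)  # the freed blocks now contain zeroes
    return written

with tempfile.TemporaryDirectory() as d:
    print(zero_free_space(d, 4))  # → 4
```

This mirrors what `sdelete -c` does on Windows and what a `dd if=/dev/zero of=fill; rm fill` script does on UNIX/Linux.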

Page 54: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Thin Conversion at a Global Bank

• No budget for additional storage
• Recently had huge layoffs
• Moved 271 TB from DMX to 3PAR
• Online/non-disruptive
• No Professional Services
• Large capacity savings

• “The results shown within this document demonstrate a highly efficient migration process which removes the unused storage”

• “No special host software components or professional services are required to utilise this functionality”

[Chart: sample volume migrations on different OSs – Unix (VxVM), ESX (VMotion), Windows (SmartMove) – comparing GB used on EMC vs. 3PAR; reduced power & cooling costs]

Capacity requirements reduced by >50%

$3 million savings in upfront capacity purchases

54

Page 55: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Thin Persistence – Stay Thin: Keep your array thin over time

[Diagram: before/after – the Gen3 ASIC detects and unmaps zero blocks]

– Non-disruptive and application-transparent “re-thinning” of thin-provisioned volumes

– Thin “insurance” against unexpected or thin-hostile application behavior

– Returns space to thin provisioned volumes and to the free pool for reuse

– Unique 3PAR Gen3 ASIC with built-in zero detection delivers:
• Simplicity – no special host software required; leverage standard file system tools/scripts to write zero blocks
• Preserved service levels – zeroes detected and unmapped at line speeds

– Integrated automated reclamation with Symantec and Oracle

55

Page 56: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Thin Persistence – manual thin reclaim. Remember: deleted files still occupy disk space

[Diagram sequence: LUN 1 (Data 1), LUN 2 (Data 2) and free chunklets at each stage]

Initial state:
• LUN 1 and LUN 2 are ThP VVols
• Data 1 and Data 2 are actually written data

After a while:
• Files deleted by the servers/file system still occupy space on storage

Zero-out unused space:
• Windows: sdelete *
• Unix/Linux: dd script

Run Thin Reclamation:
• Compact CPG and Logical Disks
• Freed-up space is returned to the free chunklets

* sdelete is a free utility available from Microsoft 56

Page 57: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Thin Persistence and VMware

Without 3PAR Thin Persistence – capacity used = 100 GB
• ESX writes a 100 GB Eager Zeroed Thick VMDK to the DataStore
• All zeroes need to be written to disk
• This will impact the performance of the storage

With 3PAR Thin Persistence – capacity used = 0 GB
• Hardware zero detection in the 3PAR Gen3 ASIC
• No physical disk IO required!

57
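The zero-detect behavior described above can be sketched in a few lines: a write that is all zeroes consumes no physical capacity on a thin volume. This is our toy illustration of the concept, not 3PAR firmware logic:

```python
# Toy model of inline zero detection on a thin-provisioned volume: all-zero
# writes allocate nothing, mirroring what the Gen3 ASIC does in hardware.

class ThinVolume:
    def __init__(self):
        self.allocated = {}  # block number -> data physically stored

    def write(self, block_no: int, data: bytes) -> bool:
        """Return True if physical capacity was consumed by this write."""
        if data.count(0) == len(data):          # all-zero write detected
            self.allocated.pop(block_no, None)  # nothing to store; free any old block
            return False
        self.allocated[block_no] = data
        return True

vol = ThinVolume()
assert vol.write(0, b"\0" * 512) is False   # eager-zeroed block costs nothing
assert vol.write(1, b"data" + b"\0" * 508) is True
print(len(vol.allocated))  # → 1
```

Note the second effect shown in the model: zeroing a previously written block also returns its space, which is exactly the Thin Persistence "re-thinning" behavior.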

Page 58: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

VMware and HP 3PAR Thin Provisioning Options

[Diagram: Virtual Machines with thin virtual disks (VMDKs) on a VMware VMFS Volume/Datastore, provisioned from a volume on the storage array]

Traditional array (200 GB thick LUN) vs. 3PAR array (200 GB thin LUN):
• Over-provisioned VMs: 250 GB vs. 250 GB
• Physically allocated: 200 GB vs. 40 GB
• Capacity savings: 50 GB vs. 210 GB

58

Page 59: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Thin Provisioning positioning – built-in, not bolt-on

• No upfront allocation of storage for Thin Volumes

• No performance impact when using Thin Volumes, unlike competing storage products

• No restrictions on where 3PAR Thin Volumes should be used, unlike many other storage arrays

• Allocation size of 16 KB, which is much smaller than most ThP implementations

• Thin provisioned volumes can be created in under 30 seconds without any disk layout or configuration planning required

• Thin Volumes are autonomically wide-striped over all drives within that tier of storage

59

Page 60: 3PAR Presentation 14apr11-2

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information

contained herein is subject to change without notice. Confidentiality label goes here

HP 3PAR Virtual Copy

Page 61: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Virtual Copy – Snapshot at its best

61

Integration with Oracle, SQL, Exchange, VMware

3PAR Virtual Copy

Base Volume → 100s of Snaps… but just one CoW

– Smart
• Promotable snapshots
• Individually deletable snapshots
• Scheduled creation/deletion
• Consistency groups

– Thin
• No reservations needed
• Non-duplicative snapshots
• Thin Provisioning aware
• Variable QoS

– Ready
• Instant readable or writeable snapshots
• Snapshots of snapshots
• Control given to end user for snapshot management
• Virtual Lock for retention of read-only snaps

Up to 8192 Snaps per array

Page 62: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Virtual Copy – Snapshot at its best

– Base volume and virtual copies can be mapped to different CPGs. This means they can have different quality-of-service characteristics. For example, the base volume space can be derived from a RAID 1 CPG on FC disks and the virtual copy space from a RAID 5 CPG on Nearline disks.

– The base volume space and the virtual copy space can grow independently without impacting each other (each space has its own allocation warning and limit).

– Dynamic Optimization can tune the base volume space and the virtual copy space independently.

62

Page 63: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Virtual Copy Relationships

The following shows a complex relationship scenario

63

Page 64: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Creating a Virtual Copy Using The GUI

Right Click and select “Create Virtual Copy”

64

Page 65: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

InForm GUI View of Virtual Copies

The GUI gives a very easy to read graphical view of VCs:

65

Page 66: 3PAR Presentation 14apr11-2

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information

contained herein is subject to change without notice. Confidentiality label goes here

HP 3PAR Remote Copy

Page 67: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

3PAR Remote Copy

HP 3PAR Remote Copy – Protect and share data

– Smart
• Initial setup in minutes
• Simple and intuitive commands
• No consulting services
• VMware SRM integration

– Complete
• Native IP-based, or FC
• No extra copies or infrastructure needed
• Thin provisioning aware
• Thin conversion
• Synchronous, Asynchronous Periodic or Synchronous Long Distance (SLD)
• Mirror between any InServ size or model
• Many to one, one to many

[Diagrams: Primary ↔ Secondary with sync or async periodic replication; Synchronous Long Distance configuration – Primary → Secondary (sync) plus Primary → Tertiary (async periodic standby); 1:N configuration]

67

Page 68: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Remote Copy Synchronous

• Real-time Mirror
– Highest I/O currency
– Lock-step data consistency
• Space Efficient
– Thin provisioning aware
• Targeted Use
– Campus-wide business continuity

Write sequence (Primary Volume → Secondary Volume):
Step 1: Host server writes I/Os to primary cache
Step 2: InServ writes I/Os to secondary cache
Step 3: Remote system acknowledges the receipt of the I/O
Step 4: I/O complete signal communicated back to primary host

68
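The four-step write sequence above can be sketched as a toy model; the cache objects and the acknowledgement log are simplifications of the real arrays, not 3PAR internals:

```python
# Toy model of a synchronous Remote Copy write: the host gets its completion
# only after the secondary array has acknowledged the I/O.

class Array:
    def __init__(self, name: str):
        self.name = name
        self.cache = []  # simplified write cache

    def write_to_cache(self, io: bytes) -> None:
        self.cache.append(io)

def sync_write(io: bytes, primary: Array, secondary: Array, log: list) -> str:
    log.append("1: host write lands in primary cache")
    primary.write_to_cache(io)
    log.append("2: primary forwards the I/O to secondary cache")
    secondary.write_to_cache(io)
    log.append("3: secondary acknowledges receipt")
    log.append("4: I/O complete returned to the host")
    return "complete"

steps = []
result = sync_write(b"block", Array("primary"), Array("secondary"), steps)
assert result == "complete" and len(steps) == 4
```

The model makes the latency trade-off visible: every host write waits for a full round trip to the secondary, which is why synchronous mode is limited to campus distances.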

Page 69: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Remote Copy – Data integrity

Assured Data Integrity

– Single Volume
• All writes to the secondary volume are completed in the same order as they were written on the primary volume

– Multi-Volume Consistency Group
• Volumes can be grouped together to maintain write ordering across the set of volumes
• Useful for databases or other applications that make dependent writes to more than one volume

69

Page 70: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Remote Copy Asynchronous Periodic – the replication solution for long-distance implementations

• Efficient even with high-latency replication links
– Host writes are acknowledged as soon as the data is written into the cache of the primary array

• Bandwidth-friendly
– The primary and secondary volumes are resynchronized periodically, either scheduled or manually
– If data is written to the same area of a volume in between resyncs, only the last update needs to be resynced

• Space efficient
– Copy-on-write snapshot versus full PIT copy
– Thin Provisioning-aware

• Guaranteed Consistency
– Enabled by Volume Groups
– Before a resync starts, a snapshot of the secondary volume or volume group is created

70

Page 71: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Remote Copy Asynchronous Periodic

[Diagram: Primary Site and Remote Site, each with base volume and snapshot]

Sequence:
1. Initial copy: base volume A is copied to the secondary (SA)
2. Resynchronization starts with snapshots: snapshot B is taken on the primary and the B−A delta is copied to the secondary
3. Upon completion the old snapshot is deleted; both sites hold B/SB, ready for the next resynchronization

71
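The delta-copy idea in the sequence above can be sketched with plain dictionaries standing in for snapshots; only blocks that differ between the last completed snapshot (A) and the new one (B) are shipped:

```python
# Sketch of snapshot-based delta resynchronization: compute the B-A delta and
# apply it to the secondary. Dicts of block -> data stand in for snapshots.

def changed_blocks(snap_a: dict, snap_b: dict) -> dict:
    """Blocks that are new or different in B relative to A (the B-A delta)."""
    return {blk: data for blk, data in snap_b.items() if snap_a.get(blk) != data}

snap_a = {0: "x", 1: "y", 2: "z"}            # state at the initial copy
snap_b = {0: "x", 1: "y2", 2: "z", 3: "w"}   # state at the next resync point

delta = changed_blocks(snap_a, snap_b)
print(delta)  # → {1: 'y2', 3: 'w'}

secondary = dict(snap_a)   # secondary currently matches snapshot A
secondary.update(delta)    # apply the delta, then the old snapshot is dropped
assert secondary == snap_b
```

This is why the mode is bandwidth-friendly: if a block is overwritten several times between resyncs, only its final contents appear in the delta.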

Page 72: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Remote Copy many-to-one / one-to-many

• Asynchronous Periodic only

• Distance limit and performance characteristics are the same as those supported for asynchronous periodic mode: ~4800 km / 3000 miles and 150 ms

• Requires 2 Gigabit Ethernet adapters per array

• InServ requirements
– Max support is 4 to 1; one of the 4 can mirror bi-directionally
– Requires a minimum of 2 controllers per array per site; the target site requires 4 or more controller nodes in the array

[Diagram: Primary Sites A, B and C and Primary/Target Site D replicating to a common Target Site]

72

Page 73: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Remote Copy – Supported Distances and Latencies

Remote Copy Type Max Supported Distance Max Supported Latency

Synchronous IP 210 km /130 miles 1.3ms

Synchronous FC 210 km /130 miles 1.3ms

Asynchronous Periodic IP N/A 150ms round trip

Asynchronous Periodic FC 210 km /130 miles 1.3ms

Asynchronous Periodic FCIP N/A 60ms round trip

73

Page 74: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

VMware ESX DR with SRM – Automated ESX Disaster Recovery

[Diagram: Production Site and Recovery Site, each with HP 3PAR storage, servers, VMware Infrastructure, Virtual Machines, VirtualCenter and Site Recovery Manager]

Production Site

Recovery Site

• What does it do?
− Simplifies DR and increases reliability
− Integrates VMware Infrastructure with HP 3PAR Remote Copy and Virtual Copy
− Makes DR protection a property of the VM
− Allows you to pre-program your disaster response
− Enables non-disruptive DR testing

• Requirements:
− VMware vSphere™

− VMware vCenter™

− VMware vCenter Site Recovery Manager™

− HP 3PAR Replication Adapter for VMware vCenter Site Recovery Manager

− HP 3PAR Remote Copy Software

− HP 3PAR Virtual Copy Software (for DR failover testing)

Production LUNs – Remote Copy DR LUNs – Virtual Copy Test LUNs

74

Page 75: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Local cluster – HA solution with shared disk resource

Data Center

• What does it do?
− Provides application failover between servers

• Advantages:
− No manual intervention required in case of server failure
− Can fail over automatically or manually

• Disadvantages:
− No protection against storage or Data Center failures

[Diagram: two clustered servers sharing one array in a single Data Center]

Page 76: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Campus cluster – using server/volume-manager-based mirroring

Cluster

Data Center 1 Data Center 2

• What does it do?
− Provides very high availability of applications/services
− Provides application failover between servers, storage and Data Centers

• Advantages:
− Data is replicated by the OS/volume manager
− No array-based replication needed
− Storage failure does not require a restart of the application/service
− Can fail over automatically or manually

• Disadvantages:
− High risk of split brain if no arbitration node or service is deployed
− Risk of rolling disaster/data inconsistency

[Diagram: up to 100 km between the Data Centers; quorum in Data Center 3]

Page 77: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Stretch cluster – using storage-array-based mirroring

Cluster

Data Center 1 Data Center 2

Remote Copy

A A

• What does it do?
− Data is replicated by the Storage Array (Remote Copy)

• Advantages:
− Data consistency can be assured

• Disadvantages:
− Manual failover
− Array-based replication needed

Up to several 100 km

Failover steps: swap CA, mount volume, restart app

Page 78: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Cluster Extension Geocluster for Windows – end-to-end clustering solution to protect against site failure

MicrosoftCluster

Data Center 1 Data Center 2

Up to 500km

CLX Geocluster

• What does it do?
− Provides manual or automated site-failover for server and storage resources
− Allows for transparent Live Migration of Hyper-V VMs between data centers

• Supported environments:
− Microsoft Windows Server

• Requirements:
− 3PAR Disk Arrays
− Remote Copy sync
− Microsoft Cluster
− Cluster Extension Geocluster
− Max 20 ms network round-trip delay

A AB

File share Witness in Data Center 3

Remote Copy

Page 79: 3PAR Presentation 14apr11-2

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information

contained herein is subject to change without notice. Confidentiality label goes here

HP 3PAR Dynamic and Adaptive Optimization

Page 80: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Dynamic and Adaptive Optimization – Manual or Automatic Tiering

Tier 0 – SSD
Tier 1 – FC
Tier 2 – SATA

3PAR Dynamic Optimization – Autonomic Data Movement
3PAR Adaptive Optimization – Autonomic Tiering and Data Movement per Region

80

Page 81: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Storage Tiers – HP 3PAR Dynamic Optimization

[Chart: performance vs. cost per useable TB – each tier (SSD, FC, Nearline) offers RAID 1, RAID 5 (2+1, 3+1, 7+1) and RAID 6 (6+2, 14+2) levels]

In a single command… non-disruptively optimize and adapt cost, performance, efficiency and resiliency

81

Page 82: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Dynamic Optimization – Use Cases

Deliver the required service levels for the lowest possible cost throughout the data lifecycle:
• 10 TB net – RAID 10, 300 GB FC drives
• 10 TB net – RAID 50 (3+1), 600 GB FC drives: ~50% savings
• 10 TB net – RAID 50 (7+1), 2 TB SATA-class drives: ~80% savings

Free 7.5 TB of net capacity on demand!
• 20 TB raw – RAID 10: 10 TB net
• 20 TB raw – RAID 50: 10 TB net + 7.5 TB net free

Accommodate rapid or unexpected application growth on demand by freeing raw capacity

82
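The capacity numbers on this slide follow from RAID efficiency: net capacity is raw capacity times the data-disk fraction of the RAID set. A quick check of the 20 TB raw example (the function is our illustration, ignoring spares and metadata overhead):

```python
# Net capacity from raw capacity and a RAID set's data/total disk ratio.

def net_tb(raw_tb: float, data_disks: int, total_disks: int) -> float:
    return raw_tb * data_disks / total_disks

raw = 20.0
print(net_tb(raw, 1, 2))   # RAID 10 (1+1 mirror): 10.0 TB net
print(net_tb(raw, 7, 8))   # RAID 50 (7+1):        17.5 TB net

# Converting the same 20 TB raw from RAID 10 to RAID 50 (7+1) keeps the
# 10 TB net in use and frees 7.5 TB of net capacity, as the slide states.
assert net_tb(raw, 7, 8) - net_tb(raw, 1, 2) == 7.5
```

Dynamic Optimization performs exactly this kind of RAID-level conversion non-disruptively, which is what makes the freed capacity available "on demand".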

Page 83: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

How to Use Dynamic Optimization

83

Page 84: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

How to Use Dynamic Optimization

84

Page 85: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

How to Use Dynamic Optimization

85

Page 86: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Performance Example with Dynamic Optimization

Volume Tune from R5, 7+1 SATA to R5, 3+1 FC 10K

86

Page 87: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Dynamic Optimization at a Customer

[Charts: chunklet usage (free vs. used) across ~96 physical disks, before and after Dynamic Optimization]

• Before: uneven data layout after a series of capacity upgrades
• After: balanced data layout following Dynamic Optimization (non-disruptive)

87

Page 88: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Adaptive Optimization – Improve Storage Utilization

Traditional deployment
• Single pool of the same disk drive type, speed, capacity and RAID level
• The number and type of disks are dictated by the max IOPS + capacity requirements

[Chart: required IOPS vs. required capacity – a single pool of high-speed media leaves wasted space]

Deployment with HP 3PAR AO
• An AO Virtual Volume draws space from 2 or 3 different tiers/CPGs
• Each tier/CPG can be built on different disk types, RAID levels and numbers of disks

[Chart: the IO distribution is served by high-, medium- and low-speed media pools]

88

Page 89: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

A New Optimization Strategy for SSDs

• The decline in flash prices has made SSD a viable storage tier, but data placement is difficult on a per-LUN basis

Non-optimized approach: non-tiered volume/LUN on SSD only

Optimized approach for leveraging SSDs: multi-tiered volume/LUN across Tier 0 SSD, Tier 1 FC and Tier 2 NL

• A new way of autonomic data placement and cost/performance optimization is required: HP 3PAR Adaptive Optimization

89

Page 90: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

IO density differences across applications

[Chart: cumulative access rate % vs. cumulative space % for ex2k7db_cpg, ex2k7log_cpg, oracle, oracle-stage, oracle1-fc, windows-fc, unix-fc, vmware, vmware2, vmware5 and windows workloads]

90

Page 91: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Adaptive Optimization – Improve Storage Utilization

One tier without Adaptive Optimization
[Chart from System Reporter: used space (GiB) vs. access rate (access/GiB/min)]
• The chart shows that most of the capacity has very low IO activity
• Adding Nearline disks would lower cost without compromising overall performance

Two tiers with Adaptive Optimization running
[Chart from System Reporter: used space (GiB) vs. access rate (access/GiB/min) per tier]
• A Nearline tier has been added and Adaptive Optimization enabled
• Adaptive Optimization has moved the least-used chunklets to the Nearline tier

91
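The region-level movement described above amounts to ranking regions by access density and keeping the hottest ones on the fast tier. A minimal sketch of that placement decision; the thresholds, tier names and region labels are illustrative only, not the AO algorithm itself:

```python
# Sketch of region-level tiering: given per-region access densities, assign
# the hottest regions to the fast tier and demote the coldest to Nearline.

def place_regions(access_per_gib: dict, fast_capacity: int) -> dict:
    """Assign the hottest `fast_capacity` regions to 'FC', the rest to 'NL'."""
    ranked = sorted(access_per_gib, key=access_per_gib.get, reverse=True)
    return {r: ("FC" if i < fast_capacity else "NL") for i, r in enumerate(ranked)}

# Most capacity is cold, as the System Reporter chart shows:
density = {"r1": 900, "r2": 850, "r3": 5, "r4": 2, "r5": 0}
placement = place_regions(density, fast_capacity=2)
assert placement["r1"] == "FC" and placement["r5"] == "NL"
print(placement)
```

Because the skew is typically extreme (a small fraction of regions serve most of the IO), demoting the cold majority to Nearline barely affects overall performance.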

Page 92: 3PAR Presentation 14apr11-2

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information

contained herein is subject to change without notice. Confidentiality label goes here

HP 3PAR Virtual Domains

Page 93: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

What are HP 3PAR Virtual Domains?

Multi-Tenancy with Traditional Storage – separate, physically-secured storage:
• Admin A, App A, Dept A, Customer A
• Admin B, App B, Dept B, Customer B
• Admin C, App C, Dept C, Customer C
(each on its own array)

Multi-Tenancy with 3PAR Domains – shared, logically-secured storage:
• Domain A: Admin A, App A, Dept A, Customer A
• Domain B: Admin B, App B, Dept B, Customer B
• Domain C: Admin C, App C, Dept C, Customer C

93

Page 94: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

What are the benefits of Virtual Domains?

[Diagram: Centralized Storage Admin with Traditional Storage vs. Self-Service Storage Admin with 3PAR Virtual Domains – physical storage is consolidated and provisioned to end users (dept, customer), either entirely through centralized storage administration, or through Virtual Domains that let end users manage their own provisioned storage]

94

Page 95: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

3PAR Domain Types & Privileges

Super User(s)
• Domains, Users, Provisioning Policies

Edit User(s) (set to “All” Domain)
• Provisioning Policies

“All” Domain: CPG(s), Host(s), User(s) & respective user level(s), VLUNs, VVs & TPVVs, VCs & FCs & RCs, Chunklets & LDs

“No” Domain: unassigned elements

“Engineering” Domain Set: Domain “A” (Dev), Domain “B” (Test), unassigned elements

95

Page 96: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Virtual Domains Overview

• Requires a license

• Allows fine-grained access control on a 3PAR array

• Up to 1024 domains or spaces per array

• Each user may have privileges over one domain, up to 32 selected domains, or all domains

• Each domain can be dedicated to a specific application

• The system provides different privileges to different users for domain objects, with no limit on the maximum number of users per domain

Also see the analyst report and product brief on http://www.3par.com/litmedia.html

96

Page 97: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

LDAP Login – Authentication and Authorization

[Diagram: Management Workstation, 3PAR InServ and LDAP Server exchanging steps 1–6]

Step 1: User initiates login to the 3PAR InServ via 3PAR CLI/GUI or SSH
Step 2: InServ searches local user entries first; upon mismatch, the configured LDAP server is checked
Step 3: LDAP server authenticates the user
Step 4: InServ requests the user's group information
Step 5: LDAP server provides the LDAP group information for the user
Step 6: InServ authorizes the user for a privilege level based on the user's group-to-role mapping

97
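Step 6 above maps the user's LDAP groups to an InServ privilege level. A minimal sketch of that mapping logic; the group DNs and role names are made up for illustration, not a real InServ configuration:

```python
# Sketch of group-to-role authorization: the user gets the highest-ranked
# role among the roles mapped from their LDAP groups.

GROUP_TO_ROLE = {
    "cn=storage-admins": "super",
    "cn=storage-ops":    "edit",
    "cn=helpdesk":       "browse",
}
ROLE_RANK = {"super": 3, "edit": 2, "browse": 1}

def authorize(user_groups: list) -> str:
    """Grant the highest-ranked role among the user's mapped groups."""
    roles = [GROUP_TO_ROLE[g] for g in user_groups if g in GROUP_TO_ROLE]
    if not roles:
        raise PermissionError("no group-to-role mapping; login denied")
    return max(roles, key=ROLE_RANK.get)

assert authorize(["cn=helpdesk", "cn=storage-ops"]) == "edit"
print(authorize(["cn=storage-admins"]))  # → super
```

A user whose groups map to no role is denied, which matches the flow: authentication (steps 1–3) and authorization (steps 4–6) are separate checks.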

Page 98: 3PAR Presentation 14apr11-2

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information

contained herein is subject to change without notice. Confidentiality label goes here

HP 3PAR Virtual Lock

Page 99: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Virtual Lock

• HP 3PAR Virtual Lock Software prevents alteration and deletion of selected Virtual Volumes for a specified period of time

• Supported with
– Fat and Thin Virtual Volumes
– Full Copy, Virtual Copy and Remote Copy

• Locked Virtual Volumes cannot be overwritten

• Locked Virtual Volumes cannot be deleted, even by an HP 3PAR Storage System administrator with the highest level of privileges

• Because it is tamper-proof, it is also a way to avoid administrative mistakes

Also see the product brief on http://www.3par.com/litmedia.html

99

Page 100: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR Virtual Lock

– Easily set just by defining Retention and/or Expiration Time in a Volume Policy

– Remember: locked Virtual Volumes cannot be deleted, even by an HP 3PAR Storage System user with the highest level of privileges

100

Page 101: 3PAR Presentation 14apr11-2

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information

contained herein is subject to change without notice. Confidentiality label goes here

HP 3PAR System Reporter

Page 102: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR System Reporter

– Allows monitoring performance, creating charge-back reports and planning storage resources

– Enables metering of all physical and logical objects, including Virtual Domains

– Provides custom thresholds and e-mail notifications

– Run or schedule canned or customized reports at your convenience

– Export data to a CSV file

– Controls Adaptive Optimization

– Use the DB of your choice: SQLite, MySQL or Oracle

– DB access:
• Clients: Windows IE, Mozilla, Excel
• Directly via the published DB schema

102

Page 103: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Example Histogram – VLUN Performance HP 3PAR System Reporter

Export data to a CSV file

103

Page 104: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

System Reporter

Historical performance information with 3 levels

• Daily

• Hourly

• High resolution: default 5 min, can be set to 1 min

All logical and physical objects instrumented

Page 105: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

System Reporter – Front-end statistics

Page 106: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

System Reporter – Backend statistics

IOPS and bandwidth should be the same on all backend ports

Page 107: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

System Reporter – CPU statistics

Thanks to the 3PAR ASIC, the CPUs are barely used, even during IO peaks

Page 108: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

System Reporter for capacity planning – Physical disks vs. Virtual Volumes usage

Page 109: 3PAR Presentation 14apr11-2

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information

contained herein is subject to change without notice. Confidentiality label goes here

HP 3PAR VMware Integration

Page 110: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

3PAR Management Plug-In for vCenter – Enhanced visibility into Storage Resources

Also see the whitepapers, analyst reports and brochures on http://www.3par.com/litmedia.html

– Improved Visibility
• VM-to-Datastore-to-LUN mapping

– Storage Properties
• View LUN properties, including Thin versus Fat
• See capacity utilized

– Integration with 3PAR Recovery Manager
• Seamless rapid online recovery

110

Page 111: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

3PAR Recovery Manager for VMware – Array-based Snapshots for Rapid Online Recovery

– Solution composed of
• 3PAR Recovery Manager for VMware
• 3PAR Virtual Copy
• VMware vCenter

– Use Cases
• Expedite provisioning of new virtual machines from VM copies
• Snapshot copies for testing and development

– Benefits
• Hundreds of VM snapshots for granular, rapid online recovery
− Reservation-less, non-duplicative, without agents
• vCenter integration – superior ease of use

111

Page 112: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Hardware Assisted Full Copy – vStorage API for Array Integration (VAAI)

– Optimized data movement within the SAN
• Storage VMotion
• Deploy Template
• Clone

– Significantly lower CPU and network overhead
• Quicker migration

112

Page 113: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

HP 3PAR VMware VAAI support – Example

VMware Storage VMotion with VAAI disabled (DataMover.HardwareAcceleratedMove=0) and enabled (DataMover.HardwareAcceleratedMove=1)

[Charts: frontend IO and backend disk IO during the migration in each case]

113

Page 114: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Virtual Infrastructure IOs are Random

In a virtual infrastructure, multiple VMs and applications share the same I/O queue. As a result, even with applications that do sequential I/Os, the physical server ends up doing random I/Os because of the intermeshing of these applications.

Random I/Os typically miss cache and are served by the physical disks. Therefore the performance of a VM store is directly linked to the number of physical disks that compose its LUN.

Page 115: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Hardware Assisted Locking – vStorage API for Array Integration (VAAI)

Increases I/O performance and scalability by offloading the block-locking mechanism for operations such as: moving a VM with VMotion; creating a new VM or deploying a VM from a template; powering a VM on or off; creating a template; creating or deleting a file, including snapshots.

Without VAAI: an ESX SCSI reservation locks the entire LUN
With VAAI: the SCSI reservation locks at the block level

115

Page 116: 3PAR Presentation 14apr11-2

© HP Copyright 2011 – Peter Mattei

Hardware Assisted Block Zero – vStorage API for Array Integration (VAAI)

[Diagram: without VAAI, the ESX host itself writes the full stream of zero blocks; with VAAI, it issues a single zero block that the array expands]

– Offloads large, block-level write operations of zeros to the storage hardware

– Reduces the ESX server workload
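The saving can be sketched in Python (a toy model; the block size and WRITE SAME semantics are simplified): zeroing 1024 blocks without the offload ships every zero block across the fabric, while the offloaded path sends one pattern block that the array repeats internally.

```python
BLOCK = 512

class Array:
    """Toy array that counts bytes crossing the fabric from the host."""
    def __init__(self, nblocks):
        self.disk = bytearray(b"\xff" * (nblocks * BLOCK))  # stale data
        self.bytes_received = 0

    def write(self, lba, data):
        self.bytes_received += len(data)
        self.disk[lba * BLOCK:lba * BLOCK + len(data)] = data

    def write_same(self, lba, nblocks, pattern):
        self.bytes_received += len(pattern)        # one block travels once
        self.disk[lba * BLOCK:(lba + nblocks) * BLOCK] = pattern * nblocks

def zero_without_vaai(array, lba, nblocks):
    for i in range(nblocks):                       # host ships every zero block
        array.write(lba + i, bytes(BLOCK))

def zero_with_vaai(array, lba, nblocks):
    array.write_same(lba, nblocks, bytes(BLOCK))   # array repeats the pattern

a, b = Array(1024), Array(1024)
zero_without_vaai(a, 0, 1024)
zero_with_vaai(b, 0, 1024)
print(a.bytes_received, b.bytes_received)  # 524288 vs 512
```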

116

Page 117: 3PAR Presentation 14apr11-2


VMware vStorage VAAI – are there any caveats to be aware of?

Also see the analyst report and brochure on http://www.3par.com/litmedia.html

The VMFS data mover does not leverage hardware offloads and instead uses software data movement if:

• The source and destination VMFS volumes have different block sizes

• The source file type is RDM and the destination file type is non-RDM (regular file)

• The source VMDK type is eagerzeroedthick and the destination VMDK type is thin

• The source or destination VMDK is any sort of sparse or hosted format

• The source Virtual Machine has a snapshot

• The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device − all datastores created with the vSphere Client are aligned automatically

• The VMFS has multiple LUNs/extents and they are all on different arrays

• Hardware cloning between arrays (even if within the same VMFS volume) does not work.
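The list above can be condensed into a small decision helper (hypothetical names and parameters, for illustration only; ESX makes this choice internally):

```python
def data_mover_path(src_block_size, dst_block_size, src_is_rdm=False,
                    dst_is_rdm=False, eagerzeroed_to_thin=False,
                    sparse_or_hosted=False, has_snapshot=False,
                    aligned=True, spans_multiple_arrays=False,
                    cross_array_clone=False):
    """Return which data mover a copy would use, per the caveat list above."""
    falls_back_to_software = (
        src_block_size != dst_block_size
        or (src_is_rdm and not dst_is_rdm)   # RDM source, regular-file target
        or eagerzeroed_to_thin
        or sparse_or_hosted
        or has_snapshot
        or not aligned
        or spans_multiple_arrays
        or cross_array_clone
    )
    return "software" if falls_back_to_software else "hardware"

print(data_mover_path(1, 1))                     # hardware offload
print(data_mover_path(1, 8))                     # software: block sizes differ
print(data_mover_path(1, 1, has_snapshot=True))  # software: source has snapshot
```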

vStorage APIs for Array Integration FAQ – http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021976

117

Page 118: 3PAR Presentation 14apr11-2


HP 3PAR Recovery Managers

Page 119: 3PAR Presentation 14apr11-2


3PAR Recovery Manager for VMware – Array-based Snapshots for Rapid Online Recovery

– Solution composed of

• 3PAR Recovery Manager for VMware

• 3PAR Virtual Copy

• VMware vCenter

– Use Cases

• Expedite provisioning of new virtual machines from VM copies

• Snapshot copies for testing and development

– Benefits

• Hundreds of VM snapshots – granular, rapid online recovery

− Reservation-less, non-duplicative, without agents

• vCenter integration – superior ease of use

119

Page 120: 3PAR Presentation 14apr11-2


Recovery Manager for Microsoft

– Exchange & SQL Aware

• Automatic discovery of Exchange and SQL Servers and their associated databases

• VSS Integration for application consistent snapshots

• Support for Microsoft® Exchange Server 2003, 2007, and 2010

• Support for Microsoft® SQL Server™ 2005 and Microsoft® SQL Server™ 2008

• Database verification using Microsoft tools

– Built upon 3PAR Thin Copy technology

• Fast point-in-time snapshot backups of Exchange & SQL databases

• 100’s of copy-on-write snapshots with just-in-time, granular snapshot space allocation

• Fast recovery from snapshot, regardless of size

• 3PAR Remote Copy integration

• Export backed up databases to other hosts

Also see the brochure on http://www.3par.com/litmedia.html
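The copy-on-write behavior described above can be sketched in a few lines of Python (a toy model, not 3PAR's Thin Copy implementation): snapshot space is allocated just in time, only when a snapped block is first overwritten.

```python
class CowSnapshot:
    """Copy-on-write snapshot of a live volume (a dict of lba -> block)."""
    def __init__(self, volume):
        self.volume = volume   # shared with the live volume, not copied
        self.delta = {}        # originals preserved on first overwrite

    def read(self, lba):
        return self.delta.get(lba, self.volume.get(lba))

def host_write(volume, snapshots, lba, data):
    """Write path: preserve the old block in each snapshot, then overwrite."""
    for snap in snapshots:
        if lba not in snap.delta:          # space allocated just in time
            snap.delta[lba] = volume.get(lba)
    volume[lba] = data

vol = {0: "a", 1: "b"}
snap = CowSnapshot(vol)
host_write(vol, [snap], 0, "A")
print(vol[0], snap.read(0), snap.read(1), len(snap.delta))  # A a b 1
```

Only one block of snapshot space is consumed, however large the volume is, which is why hundreds of such snapshots can coexist cheaply.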

120

Page 121: 3PAR Presentation 14apr11-2


3PAR Recovery Manager for Oracle

• Allows PIT Copies of Oracle Databases

− Non-disruptive, eliminating production downtime

− Uses 3PAR Virtual Copy technology

• Allows Rapid Recovery of Oracle Databases

− Increases efficiency of recoveries

− Allows Cloning and Exporting of new databases

• Integrated High Availability with Disaster Recovery Sites

− Integrated 3PAR Replication / Remote Copy for Array-to-Array DR

Also see the brochure on http://www.3par.com/litmedia.html

121

Page 122: 3PAR Presentation 14apr11-2


HP 3PAR – the right choice!

Thank you

Serving Information®. Simply.

Page 123: 3PAR Presentation 14apr11-2


Questions?

Further Information

3PAR Whitepapers, Reports, Videos, Datasheets etc.

http://www.3par.com/litmedia.html

123