page 1 © 2002 hp
Introduction to EVA
Keith Parris, Systems/Software Engineer
HP Services, Multivendor Systems Engineering
Budapest, Hungary, 23 May 2003
Presentation slides on this topic courtesy of: Chet Jacobs, senior technical consultant, and Karen Fay, senior technical consultant
page 2 © 2002 hp
HSV110 storage system
virtualization techniques
page 3 © 2002 hp
HSV110 virtualization: subjects covered
• distributed virtual RAID versus conventional RAID
• disk group characteristics
• virtual disk ground rules
• virtual disk leveling
• distributed sparing
• redundant storage sets
• Snapshot and SnapClone implementation
• configuration remarks
page 4 © 2002 hp
HSV110 virtualization: distributed versus conventional RAID
conventional RAID:
• performance limited by the # of disk drives in the StorageSet
• possible to find customer data if one knows the LBN and chunk size
• load balancing of applications and databases over the available backend (SCSI) busses is required
• I/Os balanced across the StorageSet

distributed virtual RAID:
• performance limited by the # of disk drives in the disk group
• customer data distributed across all disks in a group
• eliminates load balancing procedures for applications and databases
• I/Os balanced across the disk group
page 5 © 2002 hp
HSV110 virtualization: conventional versus distributed virtual RAID
[diagram: HSG80 RAID sets place separate RAID 5, RAID 0, and RAID 1 volumes on specific disks behind SCSI buses 1-6, while the HSV110 DVR spreads its RAID 5, RAID 1, and RAID 0 volumes over every spindle; workload evenly distributed across all spindles]
page 6 © 2002 hp
HSV110 virtualization: disk group characteristics
• minimum: 8 physical disk drives
  – VRAID5 requires a minimum of 5 physical disk spindles (no problem)
  – VRAID1 uses an even number of spindles
• maximum: the # of physical disk drives present
• will automatically choose spindles across shelves (in V2)
• maximum # of disk groups per subsystem: 16
• net capacity
  – TBD (it will change as disk capacities grow)
• contains the spare disk space
  – protects against 0, 1, or 2 disk failures
  – called "none, single or double" in the element manager
• chunk size
  – 2 MB (fixed), called a PSEG
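As a rough illustration only, the ground rules above can be encoded as a small check. The Python form and all names below are illustrative assumptions, not part of any HP tool or API:

# Sketch: encode the disk-group ground rules listed above.
# All names are illustrative; they do not correspond to any HP CLI or API.

MIN_DRIVES_PER_GROUP = 8
MAX_GROUPS_PER_SUBSYSTEM = 16
PSEG_SIZE_MB = 2  # fixed chunk (PSEG) size

def check_disk_group(drive_count: int, vraid_levels: set[str],
                     existing_groups: int) -> list[str]:
    """Return a list of rule violations for a proposed disk group."""
    problems = []
    if drive_count < MIN_DRIVES_PER_GROUP:
        problems.append(f"need at least {MIN_DRIVES_PER_GROUP} drives, got {drive_count}")
    if "VRAID5" in vraid_levels and drive_count < 5:
        problems.append("VRAID5 needs at least 5 physical spindles")
    if "VRAID1" in vraid_levels and drive_count % 2 != 0:
        problems.append("VRAID1 wants an even number of spindles")
    if existing_groups >= MAX_GROUPS_PER_SUBSYSTEM:
        problems.append(f"subsystem already has {MAX_GROUPS_PER_SUBSYSTEM} disk groups")
    return problems

print(check_disk_group(7, {"VRAID1"}, existing_groups=3))
# ['need at least 8 drives, got 7', 'VRAID1 wants an even number of spindles']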
page 7 © 2002 hp
HSV110 virtualization: virtual disk ground rules
virtual disk redundancy:
• VRAID0 (none): data is striped across all physical disks in the disk group.
• VRAID5 (moderate): data is striped with parity across all physical disks in the disk group; each stripe always uses 5 (4+1) physical disks.
• VRAID1 (high): data is striped and mirrored across all physical disks (an even number of them) in the disk group; established pairs of physical disks mirror each other.
page 8 © 2002 hp
conventional RAID5 algorithm
[diagram: 4+1 conventional RAID5 layout on disks 0-4; each stripe holds four data chunks plus one parity chunk (Parity 00,01,02,03 / 04,05,06,07 / 08,09,10,11), with the parity chunk rotating across the disks; the virtual disk address space (LBN 000-299, 300-599, 600-999, 1000-1299, 1300-1599) maps onto these fixed chunk positions]
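The predictability noted earlier ("possible to find customer data if one knows the LBN and chunk size") follows from the fixed arithmetic of a conventional layout. A minimal sketch of a generic 4+1 rotating-parity mapping, assuming an illustrative chunk size and not the HSG80's exact rotation scheme:

# Sketch of a generic 4+1 rotating-parity RAID5 mapping. The exact parity
# rotation used by the HSG80 may differ; the point is that the mapping is a
# fixed arithmetic function of the LBN, so data placement is predictable.

CHUNK_BLOCKS = 256          # illustrative chunk size in blocks (not the HSG80 default)
DATA_DISKS = 4              # 4 data + 1 parity disks per stripe

def locate(lbn: int):
    """Map a volume LBN to (physical disk index, block offset on that disk)."""
    chunk = lbn // CHUNK_BLOCKS               # which data chunk the LBN falls in
    stripe = chunk // DATA_DISKS              # which stripe that chunk belongs to
    parity_disk = stripe % (DATA_DISKS + 1)   # parity rotates one disk per stripe
    data_slot = chunk % DATA_DISKS            # position among the stripe's data chunks
    # skip over the parity disk when laying out the data chunks
    disk = data_slot if data_slot < parity_disk else data_slot + 1
    offset = stripe * CHUNK_BLOCKS + (lbn % CHUNK_BLOCKS)
    return disk, offset

print(locate(0))      # (1, 0)   -- stripe 0 puts parity on disk 0 here
print(locate(1024))   # (0, 256) -- stripe 1, parity has rotated to disk 1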
page 9 © 2002 hp
VRAID5 algorithm
[diagram: VRAID5 layout across disks 0-4 of the disk group; data and parity chunks are placed per stripe, and the virtual disk address space (LBN 000-399, 400-799, 800-1199, 1200-1599, 1600-1999) is spread across the group]
• always 4+1 RAID5
• guaranteed to have each PSEG on a separate spindle in the disk group
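A minimal sketch of the idea behind spreading 4+1 VRAID5 stripes over a disk group larger than five spindles, assuming a simple least-used selection policy; this illustrates the guarantee above and is not the EVA's actual allocator:

# Sketch of how 4+1 VRAID5 stripes could be spread over a larger disk group:
# each stripe picks 5 distinct spindles, favouring the least-used ones so
# capacity stays levelled. Illustration only, not the EVA's real algorithm.
import heapq

def place_stripes(num_stripes: int, spindles: list[str]) -> list[list[str]]:
    used = {s: 0 for s in spindles}
    layout = []
    for _ in range(num_stripes):
        # choose the 5 least-used spindles; PSEGs of one stripe never share a spindle
        chosen = heapq.nsmallest(5, spindles, key=lambda s: used[s])
        for s in chosen:
            used[s] += 1
        layout.append(chosen)
    return layout

group = [f"disk{i}" for i in range(8)]     # 8-spindle disk group
for stripe in place_stripes(4, group):
    print(stripe)
# every stripe lists 5 distinct disks, and usage stays roughly even across all 8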
page 10 © 2002 hp
VRAID1 algorithm
[diagram: VRAID1 layout; mirrored data chunks are distributed across the disks of the group, with the virtual disk address space (LBN 000-299, 300-599, 600-999, 1000-1299, 1300-1599) spread across them]
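A minimal sketch of the VRAID1 ground rule that established pairs of spindles mirror each other, with chunks striped across the pairs; the pairing policy shown here is an assumption, not the controller's real algorithm:

# Sketch of the VRAID1 idea: spindles are bound into semi-permanent pairs and
# each chunk is written to both members of a pair, with chunks striped across
# the pairs. Illustrative only.

def make_pairs(spindles: list[str]) -> list[tuple[str, str]]:
    if len(spindles) % 2:
        raise ValueError("VRAID1 needs an even number of spindles")
    return [(spindles[i], spindles[i + 1]) for i in range(0, len(spindles), 2)]

def chunk_location(chunk_index: int, pairs: list[tuple[str, str]]) -> tuple[str, str]:
    """Return the two spindles holding copies of a given chunk."""
    return pairs[chunk_index % len(pairs)]

pairs = make_pairs([f"disk{i}" for i in range(6)])
print(pairs)                      # [('disk0', 'disk1'), ('disk2', 'disk3'), ('disk4', 'disk5')]
print(chunk_location(4, pairs))   # ('disk2', 'disk3') -- chunk 4 stripes onto the second pair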
page 11 © 2002 hp
HSV110 virtualization: virtual disk leveling
the goal is to provide proportional capacity leveling across all disk drives within the disk group.
• example 1: disk group = 100 drives, all 18 GB
  – all disks will contain 1% of the virtual disk
• example 2: disk group = 100 drives, 50 x 72 GB and 50 x 36 GB
  – each 72 GB disk will contain > 1% of the virtual disk (approximately double the share of the 36 GB drives, because it has double the capacity)
  – each 36 GB disk will contain < 1% of the virtual disk
load balancing is achieved through capacity leveling
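The arithmetic behind example 2 above, worked as a short sketch (illustrative Python, not any EVA tool):

# Sketch of proportional capacity levelling from example 2:
# each spindle holds a share of the virtual disk proportional to its capacity.

drives = [72] * 50 + [36] * 50            # 50 x 72 GB and 50 x 36 GB
total = sum(drives)                        # 5400 GB in the disk group

share_72 = 72 / total
share_36 = 36 / total
print(f"72 GB drive: {share_72:.2%} of each virtual disk")   # ~1.33%, > 1%
print(f"36 GB drive: {share_36:.2%} of each virtual disk")   # ~0.67%, < 1%
print(f"ratio: {share_72 / share_36:.1f}x")                  # 2.0x -- double the capacity, double the share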
page 12 © 2002 hp
HSV110 virtualization: virtual disk leveling
dynamic pool capacity changes: pool capacity can be added in small increments (1 disk minimum)
[diagram: a disk group holding RAID 5, RAID 0, and RAID 1 volumes, plus more spindles added when more capacity or performance is needed in the disk group, equals a larger group with space available for expansion and all disks running at optimum throughput (dynamic load balancing)]
page 13 © 2002 hp
HSV110 virtualization: distributed sparing
note: we no longer spare on a separate spindle.
chunks are allocated, but not dedicated as spares, on all disk drives of the disk group, to survive 1 or 2 disk drive failures.
allocation algorithm:
• single (1) = capacity of 2 * the largest spindle in the disk group
• double (2) = capacity of 4 * the largest spindle in the disk group
hint: spindles have a semi-permanent paired relationship for VRAID1... that's why 2 times
page 16 © 2002 hp
HSV110 virtualization: distributed sparing
example #3:
• disk group: 8 * 36 GB and 6 * 72 GB, protection level: single (1)
  – total disk group size? 720 GB
  – spare allocation? 144 GB
  – maximum size for total virtual disks in the disk group? 576 GB
note: minus overhead for metadata, formatted disk reduction, and binary-to-decimal conversion
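A short sketch of the allocation arithmetic behind example #3, assuming the stated rule of 2 (single) or 4 (double) times the largest spindle and ignoring the overheads mentioned in the note; the function name is illustrative:

# Sketch of the distributed-spare arithmetic from example #3 (ignoring the
# metadata/formatting/binary-vs-decimal overheads noted above).

def spare_capacity(drives_gb: list[int], protection: int) -> int:
    """protection: 0 = none, 1 = single, 2 = double."""
    return 2 * protection * max(drives_gb) if protection else 0

drives = [36] * 8 + [72] * 6                    # 8 x 36 GB and 6 x 72 GB
total = sum(drives)                              # 720 GB raw
spare = spare_capacity(drives, protection=1)     # 2 * 72 = 144 GB set aside
usable = total - spare                           # 576 GB left for virtual disks
print(total, spare, usable)                      # 720 144 576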
page 17 © 2002 hp
HSV110 virtualization: distributed sparing
virtual disk blocks automatically regenerated to restore redundancy.
[diagram: after a disk failure, the highly redundant (RAID 1) and moderately redundant (RAID 5) volumes have their redundancy temporarily compromised, with available storage space of about 2 disks of virtual space; redundancy is then automatically regenerated, the data being regenerated and distributed across the virtual pool, leaving less than 1 disk of available virtual space]
page 18 © 2002 hp
best practices for disk groups:
• when using mostly VRAID1, use even spindle counts in disk groups
• if you need to isolate performance or disk-failure impacts, use separate disk groups; for example, the log file for a database should be in a different group than the data area
• try to keep disk groups to like disk capacities and speeds
• but... bring unlike drive capacities into a disk group in pairs
page 20 © 2002 hp
HSV110 storage system
point-in-time copy techniques
page 21 © 2002 hp
HSV110 virtualization: Snapshot and SnapClone implementation

Snapshot: data is copied from the virtual disk to the Snapshot on demand (before it is modified on the parent volume).
• space efficient ("virtually capacity free"):
  – chunks are allocated in the disk group on demand
  – the Snapshot is removed if the disk group becomes full
• space guaranteed ("standard"):
  – chunks are allocated in the disk group at the moment of Snapshot creation
  – the Snapshot allocation remains available if the disk group becomes full
• 7 active snapshots per parent volume (in V2)
• must live in the same disk group as the parent
• "preferred" pathed by the same controller as the parent volume
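A toy sketch of the copy-before-write behavior described above: the original chunk is preserved in the Snapshot just before the parent is modified, so the Snapshot keeps presenting the data as of its creation. This is a conceptual model with illustrative names, not EVA firmware behavior:

# Minimal copy-before-write sketch: before a parent chunk is overwritten, the
# original chunk is copied into the snapshot, so the snapshot keeps presenting
# the data as it was at creation time. A space-efficient snapshot allocates
# these chunks only on demand; toy model only.

class Snapshot:
    def __init__(self, parent: dict[int, bytes]):
        self.parent = parent
        self.preserved: dict[int, bytes] = {}   # chunks copied before modification

    def write_parent(self, chunk: int, data: bytes):
        if chunk in self.parent and chunk not in self.preserved:
            self.preserved[chunk] = self.parent[chunk]   # copy-before-write
        self.parent[chunk] = data

    def read_snapshot(self, chunk: int) -> bytes:
        # the snapshot sees the preserved copy if the parent has changed since creation
        return self.preserved.get(chunk, self.parent[chunk])

vol_a = {0: b"jan", 1: b"feb"}
snap = Snapshot(vol_a)
snap.write_parent(0, b"mar")            # parent volume receives updates
print(snap.read_snapshot(0))            # b'jan' -- snapshot still shows contents as of creation
print(vol_a[0])                         # b'mar'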
page 22 © 2002 hp
HSV110 virtualization: Snapshot and SnapClone implementation

SnapClone ("virtually instantaneous SnapClone", COPY): data is copied from the virtual disk to the SnapClone in the background.
• chunks are allocated in the disk group at the moment of SnapClone creation
• can be presented to a host and used immediately
• any group may be the home for the SnapClone (in V2)
• the SnapClone's RAID level will match the parent volume (for now)
• an independent volume when fully realized
  – may be preferred pathed to either controller
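A toy sketch of the "virtually instantaneous" SnapClone idea: usable immediately via read-through to the parent, with a background pass copying chunks until the clone is a fully independent volume. Again a conceptual model with illustrative names, not the controller's implementation:

# Sketch: the clone is usable at once, reads fall through to the parent for
# chunks not yet copied, and a background pass copies the rest. Toy model only.

class SnapClone:
    def __init__(self, parent: dict[int, bytes]):
        self.parent = parent
        self.copied: dict[int, bytes] = {}       # chunks already moved to the clone

    def read(self, chunk: int) -> bytes:
        # usable immediately: uncopied chunks are read through from the parent
        return self.copied.get(chunk, self.parent[chunk])

    def background_copy_step(self) -> bool:
        """Copy one outstanding chunk; True once the clone is fully realized."""
        # a real implementation would also copy-before-write on parent updates,
        # as with Snapshots, so the clone stays consistent with creation time
        for c in self.parent:
            if c not in self.copied:
                self.copied[c] = self.parent[c]
                break
        return len(self.copied) == len(self.parent)

vol_a = {0: b"jan", 1: b"feb"}
clone = SnapClone(vol_a)
print(clone.read(1))                     # b'feb' -- presented to a host right away
while not clone.background_copy_step():
    pass                                 # background process runs to completion
print(clone.copied == vol_a)             # True -- now an independent volume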
page 23 © 2002 hp
HSV110 virtualization: space guaranteed Snapshot creation and utilization
[diagram: timeline of a space guaranteed Snapshot. 12:00 noon: a snap of volume "A" is created; contents identical. 12:05: volume "A" receives updates. 12:10: contents differ; the snap still holds the contents as of noon. 12:15: volume "A" receives more updates. 12:20: contents still differ; the snap keeps the noon image.]
page 24 © 2002 hp
HSV110 virtualization: space efficient Snapshot creation and utilization
[diagram: the same noon-to-12:20 timeline as the previous slide, here for a space efficient Snapshot: the snap presents the contents as of noon while volume "A" receives updates at 12:05 and 12:15]
page 25 © 2002 hp
HSV110 virtualization: Snapshot versus SnapClone
Snapshot, space efficient:
• description: pointer based, copy-before-write, allocates space on demand
• pros: space efficient (allocated on demand)
• cons: overcommit issues

Snapshot, space guaranteed:
• description: pointer based, copy-before-write, preallocates space on creation
• pros: no overcommit issues
• cons: space inefficient (allocated right away)

SnapClone:
• description: same as Snapshot space guaranteed, but with a background process to create a separate VD
• pros: no overcommit issues; repeatable, separate VDs
• cons: space inefficient; consumes some background process time