RAID Technology

RAID Levels


Page 1: RAID Levels

RAID Technology

Page 2: RAID Levels

Metadata - Disk Data Format (DDF)

• DDF: the Common RAID Disk Data Format Specification, developed by the SNIA Common RAID Disk Data Format Technical Working Group

• The specification defines a standard data structure describing how data is formatted across the disks in a RAID group

• The Common RAID DDF structure allows a basic level of interoperability, such as data-in-place migration, between RAID products from different suppliers

• Adaptec's implementation of DDF is AMF, the Adaptec Metadata Format

• AMF provides the space to expand metadata for new features and array types such as RAID 6

Page 3: RAID Levels

Metadata - Adaptec Metadata Format (AMF)

[Diagram: disk layout from LBA 0: user data, followed by a 64MB run-time workspace and 32MB of configuration metadata at the end of the disk]

• AMF uses more space than previous metadata formats and sits at the opposite 'end' of the disk, occupying the largest block addresses instead of the smallest

• Although AMF uses more space, ARC truncates the available disk space to the highest 100MB multiple that fits within the physical size of the drive, so in practice the usable capacity may be unchanged

• AMF metadata resides in the last 96MB of the disk. It starts at LBA (capacity - 96MB - 1) and ends at LBA (capacity - 1); a rough sketch of this accounting follows

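The minimal Python sketch below illustrates the accounting described above; the 512-byte sector size, the helper names, and the exact rounding of the user area are assumptions for illustration, not part of the AMF specification.

MB = 1024 * 1024
SECTOR = 512                                   # bytes per LBA (assumed)

def amf_region_lbas(capacity_bytes, amf_bytes=96 * MB):
    """Approximate LBA range of the AMF area in the last 96MB of the disk."""
    total_lbas = capacity_bytes // SECTOR
    return total_lbas - amf_bytes // SECTOR, total_lbas - 1

def truncated_user_space(capacity_bytes, step=100 * MB):
    """Round the space left for user data down to the highest 100MB multiple."""
    user_bytes = capacity_bytes - 96 * MB      # everything outside the AMF area
    return (user_bytes // step) * step

capacity = 500 * 10**9                         # a nominal 500GB drive
print("AMF LBA range:", amf_region_lbas(capacity))
print("User space:", truncated_user_space(capacity) // MB, "MB")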

Page 4: RAID Levels

No RAID

• Contiguous Data
• No Fault Tolerance
• Minimum 1 Drive

• Storage with no RAID logic is often referred to as a Volume or JBOD

• A group of disks is known as a Disk Array

[Diagram: blocks A-P written contiguously; HDD 1 holds A-D, HDD 2 holds E-H, HDD 3 holds I-L, HDD 4 holds M-P]

Page 5: RAID Levels

No RAID

• Contiguous Data
• No Fault Tolerance
• Minimum 1 Drive

• Storage with no RAID logic is often referred to as a Volume or JBOD

[Diagram: the same four-drive volume with one drive failed]

All data is lost because there is no redundancy!!

Page 6: RAID Levels

RAID 0

• Block Striping
• No Fault Tolerance
• Minimum 2 Drives

• RAID 0 spreads data across several drives for speed using block level striping

• High Read and Write performance

[Diagram: blocks A-P striped across HDD 1-4; each stripe holds one block per drive, so HDD 1 holds A, E, I, M, HDD 2 holds B, F, J, N, and so on]
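
As a hedged illustration of block-level striping (not Adaptec firmware logic), this Python sketch maps a logical block number to a drive and a block position, reproducing the layout in the diagram above.

def raid0_map(logical_block, num_drives, stripe_unit=1):
    """Return (drive_index, block_on_drive) for a logical block under RAID 0."""
    column = (logical_block // stripe_unit) % num_drives       # which drive
    row = logical_block // (stripe_unit * num_drives)          # which stripe
    return column, row * stripe_unit + logical_block % stripe_unit

# Blocks A..P across four drives, one block per stripe unit:
for block, name in enumerate("ABCDEFGHIJKLMNOP"):
    drive, offset = raid0_map(block, 4)
    print(name, "-> HDD", drive + 1, "block", offset)          # A -> HDD 1, B -> HDD 2, ...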

Page 7: RAID Levels

RAID 0

• Block Striping
• No Fault Tolerance
• Minimum 2 Drives

• RAID 0 spreads data across several drives for speed using block level striping

• High Read and Write performance

[Diagram: the same striped layout across HDD 1-4 with one drive failed]

All data is lost because there is no redundancy!!

Page 8: RAID Levels

RAID 1

• Mirroring
• No Striping
• 100% Redundancy
• 50% Capacity Loss
• Requires 2 Drives

• Writes all data to both drives in the mirrored pair

[Diagram: blocks A-D written identically to HDD 1 and HDD 2 (the mirror drive)]

Page 9: RAID Levels

RAID 1

• Mirroring
• No Striping
• 100% Redundancy
• 50% Capacity Loss
• Requires 2 Drives

• Writes all data to both drives in the mirrored pair

[Diagram: the same mirrored pair with one drive failed]

No data was lost because of the mirrored drive!

Page 10: RAID Levels

RAID 10

• Striped array of mirrors
• Minimum 4 Drives
• 50% Capacity Loss
• Good write performance

• Can survive a disk failure in both mirror sets
• High I/O rates achieved due to multiple stripe segments

[Diagram: blocks A-H striped across two mirrored pairs of drives; each pair holds identical copies of its blocks]

Page 11: RAID Levels

RAID 10

• Striped array of mirrors
• Minimum 4 Drives
• 50% Capacity Loss
• Good write performance

• Can survive a disk failure in both mirror sets
• High I/O rates achieved due to multiple stripe segments

[Diagram: the same striped mirror layout with one drive failed in each mirrored pair]

Supports 2 disk failures: 1 failure per mirror set
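
The sketch below (Python, purely illustrative) treats RAID 10 as a stripe over mirrored pairs: every block is written to both members of one pair, and data survives as long as no pair loses both of its drives.

def raid10_targets(logical_block, num_pairs=2):
    """Drive indices (0-based) that receive a copy of this block."""
    pair = logical_block % num_pairs            # stripe across the mirror sets
    return 2 * pair, 2 * pair + 1               # both members of that pair

def raid10_survives(failed_drives, num_pairs=2):
    """True while no mirrored pair has lost both of its members."""
    return all(not {2 * p, 2 * p + 1} <= set(failed_drives) for p in range(num_pairs))

print(raid10_targets(0))          # block A is written to drives 0 and 1
print(raid10_survives({0, 2}))    # one failure in each pair: data intact
print(raid10_survives({0, 1}))    # both members of one pair: data lost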

Page 12: RAID Levels

Parity

• RAID 5 - Redundancy is achieved by the use of parity blocks

• If a single drive in the array fails, data blocks and a parity block from the working drives can be combined to reconstruct the missing data

• Exclusive-OR (XOR) is the logical operation used to generate parity

• XOR compares every two bits
− If the two bits are the same, an even parity bit (0) is generated
− If the bits are different, an odd parity bit (1) is created

Page 13: RAID Levels

XOR

• Exclusive OR
• Logical operation that generates parity for every 2 data bits

Data Byte 1:  0 1 0 1 0 0 1 1
Data Byte 2:  1 0 0 1 0 1 1 0
Parity Byte:  1 1 0 0 0 1 0 1   (XOR of the two data bytes)
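
A short Python sketch of the same idea: the parity block is the XOR of the data blocks, and any one missing block can be rebuilt by XOR-ing the parity with the surviving blocks (byte values taken from the example above).

def xor_parity(blocks):
    """XOR a list of equal-sized values together to form the parity."""
    parity = 0
    for b in blocks:
        parity ^= b
    return parity

data = [0b01010011, 0b10010110]              # the two data bytes above
parity = xor_parity(data)
print(format(parity, "08b"))                 # 11000101, matching the parity byte

# "Lose" the first data byte and rebuild it from the parity and the survivor.
rebuilt = xor_parity([parity, data[1]])
print(format(rebuilt, "08b"))                # 01010011, the lost byte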

Page 14: RAID Levels

RAID 5

• Block Striping
• Distributed Parity
• 100% Redundancy
• Better use of capacity than RAID 1

• Parity is distributed to prevent a single drive from becoming a bottleneck

• Data is striped and includes parity protection

• Parity is also striped for higher performance

[Diagram: data blocks A-L and parity blocks striped across HDD 1-4; each stripe holds three data blocks and one parity block, with the parity position rotating from drive to drive]
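
The sketch below (Python) shows one way to lay out rotating parity; the rotation order is an illustrative assumption, since the exact placement scheme is controller-specific.

def raid5_layout(num_data_blocks, num_drives):
    """Return stripes as lists mapping each drive to a data block or 'P'."""
    layout, block, stripe = [], 0, 0
    while block < num_data_blocks:
        parity_drive = (num_drives - 1 - stripe) % num_drives   # rotate parity
        row = []
        for drive in range(num_drives):
            if drive == parity_drive:
                row.append("P")
            elif block < num_data_blocks:
                row.append("D%d" % block)
                block += 1
            else:
                row.append("-")
        layout.append(row)
        stripe += 1
    return layout

for row in raid5_layout(12, 4):          # 12 data blocks on 4 drives
    print(row)                           # e.g. ['D0', 'D1', 'D2', 'P'] ...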

Page 15: RAID Levels

RAID 5

• Block Striping
• Distributed Parity
• 100% Redundancy
• Better use of capacity than RAID 1

• Data is striped and includes parity protection

• Parity is also striped for higher performance

[Diagram: the same RAID 5 layout with one drive failed]

No data is lost because of the distributed parity!

Page 16: RAID Levels

RAID 50

• Striped array of RAID 5 arrays
• Minimum 6 Drives

• Can survive a disk failure in each sub-array
• Data and parity striped across RAID 5 arrays

[Diagram: two three-drive RAID 5 sub-arrays (HDD 1-3 and HDD 4-6); data blocks A-P and their parity are striped across both sub-arrays]

Page 17: RAID Levels

RAID 50

• Striped array of RAID 5 arrays
• Minimum 6 Drives

• Can survive a disk failure in each sub-array
• Data and parity striped across RAID 5 arrays

[Diagram: the same RAID 50 layout with one drive failed in each sub-array]

Supports 2 disk failures: 1 failure per RAID 5 sub-array
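
A small illustrative Python check of that failure rule: a RAID 50 set keeps its data as long as no single RAID 5 sub-array loses more than one drive.

def raid50_survives(failed_drives, drives_per_subarray=3, num_subarrays=2):
    """True while every RAID 5 sub-array has at most one failed member."""
    for s in range(num_subarrays):
        members = range(s * drives_per_subarray, (s + 1) * drives_per_subarray)
        if sum(1 for d in members if d in failed_drives) > 1:
            return False
    return True

print(raid50_survives({0, 3}))    # one failure per sub-array: data intact
print(raid50_survives({0, 1}))    # two failures in one sub-array: data lost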

Page 18: RAID Levels

Advanced Data Protection Suite

Page 19: RAID Levels

Adaptec sets the RAID Feature Standard

Industry Standard RAID Features

• RAID 0, 1, 10, 5, 50, JBOD: Striping, Mirroring, Rotating Parity
• Online Array Optimizations
− Online Capacity Expansion: Add a drive or expand an array
− RAID Level Migration: Optimize for protection, capacity, or performance
− Stripe Size Configuration: Optimize performance based on data access patterns
• Large Array Size: Seamlessly support >2 TB arrays, up to 512 TB
• Configurable Hot Spares: Dedicated or global
• Hot Swap Disk Support: Auto-rebuild after a disk failure
• Flexible Initialization Schemes: Instant availability, background, or clear
• Multiple arrays on a single set of drives: Stripe multiple arrays across the same physical disks

Adaptec Unique Features

• Optimized Disk Utilization: No wasted space with different drive sizes
• Hot Space (RAID-5EE): No more spindles sitting idle
• Dual Drive Failure Protection (RAID-6): Double the tolerance to drive failures over RAID-5
• Striped Mirror (RAID-1E): Spread a mirror across an odd number of drives
• Copyback Hot Spare: Auto-rebuild to original setup after drive replacement

Page 20: RAID Levels

Striped Mirror (RAID-1E)

• RAID level-1 Enhanced (RAID-1E) combines mirroring and data striping

• Stripes data and copies of the data across all of the drives in the array

• As with standard RAID level-1, the data is mirrored and the capacity of the logical drive is 50% of the total actual disk capacity (N/2)

• RAID 1E requires a minimum of 3 drives and supports a maximum of 16 drives
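
The exact RAID 1E block placement is implementation-specific; the Python sketch below shows one common scheme in which each data row is followed by a copy row shifted by one drive, so no drive holds both a block and its own copy.

def raid1e_layout(num_blocks, num_drives):
    """Alternate data rows with copy rows shifted by one drive (one common scheme)."""
    rows = []
    for start in range(0, num_blocks, num_drives):
        data = ["D%d" % b for b in range(start, start + num_drives)]
        copy = [data[(d - 1) % num_drives] + "'" for d in range(num_drives)]
        rows += [data, copy]
    return rows

for row in raid1e_layout(6, 3):      # six blocks on three drives
    print(row)                       # ['D0','D1','D2'], ["D2'","D0'","D1'"], ...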


Page 21: RAID Levels

Hot Space (RAID 5EE)

• Similar to RAID-5, but includes an efficient distributed spare drive

• The extra spindle gives better performance and faster rebuild times

• Stripes data and parity across all the drives in the array

• The spare drive is part of the RAID-5EE array; the spare space is interleaved with the parity blocks, as sketched below

• The spare space is dedicated to the array

• Requires N+2 drives to implement: minimum of 4 drives, maximum of 16 drives
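
A rough Python sketch of what "interleaved spare space" can look like; the rotation of the parity ("P") and spare ("S") positions is one plausible scheme for illustration, not the controller's actual placement.

def raid5ee_layout(num_drives, num_stripes):
    """Each stripe holds N-2 data blocks plus one parity and one spare block."""
    layout, block = [], 0
    for stripe in range(num_stripes):
        parity = (num_drives - 1 - stripe) % num_drives   # rotate parity
        spare = (parity + 1) % num_drives                 # spare follows parity
        row = []
        for d in range(num_drives):
            if d == parity:
                row.append("P")
            elif d == spare:
                row.append("S")
            else:
                row.append("D%d" % block)
                block += 1
        layout.append(row)
    return layout

for row in raid5ee_layout(4, 4):
    print(row)                      # e.g. ['S', 'D0', 'D1', 'P'] ...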


Page 22: RAID Levels

Dual Drive Failure Protection (RAID-6)

• Similar to RAID 5, except that it uses a second set of independently calculated and distributed parity information for additional fault tolerance (see the sketch below)

• This extra fault tolerance ensures data availability in the event of two drives failing before a drive replacement can occur (physically or through a hot-spare rebuild)

• To lose access to the data, three disks would have to fail within the mean time to repair (MTTR) interval; the probability of this occurring is thousands of times lower than that of both disks in a RAID 1 array failing simultaneously

• Requires N+2 drives to implement
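
The following Python sketch shows the general principle of dual parity using arithmetic over GF(2^8), which is commonly used for the second ("Q") parity; it is a generic illustration, not Adaptec's implementation. P is a plain XOR and Q is a weighted XOR, giving two independent equations that can be solved for two lost blocks.

def gf_mul(a, b, poly=0x11d):
    """Multiply two bytes in GF(2^8) with the usual reduction polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)              # a^254 is the inverse in GF(2^8)

def pq_parity(data):
    """P = XOR of the data bytes; Q = XOR of each byte weighted by g^i (g = 2)."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(gf_pow(2, i), d)
    return p, q

def recover_two(data, x, y, p, q):
    """Rebuild the bytes at positions x and y from the survivors plus P and Q."""
    a, b = p, q
    for i, d in enumerate(data):
        if i not in (x, y):            # only the surviving drives are read
            a ^= d
            b ^= gf_mul(gf_pow(2, i), d)
    gx, gy = gf_pow(2, x), gf_pow(2, y)
    dx = gf_mul(gf_inv(gx ^ gy), b ^ gf_mul(gy, a))
    return dx, a ^ dx                  # the two reconstructed bytes

data = [0x11, 0x22, 0x33, 0x44]        # one byte from each of four data drives
p, q = pq_parity(data)
print([hex(v) for v in recover_two(data, 1, 3, p, q)])   # ['0x22', '0x44']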


Page 23: RAID Levels

RAID 60

• Striped array of RAID 6 arrays
• Minimum 8 Drives

• For use with high availability solutions
• Can survive 2 disk failures in each sub-array
• Data and parity striped across RAID 6 arrays

[Diagram: two four-drive RAID 6 sub-arrays (HDD 1-4 and HDD 5-8); data blocks A-P striped across both sub-arrays with two parity blocks per stripe in each sub-array]

Page 24: RAID Levels

RAID 60

• Striped array of RAID 6 arrays
• Minimum 8 Drives

• For use with high availability solutions
• Can survive 2 disk failures in each sub-array
• Data and parity striped across RAID 6 arrays

[Diagram: the same RAID 60 layout with two drives failed in each sub-array]

Supports 4 disk failures: 2 failures per RAID 6 sub-array

Page 25: RAID Levels

Selecting a RAID Level

RAID Level | Available Capacity    | Read Performance | Write Performance | Built-in Spare | Minimum Drives | Maximum Drives
Volume     | 100%                  | = 1 drive        | = 1 drive         | No             | 1              | 32
RAID 0     | 100%                  | ****             | ****              | No             | 2              | 128
RAID 1     | N/2                   | ***              | ***               | No             | 2              | 2
RAID 1E    | N/2                   | ***              | ***               | No             | 3              | 16
RAID 5     | N-1                   | ***              | **                | No             | 3              | 16
RAID 5EE   | N-2                   | ***              | ***               | Yes            | 4              | 16
RAID 6     | N-2                   | ***              | **                | No             | 4              | 16
RAID 10    | N/2                   | ****             | ***               | No             | 4              | 16
RAID 50    | N-1 per member array  | ***              | ***               | No             | 6              | 128
RAID 60    | N-2 per member array  | ***              | **                | No             | 8              | 128

(Read and write performance are relative ratings; more * is faster.)
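
As a quick cross-check of the capacity column, here is a small Python sketch (the function name and the member-array sizes in the examples are illustrative) that returns how many drives' worth of space each level leaves for user data.

def usable_drives(level, n, drives_per_member=None):
    """Capacity available to the user, expressed in whole drives."""
    if level in ("Volume", "RAID 0"):
        return n                                   # 100%
    if level in ("RAID 1", "RAID 1E", "RAID 10"):
        return n / 2                               # mirrored: N/2
    if level == "RAID 5":
        return n - 1                               # one drive of parity
    if level in ("RAID 5EE", "RAID 6"):
        return n - 2                               # parity plus spare, or dual parity
    if level == "RAID 50":
        return n - n // drives_per_member          # N-1 per member array
    if level == "RAID 60":
        return n - 2 * (n // drives_per_member)    # N-2 per member array
    raise ValueError(level)

print(usable_drives("RAID 5", 4))                          # 3
print(usable_drives("RAID 50", 6, drives_per_member=3))    # 4
print(usable_drives("RAID 60", 8, drives_per_member=4))    # 4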

Page 26: RAID Levels

Advanced Data Protection Suite

Copyback Hotspare

• When a drive fails, data from the failed drive is rebuilt onto the hotspare

• With Copyback enabled, the data is moved back to its original location after the controller detects that the failed drive has been replaced

• After the data has been copied back to the replaced drive, the hotspare becomes available again

• Maintains the concept of a hotspare location

• Particularly useful if a SATA hotspare is protecting a SAS RAID array

• Copyback mode is enabled using Adaptec Storage Manager, HRConf or ARCConf


Page 27: RAID Levels

Advanced Data Protection Suite

[Diagram sequence: a RAID 5 array with a hotspare; a disk fails, the hotspare kicks in and becomes a new array member, the failed disk is replaced with a new drive, and the copyback process moves the data back to the replacement]
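
The toy Python sketch below walks through that sequence; the class and slot names are invented for illustration, and the real behaviour lives in the controller firmware.

class ArrayWithHotspare:
    def __init__(self, members, spare):
        self.members = list(members)     # slots currently holding array data
        self.spare = spare               # available hotspare slot (or None)
        self.home_slot = None            # original slot of the failed drive
        self.spare_index = None          # position where the spare stepped in

    def fail(self, slot):
        """A member drive fails: its data is rebuilt onto the hotspare."""
        self.spare_index = self.members.index(slot)
        self.home_slot = slot
        self.members[self.spare_index], self.spare = self.spare, None

    def replace(self, copyback=True):
        """The failed drive is physically replaced in its original slot."""
        if copyback and self.home_slot is not None:
            # Copyback: data is copied back to the replacement drive and the
            # hotspare becomes available again for future failures.
            self.spare = self.members[self.spare_index]
            self.members[self.spare_index] = self.home_slot
            self.home_slot = self.spare_index = None
        # Without copyback the spare simply stays on as a permanent member.

array = ArrayWithHotspare(["slot1", "slot2", "slot3"], spare="slot4")
array.fail("slot2")
print(array.members, array.spare)        # ['slot1', 'slot4', 'slot3'] None
array.replace(copyback=True)
print(array.members, array.spare)        # ['slot1', 'slot2', 'slot3'] slot4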