Recap (RAID and Storage Architectures)
RAID
• To increase the availability and the performance (bandwidth) of a storage system, a set of disks (a disk array) can be used instead of a single disk.
• However, the reliability of the system drops: n devices have roughly 1/n the reliability of a single device.
• Reliability of N disks = Reliability of 1 disk ÷ N
– 50,000 hours ÷ 70 disks ≈ 700 hours
• Disk-system Mean Time To Failure (MTTF) drops from about 6 years to about 1 month!
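The rule of thumb above can be sketched in a few lines (an illustration, not from the slides; it assumes independent, identically reliable disks and no redundancy):

```python
# Sketch (assumed): the array-MTTF rule of thumb from the slide.
# Assumes independent, identically reliable disks with no redundancy.

def array_mttf(disk_mttf_hours: float, num_disks: int) -> float:
    """MTTF of a non-redundant array: disk MTTF divided by disk count."""
    return disk_mttf_hours / num_disks

mttf = array_mttf(50_000, 70)
print(f"{mttf:.0f} hours (~{mttf / (24 * 30):.1f} months)")  # -> 714 hours (~1.0 months)
```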
RAID-0
(Diagram: four disks, strips 0–15 striped round-robin — disk 1 holds strips 0, 4, 8, 12; disk 2 holds 1, 5, 9, 13; disk 3 holds 2, 6, 10, 14; disk 4 holds 3, 7, 11, 15.)

• Striped, non-redundant
– Excellent data transfer rate
– Excellent I/O request processing rate
– Not fault tolerant
• Typically used for applications requiring high performance for non-critical data
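The round-robin layout above reduces to simple modular arithmetic. A minimal sketch (assumed, not from the slides) of how a logical strip number maps to a physical (disk, offset) pair:

```python
# Sketch (assumed): RAID-0 round-robin strip placement.

def raid0_locate(strip: int, num_disks: int) -> tuple[int, int]:
    """Return (disk index, strip offset on that disk), both 0-based."""
    return strip % num_disks, strip // num_disks

# With the 4-disk layout shown above, strip 9 lives on disk 1 (0-based)
# at offset 2 on that disk.
print(raid0_locate(9, 4))   # -> (1, 2)
```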
RAID 1 - Mirroring
• Called mirroring or shadowing; uses an extra disk for each disk in the array (the most costly form of redundancy)
• Whenever data is written to one disk, that data is also written to the redundant disk: good for reads, fair for writes
• If a disk fails, the system just goes to the mirror and gets the desired data.
• Fast, but very expensive.
• Typically used for system drives and critical files
– Banking, insurance data
– Web (e-commerce) servers

(Diagram: two disks holding identical copies of strips 0–3.)
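The read/write asymmetry above can be made concrete. A toy sketch (assumed, not from the slides — the `Mirror` class is hypothetical): every write must update both copies, while a read can be served by either one.

```python
import random

# Sketch (assumed): RAID-1 semantics. Writes go to both copies (hence
# "fair for writes"); reads may be served by either copy (hence
# "good for reads").

class Mirror:
    def __init__(self, num_strips: int):
        self.disks = [[None] * num_strips, [None] * num_strips]

    def write(self, strip: int, data: bytes) -> None:
        for disk in self.disks:            # both copies updated on every write
            disk[strip] = data

    def read(self, strip: int) -> bytes:
        disk = random.choice(self.disks)   # either copy can serve the read
        return disk[strip]

m = Mirror(4)
m.write(2, b"payroll")
assert m.read(2) == b"payroll"
```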
RAID 2: Memory-Style ECC
(Diagram: data disks b0–b3, multiple ECC disks f0(b) and f1(b), and a parity disk P(b).)

• Multiple disks record the error-correcting code (ECC) information used to determine which disk is at fault
• A parity disk is then used to reconstruct corrupted or lost data
• Needs log2(number of disks) redundancy disks
• Least used, since the separate ECC is largely irrelevant: most new hard drives support built-in error correction
RAID 3 - Bit-interleaved Parity
• Uses 1 extra disk for each array of n disks.
• Reads and writes go to all disks in the array, with the extra disk holding the parity information in case there is a failure.
• Performance of RAID 3:
– Only one request can be serviced at a time
– Poor I/O request rate
– Excellent data transfer rate
– Typically used in applications with large I/O request sizes, such as imaging or CAD
(Diagram: a logical record is bit-interleaved across the striped physical records on the data disks, with a parity disk P protecting each physical record.)
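The parity used here (and in RAID 4/5) is a plain XOR across the data disks, which is what makes reconstruction possible: XOR-ing the parity with the surviving strips recovers the lost one. A sketch (assumed, not from the slides):

```python
from functools import reduce

# Sketch (assumed): XOR parity as used by RAID 3/4/5.
# The parity strip is the bytewise XOR of the data strips; any single lost
# strip can be rebuilt by XOR-ing the parity with the surviving strips.

def xor_strips(strips: list[bytes]) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

data = [b"\x10\x01", b"\x20\x02", b"\x40\x04"]
parity = xor_strips(data)                      # b"\x70\x07"

# Lose disk 1, then rebuild its strip from the parity plus the survivors.
rebuilt = xor_strips([data[0], data[2], parity])
assert rebuilt == data[1]
```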
RAID 4: Block Interleaved Parity
(Diagram: four data disks plus a dedicated parity disk.)

Disk 1     Disk 2     Disk 3     Disk 4     Parity disk
block 0    block 1    block 2    block 3    P(0-3)
block 4    block 5    block 6    block 7    P(4-7)
block 8    block 9    block 10   block 11   P(8-11)
block 12   block 13   block 14   block 15   P(12-15)
• Allows parallel access by multiple I/O requests
• Doing multiple small reads is now faster than before.
• A write, however, is a different story, since the parity information for the block must also be updated.
• In this case the parity disk is the bottleneck.
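The parity update that makes small writes expensive does not require reading the whole stripe. A sketch of the standard read-modify-write identity (assumed, not from the slides): new parity = old parity XOR old data XOR new data.

```python
# Sketch (assumed): the RAID-4/5 small-write parity update. The new parity
# is computed from the old data block and old parity alone, without reading
# the other data disks:
#     new_parity = old_parity XOR old_data XOR new_data

def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

d0, d1, d2 = b"\x01", b"\x02", b"\x04"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))       # 0x07

new_d1 = b"\x08"
new_parity = update_parity(parity, d1, new_d1)
# Matches a full recomputation over the updated stripe.
assert new_parity == bytes(a ^ b ^ c for a, b, c in zip(d0, new_d1, d2))
```

Still, in RAID 4 every such update lands on the one parity disk, which is exactly the bottleneck the slide describes.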
RAID 5 - Block-interleaved Distributed Parity
• To address the write deficiency of RAID 4, RAID 5 distributes the parity blocks among all the disks.
• This allows some writes to proceed in parallel
– For example, writes to blocks 8 and 5 can occur simultaneously.
– However, writes to blocks 8 and 11 cannot proceed in parallel, because both must update the same parity block, P2.
(Diagram: RAID 5 — blocks 0–23 over five disks, with the parity block rotated across the disks stripe by stripe.)

Disk 1   Disk 2   Disk 3   Disk 4   Disk 5
0        1        2        3        P0
4        5        6        P1       7
8        9        P2       10       11
12       P3       13       14       15
P4       16       17       18       19
20       21       22       23       P5
...      ...      ...      ...      ...
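The rotation above is deterministic, so the placement can be written as a small function. A sketch (assumed, not from the slides — `parity_disk`, `locate` and `disks_touched` are hypothetical names) matching the 5-disk layout: parity for stripe s sits on disk (n − 1 − s) mod n, and data blocks fill the remaining disks left to right.

```python
# Sketch (assumed): block/parity placement for the rotated-parity layout
# shown above, with 0-based disk indices.

def parity_disk(stripe: int, n: int) -> int:
    return (n - 1 - stripe) % n

def locate(block: int, n: int) -> tuple[int, int]:
    """Return (stripe, disk) for a data block; n is the number of disks."""
    stripe, pos = divmod(block, n - 1)
    p = parity_disk(stripe, n)
    return stripe, pos if pos < p else pos + 1   # skip over the parity disk

def disks_touched(block: int, n: int) -> set[int]:
    """Disks involved in a small write: the data disk plus its parity disk."""
    stripe, disk = locate(block, n)
    return {disk, parity_disk(stripe, n)}

# Writes to blocks 8 and 5 touch disjoint disk sets, so they can proceed in
# parallel; blocks 8 and 11 share the stripe-2 parity disk, so they cannot.
assert disks_touched(8, 5).isdisjoint(disks_touched(5, 5))
assert not disks_touched(8, 5).isdisjoint(disks_touched(11, 5))
```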
Performance of RAID 5 - Block-interleaved Distributed Parity
• Performance of RAID 5
– I/O request rate: excellent for reads, good for writes
– Data transfer rate: good for reads, good for writes
– Typically used for high-request-rate, read-intensive data lookup
– File and application servers, database servers, WWW, e-mail and news servers, intranet servers
• The most versatile and widely used RAID level.
Which Storage Architecture?
• DAS - Direct-Attached Storage
• NAS - Network Attached Storage
• SAN - Storage Area Network
Storage Architectures (Direct Attached Storage (DAS))

(Diagram: Unix, NetWare and NT/W2K hosts, each server — NetWare server, NT server, Unix server — with its own directly attached storage.)
DAS
(Diagram: a traditional server running MS Windows — CPUs and memory on the bus, a NIC, and a SCSI adaptor connected to a SCSI disk drive; block I/O is carried over the SCSI protocol.)
Storage Architectures (Network Attached Storage (NAS))

(Diagram: hosts on an IP network accessing shared information through a NAS controller and its disk subsystem.)
NAS
(Diagram: a "diskless" app server — or rather a "less disk" server — running MS Windows, with CPUs, memory and a NIC, speaks a file protocol (CIFS, NFS) over the IP network to a NAS appliance; the appliance runs an optimised OS and holds the SCSI adaptors and SCSI disk drives, so block I/O over the SCSI protocol stays inside the appliance.)
The NAS Network
(Diagram: app servers and a NAS appliance all attached to the same IP network.)
NAS - truly an appliance
Storage Architectures (Storage Area Networks (SAN))

(Diagram: clients on an IP network reach hosts, which access shared storage over a dedicated storage network.)
SAN - Fibre Channel (FC)

(Diagram comparing the two. DAS: a server running MS Windows — CPUs, memory, a NIC, and a SCSI adaptor driving a SCSI disk drive — with block I/O over the SCSI protocol, reaching to about 3 metres. Server with FC: the SCSI adaptor is replaced by an FC HBA (Host Bus Adaptor), and block I/O is carried as SCSI over the FC protocol to a remote-ish storage unit — with its own FC adaptor, SCSI adaptor and SCSI disk drive — to about 30 metres.)
FC-based SAN

(Diagram: app servers on the IP network, each also connected to an FC switch fabric that links FC storage sub-systems and an FC backup system.)