
Gecko: Contention-Oblivious Disk Arrays for Cloud Storage


Ji-Yong Shin, Cornell University

In collaboration with Mahesh Balakrishnan (MSR SVC), Tudor Marian (Google), and Hakim Weatherspoon (Cornell)

FAST 2013

Cloud and Virtualization

• What happens to storage?

[Figure: guest VMs run on a VMM over shared disks; I/O that is SEQUENTIAL inside each VM becomes RANDOM at the shared disks]

I/O CONTENTION causes throughput collapse

[Charts: throughput (MB/s) vs. # of sequential writers on a 4-disk RAID0, comparing "Seq - VM+EXT4" against "Rand - VM+EXT4"; sequential throughput collapses toward random levels as writers are added]

Existing Solutions for I/O Contention?

• I/O scheduling: reordering I/Os
  – Entails increased latency for certain workloads
  – May still require seeking

• Workload placement: positioning workloads to minimize contention
  – Requires prior knowledge or dynamic prediction
  – Predictions may be inaccurate
  – Limits freedom of placing VMs in the cloud

Log-Structured File System to the Rescue? [Rosenblum et al. 91]

• Log all writes to the tail

[Figure: logical address space 0..N; all writes are appended at the log tail on the shared disk]

Challenges of Log-Structured File System

• Garbage collection is the Achilles’ heel of LFS [Seltzer et al. 93, 95; Matthews et al. 97]

[Figure: garbage collection (GC) reads from the log head while new writes go to the log tail; GC and application reads still cause disk seeks]

Challenges of Log-Structured File System

• Garbage collection is the Achilles’ heel of LFS
  – 2-disk RAID-0 setting of LFS
  – GC under a write-only synthetic workload

[Chart: RAID 0 + LFS throughput (MB/s) over time (s), plotting GC, aggregate, and application throughput against the max sequential throughput; throughput falls by 10X during GC]

Problem: Increased virtualization leads to increased disk seeks and kills performance.

RAID and LFS do not solve the problem.

Rest of the Talk

• Motivation
• Gecko: contention-oblivious disk storage
  – Sources of I/O contention
  – New technique: chained logging
  – Implementation
• Evaluation
• Conclusion

What Causes Disk Seeks?

• Write-write
• Read-read
• Write-read

[Figure: two VMs issue interleaved writes and reads to the same disk]

What Causes Disk Seeks?

• Write-write
• Read-read
• Write-read
• Logging adds:
  – Write-GC read
  – Read-GC read

[Figure: GC reads compete with VM writes and reads on the same disk]

Principle: A single sequentially accessed disk is better than multiple randomly seeking disks.

Gecko’s Chained Logging Design

• Separating the log tail from the body (sketched below)
  – GC reads do not interrupt the sequential write stream
  – One uncontended drive is much faster than N contended drives
• Eliminates write-write contention and reduces write-read contention
• Eliminates write-GC read contention

[Figure: physical address space chained across Disk 0, Disk 1, Disk 2; the log tail sits on Disk 2 while garbage collection runs from the log head toward the tail]
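To make the design concrete, here is a minimal sketch of a chained log in Python. The ChainedLog class, the BLOCKS_PER_DISK constant, and the tiny capacities are illustrative assumptions for this sketch, not Gecko's in-kernel interfaces.

    BLOCKS_PER_DISK = 4    # tiny for illustration; real disks hold millions

    class ChainedLog:
        def __init__(self, num_disks):
            self.capacity = num_disks * BLOCKS_PER_DISK
            self.log = [None] * self.capacity    # physical slots, chained across disks
            self.tail = 0                        # next physical slot to write
            self.l2p = {}                        # logical addr -> physical slot

        def disk_of(self, phys):
            # Slots map onto disks in order, so only the disk holding
            # the tail ever sees new writes.
            return phys // BLOCKS_PER_DISK

        def write(self, logical_addr, data):
            if self.tail == self.capacity:
                raise RuntimeError("log full: GC must reclaim space first")
            self.log[self.tail] = (logical_addr, data)
            self.l2p[logical_addr] = self.tail   # older version becomes garbage
            self.tail += 1

        def read(self, logical_addr):
            # Reads can land on body disks without disturbing the tail disk.
            return self.log[self.l2p[logical_addr]][1]

    log = ChainedLog(num_disks=3)
    log.write(7, "a"); log.write(7, "b")         # second write supersedes the first
    assert log.read(7) == "b"
    assert log.disk_of(log.tail) == 0            # tail is still on the first disk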

Gecko’s Chained Logging Design

• Smarter “compact-in-body” garbage collection: instead of moving live data from the log head to the tail, GC compacts it into free space within the body (sketched below)

[Figure: GC moves live data from the log head (Disk 0) into holes in the body (Disks 0–1), leaving the log tail (Disk 2) untouched]
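A hedged sketch of compact-in-body GC against the ChainedLog above. The hole-scanning order and the is_live helper are assumptions made for illustration, not Gecko's actual GC policy.

    def is_live(log, phys):
        # A slot is live only if the map still points at it.
        entry = log.log[phys]
        return entry is not None and log.l2p.get(entry[0]) == phys

    def compact_in_body(log):
        # Relocate live blocks off the head disk into dead slots in the
        # body, issuing no writes to the disk that holds the log tail.
        tail_disk = log.disk_of(log.tail) if log.tail < log.capacity else -1
        holes = [p for p in range(BLOCKS_PER_DISK, min(log.tail, log.capacity))
                 if log.disk_of(p) != tail_disk and not is_live(log, p)]
        for phys in range(BLOCKS_PER_DISK):      # scan the head disk
            if not is_live(log, phys):
                continue                         # dead block: nothing to save
            if not holes:
                return                           # no room in the body
            target = holes.pop(0)
            log.log[target] = log.log[phys]      # fill a hole in the body
            log.l2p[log.log[target][0]] = target
            log.log[phys] = None                 # head slot is now reclaimed

The design point this illustrates: move-to-tail GC would append the live block at the tail, contending with application writes; filling holes in the body keeps the tail disk purely sequential.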

Gecko Caching

• What happens to reads going to the tail drives?
• Tail cache (LRU: RAM for hot data + MLC SSD for warm data) reduces write-read contention (sketched below)
• Body cache (flash) reduces read-read contention

[Figure: writes stream to the tail disk (Disk 2); reads to the tail are served by the tail cache, reads to the body (Disks 0–1) by the body cache]
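A minimal sketch of a two-tier LRU tail cache, with an OrderedDict standing in for each tier. The class name, slot counts, and promote/demote policy are assumptions for illustration, not Gecko's implementation.

    from collections import OrderedDict

    class TwoTierCache:
        # RAM tier holds hot data; evicted RAM entries demote to the SSD
        # tier (warm data); SSD evictions simply fall back to the disks.
        def __init__(self, ram_slots, ssd_slots):
            self.ram = OrderedDict()
            self.ssd = OrderedDict()
            self.ram_slots, self.ssd_slots = ram_slots, ssd_slots

        def put(self, addr, data):
            # Called on every write to the log tail, so tail reads can be
            # served from the cache instead of seeking the tail disk.
            self.ram[addr] = data
            self.ram.move_to_end(addr)
            if len(self.ram) > self.ram_slots:
                old_addr, old_data = self.ram.popitem(last=False)
                self.ssd[old_addr] = old_data        # demote RAM victim to SSD
                if len(self.ssd) > self.ssd_slots:
                    self.ssd.popitem(last=False)     # evict; data lives on disk

        def get(self, addr):
            if addr in self.ram:
                self.ram.move_to_end(addr)           # refresh LRU position
                return self.ram[addr]
            if addr in self.ssd:
                data = self.ssd.pop(addr)
                self.put(addr, data)                 # promote warm data to RAM
                return data
            return None                              # miss: read the tail disk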

Gecko Properties Summary

• No write-write contention, no GC-write contention, and reduced read-write contention
• Mirroring/striping the body (Disks 0’–2’): fault tolerance + read performance
• Body mirrors (Disks 0’–1’) enable power saving without consistency concerns
• Tail cache and body cache: reduced read-write and read-read contention

Gecko Implementation

• Primary logical-to-physical map (in memory): 4-byte entries over 4KB pages; less than 8 GB of RAM for 8 TB of storage (see the arithmetic below)
• Inverse physical-to-logical map (in flash): 8 GB of flash for 8 TB of storage, written every 1024 writes

[Figure: reads and writes go through the in-memory logical-to-physical map; the log head and tail advance across Disks 0–2 as pages fill, with the inverse map persisted to flash]
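A quick back-of-the-envelope check of the stated map size, assuming one 4-byte entry per 4KB page (the slide does not show the exact entry layout):

    PAGE_BYTES = 4 * 1024          # 4KB pages
    ENTRY_BYTES = 4                # 4-byte logical-to-physical entries
    STORAGE_BYTES = 8 * 10**12     # 8 TB of storage

    entries = STORAGE_BYTES // PAGE_BYTES     # one map entry per page
    map_bytes = entries * ENTRY_BYTES
    print(f"{map_bytes / 10**9:.1f} GB")      # ~7.8 GB: "less than 8 GB of RAM"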

Evaluation

1. How well does Gecko handle GC?
2. What is the performance of Gecko under real workloads?
3. What is the effect of varying Gecko chain length?
4. How effective is the tail cache?
5. How durable is the flash-based tail cache?

Evaluation Setup

• In-kernel version
  – Implemented as a block device for portability
  – Similar to software RAID

• User-level emulator
  – For fast prototyping
  – Runs block traces
  – Tail cache support

• Hardware
  – WD 600GB HDDs (2.5” 10K RPM, SATA-600; 512GB of each 600GB disk used)
  – Intel MLC (multi-level cell) SSDs (240GB, SATA-600)

How Well Does Gecko Handle GC?

• 2-disk setting; write-only synthetic workload
• Gecko’s aggregate throughput always remains high: 3X higher aggregate and 4X higher application throughput than Log + RAID0

[Charts: throughput (MB/s) over time (s) for Gecko vs. Log + RAID0, each plotting GC, aggregate, and application throughput; Gecko’s aggregate throughput stays at the max sequential throughput of one disk, while Log + RAID0 collapses well below the max sequential throughput of two disks]

How Well Does Gecko Handle GC?

• Application throughput can be preserved using smarter GC

[Charts: Gecko throughput (MB/s) over time (s) under no GC, move-to-tail GC, and compact-in-body GC]

MS Enterprise and MSR Cambridge Traces [Narayanan et al. 08, 09]

    Trace Name     Estimated        Total Data     Total Data  Total Data    Total IO      Num Read      Num Write
                   Addr Space (GB)  Accessed (GB)  Read (GB)   Written (GB)  Requests      Requests      Requests
    prxy                 136            2,076         1,297          779     181,157,932   110,797,984   70,359,948
    src1                 820            3,107         2,224          884      85,069,608    65,172,645   19,896,963
    proj               4,102            2,279         1,937          342      65,841,031    55,809,869   10,031,162
    Exchange           4,822              760           300          460      61,179,165    26,324,163   34,855,002
    usr                2,461            2,625         2,530           96      58,091,915    50,906,183    7,185,732
    LiveMapsBE         6,737            2,344         1,786          558      44,766,484    35,310,420    9,456,064
    MSNFS              1,424              303           201          102      29,345,085    19,729,611    9,615,474
    DevDivRelease      4,620              428           252          176      18,195,701    12,326,432    5,869,269
    prn                  770              271           194           77      16,819,297     9,066,281    7,753,016

What is the Performance of Gecko under Real Workloads?

• Gecko
  – Mirrored chain of length 3
  – Tail cache (2GB RAM + 32GB SSD)
  – Body cache (32GB SSD)
• Log + RAID10
  – Mirrored, log + 3-disk RAID-0
  – LRU cache (2GB RAM + 64GB SSD)
• Mix of 8 workloads: prn, MSNFS, DevDivRelease, proj, Exchange, LiveMapsBE, prxy, and src1
• 6-disk configuration with 200GB of data prefilled
• Gecko showed less read-write contention and a higher cache hit rate; Gecko’s throughput is 2X-3X higher

[Charts: read and write throughput (MB/s) over time (s) for Gecko vs. Log + RAID10]

What is the Effect of Varying Gecko Chain Length?

• Same 8 workloads, with 200GB of data prefilled
• A single uncontended disk, with reads and writes separated, yields better performance

[Chart: read and write throughput (MB/s; error bar = stdev) for Gecko chains of length 2–7 vs. Log-RAID0 with 2–7 disks]

How Effective Is the Tail Cache?

• Read hit rate of the tail cache (2GB RAM + 32GB SSD) on a 512GB disk
• 21 combinations of 4 to 8 MSR Cambridge and MS Enterprise traces
• At least an 86% read hit rate; the RAM tier handles most of the hot data
• The amount of data in the disk changes the hit rate, but it still averages above 80%
• The tail cache can effectively resolve read-write contention

[Charts: hit rate (%) per application mix, split between RAM and SSD; hit rate (%) vs. amount of data in the disk (GB)]

How Durable Is the Flash-Based Tail Cache?

• Static analysis of lifetime based on cache hit rate (sketched below)
• Using 2GB of RAM in front of the SSD extends its lifetime 2X-8X

[Charts: SSD lifetime (days) at 40MB/s across the 21 application mixes, with and without the RAM tier]
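A hedged sketch of the kind of static lifetime arithmetic the slide implies. The 3,000-cycle MLC endurance and the 75% RAM absorption rate below are assumed figures for illustration only; the talk does not state the parameters it used.

    def ssd_lifetime_days(capacity_gb, pe_cycles, write_mb_per_s):
        # Total bytes the SSD can absorb before wear-out, divided by the
        # bytes written per day, gives a first-order lifetime estimate.
        total_writable = capacity_gb * 10**9 * pe_cycles
        per_day = write_mb_per_s * 10**6 * 86400
        return total_writable / per_day

    base = ssd_lifetime_days(32, 3000, 40)        # every write hits the SSD
    with_ram = ssd_lifetime_days(32, 3000, 10)    # RAM absorbs 75% of writes
    print(base, with_ram, with_ram / base)        # 4X longer at a quarter the rate

Whatever the exact endurance, the ratio is what matters: writes the RAM tier absorbs never reach the SSD, so lifetime scales inversely with the SSD's residual write rate.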

Conclusion

• Gecko enables fast storage in the cloud
  – Scales with increasing virtualization and number of cores
  – Oblivious to I/O contention
• Gecko’s technical contributions
  – Separates the log tail from its body
  – Separates reads and writes
  – Tail cache absorbs reads going to the tail
• A single sequentially accessed disk is better than multiple randomly seeking disks

Questions?