DistCache: Provable Load Balancing for Large-Scale Storage Systems with Distributed Caching

Zaoxing (Alan) Liu, Zhihao Bai, Zhenming Liu, Xiaozhou Li, Changhoon Kim, Vladimir Braverman, Xin Jin, Ion Stoica



Large-scale cloud services need large storage clusters


§ Major cloud services serve billions of users.

[Figure: large datacenter clusters]

[Figure: bar chart of query popularity (0–0.3) across data items 1–10, showing a highly skewed distribution]

Storage servers have load imbalance issue

[Figure: per-server load across storage servers, visibly imbalanced]

The skewness of the workload causes this imbalance.

§ Typical workloads [Sigmetrics’12]:
o Highly skewed.
o Dynamic.
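To see how concentrated such a workload is, here is a small back-of-the-envelope sketch (ours, not from the talk) of the query share captured by the hottest items under a Zipf law:

```python
# Rough illustration (our sketch, not from the talk) of workload skew:
# the fraction of queries that hit the hottest items under a Zipf law.
def zipf_share(n_items: int, alpha: float, top_k: int) -> float:
    """Fraction of queries hitting the top_k hottest of n_items
    when item popularity follows Zipf with exponent alpha."""
    weights = [1.0 / rank ** alpha for rank in range(1, n_items + 1)]
    return sum(weights[:top_k]) / sum(weights)

print(f"{zipf_share(10_000, 0.99, 100):.2f}")
```

Under Zipf-0.99 with 10,000 items, the hottest 1% of items draw roughly half of all queries, so the few servers holding them are overwhelmed.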

Solutions to mitigate the load imbalance

§ Consistent hashing and related:
o Does not handle dynamic, skewed workloads.

§ Data migration or replication:
o Large system and storage overhead.
o High cache coherence cost.

§ Front-end cache as a load balancer:
o Low update overhead.
o Works for arbitrary workloads.


Prior work: Fast, small cache alleviates load imbalance


Cache hottest O(n log n) items [SoCC’11]

Server load is balanced

A single cache node balances the load within one cluster of n servers.

Applied at cluster scale in [NSDI’16, SOSP’17].

Strawman: Big, fast cache for inter-cluster load balancing

[Figure: one big front-end cache in front of m clusters of n servers each]

One “Big” cache is infeasible

A single big cache is not scalable: with 32 clusters of 32 servers, the cache node would need roughly 41 Tbps of throughput, far beyond a 40G link.

First, balance the load within each cluster

[Figure: a cache node inside each of the m clusters balances load within that cluster; per-cluster loads shown]

Second, balance the load between clusters

Cache the hottest O(m log m) items across the m clusters.

[Figure: a layer of “BIG” server cache nodes balancing load between the m clusters]

We need to avoid using a big node anywhere.

DistCache: Distributed caching as load balancer

[Figure: an upper layer and a lower layer of small cache nodes over the m clusters]

Cache the hottest O(m log m) items.

A provable, practical, general mechanism.

Natural goals on a distributed caching mechanism

Ideally, DistCache should be as good as “one big cache” that absorbs the O(m log m) hottest items.

[Figure: upper-layer and lower-layer cache nodes over m clusters]

To achieve “one big cache”:
o Support ANY query workload to the hottest O(m log m) items.
o No cache node is overloaded.
o Keep cache coherence with MINIMAL cost.

Design Challenges of DistCache

§ Challenge #1: How to allocate the cached items?
o Do not overload any cache node.
o Do not incur high cache coherence cost.

§ Challenge #2: How to query the cached items?
o Provide the best and most stable cache query distribution.

§ Challenge #3: How to update the cached items?
o Two-phase update to ensure cache coherence.


Challenge #1: How to allocate the cached items?

Strawman Sol #1: Cache-Replication

[Figure: every upper-layer cache node replicates the full hot set {A,B,C,D,E}; lower-layer nodes hold {A,B,C}, {D,E}, {F}; a single update must be applied to every upper-layer replica]

Cache-Replication incurs high cache coherence cost.

Strawman Sol #2: Cache-Partition

[Figure: upper-layer nodes hold partitions {A,B,C}, {D}, {E}; lower-layer nodes hold {A,B,C}, {E}, {D}; the nodes holding {A,B,C} are overloaded]

Cache-Partition can put too many of the hottest items on the same cache node.

Independent hashes to allocate the cached items

[Figure: hash H1 spreads the hot items across the upper-layer nodes as {B,E}, {A}, {C,D,F}; hash H2 spreads them across the lower-layer nodes as {A,B,C}, {D,E}, {F}; updating an item touches one node in each layer]

Two independent hashes H1 and H2 allocate the hot items:
• Stable and best cache allocation.
• Small cache coherence cost.
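A minimal sketch of this allocation, assuming salted SHA-256 stands in for the two independent hashes (the function names and salts are ours, not the paper’s):

```python
import hashlib

def h(key: str, salt: str, n_nodes: int) -> int:
    """One of two independent hash functions (varied by salt)."""
    digest = hashlib.sha256(f"{salt}:{key}".encode()).digest()
    return int.from_bytes(digest, "big") % n_nodes

def allocate(hot_items, n_upper: int, n_lower: int):
    """Place each hot item on one upper-layer node (via H1) and one
    lower-layer node (via H2), i.e. exactly two copies per item."""
    upper = [set() for _ in range(n_upper)]
    lower = [set() for _ in range(n_lower)]
    for item in hot_items:
        upper[h(item, "H1", n_upper)].add(item)
        lower[h(item, "H2", n_lower)].add(item)
    return upper, lower

upper, lower = allocate(["A", "B", "C", "D", "E", "F"], 3, 3)
# Each item lives on exactly one node per layer, so updating an item
# touches only two cache nodes -- the small coherence cost above.
```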

Challenge #2: How to query the cached items?

[Figure: Get(C) always routed to the upper layer first; the upper-layer node holding {C,D,F} becomes a hotspot]

Always querying the upper layer first does not guarantee the best throughput.

Power-of-two-choices to query the cached items

[Figure: Get(C) compares the current loads of C’s upper-layer and lower-layer cache nodes and picks the less-loaded one]

Using the power of two choices to route queries guarantees stable throughput.

Putting it together: DistCache

• Independent hashes to allocate the cached items.
• Power-of-two-choices over current cache loads to route queries.

[Figure: upper-layer nodes hold {A,C}, {B,E}, {D,F}; lower-layer nodes hold {B,E}, {A}, {C,D,F}; Get(B) is routed to the less-loaded of B’s two cache nodes]
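The two ingredients can be sketched together in a few lines, using the slide’s example allocation (the load counters here are a simplification; the real system tracks cache loads at the client-side switches):

```python
# Power-of-two-choices routing over the slide's example allocation.
# Each item has one replica per layer; a query goes to whichever of
# the two holding nodes currently reports the lighter load.
upper = [{"A", "C"}, {"B", "E"}, {"D", "F"}]   # upper-layer cache nodes
lower = [{"B", "E"}, {"A"}, {"C", "D", "F"}]   # lower-layer cache nodes
load_u = [0, 0, 0]
load_l = [0, 0, 0]

def route(item: str):
    """Return ('upper'|'lower', node index) chosen for this query."""
    u = next(i for i, s in enumerate(upper) if item in s)
    l = next(i for i, s in enumerate(lower) if item in s)
    if load_u[u] <= load_l[l]:      # pick the less-loaded replica
        load_u[u] += 1
        return ("upper", u)
    load_l[l] += 1
    return ("lower", l)

for _ in range(6):                  # six Get(B) queries
    route("B")
print(load_u[1], load_l[0])
```

Six consecutive Get(B) queries split 3/3 between B’s two cache nodes instead of piling onto one.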

Theoretical Guarantee behind DistCache

For m storage clusters, DistCache absorbs any query workload to the hottest O(m log m) items, under the following condition:
o The query rate for a single item is no larger than ½ of one cache node’s throughput (half of one node, not of a whole cluster!).

Proof Sketch: Convert to a perfect matching problem

[Figure: bipartite graph between the hottest items A–F and the cache nodes 1–6 of the two layers, with 0/1 edge indicators]

Our power-of-two-choices query routing can find a perfect matching for any query workload distribution.

The proofs leverage tools from expander graphs, network flows, and queueing theory.
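A toy version of that reduction, with unit capacities for readability (the actual proof works with query rates and flows; this augmenting-path matcher is purely illustrative):

```python
def perfect_matching(candidates):
    """candidates: item -> list of cache nodes that hold it (one per
    layer). Returns an item->node assignment covering every item, or
    None if no perfect matching exists (augmenting-path search)."""
    match = {}                      # node -> item currently assigned

    def augment(item, visited):
        for node in candidates[item]:
            if node in visited:
                continue
            visited.add(node)
            # Free node, or its current item can be re-routed elsewhere.
            if node not in match or augment(match[node], visited):
                match[node] = item
                return True
        return False

    for item in candidates:
        if not augment(item, set()):
            return None
    return {item: node for node, item in match.items()}

# Hottest items A-F; each can be served by one upper (u*) and one
# lower (l*) cache node, per the independent-hash allocation.
cands = {"A": ["u1", "l2"], "B": ["u2", "l1"], "C": ["u1", "l3"],
         "D": ["u3", "l3"], "E": ["u2", "l1"], "F": ["u3", "l3"]}
assignment = perfect_matching(cands)
print(assignment is not None)       # a perfect matching exists here
```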

Remarks of the DistCache Analysis

§ The numbers of cache nodes in the two layers can differ, as long as m isn’t too small.

§ The throughputs of the cache nodes can differ.

§ The aggregate throughput is almost the same as one “big cache”.

Example Deployment Scenarios of DistCache

§ Flash/Disk servers (O(100) KQPS each) + a DRAM/SSD-array cache (O(10) MQPS each).

§ DRAM servers (O(10) MQPS each) + a programmable-switch cache (O(1) BQPS each).

Case Study: Switch-based distributed caching

[Figure: client clusters and Redis storage clusters, connected through programmable switches]

1. Client sends a query.
2. The client-side switch decides which cache node to access (power of two choices).
3. On a cache hit, the cache switch replies to the query immediately.

Case Study: Switch-based distributed caching (cache miss)

1. Client sends a query.
2. The client-side switch decides which cache node to access (power of two choices).
3. On a cache miss, the query is forwarded to the storage server.
4. The server passes the query to Redis and replies.

On a cache miss, the query is handled by the server.
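The hit/miss paths can be sketched as follows (hypothetical helper names; the real data path runs in switch hardware and Redis):

```python
def handle_get(key, cache_switch, storage):
    """Serve a GET: reply from the chosen cache switch on a hit,
    otherwise fall through to the Redis-backed storage server."""
    if key in cache_switch:             # step 3: cache hit -> switch replies
        return cache_switch[key], "hit"
    return storage[key], "miss"         # steps 3-4: miss -> server + Redis

cache = {"A": 1}                        # items on the chosen cache switch
store = {"A": 1, "Z": 26}               # full data in the storage cluster
print(handle_get("A", cache, store))    # hit, answered by the switch
print(handle_get("Z", cache, store))    # miss, answered by the server
```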

Implementation Overview

[Figure: system components — a central controller (cache management, network management); cache switches (query routing, key-value cache, heavy-hitter detector); client-side switches (query routing, query-load statistics); clients; storage servers]

P4: Programmable Protocol-Independent Packet Processing

User-defined packet format: ETH | IP | TCP | SEQ | OP | KEY | VALUE
(the existing packet headers, followed by the packet header for caching)

[Figure: P4 pipeline — Parser → Match-Action Tables → Deparser, with header/metadata kept in shared memory]

A GET query walks through the match-action tables:

ETH | IP | TCP | 1 | GET | A | NULL

Match: OP == GET → Action: Get_Load++
Match: KEY == A & valid → Action: get the value of A
Match: the value of A is fetched → Action: update it into the header

ETH | IP | TCP | 1 | GET | A | V(A)

Evaluation Setup

§ Baselines: NoCache, Cache-Partition, Cache-Replication.

[Figure: testbed — two physical servers emulate the storage servers, lower-layer cache nodes, upper-layer cache nodes, client-side switches, and clients, connected by two 6.5 Tbps Barefoot Tofino switches]

Evaluation Takeaways

§ For read queries, DistCache works as well as Cache-Replication.

§ For write queries, DistCache performs significantly better:
o When the write ratio is below 0.3: higher throughput.
o When the write ratio is above 0.3: as good as Cache-Partition.

DistCache balances the loads of different clusters


DistCache offers nearly perfect throughput for skewed workloads

DistCache scales linearly with the number of nodes

DistCache can support very large storage clusters.

[Figure: normalized throughput vs. number of storage nodes (0–4096) for DistCache, Cache-Replication, Cache-Partition, and NoCache; DistCache’s throughput grows linearly]

DistCache incurs small cache coherence cost

[Figure: normalized throughput vs. write ratio (0.0–1.0) for DistCache, Cache-Replication, Cache-Partition, and NoCache]

Under a Zipf-0.99 workload, DistCache offers the best write throughput.

Conclusions

§ DistCache is a general distributed caching mechanism that ensures load balancing across many storage clusters.

§ DistCache requires only simple primitives (independent hashing, power-of-two-choices routing).

§ DistCache provides near-perfect throughput with rigorous theoretical guarantees.