
Proteus: Power Proportional Memory Cache Cluster in Data Centers

Shen Li, Shiguang Wang, Fan Yang, Shaohan Hu, Fatemeh Saremi, Tarek Abdelzaher

Background

[Figure: A Typical Use Case of a Memcached Cluster — Web Server Cluster, Memcached Server Cluster, and Database Server Cluster. Request flow: 1. request; 2. try cache; 3. miss or hit; 4. try DB; 5. data; 6. update; 7. response.]
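The read path in the figure follows the standard cache-aside pattern used with memcached. A minimal sketch, with plain dicts standing in for the cache and database tiers (names are illustrative, not from the paper):

```python
def cache_aside_get(key, cache, db):
    """Read path of a typical memcached deployment: try the cache first,
    fall back to the database on a miss, then populate the cache."""
    value = cache.get(key)        # 2-3. try cache: hit or miss
    if value is None:
        value = db[key]           # 4-5. on a miss, fetch from the database
        cache[key] = value        # 6. update the cache for future requests
    return value                  # 7. response
```

On a hit, the database steps are skipped entirely, which is why losing cached data during provisioning transitions carries a delay penalty.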

Background

• Dynamic provisioning saves energy by turning off servers when service-level measurements (e.g., response time, number of replicas) allow.

• Existing solutions for web servers and databases/DFS do not fit cache clusters due to delay penalties.

Outline

• Load Balance Under Dynamics

• Smooth Transition

• Implementation and Evaluation

Objectives

• Balance load distribution in cache tier under dynamics.

• Minimize data movements during re-balancing transitions.

[Figure: hash ring placement of virtual nodes for mc1–mc3; legend: virtual node, memcache server.]

Load Balance Under Dynamics

• Methodology:
  ₋ Consistent hashing.
  ₋ Virtual nodes.


Load Balance Under Dynamics

• Questions:

1. In order to achieve our objectives, what is the minimum possible number of virtual nodes? (Servers are provisioned off in decreasing order of mc id.)

2. How can such a consistent hashing ring be constructed?

Load Balance Under Dynamics

• Answer 1: 1 + N(N−1)/2 virtual nodes, where N is the number of cache servers.

• Proof sketch: Let V_i denote the virtual node set of cache server mc_i. Since servers are provisioned off in decreasing order of id, when mc_i turns off, servers mc_1, ..., mc_{i−1} take over its host range. Each virtual node's range falls to a single successor, so for mc_i's host range to be evenly divided among the other i − 1 servers, |V_i| ≥ i − 1 for i ≥ 2; mc_1 needs at least one virtual node. Summing gives 1 + Σ_{i=2..N} (i − 1) = 1 + N(N−1)/2.

Then, we constructively show it is possible to achieve this lower bound.

Load Balance Under Dynamics

• Answer 2: (assume the length of the hash ring is 1)

For i ← 2 to N:
    For j ← 1 to i−1:
        Pick any virtual node of server j whose host range is larger than 1/[i(i−1)];
        Insert a virtual node of server i in front of that virtual node of server j, such that exactly 1/[i(i−1)] of host range is moved from server j to server i.

• Please refer to the paper for proof of correctness and complexity analysis.
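The construction can be sketched in Python. The per-donor amount 1/(i−1) − 1/i = 1/[i(i−1)] is exactly what each existing server must give up so that every prefix of servers stays evenly loaded; `Fraction` keeps the arithmetic exact. A sketch under these assumptions, not the paper's implementation:

```python
from fractions import Fraction

def build_ring(N):
    """Construct a hash ring (length 1) with 1 + N(N-1)/2 virtual nodes so
    that turning servers off in decreasing id order keeps load balanced.
    A virtual node at position p owns the arc from the previous node's
    position up to p (clockwise)."""
    ring = [(Fraction(1), 1)]             # server 1 initially owns the whole ring
    for i in range(2, N + 1):
        delta = Fraction(1, i * (i - 1))  # host range moved from each server j
        for j in range(1, i):
            for idx, (pos, owner) in enumerate(ring):
                prev = ring[idx - 1][0] if idx > 0 else Fraction(0)
                if owner == j and pos - prev > delta:
                    # insert server i's node in front of server j's node,
                    # taking exactly delta of its host range
                    ring.insert(idx, (prev + delta, i))
                    break
    return ring

def loads(ring, m):
    """Per-server load when only servers 1..m are on: an off server's arc
    falls to the clockwise successor node whose owner is still on."""
    load = {s: Fraction(0) for s in range(1, m + 1)}
    n = len(ring)
    for idx, (pos, _) in enumerate(ring):
        prev = ring[idx - 1][0] if idx > 0 else Fraction(0)
        t = idx
        while ring[t % n][1] > m:         # walk past off servers
            t += 1
        load[ring[t % n][1]] += pos - prev
    return load
```

For N = 4 this yields 7 virtual nodes (the lower bound 1 + N(N−1)/2), and `loads(ring, m)` returns 1/m per server for every m ≤ N.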

[Figure: an example construction for N = 4 (mc1–mc4) on a ring of length 1. Virtual node host ranges across the insertion steps include 1, 1/2, 1/3, 1/4, 1/6, and 1/12.]

Outline

• Load Balance Under Dynamics

• Smooth Transition

• Implementation and Evaluation

Smooth Transition Objectives

• Move only hot in-cache data.
  ₋ A piece of data is “hot” if it has been requested by any user in the past TTL seconds.
  ₋ Even if a cache server is responsible for a relatively large key range, only hot in-cache data should be moved.

• Amortize data movement cost.

• Bounded transition delay.

Smooth Transition

• Methodology:

1. Each mc server maintains a Bloom Filter digest to record what is in cache.

2. Digests are sent to web servers before provisioning transitions.

3. For subsequent queries during the transition, web servers consult the local digest before querying mc servers.
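The digest lookup can be sketched with a minimal bit-array Bloom filter; SHA-256-derived positions are an illustrative choice, and the paper's counting-filter machinery for bounding false negatives is omitted:

```python
import hashlib

class BloomDigest:
    """A minimal Bloom filter digest recording which keys are in cache."""

    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, key):
        # derive k bit positions from SHA-256 of a salted key
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))
```

Because a plain Bloom filter has no false negatives, a web server that gets a negative answer from the locally stored digest can skip the mc server and go straight to the database, avoiding a wasted cache query during the transition.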

Bloom Filter Configuration

• False positive rate: a function of the filter size, the number of hash functions, and the number of inserted keys.

• Counter overflow probability (for the counting filter).

• Objective: minimize the Bloom filter size for a given false positive rate (p_p) and false negative rate (p_n).

Bloom Filter Configuration

Taking partial derivatives of the size objective shows that, under a condition that almost always holds in practice, the optimum is reached at the minimum possible value of the remaining parameter; the closed-form solution involves the Lambert W function. Please refer to the paper for the full derivation.
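The paper's optimization sizes the filter jointly for false positive and false negative rates. As a reference point only, the textbook sizing rule for a plain Bloom filter (false positives only) can be sketched as:

```python
import math

def bloom_params(n, p_fp):
    """Textbook Bloom filter sizing: bit count m and hash count k that hold
    the false positive rate at p_fp for n inserted keys."""
    m = math.ceil(-n * math.log(p_fp) / (math.log(2) ** 2))  # m = -n ln p / (ln 2)^2
    k = round(m / n * math.log(2))                           # k = (m/n) ln 2
    return m, k
```

For example, `bloom_params(10000, 0.01)` gives m = 95851 bits (about 11.7 KB) and k = 7 hash functions.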

Outline

• Load Balance Under Dynamics

• Smooth Transition

• Implementation and Evaluation

Implementation

Wikipedia Workload

Fluctuating workloads create opportunities for energy savings.

Response Time

• Naïve: re-map the modulo-based hash.

• Consistent: the original consistent hashing algorithm with n² randomly placed virtual nodes.

• Static: always keep all servers on.

Load Balance

The curves show the minimum load divided by the maximum load among all mc servers under different schemes.

Cache Size vs Hit Ratio

Memory allocated for each mc server.

Energy Saving

The energy consumption of the cache cluster is reduced by 23%.