Princeton University
Performance Isolation and Fairness for Multi-Tenant Cloud Storage
David Shue*, Michael Freedman*, and Anees Shaikh✦
*Princeton ✦IBM Research
Setting: Shared Storage in the Cloud
2
[Figure: tenants Z, Y, T, and F access shared cloud services (S3, EBS, SQS) backed by shared key-value storage spread across data nodes]
Predictable Performance is Hard
4
[Figure: requests from tenants Z, Y, T, and F contend inside the shared key-value store]
Multiple co-located tenants ⇒ resource contention
Fair queuing @ big iron
Predictable Performance is Hard
7
[Figure: the Z, Y, T, and F keyspaces are split by object popularity into data partitions spread across storage nodes (SS)]
Multiple co-located tenants ⇒ resource contention
Distributed system ⇒ distributed resource allocation
Skewed object popularity ⇒ variable per-node demand
1kB GET (large reads), 10B GET (small reads), 1kB SET (large writes), 10B SET (small writes)
Disparate workloads ⇒ different bottleneck resources
Tenants Want System-wide Resource Guarantees
8
Tenants: Zynga, Yelp, Foursquare, TP
[Figure: shared key-value storage across four nodes, with per-node rates of 80, 120, 160, and 40 kreq/s]
demand_z = 120 kreq/s
demand_f = 120 kreq/s
Pisces Provides Weighted Fair Shares
9
w_z = 20%, w_y = 30%, w_f = 40%, w_t = 10%
demand_z = 30%
[Figure: weighted shares enforced for Z, Y, T, and F across the shared storage nodes]
Pisces: Predictable Shared Cloud Storage
10
•Pisces
- Per-tenant max-min fair shares of system-wide resources ~ min guarantees, high utilization (sketched below)
- Arbitrary object popularity
- Different resource bottlenecks
•Amazon DynamoDB
- Per-tenant provisioned rates ~ rate limited, non-work conserving
- Uniform object popularity
- Single resource (1kB requests)
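To make the max-min model concrete, here is a minimal water-filling sketch, in plain Python, of weighted max-min fair shares with demand capping. It is illustrative only, not Pisces's controller code: the tenant letters and weights echo the earlier slides, while the demand values are made up. Tenants receive capacity in proportion to weight; any tenant capped at its demand returns the surplus, so minimum guarantees coexist with high utilization.

```python
def weighted_max_min(capacity, weights, demands):
    """Water-filling: split `capacity` in proportion to `weights`,
    capping each tenant at its demand and redistributing surplus."""
    alloc = {t: 0.0 for t in weights}
    active = set(weights)
    remaining = capacity
    while active and remaining > 1e-12:
        total_w = sum(weights[t] for t in active)
        # Tenants whose proportional share would exceed their demand
        # are capped; their unused share is recycled next round.
        capped = {t for t in active
                  if remaining * weights[t] / total_w >= demands[t] - alloc[t]}
        if not capped:
            for t in active:
                alloc[t] += remaining * weights[t] / total_w
            break
        for t in capped:
            remaining -= demands[t] - alloc[t]
            alloc[t] = demands[t]
        active -= capped
    return alloc

# Weights from the slides (Z/Y/T/F = 20/30/10/40%); demands are
# hypothetical. Y underuses its share, so Z exceeds its 20% floor.
shares = weighted_max_min(
    1.0,
    {"Z": 0.20, "Y": 0.30, "T": 0.10, "F": 0.40},
    {"Z": 0.30, "Y": 0.15, "T": 1.00, "F": 1.00})
print(shares)  # Z ~ 0.243, Y = 0.15, T ~ 0.121, F ~ 0.486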
Predictable Multi-Tenant Key-Value Storage
11
[Figure: Tenant A and Tenant B VMs issue requests (e.g. GET 1101100) through request routers (RR) to storage nodes, coordinated by a controller; Pisces adds partition placement (PP), replica selection (RS), and fair queuing (FQ)]
Predictable Multi-Tenant Key-Value Storage
12
[Figure: the controller splits each tenant's global weight (Weight_A, Weight_B) into per-node local weights: w_A1, w_B1 on one node and w_A2, w_B2 on the other]
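The weight model on this slide is simple enough to state in code. A hedged sketch, with a made-up dict layout and numbers: each tenant's global weight is partitioned into per-node local weights, and the mechanisms that follow exist to keep that split matched to where each tenant's demand actually lands.

```python
# Illustrative weight split: local weights per node must sum to the
# tenant's global weight (numbers are hypothetical).
global_weight = {"A": 0.5, "B": 0.5}
local_weight = {                      # node -> {tenant: local weight}
    "node1": {"A": 0.3, "B": 0.2},
    "node2": {"A": 0.2, "B": 0.3},
}

# Invariant the controller maintains whenever it shifts weight around.
for tenant, w in global_weight.items():
    total = sum(per_node[tenant] for per_node in local_weight.values())
    assert abs(total - w) < 1e-9
```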
Strawman: Place Partitions Randomly
14
[Figure: random placement concentrates too much tenant demand on one storage node, leaving it overloaded]
Pisces: Place Partitions By Fairness Constraints
16
Collect per-partition tenant demand
Bin-pack partitions
Results in feasible partition placement (see the sketch below)
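A minimal sketch of the placement step, assuming measured per-partition demand and uniform node capacity. The real controller solves a bin-packing problem under fairness and capacity constraints; the greedy heuristic below (heaviest partitions first onto the least-loaded node) only illustrates the idea.

```python
def place_partitions(partition_demand, nodes, node_capacity):
    """Greedy bin packing: heaviest partitions first, each placed on
    the currently least-loaded node. Raises if demand cannot fit."""
    load = {n: 0.0 for n in nodes}
    placement = {n: [] for n in nodes}
    for pid, demand in sorted(partition_demand.items(),
                              key=lambda kv: kv[1], reverse=True):
        node = min(nodes, key=load.get)          # least-loaded node
        if load[node] + demand > node_capacity:
            raise ValueError(f"no feasible placement for {pid!r}")
        load[node] += demand
        placement[node].append(pid)
    return placement

# Skewed (e.g. Zipfian) partition demand still packs evenly:
print(place_partitions({"p1": 50, "p2": 30, "p3": 10, "p4": 8, "p5": 2},
                       ["ss1", "ss2"], node_capacity=60))
```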
Strawman: Allocate Local Weights Evenly
17
w_A1 = w_B1, w_A2 = w_B2
[Figure: even local weights ignore uneven per-node tenant demand, leaving one node overloaded]
Pisces: Allocate Local Weights By Tenant Demand
18
Compute per-tenant +/- mismatch between demand and local weight; fix the max mismatch first
Reciprocal weight swap: A←B on one node (yielding w_A1 > w_B1) and A→B on the other (yielding w_A2 < w_B2); a sketch follows below
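A hedged sketch of the swap step; the tenant and node names and the fixed step size are illustrative, and the paper computes swaps via a maximum bottleneck flow over all tenants and nodes rather than this pairwise rule. When A is under-weighted where its demand is high and B is under-weighted where A's demand is low, trading weight on both nodes fixes both mismatches without changing either tenant's global weight.

```python
def reciprocal_swap(local_w, demand, a, b, n1, n2, step=0.05):
    """Shift weight B->A on node n1 and A->B on node n2 when each
    tenant's demand exceeds its local weight on opposite nodes.
    `local_w` and `demand` are dicts: node -> tenant -> fraction."""
    def mismatch(node, tenant):                  # >0: under-weighted
        return demand[node][tenant] - local_w[node][tenant]

    if mismatch(n1, a) > 0 and mismatch(n2, b) > 0:
        delta = min(step, mismatch(n1, a), mismatch(n2, b),
                    local_w[n1][b], local_w[n2][a])
        local_w[n1][a] += delta                  # A <- B on n1
        local_w[n1][b] -= delta
        local_w[n2][b] += delta                  # A -> B on n2
        local_w[n2][a] -= delta

# Tenant A is hot on node1, B is hot on node2; the swap moves weight
# toward demand while each tenant's global weight stays 0.5.
w = {"node1": {"A": 0.25, "B": 0.25}, "node2": {"A": 0.25, "B": 0.25}}
d = {"node1": {"A": 0.40, "B": 0.10}, "node2": {"A": 0.10, "B": 0.40}}
reciprocal_swap(w, d, "A", "B", "node1", "node2")
print(w)  # A gains weight on node1, B gains on node2
```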
Strawman: Select Replicas Evenly
19
[Figure: the request router splits GETs 50%/50% across replicas even though the local weights differ (w_A1 > w_B1, w_A2 < w_B2)]
Pisces: Select Replicas By Local Weight
20
Detect weight mismatch by request latency (sketched below)
[Figure: the router shifts the split to 60%/40%, matching the per-node local weights]
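A hedged sketch of latency-driven selection at a request router; the class and field names and the update rule are illustrative, since the slides say only that the real policy is FAST-TCP based. The router keeps an EWMA of per-replica latency and, like FAST TCP treating rising delay as congestion, shifts request share away from slower replicas until the split matches the local weights.

```python
import random

class ReplicaSelector:
    """Per-tenant replica choice at a request router (sketch)."""
    def __init__(self, replicas):
        self.share = {r: 1.0 / len(replicas) for r in replicas}
        self.latency = {r: None for r in replicas}     # EWMA, seconds

    def observe(self, replica, rtt, alpha=0.2):
        """Fold a measured request latency into the replica's EWMA."""
        prev = self.latency[replica]
        self.latency[replica] = rtt if prev is None else (
            (1 - alpha) * prev + alpha * rtt)

    def rebalance(self, gain=0.1):
        """Shift share toward replicas with below-average latency."""
        seen = {r: l for r, l in self.latency.items() if l is not None}
        if len(seen) < 2:
            return
        mean = sum(seen.values()) / len(seen)
        for r, lat in seen.items():
            self.share[r] *= 1.0 + gain * (mean - lat) / mean
        total = sum(self.share.values())
        for r in self.share:
            self.share[r] /= total                     # renormalize

    def pick(self):
        """Sample a replica in proportion to its current share."""
        return random.choices(list(self.share),
                              weights=list(self.share.values()))[0]

rs = ReplicaSelector(["node1", "node2"])
rs.observe("node1", 0.002)    # node1 lightly loaded
rs.observe("node2", 0.008)    # node2 queueing: weight mismatch
rs.rebalance()
print(rs.share)   # split drifts toward node1 (repeat to approach 60/40)
```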
Strawman: Queue Tenants By Single Resource
21
[Figure: tenant A is bandwidth limited, tenant B is request limited; each node tracks out-bytes and request consumption]
Fair share computed on a single bottleneck resource (out bytes)
Pisces: Queue Tenants By Dominant Resource
22
Track per-tenant resource vector
Dominant resource fair share (sketched below)
[Figure: the bandwidth-limited and request-limited tenants each get a fair share of their dominant resource]
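A hedged sketch of the scheduling rule; the capacities and field names are illustrative, and Pisces implements this as a DRR token-based DRFQ scheduler rather than the direct priority scan below. The node tracks each tenant's consumption vector, normalizes it by node capacity, and always serves the backlogged tenant with the smallest weighted dominant share.

```python
CAPACITY = {"req": 100_000, "out_bytes": 1_000_000_000}   # per node/s

class DominantResourceQueue:
    def __init__(self, weights):
        self.weights = weights                   # tenant -> weight
        self.used = {t: dict.fromkeys(CAPACITY, 0) for t in weights}
        self.queues = {t: [] for t in weights}   # FIFO per tenant

    def dominant_share(self, tenant):
        """Largest normalized resource use, scaled by weight."""
        share = max(self.used[tenant][r] / CAPACITY[r] for r in CAPACITY)
        return share / self.weights[tenant]

    def enqueue(self, tenant, size_bytes):
        self.queues[tenant].append(size_bytes)

    def dequeue(self):
        """Serve the backlogged tenant with the smallest dominant share."""
        backlogged = [t for t, q in self.queues.items() if q]
        if not backlogged:
            return None
        t = min(backlogged, key=self.dominant_share)
        size = self.queues[t].pop(0)
        self.used[t]["req"] += 1                 # charge both resources
        self.used[t]["out_bytes"] += size
        return t, size

q = DominantResourceQueue({"A": 1.0, "B": 1.0})
q.enqueue("A", 1000)   # A: 1kB responses (bandwidth heavy)
q.enqueue("B", 10)     # B: 10B responses (request heavy)
while (served := q.dequeue()):
    print(served)
```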
Pisces Mechanisms Solve For Global Fairness
23
Goal: dominant resource fair shares, system-wide
Mechanisms arranged by system visibility and timescale:
- PP Partition Placement: Controller (global visibility), minutes; fairness and capacity constraints
- WA Weight Allocation: Controller (global visibility), seconds; maximum bottleneck flow weight exchange
- RS Replica Selection: request routers (local visibility), microseconds; FAST-TCP based replica selection policies
- FQ Fair Queuing: storage nodes (local visibility), microseconds; DRR token-based DRFQ scheduler
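How the four mechanisms compose is mostly a matter of timescale, which a small sketch can make concrete. The function names and periods below are placeholders, not Pisces APIs: the controller re-solves placement on the order of minutes and weight allocation on the order of seconds, while replica selection and fair queuing run inline on the microsecond request path.

```python
import time

def controller_loop(place_partitions, allocate_weights,
                    pp_period=300.0, wa_period=1.0):
    """Global control plane: slow placement, faster weight exchange.
    RS and FQ are not called here; they run per-request inside the
    request routers and storage nodes."""
    last_pp = float("-inf")
    while True:
        now = time.monotonic()
        if now - last_pp >= pp_period:
            place_partitions()       # PP: global view, minutes
            last_pp = now
        allocate_weights()           # WA: global view, seconds
        time.sleep(wa_period)
```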
Evaluation
25
•Does Pisces achieve (even) system-wide fairness?
- Is each Pisces mechanism necessary for fairness?
- What is the overhead of using Pisces?
•Does Pisces handle mixed workloads?
•Does Pisces provide weighted system-wide fairness?
•Does Pisces provide local dominant resource fairness?
•Does Pisces handle dynamic demand?
•Does Pisces adapt to changes in object popularity?
Pisces Achieves System-wide Per-tenant Fairness
26
[Figure: per-tenant GET throughput (kreq/s) over 80 s, Unmodified Membase vs Pisces]
Setup: 8 tenants, 8 clients, 8 storage nodes; Zipfian object popularity distribution
Ideal fair share: 110 kreq/s (1kB requests)
Unmodified Membase: 0.57 MMR; Pisces: 0.98 MMR
Min-Max Ratio (MMR): min rate / max rate, in (0, 1]
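That MMR definition translates directly into code; a one-line helper (the rates in the example are illustrative) for reading the numbers in the next few slides:

```python
def min_max_ratio(rates):
    """Min-Max Ratio: min rate / max rate, in (0, 1]; 1.0 is ideal."""
    return min(rates) / max(rates)

print(min_max_ratio([108, 112, 110, 109]))   # ~0.96: near-even shares
```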
Each Pisces Mechanism Contributes to System-wide Fairness and Isolation
27
[Figure: per-tenant GET throughput (kreq/s) over time for each mechanism combination, under even demand and 2x vs 1x demand]
Unmodified Membase: 0.57 MMR (even demand), 0.36 MMR (2x vs 1x demand)
FQ only: 0.59 MMR (even), 0.58 MMR (2x vs 1x)
PP+FQ: 0.64 MMR (even), 0.74 MMR (2x vs 1x)
WA+PP+FQ: 0.93 MMR (even), 0.96 MMR (2x vs 1x)
RS+WA+PP+FQ (full Pisces): 0.98 MMR (even), 0.97 MMR (2x vs 1x)
(Two further panels show 0.90 and 0.89 MMR.)
Pisces Imposes Low Overhead
28
[Bar chart: aggregate system GET throughput (kreq/s, 0 to 3500) for 1kB and 10B requests, Unmodified Membase vs Pisces]
Throughput cost vs unmodified Membase: < 5% for 1kB requests, > 19% for 10B requests
Pisces Achieves System-wide Weighted Fairness
29
[Figure: four panels of per-tenant GET/SET throughput (kreq/s) over time]
Weight configurations: 1x/2x/3x/4x weights; 100x weight (4 tenants), 10x weight (20), 1x weight (40); 100x weight (10), 10x weight (40), 1x weight (50)
Tenant mix: 4 heavy hitters, 20 moderate demand, 40 low demand
Per-panel MMRs: 0.98, 0.89, 0.91, and 0.91, with one panel at 0.56
Pisces Achieves Dominant Resource Fairness
30
1kB workload: bandwidth limited; 10B workload: request limited
[Figure: per-tenant bandwidth (Mb/s) and GET throughput (kreq/s) over time]
Each tenant receives 76% of its dominant resource: the 1kB tenant gets 76% of bandwidth, the 10B tenant gets 76% of the request rate (leaving 24% of the request rate for the 1kB tenant)
Pisces Adapts to Dynamic Demand
31
Tenant demand patterns: constant, bursty, and diurnal (2x weight)
[Figure: per-tenant GET throughput (kreq/s) over 90 s; the diurnal tenant receives ~2x the rate of the others, which stay even]
Conclusion
32
•Pisces Contributions
- Per-tenant weighted max-min fair shares of system-wide resources w/ high utilization
- Arbitrary object distributions
- Different resource bottlenecks
- Novel decomposition into 4 complementary mechanisms: PP Partition Placement, WA Weight Allocation, RS Replica Selection, FQ Fair Queuing