
Page 1

Temperature Aware Workload Management in

Geo-distributed Datacenters

Hong (Henry) Xu, Chen Feng, Baochun Li

Department of Electrical and Computer Engineering, University of Toronto

USENIX ICAC, San Jose, CA. June 28, 2013

Page 2

Geo-distributed datacenters


Source: Google

Page 3

Request routing


Data is replicated across the wide area.
How to route requests to datacenters?

Page 4

Prior work


Price aware request routing:
A. Qureshi et al., Cutting the Electricity Bill for Internet-scale Systems. SIGCOMM, 2009
Z. Liu et al., Greening Geographical Load Balancing. SIGMETRICS, 2011

[Example electricity prices at four locations: $35.83, $52.64, $44.20, $61.55.]

Price in $USD/MWh. Source: Federal Energy Regulatory Commission, etc.

Page 5

Two missing aspects...

Page 6

Cooling system

Temperature   | Cooling mode | PUE
35 C (95 F)   | Mechanical   | 1.30
21.1 C (70 F) | Mechanical   | 1.21
15.6 C (60 F) | Mixed        | 1.17
10 C (50 F)   | Outside air  | 1.10
-3.9 C (25 F) | Outside air  | 1.05

Source: Emerson® Liebert DSE™ cooling system with an EconoPhase air-side economizer


Prior work treats cooling energy efficiency (PUE) as a constant.
In reality, cooling energy efficiency is NOT a constant.

Page 7

Temperature diversity

Selected Google DC locations. Source: National Climate Data Center

[Figure: outside temperature traces (°C). Annual profiles (Jan through Dec) for The Dalles, OR; Hamina, Finland; and Quilicura, Chile. One-week profiles (Apr 16 through Apr 22) for Council Bluffs, IA; Dublin, Ireland; and Tseung Kwan, Hong Kong.]

Page 8

First idea

[Map of datacenters with example outside temperatures: 3 °C, 20 °C, 10 °C, 25 °C.]

Route more requests to cooler locations to reduce energy consumption and cost.

Page 9

Second idea

[Diagram: user requests are served by the interactive workload; the batch workload is delay-tolerant and resource elastic; the capacity allocation between the two is fixed.]

At cooling efficient locations, allocate more capacity to interactive workload.

Page 10

This work


Temperature aware workload management

1. System model and formulation

2. A distributed optimization algorithm (ADMM)

3. Trace-driven simulations

Page 11

System model [1/2]

                | In our model                             | In reality
User            | A unique IP prefix                       | Common practice, e.g. Akamai [34] ✓
Request traffic | Arbitrarily splittable among datacenters | Common practice, e.g. DNS, HTTP proxies [17, 29, 34] ✓
Time scale      | Hourly optimization                      | Common practice: traffic predictable, electricity price known [28, 32, 34] ✓

Page 12

System model [2/2]

[Paper excerpt shown on this slide:]

... in order to dynamically adjust datacenter operations to the ambient conditions, and to save the overall energy costs.

3. MODEL

In this section, we introduce our model first and then formulate the temperature aware workload management problem of joint request routing and capacity allocation.

3.1 System Model

We consider a discrete time model where the length of a time slot matches the time scale at which request routing and capacity allocation decisions are made, e.g., hourly. The joint optimization is periodically solved at each time slot. We therefore focus only on a single time slot.

We consider a provider that runs a set of datacenters $\mathcal{J}$ in distinct geographical regions. Each datacenter $j \in \mathcal{J}$ has a fixed capacity $C_j$ in terms of the number of servers. To model datacenter operating costs, we consider both the energy cost and utility loss of request routing and capacity allocation, which are detailed below.

3.2 Energy Cost and Cooling Efficiency

We focus on servers and the cooling system in our energy cost model. Other energy consumers, such as network switches, power distribution systems, etc., have constant power draw independent of workloads [15] and are not relevant.

For servers, we adopt the empirical model from [15] that calculates the individual server power consumption as an affine function of CPU utilization, $P_{\text{idle}} + (P_{\text{peak}} - P_{\text{idle}})u$, where $P_{\text{idle}}$ is the server power when idle, $P_{\text{peak}}$ is the server power when fully utilized, and $u$ is the CPU load. This model is especially accurate for calculating the aggregated power of a large number of servers [15]. Thus, assuming workloads are perfectly dispatched and servers have a uniform utilization as a result, the server power of datacenter $j$ can be modeled as $C_j P_{\text{idle}} + (P_{\text{peak}} - P_{\text{idle}}) W_j$, where $W_j$ denotes the total workload in terms of the number of servers required.

For the cooling system, we take an empirical approach based on production CRACs to model its energy consumption. We choose not to rely on simplifying models for the individual components of a CRAC and their interactions [40], because of the difficulty involved in and the inaccuracy resulting from the process, especially for hybrid CRACs with both outside air and mechanical cooling. Therefore, we study CRACs as a black box, with outside temperature as the input, and its overall energy efficiency as the output.

Specifically, we use partial PUE (pPUE) to measure the CRAC energy efficiency. As in Sec. 2.2, pPUE is defined as

$$\text{pPUE} = \frac{\text{Server power} + \text{Cooling power}}{\text{Server power}}.$$

A smaller value indicates a more energy efficient system. We apply regression techniques to the empirical pPUE data of the Emerson CRAC [14] introduced in Table 1. We find that the best fitting model describes pPUE as a quadratic function of the outside temperature:

$$\text{pPUE}(T) = 7.1705 \times 10^{-5}\, T^2 + 0.0041\, T + 1.0743.$$

(Figure 3: Model fitting of pPUE as a function of the outside temperature $T$ for Emerson's DSE™ CRAC [14]. Small circles denote empirical data points.)

The model can be calibrated given more data from measurements. For the purpose of this paper, our approach yields a tractable model that captures the overall CRAC efficiency for the entire spectrum of its operating modes. Our model is also useful for future studies on datacenter cooling energy.

Given the outside temperature $T_j$, the total datacenter energy as a function of the workload $W_j$ can be expressed as

$$E_j(W_j) = \left(C_j P_{\text{idle}} + (P_{\text{peak}} - P_{\text{idle}}) W_j\right) \cdot \text{pPUE}(T_j). \qquad (1)$$

Here we implicitly assume that $T_j$ is known a priori and do not include it as the function variable. This is valid since short-term weather forecast is fairly accurate and accessible.

A datacenter's electricity price is denoted as $P_j$. The price may additionally incorporate the environmental cost of generating electricity [17], which we do not consider here. In reality, electricity can be purchased from local day-ahead or hour-ahead forward markets at a pre-determined price [34]. Thus, we assume that $P_j$ is known a priori and remains fixed for the duration of a time slot. The total energy cost, including server and cooling power, is simply $P_j E_j(W_j)$.

3.3 Utility Loss

Request routing. The concept of utility loss captures the lost revenue due to the user-perceived latency for request routing decisions. Latency is arguably the most important performance metric for most interactive services. A small increase in the user-perceived latency can cause substantial revenue loss for the provider [25]. We focus on the end-to-end propagation latency, which largely accounts for the user-perceived latency compared to other factors such as request processing times at datacenters [31]. The provider obtains the propagation latency $L_{ij}$ between user $i$ and datacenter $j$ through active measurements [30] or other means.

We use $\alpha_{ij}$ to denote the volume of requests routed to datacenter $j$ from user $i \in \mathcal{I}$, and $D_i$ to denote the demand of each user, which can be predicted using machine learning [28, 32]. Here, a user is an aggregated group of customers from a common geographical region, which may be identified by a unique IP prefix. The lost revenue from user $i$ then depends on the average propagation latency $\sum_j \alpha_{ij} L_{ij} / D_i$ through a generic delay utility loss function $U_i$. $U_i$ can take various forms depending on the interactive service.

Energy cost at datacenter $j$: $P_j \cdot E_j(W_j) = P_j \left(C_j P_{\text{idle}} + (P_{\text{peak}} - P_{\text{idle}}) W_j\right) \cdot \text{pPUE}(T_j)$

Server power term: Google cluster measurements [15]. Cooling term: our empirical pPUE data, fitted as $\text{pPUE}(T) = 7.1705 \times 10^{-5}\, T^2 + 0.0041\, T + 1.0743$ for outside temperature $T$. $P_j$: electricity price at datacenter $j$.

(A small numerical sketch of this cost model follows.)
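As a quick sanity check on this model, here is a minimal Python sketch that evaluates the fitted pPUE curve and the resulting hourly energy cost for one datacenter. The idle/peak server power figures and the electricity price are illustrative assumptions, not numbers from the talk or paper.

```python
# Minimal sketch of the per-datacenter energy cost model, using the quadratic
# pPUE fit above. The Pidle, Ppeak, and price values are illustrative assumptions.

def ppue(temp_c: float) -> float:
    """Partial PUE as a quadratic function of outside temperature (deg C)."""
    return 7.1705e-5 * temp_c ** 2 + 0.0041 * temp_c + 1.0743

def hourly_energy_cost(workload_servers: float, capacity: float, temp_c: float,
                       price_per_kwh: float, p_idle_kw: float = 0.1,
                       p_peak_kw: float = 0.2) -> float:
    """P_j * E_j(W_j), with E_j(W_j) = (C_j*Pidle + (Ppeak - Pidle)*W_j) * pPUE(T_j)."""
    server_kw = capacity * p_idle_kw + (p_peak_kw - p_idle_kw) * workload_servers
    return price_per_kwh * server_kw * ppue(temp_c)  # energy over a one-hour slot

# Example: 50,000-server datacenter, 30,000 servers' worth of load, 10 C outside.
print(hourly_energy_cost(30_000, 50_000, temp_c=10.0, price_per_kwh=0.05))
```

With this fit, serving the same workload at -5 °C is about 19% cheaper than at 35 °C purely through the pPUE term, which is the gap temperature aware routing tries to exploit.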

Page 13

Formulation

$\alpha_{ij}$: interactive requests routed from user $i$ to datacenter $j$
$\beta_j$: batch workload at datacenter $j$

Objective: energy cost + utility loss (revenue loss due to performance). Latency drives the utility loss; the constraints are workload conservation and capacity.

$$\begin{aligned}
\min_{\alpha,\,\beta\,\succeq\,0}\quad & \sum_j E_j\Big(\sum_i \alpha_{ij}\Big) + \sum_i U_i\big(L(\alpha_i)\big) + \sum_j E_j(\beta_j) + \sum_j V_j(\beta_j)\\
\text{s.t.}\quad & \forall i:\ \sum_j \alpha_{ij} = D_i,\\
& \forall j:\ \sum_i \alpha_{ij} + \beta_j \le C_j.
\end{aligned}$$

(A toy CVXPY sketch of this program follows.)
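To make the formulation concrete, here is a small CVXPY sketch with toy data. The quadratic form of $U_i$, the linear form of $V_j$, the server power numbers, and all prices are illustrative assumptions rather than the paper's choices.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
I, J = 4, 3                                   # users (IP prefixes) and datacenters
D = rng.uniform(100, 200, I)                  # request demand per user
C = np.full(J, 400.0)                         # datacenter capacities (servers)
L = rng.uniform(10, 100, (I, J))              # propagation latency (ms)
T = np.array([5.0, 15.0, 30.0])               # outside temperature per datacenter (C)
price = np.array([40.0, 50.0, 60.0]) / 1000   # assumed electricity price per kWh
ppue = 7.1705e-5 * T ** 2 + 0.0041 * T + 1.0743
p_idle, p_peak = 0.1, 0.2                     # assumed kW per server, idle/peak

alpha = cp.Variable((I, J), nonneg=True)      # interactive requests routed i -> j
beta = cp.Variable(J, nonneg=True)            # batch workload per datacenter

def energy_cost(load, j):                     # P_j * E_j(load) for datacenter j
    return price[j] * (C[j] * p_idle + (p_peak - p_idle) * load) * ppue[j]

avg_latency = cp.multiply(alpha, L) @ np.ones(J) / D   # sum_j alpha_ij L_ij / D_i
utility_loss = cp.sum(cp.square(avg_latency))          # assumed U_i: quadratic
batch_loss = cp.sum(cp.multiply(price, C - beta))      # assumed V_j: linear in unused capacity

objective = cp.Minimize(
    sum(energy_cost(cp.sum(alpha[:, j]), j) + energy_cost(beta[j], j) for j in range(J))
    + utility_loss + batch_loss)
constraints = [cp.sum(alpha, axis=1) == D,             # workload conservation
               cp.sum(alpha, axis=0) + beta <= C]      # capacity constraint
cp.Problem(objective, constraints).solve()
print("alpha:", np.round(alpha.value, 1), "\nbeta:", np.round(beta.value, 1))
```

A generic solver is fine at this toy scale, but it will not scale to the O(10^7) variables of the real problem, which is why the distributed algorithm discussed next is needed.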

Page 14

Challenges


Convex optimization

O(10^5) IP prefixes, O(10^7) variables, O(10^5) constraints

Large-scale problems

Distributed optimization algorithms

Dual decomposition with subgradient methods

Two drawbacks (see the toy sketch below):

Delicate step size adjustment

Very slow convergence
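As a toy illustration of those drawbacks (not the paper's problem), the sketch below runs dual decomposition with a diminishing subgradient step size; the per-block subproblems are trivial, but convergence speed depends entirely on how that step size sequence is tuned.

```python
import numpy as np

# Toy dual decomposition: minimize sum_i 0.5*||x_i - t_i||^2  s.t.  sum_i x_i = c.
# Each x_i-subproblem is solved independently given the dual variable lam, then
# lam is updated with a subgradient step whose size must be hand-tuned.
rng = np.random.default_rng(1)
n_blocks, dim = 3, 4
targets = rng.normal(size=(n_blocks, dim))
c = np.ones(dim)

lam = np.zeros(dim)
for k in range(1, 201):
    # Per-block subproblems: argmin_x 0.5*||x - t_i||^2 + lam @ x  ->  x = t_i - lam
    x = targets - lam
    violation = x.sum(axis=0) - c            # subgradient of the dual function
    step = 1.0 / k                           # diminishing step size (delicate choice)
    lam = lam + step * violation
print("constraint violation:", np.linalg.norm(violation))
```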

Page 15

ADMM


Alternating Direction Method of Multipliers[S. Boyd et al., 2011]

Fast convergence for large-scale distributed convex optimization in data mining and machine learning

Limitation: It only works for problems with two sets of variables linked by an equality constraint (the standard two-block iteration is shown below)

Does NOT work for our problem
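For reference, this is the standard two-block ADMM iteration from Boyd et al. (2011) for minimizing $f(x) + g(z)$ subject to $Ax + Bz = c$; the limitation noted above is that the scheme is stated for exactly these two blocks.

```latex
% Standard two-block ADMM (Boyd et al., 2011), augmented Lagrangian parameter rho > 0,
% dual variable y, for: minimize f(x) + g(z) subject to Ax + Bz = c.
\begin{aligned}
  x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x) + \tfrac{\rho}{2}\bigl\lVert Ax + Bz^{k} - c + y^{k}/\rho \bigr\rVert_2^2,\\
  z^{k+1} &= \operatorname*{arg\,min}_{z}\; g(z) + \tfrac{\rho}{2}\bigl\lVert Ax^{k+1} + Bz - c + y^{k}/\rho \bigr\rVert_2^2,\\
  y^{k+1} &= y^{k} + \rho\,\bigl(Ax^{k+1} + Bz^{k+1} - c\bigr).
\end{aligned}
```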

Page 16

Generalized ADMM

[Block diagram of one Generalized ADMM iteration: three blocks are minimized sequentially, then a dual update closes the iteration. Block 1: minimize utility loss for the interactive workload (per-user sub-problems). Block 2: minimize energy cost for the interactive workload (per-DC sub-problems). Block 3: minimize total cost for the batch workload (per-DC sub-problems). The blocks exchange the iterates $a^k$, $\alpha^k$, $\beta^k$ and are coupled through quadratic penalty terms penalty($\alpha^k, a^{k-1}$), penalty($a^k, \alpha^k$), and penalty($\beta^k, \alpha^k$).]

(A toy sketch of this update pattern follows.)
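The following Python sketch mimics this structure on a toy three-block quadratic problem: sequential (Gauss-Seidel) block minimizations of the augmented Lagrangian, each penalized toward the other blocks' latest iterates, followed by a dual update. The objective and coupling constraint are stand-ins, not the paper's per-user and per-DC sub-problems.

```python
import numpy as np

# Toy three-block generalized ADMM:
#   minimize sum_i 0.5*||x_i - t_i||^2   subject to   x_1 + x_2 + x_3 = c.
# Blocks are updated one after another (Gauss-Seidel) against the augmented
# Lagrangian, then the dual variable is updated, mirroring the pattern of
# sequential sub-problem solves plus a dual update shown above.
rng = np.random.default_rng(2)
dim, rho = 4, 1.0
t = rng.normal(size=(3, dim))                # per-block targets (toy data)
c = np.ones(dim)
x = np.zeros((3, dim))                       # the three variable blocks
lam = np.zeros(dim)                          # dual variable for the coupling constraint

for k in range(100):
    for i in range(3):
        # argmin_x 0.5*||x - t_i||^2 + rho/2*||x + s + lam/rho||^2,
        # where s is the sum of the other (most recently updated) blocks minus c.
        s = x.sum(axis=0) - x[i] - c
        x[i] = (t[i] - rho * s - lam) / (1.0 + rho)
    lam = lam + rho * (x.sum(axis=0) - c)    # dual update

print("residual:", np.linalg.norm(x.sum(axis=0) - c))
```

In the talk's setting, each of the three block minimizations further decomposes into independent per-user or per-DC sub-problems that can be solved in parallel.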

Page 17

Convergence


It works for problems with any number of variable sets.

Theorem: Generalized ADMM converges to the optimal solution.

Applicable to problems in other domains.

Page 18

Evaluation: Setup


Google DC locations, Wikipedia request traces, empirical temperature, latency, and electricity price data

Benchmarks:

  • Joint opt: our work

  • Baseline: state-of-the-art, no temperature aware request routing, no capacity allocation

  • Cooling optimized

  • Capacity optimized

Page 19

Benefits breakdown

[Figure: cooling energy savings (left) and utility loss reductions (right) over a day (0:00 to 20:00), comparing Joint opt, Capacity optimized, and Cooling optimized.]

Page 20

Overall improvement

[Figure: total cost savings (%) by hour of the day, for January, May, and August.]

Result: 5%-20% total cost savings, consistent across seasons

Page 21

Convergence

[Figure: primal residual vs. iteration for Generalized ADMM and the subgradient method.]

Result: Generalized ADMM converges much faster than existing algorithms.

Page 22

Related work

  • Workload management in geo-distributed DCs

• A. Qureshi et al., Cutting the Electricity Bill for Internet-scale Systems. SIGCOMM, 2009

• Z. Liu et al., Greening Geographical Load Balancing. SIGMETRICS, 2011

• Gao et al., It’s not easy being green. SIGCOMM, 2012

• ADMM

• Han et al., A note on the alternating direction method of multipliers. J. Optim. Theory Appl. 155:227-238, 2012

• Hong et al., On the linear convergence of the alternating direction method of multipliers. arXiv, August 2012


Page 23

Thank you!

Google “Henry Xu”
