1

Using Performance Models to Design Self-Configuring and Self-Optimizing

Computer Systems

Prof. Daniel Menascé
Department of Computer Science
E-Center for E-Business
George Mason University
Fairfax, VA, USA
Menasce@cs.gmu.edu
www.cs.gmu.edu/faculty/menasce.html

© 2004 D. A. Menascé. All Rights Reserved.

2

3

• Huge number of devices
• Huge number of data sources
• Many different data formats
• Heterogeneous devices
• Widely varying capacity
• Wired and wireless
• Widely varying QoS requirements

4

• Node failures
• Connectivity failures
• Security attacks
• Limited battery power

5

Characteristics of the new generation of distributed software systems

• Highly distributed
• Component-based (for reusability)
• Service-oriented architectures (SOA)
• Unattended operation
• Hostile environments
• Composed of a large number of “replaceable” components discovered at run-time
• Run on a multitude of (unknown and heterogeneous) hardware and network platforms

6

Requirements of the Next Generation of Large Distributed Systems

• Adaptable and self-configurable to changes in workload intensity:
  - QoS requirements at the application and component level must be met.
• Adaptable and self-configurable to withstand attacks and failures:
  - Availability and security requirements must be met.

self-configurable, self-optimizing, self-healing, and self-protecting

7

Important Technologies

• Web Services: SOAP, UDDI, WSDL
• Grid Computing
• Peer-to-Peer Networks
• Wireless Networking
• Sensor and ad-hoc networks

8

Challenges

• Dynamically changing application structure.
• Hard to characterize the workload:
  - unpredictable
  - dynamically changing services
  - application adaptation
• Difficult to build performance models:
  - moving target
• Multitude of QoS metrics at various levels of a distributed architecture:
  - response time, jitter, throughput, availability, survivability, recovery time after attack/failure, call drop rate, access failure rate, packet delay, packet drop rate.
• Tradeoffs between QoS metrics (response time vs. availability, response time vs. security)

9

Challenges (cont’d)

• Need to perform transient, critical-time (e.g., terrorist attack or catastrophic failure) analysis of QoS compliance. Steady-state analysis is not enough.
• Mapping of global SLAs to local SLAs:
  - Cost and pricing issues.
• QoS monitoring, negotiation, and enforcement.
• Platform-neutral representation of QoS goals and contracts.
• Resource management: resource reservation, resource allocation, admission control:
  - non-dedicated resources

10

What we need …

• Design self-regulating (autonomic) systems
• Embed within each system component:
  - Monitoring capabilities
  - Negotiation capabilities (requires predictive modeling power)
  - Self-protection and recovery capabilities (for attacks, failures, and overloads)
• Push more of the desired system features to individual components
• Design QoS-aware middleware:
  - QoS negotiation protocols
  - Mapping of global to local SLAs
  - QoS monitoring

11

Rest of this talk …

• Novel uses for performance models
• Two examples of self-regulating systems:
  - A three-tiered e-commerce system
  - QoS-aware software components
• Concluding Remarks


13

What are performance models good for?

• At the design stage:
  - Compare competing design alternatives: a large number of low-capacity servers vs. a small number of high-capacity servers?

14

Site Design

[Figure: two candidate site designs, each with a load balancer in front of a farm of web servers — many small servers or few large ones?]

15

What are performance models good for?

• At the design stage:
  - Compare competing design alternatives: a large number of low-capacity servers vs. a small number of high-capacity servers?
• During production:
  - Medium and long term (weeks and months): capacity planning.

16

Capacity Planning

[Figure: response time (sec) vs. load (sessions/sec) curves projected at 3, 6, and 9 months, plotted against a QoS goal line.]

17

What are performance models good for?

• At the design stage:
  - Compare competing design alternatives: a large number of low-capacity servers vs. a small number of high-capacity servers?
• During production:
  - Medium and long term (weeks and months): capacity planning.
  - Short term (minutes): dynamic reconfiguration.

18

Rest of this talk …

• Novel uses for performance models
• Two examples of self-regulating systems:
  - A three-tiered e-commerce system
  - QoS-aware software components

19

Automatic QoS Control: Motivation

• Modern computer systems are complex and composed of multiple tiers.

20

Multi-tier Architecture

[Figure: the multiple tiers of a modern e-commerce site.]

21

Automatic QoS Control: Motivation

• Modern computer systems are complex and composed of multiple tiers.
• The workload presents short-term variations with high peak-to-average ratios.

22

Multi-scale time workload variation

[Figure: the same arrival traffic viewed at the 3600-sec, 60-sec, and 1-sec time scales.]

23

Automatic QoS Control: Motivation

• Modern computer systems are complex and composed of multiple tiers.
• The workload presents short-term variations with high peak-to-average ratios.
• Many software and hardware parameters influence the performance of e-commerce sites.

Manual reconfiguration is not an option! Need self-managing systems.

24

QoS Controller

[Figure: the QoS controller attached to the computer system. Arriving and completing requests are observed; a Workload Analyzer and a Service Demand Computation module feed a Performance Model Solver, which the QoS Controller Algorithm queries against the QoS goals before issuing reconfiguration commands. Numbered arrows (1)–(12) show the control flow.]

25

QoS Controller (cont'd)

[Figure: the same controller diagram; the monitoring step collects the device utilizations U_i and the system throughput X_0 over each controller interval.]

26

QoS Controller (cont'd)

[Figure: the same controller diagram; the Service Demand Computation module derives each device's service demand from the measurements via the Service Demand Law:]

D_i = U_i / X_0
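In code, the Service Demand Law step might look like the following minimal sketch (function and variable names are illustrative, not from the talk):

def service_demands(utilizations, completions, interval_sec):
    """Service Demand Law: D_i = U_i / X_0.

    utilizations: dict mapping resource name -> measured utilization (0..1)
    completions:  requests completed during the controller interval
    interval_sec: length of the controller interval in seconds
    """
    x0 = completions / interval_sec            # system throughput X_0
    return {res: u / x0 for res, u in utilizations.items()}

# Example: CPU 60% busy, disk 30% busy, 1800 completions in a 60-s interval
# -> X_0 = 30 req/s, D_cpu = 0.02 s, D_disk = 0.01 s
demands = service_demands({"cpu": 0.60, "disk": 0.30}, 1800, 60.0)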


27

QoS Controller (cont'd)

[Figure: the same controller diagram; the Workload Analyzer characterizes the arriving traffic, estimating the arrival rate λ for the next interval.]


29

Controller Interval

[Figure: timeline of controller intervals. During the i-th interval the controller collects measurements from the servers of the e-commerce site; at the end of the interval the controller algorithm runs and issues reconfiguration commands that take effect in the (i+1)-th interval.]
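Putting the pieces together, one controller cycle per interval could be sketched as below; all interfaces (monitor, model, search, reconfigure) are hypothetical stand-ins for the components in the diagram:

import time

def controller_loop(monitor, model, search, reconfigure, interval_sec=120):
    """One QoS-controller cycle per controller interval (illustrative sketch)."""
    while True:
        stats = monitor(interval_sec)              # measurements from the site's servers
        model.update(stats.utilizations,           # -> service demands (D_i = U_i / X_0)
                     stats.throughput,
                     stats.arrival_rate)           # -> workload intensity for the next interval
        new_cfg = search(model.current_config,     # combinatorial search over configurations,
                         model.predict_qos)        # each evaluated by the analytic model
        if new_cfg != model.current_config:
            reconfigure(new_cfg)                   # issue reconfiguration commands
        time.sleep(interval_sec)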


33

Combined QoS Metric

QoS = w_R × ΔQoS_R + w_P × ΔQoS_P + w_X × ΔQoS_X

w_R, w_P, and w_X are relative weights that indicate the relative importance of response time, probability of rejection, and throughput, respectively.

ΔQoS_R, ΔQoS_P, and ΔQoS_X are the relative deviations of the response time, probability of rejection, and throughput metrics with respect to their desired levels.

The QoS metric is a dimensionless number in the interval [-1, 1]. If all metrics meet or exceed their QoS targets, QoS ≥ 0.

34

Response Time Deviation

ΔQoS_R = (R_max − R_measured) / max(R_max, R_measured)

• ΔQoS_R = 0 if the response time exactly meets its target.
• ΔQoS_R > 0 if the response time is better (lower) than its target.
• ΔQoS_R < 0 if the response time does not meet its target.

Since R_measured can be no smaller than the total service demand, the deviation is bounded above:

ΔQoS_R ≤ 1 − (Σ_{i=1..K} D_i) / R_max < 1

and, when the target is not met,

−1 < −(1 − R_max / R_measured) ≤ ΔQoS_R

so ΔQoS_R always lies strictly between −1 and 1.

35

Probability of Rejection Deviation

ΔQoS_P = (P_max − P_measured) / max(P_max, P_measured)

• ΔQoS_P = 0 if the probability of rejection exactly meets its target.
• ΔQoS_P > 0 (and ≤ 1) if the probability of rejection is better (lower) than its target.
• ΔQoS_P < 0 (and > −1) if the probability of rejection does not meet its target.

36

Throughput Deviation

X*_min = min(X_min, λ)

ΔQoS_X = (X_measured − X*_min) / max(X_measured, X*_min)

• ΔQoS_X = 0 if the throughput exactly meets its target.
• ΔQoS_X > 0 (and < 1) if the throughput exceeds its target.
• ΔQoS_X < 0 (and > −1) if the throughput does not meet its target.

The target is capped at the arrival rate λ because the system cannot be required to complete more requests than arrive.
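The three deviations and the combined metric translate directly into code; a minimal sketch (names illustrative):

def qos_value(r, p, x, lam, r_max, p_max, x_min, w_r=1/3, w_p=1/3, w_x=1/3):
    """Combined QoS metric: QoS = w_R*dR + w_P*dP + w_X*dX, in [-1, 1].

    r, p, x: measured response time, rejection probability, throughput
    lam:     arrival rate; caps the throughput target at what actually arrives
    """
    x_star = min(lam, x_min)                 # X*_min = min(lambda, X_min)
    d_r = (r_max - r) / max(r_max, r)        # > 0 when better than the target
    d_p = (p_max - p) / max(p_max, p)
    d_x = (x - x_star) / max(x, x_star)
    return w_r * d_r + w_p * d_p + w_x * d_x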


37

Heuristic Optimization Approach

[Figure: the space of configurations spanned by server parameter 1 and server parameter 2.]

• The space of configuration points is searched using a combinatorial search technique.
• Each point has a QoS value computed through an analytic performance model:

QoS = f(W⃗, c_1, c_2, …, c_m)

where W⃗ is the workload and c_1, …, c_m are the configuration parameter settings.

38

Heuristic Optimization Approach

[Figure: the same configuration space, with the current configuration marked as the search's starting point.]

39

Heuristic Optimization Approach

[Figure: the same configuration space, with the new configuration selected by the search.]

40

Hill-Climbing Search

[Figure: a tree of configurations labeled A through J, annotated with QoS values (10, 8, 15, 13, 25, 11, 12, 17, 27, 30); at each step the search moves to the neighbor with the best QoS value.]
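A sketch of hill climbing over the configuration space; the qos function would be backed by the analytic performance model, and generating neighbors by stepping each integer server parameter by ±1 is an assumption made for illustration:

def neighbors(cfg):
    """Yield configurations that differ by +/- 1 in one server parameter."""
    for i, v in enumerate(cfg):
        for nv in (v - 1, v + 1):
            if nv > 0:
                yield cfg[:i] + (nv,) + cfg[i + 1:]

def hill_climb(start, qos):
    """Move to the best-scoring neighbor until no neighbor improves the QoS."""
    best, best_q = start, qos(start)
    while True:
        cands = [(qos(c), c) for c in neighbors(best)]
        if not cands:
            return best, best_q
        top_q, top = max(cands)
        if top_q <= best_q:
            return best, best_q              # local optimum reached
        best, best_q = top, top_q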


43

Beam Search

[Figure: a four-level tree of configurations annotated with QoS values; at each level only the best few nodes (the beam) are kept and expanded.]
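Beam search trades the greediness of hill climbing for a bounded frontier; a minimal sketch reusing the same neighbors and qos interfaces as above:

import heapq

def beam_search(start, neighbors, qos, width=4, depth=3):
    """Expand `depth` levels, keeping only the `width` best configurations."""
    frontier = [start]
    best = start
    for _ in range(depth):
        children = {c for cfg in frontier for c in neighbors(cfg)}
        if not children:
            break
        frontier = heapq.nlargest(width, children, key=qos)   # the beam
        best = max(frontier + [best], key=qos)
    return best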

44

A Queuing Model is Used to Compute QoS Values

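The slides do not spell out the model's equations; for an open multiclass queueing network with load-independent queuing centers (a natural fit here, since requests arrive at rate λ), the standard product-form result gives the sketch below:

def open_qn_response_times(lam, demands):
    """Open multiclass queueing network, FCFS queuing centers.

    lam:     per-class arrival rates
    demands: demands[i][c] = service demand of class c at device i (seconds)
    Returns per-class response times R_c = sum_i D_ic / (1 - U_i).
    """
    resp = [0.0] * len(lam)
    for dev in demands:
        u = sum(l * d for l, d in zip(lam, dev))   # total utilization of the device
        if u >= 1.0:
            raise ValueError("model is unstable: device utilization >= 1")
        for c, d in enumerate(dev):
            resp[c] += d / (1.0 - u)               # residence time at this device
    return resp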


45

Prototype Configuration

[Figure: a TPC-W site — web server, application server, and database server — connected through a 100 Mbps hub to a workload-generator workstation and to the QoS controller.]

46

Experiment Results

[Figure: arrival rate (requests/sec), ranging from 0 to 100, vs. time (controller intervals 0–35).]

47

Results of QoS Controller

-0.8

-0.6

-0.4

-0.2

0.0

0.2

0.4

0.6

0.8

14.4

14.0

14.4

41.3

38.1

36.6

62.2

61.5

63.5

77.3

80.4

78.5

83.8

85.7

85.9

88.2

88.3

89.2

90.0

90.3

89.5

85.7

81.4

83.7

79.0

75.9

73.5

66.2

64.1

64.4

Arrival Rate (req/sec)

QoS

Controlled QoS Uncontrolled QoS

2004 D. A . Menascé. All Rights Reserved.

48

Experiment Results

[Figure: the same arrival-rate trace (requests/sec vs. controller interval), with the annotation "QoS is not met!"]

49

Variable inter-arrival and service times of requests

• Real workloads exhibit high variability in:
  - Traffic intensity
  - Service demands at the various system resources
• Need to investigate the efficiency of the proposed self-managing technique under these conditions
• Consider variability in request inter-arrival times and request service times at the physical resources (e.g., CPU, disk)

50

Effect of Varying the COV of the Service Time (Ca = 1)

[Figure: three panels — (Ca = 1, Cs = 1), (Ca = 1, Cs = 2), and (Ca = 1, Cs = 4) — plotting average QoS vs. controller interval (1–29) for No Controller, Beam Search, and Hill Climbing.]

51

Effect of Varying the COV of the Interarrival Time (Cs = 1)

[Figure: three panels — (Ca = 1, Cs = 1), (Ca = 2, Cs = 1), and (Ca = 4, Cs = 1) — plotting average QoS vs. controller interval (1–29) for No Controller, Beam Search, and Hill Climbing.]

52

Extreme Values for Ca and Cs

[Figure: (Ca = 4, Cs = 4) — average QoS vs. controller interval (1–29) for No Controller, Beam Search, and Hill Climbing.]

53

Dynamic Controller Interval and Workload Forecasting

[Figure: average QoS vs. monitoring interval (1–29), comparing "No Forecasting + Dynamic Controller Interval" against "Forecasting + Dynamic Controller Interval".]
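The talk does not say which forecasting scheme was used; as one simple illustration, exponential smoothing of the observed arrival rates yields a next-interval forecast:

def forecast_arrival_rate(history, alpha=0.5):
    """Exponentially smoothed forecast of the next interval's arrival rate.
    Illustrative only; not necessarily the scheme used in the experiments."""
    s = history[0]
    for x in history[1:]:
        s = alpha * x + (1 - alpha) * s    # weight recent observations more
    return s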

54

Sensitivity of Controller to SLAs

• Need to investigate the controller's behavior under a variation in the SLAs
• We ran experiments for stricter and more relaxed SLAs:
  - Base: R_max = 1.2 sec, X_min = 5 req/sec, P_max = 0.05
  - Strict: R_max = 1.0 sec, X_min = 7 req/sec, P_max = 0.03
  - Relaxed: R_max = 1.5 sec, X_min = 4 req/sec, P_max = 0.10
• Used Ca = Cs = 2

55

Sensitivity of Controller to SLAs

[Figure: average QoS relative variation (−40 to 50) vs. controller interval (1–29); legend: No Controller Relaxed SLA, No Controller Strict SLA, Beam Search Relaxed SLA, Beam Search Strict SLA.]

56

Rest of this talk …

• Novel uses for performance models
• Two examples of self-regulating systems:
  - A three-tiered e-commerce system
  - QoS-aware software components
• Concluding Remarks

57

Q-Applications and Q-components

[Figure: a Q-application composed of several Q-components; each Q-component registers its services with a service directory.]

58

Q-Applications and Q-components

[Figure: the Q-application discovers the Q-components it needs through the service directory.]

59

Q-Applications and Q-components

[Figure: the Q-application negotiates QoS with each discovered Q-component.]

60

Q-Applications and Q-components

Q-componentQ-component

Q-component

Q-component

Q-component

Q-application

Service Access

2004 D. A . Menascé. All Rights Reserved.

31

61

QoS-Aware Software Components: Q-Components

• Engage in QoS negotiations (accept, reject, counter-offer)
• Provide QoS guarantees for multiple concurrent services
• Maintain a table of QoS commitments
• Dispatch services based on accepted QoS commitments
• Q-components are the building blocks of QoS-aware applications

62

Architecture of a typical software component

[Figure: a service registration interface and a service dispatcher in front of services 1 … k.]

63

Architecture of a Q-component (QoS Negotiation)

[Figure: a QoS request handler is added in front of the service dispatcher.]

64

Architecture of a Q-component (QoS Negotiation)

[Figure: the QoS request handler hands requests to a QoS negotiator, which consults a QoS evaluator and the table of QoS commitments.]

65

Architecture of a Q-component (QoS Negotiation)

[Figure: the QoS evaluator uses a performance model solver to predict whether a new commitment can be honored.]

66

Architecture of a Q-component — Service Requests

[Figure: at service time, the dispatcher schedules requests according to the table of QoS commitments while a performance monitor tracks delivered performance.]

67

Architecture of a Q-component

[Figure: the complete Q-component — service registration, QoS request handler, QoS negotiator, QoS evaluator, performance model solver, table of QoS commitments, service dispatcher, performance monitor, and services 1 … k.]

68

Successful QoS Negotiation

Client → Q-component: QoSRequest(rid, Sid, N, Rmax, Xmin)
Q-component → Client: Accept(rid, token)        [request entered in the ToC]
Client → Q-component: ServiceReq(…, token)
Q-component → Client: ReplyReq(…)
… (repeated service requests) …
Client → Q-component: EndSession(token)         [request removed from the ToC]

69

On-time Accepted Counteroffer

Client → Q-component: QoSRequest(rid, Sid, N, Rmax, Xmin)
Q-component → Client: CounterOffer(rid, N', token)      [request entered in the ToC]
Client → Q-component: AcceptCounterOffer(token)         [arrives before the timeout]

70

Expired Accepted Counteroffer

Client → Q-component: QoSRequest(rid, Sid, N, Rmax, Xmin)
Q-component → Client: CounterOffer(rid, N', token)      [request entered in the ToC]
Client → Q-component: AcceptCounterOffer(token)         [arrives after the timeout]
Q-component → Client: ExpiredCounterOffer(rid)          [request removed from the ToC]

71

Rejected Counteroffer

Client → Q-component: QoSRequest(rid, Sid, N, Rmax, Xmin)
Q-component → Client: CounterOffer(rid, N', token)      [request entered in the ToC]
Client → Q-component: RejectCounterOffer(token)         [before the timeout; request removed from the ToC]

72

Rejected QoS Negotiation

Client → Q-component: QoSRequest(rid, Sid, N, Rmax, Xmin)
Q-component → Client: RejectQoSRequest(rid)
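Seen from the client, the four outcomes above reduce to one small state machine; the stub methods and the acceptable() predicate are hypothetical names standing in for the message types in the diagrams:

def negotiate(q, rid, sid, n, r_max, x_min, acceptable):
    """Client side of the negotiation; returns a session token or None."""
    reply = q.qos_request(rid, sid, n, r_max, x_min)
    if reply.kind == "Accept":
        return reply.token                       # session admitted as requested
    if reply.kind == "CounterOffer":             # offered concurrency N' instead of N
        if acceptable(reply.n_prime):
            ack = q.accept_counter_offer(reply.token)      # must beat the timeout
            return None if ack.kind == "ExpiredCounterOffer" else reply.token
        q.reject_counter_offer(reply.token)
    return None                                  # RejectQoSRequest or declined offer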


73

Decision Table for QoS Negotiation

For each incoming request, the QoS evaluator checks whether the requested commitment (the "current" request) and the already-accepted commitments ("others") can be satisfied, applies a remedy to the concurrency level N where one exists, and decides:

1. Current and other requests are satisfied → Accept.

2. Only the current request is violated:
   • Only MAXR is violated — remedy: decrease N.
     - After the remedy: current OK, others OK → Counter Offer.
     - After the remedy: current OK, others not OK → Reject.
     - After the remedy: current not OK (MINX is violated or N = 0), others OK or not OK → Reject.
   • Only MINX is violated — remedy: increase N.
     - After the remedy: current OK, others OK → Counter Offer.
     - After the remedy: current not OK (MAXR is violated), others OK → Reject.
     - After the remedy: current not OK (MAXR is violated), others not OK → Reject.
   • MINX and MAXR are both violated → Reject (decreasing N reduces X and increasing N increases R, so there is no solution).

3. The current request and other requests are violated:
   • Only MAXR is violated → Reject.
   • Only MINX is violated → Reject (X could be increased by increasing N, but this would further violate the QoS of the other classes).
   • MINX and MAXR are both violated → Reject (no solution, as above).

4. Only other requests are violated — remedy: decrease N (for any reason).
   - After the remedy: current OK, others OK → Counter Offer.
   - After the remedy: current not OK (N = 1 but others still violated) → Reject.
   - After the remedy: current not OK (MINX is violated or N = 0), others OK or not OK → Reject.
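One condensed reading of the table as code, under the assumption that a helper try_with(n) re-solves the performance model at concurrency n and reports whether the current request and the other commitments would then be satisfied (both the helper and the condensation are illustrative):

def decide(current_ok, others_ok, maxr_violated, minx_violated, try_with, n):
    """Condensed decision logic for a QoSRequest asking for concurrency n."""
    if current_ok and others_ok:
        return "Accept"
    if maxr_violated and minx_violated:
        return "Reject"        # lowering N cuts X, raising N inflates R: no remedy
    if maxr_violated:          # remedy: decrease N and re-evaluate
        cur, oth = try_with(n - 1)
        return "CounterOffer" if cur and oth else "Reject"
    if minx_violated:          # remedy: increase N, at the risk of hurting other classes
        cur, oth = try_with(n + 1)
        return "CounterOffer" if cur and oth else "Reject"
    # only other commitments are violated: shed load by decreasing N
    cur, oth = try_with(n - 1)
    return "CounterOffer" if cur and oth else "Reject"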

74

[Figure: model-predicted response time and throughput as functions of the concurrency level N, for the current request and for the other classes; increasing N raises both the throughput and the response time.]

75

Building a Performance Model
New Request: Sid = 3, N = 12

Base Matrix of Service Demands (in msec):

          Service 1   Service 2   Service 3
CPU          25          34          20
Disk 1       30          50          24
Disk 2       28          42          31

Table of Commitments (ToC):

Commitment ID   Service ID    N
      1             2        10
      2             3        15
      3             1         8
      4             1        20
      5             2        13

Matrix of Service Demands per class (in msec), with the new request as class 6:

          Class 1  Class 2  Class 3  Class 4  Class 5  Class 6
CPU         34       20       25       25       34       20
Disk 1      50       24       30       30       50       24
Disk 2      42       31       28       28       42       31

Vector N:   10       15        8       20       13       12

76

Building a Performance ModelNew Request: Sid = 3, N = 12

Base Matrix of Service Demands (in msec): Table of Commitments (ToC):Commitment

ID Service ID N …1 2 3 1 2 10 …

CPU 25 34 20 2 3 15 …Disk 1 30 50 24 3 1 8 …Disk 2 28 42 31 4 1 20 …

5 2 13 …

Matrix of Service Demands (in msec)

1 2 3 4 5 6CPU 34 20 25 25 34 20Disk 1 50 24 30 30 50 24Disk 2 42 31 28 28 42 31

Vector N: 10 15 8 20 13 12

Service

Class

2004 D. A . Menascé. All Rights Reserved.
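Given the per-class demand matrix and the population vector N, the model can be solved, for example, with Schweitzer's approximate MVA for closed multiclass networks (a sketch; the talk does not name the exact solution technique):

def approx_mva(D, N, iters=200):
    """Schweitzer approximate MVA, closed multiclass network, no think time.

    D: D[i][c] = demand (sec) of class c at device i;  N: class populations (> 0)
    Returns (per-class throughputs, per-class response times).
    """
    K, C = len(D), len(N)
    q = [[N[c] / K for c in range(C)] for _ in range(K)]   # initial queue lengths
    for _ in range(iters):
        # residence time: R_ic = D_ic * (1 + queue seen on arrival)
        R = [[D[i][c] * (1.0 + sum(q[i]) - q[i][c] / N[c]) for c in range(C)]
             for i in range(K)]
        X = [N[c] / sum(R[i][c] for i in range(K)) for c in range(C)]
        q = [[X[c] * R[i][c] for c in range(C)] for i in range(K)]
    return X, [sum(R[i][c] for i in range(K)) for c in range(C)]

# The six classes from the slide (demands converted from msec to seconds):
D = [[0.034, 0.020, 0.025, 0.025, 0.034, 0.020],   # CPU
     [0.050, 0.024, 0.030, 0.030, 0.050, 0.024],   # Disk 1
     [0.042, 0.031, 0.028, 0.028, 0.042, 0.031]]   # Disk 2
X, R = approx_mva(D, [10, 15, 8, 20, 13, 12])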


77

Results for Service 0, Service 1, and Service 2

[Figures: per-service experimental results.]

80

Session drops and response-time reduction for three values of f:

f = 0.0
Svc No.   Dropped Sessions   Sessions   % Drop   % Resp. Time Reduction
  0             24              440        5              11
  1             19              470        4               9
  2             59              590       10               7
Total          102             1500        7               9

f = 0.10
Svc No.   Dropped Sessions   Sessions   % Drop   % Resp. Time Reduction
  0             52              440       12              21
  1             66              470       14              16
  2            148              590       25              12
Total          266             1500       18              16

f = 0.25
Svc No.   Dropped Sessions   Sessions   % Drop   % Resp. Time Reduction
  0             92              440       21              28
  1            140              470       30              26
  2            263              590       45              20
Total          495             1500       33              24

81

Real-Time Stock Quote Response Time

[Figure: response time (s), 0 to about 0.45, per session ID for the stock quote service, comparing R_NonQoS and R_QoS.]

82

Delayed Stock Quote Response Time

[Figure: response time (s), 0 to about 0.40, per session ID for the stock quote service, comparing R_NonQoS and R_QoS.]

83

Concluding Remarks

• Performance models can be used to build QoS controllers for complex multi-tiered systems:
  - The controlled system provides better QoS values even under high variability in request inter-arrival and service times.
  - Short-term workload forecasting improves the QoS, especially when the workload intensity gets close to the system saturation level.
  - Dynamic adjustment of the controller interval length improves the QoS further.
  - Even when basic model assumptions are violated, the models are robust enough to track the evolution of the performance metrics as the workload and configuration parameters change.

84

Concluding Remarks (Cont’d)

• Performance models can be used by software components to make admission-control decisions:
  - QoS components should be able to negotiate QoS requests and perform admission control.
  - The QoS negotiation overhead is small (it did not exceed 10% of the CPU service demand in our experiments).

85

Bibliography

• “On the Use of Online Analytic Performance Models in Self-Managing and Self-Organizing Computer Systems,” D. A. Menascé, M. Bennani, and H. Ruan, in Self-Star Properties in Complex Information Systems, O. Babaoglu, M. Jelasity, A. Montresor, C. Fetzer, S. Leonardi, A. van Moorsel, and M. van Steen, eds., Lecture Notes in Computer Science, Vol. 3460, Springer-Verlag, 2005.
• “Assessing the Robustness of Self-Managing Computer Systems under Highly Variable Workloads,” M. Bennani and D. Menascé, Proc. International Conference on Autonomic Computing (ICAC'04), New York, NY, May 17-18, 2004.
• “A Framework for QoS-Aware Software Components,” D. Menascé, H. Ruan, and H. Gomaa, Proc. 2004 ACM Workshop on Software and Performance (WOSP'04), San Francisco, CA, January 14, 2004.
• “On the Use of Performance Models to Design Self-Managing Systems,” D. Menascé and M. Bennani, Proc. 2003 Computer Measurement Group Conference, Dallas, TX, Dec. 7-12, 2003.
• “Automatic QoS Control,” D. A. Menascé, IEEE Internet Computing, Vol. 7, No. 1, January/February 2003.
• “Preserving QoS of E-commerce Sites Through Self-Tuning: A Performance Model Approach,” D. A. Menascé, R. Dodge, and D. Barbará, Proc. 2001 ACM Conference on E-commerce, Tampa, FL, October 14-17, 2001.