CSE 325: Operating Systems, Spring 2014. Module 7: Scheduling. Md. Shamsujjoha

Lecture 7 scheduling


Page 1: Lecture 7 scheduling

CSE 325:

Operating

Systems

Spring 2014

Module 7

Scheduling

Page 2: Lecture 7 scheduling

Scheduling

• In discussing processes and threads, we talked about context switching
  – an interrupt occurs (device completion, timer interrupt)
  – a thread causes a trap or exception
  – may need to choose a different thread/process to run
• We glossed over the choice of which process or thread is chosen to be run next
  – "some thread from the ready queue"
• This decision is called scheduling
  – scheduling is a policy
  – context switching is a mechanism

Page 3: Lecture 7 scheduling

Classes of Schedulers

• Batch
  – Throughput / utilization oriented
  – Examples: auditing inter-bank funds transfers each night, Pixar rendering, Hadoop/MapReduce jobs
• Interactive
  – Response time oriented
  – Example: attu.cs
• Real time
  – Deadline driven
  – Example: embedded systems (cars, airplanes, etc.)
• Parallel
  – Speedup driven
  – Example: "space-shared" use of a 1000-processor machine for large simulations

We'll be talking primarily about interactive schedulers

Page 4: Lecture 7 scheduling

Multiple levels of scheduling decisions

• Long term
  – Should a new "job" be "initiated," or should it be held?
    • typical of batch systems
    • what might cause you to make a "hold" decision?
• Medium term
  – Should a running program be temporarily marked as non-runnable (e.g., swapped out)?
• Short term
  – Which thread should be given the CPU next? For how long?
  – Which I/O operation should be sent to the disk next?
  – On a multiprocessor:
    • should we attempt to coordinate the running of threads from the same address space in some way?
    • should we worry about cache state (processor affinity)?

Page 5: Lecture 7 scheduling

Scheduling Goals I: Performance

• Many possible metrics / performance goals (which sometimes conflict)
  – maximize CPU utilization
  – maximize throughput (requests completed per second)
  – minimize average response time (average time from submission of request to completion of response)
  – minimize average waiting time (average time from submission of request to start of execution)
  – minimize energy (joules per instruction) subject to some constraint (e.g., frames per second)

Page 6: Lecture 7 scheduling

Scheduling Goals II: Fairness

• No single, compelling definition of "fair"
  – How to measure fairness?
    • Equal CPU consumption? (over what time scale?)
  – Fair per-user? per-process? per-thread?
  – What if one process is CPU bound and one is I/O bound?
• Sometimes the goal is to be unfair:
  – Explicitly favor some particular class of requests (priority system), but…
  – avoid starvation (be sure everyone gets at least some service)

Page 7: Lecture 7 scheduling

The basic situation

[Figure: schedulable units on one side, resources on the other; scheduling maps units onto resources]

Scheduling:
- Who to assign each resource to
- When to re-evaluate your decisions

Page 8: Lecture 7 scheduling

When to assign?

• Preemptive vs. non-preemptive schedulers
  – Non-preemptive
    • once you give somebody the green light, they've got it until they relinquish it
      – an I/O operation
      – allocation of memory in a system without swapping
  – Preemptive
    • you can re-visit a decision
      – setting the timer allows you to preempt the CPU from a thread even if it doesn't relinquish it voluntarily
      – in any modern system, if you mark a program as non-runnable, its memory resources will eventually be re-allocated to others
• Re-assignment always involves some overhead
  – Overhead doesn't contribute to the goal of any scheduler
• We'll assume "work conserving" policies
  – Never leave a resource idle when someone wants it
• Why even mention this? When might it be useful to do something else?

Page 9: Lecture 7 scheduling

Before we look at specific policies

• There are some simple but useful "laws" to know about…
• The Utilization Law: U = X * S
  – where U is utilization, X is throughput (requests per second), and S is average service requirement (time per request)
  – Obviously true
  – This means that utilization is constant, independent of the schedule, so long as the workload can be processed

Page 10: Lecture 7 scheduling

• Little's Law: N = X * R
  – where N is the average number in the system, X is throughput, and R is average response time (average time in system)
  – This means that better average response time implies fewer in the system, and vice versa
  – Proof:
    • Let W denote the total time-in-system accumulated by all customers during a time interval of length T
    • The average number of requests in the system is N = W / T
    • If C customers complete during that interval, the average contribution of each completing request is R = W / C, and the throughput is X = C / T
    • Algebraically, W/T = (C/T) * (W/C)
    • Thus, N = X * R
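The algebra above can be checked numerically. The sketch below uses an illustrative helper (`fcfs_trace`) and made-up arrival/service data; it measures N, X, and R over one interval and confirms that N = X * R holds identically when every customer completes within the interval.

```python
# A direct numeric check of Little's Law, N = X * R, on a made-up FCFS
# trace of (arrival time, service time) pairs.

def fcfs_trace(jobs):
    """Return (start, finish) pairs for FCFS service of (arrival, service) jobs."""
    t, out = 0.0, []
    for arrival, service in jobs:
        start = max(t, arrival)
        finish = start + service
        out.append((start, finish))
        t = finish
    return out

jobs = [(0.0, 3.0), (1.0, 2.0), (2.0, 4.0), (8.0, 1.0)]
times = fcfs_trace(jobs)

T = max(f for _, f in times)                           # observation interval [0, T]
W = sum(f - a for (a, _), (_, f) in zip(jobs, times))  # total time-in-system
C = len(jobs)                                          # completions in [0, T]

N = W / T        # average number in system
X = C / T        # throughput
R = W / C        # average response time

assert abs(N - X * R) < 1e-12   # Little's Law holds identically here
```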

Page 11: Lecture 7 scheduling

(Not quite a law – requires some assumptions)

• Response time at a single server under FCFS scheduling: R = S / (1 - U)
  – Clearly, when a customer arrives, her response time will be the service time of everyone ahead of her in line, plus her own service time: R = S * (1 + A)
    • Assumes everyone has the same average service time
  – Assume that the number you see ahead of you at your instant of arrival is the long-term average number in line; so R = S * (1 + N)
  – By Little's Law, N = X * R
  – So R = S * (1 + X*R) = S + S*X*R; solving for R gives R = S / (1 - X*S)
  – By the Utilization Law, U = X * S
  – So R = S / (1 - U)
  – And since N = X * R, N = U / (1 - U)
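To get a feel for these relations, here is a small worked check (the function name `fcfs_response_time` and the chosen numbers are illustrative, not from the slides). It shows how response time blows up as utilization approaches 1.

```python
# Worked check of the FCFS single-server relations:
# R = S / (1 - U) and N = U / (1 - U).

def fcfs_response_time(S, U):
    """Average response time at a single FCFS server.
    S = average service time, U = utilization (0 <= U < 1)."""
    assert 0 <= U < 1, "the formula only makes sense for U < 1"
    return S / (1.0 - U)

S = 0.01                     # 10 ms average service time
for U in (0.5, 0.9, 0.99):
    R = fcfs_response_time(S, U)
    N = U / (1.0 - U)        # average number in system
    # sanity check via Little's Law, with X = U / S from the Utilization Law
    assert abs((U / S) * R - N) < 1e-9
    print(f"U={U:.2f}: R={R*1000:.1f} ms, N={N:.1f}")
```

Note how doubling utilization from 0.5 to dangerously close to 1 multiplies response time by two orders of magnitude, even though the service time S never changes.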

Page 12: Lecture 7 scheduling

• Kleinrock's Conservation Law for priority scheduling: Σp Up * Rp = constant
  – where Up is the utilization due to priority level p and Rp is the average time in system of priority level p
  – This means you can't improve the response time of one class of tasks by increasing its priority without hurting the response time of at least one other class

Page 13: Lecture 7 scheduling

Algorithm #1: FCFS/FIFO

• First-come first-served / first-in first-out (FCFS/FIFO)
  – schedule requests in the order that they arrive
  – "real-world" scheduling of people in (single) lines
    • supermarkets, McD's, Starbucks…
  – jobs treated equally, no starvation
    • In what sense is this "fair"?
• Sounds perfect!
  – in the real world, when does FCFS/FIFO work well?
    • even then, what's its limitation?
  – and when does it work badly?

Page 14: Lecture 7 scheduling

FCFS/FIFO example

Process   Burst Time
P1        24
P2        3
P3        3

• Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

  | P1 (0-24) | P2 (24-27) | P3 (27-30) |

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
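The waiting-time arithmetic in this example can be reproduced in a few lines. The helper name `fcfs_waiting_times` is illustrative, and all jobs are assumed to arrive at time 0, as in the example.

```python
# A minimal FCFS waiting-time calculator: each job waits exactly as long
# as the total burst time of everyone ahead of it in the queue.

def fcfs_waiting_times(bursts):
    """Given burst times in arrival order, return each job's waiting time."""
    waits, t = [], 0
    for b in bursts:
        waits.append(t)   # job waits until everyone ahead of it finishes
        t += b
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # arrival order P1, P2, P3
assert waits == [0, 24, 27]
assert sum(waits) / len(waits) == 17.0
```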

Page 15: Lecture 7 scheduling

FCFS/FIFO example cont…

• Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:

  | P2 (0-3) | P3 (3-6) | P1 (6-30) |

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case
• Convoy effect: short processes stuck behind a long process
  – Consider one CPU-bound and many I/O-bound processes

Page 16: Lecture 7 scheduling

FCFS/FIFO example cont…

• Suppose the duration of A is 5, and the durations of B and C are each 1
  – average response time for schedule 1 (A, then B, then C, all arriving at about time 0) is (5+6+7)/3 = 18/3 = 6
  – average response time for schedule 2 (B, then C, then A) is (1+2+7)/3 = 10/3 ≈ 3.3
  – consider also the "elongation factor" (response time / service time), a "perceptual" measure:
    • Schedule 1: A is 5/5, B is 6/1, C is 7/1 (worst is 7, ave is 4.7)
    • Schedule 2: A is 7/5, B is 1/1, C is 2/1 (worst is 2, ave is 1.5)

[Figure: timelines of schedule 1 (A, then B, then C) and schedule 2 (B, then C, then A)]

Page 17: Lecture 7 scheduling

FCFS/FIFO drawbacks

• Average response time can be lousy
  – small requests wait behind big ones
• May lead to poor utilization of other resources
  – if you send me on my way, I can go keep another resource busy
  – FCFS may result in poor overlap of CPU and I/O activity
    • E.g., a CPU-intensive job prevents an I/O-intensive job from doing a small bit of computation, thus preventing it from going back and keeping the I/O subsystem busy
• Note: the more copies of the resource there are to be scheduled, the less dramatic the impact of occasional very large jobs (so long as there is a single waiting line)
  – E.g., many cores vs. one core

Page 18: Lecture 7 scheduling

Algorithm #2: SPT/SJF

• Shortest processing time first / shortest job first (SPT/SJF)
  – choose the request with the smallest service requirement
• Provably optimal with respect to average response time
  – Why do we care about "provably optimal"?

Page 19: Lecture 7 scheduling

Example of SJF

Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

• SJF scheduling chart (treating all processes as available at time 0):

  | P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
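The chart above can be reproduced with a short sketch of non-preemptive SJF. The helper name `sjf_waiting_times` is illustrative, and all jobs are assumed available at time 0, as the chart does.

```python
# Non-preemptive SJF with all jobs available at time 0: run jobs in
# increasing order of burst time, accumulating waiting times.

def sjf_waiting_times(bursts):
    """bursts: dict name -> burst time. Returns dict name -> waiting time."""
    waits, t = {}, 0
    for name in sorted(bursts, key=bursts.get):  # shortest burst first
        waits[name] = t
        t += bursts[name]
    return waits

waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
assert waits == {"P4": 0, "P1": 3, "P3": 9, "P2": 16}
assert sum(waits.values()) / 4 == 7.0
```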

Page 20: Lecture 7 scheduling

SPT/SJF optimality – the interchange argument

[Figure: f runs from tk to tk+sf, then g runs from tk+sf to tk+sf+sg]

• In any schedule that is not SPT/SJF, there is some adjacent pair of requests f and g where the service time (duration) of f, sf, exceeds that of g, sg
• The total contribution to average response time of f and g is (tk+sf) + (tk+sf+sg) = 2tk + 2sf + sg
• If you interchange f and g, their total contribution will be 2tk + 2sg + sf, which is smaller because sg < sf
• If the variability among request durations is zero, how does FCFS compare to SPT for average response time?
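The interchange argument can be checked numerically; the sketch below (with illustrative numbers for tk, sf, sg) confirms that swapping a long-before-short adjacent pair shrinks the completion-time sum from 2tk + 2sf + sg to 2tk + 2sg + sf.

```python
# Numeric illustration of the interchange argument: swapping an adjacent
# long-before-short pair always lowers the sum of completion times.

def completion_sum(durations, start=0):
    """Sum of completion times when jobs run back-to-back from `start`."""
    total, t = 0, start
    for d in durations:
        t += d
        total += t
    return total

tk, sf, sg = 4, 5, 2            # f is longer than g (sf > sg)
before = completion_sum([sf, sg], start=tk)   # f then g
after  = completion_sum([sg, sf], start=tk)   # g then f
assert before == 2*tk + 2*sf + sg == 20
assert after  == 2*tk + 2*sg + sf == 17
assert after < before            # the interchange strictly helps when sf > sg
```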

Page 21: Lecture 7 scheduling

SPT/SJF drawbacks

• It's non-preemptive
  – So?
  – …but there's a preemptive version – SRPT (shortest remaining processing time first) – that accommodates arrivals (rather than assuming all requests are initially available)
• Sounds perfect!
  – what about starvation?
  – can you know the processing time of a request?
  – can you guess/approximate? How?

Page 22: Lecture 7 scheduling

Example of shortest-remaining-time-first

• Now we add the concepts of varying arrival times and preemption to the analysis

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

• Preemptive SJF Gantt chart:

  | P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

• Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec
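A small SRPT simulator reproduces this chart and the 6.5 ms average. The helper name `srtf` is illustrative; it steps one time unit at a time and always picks the ready job with the least remaining work.

```python
# Shortest-remaining-processing-time-first, simulated one time unit at a
# time; waiting time = finish - arrival - burst.

def srtf(jobs):
    """jobs: dict name -> (arrival, burst). Returns dict name -> waiting time."""
    remaining = {n: b for n, (a, b) in jobs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= t]
        if not ready:
            t += 1               # idle until the next arrival
            continue
        n = min(ready, key=lambda j: remaining[j])  # least remaining work
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            finish[n] = t
            del remaining[n]
    return {n: finish[n] - jobs[n][0] - jobs[n][1] for n in jobs}

jobs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}
waits = srtf(jobs)
assert waits == {"P1": 9, "P2": 0, "P3": 15, "P4": 2}
assert sum(waits.values()) / 4 == 6.5
```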

Page 23: Lecture 7 scheduling

Algorithm #3: RR

• Round robin scheduling (RR)
  – Use preemption to offset lack of information about execution times
    • I don't know which one should run first, so let's run them all!
  – ready queue is treated as a circular FIFO queue
  – each request is given a time slice, called a quantum
    • request executes for the duration of the quantum, or until it blocks
    • what signifies the end of a quantum?
  – time-division multiplexing (time-slicing)
  – great for timesharing
    • no starvation
• Sounds perfect!
  – how is RR an improvement over FCFS?
  – how is RR an improvement over SPT?
  – how is RR an approximation to SPT?

Page 24: Lecture 7 scheduling

Example of RR with time quantum = 4

Process   Burst Time
P1        24
P2        3
P3        3

• The Gantt chart is:

  | P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

• Typically, higher average turnaround than SJF, but better response
• q should be large compared to the context switch time
  – q is usually 10 ms to 100 ms; a context switch takes < 10 usec
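The chart above can be reproduced with a short round-robin sketch (the helper name `round_robin` is illustrative; all arrivals are at time 0, and a job that exhausts its quantum goes to the back of the queue).

```python
# Round robin with a fixed quantum: serve the head of a FIFO queue for at
# most q time units, then requeue it if it still has work left.

from collections import deque

def round_robin(bursts, q):
    """bursts: list of (name, burst). Returns list of (name, start, end) slices."""
    queue = deque(bursts)
    t, chart = 0, []
    while queue:
        name, rem = queue.popleft()
        run = min(q, rem)
        chart.append((name, t, t + run))
        t += run
        if rem > run:
            queue.append((name, rem - run))  # requeue with remaining work
    return chart

chart = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], q=4)
assert chart[:3] == [("P1", 0, 4), ("P2", 4, 7), ("P3", 7, 10)]
assert chart[-1] == ("P1", 26, 30)   # P1 finishes at time 30
```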

Page 25: Lecture 7 scheduling

Time Quantum and Context Switch Time

Page 26: Lecture 7 scheduling

Turnaround Time Varies With The Time Quantum

80% of CPU bursts should be shorter than q

Page 27: Lecture 7 scheduling

RR drawbacks

• What if all jobs are exactly the same length?
  – What would the pessimal schedule be (with average response time as the measure)?
• What do you set the quantum to be?
  – no value is "correct"
    • if small, you context switch often, incurring high overhead
    • if large, response time degrades
• Treats all jobs equally
  – if I run 100 copies of SETI@home, it degrades your service
  – how might I fix this?

Page 28: Lecture 7 scheduling

Algorithm #4: Priority

• Assign priorities to requests
  – choose the request with the highest priority to run next
    • if there is a tie, use another scheduling algorithm to break it (e.g., RR)
  – Goal: non-fairness (favor one group over another)
• Abstractly modeled (and usually implemented) as multiple "priority queues"
  – put a ready request on the queue associated with its priority
• Sounds perfect!

Page 29: Lecture 7 scheduling

Example of priority scheduling

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

• Priority scheduling Gantt chart (lower number = higher priority):

  | P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

• Waiting time for P1 = 6; P2 = 0; P3 = 16; P4 = 18; P5 = 1
• Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec
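The example above can be reproduced with a non-preemptive priority scheduler sketch (the helper name `priority_schedule` is illustrative; all jobs are assumed available at time 0, and a lower number means higher priority).

```python
# Non-preemptive priority scheduling: run jobs in increasing priority
# number (highest priority first), accumulating waiting times.

def priority_schedule(jobs):
    """jobs: dict name -> (burst, priority). Returns dict name -> waiting time."""
    waits, t = {}, 0
    for name in sorted(jobs, key=lambda n: jobs[n][1]):  # highest priority first
        waits[name] = t
        t += jobs[name][0]
    return waits

jobs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}
waits = priority_schedule(jobs)
assert waits == {"P2": 0, "P5": 1, "P1": 6, "P3": 16, "P4": 18}
assert sum(waits.values()) / 5 == 8.2
```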

Page 30: Lecture 7 scheduling

Priority drawbacks

• How are you going to assign priorities?
• Starvation
  – if there is an endless supply of high-priority jobs, no low-priority job will ever run
• Solution: "age" threads over time
  – increase priority as a function of accumulated wait time
  – decrease priority as a function of accumulated processing time
  – many ugly heuristics have been explored in this space

Page 31: Lecture 7 scheduling

Program behavior and scheduling

• An analogy:
  – Say you're at the airport waiting for a flight
  – There are two identical ATMs:
    • ATM 1 has 3 people in line
    • ATM 2 has 6 people in line
  – You get into the line for ATM 1
  – ATM 2's line shrinks to 4 people
  – Why might you now switch lines, preferring 5th in line for ATM 2 over 4th in line for ATM 1?

Page 32: Lecture 7 scheduling

Residual life

• Given that a job has already executed for X seconds, how much longer will it execute, on average, before completing?

[Figure: residual life vs. time already executed. If residual life decreases with time executed, give priority to new jobs; if it is constant, use round robin; if it increases, give priority to old jobs]

Page 33: Lecture 7 scheduling

Multilevel Queue

• Ready queue is partitioned into separate queues, e.g.:
  – foreground (interactive)
  – background (batch)
• Each process is permanently assigned to a given queue
• Each queue has its own scheduling algorithm:
  – foreground – RR
  – background – FCFS
• Scheduling must also be done between the queues:
  – Fixed priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
  – Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS

Page 34: Lecture 7 scheduling

Multilevel Queue Scheduling

Page 35: Lecture 7 scheduling

Multilevel Feedback Queue

• A process can move between the various queues; aging can be implemented this way
• A multilevel feedback queue scheduler is defined by the following parameters:
  – number of queues
  – scheduling algorithm for each queue
  – method used to determine when to upgrade a process
  – method used to determine when to demote a process
  – method used to determine which queue a process will enter when that process needs service

Page 36: Lecture 7 scheduling

Example of multilevel feedback queue

• Three queues:
  – Q0 – RR with time quantum 8 milliseconds
  – Q1 – RR with time quantum 16 milliseconds
  – Q2 – FCFS
• Scheduling:
  – A new job enters queue Q0, which is served FCFS
    • When it gains the CPU, the job receives 8 milliseconds
    • If it does not finish in 8 milliseconds, the job is moved to queue Q1
  – At Q1 the job is again served FCFS and receives 16 additional milliseconds
    • If it still does not complete, it is preempted and moved to queue Q2
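This three-queue scheme can be sketched in a few lines. This is a simplified model with illustrative names (`mlfq`, `QUANTA`): all jobs arrive at time 0, so the preemption of lower queues by new arrivals, which a real MLFQ would do, never comes into play.

```python
# A minimal model of the three-queue MLFQ above: Q0 is RR with q=8,
# Q1 is RR with q=16, Q2 is FCFS (run to completion). A job that
# exhausts its quantum is demoted to the next queue down.

from collections import deque

QUANTA = [8, 16, None]   # None = FCFS (run to completion)

def mlfq(jobs):
    """jobs: list of (name, burst). Returns list of (name, level, start, end)."""
    queues = [deque(jobs), deque(), deque()]
    t, trace = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, rem = queues[level].popleft()
        q = QUANTA[level]
        run = rem if q is None else min(q, rem)
        trace.append((name, level, t, t + run))
        t += run
        if rem > run:
            queues[level + 1].append((name, rem - run))      # demote
    return trace

trace = mlfq([("A", 5), ("B", 30)])
# A finishes within its first 8 ms slice in Q0; B burns 8 ms in Q0,
# 16 ms in Q1, and its last 6 ms in Q2.
assert trace == [("A", 0, 0, 5), ("B", 0, 5, 13), ("B", 1, 13, 29), ("B", 2, 29, 35)]
```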

Page 37: Lecture 7 scheduling

UNIX scheduling

• The canonical scheduler is pretty much MLFQ
  – 3-4 classes spanning ~170 priority levels
    • timesharing: lowest 60 priorities
    • system: middle 40 priorities
    • real-time: highest 60 priorities
  – priority scheduling across queues, RR within each
    • the process with the highest priority always runs first
    • processes with the same priority are scheduled RR
  – processes dynamically change priority
    • priority increases over time if the process blocks before the end of its quantum
    • priority decreases if the process uses its entire quantum
• Goals:
  – reward interactive behavior over CPU hogs
    • interactive jobs typically have short bursts of CPU

Page 38: Lecture 7 scheduling

Scheduling the Apache web server with SRPT

• What does a web request consist of? (What's it trying to get done?)
• How are incoming web requests scheduled, in practice?
• How might you estimate the service time of an incoming request?
• Starvation under SRPT is a problem in theory – is it a problem in practice?
  – "Kleinrock's conservation law"

(Work by Bianca Schroeder and Mor Harchol-Balter at CMU)

Page 39: Lecture 7 scheduling

[Results figure] © 2003 Bianca Schroeder & Mor Harchol-Balter, CMU

Page 40: Lecture 7 scheduling

Summary

• Scheduling takes place at many levels
• It can make a huge difference in performance
  – this difference increases with the variability in service requirements
• Multiple goals, sometimes conflicting
• There are many "pure" algorithms, most with some drawbacks in practice – FCFS, SPT, RR, Priority
• Real systems use hybrids that exploit observed program behavior
• Scheduling is still important, and there are still new angles to be explored – particularly in large-scale datacenters, for reasons of cost and energy