
Page 1: CS162 Operating Systems and Systems Programming Lecture 10

CS162 Operating Systems and Systems Programming

Lecture 10

Scheduling (con't)

February 25th, 2020
Prof. John Kubiatowicz

http://cs162.eecs.Berkeley.edu

Acknowledgments: Lecture slides are from the Operating Systems course taught by John Kubiatowicz at Berkeley, with a few minor updates/changes. When slides are obtained from other sources, a reference will be noted on the bottom of that slide, in which case a full list of references is provided on the last slide.

Page 2: CS162 Operating Systems and Systems Programming Lecture 10


Recall: Scheduling

• Discussion of Scheduling:
  – Which thread should run on the CPU next?
• Scheduling goals, policies
• Look at a number of different schedulers

if ( readyThreads(TCBs) ) {
    nextTCB = selectThread(TCBs);
    run( nextTCB );
} else {
    run_idle_thread();
}

Page 3: CS162 Operating Systems and Systems Programming Lecture 10


Recall: Scheduling Policy Goals/Criteria
• Minimize Response Time
  – Minimize elapsed time to do an operation (or job)
  – Response time is what the user sees:
    » Time to echo a keystroke in editor
    » Time to compile a program
    » Real-time Tasks: Must meet deadlines imposed by World
• Maximize Throughput
  – Maximize operations (or jobs) per second
  – Throughput related to response time, but not identical:
    » Minimizing response time will lead to more context switching than if you only maximized throughput
  – Two parts to maximizing throughput
    » Minimize overhead (for example, context-switching)
    » Efficient use of resources (CPU, disk, memory, etc.)
• Fairness
  – Share CPU among users in some equitable way
  – Fairness is not minimizing average response time:
    » Better average response time by making system less fair

Page 4: CS162 Operating Systems and Systems Programming Lecture 10


Recall: Example of RR with Time Quantum = 20
• Example:  Process   Burst Time
            P1        53
            P2        8
            P3        68
            P4        24
  – The Gantt chart is:
      P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3
      0   20   28   48   68   88  108  112  125  145  153
  – Waiting time for
      P1 = (68-20)+(112-88) = 72
      P2 = (20-0) = 20
      P3 = (28-0)+(88-48)+(125-108) = 85
      P4 = (48-0)+(108-68) = 88
  – Average waiting time = (72+20+85+88)/4 = 66¼
  – Average completion time = (125+28+153+112)/4 = 104½ (these numbers are reproduced by the sketch below)
• Thus, Round-Robin Pros and Cons:
  – Better for short jobs, Fair (+)
  – Context-switching time adds up for long jobs (-)
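The waiting- and completion-time arithmetic above can be double-checked with a tiny simulator. This is an illustrative sketch rather than anything from the course: the process names and burst times come from the example, and all jobs are assumed to arrive at time 0, as on the slide.

/* Minimal round-robin simulator (illustrative sketch).
 * Replays the P1..P4 example with quantum = 20 and prints per-process
 * waiting and completion times, which should match the numbers above. */
#include <stdio.h>

#define NPROC 4

int main(void) {
    const char *name[NPROC] = {"P1", "P2", "P3", "P4"};
    int burst[NPROC]      = {53, 8, 68, 24};   /* total CPU demand */
    int remaining[NPROC];
    int completion[NPROC] = {0};
    int quantum = 20, time = 0, done = 0;

    for (int i = 0; i < NPROC; i++) remaining[i] = burst[i];

    /* All jobs arrive at time 0; cycle through them in order. */
    while (done < NPROC) {
        for (int i = 0; i < NPROC; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) { completion[i] = time; done++; }
        }
    }

    double avg_wait = 0, avg_comp = 0;
    for (int i = 0; i < NPROC; i++) {
        int wait = completion[i] - burst[i];      /* arrival time is 0 */
        printf("%s: completion=%3d waiting=%3d\n", name[i], completion[i], wait);
        avg_wait += wait; avg_comp += completion[i];
    }
    printf("avg waiting=%.2f avg completion=%.2f\n",
           avg_wait / NPROC, avg_comp / NPROC);
    return 0;
}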

Page 5: CS162 Operating Systems and Systems Programming Lecture 10


Comparisons between FCFS and Round Robin
• Assuming zero-cost context-switching time, is RR always better than FCFS?
• Simple example: 10 jobs, each takes 100s of CPU time
    RR scheduler quantum of 1s
    All jobs start at the same time
• Completion Times:

    Job #    FIFO    RR
    1        100     991
    2        200     992
    …        …       …
    9        900     999
    10       1000    1000

  – Both RR and FCFS finish at the same time
  – Average response time is much worse under RR!
    » Bad when all jobs same length
• Also: Cache state must be shared between all jobs with RR but can be devoted to each job with FIFO
  – Total time for RR longer even for zero-cost switch!

Page 6: CS162 Operating Systems and Systems Programming Lecture 10


Earlier Example with Different Time Quantum

Best FCFS order:  P2[8]  P4[24]  P1[53]  P3[68]
                  0      8       32      85      153

                  Quantum      P1     P2     P3     P4     Average
Wait Time         Best FCFS    32     0      85     8      31¼
                  Q = 1        84     22     85     57     62
                  Q = 5        82     20     85     58     61¼
                  Q = 8        80     8      85     56     57¼
                  Q = 10       82     10     85     68     61¼
                  Q = 20       72     20     85     88     66¼
                  Worst FCFS   68     145    0      121    83½
Completion Time   Best FCFS    85     8      153    32     69½
                  Q = 1        137    30     153    81     100½
                  Q = 5        135    28     153    82     99½
                  Q = 8        133    16     153    80     95½
                  Q = 10       135    18     153    92     99½
                  Q = 20       125    28     153    112    104½
                  Worst FCFS   121    153    68     145    121¾

Page 7: CS162 Operating Systems and Systems Programming Lecture 10


Handling Differences in Importance: Strict Priority Scheduling

• Execution Plan
  – Always execute highest-priority runnable jobs to completion
  – Each queue can be processed in RR with some time-quantum (see the sketch below)
• Problems:
  – Starvation:
    » Lower priority jobs don't get to run because of higher priority jobs
  – Deadlock: Priority Inversion
    » Not strictly a problem with priority scheduling, but happens when low priority task has lock needed by high-priority task
    » Usually involves third, intermediate priority task that keeps running even though high-priority task should be running
• How to fix problems?
  – Dynamic priorities – adjust base-level priority up or down based on heuristics about interactivity, locking, burst behavior, etc…

[Figure: per-priority ready queues, Priority 3 (highest) down to Priority 0 (lowest), holding Jobs 1–7]
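A minimal sketch of the execution plan above: run the head of the highest non-empty priority queue for one quantum, round-robin within that level. The queue contents and the run_for_quantum stub are made up for illustration; note how the low-priority job starves, exactly as the slide warns.

/* Strict priority scheduling with RR inside each level (illustrative sketch). */
#include <stdio.h>

#define LEVELS    4     /* Priority 3 (highest) .. Priority 0 (lowest) */
#define MAX_JOBS  8

struct queue { const char *jobs[MAX_JOBS]; int head, count; };

static struct queue rq[LEVELS];

static void enqueue(int prio, const char *job) {
    struct queue *q = &rq[prio];
    q->jobs[(q->head + q->count++) % MAX_JOBS] = job;
}

static void run_for_quantum(const char *job) { printf("running %s\n", job); }

/* Always serve the highest non-empty priority; RR within that level. */
static void schedule_one_quantum(void) {
    for (int p = LEVELS - 1; p >= 0; p--) {
        struct queue *q = &rq[p];
        if (q->count == 0) continue;
        const char *job = q->jobs[q->head];
        q->head = (q->head + 1) % MAX_JOBS;
        q->count--;
        run_for_quantum(job);
        enqueue(p, job);        /* job not finished: back of the same queue */
        return;
    }
    printf("idle\n");
}

int main(void) {
    enqueue(3, "Job 1"); enqueue(3, "Job 2");   /* highest priority          */
    enqueue(0, "Job 5");                        /* starves while level 3 busy */
    for (int i = 0; i < 5; i++) schedule_one_quantum();
    return 0;
}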

Page 8: CS162 Operating Systems and Systems Programming Lecture 10


Scheduling Fairness

• What about fairness?
  – Strict fixed-priority scheduling between queues is unfair (run highest, then next, etc.):
    » long running jobs may never get CPU
    » Urban legend: In Multics, shut down machine, found 10-year-old job ⇒ Ok, probably not…
  – Must give long-running jobs a fraction of the CPU even when there are shorter jobs to run
  – Tradeoff: fairness gained by hurting avg response time!

Page 9: CS162 Operating Systems and Systems Programming Lecture 10


Scheduling Fairness

• How to implement fairness?
  – Could give each queue some fraction of the CPU
    » What if one long-running job and 100 short-running ones?
    » Like express lanes in a supermarket – sometimes express lanes get so long, get better service by going into one of the other lines
  – Could increase priority of jobs that don't get service
    » What is done in some variants of UNIX
    » This is ad hoc – what rate should you increase priorities?
    » And, as system gets overloaded, no job gets CPU time, so everyone increases in priority ⇒ Interactive jobs suffer

Page 10: CS162 Operating Systems and Systems Programming Lecture 10


Lottery Scheduling

• Yet another alternative: Lottery Scheduling
  – Give each job some number of lottery tickets
  – On each time slice, randomly pick a winning ticket (see the sketch below)
    » NOTE: Not a "real" random number generator; instead pseudo-random number generators can make sure that every ticket is picked once before repeating!
  – On average, CPU time is proportional to number of tickets given to each job
• How to assign tickets?
  – To help with responsiveness, give short running jobs more tickets, long running jobs get fewer tickets
  – To avoid starvation, every job gets at least one ticket (everyone makes progress)
• Advantage over strict priority scheduling: behaves gracefully as load changes
  – Adding or deleting a job affects all jobs proportionally, independent of how many tickets each job possesses
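A minimal sketch of the ticket draw described above. The ticket counts are made up, and rand() merely stands in for whichever pseudo-random generator a real system would use.

/* Lottery scheduling ticket draw (illustrative sketch).
 * Pick a random ticket in [0, total); walk the job list to find the winner. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct job { const char *name; int tickets; };

static int pick_winner(const struct job *jobs, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) total += jobs[i].tickets;

    int winning_ticket = rand() % total;      /* stand-in PRNG */
    for (int i = 0; i < n; i++) {
        if (winning_ticket < jobs[i].tickets) return i;
        winning_ticket -= jobs[i].tickets;
    }
    return n - 1;   /* not reached */
}

int main(void) {
    /* Made-up ticket assignment: short job gets 10 tickets, long job gets 1. */
    struct job jobs[] = { {"short", 10}, {"long", 1} };
    int wins[2] = {0, 0};

    srand((unsigned)time(NULL));
    for (int slice = 0; slice < 10000; slice++)
        wins[pick_winner(jobs, 2)]++;

    /* Expect roughly 10/11 vs 1/11 of the time slices. */
    printf("short: %d slices, long: %d slices\n", wins[0], wins[1]);
    return 0;
}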

Page 11: CS162 Operating Systems and Systems Programming Lecture 10


Lottery Scheduling Example (Cont.)

• Lottery Scheduling Example
  – Assume short jobs get 10 tickets, long jobs get 1 ticket (shares checked by the sketch below)

    # short jobs /    % of CPU each     % of CPU each
    # long jobs       short job gets    long job gets
    1/1               91%               9%
    0/2               N/A               50%
    2/0               50%               N/A
    10/1              9.9%              0.99%
    1/10              50%               5%

  – What if too many short jobs to give reasonable response time?
    » If load average is 100, hard to make progress
    » One approach: log some user out
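The percentages in the table follow directly from ticket counts: each job's share is its tickets divided by the total. A small check, assuming 10 tickets per short job and 1 per long job as stated above:

/* Reproduce the CPU-share table: each job's share = its tickets / total tickets. */
#include <stdio.h>

static void shares(int n_short, int n_long) {
    double total = 10.0 * n_short + 1.0 * n_long;  /* 10 tickets short, 1 long */
    printf("%d/%d: ", n_short, n_long);
    if (n_short) printf("each short gets %5.2f%%  ", 100.0 * 10.0 / total);
    if (n_long)  printf("each long gets %5.2f%%",   100.0 *  1.0 / total);
    printf("\n");
}

int main(void) {
    shares(1, 1);    /* 91%  / 9%    */
    shares(0, 2);    /* N/A  / 50%   */
    shares(2, 0);    /* 50%  / N/A   */
    shares(10, 1);   /* 9.9% / 0.99% */
    shares(1, 10);   /* 50%  / 5%    */
    return 0;
}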

Page 12: CS162 Operating Systems and Systems Programming Lecture 10


How to Evaluate a Scheduling Algorithm?
• Deterministic modeling
  – Takes a predetermined workload and computes the performance of each algorithm for that workload
• Queueing models
  – Mathematical approach for handling stochastic workloads
• Implementation/Simulation:
  – Build system which allows actual algorithms to be run against actual data – most flexible/general

Page 13: CS162 Operating Systems and Systems Programming Lecture 10


How to Handle Simultaneous: Mix of Diff Types of Apps?

• Consider mix of interactive and high throughput apps:
  – How to best schedule them?
  – How to recognize one from the other?
    » Do you trust app to say that it is "interactive"?
  – Should you schedule the set of apps identically on servers, workstations, pads, and cellphones?
• For instance, is Burst Time (observed) useful to decide which application gets CPU time?
  – Short Bursts ⇒ Interactivity ⇒ High Priority?
• Assumptions encoded into many schedulers:
  – Apps that sleep a lot and have short bursts must be interactive apps – they should get high priority
  – Apps that compute a lot should get low(er?) priority, since they won't notice intermittent bursts from interactive apps
• Hard to characterize apps:
  – What about apps that sleep for a long time, but then compute for a long time?
  – Or, what about apps that must run under all circumstances (say periodically)?

Page 14: CS162 Operating Systems and Systems Programming Lecture 10


What if we Knew the Future?
• Could we always mirror best FCFS?
• Shortest Job First (SJF):
  – Run whatever job has least amount of computation to do
  – Sometimes called "Shortest Time to Completion First" (STCF)
• Shortest Remaining Time First (SRTF):
  – Preemptive version of SJF: if job arrives and has a shorter time to completion than the remaining time on the current job, immediately preempt CPU (selection rule sketched below)
  – Sometimes called "Shortest Remaining Time to Completion First" (SRTCF)
• These can be applied to whole program or current CPU burst
  – Idea is to get short jobs out of the system
  – Big effect on short jobs, only small effect on long ones
  – Result is better average response time
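The SRTF selection rule itself is tiny; here is a minimal sketch (job names and remaining times are made up). A real scheduler would re-run this selection whenever a job arrives or a burst completes.

/* SRTF selection rule (illustrative sketch): always run the ready job with
 * the least remaining computation. */
#include <stdio.h>

struct job { const char *name; double remaining; };

static struct job *srtf_pick(struct job *ready, int n) {
    struct job *best = NULL;
    for (int i = 0; i < n; i++)
        if (ready[i].remaining > 0 &&
            (best == NULL || ready[i].remaining < best->remaining))
            best = &ready[i];
    return best;   /* NULL means idle */
}

int main(void) {
    struct job ready[] = { {"A", 50.0}, {"B", 8.0}, {"C", 0.0} };
    struct job *next = srtf_pick(ready, 3);
    printf("run %s\n", next ? next->name : "idle");   /* prints B */
    return 0;
}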

Page 15: CS162 Operating Systems and Systems Programming Lecture 10


Discussion

• SJF/SRTF are the best you can do at minimizing average response time
  – Provably optimal (SJF among non-preemptive, SRTF among preemptive)
  – Since SRTF is always at least as good as SJF, focus on SRTF
• Comparison of SRTF with FCFS
  – What if all jobs the same length?
    » SRTF becomes the same as FCFS (i.e. FCFS is best can do if all jobs the same length)
  – What if jobs have varying length?
    » SRTF: short jobs not stuck behind long ones

Page 16: CS162 Operating Systems and Systems Programming Lecture 10


Example to illustrate benefits of SRTF

• Three jobs:
  – A, B: both CPU bound, run for a week
    C: I/O bound, loop 1ms CPU, 9ms disk I/O
  – If only one at a time, C uses 90% of the disk, A or B could use 100% of the CPU
• With FCFS:
  – Once A or B get in, keep CPU for two weeks
• What about RR or SRTF?
  – Easier to see with a timeline

[Figure: FCFS timeline – A or B holds the CPU, so C and C's I/O run only in rare, brief bursts]

Page 17: CS162 Operating Systems and Systems Programming Lecture 10


SRTF Example continued:

[Figure: timelines for the three-job mix under different schedulers]
  – RR, 100ms time slice: C A B C …  ⇒  Disk Utilization: 9/201 ~ 4.5%
  – RR, 1ms time slice: C A B A B … C  ⇒  Disk Utilization: ~90%, but lots of wakeups!
  – SRTF: C runs as soon as each of its I/Os completes, A fills the gaps  ⇒  Disk Utilization: 90%

Page 18: CS162 Operating Systems and Systems Programming Lecture 10


SRTF Further Discussion

• Starvation
  – SRTF can lead to starvation if many small jobs!
  – Large jobs never get to run
• Somehow need to predict future
  – How can we do this?
  – Some systems ask the user
    » When you submit a job, have to say how long it will take
    » To stop cheating, system kills job if it takes too long
  – But: hard to predict job's runtime even for non-malicious users
• Bottom line, can't really know how long job will take
  – However, can use SRTF as a yardstick for measuring other policies
  – Optimal, so can't do any better
• SRTF Pros & Cons
  – Optimal (average response time) (+)
  – Hard to predict future (-)
  – Unfair (-)

Page 19: CS162 Operating Systems and Systems Programming Lecture 10


Predicting the Length of the Next CPU Burst

• Adaptive: Changing policy based on past behavior
  – CPU scheduling, in virtual memory, in file systems, etc.
  – Works because programs have predictable behavior
    » If program was I/O bound in past, likely in future
    » If computer behavior were random, wouldn't help
• Example: SRTF with estimated burst length
  – Use an estimator function on previous bursts:
    Let t_{n-1}, t_{n-2}, t_{n-3}, etc. be previous CPU burst lengths.
    Estimate next burst τ_n = f(t_{n-1}, t_{n-2}, t_{n-3}, …)
  – Function f could be one of many different time series estimation schemes (Kalman filters, etc.)
  – For instance, exponential averaging (sketched below):
    τ_n = α·t_{n-1} + (1-α)·τ_{n-1}   with 0 < α ≤ 1
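A minimal sketch of the exponential-averaging update rule above. The value of α, the initial guess, and the sample bursts are made-up numbers chosen just to show how the estimate tracks recent behavior.

/* Exponential-averaging burst prediction (illustrative sketch). */
#include <stdio.h>

#define ALPHA 0.5   /* weight on the most recent measured burst */

/* tau_n = ALPHA * t_(n-1) + (1 - ALPHA) * tau_(n-1) */
static double update_estimate(double prev_estimate, double measured_burst) {
    return ALPHA * measured_burst + (1.0 - ALPHA) * prev_estimate;
}

int main(void) {
    double tau = 10.0;                                /* initial guess (ms) */
    double bursts[] = {6.0, 4.0, 6.0, 13.0, 13.0, 13.0};

    for (int i = 0; i < 6; i++) {
        tau = update_estimate(tau, bursts[i]);
        printf("after burst %d (%.0f ms): estimate = %.2f ms\n",
               i + 1, bursts[i], tau);
    }
    return 0;
}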

Page 20: CS162 Operating Systems and Systems Programming Lecture 10


Multi-Level Feedback Scheduling

• Another method for exploiting past behavior (first use in CTSS)
  – Multiple queues, each with different priority
    » Higher priority queues often considered "foreground" tasks
  – Each queue has its own scheduling algorithm
    » e.g. foreground – RR, background – FCFS
    » Sometimes multiple RR priorities with quantum increasing exponentially (highest: 1ms, next: 2ms, next: 4ms, etc.)
• Adjust each job's priority as follows (details vary; a sketch follows below)
  – Job starts in highest priority queue
  – If timeout expires, drop one level
  – If timeout doesn't expire, push up one level (or to top)

[Figure: stack of priority queues – long-running compute tasks demoted to low priority]
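A minimal sketch of the promotion/demotion rule above. The number of levels and the quantum values are illustrative assumptions, and this version uses the "push back to top" variant of the promotion rule; real systems differ in the details.

/* Multi-level feedback: demote jobs that use their full quantum, promote
 * jobs that block early (illustrative sketch, not any particular kernel). */
#include <stdio.h>

#define NLEVELS 4                                     /* level 0 = highest priority */

static const int quantum_ms[NLEVELS] = {1, 2, 4, 8};  /* quantum grows with depth   */

struct job { const char *name; int level; };

/* Called after a job runs: used_full_quantum = 1 if its timer expired. */
static void adjust_level(struct job *j, int used_full_quantum) {
    if (used_full_quantum) {
        if (j->level < NLEVELS - 1) j->level++;       /* drop one level     */
    } else {
        j->level = 0;                                 /* push back to top   */
    }
    printf("%s now at level %d (quantum %d ms)\n",
           j->name, j->level, quantum_ms[j->level]);
}

int main(void) {
    struct job compute = {"compute-bound", 0};
    struct job editor  = {"interactive",   2};

    adjust_level(&compute, 1);   /* burned its whole slice: demoted      */
    adjust_level(&compute, 1);   /* ... keeps sinking                    */
    adjust_level(&editor,  0);   /* blocked early for I/O: back to top   */
    return 0;
}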

Page 21: CS162 Operating Systems and Systems Programming Lecture 10


Scheduling Details

• Result approximates SRTF:
  – CPU bound jobs drop like a rock
  – Short-running I/O bound jobs stay near top
• Scheduling must be done between the queues
  – Fixed priority scheduling:
    » serve all from highest priority, then next priority, etc.
  – Time slice:
    » each queue gets a certain amount of CPU time
    » e.g., 70% to highest, 20% next, 10% lowest

[Figure: same multi-level queue diagram – long-running compute tasks demoted to low priority]

Page 22: CS162 Operating Systems and Systems Programming Lecture 10


Scheduling Details

• Countermeasure: user action that can foil intent of the OS designers
  – For multilevel feedback, put in a bunch of meaningless I/O to keep job's priority high
  – Of course, if everyone did this, it wouldn't work!
• Example of Othello program:
  – Playing against competitor, so key was to do computing at higher priority than the competitors.
    » Put in printf's, ran much faster!

[Figure: same multi-level queue diagram – long-running compute tasks demoted to low priority]

Page 23: CS162 Operating Systems and Systems Programming Lecture 10


Case Study: Linux O(1) Scheduler

• Priority-based scheduler: 140 priorities
  – 40 for "user tasks" (set by "nice"), 100 for "Realtime/Kernel"
  – Lower priority value ⇒ higher priority (for nice values)
  – Highest priority value ⇒ Lower priority (for realtime values)
  – All algorithms O(1)
    » Timeslices/priorities/interactivity credits all computed when job finishes time slice
    » 140-bit bit mask indicates presence or absence of job at given priority level (sketched below)
• Two separate priority queues: "active" and "expired"
  – All tasks in the active queue use up their timeslices and get placed on the expired queue, after which the queues are swapped
• Timeslice depends on priority – linearly mapped onto timeslice range
  – Like a multi-level queue (one queue per priority) with different timeslice at each level
  – Execution split into "Timeslice Granularity" chunks – round robin through priority

[Figure: priority number line – Kernel/Realtime Tasks occupy 0–99, User Tasks occupy 100–139]
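To make the "140-bit bit mask" point concrete, here is an illustrative sketch of finding the highest-priority non-empty level with a small bitmap. This is not the kernel's code; it relies on the GCC/Clang __builtin_ctzll builtin and treats lower level numbers as higher priority, as on the slide.

/* Bitmap of non-empty priority levels (illustrative sketch of the O(1) idea). */
#include <stdio.h>

#define NUM_PRIO 140
#define BITS_PER_WORD 64
#define NUM_WORDS ((NUM_PRIO + BITS_PER_WORD - 1) / BITS_PER_WORD)

static unsigned long long bitmap[NUM_WORDS];   /* bit i set => queue i non-empty */

static void mark_runnable(int prio) { bitmap[prio / BITS_PER_WORD] |=  (1ULL << (prio % BITS_PER_WORD)); }
static void mark_empty(int prio)    { bitmap[prio / BITS_PER_WORD] &= ~(1ULL << (prio % BITS_PER_WORD)); }

/* Return the lowest-numbered (i.e. highest-priority) non-empty level, or -1. */
static int first_runnable_prio(void) {
    for (int w = 0; w < NUM_WORDS; w++)
        if (bitmap[w])
            return w * BITS_PER_WORD + __builtin_ctzll(bitmap[w]);
    return -1;
}

int main(void) {
    mark_runnable(120);   /* a user-level (nice) priority  */
    mark_runnable(5);     /* a realtime priority           */
    printf("next level to run: %d\n", first_runnable_prio());   /* prints 5   */
    mark_empty(5);
    printf("next level to run: %d\n", first_runnable_prio());   /* prints 120 */
    return 0;
}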

Page 24: CS162 Operating Systems and Systems Programming Lecture 10


O(1) Scheduler Continued
• Heuristics
  – User-task priority adjusted ±5 based on heuristics
    » p->sleep_avg = sleep_time - run_time
    » Higher sleep_avg ⇒ more I/O bound the task, more reward (and vice versa)
  – Interactive Credit
    » Earned when a task sleeps for a "long" time
    » Spent when a task runs for a "long" time
    » IC is used to provide hysteresis to avoid changing interactivity for temporary changes in behavior
  – However, "interactive tasks" get special dispensation
    » To try to maintain interactivity
    » Placed back into active queue, unless some other task has been starved for too long…
• Real-Time Tasks
  – Always preempt non-RT tasks
  – No dynamic adjustment of priorities
  – Scheduling schemes:
    » SCHED_FIFO: preempts other tasks, no timeslice limit
    » SCHED_RR: preempts normal tasks, RR scheduling amongst tasks of same priority

Page 25: CS162 Operating Systems and Systems Programming Lecture 10


Linux Completely Fair Scheduler (CFS)
• First appeared in 2.6.23, modified in 2.6.24
• "CFS doesn't track sleeping time and doesn't use heuristics to identify interactive tasks—it just makes sure every process gets a fair share of CPU within a set amount of time given the number of runnable processes on the CPU."
• Inspired by Networking "Fair Queueing"
  – Each process given their fair share of resources
  – Models an "ideal multitasking processor" in which N processes execute simultaneously as if they truly got 1/N of the processor
    » Tries to give each process an equal fraction of the processor
  – Priorities reflected by weights such that increasing a task's priority by 1 always gives the same fractional increase in CPU time – regardless of current priority (a simplified sketch follows)
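A simplified sketch of the "fair share" idea: charge each task virtual runtime inversely proportional to its weight and always run the task with the least. This is an assumption-laden toy, not the kernel's implementation (which keeps tasks in a red-black tree ordered by virtual runtime); the weights and the 10 ms slice here are made up.

/* CFS-style "pick the smallest virtual runtime" (illustrative sketch). */
#include <stdio.h>

struct task {
    const char *name;
    double weight;     /* higher weight = larger CPU share        */
    double vruntime;   /* weighted CPU time consumed so far       */
};

static struct task *pick_next(struct task *tasks, int n) {
    struct task *best = &tasks[0];
    for (int i = 1; i < n; i++)
        if (tasks[i].vruntime < best->vruntime)
            best = &tasks[i];
    return best;
}

int main(void) {
    struct task tasks[] = {
        {"editor",   2.0, 0.0},   /* higher weight: vruntime grows slowly  */
        {"compiler", 1.0, 0.0},   /* lower weight: vruntime grows quickly  */
    };
    double slice = 10.0;          /* ms of real CPU time per dispatch      */

    for (int step = 0; step < 6; step++) {
        struct task *t = pick_next(tasks, 2);
        t->vruntime += slice / t->weight;    /* charge weighted runtime */
        printf("run %-8s  vruntime now %.1f\n", t->name, t->vruntime);
    }
    /* Over many slices, "editor" ends up with about 2/3 of the CPU. */
    return 0;
}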

Page 26: CS162 Operating Systems and Systems Programming Lecture 10


Real-Time Scheduling (RTS)
• Efficiency is important but predictability is essential:
  – We need to predict with confidence worst case response times for systems
  – In RTS, performance guarantees are:
    » Task- and/or class-centric and often ensured a priori
  – In conventional systems, performance is:
    » System/throughput oriented with post-processing (… wait and see …)
  – Real-time is about enforcing predictability, and does not equal fast computing!!!
• Hard Real-Time
  – Attempt to meet all deadlines
  – EDF (Earliest Deadline First), LLF (Least Laxity First), RMS (Rate-Monotonic Scheduling), DM (Deadline Monotonic Scheduling)
• Soft Real-Time
  – Attempt to meet deadlines with high probability
  – Minimize miss ratio / maximize completion ratio (firm real-time)
  – Important for multimedia applications
  – CBS (Constant Bandwidth Server)

Page 27: CS162 Operating Systems and Systems Programming Lecture 10


Example: Workload Characteristics

• Tasks are preemptable, independent, with arbitrary arrival (= release) times
• Tasks have deadlines (D) and known computation times (C)
• Example Setup:
  [Figure: example task set showing each task's arrival time, computation time C, and deadline D]

Page 28: CS162 Operating Systems and Systems Programming Lecture 10


Example: Round-Robin Scheduling Doesn't Work

[Figure: timeline showing the example task set missing deadlines under round-robin]

Page 29: CS162 Operating Systems and Systems Programming Lecture 10


Earliest Deadline First (EDF)

• Tasks periodic with period P and computation C in each period: (P_i, C_i) for each task i
• Preemptive priority-based dynamic scheduling:
  – Each task is assigned a (current) priority based on how close the absolute deadline is (i.e. D_i^(t+1) = D_i^t + P_i for each task!)
  – The scheduler always schedules the active task with the closest absolute deadline
• Schedulable when
      ∑_{i=1..n} (C_i / P_i) ≤ 1
  (test sketched below)

[Figure: EDF schedule over time 0–15 for the task set T1 = (4,1), T2 = (5,2), T3 = (7,2)]
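The schedulability condition above can be checked mechanically. A minimal sketch using the task set from the figure (T1 = (4,1), T2 = (5,2), T3 = (7,2)); note this only evaluates the utilization bound, it is not an EDF scheduler.

/* EDF schedulability (utilization) test: sum(C_i / P_i) <= 1. */
#include <stdio.h>

struct rt_task { double period; double compute; };   /* (P_i, C_i) */

static int edf_schedulable(const struct rt_task *tasks, int n) {
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += tasks[i].compute / tasks[i].period;
    printf("total utilization = %.3f\n", u);   /* 1/4 + 2/5 + 2/7 = 0.936 */
    return u <= 1.0;
}

int main(void) {
    struct rt_task set[] = { {4, 1}, {5, 2}, {7, 2} };
    printf("schedulable under EDF? %s\n",
           edf_schedulable(set, 3) ? "yes" : "no");
    return 0;
}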

Page 30: CS162 Operating Systems and Systems Programming Lecture 10


Choosing the Right Scheduler

I Care About:                      Then Choose:
CPU Throughput                     FCFS
Avg. Response Time                 SRTF Approximation
I/O Throughput                     SRTF Approximation
Fairness (CPU Time)                Linux CFS
Fairness – Wait Time to Get CPU    Round Robin
Meeting Deadlines                  EDF
Favoring Important Tasks           Priority

Page 31: CS162 Operating Systems and Systems Programming Lecture 10


A Final Word On Scheduling
• When do the details of the scheduling policy and fairness really matter?
  – When there aren't enough resources to go around
• When should you simply buy a faster computer?
  – (Or network link, or expanded highway, or …)
  – One approach: Buy it when it will pay for itself in improved response time
    » Perhaps you're paying for worse response time in reduced productivity, customer angst, etc…
    » Might think that you should buy a faster X when X is utilized 100%, but usually, response time goes to infinity as utilization ⇒ 100%
• An interesting implication of this curve:
  – Most scheduling algorithms work fine in the "linear" portion of the load curve, fail otherwise
  – Argues for buying a faster X when hit "knee" of curve

[Figure: response time vs. utilization – response time blows up as utilization approaches 100%]

Page 32: CS162 Operating Systems and Systems Programming Lecture 10


Summary (1 of 2)
• Scheduling Goals:
  – Minimize Response Time (e.g. for human interaction)
  – Maximize Throughput (e.g. for large computations)
  – Fairness (e.g. Proper Sharing of Resources)
  – Predictability (e.g. Hard/Soft Realtime)
• Round-Robin Scheduling:
  – Give each thread a small amount of CPU time when it executes; cycle between all ready threads
  – Pros: Better for short jobs
• Shortest Job First (SJF)/Shortest Remaining Time First (SRTF):
  – Run whatever job has the least amount of computation to do/least remaining amount of computation to do
  – Pros: Optimal (average response time)
  – Cons: Hard to predict future, Unfair
• Multi-Level Feedback Scheduling:
  – Multiple queues of different priorities and scheduling algorithms
  – Automatic promotion/demotion of process priority in order to approximate SJF/SRTF

Page 33: CS162 Operating Systems and Systems Programming Lecture 10


Summary (2 of 2)
• Lottery Scheduling:
  – Give each thread a priority-dependent number of tokens (short tasks ⇒ more tokens)
• Linux CFS Scheduler: Fair fraction of CPU
  – Approximates an "ideal" multitasking processor
• Realtime Schedulers such as EDF
  – Guaranteed behavior by meeting deadlines
  – Realtime tasks defined by tuple of compute time and period
  – Schedulability test: is it possible to meet deadlines with proposed set of processes?