CPU Scheduling
ICS332 — Operating Systems
Henri Casanova ([email protected])
Spring 2018
Henri Casanova ([email protected]) CPU Scheduling
CPU Scheduling

CPU Scheduling: the process by which the OS decides which processes/threads should run (and for how long)
  Necessary in a multi-programming environment
  Reminder: only Ready processes can be scheduled
  In these lecture notes I will use the word process, but it should be understood as "processes and threads"

The policy: the scheduling strategy
  A usual broad goal is to improve system performance and productivity, including:
    Maximize CPU utilization (ideally, the CPU is never idle when there is work to do)
    Make processes "happy"

The mechanism: the dispatcher
  The OS component that knows how to switch between processes on the CPU (implements the context-switch mechanism)
  It must be fast (i.e., low dispatcher latency)

There are strong theoretical underpinnings and a huge scientific literature on the topic, but we will focus on pragmatic issues
Long-term/Short-term Scheduling

Long-Term Scheduler
  Selects processes from a submitted pool of processes and loads them into memory
  Orchestrates process executions in the long term (minutes, hours, ...)
  Executed every [10 minutes, or hour, ...]
  Can construct complex schedules
  Uses sophisticated decision algorithms (take my graduate ICS632 course)

Short-Term Scheduler
  Selects already-in-memory processes to run
  Orchestrates process executions in the very short term (the next 1000ths of seconds)
  Executed every [10's of milliseconds, ...]
  Cannot make complex decisions
  Uses simple decision algorithms (these lecture notes)

In OSes: the Short-Term Scheduler (or CPU Scheduler)
Long-Term Scheduling: done by non-OS software (Job Schedulers)
CPU or I/O Burst Cycles / CPU- or I/O-bound processes
Most processes alternate between CPU and I/O activities
These activities are called CPU bursts and I/O bursts
Consider a piece of a program that reads a line from a text file, i.e.:

  ...
  lines = new array()
  fileName = "aTextFile"
  inputFile = File.open(fileName)
  line = inputFile.readline()
  inputFile.close()
  lines.append(line)
  ...

Timeline:
  lines = new array()                |
  fileName = "aTextFile"             |  CPU burst
  inputFile = File.open(fileName)    |
  line = inputFile.readline()        |  I/O burst (waiting for I/O)
  inputFile.close()                  |
  lines.append(line)                 |  CPU burst
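The pseudocode above can be run as a short Python sketch. The file is created first so the snippet is self-contained; the file name is the one from the slide, and its contents are made up for illustration.

```python
# Python version of the slide's pseudocode: two CPU bursts separated by
# an I/O burst (the blocking readline() call).
file_name = "aTextFile"
with open(file_name, "w") as f:       # set-up step so the sketch runs standalone
    f.write("hello\n")

lines = []                            # CPU burst: set up data structures
input_file = open(file_name)          # I/O burst begins: open() system call
line = input_file.readline()          # process blocks, waiting for the read
input_file.close()
lines.append(line)                    # CPU burst: back to computing
```

While the process is blocked in readline(), it is in the Waiting state and the CPU scheduler can run another Ready process.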
I/O-bound process: a process that is mostly waiting for I/O, with mostly (possibly many) short CPU bursts (e.g., /bin/cp)
CPU-bound process: a process that is mostly using the CPU, with mostly (possibly many) short I/O bursts (e.g., a 3-D scene renderer)

One of the challenges of CPU scheduling is that the process population is very diverse:
  Some processes are I/O-bound, some are CPU-bound, and some are somewhere in between
  Some processes are I/O-bound for a while, and then CPU-bound
The CPU Scheduler

Whenever the CPU becomes idle, a Ready process must be selected for execution
Remember that the OS keeps track of process states in PCBs

Two classes of scheduling approaches:
  Non-preemptive scheduling: a process holds the CPU until it willingly gives it up
    "Old" OSes: Windows 3.*; Mac OS 9 (→ 2001)
  Preemptive scheduling: a process will be preempted even when it would have happily continued executing
    Typically after some timer expires
    "All" "recent" OSes: Windows 95 and later; Mac OS X; all Linux
Scheduling Decision Points
When do Scheduling Decisions happen?
(#1) A process goes from RUNNING to WAITING
e.g., waiting for I/O to complete
(#2) A process goes from RUNNING to READY
e.g., when an interrupt occurs (such as a timer going off)
(#3) A process goes from WAITING to READY
e.g., an I/O operation has completed
(#4) A process goes from RUNNING to TERMINATED
(#5) A process goes from NEW to READY
Non-preemptive Scheduling: Only (#1) and (#4)
Preemptive Scheduling: (#1), (#2), (#3), (#4), (#5)
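As an illustration only (not an actual kernel interface), the five decision points can be tabulated in a few lines of Python, showing which transitions trigger a scheduling decision under each policy:

```python
# Toy model of the decision points above: each numbered transition maps
# to a (from_state, to_state) pair.
TRANSITIONS = {
    1: ("RUNNING", "WAITING"),      # e.g., blocking on I/O
    2: ("RUNNING", "READY"),        # e.g., timer interrupt
    3: ("WAITING", "READY"),        # e.g., I/O completion
    4: ("RUNNING", "TERMINATED"),
    5: ("NEW", "READY"),
}

def schedules(transition: int, preemptive: bool) -> bool:
    """True if this transition triggers a scheduling decision."""
    if preemptive:
        return True                 # preemptive: all five decision points
    return transition in (1, 4)     # non-preemptive: only #1 and #4

print([t for t in TRANSITIONS if schedules(t, preemptive=False)])  # prints [1, 4]
```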
Preemptive Scheduling

Preemptive Scheduling is clearly better than Non-Preemptive Scheduling:
  A "while (1) {}" process won't lock up the machine
  ⇒ The OS remains in control

But Preemptive Scheduling creates synchronization issues:
  e.g., a process is doing something critical and gets preempted in the middle of it...
  What if a process is preempted in the middle of a system call during which the kernel is updating its own data structures?
  These are "work is neither to-do nor done, but halfway done" problems
  See the upcoming Synchronization module

For the moment, let's ignore these issues
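A toy Python illustration of the "halfway done" problem: a counter increment is really a read-modify-write sequence, and a preemption between the read and the write loses an update. The bad interleaving is written out by hand here rather than produced by a real scheduler.

```python
# Two "processes" each try to increment a shared counter once.
counter = 0

# Process A reads the counter ...
a_local = counter            # a_local = 0
# ... and is preempted here. Process B runs a full increment:
counter = counter + 1        # counter = 1
# Process A resumes and writes back its stale value:
counter = a_local + 1        # counter = 1: B's increment is lost
```

Two increments ran, but the counter ends at 1, not 2. This is exactly the kind of hazard the Synchronization module addresses.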
Scheduling Queues

The OS maintains queues in which processes are placed:
  The Ready Queue contains processes that are in the Ready state
  Device Queues contain processes waiting for particular devices

  Ready Queue:         head → PCB7 → PCB4 → ∅
  Drive Unit 0 Queue:  head → PCB2 → PCB5 → PCB3 → ∅
  Drive Unit 1 Queue:  head → ∅ (empty)
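A minimal Python sketch of these queues, assuming FIFO order and using simple labels in place of real PCBs (the queue names and contents are the ones from the diagram):

```python
from collections import deque

# Each queue holds PCBs (here just labels). The dispatcher pops from the
# head of the Ready Queue; a blocking I/O request moves a PCB to a
# device queue; I/O completion moves it back.
ready_queue = deque(["PCB7", "PCB4"])
device_queues = {"drive0": deque(["PCB2", "PCB5", "PCB3"]),
                 "drive1": deque()}

def dispatch():
    """Pick the next Ready process (FIFO here, for illustration)."""
    return ready_queue.popleft()

def block_on(device, pcb):
    """The running process requests I/O: it waits in a device queue."""
    device_queues[device].append(pcb)

def io_complete(device):
    """I/O done: the waiting process becomes Ready again."""
    ready_queue.append(device_queues[device].popleft())

running = dispatch()          # PCB7 gets the CPU
block_on("drive1", running)   # PCB7 blocks on drive unit 1
io_complete("drive1")         # interrupt: PCB7 is Ready again
```

Real kernels keep these as linked lists of PCBs and use far more elaborate dispatch rules, but the queue movements are exactly these.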
Scheduling and Queues

  Process Creation → Ready Queue → CPU → Process Completion
  From the CPU, a process returns to the Ready Queue when its time slice expires, or when it makes one of many system calls
  An I/O request moves it to an I/O queue; when the I/O completes, it goes back to the Ready Queue
Scheduling Objectives

We have the mechanisms: queues, dispatcher, context-switching
But what should be the goal of a scheduling policy?

There are many possible and conflicting objectives:
  Maximize CPU Utilization: fraction of the time the CPU is not idle
  Maximize Throughput: count of "processes" terminated per time unit
  Minimize Turnaround Time: TT = time from process arrival to process completion
  Minimize Response Time: RT = time from process arrival until its "first execution" on the CPU
  Minimize Waiting Time: WT = time that a process spends in the Ready state

What should be optimized? Averages, maxima, variances?
A lot of theory here, which we won't get into
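These definitions can be made concrete with a small Python sketch that computes TT, RT, and WT for a non-preemptive FIFO schedule on one CPU. With no I/O and no preemption, a process waits only before its first run, so WT = RT and TT = WT + burst length.

```python
# Each process is an (arrival_time, cpu_burst) pair; processes are
# served in list order (FIFO), each running to completion.
def fifo_metrics(processes):
    clock, metrics = 0, []
    for arrival, burst in processes:
        start = max(clock, arrival)       # CPU may sit idle until arrival
        completion = start + burst
        metrics.append({"TT": completion - arrival,   # turnaround
                        "RT": start - arrival,        # response
                        "WT": start - arrival})       # waiting (= RT here)
        clock = completion
    return metrics

m = fifo_metrics([(0, 10), (2, 3)])   # P1 arrives at t=0, P2 at t=2
# P2 only starts at t=10, so its TT is 11 and its RT/WT are 8
```

Even this tiny example shows the conflict: running short P2 first would cut its waiting time while barely delaying P1.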
Scheduling Objectives
We have the mechanisms: Queues, Dispatcher, Context-switching
But what should be the goal of a scheduling policy?
There are many possible and conflicting objectives:
Maximize CPU UtilizationFraction of the time the CPU is not idle
Maximize ThroughputCount of “processes” terminated per time unit
Minimize Turnaround TimeTT = Time from process arrival to process completion
Minimize Response TimeRT = Time from process arrival until its “first execution” on the CPU
Minimize Waiting TimeWT = Time that a process spends in the Ready state
What should be optimized? Averages, Maxima, Variances?
A lot of theory here, that we won’t get into
Henri Casanova ([email protected]) CPU Scheduling
Scheduling Objectives
We have the mechanisms: Queues, Dispatcher, Context-switching
But what should be the goal of a scheduling policy?
There are many possible and conflicting objectives:
Maximize CPU UtilizationFraction of the time the CPU is not idle
Maximize ThroughputCount of “processes” terminated per time unit
Minimize Turnaround TimeTT = Time from process arrival to process completion
Minimize Response TimeRT = Time from process arrival until its “first execution” on the CPU
Minimize Waiting TimeWT = Time that a process spends in the Ready state
What should be optimized? Averages, Maxima, Variances?
A lot of theory here, that we won’t get into
Henri Casanova ([email protected]) CPU Scheduling
Scheduling Objectives
We have the mechanisms: Queues, Dispatcher, Context-switching
But what should be the goal of a scheduling policy?
There are many possible and conflicting objectives:
Maximize CPU UtilizationFraction of the time the CPU is not idle
Maximize ThroughputCount of “processes” terminated per time unit
Minimize Turnaround TimeTT = Time from process arrival to process completion
Minimize Response TimeRT = Time from process arrival until its “first execution” on the CPU
Minimize Waiting TimeWT = Time that a process spends in the Ready state
What should be optimized? Averages, Maxima, Variances?
A lot of theory here, that we won’t get into
Henri Casanova ([email protected]) CPU Scheduling
Scheduling Objectives
We have the mechanisms: Queues, Dispatcher, Context-switching
But what should be the goal of a scheduling policy?
There are many possible and conflicting objectives:
Maximize CPU UtilizationFraction of the time the CPU is not idle
Maximize ThroughputCount of “processes” terminated per time unit
Minimize Turnaround TimeTT = Time from process arrival to process completion
Minimize Response TimeRT = Time from process arrival until its “first execution” on the CPU
Minimize Waiting TimeWT = Time that a process spends in the Ready state
What should be optimized? Averages, Maxima, Variances?
A lot of theory here, that we won’t get into
Henri Casanova ([email protected]) CPU Scheduling
Scheduling Objectives
We have the mechanisms: Queues, Dispatcher, Context-switching
But what should be the goal of a scheduling policy?
There are many possible and conflicting objectives:
Maximize CPU UtilizationFraction of the time the CPU is not idle
Maximize ThroughputCount of “processes” terminated per time unit
Minimize Turnaround TimeTT = Time from process arrival to process completion
Minimize Response TimeRT = Time from process arrival until its “first execution” on the CPU
Minimize Waiting TimeWT = Time that a process spends in the Ready state
What should be optimized? Averages, Maxima, Variances?
A lot of theory here, that we won’t get into
Henri Casanova ([email protected]) CPU Scheduling
Scheduling Objectives
We have the mechanisms: Queues, Dispatcher, Context-switching
But what should be the goal of a scheduling policy?
There are many possible and conflicting objectives:
Maximize CPU UtilizationFraction of the time the CPU is not idle
Maximize ThroughputCount of “processes” terminated per time unit
Minimize Turnaround TimeTT = Time from process arrival to process completion
Minimize Response TimeRT = Time from process arrival until its “first execution” on the CPU
Minimize Waiting TimeWT = Time that a process spends in the Ready state
What should be optimized? Averages, Maxima, Variances?
A lot of theory here, that we won’t get into
Henri Casanova ([email protected]) CPU Scheduling
Scheduling Objectives
We have the mechanisms: Queues, Dispatcher, Context-switching
But what should be the goal of a scheduling policy?
There are many possible and conflicting objectives:
Maximize CPU UtilizationFraction of the time the CPU is not idle
Maximize ThroughputCount of “processes” terminated per time unit
Minimize Turnaround TimeTT = Time from process arrival to process completion
Minimize Response TimeRT = Time from process arrival until its “first execution” on the CPU
Minimize Waiting TimeWT = Time that a process spends in the Ready state
What should be optimized? Averages
, Maxima, Variances?
A lot of theory here, that we won’t get into
Henri Casanova ([email protected]) CPU Scheduling
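The per-process metrics above are simple functions of a process’s arrival, first-run, and completion times. A minimal sketch (the function and variable names are mine, not from the slides):

```python
# Per-process scheduling metrics, in abstract time units (tu).
def turnaround_time(arrival, completion):
    # TT: time from process arrival to process completion
    return completion - arrival

def response_time(arrival, first_run):
    # RT: time from process arrival until its first execution on the CPU
    return first_run - arrival

def waiting_time(arrival, completion, burst):
    # WT: time spent in the Ready state = turnaround time minus CPU time used
    return (completion - arrival) - burst
```

Note that for a fixed set of processes, average waiting time and average turnaround time differ per process only by the (fixed) burst time, so minimizing one minimizes the other.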
Scheduling Objectives (2)
The challenge is that most objectives on the previous slides conflict with each other

For instance: Having frequent context switches is good for response time but is bad for throughput

Because context-switching is pure overhead
At the end of the day you don’t want your CPU to have spent 20% of its time running the context-switching code in the kernel!

One thing is certain: the scheduling algorithms have to be fast

Not worth spending a lot of cycles deciding on which process to run next, since in the meantime nothing’s running!
Scheduling decisions have to be lightning fast

Let’s see a few standard algorithms...
Henri Casanova ([email protected]) CPU Scheduling
Working Assumptions
1 CPU, 1 core
Execution of 1 instruction takes 1 time unit
All processes consist of 1 CPU burst (we’ll change this later)
⇒ CPU Utilization is thus always maximized
Henri Casanova ([email protected]) CPU Scheduling
Non-Preemptive FCFS
FCFS: First Come - First Serve
Straightforward to implement: make the Ready Queue a FIFO
Example:
Process  Burst time  Arrival time
P1       24          0
P2       3           1
P3       3           2
Henri Casanova ([email protected]) CPU Scheduling
Gantt Chart vs ASCII Art
Process  Burst time  Arrival time
P1       24          0
P2       3           1
P3       3           2

Gantt Chart

|   P1   | P2 | P3 |
0        24   27   30
(arrivals: P1 at t=0, P2 at t=1, P3 at t=2)

CPU Utilization: 100%
Throughput: 3 processes / 30 tu = 0.1 processes per tu

Average Turnaround Time: ((24−0) + (27−1) + (30−2)) / 3 = 26 tu

Average Response Time: ((0−0) + (24−1) + (27−2)) / 3 = 16 tu

Average Waiting Time: ((24−0−24) + (27−1−3) + (30−2−3)) / 3 = 16 tu

ASCII Art
         0         1         2         3
Time     01234567890123456789012345678901
CPU      111111111111111111111111222333xx

x: CPU idle
#: CPU running P#

Events:
t=0  CPU idle. P1 arrival. P1 is scheduled. Ready Queue (RQ) is empty: RQ = ()
t=1  CPU running P1 (23 t.u. left). P2 arrival. RQ = (P2)
t=2  CPU running P1 (22 t.u. left). P3 arrival. RQ = (P2, P3)
t=24 P1 complete. P2 is scheduled. RQ = (P3)
t=27 P2 complete. P3 is scheduled. RQ = ()
t=30 P3 complete. RQ = ()
End (of execution)
Henri Casanova ([email protected]) CPU Scheduling
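The whole Gantt-chart exercise can also be checked mechanically. A small simulation sketch of non-preemptive FCFS under this section’s working assumptions (1 CPU, one CPU burst per process); the function and field names are my own:

```python
def fcfs(procs):
    """Non-preemptive FCFS. procs: (name, burst, arrival) tuples, sorted by arrival."""
    t = 0
    stats = {}
    for name, burst, arrival in procs:
        start = max(t, arrival)                  # CPU may sit idle until arrival
        t = start + burst                        # the burst runs to completion
        stats[name] = {"turnaround": t - arrival,
                       "response": start - arrival,
                       "waiting": (t - arrival) - burst}
    return stats

stats = fcfs([("P1", 24, 0), ("P2", 3, 1), ("P3", 3, 2)])

def avg(metric):
    return sum(s[metric] for s in stats.values()) / len(stats)
```

Running this on the example reproduces the numbers above: avg("turnaround") is 26 tu, and avg("response") and avg("waiting") are both 16 tu.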
Evaluating Algorithms
We have made one realization of one experiment/simulation for one scheduling algorithm

One example is sometimes enough to identify potential weaknesses

For Non-Preemptive FCFS, the Turnaround Time can be very large because the Ready Queue is FIFO

This is just like buying one apple at the supermarket and being stuck behind somebody with a full cart who arrived 1 second before you at the only checkout lane

As typical in math/algorithms:
It’s easy-ish to find a bad counterexample for an algorithm
You just need to find one and write it up
It’s hard to prove (i.e., have a theorem) something that quantifies how good or how “not too bad” an algorithm is
You have to consider all possible cases and thus do formal reasoning

And yes, a lot of Computer Science is a branch of Mathematics
Insert rant on “CS majors don’t know enough math” here
From professors and employers
Henri Casanova ([email protected]) CPU Scheduling
Non-Preemptive FCFS: Convoy Effect
Non-Preemptive FCFS suffers from the Convoy Effect problem

Forget our “a single CPU burst” assumption, and think about this:

1 CPU-bound process with only a few I/O bursts
n I/O-bound processes with frequent short CPU bursts

The Convoy Effect:

All I/O-bound processes block on I/O
The CPU-bound process gets the CPU for a long time
All I/O devices do their work and the I/O-bound processes become Ready
An I/O-bound process now needs to do a tiny bit of work on the CPU, which is hogged by the CPU-bound process
Consequence: I/O resources sit idle even though there are many processes that could use them if they could only get the CPU for a few cycles
The “stuck at the supermarket” example, with a “and therefore you can’t go back to your real work” consequence

Bottom line: Non-Preemptive FCFS is just not a good idea
Henri Casanova ([email protected]) CPU Scheduling
SJF (Shortest Job First)
Shortest Job First: When a scheduling decision needs to be made, always pick the process with the shortest CPU burst

In some cases some algorithms can be proven optimal for a metric

SJF is provably optimal for average turnaround time!

In the theoretical literature, look for SRPT (Shortest Remaining Processing Time) or STCF (Shortest Time-to-Completion First) or Preemptive Shortest Job First (PSJF)
Known (proven) to be optimal both in non-preemptive and preemptive modes!

This could be implemented in supermarkets!

But as a customer you likely don’t care for “best on average” if it’s “bad for you right now”

This sounds pretty good, let’s look at how it works on an example...
Henri Casanova ([email protected]) CPU Scheduling
Non-Preemptive SJF: Example
Process  Burst time  Arrival time
P1       10          0
P2       6           2
P3       7           4
P4       2           5

|   P1   | P4 |  P2  |   P3   |
0        10   12     18       25

CPU Utilization: 100%

Average Turnaround Time: ((10−0) + (12−5) + (18−2) + (25−4)) / 4 = 13.5 tu

Average Response Time: ((0−0) + (10−5) + (12−2) + (18−4)) / 4 = 7.25 tu

Average Waiting Time: ((10−0−10) + (12−5−2) + (18−2−6) + (25−4−7)) / 4 = 7.25 tu

Compare to NP-FCFS: P1, then P2 at 10, then P3 at 16, then P4 at 23, end at 25

CPU: 100%
ATT: ((10−0) + (16−2) + (23−4) + (25−5)) / 4 = 15.75 tu (worse)
ART: ((0−0) + (10−2) + (16−4) + (23−5)) / 4 = 9.5 tu (worse)
AWT: ((10−0−10) + (16−2−6) + (23−4−7) + (25−5−2)) / 4 = 9.5 tu (worse)
Henri Casanova ([email protected]) CPU Scheduling
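The schedule above can be generated programmatically: whenever the CPU frees up, scan the processes that have arrived and pick the shortest burst. A sketch (function and field names are mine):

```python
def sjf_nonpreemptive(procs):
    """Non-preemptive SJF. procs: (name, burst, arrival) tuples."""
    remaining = list(procs)
    t = 0
    stats = {}
    while remaining:
        ready = [p for p in remaining if p[2] <= t]
        if not ready:                             # nothing arrived: idle until next arrival
            t = min(p[2] for p in remaining)
            continue
        name, burst, arrival = min(ready, key=lambda p: p[1])  # shortest burst wins
        start = t
        t += burst                                # runs to completion (non-preemptive)
        remaining.remove((name, burst, arrival))
        stats[name] = {"turnaround": t - arrival,
                       "response": start - arrival,
                       "waiting": (t - arrival) - burst}
    return stats

stats = sjf_nonpreemptive([("P1", 10, 0), ("P2", 6, 2), ("P3", 7, 4), ("P4", 2, 5)])
```

Averaging the per-process values reproduces the slide: ATT = 13.5 tu, ART = 7.25 tu, AWT = 7.25 tu.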
Preemptive SJF: Example
Process  Burst time  Arrival time
P1       10          0
P2       6           2
P3       7           4
P4       2           5

Preemptive means “Any event can cause rescheduling”

t=0:  CPU idle; P1 arrives; P1 is scheduled for 10 tu unless new event; RQ = ∅
t=2:  P2 arrives. Reschedule! RQ = {(P1, 8 tu left), (P2, 6 tu left)} ⇒ P2 is scheduled
t=4:  P3 arrives. Reschedule! RQ = {(P1, 8), (P2, 4), (P3, 7)} ⇒ P2 is scheduled
t=5:  P4 arrives. Reschedule! RQ = {(P1, 8), (P2, 3), (P3, 7), (P4, 2)} ⇒ P4 is scheduled
t=7:  P4 terminates. Reschedule! RQ = {(P1, 8), (P2, 3), (P3, 7)} ⇒ P2 is scheduled
t=10: P2 terminates. Reschedule! RQ = {(P1, 8), (P3, 7)} ⇒ P3 is scheduled
t=17: P3 terminates. Reschedule! RQ = {(P1, 8)} ⇒ P1 is scheduled
t=25: P1 terminates. Reschedule! RQ = {} ⇒ End of experiment

Timeline 01234 56789 01234 56789 01234 56789
CPU      11222 44222 33333 33111 11111 xxxxx

(P1 completed at 25; P2 at 10; P3 at 17; P4 at 7)

CPU: 100%
ATT: ((25−0) + (10−2) + (17−4) + (7−5)) / 4 = 12.0 tu
ART: ((0−0) + (2−2) + (10−4) + (5−5)) / 4 = 1.5 tu
AWT: ((25−0−10) + (10−2−6) + (17−4−7) + (7−5−2)) / 4 = 5.75 tu
Henri Casanova ([email protected]) CPU Scheduling
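The event trace above can be reproduced with a unit-time simulation of preemptive SJF (i.e., SRTF): at every time unit, run the ready process with the least remaining time. A sketch under this section’s one-instruction-per-time-unit assumption; the names are my own:

```python
def srtf(procs):
    """Preemptive SJF / Shortest Remaining Time First. procs: (name, burst, arrival)."""
    left = {name: burst for name, burst, _ in procs}      # remaining time per process
    arrival = {name: arr for name, _, arr in procs}
    first_run, completion = {}, {}
    t = 0
    while left:
        ready = [n for n in left if arrival[n] <= t]
        if not ready:                             # nothing arrived yet: idle one tu
            t += 1
            continue
        n = min(ready, key=lambda p: left[p])     # least remaining time wins
        first_run.setdefault(n, t)                # record the first execution
        left[n] -= 1                              # run n for one time unit
        t += 1
        if left[n] == 0:
            completion[n] = t
            del left[n]
    return {name: {"turnaround": completion[name] - arrival[name],
                   "response": first_run[name] - arrival[name],
                   "waiting": completion[name] - arrival[name] - burst}
            for name, burst, _ in procs}

stats = srtf([("P1", 10, 0), ("P2", 6, 2), ("P3", 7, 4), ("P4", 2, 5)])
```

Averaging the per-process values reproduces the slide: ATT = 12.0 tu, ART = 1.5 tu, AWT = 5.75 tu.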
Preemptive SJF: Example
Process Burst time Arrival timeP1 10 0P2 6 2P3 7 4P4 2 5
Preemptive means “Any event can cause rescheduling”
t=0: CPU idle; P1 arrives; P1 is scheduled for 10 tu unless new event; RQ = ∅t=2: P2 arrives. Reschedule! RQ={(P1, 8 tu left), (P2, 6 tu left)} ⇒ P2 is scheduledt=4: P3 arrives. Reschedule! RQ={(P1, 8), (P2, 4), (P3, 7)} ⇒ P2 is scheduledt=5: P4 arrives. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7), (P4, 2)} ⇒ P4 is scheduledt=7: P4 terminates. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7)} ⇒ P2 is scheduledt=10: P2 terminates. Reschedule! RQ={(P1, 8), (P3, 7)} ⇒ P3 is scheduledt=17: P3 terminates. Reschedule! RQ={(P1, 8)} ⇒ P1 is scheduled
t=25: P1 terminates. Reschedule! RQ={} ⇒ End of experiment
Timeline 01234 56789 01234 56789 01234 56789CPU 11
222 44222 33333 33111 11111 xxxxx
(P1 completed at 25; P2 at 10; P3 at 17; P4 at 7)
CPU: 100%
ATT:(25−0)+(10−2)+(17−4)+(7−5)
4= 12.0 tu
ART:(0−0)+(2−2)+(10−4)+(5−5)
4= 1.5 tu
AWT:(25−0−10)+(10−2−6)+(17−4−7)+(7−5−2)
4= 5.75 tu
Henri Casanova ([email protected]) CPU Scheduling
Preemptive SJF: Example
Process Burst time Arrival timeP1 10 0P2 6 2P3 7 4P4 2 5
Preemptive means “Any event can cause rescheduling”
t=0: CPU idle; P1 arrives; P1 is scheduled for 10 tu unless new event; RQ = ∅
t=2: P2 arrives. Reschedule! RQ={(P1, 8 tu left), (P2, 6 tu left)} ⇒ P2 is scheduledt=4: P3 arrives. Reschedule! RQ={(P1, 8), (P2, 4), (P3, 7)} ⇒ P2 is scheduledt=5: P4 arrives. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7), (P4, 2)} ⇒ P4 is scheduledt=7: P4 terminates. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7)} ⇒ P2 is scheduledt=10: P2 terminates. Reschedule! RQ={(P1, 8), (P3, 7)} ⇒ P3 is scheduledt=17: P3 terminates. Reschedule! RQ={(P1, 8)} ⇒ P1 is scheduled
t=25: P1 terminates. Reschedule! RQ={} ⇒ End of experiment
Timeline 01234 56789 01234 56789 01234 56789CPU 11
222 44222 33333 33111 11111 xxxxx
(P1 completed at 25; P2 at 10; P3 at 17; P4 at 7)
CPU: 100%
ATT:(25−0)+(10−2)+(17−4)+(7−5)
4= 12.0 tu
ART:(0−0)+(2−2)+(10−4)+(5−5)
4= 1.5 tu
AWT:(25−0−10)+(10−2−6)+(17−4−7)+(7−5−2)
4= 5.75 tu
Henri Casanova ([email protected]) CPU Scheduling
Preemptive SJF: Example
Process Burst time Arrival timeP1 10 0P2 6 2P3 7 4P4 2 5
Preemptive means “Any event can cause rescheduling”
t=0: CPU idle; P1 arrives; P1 is scheduled for 10 tu unless new event; RQ = ∅t=2: P2 arrives. Reschedule! RQ={(P1, 8 tu left), (P2, 6 tu left)} ⇒ P2 is scheduled
t=4: P3 arrives. Reschedule! RQ={(P1, 8), (P2, 4), (P3, 7)} ⇒ P2 is scheduledt=5: P4 arrives. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7), (P4, 2)} ⇒ P4 is scheduledt=7: P4 terminates. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7)} ⇒ P2 is scheduledt=10: P2 terminates. Reschedule! RQ={(P1, 8), (P3, 7)} ⇒ P3 is scheduledt=17: P3 terminates. Reschedule! RQ={(P1, 8)} ⇒ P1 is scheduled
t=25: P1 terminates. Reschedule! RQ={} ⇒ End of experiment
Timeline 01234 56789 01234 56789 01234 56789CPU 11
222 44222 33333 33111 11111 xxxxx
(P1 completed at 25; P2 at 10; P3 at 17; P4 at 7)
CPU: 100%
ATT:(25−0)+(10−2)+(17−4)+(7−5)
4= 12.0 tu
ART:(0−0)+(2−2)+(10−4)+(5−5)
4= 1.5 tu
AWT:(25−0−10)+(10−2−6)+(17−4−7)+(7−5−2)
4= 5.75 tu
Henri Casanova ([email protected]) CPU Scheduling
Preemptive SJF: Example
Process Burst time Arrival timeP1 10 0P2 6 2P3 7 4P4 2 5
Preemptive means “Any event can cause rescheduling”
t=0: CPU idle; P1 arrives; P1 is scheduled for 10 tu unless new event; RQ = ∅t=2: P2 arrives. Reschedule! RQ={(P1, 8 tu left), (P2, 6 tu left)} ⇒ P2 is scheduled
t=4: P3 arrives. Reschedule! RQ={(P1, 8), (P2, 4), (P3, 7)} ⇒ P2 is scheduledt=5: P4 arrives. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7), (P4, 2)} ⇒ P4 is scheduledt=7: P4 terminates. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7)} ⇒ P2 is scheduledt=10: P2 terminates. Reschedule! RQ={(P1, 8), (P3, 7)} ⇒ P3 is scheduledt=17: P3 terminates. Reschedule! RQ={(P1, 8)} ⇒ P1 is scheduled
t=25: P1 terminates. Reschedule! RQ={} ⇒ End of experiment
Timeline 01234 56789 01234 56789 01234 56789CPU 1122
2 44222 33333 33111 11111 xxxxx
(P1 completed at 25; P2 at 10; P3 at 17; P4 at 7)
CPU: 100%
ATT:(25−0)+(10−2)+(17−4)+(7−5)
4= 12.0 tu
ART:(0−0)+(2−2)+(10−4)+(5−5)
4= 1.5 tu
AWT:(25−0−10)+(10−2−6)+(17−4−7)+(7−5−2)
4= 5.75 tu
Henri Casanova ([email protected]) CPU Scheduling
Preemptive SJF: Example
Process Burst time Arrival timeP1 10 0P2 6 2P3 7 4P4 2 5
Preemptive means “Any event can cause rescheduling”
t=0: CPU idle; P1 arrives; P1 is scheduled for 10 tu unless new event; RQ = ∅t=2: P2 arrives. Reschedule! RQ={(P1, 8 tu left), (P2, 6 tu left)} ⇒ P2 is scheduledt=4: P3 arrives. Reschedule! RQ={(P1, 8), (P2, 4), (P3, 7)} ⇒ P2 is scheduled
t=5: P4 arrives. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7), (P4, 2)} ⇒ P4 is scheduledt=7: P4 terminates. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7)} ⇒ P2 is scheduledt=10: P2 terminates. Reschedule! RQ={(P1, 8), (P3, 7)} ⇒ P3 is scheduledt=17: P3 terminates. Reschedule! RQ={(P1, 8)} ⇒ P1 is scheduled
t=25: P1 terminates. Reschedule! RQ={} ⇒ End of experiment
Timeline 01234 56789 01234 56789 01234 56789CPU 1122
2 44222 33333 33111 11111 xxxxx
(P1 completed at 25; P2 at 10; P3 at 17; P4 at 7)
CPU: 100%
ATT:(25−0)+(10−2)+(17−4)+(7−5)
4= 12.0 tu
ART:(0−0)+(2−2)+(10−4)+(5−5)
4= 1.5 tu
AWT:(25−0−10)+(10−2−6)+(17−4−7)+(7−5−2)
4= 5.75 tu
Henri Casanova ([email protected]) CPU Scheduling
Preemptive SJF: Example
Process Burst time Arrival timeP1 10 0P2 6 2P3 7 4P4 2 5
Preemptive means “Any event can cause rescheduling”
t=0: CPU idle; P1 arrives; P1 is scheduled for 10 tu unless new event; RQ = ∅t=2: P2 arrives. Reschedule! RQ={(P1, 8 tu left), (P2, 6 tu left)} ⇒ P2 is scheduledt=4: P3 arrives. Reschedule! RQ={(P1, 8), (P2, 4), (P3, 7)} ⇒ P2 is scheduled
t=5: P4 arrives. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7), (P4, 2)} ⇒ P4 is scheduledt=7: P4 terminates. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7)} ⇒ P2 is scheduledt=10: P2 terminates. Reschedule! RQ={(P1, 8), (P3, 7)} ⇒ P3 is scheduledt=17: P3 terminates. Reschedule! RQ={(P1, 8)} ⇒ P1 is scheduled
t=25: P1 terminates. Reschedule! RQ={} ⇒ End of experiment
Timeline 01234 56789 01234 56789 01234 56789CPU 11222
44222 33333 33111 11111 xxxxx
(P1 completed at 25; P2 at 10; P3 at 17; P4 at 7)
CPU: 100%
ATT:(25−0)+(10−2)+(17−4)+(7−5)
4= 12.0 tu
ART:(0−0)+(2−2)+(10−4)+(5−5)
4= 1.5 tu
AWT:(25−0−10)+(10−2−6)+(17−4−7)+(7−5−2)
4= 5.75 tu
Henri Casanova ([email protected]) CPU Scheduling
Preemptive SJF: Example
Process Burst time Arrival timeP1 10 0P2 6 2P3 7 4P4 2 5
Preemptive means “Any event can cause rescheduling”
t=0: CPU idle; P1 arrives; P1 is scheduled for 10 tu unless new event; RQ = ∅t=2: P2 arrives. Reschedule! RQ={(P1, 8 tu left), (P2, 6 tu left)} ⇒ P2 is scheduledt=4: P3 arrives. Reschedule! RQ={(P1, 8), (P2, 4), (P3, 7)} ⇒ P2 is scheduledt=5: P4 arrives. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7), (P4, 2)} ⇒ P4 is scheduled
t=7: P4 terminates. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7)} ⇒ P2 is scheduledt=10: P2 terminates. Reschedule! RQ={(P1, 8), (P3, 7)} ⇒ P3 is scheduledt=17: P3 terminates. Reschedule! RQ={(P1, 8)} ⇒ P1 is scheduled
t=25: P1 terminates. Reschedule! RQ={} ⇒ End of experiment
Timeline 01234 56789 01234 56789 01234 56789CPU 11222
44222 33333 33111 11111 xxxxx
(P1 completed at 25; P2 at 10; P3 at 17; P4 at 7)
CPU: 100%
ATT:(25−0)+(10−2)+(17−4)+(7−5)
4= 12.0 tu
ART:(0−0)+(2−2)+(10−4)+(5−5)
4= 1.5 tu
AWT:(25−0−10)+(10−2−6)+(17−4−7)+(7−5−2)
4= 5.75 tu
Henri Casanova ([email protected]) CPU Scheduling
Preemptive SJF: Example
Process Burst time Arrival timeP1 10 0P2 6 2P3 7 4P4 2 5
Preemptive means “Any event can cause rescheduling”
t=0: CPU idle; P1 arrives; P1 is scheduled for 10 tu unless new event; RQ = ∅t=2: P2 arrives. Reschedule! RQ={(P1, 8 tu left), (P2, 6 tu left)} ⇒ P2 is scheduledt=4: P3 arrives. Reschedule! RQ={(P1, 8), (P2, 4), (P3, 7)} ⇒ P2 is scheduledt=5: P4 arrives. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7), (P4, 2)} ⇒ P4 is scheduled
t=7: P4 terminates. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7)} ⇒ P2 is scheduledt=10: P2 terminates. Reschedule! RQ={(P1, 8), (P3, 7)} ⇒ P3 is scheduledt=17: P3 terminates. Reschedule! RQ={(P1, 8)} ⇒ P1 is scheduled
t=25: P1 terminates. Reschedule! RQ={} ⇒ End of experiment
Timeline 01234 56789 01234 56789 01234 56789CPU 11222 44
222 33333 33111 11111 xxxxx
(P1 completed at 25; P2 at 10; P3 at 17; P4 at 7)
CPU: 100%
ATT:(25−0)+(10−2)+(17−4)+(7−5)
4= 12.0 tu
ART:(0−0)+(2−2)+(10−4)+(5−5)
4= 1.5 tu
AWT:(25−0−10)+(10−2−6)+(17−4−7)+(7−5−2)
4= 5.75 tu
Henri Casanova ([email protected]) CPU Scheduling
Preemptive SJF: Example
Process Burst time Arrival timeP1 10 0P2 6 2P3 7 4P4 2 5
Preemptive means “Any event can cause rescheduling”
t=0: CPU idle; P1 arrives; P1 is scheduled for 10 tu unless new event; RQ = ∅t=2: P2 arrives. Reschedule! RQ={(P1, 8 tu left), (P2, 6 tu left)} ⇒ P2 is scheduledt=4: P3 arrives. Reschedule! RQ={(P1, 8), (P2, 4), (P3, 7)} ⇒ P2 is scheduledt=5: P4 arrives. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7), (P4, 2)} ⇒ P4 is scheduledt=7: P4 terminates. Reschedule! RQ={(P1, 8), (P2, 3), (P3, 7)} ⇒ P2 is scheduled
t=10: P2 terminates. Reschedule! RQ={(P1, 8), (P3, 7)} ⇒ P3 is scheduledt=17: P3 terminates. Reschedule! RQ={(P1, 8)} ⇒ P1 is scheduled
t=25: P1 terminates. Reschedule! RQ={} ⇒ End of experiment
Timeline 01234 56789 01234 56789 01234 56789CPU 11222 44
222 33333 33111 11111 xxxxx
(P1 completed at 25; P2 at 10; P3 at 17; P4 at 7)
CPU: 100%
ATT:(25−0)+(10−2)+(17−4)+(7−5)
4= 12.0 tu
ART:(0−0)+(2−2)+(10−4)+(5−5)
4= 1.5 tu
AWT:(25−0−10)+(10−2−6)+(17−4−7)+(7−5−2)
4= 5.75 tu
Algorithms Comparison On Our Example
      NP-FCFS   NP-SJF   P-SJF
CPU   100%      100%     100%
ATT   15.75     13.5     12.0
ART   9.5       7.25     1.5
AWT   9.5       6.0      5.75
On this example, Preemptive SJF is better than Non-Preemptive SJF, which is better than Non-Preemptive FCFS
This seems to make sense, but cannot be generalized
i.e., we can generate an infinite number of examples in which this is not true
But the probability that this is true of a random example is probably pretty high
It would be easy to construct cases for which the results are completely reversed for some metrics
Not for average turn-around time (ATT) though, as SJF is optimal for this metric
But even if we feel that SJF is a good idea, there are some problems...
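The three metrics in the table can be computed mechanically from each process's arrival, burst, first-dispatch, and completion times. A minimal sketch (the function name and tuple layout are illustrative, not from the slides):

```python
def scheduling_metrics(procs):
    """Average turn-around, response, and wait times.

    Each process is a tuple (arrival, burst, first_run, completion):
      turn-around = completion - arrival
      response    = first_run - arrival
      wait        = turn-around - burst
    """
    n = len(procs)
    att = sum(c - a for a, b, f, c in procs) / n
    art = sum(f - a for a, b, f, c in procs) / n
    awt = sum(c - a - b for a, b, f, c in procs) / n
    return att, art, awt

# Preemptive SJF example: (arrival, burst, first_run, completion)
p_sjf = [(0, 10, 0, 25), (2, 6, 2, 10), (4, 7, 10, 17), (5, 2, 5, 7)]
print(scheduling_metrics(p_sjf))  # (12.0, 1.5, 5.75)
```

Feeding it the P-SJF schedule from the previous slide reproduces the P-SJF column above.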
Drawbacks of SJF
Process Starvation:
Consider one long CPU-burst process and a never-ending flow of incoming short CPU-burst ones
The process with the long CPU-burst will never be scheduled: its wait time is infinite!
Supermarket SJF: you show up with your cart with 10 items, but every second somebody shows up with 1 apple and you wait
This can be fixed by weighting the CPU-burst duration with the arrival time (i.e., define some kind of priority so that after waiting a long time a process gets scheduled)
But then this modified algorithm is no longer optimal for average turn-around time
CPU-Burst Durations must be known
This is a bigger issue: in real life, how do we know CPU-burst durations?
It's not like when you start a process you tell the OS something like: "Here is a new process that has 24 CPU-bursts, each of 2 seconds"
The OS is really in the dark about what processes will do!
Are sophisticated scheduling algorithms even worthwhile, then?
Predicting CPU-burst Durations
The duration of a CPU-burst is known only when it is over
Let us make the assumption that future CPU bursts depend on previous CPU bursts
Let τn be the predicted duration for burst #n and tn the observed duration
τn+1 = α·tn + (1−α)·τn
where α is a parameter between 0 and 1
If α = 0: τn+1 = τn: Do not use the observed value
If α = 1: τn+1 = tn: The next CPU-burst will last exactly the same time as the last one
Otherwise: Put more or less weight on the observation or the prediction
Since τn+1 depends on τn, which depends on τn−1, and so on, we have a recurrence
This is called Exponential Smoothing...
Exponential Smoothing
Exponential Averaging or Exponential Smoothing: Gives more importance to the recent past
Unrolling the recurrence:
τn+1 = α·tn + (1−α)·α·tn−1 + (1−α)²·α·tn−2 + · · · + (1−α)^j·α·tn−j + · · · + (1−α)^n·α·t0 + (1−α)^(n+1)·τ0
     = Σ_{j=0..n} (1−α)^j·α·tn−j + (1−α)^(n+1)·τ0
Example for τ0 = 10, α = 0.5:
Observed CPU bursts ti:   6   4   6   4  13  13  13  . . .
"Guess" τi:           10   8   6   6   5   9  11  12  . . .
(Figure: observed vs. predicted CPU-burst durations plotted over time)
But OSes don't do this (too costly)
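The recurrence is a one-liner in code; this sketch (function name illustrative) reproduces the slide's table for τ0 = 10 and α = 0.5:

```python
def smoothed_predictions(bursts, tau0=10.0, alpha=0.5):
    """Exponentially smoothed CPU-burst predictions.

    Returns the prediction made before each observed burst, plus the
    prediction for the next, not-yet-observed burst.
    """
    preds = [tau0]
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau  # tau_{n+1} = a*t_n + (1-a)*tau_n
        preds.append(tau)
    return preds

print(smoothed_predictions([6, 4, 6, 4, 13, 13, 13]))
# [10.0, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```

Note how the predictions converge toward 13 once the process settles into longer bursts: the recent past dominates.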
Priority Scheduling
Most OSes support a notion of process priority (i.e., a number)
e.g., Windows: Highest: 31; Lowest: 1
e.g., Linux: Highest: 0; Lowest: 139
The Ready Queue is then implemented as a Priority Queue
Can be preemptive or non-preemptive
Main issue: Process Starvation
A low-priority process may never run (on some systems, processes have been found that couldn't run for years!)
Can be solved via some "priority aging" scheme
SJF is a special case of Priority Scheduling
The priority is not some arbitrary number but the remaining burst time (the lower, the higher the priority)
And, as we've seen, it's hard to compute
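A priority Ready Queue with aging can be sketched with a binary heap. This is an illustrative toy, not any particular OS's implementation; it follows the Linux convention above (lower number = higher priority), and the class and method names are assumptions:

```python
import heapq

class ReadyQueue:
    """Toy priority Ready Queue with priority aging."""

    def __init__(self):
        self._heap = []  # entries: (priority, seq, name); seq breaks ties FIFO
        self._seq = 0

    def add(self, name, priority):
        heapq.heappush(self._heap, (priority, self._seq, name))
        self._seq += 1

    def age(self, amount=1):
        """Boost every waiting process's priority to prevent starvation."""
        self._heap = [(max(0, p - amount), s, n) for p, s, n in self._heap]
        heapq.heapify(self._heap)

    def pop(self):
        """Dispatch the highest-priority (lowest-numbered) process."""
        return heapq.heappop(self._heap)[2]

rq = ReadyQueue()
rq.add("low", 10)
rq.add("high", 1)
print(rq.pop())  # "high"
```

Without the `age()` call, the "low" process could wait forever behind a stream of higher-priority arrivals; aging eventually raises it to the top.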
Round-Robin Scheduling
Most OSes today use some form of Preemptive Round-Robin Scheduling
Defines a (fixed) time quantum: e.g., 10-100ms
Main idea: A process never runs longer than the time quantum before yielding to another ready process (via a context-switch)
If it is the only ready process, it runs again
If its CPU-burst is shorter than the time quantum, then it yields to other processes "early"
The Ready Queue is usually implemented as a FIFO
A process that becomes Ready is queued last
Simple scheduling algorithm:
Pick the first process from the Ready Queue
Set a timer to interrupt the process after one time quantum
Dispatch the process to the CPU
Wait for it to yield early or for the timer to go off
Repeat
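The algorithm above is easy to simulate with a FIFO queue. A minimal sketch, under the simplifying assumption that all processes are ready at t=0 and each has a single CPU-burst (the function name is illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for processes all ready at t=0.

    bursts: {name: total burst time}. Returns {name: completion time}.
    """
    ready = deque(bursts.items())  # FIFO Ready Queue
    t, done = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)  # run at most one quantum
        t += run
        if remaining - run == 0:
            done[name] = t
        else:
            ready.append((name, remaining - run))  # re-queue at the back
    return done

print(round_robin({"P1": 10, "P2": 6, "P3": 7, "P4": 2}, quantum=4))
# {'P4': 14, 'P2': 20, 'P3': 23, 'P1': 25}
```

Note that the short job P4 still waits three full quanta before its first dispatch, which is why the quantum length matters so much for response time.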
The Kernel is NOT a running process
It's important to understand how the Kernel scheduler works
After setting the timer, it dispatches a process to the CPU
Then that process goes through a fetch-decode-execute cycle
While that's going on, Kernel code is not running
How could it? The Kernel is NOT a running process
Then one of two things happens:
Case #1: The running process does some I/O
All I/O is done via a system call
Therefore we're back in the kernel code!
Case #2: The timer goes off
The CPU generates an interrupt that causes a jump to an interrupt handler in the kernel
Therefore we're back in the kernel code!
So, no matter what happens, Kernel code runs soon
And therefore Kernel code can do whatever it needs to do
Which makes it "look" like the Kernel is always running (it's not)
Round-Robin Scheduling: Example
Process   Burst time   Arrival time
P1        10           0
P2        6            2
P3        7            4
P4        2            5
Quantum = 4
Time    01234 56789 01234 56789 01234
Process 11112 22233 33111 14422 33311
(if P3 arrives an instant before the end of P1's time quantum at t=4)
Or
Time    01234 56789 01234 56789 01234
Process 11112 22211 11333 34422 11333
(if P3 arrives an instant after the end of P1's time quantum at t=4)
(P3 before P1)
CPU Utilization: 100%
ATT: ((25−0) + (20−2) + (23−4) + (18−5)) / 4 = 18.75 tu
AWT: (15 + 12 + 12 + 11) / 4 = 12.5 tu
ART: ((0−0) + (4−2) + (8−4) + (16−5)) / 4 = 4.25 tu
(P1 before P3)
CPU Utilization: 100%
ATT: ((22−0) + (20−2) + (25−4) + (18−5)) / 4 = 18.5 tu
AWT: (12 + 12 + 14 + 11) / 4 = 12.25 tu
ART: ((0−0) + (4−2) + (12−4) + (16−5)) / 4 = 5.25 tu
Round-Robin (RR) Scheduling
In general, RR has better response time than SJF
Main advantage: No starvation!
Main concern: What is the "best" value for the quantum?
Context-switching is not free
Short time quantum: Great response time / interactivity, but high overhead
Long time quantum: Poor response time / interactivity, but low overhead
In practice: The time quantum is around 10 ms and a context-switch is around 10 µs
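With those ballpark numbers, the overhead tradeoff can be quantified: if every quantum is fully used, the fraction of CPU time lost to switching is switch / (quantum + switch). A quick sketch (function name illustrative):

```python
def switch_overhead(quantum_s, switch_s):
    """Fraction of CPU time lost to context switching, assuming every
    process uses its full quantum (the worst case for overhead)."""
    return switch_s / (quantum_s + switch_s)

# 10 ms quantum, 10 us context-switch: roughly 0.1% overhead
print(f"{switch_overhead(10e-3, 10e-6):.4%}")
# Shrinking the quantum to 1 ms improves responsiveness but
# multiplies the overhead by ~10x:
print(f"{switch_overhead(1e-3, 10e-6):.4%}")
```

This is why quanta much shorter than a millisecond are rarely attractive: the dispatcher would start consuming a noticeable share of the CPU.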
Multilevel Feedback Queue (MLFQ) Scheduling
As we've seen, we want ways to prioritize processes (system processes, interactive processes, user processes, ...)
Simple idea: One Ready Queue per priority class
Scheduling within queues:
High-priority queues could use RR, low-priority ones FCFS
Scheduling between the queues:
One option is preemptive priority scheduling: A process can run only if all higher-priority queues are empty
Another is time-slicing among queues: e.g., 60% on the highest queue, 20% on the second-highest, 10% on the next, etc.
Multilevel Feedback Queue (MLFQ) Scheduling
Having fixed priorities is useful, and all OSes support it
But say we have 20 competing "user processes": should they really all have the same priority?
Surely, at times some should have higher priority, i.e., when the user is interacting with them
For these processes, we should make sure that they are not stuck in the Ready Queue for a long, i.e., human-perceivable, time
A pervasive idea: allow processes to move among priority queues
i.e., a process can change priority throughout its execution
Let's see how this is typically done, and why it's useful...
MLFQ Scheduling: basic principle
Here are the typical rules for MLFQ scheduling:
If the priority of P1 is greater than the priority of P2, dispatch P1
Run processes in higher-priority Ready Queues first
If the priority of P1 is equal to the priority of P2, run P1 and P2 with RR
Processes in the same Ready Queue are scheduled using round-robin
When a new process is created, give it the highest priority
When you start a new process, you want it to run right now most of the time
If a process uses all of its time quantum, reduce its priority
This one requires some explanation...
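The demotion rule (and the matching promotion discussed on the next slide) boils down to a tiny priority-adjustment function. A sketch using the lower-number-is-higher-priority convention from the Linux example earlier; the function name and parameters are illustrative:

```python
def adjust_priority(priority, used_full_quantum, lowest, highest=0):
    """MLFQ priority adjustment sketch (lower number = higher priority).

    A process that burns its whole quantum looks CPU-bound: demote it
    one level. A process that yields early looks I/O-bound: promote it.
    """
    if used_full_quantum:
        return min(lowest, priority + 1)   # demote, but not below the last queue
    return max(highest, priority - 1)      # promote, but not above the top queue

# A fresh process starts at the highest priority (0) and drifts down
# only if it keeps consuming full quanta:
print(adjust_priority(0, used_full_quantum=True, lowest=3))   # 1
```

Real MLFQ implementations differ in the details (how far to demote, when to reset priorities), but this captures the feedback idea.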
MLFQ: Rationale
If a process uses its whole time quantum, it's likely CPU-bound
i.e., its CPU bursts are much longer than the time quantum
e.g., a matrix multiplication
If a process does not use its whole time quantum, it's likely I/O-bound
i.e., its CPU bursts are shorter than the time quantum because it mostly does I/O
e.g., a text editor
With the rules on the previous slide, when a process behaves in a CPU-bound manner its priority decreases, otherwise it increases
Rationale: Non-CPU-intensive processes should get the CPU quickly on the rare occasions they need it for a little bit
Interactive processes should always have high priority
If a process is mostly doing I/O, it should be able to get to it ASAP
MLFQ: Good but...
MLFQ is highly configurable
Number of queues
Scheduling algorithm for each queue
And likely smaller time quanta in higher-priority queues
Scheduling algorithm across queues
Policy used to promote/demote a process
There are so many choices that making a good one may be hard
What is good for one system (i.e., one typical workload) might not be good for another
Also, this algorithm is somewhat sophisticated, which might incur high overhead
Summary so far
FCFS is simple, but can have really high turnaround time (supermarket analogy)
SJF is good for turnaround time, but can lead to starvation and requires that CPU burst times be known, which is not the case in practice
RR solves these problems with a time quantum
It is very useful to have a notion of process priority, and MLFQ uses one Ready Queue per priority level, each with its own scheduling algorithm
There are rules to move processes between Ready Queues
A popular rule: if a process doesn't use its full time quantum, bump up its priority because it's likely interactive
Some of the above algorithms can be configured in several ways, and there are many other algorithms (which we won't discuss)
This begs the question: How do we know that an algorithm is good?
Evaluating Algorithms
What is a good scheduling algorithm?
Theoretical Analysis
Essentially, take two scheduling algorithms A and B and a metric (e.g., turnaround time); most likely you will find one instance in which A > B, and another instance in which A < B
In rare cases you can show that an algorithm is optimal (e.g., SRPT for average turnaround time)
Few analytical/theoretical results are available for scheduling
Simulation
Test a (very large) number of cases by producing Gantt charts (not by hand) and computing metrics
e.g., A is better than B in x% of the cases for the y metric
Ground-Truth Testing
Implement both A and B in the kernel (a lot of work!)
Use both for x hours on some benchmark workload, and compute a metric
e.g., A is better than B for the y metric when running workload z
Homework Assignment: Pencil-and-Paper
Let’s look at our next Homework Assignment...
Scheduling in Windows
From XP onward
Detailed in the Scheduling Priorities page
Each process has a Priority class (7 of them)
Each thread of the process has a priority level (7 of them)
And everything is converted to an integer between 1 and 31
e.g., (IDLE_PRIORITY_CLASS, THREAD_PRIORITY_BELOW_NORMAL) = priority 3
e.g., (REALTIME_PRIORITY_CLASS, THREAD_PRIORITY_LOWEST) = priority 22
e.g., (NORMAL, NORMAL) = priority 8
(One system process has a priority of 0)
The scheduler features priorities, time quanta, multi-level queues, and preemptive scheduling
Scheduling in Windows
When a thread's quantum runs out, unless the thread is in the real-time class (priority > 15), the thread's priority is lowered
It is likely a CPU-bound thread, and we need to keep the system interactive
When a thread “wakes up”, its priority is boosted
It was likely doing I/O and may soon need the CPU again
The boost depends on what the thread was waiting for
e.g., if it was waiting on the keyboard, it is an interactive thread and the boost should be large
These are the same general ideas as in other OSes (e.g., see the Solaris MLFQ implementation in OSTEP 8.5): preserving interactivity is a key concern
Something specific: The idle thread in Win XP
A "bogus" idle thread of priority 1
It "runs" (and does nothing) if nothing else can run
This simplifies OS design by avoiding "no process is running" special cases in kernel code!
Scheduling in Linux: 1.2 and 2.2
The Linux kernel has a long history of scheduler development
Kernel 1.2 (1995): Simplicity and speed
Round-Robin scheduling
Implemented with a circular queue
Kernel 2.2 (1999): Towards sophistication
Scheduling classes: Real-Time, Non-Preemptible, Non-Real-Time
Priorities within classes
Numeric value between 0 and 139
The lower the number, the higher the priority
"Real-Time" tasks have priorities between 0 (highest) and 99
"Other" tasks have priorities between 100 and 139 (lowest)
Scheduling in Linux: 2.4 (2001)
Scheduling proceeds as a sequence of epochs
Within each epoch, each process is given a time slice of some duration
Time slice durations are computed differently for different processes, depending on how they used their previous time slices
A time slice does not have to be used "all at once"
A process can get the CPU multiple times in an epoch, until its time slice is used up
Once all Ready processes have used their time slices, the current epoch ends and a new epoch begins
Of course, some processes will still be blocked, waiting for events, and they will wake up during an upcoming epoch
Scheduling in Linux: 2.4 (2001)
How to (re)compute time slices?
If a process uses its whole time slice, then it will get the same one again
If a process has not used its whole time slice (e.g., because it blocked on I/O), then it will get a larger time slice!
Counter-intuitive, but:
Getting a larger time slice does not mean a process will use it if it is not Ready anyway
Processes that block often will thus never use their (enlarged) time slices
But priorities between processes (i.e., how the scheduler picks them from the Ready Queue) are computed based on time slice duration: a larger time slice leads to a higher priority
Scheduling in Linux: 2.4 (2001)
Main issue: O(n) scheduling (linear-time scheduling)
At each scheduling event, the scheduler needs to go through the whole list of Ready processes to pick one to run: complexity O(n)
The larger the number of processes (i.e., n), the larger the scheduling overhead!
“Instead of wasting your time thinking about what to run, just runsome process already!”
There were also other concerns with 2.4 scheduling (e.g., the advent of multi-core machines), and the O(n) scheduler became obsolete
Increasing numbers of cores make scheduling more complicated, and schedulers have changed quite a bit in recent years
Scheduling in Linux: 2.6.0 (2003) to 2.6.22
Kernel 2.6.0 (2003) tries to resolve the O(n) problem, and a few others
It came with an O(1) scheduler (constant-time scheduler)
It uses implementation tricks so that there is no loop over the Ready processes
Imagine having to write code with a “no loop” constraint!
During an epoch a process can be:
Either an Active process: its time slice has not been fully consumed
Or an Expired process: it has used all of its time slice
The kernel keeps two arrays of RR queues:
One array for the active processes: at each index (= priority), the list of active processes of that priority
One array for the expired processes: at each index (= priority), the list of expired processes of that priority
Scheduling in Linux: O(1) Scheduling
Implementation through the following data structure:

    struct prio_array {
        int nr_active;                     /* total number of Ready processes */
        unsigned int bitmap[5];            /* priority bitmap */
        struct list_head queue[MAX_PRIO];  /* one queue per priority level */
    };
Bitmap (see ICS 312/331): Markers of presence/absence
Conveniently 5*32 bits = 160 > 140 priority levels
Initially all bits in the bitmap are set to 0
When a process of some priority becomes ready:
Set the corresponding bit to 1
i.e., create the bit mask 0....010...0 (1 << PRIORITY), then OR it with the bitmap
Append (the address of) the PCB to the tail of the list in the corresponding queue
Scheduling in Linux: O(1) Scheduling
Finding the highest-priority Ready process is "as difficult" as finding the first bit set to 1 in the bitmap
It does not depend on the number of processes in the system
Most ISAs provide an instruction to do just that in a few cycles (bsfl on x86)
Finding the next process to run is therefore (in pseudo-code):
    prio_array.queue[bsfl(prio_array.bitmap)]
No looping over processes!
Scheduling in Linux: O(1) Scheduling
Recalculating Time Slices
When the time slice of a process expires, it is moved from the active array to the expired array
At this time, the process's time slice is recomputed
That way we never have a "recompute all time slices" phase, which would monopolize the kernel for a while and hinder interactivity
This maintains the O(1)-time property
When the active array is empty, it is swapped with the expired array (this is a pointer swap, not a copy, so it is O(1)-time)
Scheduling in Linux: ≥ 2.6.23
Problem with the O(1) scheduler: the code in the kernel became intricate and hard to maintain
Seemed to blur “policy” and “mechanism”...
CFS: Completely Fair Scheduler
Developed by the developer of O(1), with ideas from others
Main idea: keep track of how fairly the CPU has been allocated to processes, and "fix" the unfairness
For each process, the kernel keeps track of its virtual time
The sum of the time intervals during which the process was given the CPU since the process started
Could be much smaller than the wall-clock time since the process started
Goal of the scheduler: give the CPU to the process with the smallest virtual time
i.e., to the process that is the least “happy”
Scheduling in Linux: ≥ 2.6.23
Tasks are stored in a red-black tree
O(log n) time to retrieve the least happy process
O(1) to update its virtual time once it is done running for a while
O(log n) time to re-insert it into the red-black tree
As they are given the CPU, processes migrate from the left of the tree to the right
Note that I/O-bound processes with few CPU bursts will never accumulate a large virtual time, and thus will effectively be "high priority"
Tons of other things in there controlled by configuration parameters
Scheduling in Linux: Future
Not everybody loves CFS
Some say it just will not work for running thousands of processes in a "multi-core server" environment
But then the author never really claimed it would
At this point, it seems that having a single scheduler for desktop/laptop usage and server usage is just really difficult
Having many configuration parameters is perhaps not helpful
Other schedulers are typically proposed and hotly debated relativelyfrequently
Conclusions
Many scheduling algorithms have been proposed
Modern OSes use preemptive scheduling, typically using round-robin
Most of them use some multilevel feedback priority queue scheme
A common concern is to ensure interactivity
I/O-bound processes are often interactive, and thus should have high priority
Another is to have “quick” (low overhead) short-term scheduling
There is a lot of scheduling work going on in kernel development
We’ll have a Quiz on this topic next week