14/10/2015
Definitions

A schedule is said to be feasible if it satisfies a set of constraints.

Examples of constraints:
• Timing constraints: activation, period, deadline, jitter.
• Precedence: order of execution between tasks.
• Resources: synchronization for mutual exclusion.

A task set is said to be feasible if there exists an algorithm that generates a feasible schedule for it.

A task set is said to be schedulable with an algorithm A if A generates a feasible schedule for it.
Feasibility vs. schedulability

[Venn diagram: within the space of all task sets, the feasible task sets form a subset; the task sets schedulable with algorithm A and those schedulable with algorithm B are subsets of the feasible ones; any task set outside the feasible region is unfeasible.]
The scheduling problem

Given a set of n tasks, a set P of p processors, and a set R of r resources, find an assignment of P and R to the tasks that produces a feasible schedule under a set of constraints.

[Diagram: the scheduling algorithm takes the tasks, P, R, and the constraints as inputs and produces a feasible schedule.]
Complexity

In 1975, Garey and Johnson showed that the general scheduling problem is NP-hard.

In practice, this means that the time needed to find a feasible schedule grows exponentially with the number of tasks.

Fortunately, polynomial-time algorithms can be found under particular conditions.

Why do we care about complexity?

Consider an application with n = 30 tasks on a processor in which the elementary step takes 1 µs, and three algorithms with the following complexity:

A1: O(n)      A2: O(n^8)      A3: O(8^n)
30 µs         ~182 hours      ~40,000 billion years
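These running times can be reproduced directly (assuming, as above, one elementary step per microsecond):

```python
STEP = 1e-6  # seconds per elementary step (1 microsecond)
n = 30

t1 = n * STEP       # A1: O(n)
t2 = n**8 * STEP    # A2: O(n^8)
t3 = 8**n * STEP    # A3: O(8^n)

print(t1)                             # 30 microseconds
print(t2 / 3600)                      # about 182 hours
print(t3 / (3600 * 24 * 365) / 1e9)   # about 39,000 billion years
```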
Simplifying assumptions

• Single processor
• Homogeneous task sets
• Fully preemptive tasks
• Simultaneous activations
• No precedence constraints
• No resource constraints
Algorithm taxonomy

• Preemptive vs. Non-Preemptive
• Static vs. Dynamic
• On-line vs. Off-line
• Optimal vs. Heuristic
Static vs. Dynamic

Static: scheduling decisions are taken based on fixed parameters, statically assigned to tasks before activation.

Dynamic: scheduling decisions are taken based on parameters that can change with time.
Off-line vs. On-line

Off-line: all scheduling decisions are taken before task activation; the schedule is stored in a table (table-driven scheduling).

On-line: scheduling decisions are taken at run time on the set of active tasks.
Optimal vs. Heuristic

Optimal: these algorithms generate a schedule that minimizes a cost function, defined based on an optimality criterion.

Heuristic: these algorithms generate a schedule according to a heuristic function that tries to satisfy an optimality criterion, but there is no guarantee of success.
Optimality criteria

• Feasibility: find a feasible schedule if one exists.
• Minimize the maximum lateness.
• Minimize the number of deadline misses.
• Assign a value to each task, then maximize the cumulative value of the feasibly scheduled tasks.
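As a small illustration of these criteria, the following sketch (with made-up finishing times and deadlines) computes the lateness of each job, the maximum lateness, and the number of deadline misses:

```python
# Hypothetical finishing times f_i and absolute deadlines d_i for five jobs.
finish = [4, 9, 15, 22, 30]
deadline = [5, 8, 20, 21, 35]

# Lateness of job i: L_i = f_i - d_i (negative if the job finishes early).
lateness = [f - d for f, d in zip(finish, deadline)]

max_lateness = max(lateness)                  # criterion: minimize max lateness
misses = sum(1 for L in lateness if L > 0)    # criterion: minimize deadline misses

print(max_lateness, misses)  # -> 1 2
```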
Task set assumptions

We consider algorithms for different types of tasks:

• Single-job tasks (one-shot): tasks with a single activation (not recurrent).
• Periodic tasks: recurrent tasks regularly activated by a timer (each task potentially generates an infinite sequence of jobs).
• Aperiodic/sporadic tasks: recurrent tasks irregularly activated by events (each task potentially generates an infinite sequence of jobs).
• Mixed task sets.
Graham's Notation: α | β | γ

• α denotes the number of processors
• β denotes the constraints on tasks
• γ denotes the optimality criterion

Examples:

• 1 | preem | Ravg: uniprocessor algorithm for preemptive tasks that minimizes the average response time.
• 2 | sync | Lmax: dual-core algorithm for synchronous tasks that minimizes the maximum lateness.
• 4 | preem | Lmax: quad-core algorithm for preemptive tasks that minimizes the maximum lateness.
Classical scheduling policies

• First Come First Served
• Shortest Job First
• Priority Scheduling
• Round Robin

These policies are not suited for real-time systems.
First Come First Served

FCFS assigns the CPU to tasks based on their arrival times; it is intrinsically non-preemptive.

[Diagram: tasks 1–4 enter the ready queue at their arrival times a1–a4 and run to completion on the CPU in arrival order, starting at s2, s3, s4 and finishing at f4.]

FCFS is very unpredictable: response times strongly depend on task arrivals.

[Example timelines: the same three tasks executed in arrival order 1, 2, 3 yield response times R1 = 20, R2 = 26, R3 = 26; in the reverse arrival order 3, 2, 1 they yield R1 = 26, R2 = 8, R3 = 2.]
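A minimal non-preemptive FCFS simulation (the task parameters are hypothetical, not those of the slide's figure) shows this sensitivity to arrival order: the same three computation times give very different response times depending on whether the long task arrives first or last.

```python
def fcfs_response_times(tasks):
    """Non-preemptive FCFS. tasks = list of (arrival, computation).
    Returns the response time f_i - a_i of each task, in arrival order."""
    t = 0
    responses = []
    for arrival, comp in sorted(tasks, key=lambda x: x[0]):
        t = max(t, arrival) + comp   # start when the CPU is free and the task has arrived
        responses.append(t - arrival)
    return responses

# Hypothetical task set: one long task (C = 20) and two short ones (C = 4).
long_first = [(0, 20), (1, 4), (2, 4)]   # long task arrives first
long_last  = [(0, 4), (1, 4), (2, 20)]   # same computations, long task arrives last

print(fcfs_response_times(long_first))  # -> [20, 23, 26]
print(fcfs_response_times(long_last))   # -> [4, 7, 26]
```

With the long task first, every task waits behind it; with the long task last, the average response time drops sharply even though the workload is identical.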
Shortest Job First (SJF)

SJF selects the ready task with the shortest computation time.

• Static (Ci is a constant parameter)
• Can be used on-line or off-line
• Can be preemptive or non-preemptive
• It minimizes the average response time
SJF - Optimality

The proof is by an exchange argument. Take any schedule σ in which a long task L executes immediately before a shorter task S (CS < CL), and let σ' be the schedule obtained by swapping them. Then fS' < fL and fL' = fS, hence

fS' + fL' ≤ fS + fL

Since the average response time is

R(σ) = (1/n) Σi (fi − ri)

the swap cannot increase it: R(σ') ≤ R(σ).

By repeated swaps, any schedule σ can be transformed into a sequence σ', σ'', ..., σ*, ordered by increasing computation time, where σ* is the SJF schedule:

R(σ) ≥ R(σ') ≥ R(σ'') ≥ ... ≥ R(σ*)

Hence R(σSJF) is the minimum average response time achievable by any algorithm.
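SJF's optimality for the average response time can be checked by brute force on a small synchronous task set (computation times below are made up): among all non-preemptive execution orders, the one sorted by computation time attains the minimum.

```python
from itertools import permutations

def avg_response(comps):
    """Average response time of synchronous tasks (all released at t = 0)
    executed non-preemptively in the given order."""
    t, total = 0, 0
    for c in comps:
        t += c          # finishing time of this task
        total += t      # response time equals finishing time, since r_i = 0
    return total / len(comps)

comps = [7, 2, 9, 4]    # hypothetical computation times
best = min(avg_response(p) for p in permutations(comps))

# The SJF order (sorted by computation time) attains the minimum.
assert avg_response(sorted(comps)) == best
print(best)  # -> 10.75
```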
Is SJF suited for Real-Time?

SJF is not optimal in the sense of feasibility.

[Example timelines: for three tasks with deadlines d1, d2, d3, a schedule A ≠ SJF that executes them in the order 1, 2, 3 is feasible, whereas SJF executes them in the order 3, 2, 1 and misses a deadline.]
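A quick check with hypothetical tasks (again, not those of the slide's figure) confirms that the SJF order can miss a deadline that another order meets:

```python
def feasible(order):
    """order = list of (C, d): run the tasks non-preemptively from t = 0
    and report whether every task meets its absolute deadline."""
    t = 0
    for c, d in order:
        t += c
        if t > d:
            return False
    return True

# Hypothetical tasks (computation time, absolute deadline), released together.
t1, t2, t3 = (2, 4), (3, 30), (10, 12)

sjf_order = sorted([t1, t2, t3])   # shortest computation time first
print(feasible(sjf_order))         # -> False (the (10, 12) task finishes at 15 > 12)
print(feasible([t1, t3, t2]))      # -> True  (finishing times 2, 12, 15)
```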
Priority Scheduling

Each task has a priority Pi, typically Pi ∈ [0, 255]. The task with the highest priority is selected for execution; tasks with the same priority are served FCFS.

NOTE:
• pi ∝ 1/Ci gives SJF
• pi ∝ 1/ai gives FCFS

Problem: starvation. Low-priority tasks may experience long delays due to the preemption of high-priority tasks.

A possible solution: aging. The priority of a task increases with its waiting time.
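A minimal sketch of aging (the linear priority model and all numbers are assumptions, not from the slides): each waiting task's effective priority grows with its waiting time, so a low-priority task is eventually scheduled even while a high-priority task still has work left.

```python
def schedule_with_aging(tasks, age_rate, slices):
    """tasks: {name: (base_priority, computation)}, higher number = more urgent.
    Runs one time unit per slice; every waiting task ages, which raises its
    effective priority until it eventually gets the CPU. Returns the trace."""
    remaining = {name: c for name, (p, c) in tasks.items()}
    waiting = {name: 0 for name in tasks}
    trace = []
    for _ in range(slices):
        ready = [n for n in remaining if remaining[n] > 0]
        if not ready:
            break
        # effective priority = base priority + linear aging bonus
        run = max(ready, key=lambda n: tasks[n][0] + age_rate * waiting[n])
        trace.append(run)
        remaining[run] -= 1
        for n in ready:
            if n != run:
                waiting[n] += 1
        waiting[run] = 0  # reset the age of the task that just ran
    return trace

# Hypothetical workload: without aging, "hi" would monopolize the CPU until
# completion; with aging, "lo" is served in between.
trace = schedule_with_aging({"hi": (10, 6), "lo": (1, 2)}, age_rate=3, slices=8)
print(trace)  # -> ['hi', 'hi', 'hi', 'hi', 'lo', 'hi', 'hi', 'lo']
```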
Round Robin

The ready queue is served FCFS, but:
• each task τi cannot execute for more than Q time units (Q = time quantum);
• when Q expires, τi is preempted and put back in the queue.

[Diagram: tasks flow from the ready queue to the CPU and back into the queue when the quantum Q expires.]
Round Robin

Let n be the number of tasks in the system. Each task τi receives Q units of execution every nQ time units, so its response time is approximately

Ri = (Ci / Q) · nQ = n · Ci

Time sharing: each task runs as if it were executing alone on a virtual processor n times slower than the real one.
26

Round Robin

- If Q > max(Ci), then RR behaves like FCFS.
- If Q is comparable with the context switch time δ, the overhead
  becomes significant: each round takes n(Q + δ), so

    Ri = (Ci / Q) · n(Q + δ) = n Ci (1 + δ / Q)
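The response-time expression above is easy to evaluate. A minimal Python sketch (names assumed; the slides give no code):

```python
# Sketch: Round Robin response-time estimate from the slide's formula,
#   R_i = (C_i / Q) * n * (Q + delta) = n * C_i * (1 + delta / Q)
# where n = number of tasks, Q = quantum, delta = context-switch time.

def rr_response_time(C_i, n, Q, delta=0.0):
    return n * C_i * (1 + delta / Q)

# With no overhead each task behaves as if it ran alone on a processor
# n times slower:
print(rr_response_time(C_i=10, n=4, Q=2))             # -> 40.0
# Context-switch overhead inflates this by a factor (1 + delta/Q):
print(rr_response_time(C_i=10, n=4, Q=2, delta=0.5))  # -> 50.0
```

The second call shows the trade-off on Q: shrinking the quantum improves fairness but raises the δ/Q overhead term.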
27

Multi-Level Scheduling

Three priority levels feed the CPU (figure):

- High priority: system tasks, scheduled by PRIORITY.
- Medium priority: interactive tasks, scheduled by RR.
- Low priority: batch tasks, scheduled by FCFS.

28

Multi-Level Scheduling

(Figure: a multi-level ready queue ordered by priority feeding the CPU.)
30

Real-Time Algorithms

They can schedule tasks either by

- relative deadlines Di (static priority), or
- absolute deadlines di (dynamic priority).

(Figure: Di is the interval from the release time ri to the absolute
deadline di.)
31

Earliest Due Date

Problem: 1 | sync | Lmax

Assumptions:
- All the tasks are simultaneously activated.
- Static (each Di is fixed).
- Preemption is not needed.

Algorithm [Jackson 55]:
- Order the ready queue by increasing deadline.

Property:
- It minimizes the maximum lateness (Lmax).
32

Lateness

    Li = fi − di

Li > 0 if task τi completes after its deadline, Li < 0 if it completes
before (figure: timelines with ri, fi, di).

33

Maximum Lateness

    Lmax = maxi (Li)

If Lmax < 0, then no task exceeds its deadline.
34

EDD - Optimality

Exchange argument: let σ be a schedule in which task A executes
immediately before task B although da > db, and let Lmax = La = fa − da.
Swapping A and B gives σ' with fb' = fa and fa' < fb, so:

    La' = fa' − da < fa − da
    Lb' = fb' − db < fa − da    (since fb' = fa and db > da)

hence L'max < Lmax.
35

EDD - Optimality

Repeating the exchange, σ → σ' → σ'' → ... → σ*, where σ* = EDD:

    Lmax(σ) ≥ Lmax(σ') ≥ Lmax(σ'') ≥ ... ≥ Lmax(σ*)

Hence Lmax(EDD) is the minimum value achievable by any algorithm.
36

EDD guarantee test (off line)

With tasks ordered by increasing deadline, task τi finishes at

    fi = Σ(k=1..i) Ck

A task set is feasible iff ∀i: fi ≤ di, i.e.

    ∀i = 1, ..., n:  Σ(k=1..i) Ck ≤ Di
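The off-line test can be coded directly. A Python sketch (the slides give no code; the function name and the (C, D) pair encoding are assumptions):

```python
# Sketch of the EDD guarantee test: with synchronous activation and
# tasks sorted by increasing relative deadline, task i finishes at
# f_i = C_1 + ... + C_i, so the set is feasible iff, for every i,
# sum_{k<=i} C_k <= D_i.

def edd_feasible(tasks):
    """tasks: list of (C_i, D_i) pairs."""
    tasks = sorted(tasks, key=lambda t: t[1])  # order by deadline (EDD)
    finish = 0
    for C, D in tasks:
        finish += C                # f_i = running sum of computation times
        if finish > D:
            return False
    return True

print(edd_feasible([(1, 2), (2, 5), (3, 10)]))  # -> True
print(edd_feasible([(4, 3), (1, 6)]))           # -> False
```

Sorting dominates, so the test runs in O(n log n), matching the complexity quoted later in the deck.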
37

Earliest Deadline First

Problem: 1 | preem | Lmax

Assumptions:
- Tasks can be activated dynamically.
- Tasks can be preempted at any time.
- Dynamic algorithm (di depends on ai).

Algorithm [Horn 74]:
- Order the ready queue by increasing absolute deadline.

Property:
- It minimizes the maximum lateness (Lmax).
38

EDF Example

(Figure: an EDF schedule of four tasks.)

39

EDF Guarantee test (on line)

Let ci(t) be the remaining execution time of task τi at time t, with the
ready tasks ordered by increasing absolute deadline. The task set
remains feasible iff

    ∀i:  Σ(k=1..i) ck(t) ≤ di − t
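The on-line test above is the natural admission check for dynamically arriving tasks. A Python sketch (names and the (c, d) pair encoding are assumptions; the slides give no code):

```python
# Sketch of the on-line EDF guarantee test at time t: with the ready
# tasks ordered by absolute deadline, the set remains feasible iff,
# for every i, sum_{k<=i} c_k(t) <= d_i - t, where c_k(t) is the
# remaining execution time of task k.

def edf_guarantee(tasks, t):
    """tasks: list of (c_remaining, d_absolute) pairs at time t."""
    tasks = sorted(tasks, key=lambda x: x[1])  # EDF order
    demand = 0
    for c, d in tasks:
        demand += c                 # cumulative demand up to deadline d
        if demand > d - t:
            return False
    return True

# Admission control: accept a new task only if the test still passes.
ready = [(2, 6), (3, 10)]
print(edf_guarantee(ready, t=0))             # -> True
print(edf_guarantee(ready + [(6, 8)], t=0))  # -> False
```

In the second call the newcomer (c = 6, d = 8) fits by itself, but the cumulative demand by d = 10 exceeds the available time, so admission is refused.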
40

Complexity

EDD
- Scheduler (queue ordering): O(n log n)
- Feasibility test (guarantee test): O(n)

EDF
- Scheduler (insertion in the queue): O(n)
- Feasibility test (guarantee of a single task): O(n)
41

EDF optimality

In the sense of feasibility [Dertouzos 1974]

An algorithm A is optimal in the sense of feasibility if it generates a
feasible schedule whenever one exists.

Demonstration method: it is sufficient to prove that, given an
arbitrary feasible schedule, the schedule generated by EDF is also
feasible.
42

Dertouzos Transformation

    for (t = 0 to Dmax − 1)
        if (σ(t) != E(t)) {
            σ(tE) = σ(t);
            σ(t)  = E(t);
        }

where:
- σ(t) = task executing at time t
- E(t) = ready task with the earliest absolute deadline at time t
- tE = time at which E(t) is executed in σ

Example (figure, timeline 0..16): at t = 4, σ(t) = τ4, E(t) = τ2,
tE = 6, so the two slices are exchanged.
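The slice-exchange loop can be sketched over a discrete schedule represented as one task id per unit time slot. This Python fragment is an illustration with assumed names; it follows the transformation above but is not the slides' code:

```python
# Sketch of the Dertouzos slice exchange: walk a feasible schedule
# (one task id per unit slot; None = idle) and swap each slot with the
# slot where the earliest-deadline ready task E(t) was executing.
# 'arrival' and 'deadline' map task id -> arrival / absolute deadline.

def to_edf(schedule, arrival, deadline):
    s = list(schedule)
    for t in range(len(s)):
        # ready tasks = already arrived, with work left at or after t
        ready = {x for x in s[t:] if x is not None and arrival[x] <= t}
        if not ready:
            continue
        e = min(ready, key=lambda x: deadline[x])  # E(t)
        if s[t] != e:
            tE = s.index(e, t)      # time at which E(t) was executed
            s[tE], s[t] = s[t], e   # exchange the two slices
    return s

arrival = {1: 0, 2: 0}
deadline = {1: 10, 2: 4}
# Feasible but non-EDF schedule: task 1 runs first.
print(to_edf([1, 1, 2, 2], arrival, deadline))  # -> [2, 2, 1, 1]
```

Each exchange keeps both tasks within their deadlines, so applying it slot by slot turns any feasible schedule into the EDF one, which is the core of the optimality proof.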
43

Dertouzos Transformation

(Figure, timeline 0..16.) Step at t = 4: σ(t) = τ4, E(t) = τ2, tE = 6;
the slice of τ4 at t = 4 is exchanged with the slice of τ2 at t = 6.

44-45

(Figures.) Step at t = 5: σ(t) = τ3, E(t) = τ2, tE = 7; the slices at
t = 5 and t = 7 are exchanged.
46

Dertouzos Transformation

(Figure, timeline 0..16.) The Dertouzos transformation preserves
schedulability. In fact:

- for the advanced slice, this is obvious;
- for the postponed slice, the slack cannot decrease.

47

A property of optimal algorithms

If a task set Γ is not schedulable by an optimal algorithm, then Γ
cannot be scheduled by any other algorithm.

If an algorithm A minimizes Lmax, then A is also optimal in the sense
of feasibility. The opposite is not true.
48

Non Preemptive Scheduling

Under non-preemptive execution, EDF is not optimal:

(Figure, timelines 0..9: a feasible schedule of τ1 and τ2 exists, but
the EDF schedule misses a deadline.)
49

Non Preemptive Scheduling

(Figure, timeline 0..9.) To achieve optimality, an algorithm should be
clairvoyant and decide to leave the CPU idle in the presence of ready
tasks.

If we forbid leaving the CPU idle in the presence of ready tasks, then
EDF is optimal. We say that:

NP-EDF is optimal among non-idle scheduling algorithms.
50

Non Preemptive Scheduling

The problem of finding a feasible schedule is NP-hard and is treated
off-line with tree search algorithms: starting from an empty schedule,
each node extends a partial schedule until a complete schedule is
reached.

- depth = n
- # leaves = n!
- complexity: O(n · n!)
51

Bratley's Algorithm [Bratley 71]

( 1 | no-preem | Lmax )

Reduces the average complexity by a pruning rule: do not expand a node
unless the partial schedule is found to be strongly feasible.

A partial schedule is said to be strongly feasible if it remains
feasible when extended with any one of the remaining tasks.
52

Bratley's Algorithm [Bratley 71]

( 1 | no-preem | Lmax )

Example:

    task  ai  Ci  di
    τ1     4   2   7
    τ2     1   1   5
    τ3     1   2   6
    τ4     0   2   4

(Figure: search tree; each node carries the scheduled task and its
finishing time.) The search finds two feasible schedules:

    σ1 = {τ4, τ2, τ3, τ1}
    σ2 = {τ4, τ3, τ2, τ1}
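The branch-and-bound search with the strong-feasibility pruning rule can be sketched as follows. This Python fragment is an illustration (names and the tuple encoding are assumptions, not the slides' code); since the finishing time of a partial schedule only grows, a remaining task that would already miss its deadline can never be recovered, so the whole subtree is pruned safely:

```python
# Sketch of Bratley's search: depth-first extension of partial
# schedules, pruning any node that is not strongly feasible.

def bratley(tasks):
    """tasks: dict id -> (a_i, C_i, d_i); returns a feasible order or None."""
    def search(order, finish, remaining):
        if not remaining:
            return order
        # strong feasibility: every remaining task, if appended next,
        # must still meet its deadline; otherwise prune the subtree
        if any(max(finish, tasks[j][0]) + tasks[j][1] > tasks[j][2]
               for j in remaining):
            return None
        for i in sorted(remaining):
            a, C, _ = tasks[i]
            result = search(order + [i], max(finish, a) + C, remaining - {i})
            if result is not None:
                return result
        return None
    return search([], 0, frozenset(tasks))

# Task set from the slide: id -> (a_i, C_i, d_i)
tasks = {1: (4, 2, 7), 2: (1, 1, 5), 3: (1, 2, 6), 4: (0, 2, 4)}
print(bratley(tasks))  # -> [4, 2, 3, 1]
```

On the slide's task set the first feasible schedule found is σ1 = {τ4, τ2, τ3, τ1}.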
53

Spring algorithm [Stankovic & Ramamritham 87]

Heuristic search (backtracking is possible):

1. The schedule for a set of N tasks is constructed in N steps.
2. The search is driven by a heuristic function H.
3. At each step the algorithm selects the task that minimizes the
   heuristic function.
54

Spring algorithm [Stankovic & Ramamritham 87]

Heuristic functions

Examples of heuristic functions:

    H = ri  →  FCFS
    H = Ci  →  SJF
    H = Di  →  DM
    H = di  →  EDF

Composite heuristic functions:

    H = w1 ri + w2 Di
    H = w1 Ci + w2 di
    H = w1 Vi + w2 di
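The greedy step behind these heuristics can be sketched in a few lines. This Python fragment is an illustration with assumed names (no backtracking, so it corresponds to Spring's O(N²) greedy pass, not the full search):

```python
# Sketch of Spring's greedy construction: at each of the N steps pick
# the task that minimizes a pluggable heuristic H. Plugging in H = d_i
# reproduces EDF order, H = C_i reproduces SJF, and weighted sums mix
# several criteria.

def spring_order(tasks, H):
    """tasks: dict id -> attribute dict; H: attribute dict -> cost."""
    remaining, order = dict(tasks), []
    while remaining:
        i = min(remaining, key=lambda k: H(remaining[k]))
        order.append(i)
        del remaining[i]
    return order

tasks = {1: {"C": 3, "d": 9}, 2: {"C": 1, "d": 4}, 3: {"C": 2, "d": 7}}
print(spring_order(tasks, lambda t: t["d"]))             # EDF: [2, 3, 1]
print(spring_order(tasks, lambda t: t["C"]))             # SJF: [2, 3, 1]
print(spring_order(tasks, lambda t: 2*t["C"] + t["d"]))  # composite
```

Swapping H changes the policy without touching the search skeleton, which is exactly the point of a heuristic-driven scheduler.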
55

Spring algorithm [Stankovic & Ramamritham 87]

Heuristic functions

Possibility to handle precedence constraints through an eligibility
factor Ei:

    Ei = ∞  if τi is not eligible (some predecessor is not completed)
    Ei = 1  if τi is eligible

Heuristic functions:

    H = Ei (w1 ri + w2 Di)
    H = Ei (w1 Ci + w2 di)
56

Spring algorithm [Stankovic & Ramamritham 87]

Heuristic algorithm (backtracking is possible): at each of the N steps
the task minimizing H is selected.

Complexity:
- Exhaustive search: O(N · N!)
- Heuristic search: O(N²)
- Heuristic with k backtracks: O(k N²)
57

Spring algorithm [Stankovic & Ramamritham 87]

Implementation

If a feasible schedule is not found, this does not mean that one does
not exist.

If a feasible solution is found, the schedule is stored into a dispatch
list whose entries contain: Task ID, start time, length, next.
58

Handling precedence constraints

Latest Deadline First (LDF) [Lawler 73]        1 | prec, sync | Lmax

Given a precedence graph, LDF constructs the schedule from the tail:
among the nodes with no unscheduled successors, it selects the task
with the latest deadline and places it last.

(Figure: a precedence graph over tasks A..F with their deadlines.) In
the example, the LDF schedule meets all deadlines, whereas under EDF
task D misses its deadline.
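The tail-first construction can be sketched directly. This Python fragment is an illustration: the deadlines and the precedence graph below are assumed for the demo (the slide's figure is not fully recoverable), and the function names are not from the slides.

```python
# Sketch of Latest Deadline First: build the schedule back to front.
# Among tasks whose successors are all already placed, repeatedly pick
# the one with the latest deadline and put it last.

def ldf(deadline, succ):
    """deadline: id -> d_i; succ: id -> set of successor ids."""
    remaining = set(deadline)
    tail = []  # built back to front
    while remaining:
        # tasks with no unscheduled successor
        free = [i for i in remaining if not (succ.get(i, set()) & remaining)]
        i = max(free, key=lambda x: deadline[x])  # latest deadline last
        tail.append(i)
        remaining.remove(i)
    return tail[::-1]

# Assumed example: A precedes B and C; B precedes D, E; C precedes F.
deadline = {"A": 2, "B": 5, "C": 3, "D": 5, "E": 4, "F": 6}
succ = {"A": {"B", "C"}, "B": {"D", "E"}, "C": {"F"}}
print(ldf(deadline, succ))  # -> ['A', 'C', 'B', 'E', 'D', 'F']
```

Choosing the latest deadline for the last slot leaves the tightest deadlines the earliest positions, which is what makes the greedy choice optimal for Lmax under precedence constraints.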
59

Handling precedence constraints

EDF* [Chetto & Chetto 89]        1 | prec, preem | Lmax

- Assumes that arrival times are known a priori.
- Transforms precedence constraints into timing constraints by
  modifying arrival times and deadlines based on the precedence graph.
- Applies EDF to the modified task set.

60

Handling precedence constraints

EDF* [Chetto & Chetto 89]        1 | prec, preem | Lmax

(Figure: A → B with execution times CA, CB.) The idea is to:
- postpone the arrival time of a successor;
- advance the deadline of a predecessor.
61

Handling precedence constraints

EDF* [Chetto & Chetto 89]        1 | prec, preem | Lmax

(Figure: A → B with execution times CA, CB.) The idea is to:

- postpone the arrival time of the successor:  r*B = rA + CA
- advance the deadline of the predecessor:     d*A = dB − CB
62

Handling precedence constraints

EDF* [Chetto & Chetto 89]        1 | prec, preem | Lmax

Arrival time modification

1. For all root nodes, set r*i = ri.
2. Select a task τi such that all its immediate predecessors have been
   modified; if no such task exists, exit.
3. Set r*i = max { ri , max(τk → τi) (r*k + Ck) }.
4. Repeat from step 2.
63

Handling precedence constraints

EDF* [Chetto & Chetto 89]        1 | prec, preem | Lmax

Deadline modification

1. For all leaves, set d*i = di.
2. Select a task τi such that all its immediate successors have been
   modified; if no such task exists, exit.
3. Set d*i = min { di , min(τi → τk) (d*k − Ck) }.
4. Repeat from step 2.
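The two passes above can be sketched together. This Python fragment is an illustration with assumed names and a small assumed task set; it follows the forward/backward propagation described on the two slides:

```python
# Sketch of the EDF* transformation: propagate release times forward
# and deadlines backward along the precedence graph, then schedule the
# modified task set with plain EDF.

def edf_star_transform(r, d, C, succ):
    """r, d, C: id -> release/deadline/WCET; succ: id -> successor set."""
    pred = {i: set() for i in r}
    for i, ss in succ.items():
        for j in ss:
            pred[j].add(i)
    # forward pass: r*_i = max{ r_i, max over predecessors (r*_k + C_k) }
    r2, todo = {}, list(r)
    while todo:
        i = next(x for x in todo if pred[x] <= r2.keys())
        r2[i] = max(r[i], max((r2[k] + C[k] for k in pred[i]),
                              default=r[i]))
        todo.remove(i)
    # backward pass: d*_i = min{ d_i, min over successors (d*_k - C_k) }
    d2, todo = {}, list(d)
    while todo:
        i = next(x for x in todo if succ.get(x, set()) <= d2.keys())
        d2[i] = min(d[i], min((d2[k] - C[k] for k in succ.get(i, set())),
                              default=d[i]))
        todo.remove(i)
    return r2, d2

# Assumed example: A precedes B.
r = {"A": 0, "B": 0}; d = {"A": 10, "B": 12}; C = {"A": 3, "B": 4}
r2, d2 = edf_star_transform(r, d, C, {"A": {"B"}})
print(r2["B"], d2["A"])  # r*_B = r_A + C_A = 3, d*_A = d_B - C_B = 8
```

After the transformation, plain EDF on (r*, d*) respects the precedence constraints automatically: a successor can never be released before its predecessor has had time to finish, and a predecessor always has the earlier modified deadline.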
Summary

    algorithm    activ  prec  preem  authors         complexity
    EDD          sync    N     *     Jackson '55     O(n log n)
    EDF          async   N     Y     Horn '74        O(n) per task
    NP-EDF       async   N     N     Jeffay '91      O(n log n)
    Tree search  async   Y     N     Bratley '71     O(n · n!)
    Spring       async   Y     N     Stankovic '87   O(n²)
    LDF          sync    Y     *     Lawler '73      O(n log n)
    EDF*         async   Y     Y     Chetto² '89     O(n log n)
rB dB
rA dA
r*B d*A
62
Handling precedence constraints
EDF* [Chetto & Chetto 89] 1 | prec, preem | Lmax
1. For all root nodes, set r*i = ri.
2. Select a task i such that all its immediate predecessors have been modified, else exit.
3. Set r*i = max { ri , max (r*k + Ck) }.
4. Repeat from line 2.
Arrival time modification
ki
63
Handling precedence constraints
EDF* [Chetto & Chetto 89] 1 | prec, preem | Lmax
1. For all leaves, set d*i = di.
2. Select a task i such that all its immediate successors have been modified, else exit.
3. Set d*i = min { di , min (d*k – Ck) }.
4. Repeat from line 2.
Deadline modification
ik
Summary
activ prec preem algorithm authors complexity
Sync
Asyn
Asyn
Asyn
Asyn
Sync
Asyn
N
N
N
Y
Y
Y
Y
*
*
N
N
N
Y
Y
EDD
EDF
NP-EDF
Tree search
Spring
LDF
EDF*
Jackson ‘55
Horn ‘74
Jeffay ‘91
Bratley ‘71
Stankovic ‘87
Lawler ‘73
Chetto2 ‘89
O(n logn)
O(n) task
O(n·n!)
O(n2)
O(n logn)
O(n logn)
O(n logn)
14/10/2015
11
61
Handling precedence constraints
EDF* [Chetto & Chetto 89] 1 | prec, preem | Lmax
A
B
A
B
CA
CB
The idea is to:
postpone the arrival time of a successor: r*B = rA + CA
advance the deadine of a predecessor: d*A = dB – CB
rB dB
rA dA
r*B d*A
62
Handling precedence constraints
EDF* [Chetto & Chetto 89] 1 | prec, preem | Lmax
1. For all root nodes, set r*i = ri.
2. Select a task i such that all its immediate predecessors have been modified, else exit.
3. Set r*i = max { ri , max (r*k + Ck) }.
4. Repeat from line 2.
Arrival time modification
ki
63
Handling precedence constraints
EDF* [Chetto & Chetto 89] 1 | prec, preem | Lmax
1. For all leaves, set d*i = di.
2. Select a task i such that all its immediate successors have been modified, else exit.
3. Set d*i = min { di , min (d*k – Ck) }.
4. Repeat from line 2.
Deadline modification
ik
Summary
activ prec preem algorithm authors complexity
Sync
Asyn
Asyn
Asyn
Asyn
Sync
Asyn
N
N
N
Y
Y
Y
Y
*
*
N
N
N
Y
Y
EDD
EDF
NP-EDF
Tree search
Spring
LDF
EDF*
Jackson ‘55
Horn ‘74
Jeffay ‘91
Bratley ‘71
Stankovic ‘87
Lawler ‘73
Chetto2 ‘89
O(n logn)
O(n) task
O(n·n!)
O(n2)
O(n logn)
O(n logn)
O(n logn)