20.01.05 Lecture 2, CS5270 1
The Real Time Computing Environment I
CS 5270 Lecture 2
A Conceptual Framework

• The outside view:
  – Closed system: verification
  – Open system: synthesis, schedulability analysis, …
• The inside view:
  – Architecture
  – Interrupts
  – Task scheduling
  – Embedded software
• Modeling, analysis, and verification are important at all layers!
The Closed System View

(Diagram: the computing system and the plant in a closed loop; the plant is sensed, and the computing system actuates it.)

Extract a model (Timed Automaton); verify whether this closed system meets a given specification.
The Open System View

(Diagram: the plant alone, with its sense and actuate interfaces left open.)

For this open system, synthesize a controller (a real-time computing system) such that the closed system meets a given specification.
The Outside View

• Timed Automata: the outside view.
  – Closed system: verification.
• Model extraction.
• Specification of properties.
• Verification methods/tools.
The Inside View

(Diagram: plant and computing system connected in a sense–actuate loop.)

What is inside the black box?
(Diagram: the computing system black box, with its multiple sense and actuate channels.)
Distributed Architecture
A Node

Often, multiple instances of the above for fault tolerance!
The Host Computer

(Diagram: a DSP processor, an ASIC, a timer, and memory connected by a bus.)
Tasks

(Diagram: tasks TASK1–TASK4 reading and updating shared data sets: the RT images!)
RT Images

• RT entity:
  – Some item of interest whose value changes over time.
  – Pressure, temperature, valve position, …
• Continuous RT entity:
  – Can be observed at any point in time (e.g., pressure).
• Discrete RT entity:
  – Can be observed only between specified occurrences of interesting events.
RT Images

• RT image:
  – The current picture of an RT entity.
  – <Name, time-of-observation, Value>
• Accuracy:
  – Value
  – Temporal
• <N, t, v> is ε-accurate if the value of N was v at some time in the interval (t − ε, t).
RT Images

• Suppose <N, v> is observed at time t and used at time t′.
• Then the maximum error (v′ − v) depends on the temporal accuracy (ε) and the maximum gradient of N during this interval.
• If the gradient is high, then ε must be small, and tasks using N must be scheduled often! (This is a fair but crude statement.)
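The bound in the bullets above can be sketched numerically: if the value of N changes at most `max_gradient` units per second, and the observation is at most ε seconds old, the error at the time of use is bounded by their product. A minimal sketch (the function name is illustrative, not from the lecture; the figures are the accelerator-pedal row of the accuracy table):

```python
# Worst-case value error of a stale RT image: bounded by max_gradient * eps,
# where eps is the temporal accuracy (age) of the observation.
# (max_value_error is an illustrative name, not from the lecture.)

def max_value_error(max_gradient: float, eps: float) -> float:
    return max_gradient * eps

# Accelerator pedal row of the accuracy table: gradient 100 %/sec,
# temporal accuracy 10 msec -> value error bounded by about 1 %.
error = max_value_error(100.0, 0.010)
print(error)
```

This is why a fast-changing RT entity forces both a tight ε and frequent scheduling of the tasks that read it.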
Accuracy

RT Image          Max. Change   V-Accuracy    T-Accuracy
Piston position   6000 RPM      0.1 degrees   3 µsecs
Acc. pedal        100 %/sec     1 %           10 msecs
Eng. load         50 %/sec      1 %           20 msecs
Oil temp          10 %/min      1 %           6 seconds
The Design Challenge

• Derive a model of the closed system (external).
  – Specification/requirements
  – Timing
  – Notion of physical time
• Design and implement a distributed, fault-tolerant, optimal real-time computing system so that the closed system meets the specification/requirements.
The Structural Elements

• Each computing node will be assigned a set of tasks to perform the intended functions.
• Task:
  – The execution of a (simple) sequential program.
     It reads the input data and the internal state of the task (including RT profiles).
     It terminates with the production of results and the updating of the internal state of the task.
Tasks

• The (real-time) operating system provides the control signal for each initiation of the task.
• Stateless task: no internal state at the time of initiation.
• Task with state: internal state is carried over from one initiation to the next.
Tasks

• Simple task:
  – No synchronization point within the task.
  – Does not block due to lack of progress by other tasks in the system.
  – But it can get interrupted (preempted) by the operating system.
  – Its total execution time can be computed in isolation.
  – The Worst Case Execution Time (WCET) of the task is taken over all possible relevant inputs.
     A correct estimate of the WCET is crucial for guaranteeing that real-time constraints will be met.
Complex Tasks

• Contains a blocking synchronization statement:
  – a "wait" semaphore operation,
  – a "receive" message operation.
• Must wait till another task has updated a common data structure:
  – Data dependency
  – Sharing
• Must wait for input to arrive.
• The WCET of a complex task cannot be computed in isolation.
Interfaces

• Interfaces:
  – The common boundary between two subsystems.
  – Design is essentially interface design.
  – Designing and implementing the interface "glue logic" consumes the major portion of the design cycle.
Interfaces

• Interface parameters:
  – Control signals flowing across the interface and the associated task invocations.
  – Temporal properties to be satisfied by the control signals and data values flowing across.
  – Functional relationships between input and output data.
Tasks

• There will be tasks that are triggered by exceptions, interrupts, and alarms.
• There will be tasks that need to be executed periodically.
• These tasks may have precedence relationships.
• These tasks may have deadlines.
• These tasks may share data structures.
• They may have to execute on the same processor.
• We must schedule!
Scheduling: Basic Concepts

• Scheduling policy:
  – The CPU has to execute, sequentially, a set of concurrent tasks.
     If T1 and T2 are both executable at time t, we must choose between T1 and T2.
• Scheduling algorithm:
  – The recipe (algorithm) which determines, at each time t, which task to execute.
• Dispatching:
  – Allocating the CPU to the task selected by the scheduling algorithm.
Scheduling: Basic Concepts

• Active task:
  – A task which can potentially execute on the CPU (which may or may not be available).
• Ready task:
  – An active task which is waiting for the CPU.
• Running task:
  – An active task in execution.
• Ready queue:
  – The queue in which ready tasks are kept.
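A priority-ordered ready queue and the dispatcher that pops from it can be sketched with a binary heap. The task names and priority values below are invented for illustration:

```python
import heapq

# Ready queue as a min-heap keyed on priority (lower number = higher priority).
ready_queue = []

def make_ready(priority, name):
    """An activated task enters the ready queue."""
    heapq.heappush(ready_queue, (priority, name))

def dispatch():
    """Allocate the CPU to the highest-priority ready task."""
    _priority, name = heapq.heappop(ready_queue)
    return name

make_ready(3, "logger")
make_ready(1, "sensor_read")    # highest priority
make_ready(2, "control_loop")
print(dispatch())  # -> sensor_read
```

Both insertion and dispatch are O(log n) in the number of ready tasks, which is one reason heaps are a common choice for this structure.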
Scheduling: Basic Concepts

• Preemption:
  – Tasks may be activated dynamically; the time of activation is not determined in advance.
  – If the task activated at time t is more important (has a higher priority) than the running task:
     the running task is interrupted and inserted in the ready queue.
• Preemption is needed for:
  – Exception-handling tasks.
  – Tasks with different levels of criticality.
  – Improving system responsiveness, throughput, utilization, etc.
The Ready Queue
Schedules

• Task set J = {J1, J2, …, Jn}.
• A schedule assigns at each time t one task to the processor, so that each task is eventually completed.
• A schedule can be preemptive.
• Schedules will have to perform context switching.
• A schedule is feasible if all tasks can be completed while satisfying the given constraints.
• A task set is schedulable if there is at least one scheduling algorithm which produces a feasible schedule.
Schedule

(Figure: an example schedule on a timeline from 0 to 9; which task is executing at t = 1.5 and at t = 7?)
Schedule with Preemption

(Figure: a preemptive schedule on a timeline from 0 to 9.5; which task is executing at t = 2 and at t = 5, and where do the context switches occur?)
Task Constraints

• Timing constraints
• Precedence constraints
• Resource constraints
Timing Constraints

• Timing constraints:
  – A task should meet its deadline.
     Hard
     Soft
• Relevant parameters for the task Ji:
  – arrival time ai
     also called the request time or release time.
  – computation time Ci
     the time needed to execute Ji (without interruption).
Timing Parameters

• deadline di
  – the time before which Ji must be completed.
• start time si
• finishing time fi
• value (priority?) vi
  – the relative importance of Ji.
Basic Timing Parameters

Relative deadline: Di = di − ai
Timing Parameters

• Pattern of activation:
  – Periodic task: regularly activated at a constant rate.
     Its instances (or jobs) correspond to the same task.
     Φi: the phase of task i, i.e., the activation time of the first instance of the periodic task i.
     Ti: the period of the task.
     Di: the relative deadline.
     Often one assumes Di = Ti.
  – Aperiodic task:
     Same as a periodic task, but the activation times are NOT periodic.
Periodic/Aperiodic Task
Task Constraints

• Timing constraints
• Precedence constraints
• Resource constraints
Precedence Constraints

• Precedence constraints:
  – Tasks cannot be executed in an arbitrary order:
     data dependencies,
     control strategy.
• Task graphs:
  – Used instead of task sets.
  – Nodes are tasks.
  – Edges capture precedence.
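A legal execution order for a task graph is exactly a topological order of its nodes. A minimal sketch using Kahn's algorithm; the four-task pipeline below is an invented example, not one from the lecture:

```python
from collections import deque

# Task graph: an edge a -> b means a must finish before b starts.
edges = {"sense": ["filter"], "filter": ["control"],
         "control": ["actuate"], "actuate": []}

def topological_order(graph):
    """Return an execution order respecting all precedence edges (Kahn's algorithm)."""
    indeg = {n: 0 for n in graph}
    for n in graph:
        for m in graph[n]:
            indeg[m] += 1
    queue = deque(n for n in graph if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in graph[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

print(topological_order(edges))  # -> ['sense', 'filter', 'control', 'actuate']
```

If the algorithm terminates before emitting every node, the graph has a cycle, i.e., the precedence constraints are unsatisfiable.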
Task Graph
Task Constraints

• Timing constraints
• Precedence constraints
• Resource constraints
Resource Constraints

• Resource:
  – A software structure used by a task during its execution.
  – A data structure, variables, an area of main memory, a file, a piece of code, a set of registers of a peripheral device.
• Shared resource:
  – Used by more than one task.
• Exclusive resource:
  – No simultaneous access.
  – Requires mutual exclusion.
  – The operating system must provide a synchronization mechanism to ensure sequential access.
Critical Section

• Critical section:
  – A piece of code belonging to a task, executed under mutual exclusion constraints.
• Mutual exclusion is enforced by semaphores:
  – wait(s)
     the caller is blocked if s = 0.
  – signal(s)
     s is set to 1 when signal(s) executes.
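The wait(s)/signal(s) pair maps directly onto acquiring and releasing a binary semaphore. A minimal sketch using Python's threading module, with a shared counter standing in for the exclusive resource (the task bodies and counts are invented):

```python
import threading

s = threading.Semaphore(1)   # s = 1: resource free
counter = 0                  # the exclusive resource

def task(increments):
    global counter
    for _ in range(increments):
        s.acquire()          # wait(s): blocks while s = 0
        counter += 1         # critical section: exclusive access
        s.release()          # signal(s): s back to 1, one waiter may proceed

threads = [threading.Thread(target=task, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 40000, since every increment ran under mutual exclusion
```

Without the acquire/release pair, interleaved read-modify-write sequences could lose updates; the semaphore serializes access to the critical section.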
Structure of Critical Sections.
Wait State

• A task waiting for an exclusive resource is blocked on that resource.
• Tasks blocked on the same resource are kept in a wait queue associated with the semaphore protecting the resource.
• A task in the running state executing wait(s) on a locked semaphore (s = 0) enters the waiting state.
• When a task currently using the resource executes signal(s), the semaphore is released.
• When a task leaves its waiting state (because the semaphore has been released) it goes into the ready state.
  – Why not enter the running state?
Waiting State
Blocking via Exclusive Resource

• J1 has a higher priority than J2.
• Preemption is in play.
• Only one processor is available.
Multiprocessor Settings
Scheduling Problem

• Task set {J1, J2, …, Jn}
• Processors {P1, P2, …, Pm}
• Resources {R1, R2, …, Rs}
• Timing constraints
• Precedence constraints
• Resource constraints
• Problem: assign processors and resources to tasks so that all the tasks can be finished under the imposed constraints.
Scheduling Problem

• The general problem (in fact, various simpler versions of it) is NP-complete.
• There is a non-deterministic Turing machine TM and a polynomial in one variable p(n) (e.g., 8n^3 + 5n + 6) such that, for each problem instance of size n (in binary representation!), TM determines whether a schedule exists and, if so, outputs one in at most p(n) steps.
• Any non-deterministic polynomial-time problem can be transformed in deterministic polynomial time into the general scheduling problem.
• Only exponential-time deterministic algorithms are known.
Scheduling Problem

• Algorithm1: O(n)
• Algorithm2: O(5^n)
• Each computation step: 1 µsec.
• n = 30
• Algorithm1: 30 µsecs.
• Algorithm2: 30,000,000 years!
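The arithmetic can be checked directly, assuming each computation step takes 1 µsec (with that assumption, the exponential algorithm's figure works out to roughly 30 million years):

```python
# Running time of an O(n) vs an O(5^n) algorithm for n = 30,
# assuming each computation step takes 1 microsecond.
n = 30
step_sec = 1e-6

linear_sec = n * step_sec              # 30 steps
exponential_sec = (5 ** n) * step_sec  # 5^30 ~ 9.3e20 steps

years = exponential_sec / (3600 * 24 * 365)
print(f"O(n):   {linear_sec * 1e6:.0f} microseconds")
print(f"O(5^n): {years:.2e} years")    # on the order of 3e7 years
```

The point of the slide survives any reasonable choice of step time: polynomial algorithms finish instantly at this scale, exponential ones never do.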
Scheduling Problems

• We must find imperfect but efficient solutions to scheduling problems.
• A great variety of algorithms exist:
  – various assumptions,
  – different complexities,
  – different pragmatic content.
• Optimal scheduling algorithm:
  – Minimizes a given cost function.
  – If there is no cost function: no algorithm in the same class can produce a feasible schedule when the optimal one cannot.
A Classic Example

• Rate Monotonic Scheduling (RMS).
  – Task set: {J1, J2, …, Jn}
  – Each task is periodic, with periods T1, T2, …, Tn.
  – Φi = 0 for each i.
  – Di = Ti for each i.
  – Preemption is allowed.
  – Only one processor.
  – No precedence constraints.
  – No shared resources.
RMS

• The RMS algorithm:
  – Assign a static priority to the tasks according to their periods:
     tasks with shorter periods have higher priorities.
  – Preemption policy:
     if Ti is executing and Tj arrives with a higher priority (shorter period), then preempt Ti and start executing Tj.
RMS Results

• RMS is optimal:
  – If a set of periodic tasks (satisfying the assumptions set out previously) is not schedulable under RMS, then no static-priority algorithm can schedule this set of tasks.
• RMS requires very little run-time processing.
• It is a static scheduling policy.
Processor Utilization Factor

• Task set = {T1, T2, …, Tn}
• Processor utilization factor:
  – each task contributes Ci / Ti;
  – UF = C1/T1 + C2/T2 + … + Cn/Tn.
• If this factor is GREATER than 1, then the task set cannot be scheduled.
  – Why?
• If UF ≤ 1, it may be schedulable.
• If UF ≤ Ulub, it is guaranteed to be schedulable.
Processor Utilization Factor

• Task set = {T1, T2, …, Tn}
• If UF ≤ Ulub, then the task set is guaranteed to be schedulable.
• Ulub = n(2^(1/n) − 1).
• For large n this is approximately 0.69 (ln 2).
• But if UF is greater than Ulub and not greater than 1, we must check explicitly whether the task set is schedulable (under RM).
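The utilization test above takes only a few lines to sketch. The three-task set below is invented for illustration; it lands safely under the bound:

```python
# RM utilization test from the slides:
# UF = sum(Ci/Ti); Ulub = n * (2**(1/n) - 1).
# UF <= Ulub guarantees RM schedulability; UF > 1 rules it out;
# in between, an exact check is needed.

def utilization(tasks):
    """tasks: list of (C, T) pairs."""
    return sum(c / t for c, t in tasks)

def rm_bound(n):
    return n * (2 ** (1 / n) - 1)

tasks = [(1, 4), (1, 8), (2, 16)]   # invented: UF = 0.25 + 0.125 + 0.125 = 0.5
uf, bound = utilization(tasks), rm_bound(len(tasks))
print(round(uf, 3), round(bound, 3))  # 0.5 <= 0.78 -> guaranteed RM-schedulable
```

For n = 3 the bound is about 0.78, and it decreases toward ln 2 ≈ 0.69 as n grows, matching the slide.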
EDF

• Earliest Deadline First:
  – Tasks with earlier deadlines have higher priorities.
  – Applies to both periodic and aperiodic tasks.
  – EDF is optimal among dynamic-priority algorithms.
  – A set of periodic tasks is schedulable with EDF iff the utilization factor is not greater than 1.
An Example

• {T1, T2}
• T1:
  – Period = 5
  – Computation time = 2
• T2:
  – Period = 7
  – Computation time = 4
An RMS Schedule?

Time-Overflow
The Example

• UF = 2/5 + 4/7 = 0.4 + 0.57 = 0.97
• Guaranteed to be schedulable under EDF!
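The example can be checked with a small integer-time simulator (a sketch, not a tool from the lecture): under RM, T2's first job overruns its deadline at t = 7, while under EDF every job meets its deadline over the hyperperiod of 35.

```python
# Integer-time scheduler simulation for the example task set
# T1 = (C 2, T 5), T2 = (C 4, T 7), with deadlines equal to periods.

def simulate(tasks, horizon, edf):
    """tasks: list of (C, T). Returns True iff no deadline is missed."""
    jobs = []  # each job: [absolute deadline, period, remaining time]
    for t in range(horizon):
        for c, p in tasks:
            if t % p == 0:
                jobs.append([t + p, p, c])       # release: deadline = t + T
        for job in jobs:
            if job[0] <= t and job[2] > 0:
                return False                     # deadline passed, work left
        pending = [j for j in jobs if j[2] > 0]
        if pending:
            # EDF: earliest absolute deadline; RM: shortest period (static)
            run = min(pending, key=(lambda j: j[0]) if edf else (lambda j: j[1]))
            run[2] -= 1                          # execute one time unit
    return all(j[2] == 0 for j in jobs)

print(simulate([(2, 5), (4, 7)], 35, edf=False))  # -> False (RM time-overflow)
print(simulate([(2, 5), (4, 7)], 35, edf=True))   # -> True  (EDF feasible)
```

This is the slide's point in executable form: UF ≈ 0.97 exceeds the RM bound Ulub(2) ≈ 0.83, so the RM guarantee does not apply (and RM in fact fails), but UF ≤ 1 makes EDF succeed.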
Resource Access Protocols

• Multiple tasks.
• Uniprocessor.
• Shared resources:
  – Need proper protocols for accessing shared resources.
  – Resource access protocols.
• Avoid priority inversion!