Complexities Arising in Real Systems

In the task model assumed so far,
» all tasks are independent,
» there is no penalty for preemption,
» preemptions may occur at any time, and
» an unlimited number of priority levels exists.

What do we do for cases where these assumptions do not hold?
Effect of Blocking on Schedulability

Tasks may have nonpreemptive regions due to system calls, critical sections, I/O calls, etc.

Suppose we know that bi is the maximum total duration for which each job of task Ti may be blocked by lower-priority tasks.
How does the scheduling analysis presented previously change?
Fixed-Priority Systems
Time-demand analysis. Similar to before, but the time-demand function is now:

    wi(t) = ei + bi + Σ_{k=1..i-1} ⌈t/pk⌉·ek,  for 0 < t ≤ min(Di, pi)
Schedulability Under EDF
Theorem: In a system where jobs are scheduled under EDF, a job Jk with relative deadline Dk can block a job Ji with relative deadline Di if and only if Dk > Di.
In an EDF-scheduled system, all deadlines will be met if the followingholds for every i = 1, 2, …, n:
    bi/min(Di, pi) + Σ_{k=1..n} ek/min(Dk, pk) ≤ 1
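A small sketch of this per-task check (function name and example values are my own; the condition is sufficient, as stated above, so a False result does not prove unschedulability):

```python
def edf_schedulable_with_blocking(tasks, b):
    """EDF density test with blocking.
    tasks: list of (p, e, D); b: blocking terms b_i.
    All deadlines are met if, for every task i,
        b_i/min(D_i, p_i) + sum_k e_k/min(D_k, p_k) <= 1."""
    total = sum(e / min(d, p) for (p, e, d) in tasks)
    return all(total + b_i / min(d_i, p_i) <= 1
               for (p_i, _, d_i), b_i in zip(tasks, b))
```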
Effect of Suspensions
Example Schedule: Three tasks, T1 = (3,0.5), T2 = (4,1), T3 = (6,2). Here's the system with no suspensions:

[Schedule figure: T1, T2, and T3 executing with no suspensions.]
Effect of Suspensions
Example Schedule: Three tasks, T1 = (3,0.5), T2 = (4,1), T3 = (6,2). Here's the system assuming J2,2 begins with a 2 time unit suspension:

[Schedule figure: the same task set with J2,2 suspended for 2 time units.]
T1 is completely unaffected by T2’s suspension.
T3’s worst-case response time lengthens from 4 to 5 time units.
Scheduling Analysis with Suspensions
Calculate a "blocking term" due to suspensions. We add the effect of Ti's own self-suspensions plus the potential delay caused by the suspensions of higher-priority tasks:

    bi(ss) = (maximum self-suspension time of Ti) + Σ_{k=1..i-1} min(ek, maximum self-suspension time of Tk)

(Fixed priorities are assumed here.)
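A direct transcription of this blocking term (the function name and the example self-suspension values are illustrative):

```python
def suspension_blocking(tasks, ss):
    """Suspension-induced blocking terms, fixed priorities assumed.
    tasks: list of (p, e, D) in decreasing-priority order.
    ss[i]: maximum self-suspension time of T_i.
    Returns b_i(ss) = ss[i] + sum over higher-priority T_k of
    min(e_k, ss[k])."""
    return [ss[i] + sum(min(e_k, ss[k])
                        for k, (_, e_k, _) in enumerate(tasks[:i]))
            for i in range(len(tasks))]
```

With the earlier three-task example and only T2 self-suspending for 2 time units, T3 picks up a blocking term of min(e2, 2) = 1.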
Context Switches

In reality, context switches don't take 0 time. In a preemptive system in which job priorities are fixed (e.g., RM, DM, or EDF), we can inflate job execution costs as follows:
• If each job of Ti self-suspends Ki times, add 2(Ki + 1)·CS to ei, where CS is the cost of one context switch.

In a scheme like LLF, in which a job's priority is dynamic, context-switching costs may be prohibitive.

A nonpreemptive scheme will context switch the least.
» Proofs that EDF is better than nonpreemptive EDF assume a cost of zero for preemptions!
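The inflation rule above is a one-liner; as a sketch (names are my own), with each self-suspension splitting a job into one more dispatched segment, each of which is switched in and out once:

```python
def inflate_for_context_switches(e, K, cs):
    """Inflate execution cost e for context-switch overhead.
    A job that self-suspends K times runs as K + 1 segments; each
    segment costs one switch in and one switch out, so add
    2 * (K + 1) * cs."""
    return e + 2 * (K + 1) * cs
```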
Limited Priority Levels
In reality, the number of priority levels in a system will be limited.
» Most real-time OSs have at most 256 priority levels.

As a consequence, we may have multiple tasks per priority level. Two issues:
» How does this impact scheduling analysis?
» How do we assign real priorities?
Scheduling Analysis

Most systems schedule same-priority tasks on a round-robin or FIFO basis. Assuming this, we can adjust our analysis as follows.

TDA: The time-demand function becomes:

    wi(t) = ei + bi + Σ_{Tk ∈ TE(i)} ek + Σ_{Tk ∈ TH(i)} ⌈t/pk⌉·ek,  for 0 < t ≤ min(Di, pi)

where TE(i) is the set of tasks other than Ti with the same priority as Ti, and TH(i) is the set of tasks with higher priority than Ti.
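As an illustrative sketch of this adjusted iteration (function name, priority encoding, and example values are my own; smaller numbers denote higher priority):

```python
import math

def tda_limited_priorities(tasks, prio, b, i):
    """Response time of task i with limited priority levels,
    assuming FIFO among equal-priority tasks.
    tasks: list of (p, e, D); prio[k]: priority level of T_k.
    Equal-priority tasks contribute one full execution each;
    strictly higher-priority tasks contribute ceiling terms."""
    p_i, e_i, d_i = tasks[i]
    limit = min(d_i, p_i)
    equal = sum(e for k, (_, e, _) in enumerate(tasks)
                if k != i and prio[k] == prio[i])
    t = e_i + b[i] + equal
    while t <= limit:
        w = e_i + b[i] + equal + sum(math.ceil(t / p) * e
                                     for k, (p, e, _) in enumerate(tasks)
                                     if prio[k] < prio[i])
        if w == t:
            return w
        t = w
    return None   # demand does not fit within min(D_i, p_i)
```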
Tick Scheduling

We have assumed so far that the scheduler is activated whenever a job is released. In many systems, the scheduler is activated only at clock interrupts. This is called tick scheduling, time-based scheduling, or quantum-based scheduling.

Two main consequences for scheduling:
• We must regard the scheduler itself as a high-priority periodic task.
• We may have additional blocking times due to the possibility that a job can be released between clock interrupts.
Caches and Virtual Memory

Caches are problematic for (at least) three reasons:
• Conditional branches make it difficult to predict which instructions and data will be needed next.
  – Generally, it is harder to predict whether a data item will be in the cache than whether an instruction will be in the cache.
• Preemptions and migrations can cause blocks brought into the cache to be removed.
• With shared caches on multicore platforms, it seems impossible to predict what's happening on other cores.
Mixing Real-Time and Non-Real-Time in Priority-Driven Systems
Sporadic and Aperiodic Tasks/Jobs

Sporadic task: Ti is specified by (φi, pi, ei, Di).
» pi is the minimum time between job releases.

Aperiodic jobs: non-real-time.
» Released at arbitrary times.
» Have no deadlines, and ei is unspecified.

How do we meet the following goals?
» Never miss the deadline of a sporadic task.
» Minimize either the response time of the aperiodic job at the head of the queue, or the average response time of all aperiodic jobs.
Background Scheduling

Sporadic tasks are scheduled using any priority-driven scheduling algorithm. Aperiodic jobs are scheduled in the background:
» Aperiodic jobs are executed only when there is no sporadic job ready to execute.
» Simple to implement and always produces correct schedules.
  • The lowest-priority task executes jobs from the aperiodic job queue.
» We can improve response times without jeopardizing deadlines by using a slack stealing algorithm to delay the execution of sporadic jobs as long as possible.
Server Scheduling
Sporadic tasks are scheduled using any priority-driven scheduling algorithm.
Aperiodic jobs are executed by a special server, which is given an execution budget es.
» Server jobs consume the budget as they execute.
» No server job can execute when the budget is depleted.
» The budget is regularly replenished according to an algorithm that depends on the type of server.
Example: Deferrable Server (DS)

Let the task TDS = (ps, es) be a deferrable server.

Replenishment Rule:
» The execution budget is set to es at time instants k·ps, for k ≥ 0.
» Note: Unused execution budget cannot be carried over to the next period.
The scheduler treats the deferrable server as a sporadic task that may suspend itself during execution (i.e., when the aperiodic queue is empty).
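A minimal sketch of this budget bookkeeping (the class and method names are my own, and scheduling itself is out of scope here; only the replenish/consume rules are modeled):

```python
class DeferrableServer:
    """Deferrable-server budget rules: the budget is reset to es at
    every multiple of ps (never accumulated across periods) and is
    consumed only while a server job executes."""

    def __init__(self, ps, es):
        self.ps, self.es = ps, es
        self.budget = es
        self.next_replenish = ps

    def advance_to(self, t):
        # Apply every replenishment instant k*ps that has passed.
        while t >= self.next_replenish:
            self.budget = self.es          # reset; unused budget is lost
            self.next_replenish += self.ps

    def consume(self, amount):
        # A server job cannot run past budget depletion.
        used = min(amount, self.budget)
        self.budget -= used
        return used
```

For example, with TDS = (3, 1), a server job may consume part of the budget early in a period, and the budget snaps back to 1 at t = 3 regardless of what was used.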
Comp 737, Fall 2014 Mixed Jobs - 17
DS with RM Scheduling

Another Example: Two tasks, T1 = (2,3.5,1.5), T2 = (6.5,0.5), and a deferrable server TDS = (3,1). Assume an aperiodic job Ja arrives at time t = 2.8 with an execution time of ea = 1.7.
[Schedule figure over [0, 12] showing T1, T2, TDS, and the TDS budget.]

The response time of the aperiodic job Ja is 3.7.
Notice that the processor demand created by the DS in the interval [65, completion of T1’s job] is twice what it would be if it were an ordinary sporadic task! This is because we preserve the bandwidth of the DS.
Double-Hit Example
[Schedule figure over [65, 77] showing T1, T2, TDS, and the TDS budget. T1 just makes it!]
Observation:
» Initial es term: the demand the server creates if it consumes a full budget at the beginning of the time-demand interval, at the very end of its period.
» Next es term: the demand the DS can produce in the remaining time t − es of the interval.
Together, these fully encompass the double-hit demand of the server.
TDA with a DS
    wi(t) = ei + bi + es + ⌈(t − es)/ps⌉·es + Σ_{k=1..i-1} ⌈t/pk⌉·ek,  for 0 < t ≤ min(Di, pi)
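A sketch of evaluating this time-demand function at a given t (function name and the example parameters are my own; the leading es together with the ceiling term is what captures the double hit):

```python
import math

def tda_demand_with_ds(t, task_i, hp_tasks, b_i, ps, es):
    """Time-demand w_i(t) including a deferrable server (ps, es).
    task_i: (p, e, D) of the task under analysis.
    hp_tasks: list of (p, e, D) for the higher-priority tasks."""
    p_i, e_i, d_i = task_i
    return (e_i + b_i
            + es + math.ceil((t - es) / ps) * es   # DS double-hit demand
            + sum(math.ceil(t / p) * e for (p, e, _) in hp_tasks))
```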
Total Bandwidth Server (TBS)

One way to reduce the response time of aperiodic jobs whose WCET is known in a deadline-driven system is to
» allocate a fixed (maximum) percentage, US, of the processor to serve aperiodic jobs,
» make sure the aperiodic load never exceeds this maximum utilization value, and
» when an aperiodic job comes in, assign it a deadline such that the demand created by all of the aperiodic jobs in any feasible interval never exceeds the maximum utilization US allocated to aperiodic jobs.
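The standard TBS deadline-assignment rule realizing this (the rule dk = max(rk, dk−1) + ek/US is the usual one from the TBS literature; the function name and example values are my own):

```python
def tbs_deadlines(arrivals, us):
    """Assign TBS deadlines to aperiodic jobs.
    arrivals: list of (release r_k, wcet e_k) in arrival order.
    Each job gets d_k = max(r_k, d_{k-1}) + e_k / U_S (with d_0 = 0),
    so the aperiodic demand in any interval never exceeds U_S."""
    d_prev, deadlines = 0.0, []
    for r, e in arrivals:
        d_prev = max(r, d_prev) + e / us
        deadlines.append(d_prev)
    return deadlines
```

For example, with US = 0.5, two unit-length jobs released at t = 0 and t = 1 get deadlines 2 and 4: the second job's deadline is measured from the first job's deadline, not from its own release.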
Schedulability with a TBS
A necessary and sufficient schedulability condition for a TBS in implicit-deadline systems:

Theorem: A system T of n independent, preemptable, sporadic tasks with relative deadlines equal to their periods is schedulable with a TBS if and only if

    UT + US ≤ 1,

where UT = Σ_{k=1..n} ek/pk is the processor utilization of the sporadic tasks and US is the processor utilization of the TBS.
Four Bandwidth-Preserving Servers

Deferrable Servers (1987).
» Oldest and simplest of the bandwidth-preserving servers.
» Static-priority algorithms by Lehoczky, Sha, and Strosnider.
» Deadline-driven algorithms by Ghazalie and Baker (1995).

Sporadic Servers (1989).
» Static-priority algorithms by Sprunt, Sha, and Lehoczky.
» Deadline-driven algorithms by Ghazalie and Baker (1995).

Total Bandwidth Servers (1994, 1995).
» Deadline-driven algorithms by Spuri and Buttazzo.

Constant Utilization Servers (1997).
» Deadline-driven algorithms by Deng, Liu, and Sun.
Real-Time Operating Systems (RTOSs)
Capabilities of Commercial RTOSs

We will look at these RTOSs:
» LynxOS, Nucleus RTOS, and VxWorks.

Each of these systems shares the following attributes:
» Compliant or partially compliant to the Real-Time POSIX API Standard:
  • Preemptive, fixed-priority scheduling.
  • Standard synchronization primitives (mutex and message passing).
  • Each also has its own API.
» Modular and scalable:
  • The kernel is small so that it can fit in ROM in embedded systems.
  • I/O, file, and networking modules can be added.
Shared Attributes (Continued)
» Fast and efficient:
  • Most are microkernel systems.
  • Low overhead.
  • Small context switch time, interrupt latency, and semaphore get/release latency: usually one to a few microseconds.
  • Nonpreemptable portions of kernel functions are highly optimized, short, and as deterministic as possible.
  • Many have system calls that require no trap: applications run in kernel mode.
» Support split interrupt handling.
» Flexible scheduling:
  • All offer at least 32 priority levels (the minimum required by real-time POSIX).
  • Most offer 128 or 256 priority levels.
  • FIFO or RR scheduling for equal-priority threads.
  • Can change priorities, but EDF scheduling is not supported.
Shared Attributes (Continued)
» Relatively high clock and timer resolution.
» No paging or swapping:
  • May not offer memory protection: often the kernel and all tasks execute in kernel mode, sharing one common address space.
  • Level of memory protection may be settable (ranging from "none" to "private virtual memory").
» Optional networking support:
  • Can be configured to support TCP/IP with an optional module.
LynxOS

Based on a microkernel, which provides:
» scheduling, interrupt dispatch, and synchronization.

Kernel Plug-Ins (KPIs) are lightweight multi-threaded kernel service modules that can be added so that:
» LynxOS can serve as a multi-purpose Unix OS.
» LynxOS can emulate Linux and UNIX system call APIs.
» LynxOS can be configured as a self-hosted system.
  • Embedded applications are developed on the same system on which they are deployed and run (simplifies development and debugging).
Thus, LynxOS also provides optional memory protection (with an MMU) and demand paging.
Nucleus RTOS

Designed for embedded applications.
» Reported to have been shipped in over 3 billion devices.
» Automation, consumer electronics, cell phones, navigation, medical devices, …
» Can be shrunk to fit into 13 KB of memory.
  • More recently: 2 KB of flash.

POSIX and ITRON support. Can be scaled up to larger systems.
» MMU support.
» USB, multimedia, networking, etc.
» Development must occur on a Windows or Linux host.
VxWorks

Used on the Mars Pathfinder:
» Shortly after landing, Pathfinder kept resetting. This was due to a classic uncontrolled priority inversion problem: the system was detecting a missed deadline and forcing a reset.
» VxWorks supports the Priority Inheritance Protocol (PIP), but it is disabled by default.
» A simple patch enabled the PIP and saved the day!
VxWorks is one of the few RTOSs that is a monolithic system rather than being based on a microkernel.
However, it does allow major functions, such as memory protection and priority inheritance, to be disabled.
Supports POSIX and most POSIX real-time extensions.
VxWorks (Continued)

Recent releases of VxWorks advertise multicore support. VxWorks provides virtual-to-physical address mapping using an MMU if one is available.
» Can make portions of memory non-cacheable.

The Eclipse-based Workbench tools create a cross-platform development environment:
» These tools execute on a host machine and communicate with the target over an I/O interface.
PREEMPT_RT

A patch against mainline Linux to make Linux more "real-time friendly." Key features:
» Reduced interrupt latencies.
» Priority inheritance.
» High-resolution timers.
» Split interrupt handling.

Many of these features have since been integrated into mainline Linux.
» The official goal was for PREEMPT_RT to eventually be completely merged into mainline Linux.
» Development on PREEMPT_RT, however, has slowed recently.
Linux with PREEMPT_RT

The Good:
» 100 total priority levels by default.
» Linux provides flexible scheduling policies and a limited notion of servers/containers.
» Well-supported and rich software development environment: standard compilers, standard libraries, file systems, etc.
» You have the source. So if you don't like something, change it!

The Bad:
» Compared to other embedded RTOSs, Linux has high resource requirements.

The Ugly:
» Linux is a monolithic kernel (not a microkernel) with sometimes-complex dependencies, and some parts aren't well-documented.
  • So, some changes are easier said than done.
Linux: Scheduling

100 priority levels. Linux provides the SCHED_FIFO, SCHED_RR, and SCHED_OTHER scheduling policies.
» SCHED_FIFO and SCHED_RR are fixed-priority algorithms for real-time processes.
» SCHED_OTHER is a time-sharing algorithm for non-real-time processes, which execute at a lower priority than real-time processes.

Limited bandwidth enforcement is available. Recently, SCHED_DEADLINE was added.
LITMUS^RT

LInux Testbed for MUltiprocessor Scheduling in Real-Time systems.

UNC's real-time patch against stock Linux.
» Focus: scheduling and synchronization.
» Does not address other areas (such as interrupt handling, memory footprint).
» A research tool, not a complete RTOS.

Future directions:
» Rebase on top of PREEMPT_RT.
» Power-aware? Frequency-scaling?
» Heterogeneous platforms?