Real-Time Operating System. What happens when we power on the PC?

  • Slide 1
  • Real Time Operating System
  • Slide 2
  • What happens when we power-on the PC??
  • Slide 3
  • Slide 4
  • Slide 5
  • Slide 6
  • Slide 7
  • Slide 8
  • Slide 9
  • Slide 10
  • Slide 11
  • Real Time Operating System
  • Slide 12
  • But first: what is an RTS?
  • Slide 13
  • What is a real-time system? A system is said to be real time if it is required to complete its work and deliver its services on time. A real-time system is one where the correctness of computing depends not only on the logical correctness of the result but also on the time at which that result is delivered. In practice it is difficult and costly to meet this requirement in every case, so real-time systems are classified into the following types.
  • Slide 14
  • Classification of RTS: real-time systems are classified as Soft Real Time, Firm Real Time, and Hard Real Time.
  • Slide 15
  • Hard Real Time: Missing an individual deadline results in catastrophic failure of the system, often with great financial loss. Examples of hard real-time systems: Air traffic control, Nuclear power plant control.
  • Slide 16
  • Firm Real Time: Missing a deadline results in unacceptable quality reduction. Technically there is no difference from hard real time, but economically the disaster risk is limited. Examples of firm real-time failures: Ignition failure in an automobile, Failure to open a safe.
  • Slide 17
  • Soft Real Time: The deadline may be missed and the system can recover from it; the reduction in system quality and performance stays at an acceptable level. Examples of soft real-time systems: Multimedia transmission and reception, Networking and telecom (mobile) networks, Websites and services, Computer games.
  • Slide 18
  • What is an RTOS?
  • Slide 19
  • REAL-TIME OPERATING SYSTEM (RTOS): The ability of the operating system to provide a required level of service within a bounded response time. Or: a real-time operating system (RTOS) is a program that schedules execution in a timely manner, manages system resources, and provides a consistent basis for developing application code.
  • Slide 20
  • RTOS Concepts. What is an RTOS? Multiple events are handled by a single processor. Events may occur simultaneously, so the processor must handle multiple, often competing events. There is a wide range of RTOS designs, from simple polling through multiple interrupt-driven systems. Each system activity is designated as a task. An RTOS is a multitasking system in which multiple tasks run concurrently; as the system shifts from task to task, it must remember the key registers of each task, called its context.
  • Slide 21
  • RTOS Concepts contd. The RTOS is responsible for all activities related to a task: scheduling and dispatching, inter-task communication, memory system management, input/output system management, timing, error management, and message management.
  • Slide 22
  • Misconceptions about RTOS. a) An RTOS must be fast: False. An RTOS depends on its deterministic behavior, not on its processing speed; the ability to respond to events within a deadline does not imply it is fast. b) An RTOS introduces considerable CPU overhead: An RTOS typically requires only 1% to 4% of CPU time. c) All RTOSes are the same: RTOSes are generally designed for the three types of real-time systems (hard, firm, and soft).
  • Slide 23
  • Features of RTOS. 1. Multitasking and Pre-emptibility: To support multiple tasks in real-time applications, an RTOS must be multitasking and pre-emptible. The scheduler should be able to preempt any task in the system and allocate the resource to the task that needs it most, even at peak load. 2. Task Priority: Preemption requires the ability to identify the task that needs a resource the most and give it control of that resource. In an RTOS, this is achieved by assigning each task an appropriate priority level.
  • Slide 24
  • Contd. 3. Reliable and Sufficient Inter-Task Communication Mechanisms: For multiple tasks to communicate in a timely manner and to ensure data integrity, reliable and sufficient inter-task communication and synchronization mechanisms are required. 4. Priority Inheritance: To allow applications with stringent priority requirements to be implemented, an RTOS must support priority inheritance (a low-priority task holding a resource temporarily inherits the priority of a higher-priority task blocked on it) and must provide a sufficient number of priority levels when using priority scheduling.
  • Slide 25
  • Contd. 5. Predefined Short Latencies: An RTOS needs accurately defined, short timing for its system calls. The behavior metrics are: Task switching latency: the time needed to save the context of the currently executing task and switch to another task; this should be short. Interrupt latency: the time elapsed between execution of the last instruction of the interrupted task and the first instruction of the interrupt handler. 6. Control of Memory Management: To ensure predictable response to an interrupt, an RTOS should provide a way for a task to lock its code and data into real memory.
  • Slide 26
  • Desktop OS versus RTOS
  • Slide 27
  • Difference between Desktop OS & RTOS
    Desktop OS | RTOS
    1. Large memory footprint | Small memory footprint
    2. Big or large user interface management | Limited user interface
    3. Network protocols are usually built in | Included only if required
    4. Programs have a definite exit | Tasks usually run in an infinite loop
    5. Always plug-and-play | Plug-and-play requires many changes
  • Slide 28
  • ISR in RTOS Environment
  • Slide 29
  • Interrupt Routine in an RTOS Environment. In a system, ISRs should function as follows: ISRs have higher priorities than the RTOS functions and the tasks, and an ISR should not wait for a semaphore, mailbox message, or queue message. There are three alternative ways in which systems respond to hardware source calls from interrupts.
  • Slide 30
  • First Way: Direct Call to an ISR. On an interrupt, the process running on the CPU is interrupted, and the ISR corresponding to that source starts executing (STEP 1). The hardware source calls the ISR directly. The ISR sends an ISR-enter message to the RTOS (STEP 2), informing the RTOS that an ISR has taken control of the CPU; the message is stored in memory allotted for OS messages (STEP 3). When the ISR finishes, it sends an ISR-exit message to the RTOS and control returns to the OS functions or a task (STEP 4).
  • Slide 31
  • 2nd Way: On an interrupt, the RTOS intercepts it first, then the OS calls the corresponding ISR.
  • Slide 32
  • Second Way: RTOS first intercepting the interrupt. On interruption of a task, say the k-th task, the RTOS first takes the hardware source call itself and initiates the corresponding ISR after saving the present processor status (or context). The ISR, during execution, can then post one or more outputs for the events and messages into mailboxes or queues.
  • Slide 33
  • RTOS first intercepting the interrupt. The ISR must be short and must simply post messages for another task. That task runs the remaining code whenever it is scheduled. The RTOS schedules only the tasks (processes) and switches contexts between tasks only; an ISR executes only during a temporary suspension of a task.
  • Slide 34
  • 3rd Way: RTOS first intercepting the interrupt, then the RTOS initiating the ISR, and then an IST.
  • Slide 35
  • RTOS first intercepting the interrupt, then the RTOS initiating the ISR, and then an IST. On an interrupt, the RTOS first takes the hardware source call (STEP 1) and initiates the corresponding ISR after finishing other sections and saving the processor status (or context) (STEP 2). The ISR services the device (STEP 3). The ISR, during execution, can then send one or more outputs for the events and messages into mailboxes or queues for the IST (Interrupt Service Thread) (STEP 4). Just before its end, the ISR unmasks (enables) further pre-emption from the same or other hardware sources (STEP 5).
  • Slide 36
  • The ISTs that have received messages from the ISRs execute (STEP 6) as per their priorities on return from the ISR. When no ISR or IST is pending execution, the interrupted task resumes on return (STEP 7).
  • Slide 37
  • RTOS calling the corresponding ISR, the ISR sending messages to the IST. An RTOS can provide two levels of interrupt service routines: a fast-level ISR (FLISR) and a slow-level ISR (SLISR). The FLISR can also be called the hardware interrupt ISR, and the SLISR the software interrupt ISR. The FLISR is called simply the ISR in an RTOS; the SLISR is called an interrupt service thread (IST).
  • Slide 38
  • Contd. The ISR must be short, run only critical and necessary code, and then simply send the initiate call or messages to the ISTs. The main function of an IST is to run the remaining code as per the schedule; ISTs are SLISRs running device-independent code as per the device priorities on signals (SWIs) from the ISR. The ISTs run in kernel space. The system priorities are, in order: ISR, then IST, then task.
  • Slide 39
  • Inter-process/Task/Thread Communication (IPC)
  • Slide 40
  • Introduction. What is IPC? Inter-process communication (IPC) is a set of methods or rules for the exchange of data among multiple threads in one or more processes. IPC may also be referred to as inter-thread communication and inter-application communication.
  • Slide 41
  • What are the different classifications of IPC? Synchronization communication; Communication with/without data; Uni-directional/bi-directional transfer; Structured/unstructured data; Destructive/non-destructive read.
  • Slide 42
  • Mutual Exclusion & Synchronization Communication: Semaphore (Binary semaphore, Counting semaphore), Mutex.
  • Slide 43
  • Communication with data: Structured data with destructive read: Message queue. Unstructured data, uni-directional, destructive read: Named pipe (FIFO), Unnamed pipe. Bi-directional, non-destructive read: Shared memory. Communication without data: Event register, Signal, Condition variable.
  • Slide 44
  • Semaphore Introduction. A semaphore is a variable or abstract data type that provides a simple but useful abstraction for controlling access by multiple processes or threads to a common resource. Semaphores are useful to synchronize the execution of multiple tasks and to coordinate access to a shared resource. An RTOS provides a semaphore object and associated semaphore management services. There are three types of semaphore: 1. Binary Semaphore 2. Counting Semaphore 3. Mutex.
  • Slide 45
  • Binary Semaphores. A binary semaphore has a value of either 0 or 1: 0 means the semaphore is unavailable, 1 means it is available. When a binary semaphore is first created, it can be initialized to either available or unavailable (1 or 0, respectively). When created as global resources, binary semaphores are shared among all tasks that need them; any task can release a binary semaphore, even one that did not initially acquire it.
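A minimal sketch of binary-semaphore behavior, using Python's `threading.Semaphore(1)` as a stand-in for an RTOS binary semaphore (the `worker` name and the `results` list are illustrative, not from the slides):

```python
import threading

# Binary semaphore created "available" (value 1); 0 would mean unavailable.
sem = threading.Semaphore(1)
results = []

def worker(name):
    sem.acquire()          # take the token; blocks while unavailable (0)
    results.append(name)   # region guarded by the semaphore
    sem.release()          # make it available again (any thread may release)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sorted(results) == [0, 1, 2]   # every task eventually acquired it
```

As the slide notes, `release()` here may be called by a thread that never acquired the semaphore; that is one thing that distinguishes a binary semaphore from a mutex.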
  • Slide 46
  • The State Diagram of a Binary Semaphore
  • Slide 47
  • Counting Semaphores. A counting semaphore uses a counter to allow it to be acquired or released multiple times. The semaphore count assigned when it is first created denotes the number of semaphore tokens it has initially. Counting semaphores are global resources that can be shared by all tasks that need them.
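A sketch of the token-counting idea with Python's `threading.Semaphore`, created with two tokens so at most two tasks hold the resource at once (all names here are illustrative):

```python
import threading
import time

# Counting semaphore with 2 tokens: at most two concurrent holders.
sem = threading.Semaphore(2)
lock = threading.Lock()
in_use = 0
peak = 0

def task():
    global in_use, peak
    sem.acquire()              # take a token; blocks when the count is 0
    with lock:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)           # hold the shared resource briefly
    with lock:
        in_use -= 1
    sem.release()              # return the token, waking one waiter

threads = [threading.Thread(target=task) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert peak <= 2               # the count bounded concurrent holders to 2
```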
  • Slide 48
  • The State Diagram of a Counting Semaphore
  • Slide 49
  • Mutual Exclusion (Mutex) Semaphores. A mutex is a special binary semaphore that supports ownership, recursive access, task deletion safety, and one or more protocols for avoiding problems inherent to mutual exclusion. The states of a mutex are locked (1) and unlocked (0); a mutex is initially created in the unlocked state (initial count value = 0).
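A sketch of two mutex properties from the slide, using Python's `threading.RLock`: ownership (only the acquiring thread may release) and recursive access (the owner may re-lock without deadlocking). Task deletion safety has no direct Python equivalent; the `update` function is illustrative.

```python
import threading

# RLock = reentrant mutex: owned by the thread that acquired it,
# and re-acquirable by that same owner.
mutex = threading.RLock()
log = []

def update(depth):
    with mutex:                 # re-acquired recursively by the same owner
        log.append(depth)
        if depth > 0:
            update(depth - 1)   # a plain Lock would deadlock here

update(2)
assert log == [2, 1, 0]         # three nested acquisitions, no deadlock
```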
  • Slide 50
  • Difference between Mutex & Semaphore. Assume we have a buffer of 4096 bytes and a process with two threads, T1 and T2. Thread T1 collects data and writes it to the buffer; thread T2 processes the collected data from the buffer. The objective is that both threads should not work on the buffer at the same time. Using a mutex: a mutex provides mutual exclusion; either T1 or T2 can hold the key (mutex) and proceed with its work. As long as the buffer is being filled by T1, T2 must wait, and vice versa; at any point in time, only one thread can work with the entire buffer. Using a semaphore: a semaphore is a generalized mutex. In place of a single buffer, we can split the 4 KB buffer into four 1 KB buffers (identical resources) and associate a counting semaphore with them. The producer and consumer can then work on different buffers at the same time. A mutex is associated with an entity or process, while a semaphore is associated with a resource.
  • Slide 51
  • The State Diagram of a Mutual Exclusion (Mutex) Semaphore
  • Slide 52
  • Typical Semaphore Use. Semaphores are useful to synchronize the execution of multiple tasks and to coordinate access to a shared resource. Examples: Wait-and-signal synchronization; Multiple-task wait-and-signal synchronization; Credit-tracking synchronization; Single shared-resource-access synchronization; Recursive shared-resource-access synchronization; Multiple shared-resource-access synchronization.
  • Slide 53
  • Wait and Signal Synchronization. Two tasks can communicate for the purpose of synchronization without exchanging data. Example: a binary semaphore can be used between two tasks to coordinate the transfer of execution control. The binary semaphore is initially unavailable. tWaitTask has the higher priority and runs first; it tries to acquire the semaphore and blocks. tSignTask then has a chance to run; it releases the semaphore and thus unblocks tWaitTask.
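The sequence above can be sketched with a Python semaphore created unavailable (count 0), so the waiter blocks until the signaler releases it; no data is exchanged. Task names loosely follow the slide's tWaitTask/tSignTask:

```python
import threading

# Semaphore created unavailable: acquire() blocks until release().
sem = threading.Semaphore(0)
order = []

def t_wait_task():
    order.append("waiting")
    sem.acquire()              # blocks: semaphore starts unavailable
    order.append("resumed")

def t_sign_task():
    order.append("signal")
    sem.release()              # unblocks tWaitTask

w = threading.Thread(target=t_wait_task)
s = threading.Thread(target=t_sign_task)
w.start()
s.start()
w.join()
s.join()

assert order[-1] == "resumed"  # the waiter resumed only after the signal
assert "signal" in order
```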
  • Slide 54
  • Wait and Signal Synchronization Between Two Tasks.
  • Slide 55
  • Wait and Signal Synchronization (when more than two tasks want the shared resource). To coordinate the synchronization of more than two tasks, use the flush operation on the task-waiting list of a binary semaphore. Example: the binary semaphore is initially unavailable; the tWaitTasks have higher priority and run first, each trying to acquire the semaphore and blocking. tSignTask then has a chance to run; it invokes a flush operation and thus unblocks all three tWaitTasks.
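Python semaphores have no flush operation, but `threading.Event.set()` wakes every blocked waiter at once, which models flushing a semaphore's task-waiting list; this is an analogy, not the RTOS call itself, and all names are illustrative:

```python
import threading

# Event.set() releases every waiter together, like a semaphore flush.
flush = threading.Event()
released = []
lock = threading.Lock()

def t_wait_task(n):
    flush.wait()               # all three tWaitTasks block here
    with lock:
        released.append(n)

waiters = [threading.Thread(target=t_wait_task, args=(i,)) for i in range(3)]
for t in waiters:
    t.start()
flush.set()                    # the "flush": unblock every waiter together
for t in waiters:
    t.join()

assert sorted(released) == [0, 1, 2]   # all three tasks were released
```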
  • Slide 56
  • Wait and Signal Synchronization
  • Slide 57
  • Message Queues
  • Slide 58
  • Message Queues Introduction (1/2). To provide inter-task data communication, kernels provide a message queue object and message queue management services. A message queue is a buffer-like object through which tasks and ISRs send and receive messages to communicate and synchronize with data. The message queue itself consists of a number of elements, each of which can hold a single message.
  • Slide 59
  • Message Queues Introduction (2/2). When a message queue is first created, it is assigned: a queue control block (QCB), a message queue name, a unique ID, memory buffers, a queue length, a maximum message length, and task-waiting lists. The kernel takes the developer-supplied parameters (queue length and maximum message length) to determine how much memory is required for the message queue.
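A sketch of queue creation using `queue.Queue` as a stand-in for an RTOS message queue: `maxsize` plays the role of the queue length fixed at creation. Python does not enforce a maximum message length, so that parameter has no direct equivalent; `QUEUE_LENGTH` and the sensor message are illustrative.

```python
import queue

QUEUE_LENGTH = 4
mq = queue.Queue(maxsize=QUEUE_LENGTH)      # length chosen at creation

mq.put({"sensor": "temp", "value": 21.5})   # each element holds one message
mq.put({"sensor": "temp", "value": 22.0})

assert mq.qsize() == 2
first = mq.get()                            # FIFO: oldest message first
assert first["value"] == 21.5
```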
  • Slide 60
  • The associated parameters, and supporting data structures
  • Slide 61
  • Message Queue States (1/2)
  • Slide 62
  • Message Queue States (2/2). When a task attempts to send a message to a full message queue, kernels implement one of two behaviors: the sending function returns an error code to that task, or the sending task is blocked and moved into the sending task-waiting list.
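Both kernel choices can be sketched with `queue.Queue`: returning an error maps to `put_nowait` raising `queue.Full`, and blocking the sender maps to a blocking `put` (bounded with a timeout here so the example terminates):

```python
import queue

mq = queue.Queue(maxsize=1)
mq.put("first")                 # the queue is now full

try:
    mq.put_nowait("second")     # choice 1: fail immediately
    overflow = False
except queue.Full:
    overflow = True             # the "error code returned" path

timed_out = False
try:
    mq.put("second", timeout=0.05)  # choice 2: block the sender
except queue.Full:
    timed_out = True            # still full when the timeout expired

assert overflow and timed_out
```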
  • Slide 63
  • Message Queue Content (1/3) Message queues can be used to send and receive a variety of data. Some examples: a temperature value from a sensor a bitmap to draw on a display a text message to print to an LCD a keyboard event a data packet to send over the network
  • Slide 64
  • Message Queue Content (2/3). When a task sends a message to another task, the message is normally copied twice: from the sender's memory area to the message queue's memory area, and from the message queue's memory area to the receiver's memory area. Copying data can be expensive in terms of performance and memory requirements.
  • Slide 65
  • Message copying and memory use for sending and receiving messages
  • Slide 66
  • Message Queue Content (3/3). Keep copying to a minimum in a real-time embedded system: keep messages small (length-wise), or pass a pointer to the data instead of the data itself. When a queue becomes full, there may be a need for error handling and user code for blocking the task(s); there may not be self-blocking.
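The pointer technique can be sketched in Python, where queues enqueue object references: the 4 KB payload below is never duplicated, which is analogous to posting a pointer to the message body rather than the body itself (`big_buffer` is an illustrative name):

```python
import queue

big_buffer = bytearray(4096)    # the message body stays where it is
mq = queue.Queue()

mq.put(big_buffer)              # only the reference is enqueued
received = mq.get()

assert received is big_buffer   # same object: zero copies were made
```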
  • Slide 67
  • Message Queue Storage (1/2). Message queues may be stored in a system pool or in private buffers. System pool: the messages of all queues are stored in one large shared area of memory. Advantage: saves memory, since only one shared pool is needed.
  • Slide 68
  • Message Queue Storage (2/2). Private buffers: separate memory areas for each message queue. Downside: uses more memory, since it requires enough reserved memory for the full capacity of every message queue that will be created. Advantage: better reliability; it ensures that messages do not get overwritten and that room is available for all messages.
  • Slide 69
  • Typical Message Queue Operations 1.Creating and deleting message queues 2.Sending and receiving messages 3.Obtaining message queue information
  • Slide 70
  • 1. Creating and deleting message queues. When created, message queues are treated as global objects and are not owned by any particular task. When creating a message queue, a developer needs to decide: the message queue length, the maximum message size, and the blocked tasks' waiting order.
  • Slide 71
  • 2.Sending and receiving messages
  • Slide 72
  • Sending Messages. Tasks can send messages with different blocking policies: not blocking (ISRs and tasks; if the message queue is already full, the send call returns with an error and the sender does not block), blocking with a timeout (tasks only), or blocking forever (tasks only). A blocked task is placed in the message queue's task-waiting list, in FIFO or priority-based order.
  • Slide 73
  • a) Sending messages in FIFO or LIFO order
  • Slide 74
  • b) FIFO and priority-based task-waiting lists
  • Slide 75
  • Receiving Messages (1/2). Tasks can receive messages with different blocking policies: not blocking, blocking with a timeout, or blocking forever. When the message queue is empty, the blocked task is placed in the message queue's task-waiting list, in FIFO or priority-based order.
  • Slide 76
  • FIFO and priority-based task-waiting lists
  • Slide 77
  • Receiving Messages (2/2). Messages can be read from the head of a message queue in two different ways: destructive read, which removes the message from the message queue's storage buffer after a successful read, and non-destructive read, which reads the message without removing it.
  • Slide 78
  • Obtaining Message Queue Information. Information that can be obtained about a message queue: the message queue ID, the task-waiting list queuing order (FIFO or priority-based), and the number of messages queued.
  • Slide 79
  • Broadcast Communication (1/3). Broadcasting allows developers to send a copy of the same message to multiple tasks. Message broadcasting is a one-to-many-task relationship: tBroadcastTask() sends the message on which multiple tSinkTask() instances are waiting.
  • Slide 80
  • Broadcast Communication (2/3)
  • Slide 81
  • Broadcast Communication (3/3)
  • Slide 82
  • Pipes
  • Slide 83
  • Pipe Introduction (1/2). Pipes provide unstructured data exchange and facilitate synchronization among tasks. A pipe is a unidirectional data exchange facility with two descriptors or functions: one end for reading and one for writing. Data is written via one descriptor or function and read via the other. The reader blocks when the pipe is empty, and the writer blocks when the pipe is full.
  • Slide 84
  • A common pipe is unidirectional: the data remains in the pipe as an unstructured byte stream and is read from the pipe in FIFO order.
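The two-descriptor arrangement can be sketched with `os.pipe()`, which returns exactly the pair the slide describes: a read end and a write end carrying an unstructured byte stream in FIFO order (assumes a POSIX-style system):

```python
import os

read_fd, write_fd = os.pipe()   # one descriptor per end

os.write(write_fd, b"hello ")
os.write(write_fd, b"pipe")

data = b""
while len(data) < 10:           # bytes arrive in the order they were written
    data += os.read(read_fd, 64)

assert data == b"hello pipe"    # FIFO byte stream, no message boundaries

os.close(read_fd)
os.close(write_fd)
```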
  • Slide 85
  • Pipe Introduction (2/2). A pipe is typically used to exchange data between a data-producing task and a data-consuming task. A pipe allows several writers along with multiple readers.
  • Slide 86
  • Pipes vs. Message Queues. A pipe does not store multiple messages: the data it stores is not structured, but consists of a stream of bytes. The data in a pipe cannot be prioritized; the data flow is strictly FIFO. Pipes support the powerful select operation, and message queues do not.
  • Slide 87
  • Pipe vs. Queue
    Pipe | Queue
    Pipes are a layer over message queues. | A message queue is managed by the kernel; all the queue memory is allocated at creation.
    A pipe is a technique for passing information from one process to another. | A message queue is a method by which processes can pass data using an interface.
    Two processes: one feeds the pipe with data while the other extracts the data at the other end. | A message queue can be created by one process and used by multiple processes that read/write messages to the queue.
    A pipe is a linear array of bytes, as is a file, but it is used solely as an I/O stream. | A queue is not a streaming interface.
    A pipe supports destructive reading (once you read it, the data vanishes). | Datagram-like behavior: reading an entry removes it from the queue; if you don't read the entire data, the rest is lost.
    A pipe is one-way communication only. | A queue has a maximum number of elements, and each element has a maximum size.
  • Slide 88
  • Pipe Control Blocks (1/2). Pipes can be dynamically created or destroyed. The kernel creates and maintains pipe-specific information in a pipe control block: a kernel-allocated data buffer for the pipe's input and output operations (the buffer size is fixed when the pipe is created), the current data byte count (the amount of readable data in the pipe), and the current input and output positions (which specify the next write/read position in the buffer).
  • Slide 89
  • Two task-waiting lists are associated with each pipe Pipe Control Blocks (2/2)
  • Slide 90
  • Pipe States. A pipe's state corresponds to the data transfer state between the reader and the writer of the pipe.
  • Slide 91
  • Signals
  • Slide 92
  • Signal Introduction (1/3) A signal is a software interrupt that is generated when an event has occurred. Signals notify tasks of events that occurred during the execution of other tasks or ISRs these events are asynchronous to the notified task
  • Slide 93
  • Signals
  • Slide 94
  • Signal Introduction (2/3). The number and type of signals defined is both system-dependent and RTOS-dependent. Each signal is associated with an event: unintentional, such as an illegal instruction encountered during program execution, or intentional, such as a notification from one task to another that it is about to terminate. A task can specify the particular actions to undertake when a signal arrives, but it has no control over when it receives signals.
  • Slide 95
  • Signal Introduction (3/3). When a signal arrives, the task is diverted from its normal execution path and the corresponding signal routine is invoked. This routine is variously called the signal routine, signal handler, asynchronous event handler, or asynchronous signal routine (ASR). Each signal is identified by an integer value: the signal number or vector number.
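A sketch of a handler registered against a signal number, using Python's `signal` module: raising the signal diverts execution into the handler. This assumes a Unix system where `SIGUSR1` exists; the `events` list and `handler` name are illustrative.

```python
import signal

events = []

def handler(signum, frame):     # the asynchronous signal routine (ASR)
    events.append(signum)       # records the signal number it received

signal.signal(signal.SIGUSR1, handler)
signal.raise_signal(signal.SIGUSR1)   # generate the software interrupt

assert events == [signal.SIGUSR1]     # the handler ran on that number
```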
  • Slide 96
  • Signal number or Vector number
  • Slide 97
  • Basic Design of RTOS
  • Slide 98
  • An embedded system with a single CPU can run only one process at an instant; that process may be an ISR, a kernel function, or a task. The RTOS: runs the user threads in kernel space so that they execute fast; provides effective handling of the ISRs, ISTs, tasks, and threads; allows disabling and enabling of interrupts in user-mode critical-section code; and provides memory allocation and de-allocation functions in fixed blocks of memory.
  • Slide 99
  • Contd. The RTOS provides for the use of semaphores by the tasks and for the shared resources in a task or OS functions. It provides for effective scheduling, running, and blocking of tasks when there are multiple tasks. I/O management with devices, files, mailboxes, pipes, and sockets becomes simple using an RTOS.
  • Slide 100
  • Design Principles in an RTOS Environment: 1. Design with ISRs and tasks. 2. Design each ISR with short code. 3. Design using interrupt service threads or interrupt service tasks. 4. Design each task with an infinite loop. 5. Design in the form of tasks for better and predictable response time control. 6. Design in the form of tasks for modular design. 7. Design in the form of tasks for data encapsulation. 8. Design taking care of the time spent in system calls. 9. Design with a limited number of tasks.
  • Slide 101
  • 1. Design with ISRs and Tasks. A hardware source call in the embedded system generates an interrupt. The ISR can post (send) a message to the RTOS along with parameters for the tasks. No ISR instruction should block any task; therefore, the ISR should not use mutex lock functions. Only the RTOS initiates actions according to the ISR-posted signal, semaphore, queue, or pipe message. On an interrupt (if it is not masked), the processor saves the current process context on the stack and executes the corresponding ISR. Handling of the interrupt is done by one of the three methods explained before.
  • Slide 102
  • Contd. The RTOS provides for nesting of ISRs: a running ISR can be interrupted by a higher-priority ISR. The higher-priority ISR starts executing, blocking the running lower-priority ISR after saving all related information on the stack. When the high-priority interrupt service completes, control returns to the low-priority interrupt after retrieving the saved context from the stack.
  • Slide 103
  • 2. Design each ISR with short code. Since ISRs have higher priorities than the tasks, ISR code should be kept short so that the tasks don't wait long to execute. A design principle is that the ISR code should be optimally short and that detailed computations be handed to an IST or task by posting a message or parameters for them.
  • Slide 104
  • 3. Design using Interrupt Service Threads or Interrupt Service Tasks (ISTs). In certain RTOSes, interrupts are serviced at two levels: fast-level ISRs and slow-level ISTs. The priorities are first the ISRs, then the ISTs, and then the tasks.
  • Slide 105
  • 4. Design each task with an infinite loop. Each task has a while loop that never terminates. A task waits for an IPC or a signal to start; the task that gets the signal or takes the IPC for which it is waiting runs from the point where it was blocked or preempted. With a preemptive scheduler, a high-priority task can be delayed for some period to let a low-priority task execute.
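The task-as-infinite-loop pattern can be sketched with a Python thread that blocks on a queue; a `None` sentinel is added only so this demo can shut down (a real RTOS task would loop forever), and all names are illustrative:

```python
import queue
import threading

mq = queue.Queue()
handled = []

def task():
    while True:                 # the task's "infinite" loop
        msg = mq.get()          # block until an IPC message arrives
        if msg is None:         # sentinel: end the demo run
            break
        handled.append(msg)     # resume work from where it was blocked

t = threading.Thread(target=task)
t.start()
mq.put("event-1")
mq.put("event-2")
mq.put(None)
t.join()

assert handled == ["event-1", "event-2"]   # processed in arrival order
```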
  • Slide 106
  • 5. Design in the form of tasks for better and predictable response time control. Tasks provide control over the response times of the different activities. Different tasks are assigned different priorities, and the tasks the system needs to execute with a faster response are separated out. For example, in a mobile phone there is a need for a faster response to the phone-call-receiving task than to the user key input.
  • Slide 107
  • 6. Design in the form of tasks for modular design. A system of multiple tasks makes the design modular; each task forms a module.
  • Slide 108
  • 7. Design in the form of tasks for data encapsulation. A system of multiple tasks encapsulates the code and data of one task from the others, instead of exposing them through shared global variables.
  • Slide 109
  • 8. Design taking care of the time spent in system calls. The expected time of a system call generally depends on the specific target processor of the embedded system and the memory access times.
  • Slide 110
  • 9. Design with a limited number of tasks. Limit the number of tasks and select an appropriate number of tasks to improve response times, gain better control over shared resources, and reduce memory requirements for stacks. Tasks that share data with a number of other tasks can be designed as one single task.