Assignment in Embedded System(CT74)


  • 7/30/2019 Assignment in Embedded System(CT74)

    1/24

    Assignment Embedded System

    1. Answer the following

a. Define embedded system. List common characteristics of embedded systems. What distinguishes it from general-purpose computing systems?

    Answer:

An embedded system is some combination of computer hardware and software, either fixed in capability or programmable, that is specifically designed for a particular function. Industrial machines, automobiles, medical equipment, cameras, household appliances, airplanes, vending machines and toys (as well as the more obvious cellular phone and PDA) are among the myriad possible hosts of an embedded system.

    Definition of embedded system:

An embedded system is a computer system designed to perform one or a few dedicated functions in real time and control a complete device. It is a system dedicated to an application (or applications), a specific part of an application or product, or a part of a larger system.

Typically an embedded system consists of a microcomputer with software in ROM/FLASH memory, which starts running a dedicated application as soon as power is turned on and does not stop until power is turned off. The program run by the processor is generally not reprogrammable by the end user.

A general-purpose definition of embedded systems is that they are devices used to control, monitor or assist the operation of equipment, machinery or plant. "Embedded" reflects the fact that they are an integral part of the system that includes hardware and mechanical parts.

    Characteristics of Embedded system:

An embedded system is characterized by the following:

    Dedicated functions, tasks, or applications

    Real-time response

    Generally not reprogrammable by the end user

    An integral part of a larger system that includes hardware and mechanical parts

b. How is the performance of a system measured? List the important parameters required to measure the performance of an embedded system.

Answer: Performance measurement is another important area of SPE (software performance engineering). This includes planning measurement experiments to ensure that results are both representative and reproducible. Software also needs to be instrumented to facilitate SPE data collection. Finally, once the performance-critical components of the software are identified, they are measured early and often to validate the models that have been built and also to verify earlier predictions.

    Figure: Key parameters influencing performance scenarios based on cycle counts

    Figure: Output from a Performance Calculator used to identify and track key performance scenarios

    Step 1: Determine where you need to be

Reject nonspecific requirements or demands such as "the system should be as fast as possible." Instead, use quantitative terms such as "packet throughput must be 600K packets per second for IP forwarding."

Understand potential future use cases of the system and design in the necessary scalability to handle them. Figure 9 shows an example of how to define these performance goals. To do this properly, the first step is to identify the system dimension: this is the context, and establishes the "what". Then the key attributes are identified; these establish how good the system "shall be". Finally, metrics are identified that determine how well we will know; these metrics should include a "should" value and a "must" value.

In the example, IP forwarding is the system dimension. For a networking application, IP forwarding is a key measurement focus for this application area. The key attribute is fast: the system is going to be measured based on how many packets can be forwarded through the system. The key metric is thousands of packets per second (KPPS). The system should be able to achieve 600 KPPS and must reach at least 550 KPPS to meet the minimum system requirements.

    Figure: Defining quantitative performance goals
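The should/must structure of these goals can be sketched as a tiny check against the 600/550 KPPS values from the example. The function and enum names below are illustrative, not from any real tool:

```c
/* Hypothetical helper: classify a measured throughput (in kilo-packets
 * per second) against the "should" and "must" goals from the example:
 * should = 600 KPPS, must = 550 KPPS. */
typedef enum { GOAL_MET, GOAL_MARGINAL, GOAL_FAILED } goal_result;

static const double SHOULD_KPPS = 600.0;  /* target value */
static const double MUST_KPPS   = 550.0;  /* minimum acceptable value */

goal_result classify_throughput(double measured_kpps)
{
    if (measured_kpps >= SHOULD_KPPS)
        return GOAL_MET;       /* meets the "should" target */
    if (measured_kpps >= MUST_KPPS)
        return GOAL_MARGINAL;  /* acceptable, but below the target */
    return GOAL_FAILED;       /* below the minimum requirement */
}
```

A measurement of, say, 560 KPPS would thus be flagged as marginal: shippable against the "must" value, but short of the "should" target.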


    Step 2: Determine where you are now

Understand which system use cases are causing performance problems. Quantify these problems using available tools and measurements. Figure 10 shows a debug architecture for a multicore SoC that can provide the visibility hooks into the device for performance analysis and tuning. The figure shows a strategy for using embedded profiling and analysis tools to provide visibility into a SoC in order to collect the necessary information to quantify performance problems in an embedded system.

Perform the appropriate assessment of the system to determine whether the software architecture can support the performance objectives. Can the performance issues be solved with standard software tuning and optimization methods? This is important because it is not desirable to spend many months tuning the application only to determine later that the goals cannot be met using these tuning approaches and that more fundamental changes are required. Ultimately, this phase needs to determine whether performance improvement requires re-design or whether tuning is sufficient.

    Figure: A debug architecture for a Multicore SoC that can provide the visibility hooks into the devicefor performance analysis and tuning

    Figure: A tools strategy for using embedded profiling and analysis tools to provide visibility into a SoC inorder to collect the necessary information to quantify performance problems in an embedded system.

    Step 3: Decide if you can achieve the objectives

    There are several categories of performance optimization, ranging from the simple to the more complex:

    Low-cost/low ROI techniques

Usually these techniques involve automatic optimization options. A common approach in embedded systems is the use of compiler options to enable more aggressive optimizations of the embedded software.

    High-cost/high ROI techniques

    Re-designing or re-factoring the embedded software architecture.

    Intermediate cost/intermediate ROI techniques

This category includes optimizing algorithms and data structures (for example, using an FFT instead of a DFT) as well as approaches like modifying software to use more efficient constructs.
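The FFT-versus-DFT example gives a feel for why algorithmic changes sit in their own ROI category. A rough operation-count comparison, using the standard textbook figures of n² complex multiplies for a direct DFT versus (n/2)·log₂n for a radix-2 FFT:

```c
/* Rough operation-count comparison between a direct DFT (O(n^2)
 * complex multiplies) and a radix-2 FFT (O(n log n)).  The counts are
 * the standard textbook figures, used here only to illustrate why an
 * algorithmic change can dwarf low-level tuning. */
unsigned long dft_multiplies(unsigned long n)
{
    return n * n;                 /* one multiply per (input, output) pair */
}

unsigned long fft_multiplies(unsigned long n)   /* n a power of two */
{
    /* (n/2) * log2(n) butterflies, each with one complex multiply */
    unsigned long log2n = 0;
    while ((1UL << log2n) < n)
        log2n++;
    return (n / 2) * log2n;
}
```

For a 1024-point transform this is 1,048,576 versus 5,120 multiplies: a roughly 200x reduction no amount of compiler-flag tuning can approach.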

    Step 4: Develop a plan for achieving the objectives

The first step is to Pareto-rank the proposed solutions based on return on investment. There are various ways to estimate resource requirements, including modeling and benchmarking. Once the performance targets have been determined, the tuning phase becomes iterative until the targets have been met. The figure shows an example of a process used in optimizing DSP embedded software. As this figure shows, there is a defined process for optimizing the application based on an iterative set of steps:

    Understand key performance scenarios for the application


    Set goals for key optimizations for performance, memory, and power

    Select processor architecture to match the DSP application and performance requirements

    Analyze key algorithms in the system and perform algorithmic transformation if necessary

    Analyze compiler performance and output for key benchmarks

Write out-of-box code in a high-level language (e.g., C)

    Debug and achieve correctness and develop regression test

Profile application and Pareto-rank hot spots

    Turn on low level optimizations with the compiler

    Run test regression, profile application, and re-rank

    Tune C/C++ code to map to the hardware architecture

    Run test regression, profile application, and re-rank

    Instrument code to get data as close as possible to the CPU using DMA and other techniques

    Run test regression, profile application, and re-rank

    Instrument code to provide links to compiler with intrinsics, pragmas, keywords

    Run test regression, profile application, and re-rank

    Turn on higher level of optimizations using compiler directives

    Run test regression, profile application, and re-rank

Re-write key inner loops in assembly language

    Run test regression, profile application, and re-rank

If goals are not met, re-partition the application between hardware and software and start over again. At each phase, if the goals are met, then document and save the code build settings and compiler switch settings.

    Figure: A Process for Managing the Performance of an embedded DSP application
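The "profile application and Pareto-rank hot spots" steps above amount to sorting profiler output by cost so that tuning effort goes where the return on investment is largest. A minimal sketch (the `hotspot` record and the cycle counts are hypothetical):

```c
#include <stdlib.h>

/* Hypothetical profile record: a function name and the cycle count
 * attributed to it by the profiler. */
struct hotspot {
    const char   *name;
    unsigned long cycles;
};

static int by_cycles_desc(const void *a, const void *b)
{
    const struct hotspot *ha = a, *hb = b;
    if (ha->cycles < hb->cycles) return  1;
    if (ha->cycles > hb->cycles) return -1;
    return 0;
}

/* Pareto-rank the profile in place: highest cycle count first. */
void pareto_rank(struct hotspot *profile, size_t n)
{
    qsort(profile, n, sizeof(profile[0]), by_cycles_desc);
}
```

After each tuning pass the regression tests are re-run, the application re-profiled, and this ranking recomputed, since fixing one hot spot typically promotes another to the top of the list.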

The first step is to gather data that can be used to support the analysis. This data includes, but is not limited to, time and cost to complete the performance analysis, software changes required, hardware costs if necessary, and software build and distribution costs.

The next step is to gather data on the effect of the improvements, which includes things like hardware upgrades that can be deferred, staff cost savings, etc.


Performance engineering can be applied to each phase of the embedded software development process. For example, the Rational Unified Process (RUP) has four key phases: Inception, Elaboration, Construction, and Transition (Figure 13).

RUP is an iterative software development process framework created by the Rational Software Corporation (now IBM). RUP is an adaptable process framework rather than a single concrete prescriptive process. It is intended to be tailored by software development teams, which select the elements of the process they need.

    c. Explain an embedded system design life cycle model with a suitable example.

    Answer:

    Embedded Systems Design

Approaching embedded systems design from a systems-engineering standpoint, several representations (embedded systems life-cycle models) can be applied to describe the life cycle of embedded systems design. Most of these representations are based on one, or a combination, of the following development models:

Big Bang Model: There is essentially no planning and there are no processes prepared before or during the development life cycle of the system. The name alludes to the cosmological Big Bang, in which the universe expands from a hot, dense initial state at some finite point in the past: here, development simply "explodes" from an initial idea with little or no imposed structure.

Code and Fix Model: The requirements are defined, but no strict processes are prepared before the beginning of development. It is an especially simple model, consisting mainly of two steps:

    Step 1: Writing the source code (development)

    Step 2: Find and Fix the bugs in that source code (Bug Fixing)

The Code and Fix model is used in the first phase of software development. It can be used for small systems that do not require maintenance.

Waterfall Model: There is a process for developing a system design in steps, where the outcome of one step feeds into the subsequent step. The waterfall development life-cycle model has its origins in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively expensive, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.

Spiral Model: There is a process for developing a system design in steps, and throughout the various steps feedback is obtained and fed back into the process. The spiral model (also known as the spiral life-cycle model or spiral development) is a software development process combining elements of both design and prototyping in stages, in an attempt to unite the advantages of top-down and bottom-up concepts. As a systems development method (SDM) used in information technology (IT), the spiral model combines characteristics of the waterfall model and the prototyping model.

    Embedded Systems Development Lifecycle Model


d. Draw and explain the block diagram of a two-level bus architecture in a microprocessor-based embedded system.

    Answer:

The arbitration methods described are typically used to arbitrate among peripherals in an embedded system. However, many embedded systems contain multiple microprocessors communicating via a shared bus; such a bus is sometimes called a network. Arbitration in such cases is typically built right into the bus protocol, since the bus serves as the only connection among the microprocessors. A key feature of such a connection is that a processor about to write to the bus has no way of knowing whether another processor is about to simultaneously write to the bus. Because of the relatively long wires and high capacitances of such buses, a processor may write many bits of data before those bits appear at another processor. For example, Ethernet and I2C use a method in which multiple processors may write to the bus simultaneously, resulting in a collision and causing any data on the bus to be corrupted. The processors detect this collision, stop transmitting their data, wait for some time, and then try transmitting again. The protocols must ensure that the contending processors don't start sending again at the same time, or must at least use statistical methods that make the chances of them sending again at the same time small. As another example, the CAN bus uses a clever address encoding scheme such that if two addresses are written simultaneously by different processors using the bus, the higher-priority address will override the lower-priority one. Each processor that is writing the bus also checks the bus, and if the address it is writing does not appear, then that processor realizes that a higher-priority transfer is taking place, and so that processor stops writing the bus.
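The CAN address-encoding scheme described above can be sketched as a wired-AND simulation: identifiers are sent MSB-first, a dominant 0 overrides a recessive 1, and a node that reads back a bit it did not send drops out. This toy model (standard 11-bit identifiers assumed, up to 64 contenders) shows that the numerically lowest, i.e. highest-priority, identifier always wins:

```c
#include <stddef.h>

/* Sketch of CAN-style non-destructive bitwise arbitration.  Each node
 * transmits its identifier MSB-first; a 0 ("dominant") bit overrides a
 * 1 ("recessive") bit on the wired-AND bus.  A node that writes a
 * recessive bit but reads back a dominant one has lost arbitration and
 * stops transmitting. */
#define CAN_ID_BITS 11

/* Returns the identifier that wins arbitration among n contenders
 * (1 <= n <= 64). */
unsigned can_arbitrate(const unsigned *ids, size_t n)
{
    unsigned still_in[64];        /* 1 = node still transmitting */
    size_t i;
    for (i = 0; i < n; i++)
        still_in[i] = 1;

    for (int bit = CAN_ID_BITS - 1; bit >= 0; bit--) {
        /* Wired-AND: bus is dominant (0) if any active node sends 0. */
        unsigned bus = 1;
        for (i = 0; i < n; i++)
            if (still_in[i] && !((ids[i] >> bit) & 1))
                bus = 0;
        /* Nodes that sent recessive while the bus is dominant drop out. */
        for (i = 0; i < n; i++)
            if (still_in[i] && bus == 0 && ((ids[i] >> bit) & 1))
                still_in[i] = 0;
    }
    for (i = 0; i < n; i++)
        if (still_in[i])
            return ids[i];
    return 0;   /* unreachable for n >= 1 */
}
```

Note that the winner's message is transmitted undamaged; unlike the Ethernet collision scheme, nothing has to be retried.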


e. Describe how wireless communication will be useful in an embedded system. Give a brief description of any two wireless protocols.

    Answer:

Wireless communications are revolutionizing the world around us. Using wireless communications to send and receive messages, browse the Internet, and access corporate databases from any location in the world has already become commonplace. Bluetooth, Ultra Wide Band, satellite, cellular, wireless LAN, fixed broadband, mobile computing, and WWAN communications offer the promise of ubiquitous applications with always-on capability anywhere, anytime. Wireless networks are essential for the unified, efficient and cost-effective exchange of electronic information within embedded component systems. By freeing the user from the cord, personal communications networks, wireless LANs, mobile radio networks and cellular systems harbor the promise of fully distributed mobile computing and communications, anytime, anywhere.

    "Embedded in the system of life" - A new definition for the Embedded Systems in the near future! Indeed,embedded system applications are extending their scope and reach to every aspect of life including consumerelectronics, medicine, communication, aviation, battlefield, transport, finance, education, environment monitoringetc.. Embedded Systems with Networking and Wireless Communication capability are now generating a new setof requirements and challenges in the field of Embedded System Design.

An Embedded Wireless Application - An embedded wireless application usually runs on a small portable device that has a microprocessor with limited speed, little memory and little or no hard disk. The most common example is a cellular mobile phone that holds contact information in memory. Being compact within a device requires autonomy: you cannot access a large enterprise network, so applications and resources are loaded locally; the system is practically built in. Both embedded and wireless systems require real-time performance. Some examples of wireless embedded applications are personal digital assistants, pagers, wireless mice, wireless keyboards, wireless laser printers and cordless bar code scanners. Bluetooth technology addresses the requirements of a few of these devices.

Target Microprocessor - Both wireless and embedded applications must target their software toward specific boards or microprocessors such as Intel, PowerPC, ARM, HP and MIPS. Firmware is low-level code that runs on the raw processor; this firmware is CPU specific. Software runs on the firmware and is relatively independent of the underlying hardware.

Operating Systems and Software - Examples of embedded operating systems are Wind River's VxWorks, Microsoft Windows Embedded XP and Microsoft Windows CE. Examples of wireless operating systems are PalmOS for PDAs, Nokia's Symbian OS, Microsoft Windows Mobile and Microsoft Windows CE. Note how Windows CE is both embedded and compact, which makes it a potential choice for a light, portable, embedded and wireless real-time system. VxWorks and other embedded real-time operating systems have wireless security and Web service features in their middleware layers.

Characteristics - To sum up the combined characteristics of embedded real-time wireless systems: they require a CPU of reduced speed running an OS whose kernel takes up little memory when loaded. The OS implements wireless protocols at the data-link, network, transport, session and application layers, and supports an application development environment built on a limited device configuration. Such a system is autonomous and communicates with a variety of devices at each layer of communication.


f. The design and configuration of caches can have a large impact on the performance and power consumption of a system. Justify.

    Answer:

Any embedded system contains both on-chip and off-chip memory modules with different access times. During system integration, the decision to map critical data onto faster memories is crucial. In order to obtain good performance while using smaller amounts of memory, the data buffers of the application need to be placed carefully in different types of memory. There have been huge research efforts aimed at improving the performance of the memory hierarchy. Recent advances in semiconductor technology have also made power consumption a limiting factor for embedded system design. Since SRAM is faster than DRAM, a cache memory built from SRAM is configured between the CPU and the main memory; the CPU can then access the main memory (DRAM) only via the cache. Cache memories are employed alongside the processors in virtually all computing applications. The size of cache allowed on a chip is limited by the large physical size and large power consumption of the SRAM cells used in cache memory. Hence, effective configuration of the cache for small size and low power consumption is very crucial in embedded system design. One published approach presents an optimal cache configuration technique for effective reduction of size with high performance: the methodology was tested in real-time hardware using an FPGA and validated with a matrix multiplication algorithm over various workload sizes, using Xilinx ISE 9.2i for simulation and synthesis, with the design implemented in VHDL.

    In today's embedded systems, memory represents a major bottleneck in terms of cost, performance, and power.

To overcome this, effective customization of memory is mandatory. Memory estimation and optimization are crucial in identifying the effect of an optimization methodology on the performance and energy requirements of the system, in turn obtaining a cost-effective embedded system [1]. Figure 1 shows the basic processor architecture. It consists of a main memory module (DRAM), whose performance is far behind that of the connected processor.

One of the solutions to reduce this bottleneck is to employ a cache memory (SRAM) between the main memory and the processor, as shown in Figure 2, since SRAM cells have faster access times than DRAM cells. This also helps improve overall system performance.
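The effect of cache geometry on performance can be illustrated with a toy direct-mapped cache model. The 16-line, 16-byte-line configuration below is an arbitrary assumption; the point is that sequential accesses miss once per line and then hit, which is exactly why data placement and cache configuration matter so much:

```c
#include <string.h>

/* Minimal direct-mapped cache model used to count hits and misses for
 * an address trace.  Real caches add associativity, write policies and
 * replacement logic; this is only an illustration. */
#define NUM_LINES 16
#define LINE_SIZE 16   /* bytes per line */

struct cache_model {
    unsigned long tags[NUM_LINES];
    int           valid[NUM_LINES];
    unsigned long hits, misses;
};

void cache_init(struct cache_model *c) { memset(c, 0, sizeof(*c)); }

void cache_access(struct cache_model *c, unsigned long addr)
{
    unsigned long line  = addr / LINE_SIZE;
    unsigned long index = line % NUM_LINES;   /* direct-mapped placement */
    unsigned long tag   = line / NUM_LINES;

    if (c->valid[index] && c->tags[index] == tag) {
        c->hits++;
    } else {
        c->misses++;              /* allocate the line on a miss */
        c->valid[index] = 1;
        c->tags[index]  = tag;
    }
}
```

Running a trace of 64 sequential byte addresses through this model yields 4 misses (one per 16-byte line) and 60 hits; a stride that maps conflicting data to the same index would instead miss on nearly every access, at a corresponding cost in time and energy spent on DRAM traffic.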

g. List the advantages of a real-time OS in an embedded system. Give an example of a process synchronization procedure in an RTOS for an embedded system.

    Answer:

    A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time applicationrequests.

A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is jitter. A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically, it is a hard real-time OS.
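Jitter in this sense can be quantified as the spread between the longest and shortest observed response times. A minimal sketch over an array of measurements (the microsecond units are an assumption for illustration):

```c
#include <stddef.h>

/* Jitter as the spread (max - min) of measured response times, here
 * assumed to be in microseconds.  A hard RTOS keeps this spread small
 * and, more importantly, bounded. */
unsigned long response_jitter(const unsigned long *times_us, size_t n)
{
    unsigned long min = times_us[0], max = times_us[0];
    for (size_t i = 1; i < n; i++) {
        if (times_us[i] < min) min = times_us[i];
        if (times_us[i] > max) max = times_us[i];
    }
    return max - min;
}
```

In practice such measurements would be gathered with a hardware timer around the interrupt-to-task-start path, and the worst case, not the average, is what a hard real-time guarantee is judged against.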


An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread-switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.

The advent of microprocessors has opened up several product opportunities that simply did not exist earlier. These intelligent processors have invaded and embedded themselves into all fields of our lives, be it the kitchen (food processors, microwave ovens), the living room (televisions, air conditioners) or the workplace (fax machines, pagers, laser printers, credit card readers). As the complexity of embedded applications increases, the use of an operating system brings many advantages. Most embedded systems also have real-time requirements, demanding the use of real-time operating systems (RTOS) capable of meeting the embedded system requirements. A real-time operating system allows real-time applications to be designed and expanded easily. The use of an RTOS simplifies the design process by splitting the application code into separate tasks. An RTOS allows one to make better use of system resources by providing valuable services such as semaphores, mailboxes, queues, time delays, time-outs, etc. This report looks at the basic concepts of embedded systems, operating systems and specifically real-time operating systems, in order to identify the features one has to look for in an RTOS before it is used in a real-time embedded application. Some of the popular RTOSes are discussed in brief, giving the salient features that make them suitable for different applications.

    2.

a. Explain with the help of an example how delayed market entry of an embedded product leads to losses.

Answer:


While constraining the hardware-software architecture is detrimental to software development cost, the corresponding effect on development time can be even more devastating. Time-to-market costs often outweigh design, prototyping, and production costs of commercial products. A recent survey showed that being six months late to market resulted in an average 33% profit loss, assuming a five-year product lifetime. Early market entry increases product yield, market share, and brand-name recognition. The figure below shows a model of demand and potential sales revenues for a new product (based on market research performed by Logic Automation, now owned by Synopsys). The un-shaded region of the triangle signifies revenue loss due to late market entry. If the product life cycle is short, being late to market can spell disaster.
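A commonly used simplification of this revenue triangle (found in standard embedded-systems texts) models the market window as rising linearly to a peak at time W and falling back to zero at 2W; a delayed entry then loses a fraction D(3W - D)/(2W²) of the maximum possible revenue. A sketch of that formula:

```c
/* Simplified market-window ("revenue triangle") model: the market
 * rises linearly to a peak at time W and falls to zero at 2W.  An
 * entry delayed by D loses the fraction D*(3W - D) / (2*W*W) of the
 * on-time revenue.  Note this is the textbook simplification; the 33%
 * survey figure quoted above is an empirical number, not this model. */
double revenue_loss_fraction(double delay, double half_window)
{
    double w = half_window;
    return (delay * (3.0 * w - delay)) / (2.0 * w * w);
}
```

For the five-year (60-month) lifetime in the survey, W = 30 months, so a six-month delay costs 6·(90 - 6)/(2·900) = 28% of the potential revenue under this model, in the same ballpark as the surveyed 33% average profit loss.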

    Cost-driven system-level design

To effectively address these system-level design challenges, product developers need a unified approach that considers the costs of both software and hardware options. This approach, which we call cost-driven system-level design, converges hardware and software design efforts into a methodology that improves cost, cycle time, and quality, and enhances design space exploration. We have developed such a methodology at Georgia Tech's Center for Signal and Image Processing under the auspices of the US Defense Advanced Research Projects Agency's RASSP (Rapid Prototyping of Application-Specific Digital Signal Processors) program. Aimed at COTS-based embedded systems, the methodology uses parametric cost and development-time estimation models to drive the design process. It seamlessly integrates a cost-driven architecture design engine (CADE) with a library-based co-simulation and co-verification environment for rapid prototyping. We use virtual prototypes [3] to perform hierarchical design verification, with VHDL (VHSIC Hardware Description Language) software models of the hardware executing a representation of the application code. Figure 4 diagrams the overall process flow. Our research focuses on demonstrating how to implement the shaded process steps (system definition and architecture definition) using virtual prototyping in an automated environment.

We believe that emphasizing cost-related issues benefits the cost-effectiveness of embedded micro-systems more in the early design stages than in the later stages. Figure 5 [4], which depicts costs committed versus costs incurred over the product life cycle, illustrates the rationale for our belief. Although the front-end design process typically involves less than 10% of the total prototyping time and cost, it accounts for more than 80% of a system's life-cycle cost. For this reason, our research focuses on the front-end design process. Our approach uses cost estimation models as well as performance estimation models to facilitate system-level design exploration early in the design cycle. We model the architecture selection process using mathematical programming formulations. We implement the models with commercial optimization packages, which efficiently solve complex problems, enabling the user to concentrate on problem-specific issues rather than data structures and implementation details. As output, CADE produces candidate architectures that we verify using VHDL performance-modeling technology.


    b. Explain with an example the principle of priority inversion in interrupts in an embedded system.

    Answer:

In computer science, priority inversion is a problematic scenario in scheduling in which a higher-priority task is indirectly preempted by a lower-priority task, effectively "inverting" the relative priorities of the two tasks.

This violates the priority model, in which high-priority tasks can only be prevented from running by higher-priority tasks, and only briefly by low-priority tasks that quickly complete their use of a resource shared by the high- and low-priority tasks.

    Example of a priority inversion

Consider a task L with low priority that requires a resource R. Now consider another task H with high priority that also requires resource R. If H starts after L has acquired resource R, then H has to wait until L relinquishes resource R before it can run.

Everything works as expected up to this point, but problems arise when a new task M (which does not use R) starts with medium priority during this time. Since R is still in use (by L), H cannot run. Since M is the highest-priority unblocked task, it will be scheduled before L. Since L has been preempted by M, L cannot relinquish R. So M will run until it is finished; then L will run, at least up to a point where it can relinquish R; and then H will run. Thus, in the scenario above, a task with medium priority ran before a task with high priority, effectively giving us a priority inversion.
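The L/M/H scenario can be traced with a toy tick-based scheduler. The arrival times and work amounts below are arbitrary; the point is only the completion order: with plain fixed priorities the tasks finish M, L, H (the inversion), while with priority inheritance (a common remedy, in which L temporarily runs at H's priority while holding R) they finish L, H, M.

```c
/* Discrete-time sketch of the scenario above: L (priority 1) acquires
 * shared resource R at t=0 and needs 3 ticks of work while holding it;
 * H (priority 3) arrives at t=1 and blocks on R; M (priority 2, which
 * does not use R) arrives at t=2.  Each tick, the highest-priority
 * unblocked task runs.  `order` receives the task letters in
 * completion order.  All numbers are illustrative. */
void simulate_inversion(int inherit, char order[4])
{
    const char name[3] = {'L', 'M', 'H'};
    int rem[3]    = {3, 2, 1};   /* remaining ticks of work per task */
    int arrive[3] = {0, 2, 1};   /* arrival times */
    int prio[3]   = {1, 2, 3};   /* static priorities (3 = highest) */
    int l_holds_r = 1;           /* L owns R until its work is done */
    int done = 0;

    for (int t = 0; done < 3 && t < 32; t++) {
        int best = -1, best_p = -1;
        for (int i = 0; i < 3; i++) {
            if (rem[i] == 0 || t < arrive[i]) continue;
            if (i == 2 && l_holds_r) continue;       /* H blocked on R */
            int p = prio[i];
            if (inherit && i == 0 && t >= arrive[2] && rem[2] > 0)
                p = prio[2];                 /* L inherits H's priority */
            if (p > best_p) { best_p = p; best = i; }
        }
        if (best < 0) continue;
        if (--rem[best] == 0) {
            order[done++] = name[best];
            if (best == 0) l_holds_r = 0;            /* L releases R */
        }
    }
    order[done] = '\0';
}
```

Without inheritance, M preempts L at t=2 and H, although it has the highest priority, finishes last; with inheritance, M cannot preempt L, so L releases R quickly and H runs next.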

In some cases, priority inversion can occur without causing immediate harm: the delayed execution of the high-priority task goes unnoticed, and eventually the low-priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high-priority task is left starved of the resource, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars lander Mars Pathfinder is a classic example of problems caused by priority inversion in real-time systems.

Priority inversion can also reduce the perceived performance of the system. Low-priority tasks usually have a low priority because it is not important for them to finish promptly (for example, they might be a batch job or another non-interactive activity). Similarly, a high-priority task has a high priority because it is more likely to be subject to


    strict time constraintsit may be providing data to an interactive user, or acting subject to real-time responseguarantees. Because priority inversion results in the execution of the low priority task blocking the high prioritytask, it can lead to reduced system responsiveness, or even the violation of response time guarantees.

    A similar problem called deadline interchange can occur within earliest deadline first scheduling (EDF).

    Solutions

The existence of this problem has been known since the 1970s, but there is no foolproof method to predict when it will occur. There are, however, many existing solutions, of which the most common ones are:

    Disabling all interrupts to protect critical sections

When disabled interrupts are used to prevent priority inversion, there are only two priorities: preemptible, and interrupts disabled. With no third priority, inversion is impossible. Since there is only one piece of lock data (the interrupt-enable bit), mis-ordered locking is impossible, and so deadlocks cannot occur. Since the critical regions always run to completion, hangs do not occur. Note that this only works if all interrupts are disabled. If only a particular hardware device's interrupt is disabled, priority inversion is reintroduced by the hardware's prioritization of interrupts. A simple variation, "single shared-flag locking", is used on some systems with multiple CPUs. This scheme provides a single flag in shared memory that is used by all CPUs to lock all inter-processor critical sections with a busy-wait. Inter-processor communications are expensive and slow on most multiple-CPU systems, so most such systems are designed to minimize shared resources. As a result, this scheme actually works well on many practical systems. These methods are widely used in simple embedded systems, where they are prized for their reliability, simplicity and low resource use. These schemes also require clever programming to keep the critical sections very brief. Many software engineers consider them impractical in general-purpose computers.

    A priority ceiling

With priority ceilings, the shared mutex (that runs the operating system code) has a characteristic (high) priority of its own, which is assigned to any task locking the mutex. This works well, provided the other high priority task(s) that try to access the mutex do not have a priority higher than the ceiling priority.

    Priority inheritance

Under the policy of priority inheritance, whenever a high priority task has to wait for some resource shared with an executing low priority task, the low priority task is temporarily assigned the priority of the highest-priority waiting task for the duration of its own use of the shared resource. This keeps medium priority tasks from pre-empting the (originally) low priority task, and thereby from affecting the waiting high priority task as well. Once the resource is released, the low priority task continues at its original priority level.

Random boosting

Ready tasks holding locks are randomly boosted in priority until they exit the critical section. This solution is used in Microsoft Windows.

    3.

    a. What is an optimization? Explain the different optimization opportunities available to customizesingle-purpose processors.

    Answer:

    Optimizing Custom Single Purpose Processors

Optimization of an SPP is necessary to meet the design challenges. This involves removing unnecessary states from the FSMD to simplify the design. Removal of redundant functional units is another approach. Thus, optimization is the task of making the design metric values the best possible.

    Optimizing GCD Program

This optimization can be carried out by

    Optimizing the initial program

This can be done by developing a more efficient algorithm (in terms of time and space complexity) and then converting it to an FSMD. For example, a more efficient algorithm for the GCD program is given below:

    int x, y, r;
    while (1) {
        while (!go_i);                        /* wait for the go signal */
        if (x_i >= y_i) { x = x_i; y = y_i; }
        else            { x = y_i; y = x_i; } /* x must be the larger number */
        while (y != 0) {
            r = x % y;
            x = y;


            y = r;
        }
        d_o = x;
    }

The above algorithm makes use of the modulo operation %, uses fewer steps, and is far more efficient in terms of time. The choice of algorithm can have the biggest impact on the efficiency of the designed processor.

    Optimizing the FSMD

A template-based procedure to convert a program into an FSMD may result in an inefficient FSMD, as this procedure creates many unnecessary states. Scheduling is the task of assigning operations from the original program to states in an FSMD. The scheduling obtained using the template-based method (shown in the figure below) can be improved: some states can be merged into one when there are no loop operations between them, and unwanted states whose outgoing transitions have constant values can be removed. The optimized (reduced) FSMD has only 6 states (down from 13), as shown in the figure.

In deciding the number of states in an FSMD, the consequent hardware constraint must also be considered. For example, suppose a program statement contains the operation a = b*c*d*e. Generating a single state for this operation would require three multipliers in the datapath, and multipliers are expensive. To avoid such usage, the operation can be broken down into smaller operations, t1 = b*c, t2 = d*e, and a = t1*t2, with each smaller operation having its own state. Then only one multiplier is needed in the datapath, since the multiplication operations can share it. While optimizing, time constraints must also be considered.

    Optimizing the data path

In this optimization process, redundant functional units can be eliminated by sharing them. With a number of RT components in the datapath, allocation is the task of choosing which RT components to use in the datapath, and binding is the task of mapping operations from the FSMD to the allocated components. Scheduling, allocation, and binding are highly interdependent; sometimes these tasks may have to be considered simultaneously.

    Optimizing FSM

This is done through state encoding and state minimization. State encoding is the task of assigning a unique bit pattern to each state in an FSM. CAD tools can be of great aid in searching for the best encoding, which decides the size of the state register and the size of the combinational logic.

State minimization is the task of merging equivalent states into a single state. Two states are equivalent if, for all possible input combinations, they generate the same outputs and transition to the same next state.


State merging, on the other hand, is different from state minimization: state merging, as used in optimizing an FSMD, may change the outputs.

    b. Describe Pipelining, Superscalar and VLIW Architectures.

    Answer:

A superscalar architecture is one in which several instructions can be initiated simultaneously and executed independently.

Pipelining allows several instructions to be executed at the same time, but they have to be in different pipeline stages at a given moment.

Superscalar architectures include all features of pipelining but, in addition, there can be several instructions executing simultaneously in the same pipeline stage.

    Superscalar architectures allow several instructions to be issued and completed per clock cycle.

    A superscalar architecture consists of a number of pipelines that are working in parallel.

Depending on the number and kind of parallel units available, a certain number of instructions can be executed in parallel.

In the following example a floating point and two integer operations can be issued and executed simultaneously; each unit is pipelined and can execute several operations in different pipeline stages.


    Limitations on Parallel Execution

The situations which prevent instructions from being executed in parallel by a superscalar architecture are very similar to those which prevent efficient execution on any pipelined architecture.

The consequences of these situations are more severe for superscalar architectures than for simple pipelines, because the potential for parallelism in superscalars is greater and, thus, a greater opportunity is lost.

    Limitations on Parallel Execution (contd)

    Three categories of limitations have to be considered:

    o Resource conflicts:

They occur if two or more instructions compete for the same resource (register, memory, functional unit) at the same time;

They are similar to structural hazards discussed with pipelines. Introducing several parallel pipelined units, superscalar architectures try to reduce a part of the possible resource conflicts.

    o Control (procedural) dependency

The presence of branches creates major problems in assuring optimal parallelism.

If instructions are of variable length, they cannot be fetched and issued in parallel; an instruction has to be decoded before the next one can be located. Superscalar techniques are therefore most efficiently applicable to RISCs, with fixed instruction length and format.

    o Data conflicts

Data conflicts are produced by data dependencies between instructions in the program. Because superscalar architectures provide great liberty in the order in which instructions can be issued and completed, data dependencies have to be considered with much attention.

    4.

    a. Compare the write ability and storage performance of popular memories.


    Answer:


    b. Implement a RS-232 interface with a microcontroller and explain the signals and commands in it.

    Answer:

RS-232 is one of the simplest forms of microcontroller networking, commonly known as serial or RS-232 communications. As you can see in Figure 1.2, RS-232 was designed to tie DTE (Data Terminal Equipment) and DCE (Data Communications Equipment) devices together electrically to effect bidirectional data communications between the devices.

An example of a DTE device is the serial port on your personal computer. Under normal conditions, the DTE interface on your personal computer asserts DTR (Data Terminal Ready) and RTS (Request To Send). DTR and RTS are called modem control signals. A typical DCE device interface responds to the assertion of DTR by activating a signal called DSR (Data Set Ready). The DTE RTS signal is answered by CTS (Clear To Send) from the DCE device. A standard external modem that you would connect to your personal computer serial port is a perfect example of a DCE device.

Let's look at the signals from a commented standards point of view.

1. Pin 1 (Protective Ground Circuit, AA). This conductor is bonded to the equipment frame and can be connected to external grounds if other regulations or applications require it.

Comment: Normally, this is either left open or connected to the signal ground. This signal is not found in the DTE 9-pin serial connector.

2. Pin 2 (Transmitted Data Circuit BA, TD). This is the data signal generated by the DTE. The serial bit stream from this pin is the data that's ultimately processed by a DCE device.

Comment: This is pin 3 on the DTE 9-pin serial connector. This is one of the three minimum signals required to effect an RS-232 asynchronous communications session.

3. Pin 3 (Received Data Circuit BB, RD). Signals on this circuit are generated by the DCE. The serial bit stream originates at a remote DTE device and is a product of the receive circuitry of the local DCE device. This is usually digital data that's produced by an intelligent DCE or modem demodulator circuitry.

Comment: This is pin 2 on the DTE 9-pin serial connector. This is another of the three minimum signals required to effect an RS-232 asynchronous communications session.

4. Pin 4 (Request To Send Circuit CA, RTS). This signal prepares the DCE device for a transmit operation. The RTS ON condition puts the DCE in transmit mode, while the OFF condition places the DCE in receive mode. The DCE should respond to an RTS ON by turning ON Clear To Send (CTS). Once RTS is turned OFF, it shouldn't be turned ON again until CTS has been turned OFF. This signal is used in conjunction with DTR, DSR and DCD. RTS is used extensively in flow control.

Comment: This is pin 7 on the DTE 9-pin serial connector. In simple 3-wire implementations this signal is left disconnected. Sometimes you will see this signal tied to the CTS signal to satisfy a need for RTS and CTS to be active signals in the communications session. You will also see RTS feed CTS in a null modem arrangement.

5. Pin 5 (Clear To Send Circuit CB, CTS). This signal acknowledges the DTE when RTS has been sensed by the DCE device and usually signals the DTE that the DCE is ready to accept data to be transmitted. Data is transmitted across the communications medium only when this signal is active. This signal is used in conjunction with DTR, DSR and DCD. CTS is used in conjunction with RTS for flow control.

Comment: This is pin 8 on the DTE 9-pin serial connector. In simple 3-wire implementations this signal is left disconnected. Otherwise, you'll see it tied to RTS in null modem arrangements or where CTS has to be an active participant in the communications session.

6. Pin 6 (Data Set Ready Circuit CC, DSR). DSR indicates to the DTE device that the DCE equipment is connected to a valid communication medium and, in some cases, indicates that the line is in the OFF HOOK condition. OFF HOOK is an indication that the DCE is either in dialing mode or in session with another remote DCE. When this signal is OFF, the DTE should be instructed to ignore all other DCE signals. If this signal is turned off before DTR, the DTE is to assume an aborted communication session.

Comment: This is pin 6 on the DTE 9-pin serial connector. DSR is sometimes used in a flow control arrangement with DTR. Some modems assert DSR when power to the modem is applied, regardless of the condition of the communications medium.

7. Pin 7 (Signal Common Circuit, AB). This conductor establishes the common-ground reference for all interchange circuits, except Circuit AA, protective ground. The RS-232-B specification permits this circuit to be optionally connected to protective ground within the DCE device as necessary.

Comment: This is pin 5 on the DTE 9-pin serial connector and is the only ground connection. This is the third wire of the minimal 3-wire configuration. Thus, an RS-232 asynchronous communications session can be effected with only three signals: TX (Transmit Data), RX (Receive Data) and signal ground.

8. Pin 8 (Data Carrier Detect Circuit CF, DCD). This pin is also known as Received Line Signal Detect (RLSD) or Carrier Detect (CD). This signal is active when a suitable carrier is established between the local and remote DCE devices. When this signal is OFF, RD should be clamped to the mark state (binary 1).

Comment: This is pin 1 on the DTE 9-pin serial connector. Normally in use only if a modem is in the communications signal path. You will also see this signal tied active in a null modem arrangement.

9. Pin 20 (Data Terminal Ready Circuit CD, DTR). DTR signals are used to control switching of the DCE to the communication medium. DTR ON indicates to the DCE that connections in progress shall remain in progress, and if no sessions are in progress, new connections can be made. DTR is normally turned off to initiate ON HOOK (hang-up) conditions. The normal DCE response to activating DTR is to activate DSR.

Comment: This is pin 4 on the DTE 9-pin serial connector. Unless you specify differently or run a program that controls DTR, usually it is present on the personal computer serial port as long as the personal computer is powered on. Occasionally you will see this signal used in flow control.

10. Pin 22 (Ring Indicator Circuit CE, RI). The ON condition of this signal indicates that a ring signal is being received from the communication medium (telephone line). It's normally up to the control program to act on the presence of this signal.

Comment: This is pin 9 on the DTE 9-pin serial connector. This signal follows the incoming ring to an extent. Normally, this signal is used by DCE auto-answer algorithms.

That is all that's needed, RS-232 signal-wise, to establish a session between a DTE and a DCE device. Now that you have a feeling for what each RS-232 signal does, let's review how the signals react to each other with respect to the transfer of data between a DTE and DCE device.

    a. Local DTE (personal computer, microcontroller, etc.) is powered up and DTR is asserted.

b. Local DCE (modem, data set, microcontroller, etc.) is powered up and senses the DTR from the local DTE.

c. Local DCE asserts DSR. If the DCE device is a modem, it goes off-hook (picks up the line). If a dial-up session is to be established, the DTE sends a dial instruction and phone number to the modem.

d. If the line is good and the other end (remote DCE) is ready or answers the dial-up from the local DCE, a carrier is generated/detected and the local and remote DCE devices assert DCD. The session is established.

    e. The transmitting DTE raises RTS.

    f. The transmitting DCE responds with CTS.

    g. The control program transmits or receives data.

To perform RS-232 asynchronous communications with microcontrollers, we must employ a voltage translation scheme of our own.

    5. Explain cache direct mapping, Fully associative and Set-associative mapping techniques.

    Answer:


Cache memory (also called buffer memory) is local memory that reduces waiting times for information stored in the RAM (Random Access Memory). In effect, the computer's main memory is slower than the processor. There are, however, types of memory that are much faster, but at a greatly increased cost. The solution is therefore to place this type of fast local memory close to the processor and to temporarily store the primary data to be processed in it.

The speed of the CPU is extremely high compared to the access time of main memory, so the performance of the CPU decreases due to the slow speed of main memory. To decrease this mismatch in operating speed, a small memory chip whose access time is very close to the processing speed of the CPU is attached between the CPU and main memory. It is called cache memory. Cache memories are accessed much faster than conventional RAM. They are used to store programs or data currently being executed, or temporary data frequently used by the CPU. Cache memory thus makes main memory appear faster and larger than it really is. It is very expensive to have a bigger cache, so its size is normally kept small.

    Mapping Memory Lines to Cache Lines - Three Strategies

As a working example, suppose the cache has 2^7 = 128 lines, each with 2^4 = 16 words. Suppose the memory has a 16-bit address, so that 2^16 = 64K words are in the memory's address space.

    Direct Mapping

Under this mapping scheme, each memory line j maps to cache line j mod 128, so the memory address looks like this:

    Tag (5 bits) | Line (7 bits) | Word (4 bits)

Here, the "Word" field selects one from among the 16 addressable words in a line. The "Line" field defines the cache line where this memory line should reside. The "Tag" field of the address is then compared with that cache line's 5-bit tag to determine whether there is a hit or a miss. If there's a miss, we need to swap out the memory line that occupies that position in the cache and replace it with the desired memory line.

E.g., suppose we want to read or write a word at the address 357A, whose 16 bits are 0011010101111010. This translates to Tag = 6, Line = 87, and Word = 10 (all in decimal). If line 87 in the cache has the same tag (6), then memory address 357A is in the cache. Otherwise, a miss has occurred and the contents of cache line 87 must be replaced by the memory line 001101010111 = 855 before the read or write is executed.

Direct mapping is the most efficient cache mapping scheme, but it is also the least effective in its utilization of the cache - that is, it may leave some cache lines unused.


Associative Mapping

This mapping scheme attempts to improve cache utilization, but at the expense of speed. Here, the cache line tags are 12 bits, rather than 5, and any memory line can be stored in any cache line. The memory address looks like this:

    Tag (12 bits) | Word (4 bits)

Here, the "Tag" field identifies one of the 2^12 = 4096 memory lines; all the cache tags are searched to find out whether or not the Tag field matches one of the cache tags. If so, we have a hit; if not, there's a miss and we need to replace one of the cache lines by this line before reading or writing into the cache. (The "Word" field again selects one from among the 16 addressable words within the line.)

For example, suppose again that we want to read or write a word at the address 357A, whose 16 bits are 0011010101111010. Under associative mapping, this translates to Tag = 855 and Word = 10 (in decimal). So we search all of the 128 cache tags to see if any one of them matches 855. If not, there's a miss and we need to replace one of the cache lines with line 855 from memory before completing the read or write. The search of all 128 tags in the cache is time-consuming. However, the cache is fully utilized, since none of its lines will be unused prior to a miss (recall that direct mapping may detect a miss even though the cache is not completely full of active lines).

Set-associative Mapping

This scheme is a compromise between the direct and associative schemes described above. Here, the cache is divided into sets of tags, and the set number is directly mapped from the memory address (e.g., memory line j is mapped to cache set j mod 64), as suggested by the diagram below. The memory address is now partitioned like this:

    Tag (6 bits) | Set (6 bits) | Word (4 bits)

Here, the "Tag" field identifies one of the 2^6 = 64 different memory lines in each of the 2^6 = 64 different "Set" values. Since each cache set has room for only two lines at a time, the search for a match is limited to those two lines (rather than the entire cache). If there's a match, we have a hit and the read or write can proceed immediately. Otherwise, there's a miss and we need to replace one of the two cache lines by this line before reading or writing into the cache. (The "Word" field again selects one from among the 16 addressable words inside the line.)

In set-associative mapping, when the number of lines per set is n, the mapping is called n-way set-associative. For instance, the above example is 2-way set-associative.

E.g., again suppose we want to read or write a word at the memory address 357A, whose 16 bits are 0011010101111010. Under set-associative mapping, this translates to Tag = 13, Set = 23, and Word = 10 (all in decimal). So we search only the two tags in cache set 23 to see if either one matches tag 13. If so, we have a hit. Otherwise, one of these two must be replaced by the memory line being addressed (good old line 855) before the read or write can be executed.

    6.

    a. Explain the flow of actions in a peripheral to memory transfer with DMA in an embedded system. Giveits advantages over the transfer taking place with vectored interrupts.


    Answer:

DMA (Direct Memory Access) provides an efficient way of transferring data between a peripheral and memory, or between two memory regions. DMA is a processing engine which can perform data transfer operations (to or from memory). In the absence of a DMA engine, the CPU needs to handle these data operations, and hence the overall system performance is heavily reduced. DMA is specifically useful in systems which involve huge data transfers (without DMA, the CPU would be busy doing these transfers most of the time and would not be available for other processing).

DMA Parameters: DMA transfers involve a source and a destination; the DMA engine transfers the data from source to destination. The DMA engine requires source and destination addresses along with a transfer count in order to perform the data transfers. The (source or destination) address could be physical (in the case of a memory) or logical (in the case of a peripheral). The transfer count specifies the number of words which need to be transferred. As mentioned before, a data transfer could be from a peripheral to memory (generally called receive DMA), from memory to a peripheral (generally called transmit DMA), or from one memory to another (generally called memory DMA).

Some DMA engines support additional parameters like word size and address increment, in addition to the start address and transfer count. Word size specifies the size of each transfer. Address increment specifies the offset from the current address (in memory) which the next transfer should use. This provides a way of transferring data from non-contiguous memory locations.

DMA Channels: a DMA engine can support multiple DMA channels. This means that at a given time, multiple DMA transfers can be in progress (though physically only one transfer may be possible at a time, logically the DMA can handle many channels in parallel). This feature makes the life of the software programmer very easy, as he does not have to wait for the current DMA operation to finish before he programs the next one. Each DMA channel has control registers where the DMA parameters can be specified. DMA channels also have an interrupt associated with them (on most processors) which (optionally) triggers after completion of a DMA transfer. Inside the ISR, the programmer can take specific action (e.g. process the data which has just been received through DMA, or program a new DMA transfer).

Chained DMA: certain DMA controllers support an option for specifying DMA parameters in a buffer (or array) in memory rather than writing them directly to the DMA control registers (this is mostly applicable to the second DMA operation; parameters for the first DMA operation are still specified in the control registers). This buffer is called a DMA Transfer Control Block (TCB). The DMA controller takes the address of the DMA TCB as one of its parameters (in addition to the control parameters for the first DMA transfer) and loads the DMA parameters for the second DMA operation automatically from memory after the first DMA operation is over. The TCB also contains an entry for the "Next TCB Address", which provides an easy way of chaining multiple DMA operations in an automatic fashion (rather than having to program each one after completion of the previous DMA). DMA chaining can be stopped by specifying a ZERO address in the Next TCB Address field.

Multi-dimensional DMA: combined with address increment, this gives many addressing options, such as 2D block transfers.

    Why use DMA?

The obvious benefit of moving data using DMA transfers is that the processor can be doing something else while the transfer is in progress. However, using DMA sometimes has other advantages depending on the hardware involved. These include:

Data transformations: application-specific processors, such as those targeted to video or digital signal processing, may be able to perform data transformations as part of the DMA transfer. These include byte-order changes and 2D block transfers (see below).

Lower power: if the processor load is reduced and there are fewer interrupts (for example, one on completion of the whole transfer rather than one per item of data transferred), it may be possible to run the processor at a lower clock rate or even to enter a low power mode while DMA transfers are in flight.

Higher data throughput: a given processor may be able to handle more external interfaces at higher data rates, or a low-end processor might be able to handle more complicated interfaces such as Ethernet or USB.

DMA transfers are also commonly used for inter-processor communication between cores in a multi-core processor or processors in a multi-processor system.

    Types of DMA transfer

To assess the benefits and consequences of using DMA it is necessary to know what is happening at the hardware level.

DMA transfers can take different forms depending on the hardware design and the peripheral devices involved. The simplest is known as a single-cycle DMA transfer and is typically used to transfer data between devices such as UARTs or audio codecs that produce or consume data a word at a time. In this situation the peripheral device uses a control line to signal that it has data to transfer or requires new data. The DMA controller obtains access to the system bus, transfers the data, and then releases the bus. Access to the bus is granted when the processor, or another bus master, is not using the bus. Single-cycle DMA transfers are therefore interleaved with other bus transactions and do not much affect the operation of the processor.

Another type of transfer is a burst transfer. This is used to transfer a block of data in a series of back-to-back accesses to the system bus. The transfer starts with a bus request; when this is granted, the data is transferred in bursts, for example 128 bytes at a time. The burst size depends on the processor architecture and the peripheral, and may be programmable depending on the details of the hardware.

While a burst transaction is occurring, the processor will not be able to access the system bus. Preventing the processor from accessing the system bus (for example, to fetch new instructions or data from external memory) may cause it to stall, which can reduce system performance. To minimise the effects of this problem, the DMA controller may release the bus after a fixed number of burst transactions or when a pre-determined bandwidth limit has been reached. The system bus arbitration logic then determines which bus master will next have access to the bus and when the DMA transfer will continue with the next block. The number of bus masters and their relative priority is a wider system design issue that will not be addressed here. However, if the system needs to perform large DMA block transfers, the system designer needs to carefully work out the bus bandwidth requirements to ensure there are no performance bottlenecks in the hardware or system design.

    b. Compare the Processes and Threads.

    Answer:

    Processes and Threads

    In concurrent programming, there are two basic units of execution: processes and threads. In the Java programming language, concurrent programming is mostly concerned with threads. However, processes are also important.

    A computer system normally has many active processes and threads. This is true even in systems that only have a single execution core, and thus only have one thread actually executing at any given moment. Processing time for a single core is shared among processes and threads through an OS feature called time slicing.

    It's becoming more and more common for computer systems to have multiple processors or processors with multiple execution cores. This greatly enhances a system's capacity for concurrent execution of processes and threads, but concurrency is possible even on simple systems, without multiple processors or execution cores.

    Processes

    A process has a self-contained execution environment. A process generally has a complete, private set of basic run-time resources; in particular, each process has its own memory space.

    Processes are often seen as synonymous with programs or applications. However, what the user sees as a single application may in fact be a set of cooperating processes. To facilitate communication between processes, most operating systems support Inter-Process Communication (IPC) resources, such as pipes and sockets. IPC is used not just for communication between processes on the same system, but also for processes on different systems.

    Most implementations of the Java virtual machine run as a single process. A Java application can create additional processes using a ProcessBuilder object. Multiprocess applications are beyond the scope of this lesson.

    Threads

    Threads are sometimes called lightweight processes. Both processes and threads provide an execution environment, but creating a new thread requires fewer resources than creating a new process.

    Threads exist within a process; every process has at least one. Threads share the process's resources, including memory and open files. This makes for efficient, but potentially problematic, communication.

    Multithreaded execution is an essential feature of the Java platform. Every application has at least one thread, or several, if you count "system" threads that do things like memory management and signal handling. But from the application programmer's point of view, you start with just one thread, called the main thread. This thread has the ability to create additional threads, as we'll demonstrate in the next section.

    7.

    a. How is an embedded system applied in telecommunication devices and systems? Illustrate with the help of a case study.

    Answer:

    Embedded Systems have witnessed tremendous growth in the last decade. Almost all the fast-developing sectors like automobile, aeronautics, space, rail, mobile communications, and electronic payment solutions have witnessed increased use of Embedded technologies. The greater value placed on mobility is one of the prominent reasons for the rise and development of Embedded technologies.

    Initially, Embedded Systems were used for large, safety-critical and business-critical applications that included


    Rocket & satellite control

    Energy production control

    Telephone switches

    Air Traffic Control

    Embedded Systems research and development is now concerned with a very large proportion of the advanced products designed in the world. In one way, Embedded technologies run the global transport industry, which includes avionics, space, automotive, and trains. But it is the electrical and electronic appliances like cameras, toys, televisions, home appliances, audio systems, and cellular phones that really are the visual interface of Embedded Systems for the common consumer.

    Advanced Embedded Technologies are deployed in developing

    Process Controls (energy production and distribution, factory automation and optimization)

    Telecommunications (satellites, mobile phones and telecom networks)

    Energy management (production, distribution, and optimized use)

    Security (e-commerce, smart cards)

    Health (hospital equipment, and mobile monitoring)

    In the last few years the emphasis of Embedded technologies was on achieving feasibility, but now the trend is towards achieving optimality. Optimality, or optimal design of embedded systems, means

    Targeting a given market segment at the lowest cost and delivery time possible

    Seamless integration with the physical and electronic environment

    Understanding the real-world constraints such as hard deadlines, reliability, availability, robustness, power consumption, and cost

    VECTOR Institute provides enough exposure to students in all Embedded technologies by making them work on real-time and multi-domain Embedded projects.

    Automobile sector

    The automobile sector has been at the forefront of acquiring and utilizing Embedded technology to produce highly efficient electric motors. These electric motors include brushless DC motors, induction motors and DC motors that use electric/electronic motor controllers.

    The European automotive industry enjoys a prominent place in utilizing Embedded technology to achieve better engine control. It has been utilizing recent Embedded innovations such as brake-by-wire and drive-by-wire.

    Embedded technology finds immediate importance in electric and hybrid vehicles. Here Embedded applications bring about greater efficiency and ensure reduced pollution. Embedded technology has also helped in developing automotive safety systems such as the

    Anti-lock braking system (ABS)

    Electronic Stability Control (ESC/ESP)

    Traction control (TCS)

    Automatic four-wheel drive

    VECTOR Institute has endeared itself to the Automotive industry by providing quality Embedded personnel.

    Aerospace & Avionics

    Aerospace and Avionics demand a complex mixture of hardware, electronics, and embedded software. For efficient working, hardware, electronics and embedded software must interact with many other entities and systems. Embedded engineers confront major challenges:

    Creating Embedded systems on time

    Taking the budgetary constraints into consideration

    Ensuring that the complex software and hardware interactions are right

    Assembling components that meet specifications and perform effectively together


    Understanding the larger context of the embedded software

    Adopting the latest in Embedded technology like the fly-by-wire

    VECTOR Institute prepares embedded students for the challenges associated with the Aerospace and Avionics industry.

    Telecommunications

    If ever there is an industry that has reaped the benefits of Embedded Technology, it is surely Telecommunications. The Telecom industry utilizes numerous embedded systems, from telephone switches for the network to mobile phones at the end-user. The Telecom computer network also uses dedicated routers and network bridges to route data.

    Embedded engineers help in ensuring high-speed networking, the most critical part of embedded applications. Ethernet switches and network interfaces are designed to provide the necessary bandwidth, allowing Ethernet connections to be rapidly incorporated into advanced Embedded applications.

    VECTOR Institute provides enough exposure to Embedded students in a broad range of application types. These Embedded application types range from high-availability telecom and networking applications to rugged industrial and military environments.

    We prepare Embedded students for the challenges associated with the Telecom industry.

    Consumer Electronics

    Consumer electronics has also benefited a lot from Embedded technologies. Consumer electronics includes

    Personal Digital Assistants (PDAs)

    MP3 players

    Mobile phones

    Videogame consoles

    Digital cameras

    DVD players

    GPS receivers

    Printers

    Even household appliances, including microwave ovens, washing machines and dishwashers, now include embedded systems to provide flexibility, efficiency and features. The latest Embedded applications include advanced HVAC systems that use networked thermostats to control temperature more accurately and efficiently.

    In the present times, home automation solutions are being increasingly built on embedded technologies. Home automation includes wired and wireless networking to control lights, climate, security, audio/visual, surveillance, etc., all of which use embedded devices for sensing and controlling.

    VECTOR Institute prepares embedded students for the challenges associated with the Consumer Electronics industry.

    Railroad

    Railroad signalling in Europe relies heavily on embedded systems that allow for faster, safer and heavier traffic. Embedded technology has brought a sea change in the way railroad signals are managed and rail traffic in large volumes is streamlined.

    Embedded-technology-enabled Railroad Safety Equipment is increasingly being adopted by railway networks across the globe, with the assurance of far fewer rail disasters to report. VECTOR Institute prepares embedded students for the challenges associated with the Railroad industry.

    Electronic payment solutions sector

    In the present times there is stiff competition amongst embedded solutions providers to deliver innovative and high-performance electronic payment solutions that are easy to use and highly secure. Embedded engineers knowledgeable in trusted proprietary technology develop the secure, encrypted transactions between payment systems and major financial institutions.


    The market for mobile payment systems is growing rapidly, driven by retailers, restaurants, and other businesses that want to serve customers anywhere, anytime. With mobile devices, mostly mobile phones, becoming very popular, embedded technologies compatible with them are being developed to promote payment systems.

    VECTOR Institute prepares embedded students for the challenges associated with the Electronic Payment solutions sector.

    Smart cards industry

    Smart cards, though they began prominently as either debit or credit cards, are now being introduced in personal identification and entitlement schemes at regional, national, and international levels. Smart cards are now appearing as Citizen Cards, drivers' licenses, and patient cards.

    We also come across contactless smart cards that are part of ICAO biometric passports, which aim to enhance security for international travel. Europe enjoys precedence in the use of Smart cards. All the e-services (e-banking, e-health, e-training) are based on the leading edge in smart-card-related technologies.

    VECTOR Institute has endeared itself to the Smart cards industry by providing quality Embedded personnel.

    b. Write short notes on:

    i. Network-oriented arbitration

    Answer:

    Multiple peripherals might request service from a single resource. For example, multiple peripherals might share a single microprocessor that services their interrupt requests. As another example, multiple peripherals might share a single DMA controller that services their DMA requests. In such situations, two or more peripherals may request service simultaneously. We therefore must have some method to arbitrate among these contending requests, i.e., to decide which one of the contending peripherals gets service, and thus which peripherals need to wait. Several methods exist:

    1. Priority arbiter

    2. Daisy-chain arbitration

    3. Network-oriented arbitration methods

    Network-oriented arbitration methods

    The arbitration methods described are typically used to arbitrate among peripherals in an embedded system. However, many embedded systems contain multiple microprocessors communicating via a shared bus; such a bus is sometimes called a network. Arbitration in such cases is typically built right into the bus protocol, since the bus serves as the only connection among the microprocessors. A key feature of such a connection is that a processor about to write to the bus has no way of knowing whether another processor is about to simultaneously write to the bus. Because of the relatively long wires and high capacitances of such buses, a processor may write many bits of data before those bits appear at another processor. For example, Ethernet and I2C use a method in which multiple processors may write to the bus simultaneously, resulting in a collision and causing any data on the bus to be corrupted. The processors detect this collision, stop transmitting their data, wait for some time, and then try transmitting again. The protocols must ensure that the contending processors don't start sending again at the same time, or must at least use statistical methods that make the chances of them sending again at the same time small.

    As another example, the CAN bus uses a clever address encoding scheme such that if two addresses are written simultaneously by different processors using the bus, the higher-priority address will override the lower-priority one. Each processor that is writing the bus also checks the bus, and if the address it is writing does not appear, then that processor realizes that a higher-priority transfer is taking place, and so that processor stops writing the bus.

    ii. Error detection and correction

    Answer:

    Error detection and correction or error control

    These are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data.

    Regardless of the design of the transmission system, there will be errors, resulting in the change of one or more bits in a transmitted frame. When a code word is transmitted, one or more of the transmitted bits may be reversed due to transmission impairments, thus introducing errors. It is possible to detect these errors if the received code word is not one of the valid code words. To detect the errors at the receiver, the valid code words should be separated by a distance of more than 1.

    The concept of including extra information in the transmission for error detection is a good one. But instead of repeating the entire data stream, a shorter group of bits may be appended to the end of each unit. This technique is called redundancy because the extra bits are redundant to the information; they are discarded as soon as the accuracy of the transmission has been determined.

    Error correction is the mechanism by which we can make changes in the received erroneous data to make it free from error.

    The two most common error correction mechanisms are:

    1. Error correction by retransmission

    2. Forward Error Correction (FEC)