
E5144 UNIT 1 Introduction to Computer Control System




Unit 1.0 Introduction

1.1 Introduction to Computer System

1.1.1 Computer-oriented control process

Computer control system

A computer control system typically comprises a computer or microprocessor; a control program, which handles data from sensors and sends signals to output devices; and an interface box, which converts signals between the sensors and the processor.

The role of computers in control

Computers can respond very rapidly to change.

Systems can run 24 hours a day, 365 days a year.

Control systems can operate in places that humans would find dangerous or

awkward.

Outputs are consistent and free of human error.

Computers can process data quickly and machines can operate faster than humans.

Computers are now used to control many types of devices such as:

air conditioning and central heating systems in large buildings

security systems and burglar alarms

manufacturing processes

traffic lights and pedestrian crossings

The role of sensors in control

Sensors are used to measure physical quantities such as temperature, light, pressure, sound, and humidity. They send signals to the processor. For example:

A security alarm system may have an infrared sensor which sends a signal when the beam is broken.

A heat-sensitive sensor in the corner of a room may detect the presence of a person.

Temperature sensors could be used to control the heating in a large building.

Magnetic sensors are used to detect metal and can be placed in roads to monitor traffic flow.



1.1.2 Use of computer control system

Other physical quantities that can be transmitted directly to the computer's processor include:

rainfall/water levels

radiation level

pH level

oxygen level

Analogue to digital conversion

Data such as pressure, light and temperature is analogue data. Computers can only work with digital data.

An interface box or analogue-to-digital converter (ADC) is needed to convert the analogue data from the sensors into digital data the computer can process.
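To make the conversion concrete, here is a minimal sketch in C of turning a raw ADC count back into a physical quantity. The 10-bit resolution and the temperature range are assumed values for illustration, not taken from any particular sensor.

#include <stdio.h>

/* Hypothetical 10-bit ADC: raw counts run from 0 to 1023.
   Assume the sensor maps that range linearly to -10..110 degrees C. */
#define ADC_MAX   1023.0
#define TEMP_MIN  (-10.0)
#define TEMP_MAX  110.0

double adc_to_celsius(unsigned raw)
{
    /* Scale the digital count back to the physical quantity. */
    return TEMP_MIN + (raw / ADC_MAX) * (TEMP_MAX - TEMP_MIN);
}

int main(void)
{
    printf("raw 512 -> %.1f C\n", adc_to_celsius(512));
    return 0;
}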

Feedback cycle

The diagram below shows a control program for maintaining the water level in a fish tank.

[Figure: Feedback cycle for a fish tank]

The control program stores the highest and lowest acceptable water levels and what action to take if they're exceeded.

The process is continuous and is called a feedback cycle.

Stages of the feedback cycle

1. water level falls too low

2. sensor detects water level is too low

3. valve opened to let water in

4. sensor detects water level is too high



5. valve opened to let water out
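The stages above map directly onto a simple control loop. The following C sketch shows the shape of such a program; the sensor and valve functions are hypothetical stand-ins for real driver code, and the level thresholds are invented for illustration.

/* Hypothetical hardware interface. */
extern int  read_water_level(void);    /* digital reading from the sensor */
extern void open_inlet_valve(void);
extern void open_outlet_valve(void);
extern void close_valves(void);

#define LEVEL_LOW   40    /* lowest acceptable level  */
#define LEVEL_HIGH  60    /* highest acceptable level */

void feedback_cycle(void)
{
    for (;;) {                          /* the cycle runs continuously */
        int level = read_water_level();
        if (level < LEVEL_LOW)
            open_inlet_valve();         /* let water in  */
        else if (level > LEVEL_HIGH)
            open_outlet_valve();        /* let water out */
        else
            close_valves();             /* level acceptable: do nothing */
    }
}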

A computer-controlled greenhouse

To get the best plant-growing conditions, temperature and humidity (moisture in the air) have to be controlled.

The greenhouse therefore has temperature and humidity sensors linked to a computer, and the computer has a control program storing details of the correct temperature and humidity settings. The greenhouse is fitted with a heater, sprinkler and window motor, also linked to the computer.

If the humidity falls below the values stored in the program, the computer activates the sprinklers and closes the windows. If the temperature falls outside the values stored in the program, the heater is activated by the computer.

The system monitors the conditions night and day, responding immediately to any changes. To alter the growing conditions, the values in the computer program can simply be changed.
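The greenhouse logic can be sketched the same way as the fish tank. Again the sensor and actuator functions are hypothetical, and the threshold parameters stand in for the values stored in the control program.

extern double read_temperature(void);
extern double read_humidity(void);
extern void heater_on(void),    heater_off(void);
extern void sprinkler_on(void), sprinkler_off(void);
extern void windows_open(void), windows_close(void);

void greenhouse_step(double temp_min, double temp_max, double humid_min)
{
    if (read_humidity() < humid_min) {
        sprinkler_on();        /* too dry: water the plants ... */
        windows_close();       /* ... and keep the moisture in  */
    } else {
        sprinkler_off();
    }

    double t = read_temperature();
    if (t < temp_min)
        heater_on();           /* too cold: heat the greenhouse */
    else if (t > temp_max) {
        heater_off();
        windows_open();        /* too hot: let some heat escape */
    }
}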

1.1.3 Digital Computer Architecture



We will discuss this sub-unit through:

a) Memory concept
b) Basic units of digital computer
c) Addressing
d) Interrupts
e) Interfaces

Introduction

There are many types of computer architectures:

Quantum computer vs Chemical computer
Scalar processor vs Vector processor
Non-Uniform Memory Access (NUMA) computers
Register machine vs Stack machine
Harvard architecture vs von Neumann architecture
Cellular architecture

In computer science, computer architecture is the conceptual design and fundamental operational structure of a computer system. It is a blueprint and functional description of requirements and design implementations for the various parts of a computer, focusing largely on the way by which the central processing unit (CPU) performs internally and accesses addresses in memory.

It may also be defined as the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.

Computer architecture comprises at least three main subcategories:[1]

Instruction set architecture, or ISA, is the abstract image of a computing system that is seen by a machine language (or assembly language) programmer, including the instruction set, word size, memory address modes, processor registers, and address and data formats.

Microarchitecture, also known as computer organization, is a lower-level, more concrete and detailed description of the system that involves how the constituent parts of the system are interconnected and how they interoperate in order to implement the ISA.[2] The size of a computer's cache, for instance, is an organizational issue that generally has nothing to do with the ISA.

System design, which includes all of the other hardware components within a computing system, such as:

1. System interconnects such as computer buses and switches
2. Memory controllers and hierarchies
3. CPU off-load mechanisms such as direct memory access (DMA)
4. Issues like multiprocessing



Once both ISA and microarchitecture have been specified, the actual device needs to be designed into hardware. This design process is called implementation. Implementation is usually not considered architectural definition, but rather hardware design engineering.

Implementation can be further broken down into three (not fully distinct) pieces:

Logic Implementation — design of blocks defined in the microarchitecture at (primarily) the register-transfer and gate levels.

Circuit Implementation — transistor-level design of basic elements (gates, multiplexers, latches, etc.) as well as of some larger blocks (ALUs, caches, etc.) that may be implemented at this level, or even (partly) at the physical level, for performance reasons.

Physical Implementation — physical circuits are drawn out, the different circuit components are placed in a chip floorplan or on a board and the wires connecting them are routed.

For CPUs, the entire implementation process is often called CPU design.

The term is also used for wider-scale hardware architectures, such as cluster computing and Non-Uniform Memory Access (NUMA) architectures.





a) Memory Concept

b) Basic units of a digital computer

i. memory

Computer memory refers to devices that are used to store data or programs (sequences of instructions) on a temporary or permanent basis for use in an electronic digital computer. Computers represent information in binary code, written as sequences of 0s and 1s. Each binary digit (or "bit") may be stored by any physical system that can be in either of two stable states, to represent 0 and 1. Such a system is called bistable. This could be an on-off switch, an electrical capacitor that can store or lose a charge, a magnet with its polarity up or down, or a surface that can have a pit or not. Today, capacitors and transistors, functioning as tiny electrical switches, are used for temporary storage, and either disks or tape with a magnetic coating, or plastic discs with patterns of pits are used for long-term storage.

Computer memory is usually meant to refer to the semiconductor technology that is used to store information in electronic devices. Current primary computer memory makes use of integrated circuits consisting of silicon-based transistors. There are two main types of memory: volatile and non-volatile.



History

[Image: Detail of the back of a section of ENIAC, showing vacuum tubes]

In the early 1940s, memory technology mostly permitted a capacity of a few bytes. The first digital computer, the ENIAC, using some 20,000 octal-base radio vacuum tubes, allowed simple calculations involving 20 numbers of ten decimal digits, which were held in the vacuum tube accumulators.

The next significant advance in computer memory was with acoustic delay line memory developed by J. Presper Eckert in the early 1940s. Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information within the quartz and transfer it through sound waves propagating through mercury. Delay line memory would be limited to a capacity of up to a few hundred thousand bits to remain efficient.

Two alternatives to the delay line, the Williams tube and Selectron tube, were developed in 1946, both using electron beams in glass tubes as means of storage. Using cathode ray tubes, Fred Williams would invent the Williams tube, which would be the first random-access computer memory. The Williams tube would prove to be advantageous over the Selectron tube because of its greater capacity (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and its lower cost. The Williams tube would nevertheless prove to be frustratingly sensitive to environmental disturbances.

Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A. Rajchman and An Wang would be credited with the development of magnetic core memory, which would allow for recall of memory after power loss. Magnetic core memory would become the dominant form of memory until the development of transistor-based memory in the late 1960s.

Volatile memory

Volatile memory is computer memory that requires power to maintain the stored information. Current semiconductor volatile memory technology is usually either static RAM (see SRAM) or dynamic RAM (see DRAM). Static RAM exhibits data remanence, but is still volatile, since all data is lost when memory is not powered. Dynamic RAM, by contrast, must be refreshed periodically, or the stored data leaks away and is lost. Upcoming volatile memory technologies that hope to replace or compete with SRAM and DRAM include Z-RAM, TTRAM and A-RAM.

Non-volatile memory

Non-volatile memory is computer memory that can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory (see ROM), flash memory, most types of magnetic computer storage devices (e.g. hard disks, floppy discs and magnetic tape), optical discs, and early computer storage methods such as paper tape and punch cards. Upcoming non-volatile memory technologies include FeRAM, CBRAM, PRAM, SONOS, RRAM, Racetrack memory, NRAM and Millipede.

Virtual memory

Virtual memory is a computer system technique developed at the University of Manchester, which gives an application program the impression that it has contiguous working memory (an address space), while in fact it may be physically fragmented and may even overflow onto disk storage.

Developed for multitasking kernels, virtual memory provides two primary functions:

1. Each process has its own address space, so it does not need to be relocated or to use relative addressing.

2. Each process sees one contiguous block of free memory upon launch. Fragmentation is hidden.

All implementations (excluding emulators) require hardware support. This is typically in the form of a memory management unit built into the CPU.



Systems that use this technique make programming of large applications easier and use real physical memory (e.g. RAM) more efficiently than those without virtual memory. Virtual memory differs significantly from memory virtualization in that virtual memory allows resources to be virtualized as memory for a specific system, as opposed to a large pool of memory being virtualized as smaller pools for many different systems.

Note that "virtual memory" is more than just "using disk space to extend physical memory size" - which is merely the extension of the memory hierarchy to include hard disk drives. Extending memory to disk is a normal consequence of using virtual memory techniques, but could be done by other means such as overlays or swapping programs and their data completely out to disk while they are inactive. The definition of "virtual memory" is based on redefining the address space with a contiguous virtual memory addresses to "trick" programs into thinking they are using large blocks of contiguous addresses.

Modern general-purpose computer operating systems generally use virtual memory techniques for ordinary applications, such as word processors, spreadsheets, multimedia players, accounting, etc., except where the required hardware support (a memory management unit) is unavailable. Older operating systems, such as DOS [1] of the 1980s, or those for the mainframes of the 1960s, generally had no virtual memory functionality - notable exceptions being the Atlas, B5000 and Apple Computer's Lisa.

Embedded systems and other special-purpose computer systems which require very fast and/or very consistent response times may opt not to use virtual memory due to decreased determinism. This is based on the idea that unpredictable processor exceptions produce unwanted jitter on CPU operated I/O, which the smaller embedded processors often perform directly to keep cost and power consumption low, and the associated simple application has little use for multitasking features.



ii. control unit

General operation

The outputs of the control unit control the activity of the rest of the device. A control unit can be thought of as a finite state machine.

The control unit is the circuitry that controls the flow of data through the processor, and coordinates the activities of the other units within it. In a way, it is the "brain within the brain", as it controls what happens inside the processor, which in turn controls the rest of the PC.

The CPU is one example of a device that requires a control unit. The modern information age would not be possible without complex control unit designs.

The control unit fetches each instruction from memory, decodes it, and then generates the appropriate control signals.
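As a rough illustration, that fetch-decode-execute cycle can be written out in C for an imaginary toy machine. The opcodes, the single accumulator and the two-byte instruction format are all invented for the sketch.

#include <stdint.h>

enum { OP_HALT, OP_LOAD, OP_ADD, OP_STORE };    /* invented opcodes */

void run(uint8_t mem[256])
{
    uint8_t pc = 0, acc = 0;
    for (;;) {
        uint8_t opcode  = mem[pc];               /* fetch            */
        uint8_t operand = mem[(uint8_t)(pc + 1)];
        pc += 2;
        switch (opcode) {                        /* decode + execute */
        case OP_LOAD:  acc = mem[operand];        break;
        case OP_ADD:   acc = acc + mem[operand];  break;
        case OP_STORE: mem[operand] = acc;        break;
        case OP_HALT:  return;
        }
    }
}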


The functions performed by the control unit vary greatly with the internal architecture of the CPU, since the control unit really implements this architecture. On a regular processor that executes x86 instructions natively, the control unit performs the tasks of fetching, decoding, managing execution and then storing results. On a processor with a RISC core the control unit has significantly more work to do. It manages the translation of x86 instructions to RISC micro-instructions, manages scheduling the micro-instructions between the various execution units, and juggles the output from these units to make sure they end up where they are supposed to go. On one of these processors the control unit may be broken into other units (such as a scheduling unit to handle scheduling and a retirement unit to deal with results coming from the pipeline) due to the complexity of the job it must perform.



iii. arithmetic logic unit

In computing, an arithmetic logic unit (ALU) is a digital circuit that performs arithmetic and logical operations. The ALU is a fundamental building block of the central processing unit (CPU) of a computer, and even the simplest microprocessors contain one for purposes such as maintaining timers. Modern CPUs and graphics processing units (GPUs) contain very powerful and very complex ALUs; a single component may contain a number of ALUs.

Mathematician John von Neumann proposed the ALU concept in 1945, when he wrote a report on the foundations for a new computer called the EDVAC. Research into ALUs remains an important part of computer science, falling under Arithmetic and logic structures in the ACM Computing Classification System.

Simple operations

[Figure: A simple example arithmetic logic unit (2-bit ALU) that does AND, OR, XOR, and addition]



Most ALUs can perform the following operations:

Integer arithmetic operations (addition, subtraction, and sometimes multiplication and division, though this is more expensive)

Bitwise logic operations (AND, NOT, OR, XOR)

Bit-shifting operations (shifting or rotating a word by a specified number of bits to the left or right, with or without sign extension). Shifts can be interpreted as multiplications by 2 and divisions by 2.
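A software model of such an ALU shows how small its core really is. This C sketch implements the operations listed above; the operation selector is invented for illustration.

#include <stdint.h>

typedef enum { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR,
               ALU_XOR, ALU_NOT, ALU_SHL, ALU_SHR } alu_op;

uint32_t alu(alu_op op, uint32_t a, uint32_t b)
{
    switch (op) {
    case ALU_ADD: return a + b;
    case ALU_SUB: return a - b;
    case ALU_AND: return a & b;
    case ALU_OR:  return a | b;
    case ALU_XOR: return a ^ b;
    case ALU_NOT: return ~a;            /* b is ignored            */
    case ALU_SHL: return a << (b & 31); /* multiply by 2^b         */
    case ALU_SHR: return a >> (b & 31); /* unsigned divide by 2^b  */
    }
    return 0;
}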

iv. input output unit

I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

c) Addressing

In computing, an address space defines a range of discrete addresses, each of which may correspond to a physical or virtual memory register, a network host, peripheral device, disk sector, or other logical or physical entity. The Internet Assigned Numbers Authority (IANA) allocates ranges of numbers to various registries in order to enable them to each manage their particular address space.

Addressing modes are an aspect of the instruction set architecture in most central processing unit (CPU) designs. The various addressing modes that are defined in a given instruction set architecture define how machine language instructions in that architecture identify the operand (or operands) of each instruction. An addressing mode specifies how to calculate the effective memory address of an operand by using information held in registers and/or constants contained within a machine instruction or elsewhere.

In computer programming, addressing modes are primarily of interest to compiler writers and to those who write code directly in assembly language.

Different computer architectures vary greatly as to the number of addressing modes they provide in hardware. There are some benefits to eliminating complex addressing modes and using only one or a few simpler addressing modes, even though it requires a few extra instructions, and perhaps an extra register.[1] It has proven much easier to design pipelined CPUs if the only addressing modes available are simple ones.



Simple addressing modes for code

Absolute

   +----+------------------------------+
   |jump|           address            |
   +----+------------------------------+

   (Effective PC address = address)

The effective address for an absolute instruction address is the address parameter itself with no modifications.

PC-relative

   +----+------------------------------+
   |jump|            offset            |    jump relative
   +----+------------------------------+

   (Effective PC address = next instruction address + offset, offset may be negative)

The effective address for a PC-relative instruction address is the offset parameter added to the address of the next instruction. This offset is usually signed to allow reference to code both before and after the instruction.

This is particularly useful in connection with jumps, because typical jumps are to nearby instructions (in a high-level language most if or while statements are reasonably short). Measurements of actual programs suggest that an 8 or 10 bit offset is large enough for some 90% of conditional jumps.

Another advantage of program-relative addressing is that the code may be position-independent, i.e. it can be loaded anywhere in memory without the need to adjust any addresses.

Some versions of this addressing mode may be conditional referring to two registers ("jump if reg1==reg2"), one register ("jump unless reg1==0") or no registers, implicitly referring to some previously-set bit in the status register. See also conditional execution below.

Register indirect

   +-------+-----+
   |jumpVia| reg |
   +-------+-----+

   (Effective PC address = contents of register 'reg')

The effective address for a register indirect instruction is the address in the specified register. For example, (A7) accesses the content of address register A7.



The effect is to transfer control to the instruction whose address is in the specified register. Many RISC machines have a subroutine call instruction that places the return address in an address register—the register indirect addressing mode is used to return from that subroutine call.
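The three code addressing modes above differ only in how the effective jump target is computed. A compact way to compare them is a C sketch; the mode tags and encoding are invented, not taken from any real instruction set.

#include <stdint.h>

typedef enum { MODE_ABSOLUTE, MODE_PC_RELATIVE, MODE_REG_INDIRECT } jmp_mode;

uint32_t effective_target(jmp_mode mode, int32_t param,
                          uint32_t next_pc, const uint32_t regs[])
{
    switch (mode) {
    case MODE_ABSOLUTE:     return (uint32_t)param;  /* the address itself  */
    case MODE_PC_RELATIVE:  return next_pc + param;  /* signed offset added */
    case MODE_REG_INDIRECT: return regs[param];      /* address held in reg */
    }
    return next_pc;
}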

Sequential addressing modes

sequential execution

   +------+
   | nop  |    execute the following instruction
   +------+

   (Effective PC address = next instruction address)

The CPU, after executing a sequential instruction, immediately executes the following instruction.

Sequential execution is not considered to be an addressing mode on some computers.

Most instructions on most CPU architectures are sequential instructions. Because most instructions are sequential instructions, CPU designers often add features that deliberately sacrifice performance on the other instructions—branch instructions—in order to make these sequential instructions run faster.

Conditional branches load the PC with one of 2 possible results, depending on the condition—most CPU architectures use some other addressing mode for the "taken" branch, and sequential execution for the "not taken" branch.

Many features in modern CPUs -- instruction prefetch and more complex pipelining, out-of-order execution, etc. -- maintain the illusion that each instruction finishes before the next one begins, giving the same final results, even though that's not exactly what happens internally.

Each "basic block" of such sequential instructions exhibits both temporal and spatial locality of reference.

CPUs that do not use sequential execution are extremely rare—they include some drum memory computers and the RTX 32P, which has no program counter.[3]

conditional execution

Some computer architectures (e.g. ARM) have conditional instructions which can in some cases obviate the need for conditional branches and avoid flushing the instruction pipeline. An instruction such as a 'compare' is used to set a condition code, and subsequent instructions include a test on that condition code to see whether they are obeyed or ignored.



skip

   +------+-----+-----+
   |skipEQ| reg1| reg2|    skip the following instruction if reg1=reg2
   +------+-----+-----+

   (Effective PC address = next instruction address + 1)

Skip addressing may be considered a special kind of PC-relative addressing mode with a fixed "+1" offset. Like PC-relative addressing, some CPUs have versions of this addressing mode that only refer to one register ("skip if reg1==0") or no registers, implicitly referring to some previously-set bit in the status register. Other CPUs have a version that selects a specific bit in a specific byte to test ("skip if bit 7 of reg12 is 0").

Unlike all other conditional branches, a "skip" instruction never needs to flush the instruction pipeline.

d) Interrupt handling

In computing, an interrupt is an asynchronous signal indicating the need for attention or a synchronous event in software indicating the need for a change in execution.

A hardware interrupt causes the processor to save its state of execution and begin execution of an interrupt handler.

Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt.

Interrupts are a commonly used technique for computer multitasking, especially in real-time computing. Such a system is said to be interrupt-driven.

An act of interrupting is referred to as an interrupt request (IRQ).
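A minimal sketch of interrupt-driven I/O in C follows. register_handler(), read_uart_data(), handle_byte() and the IRQ number are hypothetical, since the real API differs from platform to platform; the point is the division of labour between the handler and the main program.

#include <stdint.h>

extern uint8_t read_uart_data(void);                     /* hypothetical */
extern void register_handler(int irq, void (*h)(void));  /* hypothetical */
extern void handle_byte(uint8_t b);                      /* hypothetical */
#define IRQ_UART_RX 5                                    /* hypothetical */

volatile uint8_t rx_byte;        /* written by the handler, read by main */
volatile int     rx_ready = 0;   /* flag set when new data arrives       */

/* Invoked by hardware when the UART raises an interrupt request; the
   processor saves its execution state, runs this, then resumes. */
void uart_rx_isr(void)
{
    rx_byte  = read_uart_data();
    rx_ready = 1;
}

int main(void)
{
    register_handler(IRQ_UART_RX, uart_rx_isr);
    for (;;) {
        if (rx_ready) {          /* main program reacts when signalled */
            rx_ready = 0;
            handle_byte(rx_byte);
        }
    }
}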

Simple addressing modes for data

Register

   +------+-----+-----+-----+
   | mul  | reg1| reg2| reg3|    reg1 := reg2 * reg3;
   +------+-----+-----+-----+

This "addressing mode" does not have an effective address and is not considered to be an addressing mode on some computers.

In this example, all the operands are in registers, and the result is placed in a register.



Base plus offset, and variations

This is sometimes referred to as 'base plus displacement'

   +------+-----+-----+----------------+
   | load | reg | base|     offset     |    reg := RAM[base + offset]
   +------+-----+-----+----------------+

   (Effective address = offset + contents of specified base register)

The offset is usually a signed 16-bit value (though the 80386 expanded it to 32 bits).

If the offset is zero, this becomes an example of register indirect addressing; the effective address is just the value in the base register.

On many RISC machines, register 0 is fixed at the value zero. If register 0 is used as the base register, this becomes an example of absolute addressing. However, only a small portion of memory can be accessed (64 kilobytes, if the offset is 16 bits).

The 16-bit offset may seem very small in relation to the size of current computer memories (which is why the 80386 expanded it to 32-bit). It could be worse: IBM System/360 mainframes only have an unsigned 12-bit offset. However, the principle of locality of reference applies: over a short time span, most of the data items a program wants to access are fairly close to each other.

This addressing mode is closely related to the indexed absolute addressing mode.

Example 1: Within a subroutine a programmer will mainly be interested in the parameters and the local variables, which will rarely exceed 64 KB, for which one base register (the frame pointer) suffices. If this routine is a class method in an object-oriented language, then a second base register is needed which points at the attributes for the current object (this or self in some high level languages).

Example 2: If the base register contains the address of a composite type (a record or structure), the offset can be used to select a field from that record (most records/structures are less than 32 kB in size).
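The calculation itself is just an addition followed by a memory access. This C sketch models it against a simulated memory; in real hardware the same arithmetic happens inside the load unit, and a compiler emits this mode for, e.g., stack variables (frame pointer as base, variable slot as offset).

#include <stdint.h>

static uint8_t ram[65536];       /* simulated memory */

uint8_t load_base_offset(uint16_t base_reg, int16_t offset)
{
    uint16_t ea = (uint16_t)(base_reg + offset);  /* effective address         */
    return ram[ea];                               /* reg := RAM[base + offset] */
}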

Immediate/literal

   +------+-----+-----+----------------+
   | add  | reg1| reg2|    constant    |    reg1 := reg2 + constant;
   +------+-----+-----+----------------+

This "addressing mode" does not have an effective address, and is not considered to be an addressing mode on some computers.

The constant might be signed or unsigned. For example, move.l #$FEEDABBA, D0 moves the immediate hex value $FEEDABBA into register D0.



Instead of using an operand from memory, the value of the operand is held within the instruction itself. On the DEC VAX machine, the literal operand sizes could be 6, 8, 16, or 32 bits long.

Andrew Tanenbaum showed that 98% of all the constants in a program would fit in 13 bits (see RISC design philosophy).

Implicit

   +-----------------+
   | clear carry bit |
   +-----------------+

The implied addressing mode[1], also called the implicit addressing mode[2], does not explicitly specify an effective address for either the source or the destination (or sometimes both).

Either the source (if any) or destination effective address (or sometimes both) is implied by the opcode.

Implied addressing was quite common on older computers (up to mid-1970s). Such computers typically had only a single register in which arithmetic could be performed—the accumulator. Such accumulator machines implicitly reference that accumulator in almost every instruction. For example, the operation <a := b + c;> can be done using the sequence <load b; add c; store a;> -- the destination (the accumulator) is implied in every "load" and "add" instruction; the source (the accumulator) is implied in every "store" instruction.

Later computers generally had more than one general purpose register or RAM location which could be the source or destination or both for arithmetic—and so later computers need some other addressing mode to specify the source and destination of arithmetic.

Many computers (such as x86 and AVR) have one special-purpose register called the stack pointer which is implicitly incremented or decremented when pushing or popping data from the stack, and the source or destination effective address is (implicitly) the address stored in that stack pointer.

Most 32-bit computers (such as ARM and PowerPC) have more than one register which could be used as a stack pointer—and so use the "register autoincrement indirect" addressing mode to specify which of those registers should be used when pushing or popping data from a stack.
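The implicitly addressed stack pointer can be modelled in a few lines of C: push and pop never name an address, they always use (and update) the one stack pointer. The downward-growing stack mirrors the x86 convention; overflow checks are omitted from this sketch.

#include <stdint.h>

static uint32_t stack[1024];
static int sp = 1024;            /* the implied operand; grows downward */

void push(uint32_t value) { stack[--sp] = value; }

uint32_t pop(void)        { return stack[sp++]; }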

Some current computer architectures (e.g. IBM/390 and Intel Pentium) contain some instructions with implicit operands in order to maintain backwards compatibility with earlier designs.

On many computers, instructions that flip the user/system mode bit, the interrupt-enable bit, etc. implicitly specify the special register that holds those bits. This simplifies the hardware necessary to trap those instructions in order to meet the Popek and Goldberg virtualization requirements -- on such a system, the trap logic does not need to look at any operand (or at the final effective address), but only at the opcode.

A few CPUs have been designed where every operand is always implicitly specified in every instruction -- so-called zero-operand machines, such as stack machines.



e) Interfaces

i. Data lines
ii. Device lines
iii. Instruction lines
iv. Flag lines
v. Interrupt lines
vi. Address lines
