
Memory Hierarchy



ASSIGNMENT 2

(1) What is meant by the term “Memory Hierarchy”? What is the rationale for having a memory hierarchy?

A memory hierarchy is simply the hierarchical arrangement of storage in current computer architectures. The objective of having a memory hierarchy is to build a memory system with sufficient capacity that is nearly as cheap as the cheapest memory type and nearly as fast as the fastest. In principle, for a simple single-processor machine, the memory architecture is straightforward: the memory is connected to the address lines, the data lines, and a set of control lines, so that whenever an address is presented to the memory, the data corresponding to that address appears on the data lines. This is adequate for processors that can address a relatively small address space. In general, the faster a memory is, the more expensive it is per bit of storage.

On systems with a large amount of memory, there is usually a hierarchy of memories, each with different access speeds and storage capacities. Typically, a large system has a small amount of very high-speed memory, called a cache, where data from frequently used memory locations may be temporarily stored. This cache is connected to a much larger "main memory", a medium-speed memory, currently likely to be dynamic memory (DRAM). Cache access times are typically 10 to 20 times faster than main-memory access times. (In some very large computer systems, the main memory is organized into two or more "banks", each of which contains adjacent memory words that can be addressed individually and simultaneously. A memory organized in this way is called an "interleaved" memory.)
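
The idea that frequently used data stays in the small fast memory can be sketched with a toy direct-mapped cache. This is a minimal Python model with made-up sizes, not a description of any particular hardware:

```python
# Toy direct-mapped cache: hypothetical sizes, for illustration only.
NUM_LINES = 8          # the cache holds 8 lines
BLOCK_SIZE = 4         # 4 words per line

cache = [None] * NUM_LINES   # each entry: (tag, block of words)
memory = list(range(256))    # "main memory": word i holds value i

hits = misses = 0

def read(addr):
    """Return the word at addr, going to main memory only on a miss."""
    global hits, misses
    block_num = addr // BLOCK_SIZE
    index = block_num % NUM_LINES      # which cache line this block maps to
    tag = block_num // NUM_LINES       # distinguishes blocks sharing a line
    offset = addr % BLOCK_SIZE
    line = cache[index]
    if line is not None and line[0] == tag:
        hits += 1                      # fast path: data already cached
    else:
        misses += 1                    # slow path: fetch block from memory
        start = block_num * BLOCK_SIZE
        cache[index] = (tag, memory[start:start + BLOCK_SIZE])
    return cache[index][1][offset]

for a in [0, 1, 2, 3, 0, 1]:           # repeated accesses to nearby addresses
    read(a)
print(hits, misses)                    # 5 1
```

Repeated accesses to nearby addresses hit the cache after the first miss; this locality of reference is exactly what the hierarchy exploits.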

The largest block of "memory" in a modern computer system is usually one or more large magnetic disks, on which data is stored in fixed-size blocks of 256 to 8192 bytes. This disk storage is usually connected directly to the main memory, and has a variable access time depending on how far the disk head must move to reach the appropriate track, and how far the disk must rotate to bring the appropriate sector for the data under the head. Some very large systems have multiple-head disks that can read from several tracks at once.
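
The access-time components just described (seek, then rotation, then transfer) can be put into a rough worked example. All figures below are assumed, typical-looking values, not measurements of any real drive:

```python
# Rough disk access-time estimate: seek + rotational latency + transfer.
# All figures are assumed example values, not real measurements.
avg_seek_ms = 9.0                 # assumed average seek time
rpm = 7200                        # assumed rotational speed
transfer_rate_mb_s = 100.0        # assumed sustained transfer rate
block_bytes = 4096                # one 4 KiB block

# Average rotational latency: on average, half a revolution.
rotation_ms = 60_000 / rpm        # one full revolution, in ms
avg_latency_ms = rotation_ms / 2

# Time to transfer one block once the head is in position.
transfer_ms = block_bytes / (transfer_rate_mb_s * 1_000_000) * 1000

access_ms = avg_seek_ms + avg_latency_ms + transfer_ms
print(round(access_ms, 2))        # ~13.21 ms, dominated by the mechanics
```

Note that the mechanical terms (seek and rotation) dominate; the electronic transfer of the block itself is a tiny fraction of the total.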


(2) Write short notes on the following terms related to memory and storage.

i. Latency

Latency is the period of time during which one component in a system waits for another component; it is, in effect, wasted time. For example, in accessing data on a disk, latency is defined as the time it takes to position the proper sector under the read/write head.

ii. Memory Controller

The memory controller is a digital circuit that manages the flow of data going to and from main memory. It can be a separate chip or integrated on the die of a microprocessor. Memory controllers contain the logic necessary to read and write dynamic random access memory (DRAM), and to "refresh" the DRAM by periodically recharging its storage cells. Without constant refreshes, DRAM would lose the data written to it, as the capacitors leak their charge within a fraction of a second.

iii. Memory Bus

The memory bus is made up of two parts: the data bus and the address bus. When people refer simply to "the memory bus" they usually mean the data bus, which carries actual memory data within the PC. The address bus is used to select the memory address that the data will come from, or go to, on a read or write. Simply put, the memory bus is the set of wires used to carry memory addresses and data to and from the system RAM.

iv. Little-endian

Little-endian is a term that describes the order in which a sequence of bytes is stored in computer memory. Little-endian is the order in which the "little end" (the least significant byte in the sequence) is stored first.
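
A quick sketch of the same 32-bit value laid out in both byte orders, using Python's built-in conversion:

```python
# The 32-bit value 0x0A0B0C0D laid out in both byte orders.
value = 0x0A0B0C0D

little = value.to_bytes(4, "little")   # least significant byte first
big = value.to_bytes(4, "big")         # most significant byte first

print(little.hex())   # 0d0c0b0a  -> the "little end" is stored first
print(big.hex())      # 0a0b0c0d  -> big-endian keeps the written order
```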

v. Virtual Memory

Virtual memory is an imaginary memory area supported by some operating systems in conjunction with the hardware. The purpose of virtual memory is to enlarge the address space, the set of addresses a program can utilize.
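
In most implementations this works by translating each virtual address into a physical one through a page table. A minimal sketch, assuming 4 KiB pages and an invented page table:

```python
# Minimal virtual-to-physical address translation, assuming 4 KiB pages.
PAGE_SIZE = 4096

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 9}

def translate(vaddr):
    vpn = vaddr // PAGE_SIZE          # virtual page number
    offset = vaddr % PAGE_SIZE        # offset within the page is unchanged
    if vpn not in page_table:
        raise LookupError("page fault: page %d not resident" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # 0x2234: page 1 maps to frame 2
```

A lookup that misses the table models a page fault, where the operating system would step in to bring the page from disk.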


(3) Define the major characteristics of the following memory technologies.

i. Random Access Memory (RAM)

Random Access Memory (RAM), also known as main memory or system memory, is the term commonly used to describe the working memory within a computer. Today it takes the form of integrated circuits that allow stored data to be accessed in any order.

ii. Magnetic Disks

A magnetic disk is a storage device, such as a floppy disk, a hard disk, or a removable cartridge, that is covered with a magnetic coating on which digital information is stored in the form of microscopically small magnetized regions.

iii. Tape Drive

A tape drive is a device that stores computer data on magnetic tape, especially for backup and archiving purposes. Drives can be rewinding, where the device issues a rewind command at the end of a session, or non-rewinding. Rewinding devices are most commonly used when a tape is to be unmounted at the end of a session after batch processing of large amounts of data.

iv. Registers

Registers are temporary memory units that store words. Registers are located in the processor rather than in RAM, so data can be accessed and stored faster. Processor registers are at the top of the memory hierarchy and provide the fastest way for a CPU to access data. There are several types of CPU registers, such as the Program Counter (PC), Instruction Register (IR), Accumulator (A), Flag Register (F), and General Purpose Registers (GPR).


(4) What is Von Neumann Architecture?

The Von Neumann architecture is a computer design model that uses a processing unit and a single separate storage structure to hold both instructions and data. Very few computers have a pure von Neumann architecture. Most computers add another step to check for interrupts, electronic events that could occur at any time.

An interrupt resembles the ring of a telephone, calling a person away from some lengthy task. Interrupts let a computer do other things while it waits for events. Von Neumann computers spend a lot of time moving data to and from memory, and this slows the computer. So engineers often separate the bus into two or more buses, usually one for instructions and the other for data.

Simply, we can say that the Von Neumann architecture is a design model for a stored-program digital computer that uses a processing unit and a single separate storage structure to hold both instructions and data. It is named after the mathematician and early computer scientist John von Neumann. Such a computer implements a universal Turing machine, and it is the common "referential model" for specifying sequential architectures, in contrast with parallel architectures. A stored-program digital computer is one that keeps its program instructions, as well as its data, in read-write, random-access memory. Stored-program computers were an advancement over the program-controlled computers of the 1940s.

(6) Describe the structure of a CPU, providing details on the five major components.

(i) Control Unit

The control unit controls the operation of the CPU and the rest of the machine. The control unit is a finite state machine that takes as its inputs the IR, the status register, and the current major state of the cycle. Its rules are encoded in random logic, a Programmable Logic Array (PLA), or Read-Only Memory (ROM), and its outputs are sent across the processor to each point requiring coordination or direction.

(ii) Arithmetic and Logic Unit (ALU)


An arithmetic-logic unit (ALU) is the part of a computer processor (CPU) that carries out arithmetic and logic operations on the operands in computer instruction words. In some processors, the ALU is divided into two units, an arithmetic unit (AU) and a logic unit (LU). Some processors contain more than one AU; for example, one for fixed-point operations and another for floating-point operations.

(iii)Flag Register

The flag register is the status register that contains the current state of the processor. In the x86 architecture, the original FLAGS register is 16 bits wide. Its successors, the EFLAGS and RFLAGS registers, are 32 and 64 bits wide, respectively. The wider registers retain compatibility with their smaller predecessors.
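
Individual condition bits can be read out of such a status word by masking. The bit positions below follow the documented x86 FLAGS layout (carry flag in bit 0, zero flag in bit 6); the status value itself is made up for illustration:

```python
# Decode two condition bits from a 16-bit status word.
# Bit positions follow the x86 FLAGS layout: CF = bit 0, ZF = bit 6.
CF = 1 << 0    # carry flag
ZF = 1 << 6    # zero flag

status = 0b0000_0000_0100_0001    # hypothetical value with CF and ZF set

carry = bool(status & CF)
zero = bool(status & ZF)
print(carry, zero)   # True True
```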

(iv) Accumulator

The results of arithmetic and logical operations always go to the accumulator, which is connected directly to the output of the ALU. The accumulator is normally denoted by the symbol 'A'.

(v) Program Counter

The program counter holds the address of either the first byte of the next instruction to be fetched for execution, or the address of the next byte of a multi-byte instruction that has not yet been completely fetched. In both cases it is incremented automatically as the instruction bytes are fetched, so that it always holds the address of the next byte to fetch.

(7) Discuss the roles of MAR and MBR in the memory access process.

MAR

MAR stands for Memory Address Register. It holds the address of the memory word being referenced. Every instruction fetch begins by copying the program counter into the MAR (PC → MAR).

MBR


MBR stands for Memory Buffer Register, also called the MDR (Memory Data Register). It holds the data being read from memory or written to memory.

(8) Describe the Fetch-Execute cycle of the Von Neumann Architecture.

Fetch-Execute Cycle is the sequence of actions that a central processing unit performs to execute each machine code instruction in a program.

At the beginning of each cycle the CPU presents the value of the program counter on the address bus. The CPU then fetches the instruction from main memory into the instruction register.

From the instruction register, the data forming the instruction is decoded and passed to the control unit which sends a sequence of control signals to the relevant function units of the CPU to perform the actions required by the instruction such as reading values from registers, passing them to the ALU to add them together and writing the result back to a register. The program counter is then incremented to address the next instruction and the cycle is repeated.
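
The cycle above can be sketched as a toy simulator. The two-byte instruction format, the opcodes, and the register set below are invented for illustration; real instruction sets differ:

```python
# Toy fetch-decode-execute loop. The 2-byte (opcode, operand)
# instruction format and the opcodes are invented for illustration.
LOAD, ADD, STORE, HALT = 1, 2, 3, 0

# Program: A = mem[10]; A += mem[11]; mem[12] = A; halt.
memory = [LOAD, 10, ADD, 11, STORE, 12, HALT, 0,
          0, 0, 7, 35, 0]             # data: mem[10]=7, mem[11]=35

pc, acc = 0, 0
while True:
    mar = pc                               # PC -> MAR
    ir = (memory[mar], memory[mar + 1])    # fetch into IR (via MBR)
    pc += 2                                # PC automatically incremented
    op, operand = ir                       # decode
    if op == LOAD:                         # execute
        acc = memory[operand]
    elif op == ADD:
        acc += memory[operand]
    elif op == STORE:
        memory[operand] = acc
    elif op == HALT:
        break

print(memory[12])   # 42
```

Note how the PC is incremented during the fetch, so that by the time an instruction executes, the PC already addresses the next instruction, just as described above.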

(9) Discuss how the controller controls the execution of a program.

During the first fetch cycle the first instruction is loaded into the IR, and the control unit decodes it and recognizes it as a Load operation; the CPU then fully executes this first assembly-language instruction during the remainder of the first instruction cycle. The second assembly instruction, stored at the memory address pointed to by the PC, is then loaded into the CPU. At the end of the second fetch cycle the PC has been automatically incremented and points to the next memory address, and the IR holds the second assembly instruction. During the second instruction cycle the control unit identifies this as another Load instruction and executes it in the same way.