CENG 450 Computer Systems & Architecture Lecture 2 Amirali Baniasadi amirali@ece.uvic.ca

Preview:


Outline

Power & Cost
Performance
Performance measurement
Amdahl's Law
Benchmarks

History

1. “Big Iron” Computers: Used vacuum tubes, electric relays, and bulk magnetic storage devices. No microprocessors. No memory.
Examples: ENIAC (1945), IBM Mark 1 (1944)

History

Von Neumann: described the stored-program computer (1945); EDSAC (1949) was the first practical stored-program computer. Programs and data are held in memory.

Importance: We are still using the same basic design.

Computer Components

[Block diagram: input devices (keyboard, mouse, disk, ...) and output devices (printer, screen, disk, ...) connected to the processor (CPU, including control) and memory.]

Computer Components

Datapath of a von Neumann machine

[Diagram: general-purpose registers connected over a bus to the ALU input registers (Op1, Op2); the ALU computes Op1 + Op2 into the ALU output register, which is written back over the bus.]

Computer Components

Processor (CPU):
  Active part of the motherboard
  Performs calculations & activates devices
  Gets instructions & data from memory
  Components are connected via buses

Bus:
  Collection of parallel wires
  Transmits data, instructions, or control signals

Motherboard:
  Physical chips for I/O connections, memory, & CPU

Computer Components

CPU consists of:
  Datapath (ALU + registers):
    Performs arithmetic & logical operations
  Control (CU):
    Controls the datapath, memory, & I/O devices
    Sends signals that determine the operations of the datapath, memory, input & output

Technology Change

Technology changes rapidly

HW
  Vacuum tubes: electron-emitting devices
  Transistors: on-off switches controlled by electricity
  Integrated Circuits (ICs / chips): combine thousands of transistors
  Very Large-Scale Integration (VLSI): combines millions of transistors
  What next?

SW
  Machine language: zeros and ones
  Assembly language: mnemonics
  High-level languages: English-like
  Artificial intelligence languages: functions & logic predicates
  Object-oriented programming: objects & operations on objects

Moore’s Prediction

Moore's Law:
  A new generation of memory chips is introduced every 3 years
  Each new generation has 4 times as much memory as its predecessor
  Computer technology doubles every 1.5 years

Example: DRAM capacity

[Plot: DRAM capacity (Kbit, log scale from 10 to 100,000) versus year of introduction, 1976-1996; successive generations: 16K, 64K, 256K, 1M, 4M, 16M, 64M.]

Technology => dramatic change

Processor
  Logic capacity: about 30% per year
  Clock rate: about 20% per year

Memory
  DRAM capacity: about 60% per year (4x every 3 years)
  Memory speed: about 10% per year
  Cost per bit: improves about 25% per year

Disk
  Capacity: about 60% per year

Question: Does everything look OK?
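As a rough, hedged illustration of why the answer may be "no": compounding the growth rates above over a decade (a sketch, not from the lecture; the 10-year horizon is an arbitrary choice) shows logic and DRAM capacity racing ahead while memory speed improves only slowly.

/* Compound the annual improvement rates listed above over 10 years.
   The rates come from the slide; the horizon is assumed. Compile with -lm. */
#include <stdio.h>
#include <math.h>

int main(void) {
    int years = 10;
    printf("logic capacity: x%5.1f\n", pow(1.30, years));  /* ~13.8 */
    printf("clock rate:     x%5.1f\n", pow(1.20, years));  /* ~ 6.2 */
    printf("DRAM capacity:  x%5.1f\n", pow(1.60, years));  /* ~110  */
    printf("memory speed:   x%5.1f\n", pow(1.10, years));  /* ~ 2.6 */
    return 0;
}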

Software Evolution

Machine language
Assembly language
High-level languages
Subroutine libraries

There is a large gap between what is convenient for computers & what is convenient for humans

Translation/interpretation is needed to bridge the two

Language Evolution

High-level language program (in C):

  void swap(int v[], int k)
  {
     int temp;
     temp = v[k];
     v[k] = v[k+1];
     v[k+1] = temp;
  }

Assembly language program (for MIPS):

  swap: muli $2, $5, 4
        add  $2, $4, $2
        lw   $15, 0($2)
        lw   $18, 4($2)
        sw   $18, 0($2)
        sw   $15, 4($2)
        jr   $31

Binary machine language program (for MIPS):

  00000000101000010000000000011000
  00000000100011100001100000100001
  10001100011000100000000000000000
  10001100111100100000000000000100
  10101100111100100000000000000000
  10101100011000100000000000000100
  00000011111000000000000000001000

HW - SW Components

Hardware
  Memory components: registers, register file, memory, disks
  Functional components: adders, multipliers, dividers, ..., comparators
  Control signals

Software
  Data
    Simple: characters, integers, floating-point, pointers
    Structured: arrays, structures (records)
  Instructions: data transfer, arithmetic, shift, control flow, comparison, ...

Things You Will Learn

Assembly language introduction/Review

How to analyze program performance

How to design processor components

How to enhance processor performance (caches, pipelines, parallel processors, multiprocessors)

The Processor Chip

Processor Chip Major Blocks

  Example: Intel Pentium
  Area: 91 mm2
  ~3.3 million transistors (~1 million for cache memory)

[Die photo: major blocks include the instruction cache, data cache, control & branch logic, bus interface, integer datapath, and floating-point datapath.]

Memory

Categories
  Volatile memory: loses information when power is switched off
  Non-volatile memory: keeps information when power is switched off

Types
  Cache: volatile; fast but expensive; smaller capacity; placed closer to the processor
  Main memory: volatile; less expensive; more capacity
  Secondary memory: non-volatile; low cost; very slow; unlimited capacity

Input-Output (I/O)

I/O devices have the hardest organization
  Wide range of speeds: graphics vs. keyboard
  Wide range of requirements: speed, standards, cost, ...

Least amount of research done in this area

Our Primary Focus

The processor (datapath and control)
  Implemented using millions of transistors
  Impossible to understand by looking at each transistor

We need abstraction
  Hides lower-level details to offer a simple model at a higher level
  Advantages:
    Intensive & thorough research into the depths
    Reveals more information
    Omits unneeded details
    Helps us cope with complexity

Examples of abstraction:
  Language hierarchy
  Instruction set architecture (ISA)

Instruction Set Architecture (ISA)

Instruction set:

Complete set of instructions used by a machine

ISA:

Abstract interface between the HW and lowest-level SW. It encompasses information needed to write machine-language programs including

Instructions

Memory size

Registers used

. . .

Instruction Set Architecture (ISA)

ISA is considered part of the SW
Several implementations of the same ISA can exist
Modern ISAs: 80x86/Pentium/K6, PowerPC, DEC Alpha, MIPS, SPARC, HP
We are going to study MIPS

Advantages:
  Different implementations of the same architecture
  Easier to change than HW
  Standardizes instructions, machine-language bit patterns, etc.

Disadvantage: sometimes prevents using new innovations

Instruction Set Architecture (ISA)

Instruction Execution Cycle:

  1. Fetch instruction from memory
  2. Decode instruction: determine its size & action
  3. Fetch operand data
  4. Execute instruction & compute results or status
  5. Store result in memory
  6. Determine next instruction's address
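A minimal sketch (not from the lecture) of how these six steps map onto a simulator loop; the 16-bit instruction format and the two opcodes below are invented purely for illustration.

/* Toy fetch-decode-execute loop. Invented instruction format:
   opcode(4) | dest reg(4) | src reg(4) | src reg or immediate(4). */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t mem[] = {
        0x1105,   /* LOADI r1, 5      */
        0x1203,   /* LOADI r2, 3      */
        0x2312,   /* ADD   r3, r1, r2 */
        0xF000    /* HALT             */
    };
    uint16_t reg[16] = {0};
    uint16_t pc = 0;

    for (;;) {
        uint16_t ir = mem[pc];                       /* 1. fetch instruction    */
        uint16_t op = ir >> 12;                      /* 2. decode               */
        uint16_t rd = (ir >> 8) & 0xF;
        uint16_t rs = (ir >> 4) & 0xF;
        uint16_t rt = ir & 0xF;
        if (op == 0xF) break;                        /* HALT */
        if (op == 0x1) reg[rd] = rt;                 /* 3-5. operands, execute, */
        if (op == 0x2) reg[rd] = reg[rs] + reg[rt];  /*      store result       */
        pc = pc + 1;                                 /* 6. next instruction     */
    }
    printf("r3 = %u\n", (unsigned)reg[3]);           /* prints r3 = 8 */
    return 0;
}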

What Should We Learn?

A specific ISA (MIPS)

Performance issues - vocabulary and motivation

Instruction-Level Parallelism

How to Use Pipelining to improve performance

Exploiting Instruction-Level Parallelism w/ Software Approach

Memory: caches and virtual memory

I/O

What is Expected From You?

• Read textbook & readings!
• Be up-to-date!
• Come back with your input & questions for discussion!
• Appreciate and participate in teamwork!

Power?

Everything is done by tiny switches
  Their charge represents logic values
  Changing charge => energy
  Power = energy over time
  Devices are non-ideal: power => heat
  Excess heat => circuits break down

Need to keep power within acceptable limits

POWER in the real world

[Plot: power density (W/cm2, log scale from 1 to 1000).]

Integrated Circuit Costs

Die cost = Wafer cost / (Dies per wafer * Die yield)

Dies per wafer = (Wafer area / Die area) - (pi * Wafer diameter) / (2 * Die area)^(1/2)

Die yield = Wafer yield * (1 + (Defects per unit area * Die area) / alpha)^(-alpha), with alpha = 4.0

Die yield: percentage of good dies on a wafer

Integrated Circuit Costs: Example

Find the die yield for a defect density of 0.6 defects per cm2, for dies with areas of 1.0 cm2 and 0.49 cm2 (wafer yield = 1, alpha = 4).

For the larger die:  Die yield = (1 + (0.6 * 1.0) / 4)^(-4)  = 0.57
For the smaller die: Die yield = (1 + (0.6 * 0.49) / 4)^(-4) = 0.75
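A quick sketch reproducing the two yield figures above and extending them to a die cost using the earlier formulas. Only the defect density and die areas come from the example; the wafer diameter, wafer cost, and wafer yield below are made-up inputs for illustration.

/* Die yield and die cost sketch (alpha = 4). Compile with -lm.
   Wafer parameters in main() are assumptions, not from the lecture. */
#include <stdio.h>
#include <math.h>

static const double PI = 3.14159265358979;

static double die_yield(double defects_per_cm2, double die_area_cm2) {
    const double alpha = 4.0;
    return pow(1.0 + defects_per_cm2 * die_area_cm2 / alpha, -alpha);
}

static double dies_per_wafer(double wafer_diam_cm, double die_area_cm2) {
    double wafer_area = PI * (wafer_diam_cm / 2.0) * (wafer_diam_cm / 2.0);
    return wafer_area / die_area_cm2
         - PI * wafer_diam_cm / sqrt(2.0 * die_area_cm2);
}

int main(void) {
    printf("yield(1.00 cm2) = %.2f\n", die_yield(0.6, 1.00));   /* 0.57 */
    printf("yield(0.49 cm2) = %.2f\n", die_yield(0.6, 0.49));   /* 0.75 */

    /* Assumed wafer: 20 cm diameter, $1000 per wafer, wafer yield = 1 */
    double dpw  = dies_per_wafer(20.0, 1.00);
    double cost = 1000.0 / (dpw * die_yield(0.6, 1.00));
    printf("dies per wafer = %.0f, die cost = $%.2f\n", dpw, cost);
    return 0;
}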

Why?

IC cost = (Die cost + Testing cost + Packaging cost) / Final test yield

Packaging cost: depends on pins, heat dissipation, ...

Other Costs

Chip          Die    Packaging   Testing   Total
386DX         $4     $1          $4        $9
486DX2        $12    $11         $12       $35
PowerPC 601   $53    $3          $21       $77
HP PA 7100    $73    $35         $16       $124
DEC Alpha     $149   $30         $23       $202
SuperSPARC    $272   $20         $34       $326
Pentium       $417   $19         $37       $473

System Cost: Workstation

System        Subsystem                % of total cost
Cabinet       Sheet metal, plastic     1%
              Power supply, fans       2%
              Cables, nuts, bolts      1%
              (Subtotal)               (4%)
Motherboard   Processor                6%
              DRAM (64 MB)             36%
              Video system             14%
              I/O system               3%
              Printed circuit board    1%
              (Subtotal)               (60%)
I/O devices   Keyboard, mouse          1%
              Monitor                  22%
              Hard disk (1 GB)         7%
              Tape drive (DAT)         6%
              (Subtotal)               (36%)

COST v. PRICE

[Figure: how the list price builds up from the component cost. Component cost (input: chips, displays, ...) plus direct costs (making it: labor, scrap, returns, ...) plus gross margin (overhead: R&D, rent, marketing, profits, ...) gives the average selling price; adding the average discount (commission: channel profit, volume discounts, ...) gives the list price. Markups shown: +33%, +25-100%, +50-80%; shares of list price (workstation vs. PC): 25-31%, 33-45%, 8-10%, 33-14%.]

Q: What % of company income goes to Research and Development (R&D)?

Performance

Purchasing perspective: given a collection of machines, which has the
  best performance?
  least cost?
  best performance / cost?

Design perspective: faced with design options, which has the
  best performance improvement?
  least cost?
  best performance / cost?

Both require a basis for comparison and a metric for evaluation

Our goal is to understand the cost & performance implications of architectural choices

Two notions of “performance”

° Time to do the task (Execution Time)
  - execution time, response time, latency

° Tasks per day, hour, week, sec, ns, ...
  - throughput, bandwidth

Response time and throughput are often in opposition

Plane              DC to Paris   Speed      Passengers   Throughput (pmph)
Boeing 747         6.5 hours     610 mph    470          286,700
BAC/Sud Concorde   3 hours       1350 mph   132          178,200

Which has higher performance?

Example

• Time of Concorde vs. Boeing 747?
  • Concorde is 1350 mph / 610 mph = 2.2 times faster
                = 6.5 hours / 3 hours

• Throughput of Concorde vs. Boeing 747?
  • Concorde is 178,200 pmph / 286,700 pmph = 0.62 "times faster"
  • Boeing is 286,700 pmph / 178,200 pmph = 1.6 "times faster"

• Boeing is 1.6 times ("60%") faster in terms of throughput
• Concorde is 2.2 times ("120%") faster in terms of flying time

We will focus primarily on execution time for a single job

Definitions

Performance is in units of things-per-second: bigger is better

If we are primarily concerned with response time:

  performance(X) = 1 / execution_time(X)

"X is n times faster than Y" means:

  n = Performance(X) / Performance(Y)

Performance measurement

How about a collection of programs? Example:

Three machines: A, B and C. Two programs: P1 and P2.

Execution time (seconds):
        A      B     C
  P1    1      10    20
  P2    1000   100   20

Weighted arithmetic mean = SUM of (Weight_i * Time_i), for three weightings of (P1, P2):

                          A       B     C
  W(1) = (0.5,   0.5  )   500.5   55    20
  W(2) = (0.909, 0.091)   91.9    18    20
  W(3) = (0.999, 0.001)   2       10    20
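A small sketch (not from the lecture) that reproduces the weighted arithmetic means above; the weights follow the usual textbook convention of equalizing the two programs' weighted times on one machine.

/* Weighted arithmetic mean of execution times for machines A, B, C. */
#include <stdio.h>

int main(void) {
    const char *machine[] = {"A", "B", "C"};
    double time[2][3] = {
        {   1.0,  10.0, 20.0},   /* P1 on A, B, C */
        {1000.0, 100.0, 20.0}    /* P2 on A, B, C */
    };
    double weight[3][2] = {
        {0.500, 0.500},          /* W(1) */
        {0.909, 0.091},          /* W(2): P1 and P2 take equal weighted time on B */
        {0.999, 0.001}           /* W(3): P1 and P2 take equal weighted time on A */
    };

    for (int w = 0; w < 3; w++) {
        printf("W(%d):", w + 1);
        for (int m = 0; m < 3; m++) {
            double mean = weight[w][0] * time[0][m] + weight[w][1] * time[1][m];
            printf("  %s = %6.1f", machine[m], mean);
        }
        printf("\n");
    }
    return 0;
}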

Performance measurement

Other option: geometric means (self study: pages 37-39 of the textbook)

Metrics of performance

[Figure: metrics of performance at each level of the system stack.]

  Application                       Answers per month, operations per second
  Programming language, compiler
  ISA                               (millions of) instructions per second - MIPS
                                    (millions of) FP operations per second - MFLOP/s
  Datapath, control                 Megabytes per second
  Function units                    Cycles per second (clock rate)
  Transistors, wires, pins

Relating Processor Metrics

CPU execution time = CPU clock cycles x Clock cycle time
                   = CPU clock cycles / Clock rate

CPU clock cycles = Instruction count x Average clock cycles per instruction (CPI)
             CPI = CPU clock cycles / Instruction count

CPI tells us something about the Instruction Set Architecture, the Implementation of that architecture, and the program measured
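A sketch of these relations in code; the instruction count, CPI, and clock rate below are invented numbers, used only to exercise the formulas.

/* CPU time = instruction count x CPI x clock cycle time
            = instruction count x CPI / clock rate
   All inputs below are made up for illustration. */
#include <stdio.h>

int main(void) {
    double instructions = 2.0e9;    /* dynamic instruction count (assumed) */
    double cpi          = 1.5;      /* average clock cycles per instruction (assumed) */
    double clock_rate   = 1.0e9;    /* 1 GHz clock (assumed) */

    double clock_cycles = instructions * cpi;
    double cpu_time     = clock_cycles / clock_rate;

    printf("clock cycles = %.2e\n", clock_cycles);    /* 3.00e+09 */
    printf("CPU time     = %.2f s\n", cpu_time);      /* 3.00 s   */
    printf("CPI (check)  = %.2f\n", clock_cycles / instructions);
    return 0;
}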

Aspects of CPU Performance

CPU time = Seconds / Program
         = (Instructions / Program) x (Cycles / Instruction) x (Seconds / Cycle)

                     Instr. count    CPI    Clock rate
  Program
  Compiler
  Instr. Set Arch.
  Organization
  Technology

Aspects of CPU Performance

CPU time = Seconds / Program
         = (Instructions / Program) x (Cycles / Instruction) x (Seconds / Cycle)

                     Instr. count    CPI    Clock rate
  Program            X
  Compiler           X               (X)
  Instr. Set Arch.   X               X
  Organization                       X      X
  Technology                                X

Organizational Trade-offs

[Figure: the hardware/software stack (application, programming language, compiler, ISA, datapath & control, function units, transistors/wires/pins) traded off against the three CPU-time factors: instruction mix, CPI, and cycle time.]

CPI

CPU time = Clock cycle time * SUM over i = 1..n of (CPI_i * I_i)

CPI = SUM over i = 1..n of (CPI_i * F_i),  where F_i = I_i / Instruction count  ("instruction frequency")

Invest Resources where time is Spent!

CPI = (CPU Time * Clock Rate) / Instruction Count = Clock Cycles / Instruction Count

“Average cycles per instruction”

Example (RISC processor)

Base Machine (Reg / Reg), Typical Mix:

  Op       Freq   Cycles   CPI(i)   % Time
  ALU      50%    1        0.5      23%
  Load     20%    5        1.0      45%
  Store    10%    3        0.3      14%
  Branch   20%    2        0.4      18%
                  Total CPI: 2.2

How much faster would the machine be if a better data cache reduced the average load time to 2 cycles?

How does this compare with using branch prediction to shave a cycle off the branch time?

What if two ALU instructions could be executed at once?

Example (RISC processor)

Base Machine (Reg / Reg), Typical Mix:

  Op       Freq   Cycles   CPI(i)   % Time
  ALU      50%    1        0.5      23%
  Load     20%    5        1.0      45%
  Store    10%    3        0.3      14%
  Branch   20%    2        0.4      18%
                  Total CPI: 2.2

How much faster would the machine be if:
  A) Loads took "0" cycles?
  B) Stores took "0" cycles?
  C) ALU ops took "0" cycles?
  D) Branches took "0" cycles?

MAKE THE COMMON CASE FAST
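A sketch of the arithmetic behind these what-if questions: recompute CPI = SUM(CPI_i x F_i) with a modified cycle count for one instruction class, then take the ratio of old to new CPI as the speedup, assuming the instruction count and cycle time stay fixed. The "better data cache" case from the earlier slide is worked as the example.

/* Per-class CPI and speedup from the mix above.
   Speedup = CPI_old / CPI_new when instruction count and
   clock cycle time are unchanged. */
#include <stdio.h>

static double cpi(const double freq[], const double cycles[], int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += freq[i] * cycles[i];
    return sum;
}

int main(void) {
    /* ALU, Load, Store, Branch */
    double freq[]   = {0.5, 0.2, 0.1, 0.2};
    double cycles[] = {1.0, 5.0, 3.0, 2.0};
    double base = cpi(freq, cycles, 4);
    printf("base CPI = %.1f\n", base);                 /* 2.2 */

    /* Better data cache: average load time drops from 5 to 2 cycles */
    double faster_loads[] = {1.0, 2.0, 3.0, 2.0};
    double new_cpi = cpi(freq, faster_loads, 4);
    printf("new CPI = %.1f, speedup = %.2f\n",
           new_cpi, base / new_cpi);                   /* 1.6, 1.38 */
    return 0;
}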

Amdahl's Law

Speedup due to enhancement E:

  Speedup(E) = ExTime without E / ExTime with E = Performance with E / Performance without E

Suppose that enhancement E accelerates a fraction F of the task by a factor S, and the remainder of the task is unaffected. Then:

  ExTime(with E) = ((1 - F) + F/S) x ExTime(without E)

  Speedup(with E) = ExTime(without E) / (((1 - F) + F/S) x ExTime(without E))

  Speedup(with E) = 1 / ((1 - F) + F/S)

Amdahl's Law: Example

A new CPU makes Web serving 10 times faster. The old CPU spent 40% of the time on computation and 60% on waiting for I/O. What is the overall speedup?

  Fraction enhanced = 0.4
  Speedup enhanced = 10

  Speedup overall = 1 / (0.6 + 0.4/10) = 1.56
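The same calculation as a small sketch, so other values of F and S can be tried; the "infinite speedup" line is an added illustration of the limit imposed by the unimproved 60%.

/* Amdahl's Law: Speedup = 1 / ((1 - F) + F / S) */
#include <stdio.h>

static double amdahl(double f, double s) {
    return 1.0 / ((1.0 - f) + f / s);
}

int main(void) {
    /* Web-serving example: 40% of the time sped up by a factor of 10 */
    printf("speedup = %.2f\n", amdahl(0.4, 10.0));   /* 1.56 */

    /* Even with an "infinite" speedup of that 40%, the limit is 1/0.6 */
    printf("limit   = %.2f\n", amdahl(0.4, 1e12));   /* 1.67 */
    return 0;
}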

Example from Quiz 1-2004

a)A program consists of 80% initialization code and of 20% code being the main iteration loop, which is run 1000 times. The total runtime of the program is 100 seconds. Calculate the fraction of the total run time needed for the initialization and the iteration. Which part would you optimize? B) The program should have a total run time of 60 seconds. How can this be achieved? (15 points)

Marketing Metrics

MIPS = Instruction count / (Execution time x 10^6)
     = Clock rate / (CPI x 10^6)

  • machines with different instruction sets?
  • programs with different instruction mixes?
  • dynamic frequency of instructions
  • uncorrelated with performance

GFLOPS = FP operations / (Execution time x 10^9)      (PlayStation: 6.4 GFLOPS)

  • machine dependent
  • often not where time is spent
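A sketch (invented numbers, not from the lecture) of why the MIPS rating can be uncorrelated with performance: two hypothetical machines run the same program at the same clock rate, and the one with the higher MIPS rating takes longer.

/* MIPS = clock rate / (CPI x 10^6). Machine 2 executes more (simpler)
   instructions: it posts a higher MIPS rating yet runs the program
   more slowly. All numbers are invented for illustration. */
#include <stdio.h>

int main(void) {
    double clock_rate = 500e6;               /* 500 MHz for both machines */

    double instr1 = 1.0e9, cpi1 = 1.0;       /* machine 1 */
    double instr2 = 3.0e9, cpi2 = 0.8;       /* machine 2 */

    double time1 = instr1 * cpi1 / clock_rate;
    double time2 = instr2 * cpi2 / clock_rate;

    printf("machine 1: %4.0f MIPS, %.1f s\n", clock_rate / (cpi1 * 1e6), time1);
    printf("machine 2: %4.0f MIPS, %.1f s\n", clock_rate / (cpi2 * 1e6), time2);
    /* machine 1:  500 MIPS, 2.0 s
       machine 2:  625 MIPS, 4.8 s */
    return 0;
}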

Why Do Benchmarks?

How we evaluate differences
  Different systems
  Changes to a single system

Provide a target
  Benchmarks should represent a large class of important programs
  Improving benchmark performance should help many programs

For better or worse, benchmarks shape a field
  Good benchmarks accelerate progress: good target for development
  Bad benchmarks hurt progress: help real programs vs. sell machines/papers? Inventions that help real programs don't help the benchmark

Basis of Evaluation

                               Pros                                   Cons
Actual target workload         representative                         very specific; non-portable; difficult to
                                                                      run or measure; hard to identify cause
Full application benchmarks    portable; widely used; improvements    less representative
                               useful in reality
Small "kernel" benchmarks      easy to run, early in design cycle     easy to "fool"
Microbenchmarks                identify peak capability and           "peak" may be a long way from
                               potential bottlenecks                  application performance

Successful Benchmark: SPEC

1987: RISC industry mired in "bench marketing": ("That is an 8 MIPS machine, but they claim 10 MIPS!")

EE Times + 5 companies band together to form the Systems Performance Evaluation Committee (SPEC) in 1988: Sun, MIPS, HP, Apollo, DEC

Create a standard list of programs, inputs, and reporting rules: some real programs, includes OS calls, some I/O

SPEC first round

First round 1989: 10 programs, single number to summarize performance

One program spent 99% of its time in a single line of code; a new front-end compiler could improve it dramatically

[Bar chart: SPEC performance ratio (0 to 800) by benchmark: gcc, espresso, spice, doduc, nasa7, li, eqntott, matrix300, fpppp, tomcatv.]

SPEC95

Eighteen application benchmarks (with inputs) reflecting a technical computing workload

Eight integer: go, m88ksim, gcc, compress, li, ijpeg, perl, vortex

Ten floating-point intensive: tomcatv, swim, su2cor, hydro2d, mgrid, applu, turb3d, apsi, fpppp, wave5

Must run with standard compiler flags: eliminate special undocumented incantations that may not even generate working code for real programs

Summary

Time is the measure of computer performance!

Good products are created when you have:
  Good benchmarks
  Good ways to summarize performance

Without good benchmarks and summaries, the choice between improving a product for real programs and improving it to get more sales => sales almost always wins

Remember Amdahl's Law: speedup is limited by the unimproved part of the program

CPU time = Seconds / Program
         = (Instructions / Program) x (Cycles / Instruction) x (Seconds / Cycle)

Readings & More…

Reminder:

READ:

TEXTBOOK: Chapter 1, pages 1 to 47
Moore paper (posted on course web site)

Recommended