
Review of Chapter 1 and exercises



Copyright © 2012, Elsevier Inc. All rights reserved.

Chapter 1

Fundamentals of Quantitative Design and Analysis

Computer Architecture: A Quantitative Approach, Fifth Edition


Computer Technology

• Performance improvements come from:
  – Improvements in semiconductor technology: feature size, clock speed
  – Improvements in computer architectures: enabled by HLL compilers and UNIX, leading to RISC architectures
• Together these have enabled:
  – Lightweight computers
  – Productivity-based managed/interpreted programming languages
  – SaaS, virtualization, the cloud
• Applications evolution: speech, sound, images, video, “augmented/extended reality”, “big data”


Single Processor Performance

[Figure: growth in single-processor performance over time, showing the rapid improvement of the RISC era and the subsequent move to multi-processors]


Current Trends in Architecture

• Cannot continue to leverage instruction-level parallelism (ILP) alone: the long run of single-processor performance improvement ended in 2003
• New models for performance:
  – Data-level parallelism (DLP)
  – Thread-level parallelism (TLP)
  – Request-level parallelism (RLP)
• These require explicit restructuring of the application


Classes of Computers

• Personal Mobile Device (PMD), e.g. smart phones and tablet computers (1.8 billion sold in 2010): emphasis on energy efficiency and real-time performance
• Desktop computing (0.35 billion): emphasis on price-performance
• Servers (20 million): emphasis on availability (downtime is very costly!), scalability, and throughput
• Clusters / warehouse-scale computers, used for “Software as a Service (SaaS)”, PaaS, IaaS, etc.: emphasis on availability ($6M per hour of downtime at Amazon.com!) and price-performance (power is ~80% of TCO!)
  – Sub-class: supercomputers, with emphasis on floating-point performance, fast internal networks, and big-data analytics
• Embedded computers (19 billion in 2010): emphasis on price


Parallelism

• Classes of parallelism in applications:
  – Data-level parallelism (DLP)
  – Task-level parallelism (TLP)
• Classes of architectural parallelism:
  – Instruction-level parallelism (ILP)
  – Vector architectures / graphics processing units (GPUs)
  – Thread-level parallelism
  – Request-level parallelism


Flynn’s Taxonomy

• Single instruction stream, single data stream (SISD)
• Single instruction stream, multiple data streams (SIMD): vector architectures, multimedia extensions, graphics processor units
• Multiple instruction streams, single data stream (MISD): no commercial implementation
• Multiple instruction streams, multiple data streams (MIMD): tightly-coupled MIMD and loosely-coupled MIMD


Defining Computer Architecture

• “Old” view of computer architecture: instruction set architecture (ISA) design, i.e. decisions regarding registers, memory addressing, addressing modes, instruction operands, available operations, control flow instructions, and instruction encoding
• “Real” computer architecture:
  – Meets the specific requirements of the target machine
  – Designed to maximize performance within constraints: cost, power, and availability
  – Includes the ISA, the microarchitecture, and the hardware


Trends in Technology

• Integrated circuit technology:
  – Transistor density: 35%/year
  – Die size: 10-20%/year
  – Overall integration capacity: 40-55%/year
• DRAM capacity: 25-40%/year (slowing)
• Flash capacity: 50-60%/year; 15-20X cheaper per bit than DRAM
• Magnetic disk technology: 40%/year; 15-25X cheaper per bit than Flash, 300-500X cheaper per bit than DRAM


Bandwidth and Latency

• Bandwidth or throughput: the total work done in a given time
  – 10,000-25,000X improvement for processors over the first milestone
  – 300-1200X improvement for memory and disks over the first milestone
• Latency or response time: the time between the start and completion of an event
  – 30-80X improvement for processors over the first milestone
  – 6-8X improvement for memory and disks over the first milestone


Bandwidth and Latency

Log-log plot of bandwidth and latency milestones


Transistors and Wires

• Feature size: the minimum size of a transistor or wire in the x or y dimension; from 10 microns in 1971 to 0.032 microns (32 nm) in 2011
• Transistor performance scales linearly with feature size, but wire delay does not improve as feature size shrinks!
• Integration density scales quadratically
• Linear performance growth and quadratic density growth present both a challenge and an opportunity, and create the need for the computer architect!


Power and Energy

• Problem: getting power in and getting the resulting heat out
• Thermal Design Power (TDP):
  – Characterizes sustained power consumption
  – Used as the target for the power supply and cooling system
  – Lower than peak power, higher than average power consumption
• The clock rate can be reduced dynamically to limit power consumption
• Energy per task is often a better measurement


Dynamic Energy and Power

• Dynamic energy, per transistor switch from 0 -> 1 or 1 -> 0:
  – Energy_dynamic = 1/2 × Capacitive load × Voltage²
• Dynamic power:
  – Power_dynamic = 1/2 × Capacitive load × Voltage² × Frequency switched
• Reducing the clock rate reduces power, but not the energy per task
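
To make the relationships concrete, here is a minimal Python sketch (not from the slides); the capacitive load value is an arbitrary assumption. It shows that halving the supply voltage quarters the energy per transition, and that lowering only the frequency lowers power but not the energy needed to finish a fixed task.

def dynamic_energy(cap_load_farads, voltage):
    # Energy per 0 -> 1 or 1 -> 0 transition: 1/2 * C * V^2
    return 0.5 * cap_load_farads * voltage ** 2

def dynamic_power(cap_load_farads, voltage, freq_switched_hz):
    # Average switching power: 1/2 * C * V^2 * f
    return dynamic_energy(cap_load_farads, voltage) * freq_switched_hz

C = 1e-9  # assumed capacitive load, in farads
print(dynamic_energy(C, 1.0))   # 5.0e-10 J per transition at 1.0 V
print(dynamic_energy(C, 0.5))   # 1.25e-10 J at 0.5 V: quartered
print(dynamic_power(C, 1.0, 2e9), dynamic_power(C, 1.0, 1e9))  # halving f halves power
# A task with a fixed number of transitions still consumes the same total energy.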


Power

• The Intel 80386 consumed ~2 W
• A 3.3 GHz Intel Core i7 consumes 130 W
• That heat must be dissipated from a 1.5 cm x 1.5 cm chip
• This is the limit of what can be cooled by air


Reducing Power

• Techniques for reducing power:
  – Do nothing well
  – Dynamic voltage-frequency scaling (DVFS)
  – Low-power states for DRAM and disks
  – Overclocking some cores while turning the others off (Turbo mode)


Static Power

• Static power consumption: Power_static = Current_static × Voltage
• Scales with the number of transistors
• To reduce it: power gating, race-to-halt
• The new primary evaluation metrics for design innovation: tasks per joule, or performance per watt


Trends in Cost

• Cost is driven down by the learning curve, measured by yield
• DRAM: price closely tracks cost
• Microprocessors: price also depends on volume; roughly 10% lower cost for each doubling of volume


Integrated Circuit Cost

• Cost per die = Cost of wafer / (Dies per wafer × Die yield)
• Bose-Einstein yield model: Die yield = Wafer yield × 1 / (1 + Defects per unit area × Die area)^N
  – Defects per unit area = 0.016-0.057 defects per square cm (2010)
  – N = process-complexity factor = 11.5-15.5 (40 nm, 2010)
• The manufacturing process dictates the wafer cost, wafer yield, and defects per unit area
• The architect’s design determines the die area, which in turn affects the defects per die and the cost per die
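
A hedged Python sketch of how these pieces combine, using the cost and yield relations from the textbook these slides summarize; the wafer cost, wafer diameter, defect density, and die areas below are illustrative assumptions, not data from the slides.

import math

def dies_per_wafer(wafer_diameter_cm, die_area_cm2):
    # Wafer area divided by die area, minus dies lost around the edge
    return (math.pi * (wafer_diameter_cm / 2) ** 2) / die_area_cm2 \
           - (math.pi * wafer_diameter_cm) / math.sqrt(2 * die_area_cm2)

def die_yield(defects_per_cm2, die_area_cm2, n, wafer_yield=1.0):
    # Bose-Einstein model: yield drops with defect density, die area, and N
    return wafer_yield / (1 + defects_per_cm2 * die_area_cm2) ** n

def cost_per_die(wafer_cost, wafer_diameter_cm, die_area_cm2, defects_per_cm2, n):
    good_dies = dies_per_wafer(wafer_diameter_cm, die_area_cm2) \
                * die_yield(defects_per_cm2, die_area_cm2, n)
    return wafer_cost / good_dies

# Assumed numbers: a $5000, 30 cm wafer with 2010-era 40 nm parameters.
for area in (1.0, 2.0, 4.0):  # die area in cm^2
    print(area, round(cost_per_die(5000, 30, area, 0.03, 13.5), 2))
# Doubling the die area much more than doubles the cost per die.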


Dependability

• Systems alternate between two states of service with respect to an SLA/SLO:
  1. Service accomplishment, where the service is delivered as specified by the SLA
  2. Service interruption, where the delivered service is different from the SLA
• Module reliability: failure (F) = transition from state 1 to state 2; repair (R) = transition from state 2 back to state 1
  – Mean time to failure (MTTF)
  – Mean time to repair (MTTR)
  – Mean time between failures (MTBF) = MTTF + MTTR
  – Availability = MTTF / MTBF = MTTF / (MTTF + MTTR)
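
As a small illustration of the last two definitions, a Python sketch with made-up MTTF/MTTR values:

def availability(mttf_hours, mttr_hours):
    # Availability = MTTF / MTBF, where MTBF = MTTF + MTTR
    return mttf_hours / (mttf_hours + mttr_hours)

# Assumed module: fails once per 1,000,000 hours on average, 24 hours to repair.
mttf, mttr = 1_000_000, 24
print(availability(mttf, mttr))  # ~0.999976
print(1e9 / mttf, "FIT")         # the same failure rate expressed in FIT (1000 FIT)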


Measuring Performance

• Typical performance metrics: response time and throughput
• Speedup of X relative to Y = Execution time of Y / Execution time of X
• Execution time:
  – Wall clock time: includes all system overheads
  – CPU time: only computation time
• Benchmarks:
  – Kernels (e.g. matrix multiply)
  – Toy programs (e.g. sorting)
  – Synthetic benchmarks (e.g. Dhrystone)
  – Benchmark suites (e.g. SPEC CPU2006 FP, TPC-C)


Principles of Computer Design

• Take advantage of parallelism: e.g. multiple processors, disks, and memory banks; pipelining; multiple functional units
• Principle of locality: reuse of data and instructions
• Focus on the common case: Amdahl’s Law


Principles of Computer Design: The Processor Performance Equation

CPU time = Instruction count × Cycles per instruction (CPI) × Clock cycle time


Principles of Computer Design

• When different instruction types have different CPIs, cycles are summed per type:
  – CPU clock cycles = Σ over instruction types i of (IC_i × CPI_i)
  – Overall CPI = CPU clock cycles / Instruction count
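
A short Python sketch of this per-type form of the equation; the instruction mix, CPIs, and clock rate below are assumed purely for illustration.

def cpu_time_and_cpi(mix, clock_cycle_time_s):
    # mix: list of (instruction_count, cpi) pairs, one entry per instruction type
    total_cycles = sum(ic * cpi for ic, cpi in mix)
    total_insts = sum(ic for ic, _ in mix)
    overall_cpi = total_cycles / total_insts
    return total_cycles * clock_cycle_time_s, overall_cpi

# Assumed mix: 50M ALU ops (CPI 1), 30M loads/stores (CPI 2), 20M branches (CPI 3)
mix = [(50_000_000, 1.0), (30_000_000, 2.0), (20_000_000, 3.0)]
time_s, cpi = cpu_time_and_cpi(mix, 1 / 2e9)  # 2 GHz clock -> 0.5 ns cycle
print(cpi)     # overall CPI = 1.7
print(time_s)  # CPU time = 170M cycles x 0.5 ns = 0.085 s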


Chapter 1 Review & Examples


Instruction Set Architecture (ISA)

• Serves as an interface between software and hardware.
• Provides a mechanism by which the software tells the hardware what should be done.

Software-to-hardware path (the instruction set sits at the boundary):
  High-level language code (C, C++, Java, Fortran)
    -> [compiler] -> assembly language code (architecture-specific statements)
    -> [assembler] -> machine language code (architecture-specific bit patterns)
    -> hardware


Instruction Set Design Issues

• Instruction set design issues include:
  – Where are operands stored?
    » registers, memory, stack, accumulator
  – How many explicit operands are there?
    » 0, 1, 2, or 3
  – How is the operand location specified?
    » register, immediate, indirect, . . .
  – What type & size of operands are supported?
    » byte, int, float, double, string, vector, . . .
  – What operations are supported?
    » add, sub, mul, move, compare, . . .


Classifying ISAs

• Accumulator (before 1960, e.g. 68HC11):
    1-address   add A            acc <- acc + mem[A]
• Stack (1960s to 1970s):
    0-address   add              tos <- tos + next
• Memory-Memory (1970s to 1980s):
    2-address   add A, B         mem[A] <- mem[A] + mem[B]
    3-address   add A, B, C      mem[A] <- mem[B] + mem[C]
• Register-Memory (1970s to present, e.g. 80x86):
    2-address   add R1, A        R1 <- R1 + mem[A]
                load R1, A       R1 <- mem[A]
• Register-Register (Load/Store, RISC) (1960s to present, e.g. MIPS):
    3-address   add R1, R2, R3   R1 <- R2 + R3
                load R1, R2      R1 <- mem[R2]
                store R1, R2     mem[R1] <- R2


Operand Locations in Four ISA Classes

[Figure: where ALU operands live in stack, accumulator, register-memory, and register-register (load/store) GPR architectures]


Code Sequence C = A + B for Four Instruction Sets

  Stack        Accumulator    Register (register-memory)    Register (load-store)
  ----------   ------------   ---------------------------   ---------------------
  Push A       Load A         Load R1, A                     Load R1, A
  Push B       Add B          Add R1, B                      Load R2, B
  Add          Store C        Store C, R1                    Add R3, R1, R2
  Pop C                                                      Store C, R3

The add itself operates on: memory operands via the stack; acc = acc + mem[B]; R1 = R1 + mem[B]; R3 = R1 + R2.


Types of Addressing Modes (VAX)

  Addressing mode        Example               Action
  1. Register direct     Add R4, R3            R4 <- R4 + R3
  2. Immediate           Add R4, #3            R4 <- R4 + 3
  3. Displacement        Add R4, 100(R1)       R4 <- R4 + M[100 + R1]
  4. Register indirect   Add R4, (R1)          R4 <- R4 + M[R1]
  5. Indexed             Add R4, (R1 + R2)     R4 <- R4 + M[R1 + R2]
  6. Direct              Add R4, (1000)        R4 <- R4 + M[1000]
  7. Memory indirect     Add R4, @(R3)         R4 <- R4 + M[M[R3]]
  8. Autoincrement       Add R4, (R2)+         R4 <- R4 + M[R2]; R2 <- R2 + d
  9. Autodecrement       Add R4, (R2)-         R4 <- R4 + M[R2]; R2 <- R2 - d
  10. Scaled             Add R4, 100(R2)[R3]   R4 <- R4 + M[100 + R2 + R3*d]

• Studies by Clark and Emer indicate that modes 1-4 account for 93% of all operands on the VAX.
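
To make the most common modes concrete, here is a toy Python sketch (the register and memory contents are arbitrary assumptions) that mimics modes 1-4 from the table:

# Toy machine state, assumed values only.
regs = {"R1": 4, "R2": 8, "R3": 2, "R4": 10}
mem = {4: 111, 8: 222, 104: 444, 1000: 555}

def add_register_direct(rd, rs):        # 1. Add R4, R3
    regs[rd] += regs[rs]

def add_immediate(rd, imm):             # 2. Add R4, #3
    regs[rd] += imm

def add_displacement(rd, disp, rb):     # 3. Add R4, 100(R1)
    regs[rd] += mem[disp + regs[rb]]

def add_register_indirect(rd, rb):      # 4. Add R4, (R1)
    regs[rd] += mem[regs[rb]]

add_register_direct("R4", "R3")    # R4 = 10 + 2      = 12
add_immediate("R4", 3)             # R4 = 12 + 3      = 15
add_displacement("R4", 100, "R1")  # R4 = 15 + M[104] = 459
add_register_indirect("R4", "R2")  # R4 = 459 + M[8]  = 681
print(regs["R4"])                  # 681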


Types of Operations

• Arithmetic and logic:  AND, ADD
• Data transfer:         MOVE, LOAD, STORE
• Control:               BRANCH, JUMP, CALL
• System:                OS CALL, VM
• Floating point:        ADDF, MULF, DIVF
• Decimal:               ADDD, CONVERT
• String:                MOVE, COMPARE
• Graphics:              (DE)COMPRESS


MIPS Instructions

• All instructions are exactly 32 bits wide
• Different formats serve different purposes
• Similarities between the formats ease implementation

  I-Format (bits 31..0):  op (6) | rs (5) | rt (5) | offset (16)
  R-Format (bits 31..0):  op (6) | rs (5) | rt (5) | rd (5) | shamt (5) | funct (6)
  J-Format (bits 31..0):  op (6) | address (26)


MIPS Instruction Types

• Arithmetic & logical - manipulate data in registers
    add $s1, $s2, $s3     # $s1 = $s2 + $s3
    or  $s3, $s4, $s5     # $s3 = $s4 OR $s5
• Data transfer - move register data to/from memory with load & store
    lw $s1, 100($s2)      # $s1 = Memory[$s2 + 100]
    sw $s1, 100($s2)      # Memory[$s2 + 100] = $s1
• Branch - alter program flow
    beq $s1, $s2, 25      # if ($s1 == $s2) PC = PC + 4 + 4*25
                          # else            PC = PC + 4


MIPS Arithmetic & Logical Instructions

• Instruction usage (assembly):
    add dest, src1, src2      # dest = src1 + src2
    sub dest, src1, src2      # dest = src1 - src2
    and dest, src1, src2      # dest = src1 AND src2
• Instruction characteristics:
  – Always 3 operands: a destination + 2 sources
  – Operand order is fixed
  – Operands are always general-purpose registers
• Design principles:
  – Design Principle 1: Simplicity favors regularity
  – Design Principle 2: Smaller is faster


Arithmetic & Logical Instructions - Binary Representation

  R-Format (bits 31..0):  op (6) | rs (5) | rt (5) | rd (5) | shamt (5) | funct (6)

• Used for arithmetic, logical, and shift instructions
  – op: basic operation of the instruction (opcode)
  – rs: first register source operand
  – rt: second register source operand
  – rd: register destination operand
  – shamt: shift amount (more about this later)
  – funct: function - the specific type of operation
• Also called “R-Format” or “R-Type” instructions


Arithmetic & Logical Instructions - Binary Representation Example

• Machine language for add $8, $17, $18
• See the reference card for op and funct values

  Field:    op      rs      rt      rd      shamt   funct
  Width:    6 bits  5 bits  5 bits  5 bits  5 bits  6 bits
  Decimal:  0       17      18      8       0       32
  Binary:   000000  10001   10010   01000   00000   100000
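
A small Python sketch (not part of the slides) that packs the six fields and reproduces the encoding above:

def encode_r_format(op, rs, rt, rd, shamt, funct):
    # Pack op(6) | rs(5) | rt(5) | rd(5) | shamt(5) | funct(6) into 32 bits
    return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

# add $8, $17, $18  ->  op=0, rs=17, rt=18, rd=8, shamt=0, funct=32
word = encode_r_format(0, 17, 18, 8, 0, 32)
print(f"{word:032b}")   # 00000010001100100100000000100000
print(f"0x{word:08x}")  # 0x02324020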


MIPS Data Transfer Instructions

• Transfer data between registers and memory
• Instruction format (assembly):
    lw $dest, offset($addr)     # load word
    sw $src, offset($addr)      # store word
• Uses:
  – Accessing a variable in main memory
  – Accessing an array element


Review: Chapter 1

• Classes of computers and classes of parallelism
• Technology trends
• Dependability
• Performance measurements and benchmarks
• Principles of computer design


5 Classes of Computers

• Personal mobile devices
  – Cost is the primary concern
  – Also energy, media performance, and responsiveness
• Desktop computing
  – Price-performance is the primary concern
• Servers
  – Availability, scalability, and throughput
• Clusters / warehouse-scale computers
  – Price-performance, energy
• Embedded computers
  – Price


Classes of Parallelism & Architectures

• Data-level parallelism
  – Data items can be operated on at the same time
• Task-level parallelism
  – Tasks can operate independently and largely in parallel
• Instruction-level parallelism: exploits data-level parallelism
  – Pipelining, speculative execution
• Vector architectures & GPUs: exploit data-level parallelism
  – A single instruction operates on a collection of data in parallel
• Thread-level parallelism: exploits either data-level or task-level parallelism
  – Parallelism via parallel threads
• Request-level parallelism: exploits task-level parallelism
  – Parallelism via largely decoupled tasks


4 Ways for Hardware to Support Parallelism

• Single instruction stream, single data stream (SISD)
• Single instruction stream, multiple data streams (SIMD)
  – e.g. GPUs; targets data-level parallelism
• Multiple instruction streams, single data stream (MISD)
  – No commercial multiprocessor of this type
• Multiple instruction streams, multiple data streams (MIMD)
  – e.g. multi-core processors; targets task-level parallelism


Trends in Technology

• Integrated circuit (IC) logic technology
  – Moore’s Law: transistor count on a chip grows about 40%-55% per year, i.e. it doubles roughly every 18 to 24 months
• Semiconductor DRAM
  – In 2011, capacity growth of 25%-40% per year
• Flash
  – Capacity growth of 50%-60% per year
• Magnetic disk
  – Since 2004, capacity growth has dropped back to 40% per year


Trends in Performance

• Bandwidth vs. latency
  – Bandwidth has improved far more than latency has


Growth in Processor Performance

[Figure: single-processor performance over time. The RISC era exploited parallelism via pipelining and locality via caches; the power wall and the lack of further instruction-level parallelism ended that growth and forced the move to multi-processors]


An example: the Intel 486 CPU, released in 1992; 66 MHz, with L2 cache, 4.9-6.3 W

http://www.cpu-world.com/CPUs/80486/Intel-A80486DX2-66.html


A CPU fan for Intel 486 CPU

http://www.cnaweb.com/486-ball-bearing-cpu-fan.aspx


An example: the Intel Pentium 4 CPU, released in 2002; 2.8 GHz, with 512 KB cache, 68.4 W

http://www.pcplanetsystems.com/abc/product_details.php?item_id=146&category_id=61


A typical CPU fan for Intel Pentium 4

http://www.dansdata.com/p4coc.htm


A special CPU fan for gaming/multimedia users

http://www.pcper.com/reviews/Cases-and-Cooling/Asus-Star-Ice-CPU-Cooler-Review


Trends in Power and Energy in ICs

• Energy_dynamic = 1/2 × Capacitive load × Voltage²
• Power_dynamic = 1/2 × Capacitive load × Voltage² × Frequency switched
• Example supply voltages:
  – Intel 486, 66 MHz: 5 V
  – Intel Pentium 4, 2.8 GHz: 1.5 V
  – Intel Core i7 990X, 3.4 GHz: 0.8-1.375 V
• Improving energy efficiency: do nothing well; dynamic voltage-frequency scaling (DVFS); design for the typical case; overclocking
• Power_static = Current_static × Voltage
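
As a back-of-the-envelope Python sketch (assuming, purely for illustration, the same capacitive load for both chips), the voltages and frequencies above show why voltage scaling mattered so much:

def relative_dynamic_power(voltage, freq_hz):
    # The 1/2 x C factor cancels when comparing two designs with equal load
    return voltage ** 2 * freq_hz

p_486 = relative_dynamic_power(5.0, 66e6)   # Intel 486: 5 V, 66 MHz
p_p4 = relative_dynamic_power(1.5, 2.8e9)   # Pentium 4: 1.5 V, 2.8 GHz
print(p_p4 / p_486)  # ~3.8x: frequency rose ~42x, but V^2 fell ~11x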


Dependability

• Service accomplishment & service interruption
• Transitions between the 2 states: failure & restoration
• Measurements
  – Reliability: a measure of continuous service accomplishment from a reference initial instant
    » MTTF: mean time to failure
    » FIT: failures in time, i.e. failures per billion hours = (1 / MTTF) × 10^9
    » MTTR: mean time to repair
    » MTBF: mean time between failures = MTTF + MTTR
  – Availability: a measure of service accomplishment with respect to the alternation between the two states
    » Availability = MTTF / (MTTF + MTTR)
    » Upper bound: 100%


Performance Measurements and Benchmarks

• Metrics
  – Throughput: the total amount of work done in a given time
  – Response time (execution time): the time between the start and the completion of an event
• Speedup of X relative to Y
  – = Execution time of Y / Execution time of X
• Execution time
  – Wall clock time: the latency to complete a task
  – CPU time: only computation time
• Benchmarks
  – Kernels, toy programs, synthetic benchmarks
  – Benchmark suites: SPEC (CPU) & TPC (transaction processing)
  – SPECRatio = Execution time on the reference machine / Execution time on the target machine


Design Principles

• Take advantage of parallelism
• Principle of locality
• Focus on the common case
  – Amdahl’s Law
  – Upper bound of the speedup: ?


Example: Laundry Room

Dirty laundry -> washing machine -> drying machine -> clean laundry
30 minutes of washing, 90 minutes of drying

Total execution time: 30 + 90 = 120 minutes
Washing portion: 30/120 = 1/4; drying portion: 90/120 = 3/4


If we can have two drying machines

Dirty laundry -> washing machine -> 2 drying machines -> clean laundry
30 minutes of washing, 90/2 = 45 minutes of drying


Speedup with two drying machines: (30 + 90) / (30 + 45) = 1.6


If we can have unlimited drying machines

Dirty laundry -> washing machine -> ∞ drying machines -> clean laundry
30 minutes of washing, 90/∞ ≈ 0 minutes of drying


Speedup with unlimited drying machines: (30 + 90) / (30 + 0) = 4


Design Principles

• Take advantage of parallelism
• Principle of locality
• Focus on the common case
  – Amdahl’s Law
  – Upper bound of the speedup: 1 / (1 - Fraction_enhanced)


Exercise 1

• If the new processor is 10 times faster than the original processor, and we assume that the original processor is busy with computation 40% of the time and is waiting for I/O 60% of the time, what is the overall speedup gained by incorporating the enhancement?
  – Fraction_enhanced = 0.4, Speedup_enhanced = 10
  – Speedup_overall = 1 / (0.6 + 0.4/10) = 1.56
• What is the upper bound of the overall speedup?
  – Upper bound = 1 / 0.6 ≈ 1.67
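
The same computation as a short Python sketch of Amdahl's Law in its general form, which this exercise and the laundry example both instantiate:

def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    # Overall speedup when only a fraction of the execution time is enhanced
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

print(amdahl_speedup(0.4, 10))            # 1.5625 -> ~1.56 (Exercise 1)
print(amdahl_speedup(0.4, float("inf")))  # 1.67 upper bound = 1 / (1 - 0.4)
print(amdahl_speedup(0.75, 2))            # 1.6, the laundry example with two dryers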


Exercise 2

• In a disk subsystem:
  – 10 disks, each rated at 1,000,000-hour MTTF
  – 1 ATA controller, 500,000-hour MTTF
  – 1 power supply, 200,000-hour MTTF
  – 1 fan, 200,000-hour MTTF
  – 1 ATA cable, 1,000,000-hour MTTF
• Assuming the lifetimes are exponentially distributed and that failures are independent, compute the MTTF of the system as a whole.


Exercise 2 (solution)

• Because the overall failure rate of the collection is the sum of the failure rates of the modules, the failure rate of the system is
  – 10 × (1/1,000,000) + 1/500,000 + 1/200,000 + 1/200,000 + 1/1,000,000
  – = 23/1,000,000 failures per hour, or 23,000 FIT
• Because MTTF is the inverse of the failure rate,
  – MTTF_system = 1 / (23/1,000,000) ≈ 43,500 hours
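
The same calculation as a short Python sketch, summing the component failure rates:

# Exponential lifetimes and independent failures: the system failure rate is
# the sum of the component failure rates.
component_mttf_hours = [1_000_000] * 10 + [500_000, 200_000, 200_000, 1_000_000]

failure_rate_per_hour = sum(1.0 / m for m in component_mttf_hours)
print(failure_rate_per_hour * 1e6)  # 23 failures per million hours
print(failure_rate_per_hour * 1e9)  # 23,000 FIT
print(1.0 / failure_rate_per_hour)  # MTTF_system ~ 43,478 hours (~43,500)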