
Designing Tomorrow’s Computing Platforms

Challenges, Solutions, and Tools

Sudhanva Gurumurthi
e-mail: gurumurthi@cs.virginia.edu

Talk Outline

• Modern Computer Architecture
  – The Good
  – The Bad
  – The Ugly

• My Previous Work

• Current and Future Research

The Good

Source: http://www.intel.com/technology/silicon/mooreslaw/

Microprocessor Technology Advancement

• Plentiful Transistors
  – Superscalar, SMT, CMP
  – Larger caches, deeper memory hierarchy
  – High-bandwidth access to memory

• Simultaneously, clock frequencies have grown tremendously

Storage Has Become Ubiquitous

[Charts: storage density and speed trends]

Source: Hitachi GST Technology Overview Charts, http://www.hitachigst.com/hdd/technolo/overview/storagetechchart.html

Growth in Drive Performance

The Bad

Power Dissipation

[Chart: power dissipation in watts (0–90 W) from the 8086 through the Pentium 4]

Particle-Induced Soft Errors


Source: FACT Group, Intel

Are you kidding me?

• No!!
  – In 2000, Sun Microsystems reported random crashes in one of its server products due to a lack of parity protection in the caches.
  – Eugene Normand’s study of the error logs of large systems found several such errors.
  – There are conference sessions, and even entire conferences and workshops, devoted to this problem.
  – I have personal experience collecting and analyzing soft-error data.

Where Do These Particles Come From?

• Neutrons
  – Terrestrial cosmic rays

• Alpha particles
  – Packaging

Should we worry?

• Yes!!
  – Thanks to Moore’s Law
    • Lower operating voltages
    • Exponential increase in transistor integration density
    • Power management (voltage scaling)
  – Larger systems

• Impractical to shield against cosmic rays
  – Would need several feet of concrete
  – Radiation hardening hurts performance, area, and cost

Redundant Multi-Threading

[Diagram: Input Replicator → redundant threads → Output Comparator → rest of the system]

Source: Mukherjee et al, “Detailed Design and Evaluation of Redundant Multithreading Alternatives”, ISCA’02
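The mechanism in the paper is microarchitectural, but the detection idea can be illustrated in a few lines of Python: replicate the input, run the computation twice (standing in for the leading and trailing threads), and compare the outputs before anything reaches the rest of the system. This is a minimal sketch of the concept only; the names and structure are hypothetical and not from the paper.

```python
# Minimal sketch of the redundant multi-threading idea: replicate the
# input, run two copies of the computation, and compare the outputs
# before anything is committed to the rest of the system.
# Illustrative only; the real mechanism lives in the pipeline hardware.

class SoftErrorDetected(Exception):
    pass

def redundant_execute(compute, value):
    """Run 'compute' twice on a replicated input and compare the results."""
    leading = compute(value)    # leading thread
    trailing = compute(value)   # trailing (redundant) thread
    if leading != trailing:     # output comparator
        raise SoftErrorDetected(f"mismatch: {leading!r} vs {trailing!r}")
    return leading              # only checked results leave the sphere of replication

if __name__ == "__main__":
    print(redundant_execute(lambda x: x * x + 1, 7))  # -> 50
```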

Performance of Redundant Multi-Threading

[Chart: percentage of IPC lost (0–45%) for gzip, swim, vpr, gcc, mesa, art, mcf, equake, parser, vortex, and bzip2]

Temperature Affects Disk Drive Reliability

• Heat-Related Problems
  – Data corruption
  – Higher off-track errors
  – Head crashes

• Disk drive design is constrained by the thermal envelope
  – Puts a limit on drive performance

Source: D. Anderson et al, “More than an Interface – SCSI vs. ATA”, FAST 2003.

Power ≈ (# Platters) × (RPM)^2.8 × (Diameter)^4.6

Thermal-Constrained Design

[Diagram: increasing RPM drives temperature up as (RPM)^2.8 × (Dia)^4.6 × (# Platters); staying within the thermal envelope forces a smaller platter or fewer platters, which lowers capacity and data rate]

• With 1 platter: Data Rate ≈ (Linear Density) × (RPM) × (Diameter) (worked sketch below)
• Trade-off axes: data rate, capacity, and temperature
• 40% annual IDR growth
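To make the scaling concrete, here is a back-of-the-envelope sketch of the two relations in Python. Only the exponents and the form of the expressions come from the slides; the proportionality constants and the example numbers are placeholders.

```python
# Back-of-the-envelope sketch of the drive scaling relations above.
# The expressions are proportionalities; constants are arbitrary placeholders.

def relative_power(platters, rpm, diameter_in):
    """Power ~ (# platters) * RPM^2.8 * Diameter^4.6 (up to a constant)."""
    return platters * rpm ** 2.8 * diameter_in ** 4.6

def relative_data_rate(linear_density, rpm, diameter_in):
    """Data rate ~ (linear density) * RPM * Diameter (up to a constant)."""
    return linear_density * rpm * diameter_in

# Example: doubling RPM at fixed geometry raises power by ~2^2.8 (about 7x),
# so staying inside the thermal envelope forces smaller or fewer platters.
base = relative_power(platters=4, rpm=10_000, diameter_in=3.3)
fast = relative_power(platters=4, rpm=20_000, diameter_in=3.3)
print(f"power ratio for 2x RPM: {fast / base:.1f}x")  # ~7.0x
```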

The Bad: Drive Temperature

[Chart: projected drive temperature (°C, log scale 10–1000) vs. year for 2.6", 2.1", and 1.6" platters, against the thermal envelope]

The Bad: Data Rate

30-60% performance boost for a 10,000 RPM increase

Search-Engine Thermal Behavior

Thermal Envelope = 45.22 C

The Ugly

Design Tools

• Designing complex systems requires extensive simulation

• Need to model all aspects of the system
  – Software layers
  – Power
  – Temperature
  – Effect of faults

Simulation Problems

• Painfully slow
  – Speed vs. accuracy

• No good support available for modeling effects like temperature and reliability

• Can themselves be hard to write

• Buggy

My Previous Work

Thesis Work: Power Management of Enterprise Storage Systems

Enterprise Storage Market Growth

• Storage demand is growing at an annual rate of 60%
  – By 2008, a company will manage 10 times the storage it has today.

Sources:

1. “Enterprise Storage: A Look into the Future”, TNM Seminar Series, Oct. 31, 2000

2. “More Power Needed”, Energy User News, Nov. 2002

Power Demands of Data Centers

“What matters most to the computer designers at Google is not speed but power – low power – because data centers can consume as much electricity as a city”, Eric Schmidt, CEO, Google

• Data centers consume several Megawatts of power

• Electricity bill
  – $4 billion/year
  – Disks account for 27% of computing-load costs

• Difficult to cool at high power-densities

Sources:

1. “Intel’s Huge Bet Turns Iffy”, New York Times article, September 29, 2002

2. “Power, Heat, and Sledgehammer”, Apr. 2002.

3. “Heat Density Trends in Data Processing, Computer Systems, and Telecommunications Equipment”, 2000.

Data Center Cooling Costs

• Data center of a large financial institution in New York City
  – Power consumption ~4.8 MW

Source: “Energy Benchmarking and Case Study – NY Data Center No. 2”, Lawrence Berkeley National Lab, July 2003.

[Pie chart: Servers 51%, Air-Conditioning 42%, Other 7%]

Where Does Power Go?

[Diagram: drive power drawn by the Spindle Motor (SPM) and the Voice-Coil Motor (VCM)]

• Standby = 1 W
• Idle = 9 W
• Active = 11 W
• Seek = 13 W

Traditional Power Management (TPM)

[Timeline: Disk Active → Idleness Detected → Idle → Spindown → Standby Mode → Disk Request → Spinup → Disk Active]
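A minimal software sketch of this timeout-based policy, reusing the idle (~9 W) and standby (~1 W) figures from the earlier slide; the idleness threshold and spinup latency below are illustrative placeholders, not values from the talk.

```python
# Sketch of traditional power management (TPM): after an idleness threshold
# the disk spins down to standby, and the next request pays a spinup delay.
# Power numbers follow the earlier slide (idle ~9 W, standby ~1 W);
# the threshold and spinup latency are illustrative.

IDLE_THRESHOLD_S = 10.0   # spin down after this much idleness (placeholder)
SPINUP_LATENCY_S = 26.0   # server-class spinup latency (slide: Ultrastar)

def tpm_energy_and_delay(idle_gaps_s):
    """Return (energy spent during idle gaps in joules, total added spinup delay)."""
    energy = delay = 0.0
    for gap in idle_gaps_s:
        if gap > IDLE_THRESHOLD_S:
            # stay idle until the threshold, then sit in standby for the rest
            energy += IDLE_THRESHOLD_S * 9.0 + (gap - IDLE_THRESHOLD_S) * 1.0
            delay += SPINUP_LATENCY_S        # the next request waits for spinup
        else:
            energy += gap * 9.0              # gap too short: disk keeps spinning
    return energy, delay

print(tpm_energy_and_delay([0.005, 0.020, 120.0, 0.003]))
```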

I/O Characteristics of Server Systems

• Large number of disks
  – RAID arrays

• Heavier I/O loads sustained over long periods
• Stringent performance requirements
• Server disks are physically different
  – Not made to use spindowns
  – Longer spindown/spinup latencies
    • Server disk (Hitachi Ultrastar): 15 seconds / 26 seconds
    • Laptop disk (Hitachi Travelstar): 4.5 seconds

Feasibility of Applying TPM

• No prior study on how to tackle this problem systematically.

• Questions
  1. Is there idleness?
  2. Can we do TPM?

• Answers
  1. Yes
  2. No! Why?

• Large number of very short-duration (a few ms) idle periods (idle-profile sketch below)
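The feasibility question can be asked directly of an I/O trace: how much idleness is there, and how much of it sits in periods long enough to amortize a server-class spindown plus spinup? A small sketch, assuming the trace is just a list of request arrival times in seconds (the trace format and break-even value are assumptions):

```python
# Sketch of the feasibility check: measure idle periods in a request trace
# and see how much idleness is long enough to cover a spindown (~15 s) plus
# a spinup (~26 s). The trace format is an assumption for illustration.

def idle_profile(arrival_times_s, breakeven_s=15.0 + 26.0):
    gaps = [b - a for a, b in zip(arrival_times_s, arrival_times_s[1:])]
    total_idle = sum(gaps)
    usable_idle = sum(g for g in gaps if g > breakeven_s)
    return len(gaps), total_idle, usable_idle

n, total, usable = idle_profile([0.000, 0.004, 0.010, 0.012, 0.020, 0.025])
print(f"{n} idle periods, {total:.3f} s idle, {usable:.3f} s usable for TPM")
```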

The Solution

• Traditional Power Management
  – Not effective for server workloads

• Power ≈ (# Platters) × (RPM)^2.8 × (Diameter)^4.6
  – All three can be varied at design time to meet the power budget
    • Laptop vs. server disk
  – RPM could be varied dynamically
    • Dynamic RPM (DRPM) (control-loop sketch below)
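A hedged sketch of what a dynamic-RPM control loop could look like: watch recent load (here, the queue length) and step the spindle speed down when the disk is lightly used, back up when load returns. The RPM levels and thresholds are made up for illustration and are not the control policies evaluated in the DRPM paper.

```python
# Illustrative sketch of a DRPM-style control loop: step spindle speed down
# under light load and back up under heavy load. The RPM levels and queue
# thresholds are placeholders, not the policies from the DRPM paper.

RPM_LEVELS = [3600, 6000, 9000, 12000, 15000]

def next_rpm(current_rpm, queue_length, low=1, high=8):
    i = RPM_LEVELS.index(current_rpm)
    if queue_length <= low and i > 0:
        return RPM_LEVELS[i - 1]   # light load: spin slower, save power
    if queue_length >= high and i < len(RPM_LEVELS) - 1:
        return RPM_LEVELS[i + 1]   # heavy load: spin faster for performance
    return current_rpm

rpm = 15000
for q in [0, 0, 2, 9, 12, 3, 0]:
    rpm = next_rpm(rpm, q)
    print(f"queue={q:2d}  rpm={rpm}")
```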

Potential Benefits of DRPM

[Chart: % savings in E_idle (0–80%) vs. mean inter-arrival time (10 ms to 100,000 ms) for TPMperf, DRPMperf, and Combined]

Control-Policy Performance

Research Impact

• The feasibility study [ISPASS’03] started off new research in server disk power management
  – Active groups: UIUC, Rutgers, UMass, UArizona, Rochester

• DRPM paper [ISCA’03] widely cited in architecture and systems conferences like ISCA, HPCA, ASPLOS, SOSP, OSDI

• Multi-speed drives starting to appear in the market– Hitachi Deskstar 7K400

My Other Work

• Microarchitectural Techniques to Enhance Redundant Multi-Threading Performance
  – Instruction Reuse [ISCA’04]

• Soft-Error Data Collection and Analysis from Actual Systems (Intel)

• Soft-Error Tolerant Cache Coherence-Protocols (Intel)

• Simulator Design
  – SoftWatt [HPCA’02]
  – MEMSIM (IBM Research)

More Details About My Work

• Papers:
  – S. Gurumurthi et al., Disk Drive Roadmap from the Thermal Perspective: A Case for Dynamic Thermal Management, ISCA 2005.

– A. Parashar et al., A Complexity-Effective Approach to ALU Bandwidth Enhancement for Instruction-Level Temporal Redundancy, ISCA 2004.

– S. Gurumurthi et al., DRPM: Dynamic Speed Control for Power Management in Server Class Disks, ISCA 2003.

– S. Gurumurthi et al., Using Complete Machine Simulation for Software Power Estimation: The SoftWatt Approach, HPCA 2002.

• Available via my CS Department homepage.

Some Research Directions

• Temperature-Aware Storage Systems
  – Devices
  – Systems issues

• Multi-Dimensional Approach to Fault Tolerance
  – Trade-offs between performance, power, and reliability
  – Dynamic adaptation

• Microarchitectural Support for Security
• Design of accurate and fast simulation tools

Research Directions in Storage

• Storage architecture is still quite a nascent field

• Plenty of research opportunities:
  – Emerging technologies
    • MEMS, holographic, molecular storage
  – New research avenues
    • Security
    • Application/content-awareness
    • Active disks
    • Long-term and survivable storage

Looking for Students!

• I will be offering a research course in Spring 2006.
  – Many project opportunities

• Contact Information:
  – E-mail: gurumurthi@cs
  – Office: 236B, Olsson Hall


Approach 1: Seek Throttling

[Chart: drive temperature vs. time relative to the thermal envelope, showing VCM On and VCM Off phases]
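A minimal sketch of the throttling loop: when the modeled drive temperature gets close to the thermal envelope, hold back seeks (VCM off) until the drive cools, then resume. The thermal model and all constants besides the 45.22 C envelope are placeholders.

```python
# Sketch of seek throttling: delay seeks (VCM off) whenever the modeled
# temperature approaches the thermal envelope, resume once it cools.
# The heating/cooling increments are placeholders; only the 45.22 C
# envelope comes from the slides.

THERMAL_ENVELOPE_C = 45.22
GUARD_BAND_C = 0.5

def serve_seeks(num_seeks, temp_c=40.0):
    throttle_steps = 0
    for _ in range(num_seeks):
        while temp_c >= THERMAL_ENVELOPE_C - GUARD_BAND_C:
            temp_c -= 0.2          # VCM off: the drive cools
            throttle_steps += 1
        temp_c += 0.3              # VCM on: each seek adds heat
    return temp_c, throttle_steps

print(serve_seeks(50))
```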

[Chart: percentage of the IPC gap (SIE-DIE) recovered per benchmark, for DIE-IRB-1K-sat, DIE-2xALU, and DIE-IRB-ideal]

Results: 2-42% reduction in IPC gap (avg. 23%)
