Keeping Old Computers Alive for Deeper Understanding of Computer Architecture

Hisanobu Tomari
Grad. School of Information Science and Technology, The University of Tokyo
Tokyo, Japan
[email protected]

Kei Hiraki
Grad. School of Information Science and Technology, The University of Tokyo
Tokyo, Japan
[email protected]

Abstract—Computer architectures, as they are seen by students, are becoming more and more monolithic: a few years ago a student had access to an x86 processor in his or her laptop, a SPARC server in the backyard, MIPS and PowerPC on a large SMP system, and Alpha on a calculation server. Today, the only architectures that students experience writing programs on are x86_64 and possibly ARM. On one hand, this simplifies their learning, but on the other hand, it makes it harder to discover the options that are available when designing an instruction set architecture.

In this paper, we introduce our undergraduate course on computer architecture design and evaluation that uses historic computers to make more processor architectures accessible to students. A collection of more than 270 old computers that were marketed from 1979 to 2014 is used in the class. These computers had to be repaired and restored to working condition before being used in exercises for the undergraduate students. By experiencing architectures different from what is used every day, students learn the context of features that are standard today.

This paper also shows power consumption and benchmark comparison results obtained using the old computers outside the classroom. These data are also used as the basis for learning about concepts and issues of computer architecture.

I. INTRODUCTION

As computer technology advances, it becomes more difficult to fully understand why things have converged to what they are now. History plays a role in letting us know both the success and failure stories of the past. Without the knowledge of history, we will not just reinvent the wheel, but also fall into pitfalls that we have fallen into before.

There have been efforts to save the history of computer technology by curating resources about computers. The Computer History Museum [6] and Computermuseum München [5] are examples that actually collect computers. They are mostly focused on telling visitors the story behind each computer and what it looks like. Even though there are a few exceptions that are discussed later, most of these computers are dead: they no longer run software. Keeping historic computers in working condition has a fundamentally different importance from leaving them in disrepair [1]. Computers are built to run software written for them. In the future, it will be necessary to analyze and inspect how software worked in the past for both technical and cultural interests. For technical purposes, new methods of measuring performance can be applied to old computers retrospectively. Working hardware is a prerequisite for a software execution environment that reproduces the behavior of socially and culturally important computers.

This paper presents our information science undergraduate course for teaching the concepts and methodology of computer architecture. In this class, historic computer systems are used by students to learn different design concepts and performance results. Students learn different instruction sets by programming a number of working systems. This gives them an opportunity to learn what characteristics are shared among popular instruction sets, and what special features are specific to an instruction set architecture. They can verify their interpretation of manuals through experiments. The knowledge of how designers in the past decided architectural parameters, and the end results, is then used for designing the students' own architecture. This paper also shows power consumption and benchmark-to-benchmark comparison observations obtained through our collection of computers, which are also useful materials for teaching these issues.

For students to work with old computers, the computers need to be in working condition. Restoration requires a different set of skills from keeping them on shelves. There are several restoration efforts in museums [11][7]. However, these restoration efforts are not sufficient: they are mainly targeted at only a few very early (pre-microprocessor) systems. As a result, many of the semi-modern computers that are kept in such "museums" are not in working condition, and they are going to be kept there for visitors to enjoy just the industrial design on the surface of each system. A completely different approach, in which the computer is manufactured again using the blueprints left by the designer, is practiced by Swade [14]. This approach is only useful for the limited set of computers where blueprints are available, and the effort it takes is too great to apply this method to many systems. After all, the scale of restoration and preservation so far has been insufficient for investigating technical trends and other details. There are people showing concerns about the current state of history preservation surrounding computing.

There are emulators and simulators that mimic the behavior of old systems [17][8], which can be considered a way to preserve the behavior of computer systems. While running simulators is inexpensive compared to keeping real hardware, the accuracy of both the execution result and the execution time needs verification using working hardware. Writing an accurate simulator for a system is a harder task than restoring and preserving old computers. For example, PTLsim is an x86_64 full-system simulator that is considered accurate, yet it has a 4.30% difference in cycle count from the real system [16]. Our understanding is that simulators are useful for evaluating variations of parameters for a specific design, but it is difficult to estimate the performance difference of two very distant architectures with different sets of features. Full-system simulation involves implementing the non-processor parts of a system in software, which is more error-prone and difficult to verify. Also, undergraduate students may find setting up a simulation environment harder than using a real system. Additionally, there are copyright issues regarding the use of firmware and other software images on such systems.

Therefore, restoring and preserving actual computers is the easiest method for inheriting the history of computers in both technical and cultural aspects. There are more benefits to this approach in addition to educational uses. As the requirements surrounding computer systems change, the parameters that are considered important in a system design also change: power consumption is a current example. With working computers from a wide range of configurations and ages, the trend of change in such a parameter can be accurately observed. Once the trend of change is obtained, a future prediction of that parameter can be inferred, just as branch prediction algorithms refer to a history table. The prediction is then used for designing next-generation computer systems.

We have collected more than 270 types of computer systems that were marketed over the years 1979 to 2014. We extensively repaired and restored most of them to working condition. The CPU instruction set architecture families in our collection span DEC Alpha, ARM, IA-32, Itanium 2, Intel i860, Motorola 680x0, MIPS, HP PA-RISC, PowerPC, Renesas SH, SPARC, NEC SX, DEC VAX and x86_64. Older instruction set architectures such as the M6800, 6502 and Z80 are also in the collection. We have multiple implementations for most of these. See Table I for a non-exhaustive, representative list of systems in our collection. The key aspect that differentiates our collection from many other hobbyist-driven projects is that ours includes systems that do not have any games for them. Most systems marketed toward consumers have games that run on them, and there are a number of hobbyists who keep such systems in working condition to play games on them. Our collection includes such personal systems too, but there are other classes of computers that are rarely collected: workstations and servers, which were often used in scientific and industrial environments. They often use more costly, large-scale hardware to implement higher performance than personal systems. They often showcase what the technology could do at the time of their design, so they are necessary in our class.

The remainder of this paper is organized as follows. Section II discusses the teaching context of this course. Section III shows how we restore old computers to a state that can be used in the classroom.

TABLE I
REPRESENTATIVE COMPUTER SYSTEMS IN OUR COLLECTION

Year  System name
1975  MITS Altair 680
1978  IBM 5110 APL
1979  SORD M223 mark III
      NEC PC-8001
1981  IBM PC (5150)
1982  Commodore 64
1983  NEC PC-100
      Apple IIe
      HP-41CX
1984  NEC PC-9801M
1985  HP 9000/310
      NEC PC-9801VM2
1986  IBM PC AT (5170-319)
1987  Commodore AMIGA 500
      EPSON PC-286L V30
      Fujitsu FM77AV20EX
      NEC PC-9801VX41
      SONY HB-F1XD MSX2
1988  SHARP PC-E500
1989  HP 9000/340C+
      EPSON PC-386V
      IBM PS/2 55SX (8555)
      Apple Macintosh IIci
      Sun386i/250
      SHARP X68000 PRO HD (CZ-662C-BK)
      TOSHIBA J-3100SS
1990  Apple Macintosh IIfx
1991  Apple PowerBook 100
      VAXstation 4000 60
      SGI Personal IRIS 4D/35
      SGI Indigo R3000
      SONY NEWS NWS-1460
      KUBOTA AVSstation TITAN Vistra 800
1992  HP 9000 747i/100
      SGI Indigo R4000/100
1993  Apple Macintosh LC 475
      DEC 3000/300
      DEC 3000/800
      Fujitsu FMTOWNS II MX
1994  HP 9000 712/80
      SparcStation 20
      Intel Deskside Xpress LM
      NeTpower FASTseries SP
1995  Apple Power Macintosh 7100/80
      IBM RS/6000 43P/120
      Sun Ultra1
1996  HP Vectra VE 5/200 Series 4
      SGI O2 R5000/180
1997  Apple PowerMac G3 MT 266 MHz
      Compaq Deskpro 4000 5200MMX
      DEC AlphaStation 500/400
      PalmPilot Professional
      Bull Estrella Series 300
1998  Ultra Enterprise 3000
      SGI Origin 2000
      Apple iMac rev.C Strawberry
2001  NEC SX-6i
2002  HP rx5670
      Blade 2000
2004  Apple PowerMac G5 1.8 GHz, Dual
      SGI Prism
2005  IBM OpenPower 710 (9123-710)
      Sun Fire T2000
2006  HP ProLiant DL360 G5
2009  Apple MacPro4,1
      SHARP PC-Z1 Netwalker

Section IV describes how old computers are used in the course. Section V shows measurement results obtained using our collection. Finally, Section VI presents concluding remarks.

II. TEACHING CONTEXT

The Computer Architecture and Design course covers the basics of computer architecture, microarchitectural features and performance evaluation. The course is mainly taken by third-year undergraduate students. Two other courses are related to it: the assembly programming course, which is held before the Computer Architecture and Design course, and the CPU experiment course, which follows it.

The instruction set taught in the undergraduate assembly programming course is PowerPC. This is generally a reasonable choice: the PowerPC documentation is easily obtained, and students are less likely to run into compatibility issues, as the ABI has been stable for a long period of time. The only problem is that teaching only PowerPC does not showcase the flexibility of designing an instruction set architecture. Here, the focus is on the concepts of stacks, function calling and the use of software traps for system calls.

The Computer Architecture and Design course comprises two parts: one is standard lectures in which a professor teaches concepts using slides; the other is a hands-on part in which students present the features and characteristics of several processor architectures.

Students are divided into a few groups. Each group is assigned a processor architecture and is asked to write a small program that calculates the inner product of two vectors. The groups take turns presenting the instruction set architecture and other novel features of their processor architecture, and use the assembly source listing that calculates the inner product to show the actual usage of such features.

Old computers are used by the undergraduate students in preparation for this presentation. As the programming reference manuals contain unique terms that are not easily understood by undergraduate students, the programming practice ensures a correct interpretation of the manuals before presenting them.

The Computer Architecture and Design course is then followed by another course in which students team up to design a computer system with an original instruction set, a processor implementation on an FPGA board and a compiler [13]. The goal of the system is to run a ray-tracer program that has a fairly complex control flow structure and a large number of floating-point instructions.

Designing an instruction set architecture involves deciding which functions are to be implemented in hardware: for example, mathematical functions such as sin and cos can be implemented either in hardware or in a software library. The number of architectural registers, the operand count and the instruction word length are other degrees of freedom in optimizing the performance of ray tracing. In designing an optimal processor architecture, the knowledge and experience gained through interacting with a wide range of computer generations is put to use: the trade-off between performance and the logic scale that fits on an FPGA is better estimated.
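To make this trade-off concrete, the following minimal sketch (illustrative only, not course material) shows how sin can be provided as a software routine when the instruction set offers only basic floating-point arithmetic; the polynomial degree and the crude argument reduction are simplifying assumptions made here for illustration.

/* Illustrative only: approximating sin(x) in software when the ISA
 * provides only add/mul, instead of a dedicated hardware instruction.
 * A degree-7 Taylor polynomial is used for simplicity; a real libm
 * would use a minimax polynomial and careful argument reduction. */
#include <stdio.h>

static double soft_sin(double x)
{
    /* Crude argument reduction into roughly [-pi, pi]. */
    const double pi = 3.14159265358979323846;
    while (x >  pi) x -= 2.0 * pi;
    while (x < -pi) x += 2.0 * pi;

    /* sin(x) ~= x - x^3/3! + x^5/5! - x^7/7!, evaluated in nested form */
    double x2 = x * x;
    return x * (1.0 - x2 / 6.0 * (1.0 - x2 / 20.0 * (1.0 - x2 / 42.0)));
}

int main(void)
{
    for (double x = 0.0; x < 1.6; x += 0.4)
        printf("soft_sin(%.1f) = %.6f\n", x, soft_sin(x));
    return 0;
}

Whether such a routine belongs in a library or behind a dedicated instruction is exactly the kind of decision students must weigh against the logic budget of the FPGA.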

III. REPAIRING EFFORTS

The extensive restoration is what differentiates our effort from other museum-style projects. Restoration is the effort to put a computer system back into working condition. There are parts in a computer system that have a shorter life than others. From our experience diagnosing more than 270 computer systems, the following types of failures should be inspected before applying power for the first time.

A. Mechanical Failure

Very common failure modes in a computer are mechanical ones. Although it sounds too simple, contact failure is often observed in old computers. Reseating all components in a system after completely disassembling it fixes this kind of problem. Sometimes galvanic corrosion is the root cause of this failure.

Another type of mechanical failure occurs in places such as floppy disk and CD-ROM drives. The cause of this failure differs case by case, but typical examples are fixation of grease, mechanical oil running dry and heads collecting dirt. These types of failure can be fixed by applying adequate grease and oil after using propanol or another pure alcohol to clean off the mixture of grease, oil and dust.

B. Battery

Typically, batteries in computers are used for keeping the time-of-day (TOD) clock and boot parameters. The most problematic type of battery is the sealed Ni-Cd battery. This type of battery is known to have variations in quality, and in military applications they are screened before use [2]. In computers, low-quality sealed Ni-Cd batteries are sometimes used on the printed circuit board (PCB). Over time, a low-quality sealed Ni-Cd battery suffers electrolyte leakage, disconnecting the traces on the PCB. These kinds of batteries should be removed from the PCB as soon as computers are received and inspected.

A primary cell is sometimes embedded in a non-volatile RAM (NVRAM) chip. The DS1287 is an example of a chip that combines the functionality of NVRAM and a TOD clock [9]; it has a lithium battery embedded along with a crystal oscillator. The chip is widely used in PC AT compatibles from the early to mid-1990s. The battery is not rechargeable, so the NVRAM becomes volatile after around 20 years (the amount of time the machine has been powered up has an effect on this figure). These embedded batteries must be removed by removing the plastic package of the IC (Fig. 1).

On some systems, such as EISA-based and sun4 architectures, the NVRAM keeps information that is vital to the operation of those computers. Initializing the data on the NVRAM is required for normal operation in these cases.

Apart from the above-stated problems, some systems are known to refuse to start up without voltage on the battery. Many computers are discarded because of this apparent 'dead' state.

Fig. 1. Removing the plastic packaging mould from an NVRAM chip to rework the embedded battery

C. Capacitors

Capacitors are the most problematic parts in old computer systems. They are used as power rail filters and sometimes for timing constants. Electrolytic aluminium capacitors provide high capacitance and acceptable equivalent series resistance (ESR) at lower cost than other types of capacitors. The failure characteristics of electrolytic aluminium capacitors are very well studied [12][4], but the actual failure rate in computers is still higher than that of other components.

Contrary to what the reliability studies say, most failures of electrolytic aluminium capacitors are attributable to manufacturing defects. Systems from the late 1980s to early 1990s often use electrolytic capacitors with a quaternary ammonium phosphonium salt. This type of chemical was patented by Mitsubishi Chemical Corporation in 1987, and it was later found that the electrolyte leaks through the rubber seal of the capacitor [15], spilling liquid over the PCB (Fig. 2). This in turn corrodes traces on the PCB and other components mounted on it, rendering the system inoperable. Capacitors from the mid-2000s are also likely to fail, but this is attributed to both the rise of counterfeit passive components [10] and the increased load on the power supply circuitry caused by the heat and power consumption of chips of that era.

There are also other types of capacitors on the PCB, and in some cases a failure results in flame and smoke coming out of the failed capacitor. In any case of capacitor failure, replacement is required. Even after the replacement, the electrolyte leakage often damages other parts of the system such as the PCB; this is covered in the following subsection.

D. Boards

The PCB itself can also be the source of failure. The electrolyte leakage caused by a capacitor can sometimes render traces non-conductive. Sometimes this can be inspected visually, but this is not always the case. Checking every trace using a multimeter is the most reliable way to find this type of failure. After identifying the broken path, an alternate path is added to fix the system (Fig. 3).

Fig. 2. Electrolytic capacitor with quaternary ammonium phosphonium salt spilling liquid through the rubber seal

Fig. 3. Reworking the traces of a PCB after replacing electrolytic capacitors

Due to environmental risks, Pb-free solder has been mandated in parts of the world since the mid-2000s [3]. Pb-free solder has a higher melting temperature and is known to require different reflow settings than standard solder. When Pb-free solder began to be widely used in the early to mid-2000s, the difference in thermal expansion between the organic package substrate, the Pb-free solder and the PCB caused cracking of the solder balls used for BGA packages. A lot of equipment from a wide array of vendors has suffered from this problem. The irony is that the supposedly environmentally friendly Pb-free solder resulted in more e-waste because of it. Pb-free solder ball cracks can be temporarily fixed by heating the PCB to above the melting point of the solder.

IV. HOW WE USE OLD COMPUTERS IN CLASSROOM

To research and present the features and characteristics of computer architectures, the undergraduate students are divided into teams of two to three members. There are around ten teams, each of which is in charge of one computer architecture.

Each team reads the manuals of its assigned architecture and is given several weeks to prepare a presentation about the architecture. Usually, basic architectural parameters such as register count, word size, data formats and instruction formats are discussed in the presentation. Sometimes implementation variations are also covered; what each team presents is largely up to the students.

Each team then writes a small assembly program that calculates the inner product of two vectors. This is done so that register usage and control flow instructions are easily demonstrated. Old computers are used here: students are asked to log in to a system with the corresponding processor architecture. For systems with a remote login feature this is done over the network; older systems with CPUs such as the Z80 do not have remote login, so the system is handed to the team.

Students are asked to write the assembly program after reading the instruction set manual, but this often turns out to be difficult for undergraduate students (generally they are not native speakers of the language the manual is written in). In such cases the students may run the C compiler to get a base source listing for the assignment. This practice can be identified by the register usage pattern and the function prologue/epilogue, but it is not strictly prohibited: students must learn the instructions in order to describe how the program works in class. By programming each processor this way, students learn to adapt to the differences in assembly language among instruction set architectures.
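As a concrete illustration (a hypothetical sketch, not the actual course handout), the kind of C kernel a team might feed to the compiler is shown below; compiling it with the -S option of typical UNIX C compilers yields the assembly listing that students then compare against the instruction set manual.

/* Hypothetical version of the exercise kernel: the compiler-generated
 * assembly for this function exposes register usage, the loop's control
 * flow and the calling convention of the target architecture. */
float inner_product(const float *a, const float *b, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}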

The architectural features and the assembly source listing are then presented in the classroom. After all presentations, everyone in the classroom knows about all the architectures that were assigned. This broadens their view of instruction set architectures and leads to more exciting processor implementations in the next course.

V. EXPERIMENT RESULTS

Benchmarks are run as we repair computers for use in the class. This was mainly done to test the stability of the repaired old computers, but it also provides a new standpoint from which to quantitatively evaluate the history of computers in a reproducible manner. The benchmarking results are shown in this section.

A. Power Consumption

Power consumption has been an important design parameter for years. Showing how the power consumption of computer systems has changed over the years is a useful way to draw the attention of students.

The power consumption of older systems can only bemeasured using actual systems; it is virtually impossible toestimate the full-system power consumption of old computersusing simulators.

Here, system-level power consumption is measured using a power meter tapped between the computer and a wall outlet. We used the Dhrystone benchmark to keep the processor busy while the power consumption is measured, although this makes little difference on older systems anyway. To keep the discussion straightforward, only single-processor systems are discussed in this subsection.
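Where Dhrystone is inconvenient to build for a particular target, a trivial load generator of the following form (a hedged sketch, not the harness actually used in our measurements) serves the same purpose of keeping a core fully busy while the wall-power reading is taken.

/* Illustrative load generator: keeps one core busy with integer and
 * floating-point work while wall power is read from the meter.
 * The iteration count is arbitrary; the 'volatile' sink prevents the
 * compiler from deleting the loop. */
#include <stdio.h>

int main(void)
{
    volatile double sink = 0.0;
    for (unsigned long i = 0; i < 2000000000UL; i++)
        sink += (double)(i % 97) * 1.000001;
    printf("done, sink=%f\n", sink);
    return 0;
}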

Fig. 4. Power consumption of single-socket systems: while there are exceptions, desktop platforms have kept similar power consumption since the 1980s

The power consumption of single-processor systems varies, but the range of variation has remained the same over the years our collection spans (Fig. 4). The power consumption of desktop systems has remained at a level of around 100 W when all processor cores are under load, although around 2004 there are systems that consume much more. Newer single-processor systems are almost exclusively portable systems, so their power consumption looks lower than that of older systems.

For more recent systems, the range of power consumption remains almost the same. Laptop systems have consumed less electricity than their desktop counterparts, but recent desktop processors consume as little energy as laptop and embedded ones when idle. Embedded systems certainly consume less absolute energy, but the performance they provide was not sufficient to compete with desktop systems in performance-per-watt terms. The energy consumed when no process is using the processor cores has decreased considerably, to around 20 W in the minimal case. The performance per watt of embedded systems is currently in the range of 2-way server platforms.
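Performance per watt is simply the benchmark score divided by the measured wall power; the following hypothetical helper (not one of our measurement scripts) makes that arithmetic explicit.

/* Hypothetical helper: performance per watt is just score / power.
 * Usage: ./perf_per_watt <benchmark-score> <measured-watts> */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <score> <watts>\n", argv[0]);
        return 1;
    }
    double score = atof(argv[1]);
    double watts = atof(argv[2]);
    if (watts <= 0.0) {
        fprintf(stderr, "watts must be positive\n");
        return 1;
    }
    printf("%.3f score/W\n", score / watts);
    return 0;
}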

Therefore, the most power-efficient platforms we tested were the latest single-socket desktop computers. Dual-socket server systems, which are often used as building blocks for larger, high-performance computer systems, have a problem with power consumption, especially when idle.

B. Comparing Benchmarks to Benchmarks

System evaluation issues are better explained using real numbers measured on a large number of computer systems. Differences between evaluation methods are better explained using figures obtained from systems that students can get their hands on.

Traditionally, multiple benchmarks, such as the series of SPEC CPU benchmark suites and Dhrystone, have been used to measure the performance of a computer system.

Fig. 5. CINT2000 and CINT2006 results: the two benchmark suites correlate very well

Fig. 6. CINT2000 and Dhrystone results: Dhrystone mostly runs in-cache, and CPU2000 does not perform as well as Dhrystone on embedded systems

Using the computers from our collection, it is possible to evaluate a benchmark for its ability to represent overall system performance by comparing its results to those of another benchmark suite.
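One simple way to quantify how well two suites track each other is the correlation of their paired, log-scaled scores, as in the following illustrative sketch; the function name and the example values are invented for illustration and are not measured data.

/* Illustrative sketch: Pearson correlation of log-scaled scores from two
 * benchmark suites measured on the same set of machines. */
#include <math.h>
#include <stdio.h>

static double log_correlation(const double *a, const double *b, int n)
{
    double sa = 0, sb = 0, saa = 0, sbb = 0, sab = 0;
    for (int i = 0; i < n; i++) {
        double x = log10(a[i]), y = log10(b[i]);
        sa += x; sb += y;
        saa += x * x; sbb += y * y; sab += x * y;
    }
    double cov = sab - sa * sb / n;           /* n * covariance */
    double var_a = saa - sa * sa / n;         /* n * variance of a */
    double var_b = sbb - sb * sb / n;         /* n * variance of b */
    return cov / sqrt(var_a * var_b);
}

int main(void)
{
    /* Hypothetical paired scores (suite A, suite B) for four machines. */
    double a[] = { 12.0, 55.0, 230.0, 900.0 };
    double b[] = { 0.9, 4.1, 18.0, 75.0 };
    printf("r = %.3f\n", log_correlation(a, b, 4));
    return 0;
}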

The results from different versions of the SPEC CPU benchmark suites correlate very well (Fig. 5). The main difference between the two benchmarks is the working-set size. CPU2000 has a smaller working-set size, so on systems where the memory cache is not sufficient, CPU2000 tends to score better than CPU2006.

Dhrystone and CINT2000 also show similar results, but the correlation between the two is lower than that between CPU2000 and CPU2006 (Fig. 6). Embedded systems have lower CPU-to-memory performance (bandwidth and latency) available to the SPEC benchmark suites, while the processor core is optimized for in-cache operation. This imbalance yields better Dhrystone results, while the CPU2000 score is not as good.

The correlation of STREAM and CFP2000 is even worse than that of Dhrystone and CINT2000 (Fig. 7).

Fig. 7. STREAM results and CFP2000 base ratio: just increasing memory bandwidth is not sufficient for increasing application performance

Naturally, memory bandwidth is not the only factor in floating-point performance, and there are systems that have much lower CFP2000 performance than expected from a STREAM bandwidth of less than 100 MBytes/s. This is considered to be due to the poor handling of control flow on these processors: STREAM consists of just four simple, short loops, whereas CFP2000 has much more complex control flow structures. The SPARC processor that has much higher STREAM performance and a smaller CFP2000 result is the UltraSPARC T2, whose floating-point unit is intentionally kept simple and low-performance to save semiconductor real estate.
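For reference, the following simplified sketch of the four STREAM kernels (Copy, Scale, Add, Triad) shows why the benchmark stresses memory bandwidth rather than control flow; the array size is an arbitrary choice, and the real STREAM benchmark also times each loop and verifies the results.

/* Simplified sketch of the four STREAM kernels: each is a single short
 * loop that streams through large arrays, so memory bandwidth dominates
 * and there is almost no control-flow complexity. */
#include <stdio.h>

#define N (1 << 22)              /* arbitrary size for illustration */

static double a[N], b[N], c[N];

int main(void)
{
    const double scalar = 3.0;
    for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

    for (int i = 0; i < N; i++) c[i] = a[i];                 /* Copy  */
    for (int i = 0; i < N; i++) b[i] = scalar * c[i];        /* Scale */
    for (int i = 0; i < N; i++) c[i] = a[i] + b[i];          /* Add   */
    for (int i = 0; i < N; i++) a[i] = b[i] + scalar * c[i]; /* Triad */

    printf("a[0]=%f b[0]=%f c[0]=%f\n", a[0], b[0], c[0]);
    return 0;
}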

VI. CONCLUSIONS

In this paper we have presented a new approach to teaching the diversity of computer architecture by using old computers. It was necessary to restore old systems before handing them to undergraduate students. We established restoration techniques specific to microprocessor-based computers, and more than 270 systems have now been brought back to working condition.

Students learn about the options available in defining an instruction set, instruction encoding and register count, as well as implementation differences. Their experience is then used to design their own processor in the next semester.

We also measured the power consumption and benchmark-to-benchmark differences of old systems in a unified setting. As benchmarking trends change, results from new benchmarks will be measured using our collection of computers to look at the history of performance as measured by each new benchmark.

The preservation of computer history has so far been centered on collecting dead hardware. Software is also an important part of computing history, and to save it the hardware must be in working condition. As we have discussed, there are parameters that can only be observed using living computers. We want more old computers to be brought back to working condition using our repair methods.

REFERENCES

[1] Maxwell M. Burnet and Robert M. Supnik. Preserving computing's past: Restoration and simulation. Digital Technical Journal, 8(3):23-38, 1996.
[2] W.R. Johnson and W.J. Richards. Sealed nickel-cadmium cell performance and optimization of battery design. In Proceedings of the Ninth Annual Battery Conference on Applications and Advances, pages 64-68, Jan 1994.
[3] Andrew D. Kostic. Lead-free electronics reliability - an update. The Aerospace Corporation, 2011.
[4] A. Lahyani, P. Venet, G. Grellet, and P.-J. Viverge. Failure prediction of electrolytic capacitors during operation of a switchmode power supply. IEEE Transactions on Power Electronics, 13(6):1199-1207, Nov 1998.
[5] Computermuseum München. Computermuseum München. http://www.computermuseum-muenchen.de.
[6] Computer History Museum. Computer history museum. http://www.computerhistory.org/.
[7] P.E. Ross. Computer reborn. IEEE Spectrum, 46(11):42-47, Nov 2009.
[8] D.L. Schafer. Lowering the cost of legacy systems upgrades. In AUTOTESTCON Proceedings, 2000 IEEE, pages 239-242, 2000.
[9] Dallas Semiconductor. DS1287 real time clock. Dallas Semiconductor Datasheets, pages 6-87 to 6-103, May 1992.
[10] A. Shrivastava, M.H. Azarian, C. Morillo, B. Sood, and M. Pecht. Detection and reliability risks of counterfeit electrolytic capacitors. IEEE Transactions on Reliability, 63(2):468-479, June 2014.
[11] D. Spicer. The IBM 1620 restoration project. IEEE Annals of the History of Computing, 27(3):33-43, July 2005.
[12] J.L. Stevens, J.S. Shaffer, and J.T. Vandenham. The service life of large aluminum electrolytic capacitors: effects of construction and application. In Conference Record of the 2001 IEEE Industry Applications Conference, Thirty-Sixth IAS Annual Meeting, volume 4, pages 2493-2499, Sept 2001.
[13] Yutaka Sugawara and Kei Hiraki. A computer architecture education curriculum through the design and implementation of original processors using FPGAs. In Proceedings of the 2004 Workshop on Computer Architecture Education (WCAE '04), held in conjunction with the 31st International Symposium on Computer Architecture, New York, NY, USA, 2004. ACM.
[14] Doron D. Swade. The construction of Charles Babbage's Difference Engine No. 2. IEEE Annals of the History of Computing, 27(3):70-88, July 2005.
[15] Makoto Ue. Chemical capacitors and quaternary ammonium salts. Electrochemistry, 75(8):565-572, 2007.
[16] M.T. Yourst. PTLsim: A cycle accurate full system x86-64 microarchitectural simulator. In IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS 2007), pages 23-34, April 2007.
[17] T. Zoppke and R. Rojas. The virtual life of ENIAC: simulating the operation of the first electronic computer. IEEE Annals of the History of Computing, 28(2):18-25, April 2006.