A Supercomputer is a Computer That is at the Frontline of Current Processing Capacity


A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985 to 1990). In the 1980s a large number of smaller competitors entered the market, much as the minicomputer market had formed a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".

Today, supercomputers are typically one-of-a-kind custom designs produced by traditional companies such as Cray, IBM and Hewlett-Packard, who purchased many of the 1980s companies to gain their experience. Since October 2010, the Tianhe-1A supercomputer, located in China, has been the fastest in the world.

The term supercomputer itself is rather fluid, and the speed of today's supercomputers tends to become typical of tomorrow's ordinary computers. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard, with typical numbers of processors in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on "off-the-shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors such as NVIDIA Tesla GPGPUs, AMD GPUs, the IBM Cell, and FPGAs. Most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion).

Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.


Hardware and software design


Processor board of a Cray Y-MP vector computer

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, using hardware to address the remaining bottlenecks.
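Amdahl's law can be made concrete: if a fraction s of a program is inherently serial, the speedup achievable on N processors is bounded by 1 / (s + (1 - s)/N). The minimal Python sketch below illustrates the bound; the 5% serial fraction is an assumed illustrative value, not a figure from the text.

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Upper bound on speedup when `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even with only 5% serialization (assumed value), thousands of CPUs cannot exceed
# a 20x speedup, which is why designs work so hard to eliminate serial code.
for n in (16, 1024, 65536):
    print(f"{n:>6} processors -> speedup <= {amdahl_speedup(0.05, n):.1f}x")
```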

Supercomputer challenges, technologies

- A supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts of electricity.[1] The cost to power and cool the system is usually one of the factors that limit the scalability of the system. (For example, 4 MW at $0.10/kWh is $400 an hour, or about $3.5 million per year; see the sketch after this list.)
- Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1 to 5 microseconds to send a message between CPUs are typical.
- Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
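The power-cost and latency figures in the first two points follow from simple arithmetic. The short Python sketch below reproduces them; the $0.10/kWh price is the example rate quoted above, and the 10-metre machine span is an assumed, purely illustrative size.

```python
# Back-of-the-envelope checks for the figures quoted above.

SPEED_OF_LIGHT_M_PER_S = 299_792_458  # metres per second

def yearly_power_cost(megawatts: float, dollars_per_kwh: float) -> float:
    """Electricity cost per year for a machine drawing `megawatts` continuously."""
    kilowatts = megawatts * 1_000
    hours_per_year = 24 * 365
    return kilowatts * dollars_per_kwh * hours_per_year

def light_latency_ns(distance_m: float) -> float:
    """Best-case one-way signal latency in nanoseconds over `distance_m`."""
    return distance_m / SPEED_OF_LIGHT_M_PER_S * 1e9

if __name__ == "__main__":
    # Tianhe-1A draws 4.04 MW; at $0.10/kWh that is roughly $3.5 million per year.
    print(f"Yearly power cost: ${yearly_power_cost(4.04, 0.10):,.0f}")
    # A machine 10 m across (assumed size) has an irreducible latency of ~33 ns.
    print(f"Light-speed latency across 10 m: {light_latency_ns(10):.1f} ns")
```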

Technologies developed for supercomputers include:

- Vector processing
- Liquid cooling
- Non-Uniform Memory Access (NUMA)
- Striped disks (the first instance of what was later called RAID)
- Parallel filesystems

Processing techniques

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction, Multiple Data) processing instructions for general-purpose computers.

Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several teraFLOPS. The applications to which this power could be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).
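The effect of vector (SIMD-style) processing can be illustrated in Python with NumPy, whose whole-array operations run in compiled loops that can use SIMD instructions. The element-by-element loop and the single vectorized expression below compute the same result; this is only an illustrative sketch, not supercomputer code.

```python
import numpy as np

n = 100_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar style: one multiply-add at a time, as an early scalar processor would.
c_scalar = np.empty(n)
for i in range(n):
    c_scalar[i] = 2.0 * a[i] + b[i]

# Vector style: the same operation expressed over whole arrays at once,
# which NumPy executes in compiled loops that can exploit SIMD hardware.
c_vector = 2.0 * a + b

assert np.allclose(c_scalar, c_vector)
```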

The Top500 list from May 2010 has three supercomputers based on GPGPUs. In particular, the number 3 supercomputer, Nebulae, built by Dawning in China, is based on GPGPUs.[2]

Operating systems


More than 90% of today's supercomputers run some variant of Linux.[3]

Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of operating systems such as Cray's Unicos, or Linux.

Programming

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI are used for loosely connected clusters, and OpenMP for tightly coordinated shared-memory machines. Significant effort is required to optimize a problem for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. The new massively parallel GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA and OpenCL.
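As an illustration of the message-passing style described above, here is a minimal sketch using mpi4py, a Python binding of MPI. The usual base languages are Fortran and C, so this Python version is a simplified stand-in rather than production supercomputer code, and the script name in the comment is hypothetical.

```python
# Minimal MPI message-passing sketch using the mpi4py binding.
# Run with something like: mpiexec -n 4 python mpi_sum.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's ID
size = comm.Get_size()      # total number of processes

# Each process works on its own slice of the problem...
local_values = np.arange(rank * 1000, (rank + 1) * 1000, dtype=np.float64)
local_sum = local_values.sum()

# ...and only a small result crosses the interconnect, which keeps CPUs
# from wasting time waiting on data from other nodes.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum over {size} processes: {total}")
```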

Software tools


Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source software solutions such as Beowulf, Warewulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy programming language for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community, which often creates disruptive technology.

Modern supercomputer architecture


    IBM Roadrunner - LANL

    The CPU Architecture Share of Top500 Rankings between 1993 and 2009.

    Supercomputers today often have a similar top-level architecture consisting of a cluster of

    MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary

    radically with respect to the number of multiprocessors per cluster, the number of

    processors per multiprocessor, and the number of simultaneous instructions per SIMD

    processor. Within this hierarchy we have:


- A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an operating system (OS).
- A multiprocessing computer is a computer, operating under a single OS and using more than one CPU, wherein the application-level software is indifferent to the number of processors (see the sketch after this list). The processors share tasks using symmetric multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
- A SIMD processor executes the same instruction on more than one set of data at the same time. The processor could be a general-purpose commodity processor or a special-purpose vector processor. It could also be a high-performance processor or a low-power processor. As of 2007, these processors execute several SIMD instructions per nanosecond.
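The multiprocessing level of this hierarchy is the easiest to illustrate in miniature: application-level code can be written so that it is indifferent to the number of processors, as in the Python sketch below (a toy shared-memory example using an OS-managed process pool, not actual supercomputer code).

```python
from multiprocessing import Pool
import os

def work(x: int) -> int:
    """A stand-in for one independent unit of computation."""
    return x * x

if __name__ == "__main__":
    # The pool defaults to one worker per available CPU; the application code
    # does not need to know how many processors the machine actually has.
    with Pool() as pool:
        results = pool.map(work, range(1000))
    print(f"{os.cpu_count()} CPUs available, {len(results)} results computed")
```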

As of October 2010 the fastest supercomputer in the world is the Tianhe-1A system at the National University of Defense Technology. With more than 21,000 processors, it boasts a speed of 2.507 petaflops, over 30% faster than the world's next fastest computer, the Cray XT5 "Jaguar".

In February 2009, IBM also announced work on "Sequoia," which appears to be a 20 petaflops supercomputer. This will be equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It is slated for deployment in late 2011.[4] The Sequoia will be powered by 1.6 million cores (specific 45-nanometer chips in development) and 1.6 petabytes of memory. It will be housed in 96 refrigerators spanning roughly 3,000 square feet (280 m2).[5]

Moore's Law and economies of scale are the dominant factors in supercomputer design. The design concepts that allowed past supercomputers to out-perform desktop machines of the time tended to be gradually incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run, and favor mass-produced chips that have enough demand to recoup the cost of production. A current-model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; most workloads that required such a supercomputer in the 1990s can be done on workstations costing less than 4,000 US dollars as of 2010. Supercomputing density is also increasing, allowing desktop supercomputers to become available and offering in less than a desktop footprint the computing power that in 1998 required a large room.

    In addition, many problems carried out by supercomputers are particularly suitable for

    parallelization (in essence, splitting up into smaller parts to be worked on simultaneously)

    and, in particular, fairly coarse-grained parallelization that limits the amount of

    information that needs to be transferred between independent processing units. For this

    reason, traditional supercomputers can be replaced, for many applications, by "clusters"

    of computers of standard design, which can be programmed to act as one large computer.

Special-purpose supercomputers


Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer, by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular set of problems.

Examples of special-purpose supercomputers:

- Belle, Deep Blue, and Hydra, for playing chess
- Reconfigurable computing machines or parts of machines
- GRAPE, for astrophysics and molecular dynamics
- Deep Crack, for breaking the DES cipher
- MDGRAPE-3, for protein structure computation
- D. E. Shaw Research Anton, for simulating molecular dynamics [6]
- PACE, for simulations of the strong interaction (Lattice QCD)

The fastest supercomputers today

Measuring supercomputer speed

    14 countries account for the vast majority of the world's 500 fastest supercomputers, with

    over half being located in the United States.

In general, the speed of a supercomputer is measured in "FLOPS" (FLoating Point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). This measurement is based on a particular benchmark, which does LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.
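The LINPACK-style measurement can be imitated on any machine: solve a dense linear system, count roughly (2/3)n^3 floating-point operations for the LU decomposition, and divide by the elapsed time. The Python sketch below is an illustrative approximation of that idea, not the official benchmark.

```python
import time
import numpy as np

n = 2000                              # matrix size (adjust for your machine)
a = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(a, b)             # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flop_count = (2.0 / 3.0) * n**3       # standard operation count for LU decomposition
gflops = flop_count / elapsed / 1e9   # 1 GFLOPS = 10^9 FLOPS; a PFLOPS machine is 10^6 GFLOPS

print(f"Solved a {n}x{n} system in {elapsed:.3f} s, roughly {gflops:.1f} GFLOPS")
```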

    "Petascale" supercomputers can process one quadrillion (1015

    ) (1000 trillion) FLOPS.

    Exascale is computing performance in the exaflops range. An exaflop is one quintillion

    (10

    18

    ) FL

    OPS (one million teraflops).

The TOP500 list

    Main article: TOP500

    Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to

    their LINPACK benchmark results. The list does not claim to be unbiased or definitive,

    but it is a widely cited current definition of the "fastest" supercomputer available at any

    given time.

Current fastest supercomputer system

    Tianhe-1A is ranked on the TOP500 list as the fastest supercomputer. It consists of 14,336

    Intel Xeon CPUs and 7,168 Nvidia Tesla M2050 GPUs with a new interconnect fabric of

    Chinese origin, reportedly twice the speed of InfiniBand.[7]

    Tianhe-1A spans 103 cabinets,

    weighs 155 tons, and consumes 4.04 megawatts of electricity.[8]


A Brief History of the Computer (B.C. to 1993 A.D.)


by Jeremy Meyers

Note: Yes, a lot of this is from Grolier's Encyclopedia. Hey, I was young. I didn't know any better. Credit where credit is due. Also, this information is only current as of the early 1990s (1993, to be exact), and no, I'm not planning to add more information anytime soon. I would not recommend scamming this for your own homework, as some of the conclusions are rather humorous today.

Citing This Work: You are welcome to use this document as a reference in creating your own paper or research work on the subject. Please don't just copy this paper verbatim and submit it as your own work, as I put a lot of time and effort into it. Plus, it's bad karma. If you would like to use this work, please use this citation in your bibliography: Meyers, Jeremy, A Short History of the Computer [Online] Available ([current date])

Table of Contents:

1) In The Beginning
2) Babbage
3) Use of Punched Cards by Hollerith
4) Electronic Digital Computers
5) The Modern Stored Program EDC
6) Advances in the 1950s
7) Advances in the 1960s
8) Recent Advances

    In The Beginning

    The history of computers starts out about 2000 years ago, at the birth of the abacus, a

    wooden rack holding two horizontal wires with beads strung on them. When these beads

    are moved around, according to programming rules memorized by the user, all regular

    arithmetic problems can be done. Another important invention around the same time was

the Astrolabe, used for navigation. Blaise Pascal is usually credited with building the first digital computer in 1642. It added numbers entered with dials and was made to help his

    father, a tax collector. In 1671, Gottfried Wilhelm von Leibniz invented a computer that

    was built in 1694. It could add, and, after changing some things around, multiply. Leibniz

    invented a special stepped gear mechanism for introducing the addend digits, and this is

    still being used. The prototypes made by Pascal and Leibniz were not used in many places,

    and considered weird until a little more than a century later, when Thomas of Colmar

    (A.K.A. Charles Xavier Thomas) created the first successful mechanical calculator that


    could add, subtract, multiply, and divide. A lot of improved desktop calculators by many

    inventors followed, so that by about 1890, the range of improvements included:

- Accumulation of partial results
- Storage and automatic re-entry of past results (a memory function)
- Printing of the results

    Each of these required manual installation. These improvements were mainly made for

    commercial users, and not for the needs of science.

    Babbage

    While Thomas of Colmar was developing the desktop calculator,

    a series of very interesting developments in computers was started in Cambridge, England,

by Charles Babbage, a mathematics professor (for whom the computer store Babbage's, now GameStop, is named). In 1812, Babbage realized that many long calculations, especially those needed to make mathematical tables, were really a series of predictable actions that were constantly repeated. From this he suspected that it should be possible to

    do these automatically. He began to design an automatic mechanical calculating machine,

    which he called a difference engine. By 1822, he had a working model to demonstrate with.

    With financial help from the British government, Babbage started fabrication of a

    difference engine in 1823. It was intended to be steam powered and fully automatic,

    including the printing of the resulting tables, and commanded by a fixed instruction

    program. The difference engine, although having limited adaptability and applicability,

was really a great advance. Babbage continued to work on it for the next 10 years, but in 1833 he lost interest because he thought he had a better idea: the construction of what would now be called a general-purpose, fully program-controlled, automatic mechanical digital computer. Babbage called this idea an Analytical Engine. The ideas of this design showed a lot of foresight, although this couldn't be appreciated until a full century later.

    The plans for this engine required an identical decimal computer operating on numbers of

    50 decimal digits (or words) and having a storage capacity (memory) of 1,000 such digits.

    The built-in operations were supposed to include everything that a modern general

purpose computer would need, even the all-important Conditional Control Transfer

    Capability that would allow commands to be executed in any order, not just the order in

    which they were programmed. The analytical engine was soon to use punched cards


    (similar to those used in a Jacquard loom), which would be read into the machine from

    several different Reading Stations. The machine was supposed to operate automatically, by

steam power, and require only one person there. Babbage's computers were never finished. Various reasons are given for his failure, the most common being the lack of precision machining techniques at the time. Another speculation is that Babbage was working on a solution to a problem that few people in 1840 really needed to solve. After Babbage, there was a temporary loss of interest in automatic digital computers. Between 1850 and 1900 great

    advances were made in mathematical physics, and it came to be known that most

observable dynamic phenomena can be identified by differential equations (which meant that

    most events occurring in nature can be measured or described in one equation or another),

    so that easy means for their calculation would be helpful. Moreover, from a practical view,

    the availability of steam power caused manufacturing (boilers), transportation (steam

    engines and boats), and commerce to prosper and led to a period of a lot of engineering

    achievements. The designing of railroads, and the making of steamships, textile mills, and

    bridges required differential calculus to determine such things as:

- center of gravity
- center of buoyancy
- moment of inertia
- stress distributions

    Even the assessment of the power output of a steam engine needed mathematical

    integration. A strong need thus developed for a machine that could rapidly perform many

    repetitive calculations.

    Use of Punched Cards by Hollerith

    A step towards automated computing was the development of punched

cards, which were first successfully used with computers in 1890 by Herman Hollerith and James Powers, who worked for the U.S. Census Bureau. They developed devices that could read the information that had been punched into the cards automatically, without

    human help. Because of this, reading errors were reduced dramatically, work flow

    increased, and, most importantly, stacks of punched cards could be used as easily

    accessible memory of almost unlimited size. Furthermore, different problems could be

    stored on different stacks of cards and accessed when needed. These advantages were seen

    by commercial companies and soon led to the development of improved punch-card using

    computers created by International Business Machines (IBM), Remington (yes, the same


    people that make shavers), Burroughs, and other corporations. These computers used

    electromechanical devices in which electrical power provided mechanical motion like

    turning the wheels of an adding machine. Such systems included features to:

- feed in a specified number of cards automatically
- add, multiply, and sort
- feed out cards with punched results

As compared to today's machines, these computers were slow, usually processing 50 to 220 cards per minute, each card holding about 80 decimal numbers (characters). At the time,

    however, punched cards were a huge step forward. They provided a means of I/O, and

    memory storage on a huge scale. For more than 50 years after their first use, punched card

machines did most of the world's first business computing, and a considerable amount of

    the computing work in science.

    Electronic Digital Computers

    The start of World War II produced a large need for computer

    capacity, especially for the military. New weapons were made for which trajectory tables

and other essential data were needed. In 1942, John P. Eckert, John W. Mauchly, and

    their associates at the Moore school of Electrical Engineering of University of Pennsylvania

    decided to build a high speed electronic computer to do the job. This machine became

known as ENIAC (Electronic Numerical Integrator And Calculator). The size of ENIAC's numerical word was 10 decimal digits, and it could multiply two of these numbers at a rate of 300 per second, by finding the value of each product from a multiplication table stored in its memory. ENIAC was therefore about 1,000 times faster than the previous

    generation of relay computers. ENIAC used 18,000 vacuum tubes, about 1,800 square feet

    of floor space, and consumed about 180,000 watts of electrical power. It had punched card

    I/O, 1 multiplier, 1 divider/square rooter, and 20 adders using decimal ring counters, which

    served as adders and also as quick-access (.0002 seconds) read-write register storage. The

    executable instructions making up a program were embodied in the separate units of

    ENIAC, which were plugged together to form a route for the flow of information.


    These connections had to be redone after each

computation, together with presetting function tables and switches. This "wire your own" technique was inconvenient (for obvious reasons), and only with some latitude could ENIAC be considered programmable. It was, however, efficient in handling the particular

    programs for which it had been designed. ENIAC is commonly accepted as the first

    successful high speed electronic digital computer (EDC) and was used from 1946 to 1955.

A controversy developed in 1971, however, over the patentability of ENIAC's basic digital concepts, the claim being made that another physicist, John V. Atanasoff, had already

    used basically the same ideas in a simpler vacuum tube device he had built in the 1930s

    while at Iowa State College. In 1973 the courts found in favor of the company using the

    Atanasoff claim.

    The Modern Stored Program EDC

Fascinated by the success of ENIAC, the mathematician John von Neumann undertook, in 1945, an abstract study of computation that showed that a computer should have a very simple, fixed physical structure, and yet be able to execute

    any kind of computation by means of a proper programmed control without the need for

    any change in the unit itself. Von Neumann contributed a new awareness of how practical,

    yet fast computers should be organized and built. These ideas, usually referred to as the

    stored program technique, became essential for future generations of high speed digital

    computers and were universally adopted.


    The Stored Program technique involves many features of computer design and function

    besides the one that it is named after. In combination, these features make very high

    speed operation attainable. A glimpse may be provided by considering what 1,000

    operations per second means. If each instruction in a job program were used once in

consecutive order, no human programmer could generate enough instructions to keep the computer busy. Arrangements must be made, therefore, for parts of the job program (called subroutines) to be used repeatedly in a manner that depends on the way the

    computation goes. Also, it would clearly be helpful if instructions could be changed if

    needed during a computation to make them behave differently.

Von Neumann met these two needs by making a special type of machine instruction, called a conditional control transfer, which allowed the program sequence to be stopped and

    started again at any point and by storing all instruction programs together with data in

    the same memory unit, so that, when needed, instructions could be arithmetically changed

    in the same way as data. As a result of these techniques, computing and programming

    became much faster, more flexible, and more efficient with work. Regularly used

subroutines did not have to be reprogrammed for each new program, but could be kept in libraries and read into memory only when needed. Thus, much of a given program could

    be assembled from the subroutine library.

    The all purpose computer memory became the assembly place in which all parts of a long

    computation were kept, worked on piece by piece, and put together to form the final

    results. The computer control survived only as an errand runner for the overall process.

    As soon as the advantage of these techniques became clear, they became a standard

    practice.

The first generation of modern programmed electronic computers to take advantage of these improvements was built in 1947. This group included computers using Random Access Memory (RAM), which is a memory designed to give almost constant access to any particular piece of information. These machines had punched card or punched tape


I/O devices and RAMs of 1,000-word capacity and access times of 0.5 microseconds (0.5 × 10^-6 seconds). Some of them could perform multiplications in 2 to 4 microseconds.

    Physically, they were much smaller than ENIAC. Some were about the size of a grand

piano and used only 2,500 electron tubes, a lot less than required by the earlier ENIAC. The first-generation stored-program computers needed a lot of maintenance, reached probably about 70 to 80% reliability of operation (ROO), and were used for 8 to 12 years. They were usually programmed in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming. This group of computers included EDVAC and UNIVAC, the first commercially available computers.

    Advances in the 1950s

    Early in the 50s two important engineering discoveries changed the image of the electronic

    computer field, from one of fast but unreliable hardware to an image of relatively high

    reliability and even more capability. These discoveries were the magnetic core memory and

the transistor circuit element. These technical discoveries quickly found their way into new models of digital computers. RAM capacities increased from 8,000 to 64,000 words in commercially available machines by the 1960s, with access times of 2 to 3 milliseconds. These machines were very expensive to purchase or even to rent and were

    particularly expensive to operate because of the cost of expanding programming. Such

    computers were mostly found in large computer centers operated by industry, government,

    and private laboratories staffed with many programmers and support personnel.

    This situation led to modes of operation enabling the sharing of the high potential

    available. One such mode is batch processing, in which problems are prepared and then

    held ready for computation on a relatively cheap storage medium. Magnetic drums,

magnetic disk packs, or magnetic tapes were usually used. When the computer finishes with a problem, it dumps the whole problem (program and results) on one of these

    peripheral storage units and starts on a new problem. Another mode for fast, powerful

    machines is called time-sharing. In time-sharing, the computer processes many jobs in such

    rapid succession that each job runs as if the other jobs did not exist, thus keeping each

customer satisfied. Such operating modes need elaborate executive programs to attend

    to the administration of the various tasks.

    Advances in the 1960s

    In the 1960s, efforts to design and develop the fastest possible computer with the greatest

capacity reached a turning point with the LARC machine, built for the Livermore Radiation Laboratories of the University of California by the Sperry Rand Corporation,

and the Stretch computer by IBM. The LARC had a base memory of 98,000 words and multiplied in 10 microseconds. Stretch was made with several levels of memory, with slower access for the ranks of greater capacity; the fastest access time was less than 1 microsecond, and the total capacity was in the vicinity of 100,000,000 words.

    During this period, the major computer manufacturers began to offer a range of

    capabilities and prices, as well as accessories such as:


- Consoles
- Card feeders
- Page printers
- Cathode ray tube displays
- Graphing devices

    These were widely used in businesses for such things as:

- Accounting
- Payroll
- Inventory control
- Ordering supplies
- Billing

    CPUs for these uses did not have to be very fast arithmetically and were usually used to

access large numbers of records on file, keeping these up to date. By far, the largest number of computer systems were sold for the simpler uses, such as hospitals (keeping track of

    patient records, medications, and treatments given). They were also used in libraries, such

    as the National Medical Library retrieval system, and in the Chemical Abstracts System,

    where computer records on file now cover nearly all known chemical compounds.

    More Recent Advances

    The trend during the 1970s was, to some extent, moving away from very powerful, single

    purpose computers and toward a larger range of applications for cheaper computer

    systems. Most continuous-process manufacturing, such as petroleum refining and

    electrical-power distribution systems, now used computers of smaller capability for

    controlling and regulating their jobs.

    In the 1960s, the problems in programming applications were an obstacle to the

    independence of medium sized on-site computers, but gains in applications programming

    language technologies removed these obstacles. Applications languages were now available

    for controlling a great range of manufacturing processes, for using machine tools with

    computers, and for many other things. Moreover, a new revolution in computer hardware

    was under way, involving shrinking of computer-logic circuitry and of components by what

are called large-scale integration (LSI) techniques.

    In the 1950s it was realized that scaling down the size of electronic digital computer

    circuits and parts would increase speed and efficiency and by that, improve performance, if

    they could only find a way to do this. About 1960 photo printing of conductive circuit

    boards to eliminate wiring became more developed. Then it became possible to build

    resistors and capacitors into the circuitry by the same process. In the 1970s, vacuum

    deposition of transistors became the norm, and entire assemblies, with adders, shifting

    registers, and counters, became available on tiny chips.


    In the 1980s, very large scale integration (VLSI), in which hundreds of thousands of

    transistors were placed on a single chip, became more and more common. Many

companies, some new to the computer field, introduced programmable minicomputers supplied with software packages in the 1970s. The shrinking trend continued with the

    introduction of personal computers (PCs), which are programmable machines small

enough and inexpensive enough to be purchased and used by individuals. Many companies, such as Apple Computer and Radio Shack, introduced very successful PCs in the 1970s, encouraged in part by a fad in computer (video) games. In the 1980s some friction occurred in the crowded PC field, with Apple and IBM remaining strong. In the manufacturing of

    semiconductor chips, the Intel and Motorola Corporations were very competitive into the

    1980s, although Japanese firms were making strong economic advances, especially in the

    area of memory chips.

    By the late 1980s, some personal computers were run by microprocessors that, handling 32

    bits of data at a time, could process about 4,000,000 instructions per second.

    Microprocessors equipped with read-only memory (ROM), which stores constantly used,

unchanging programs, now performed an increased number of process-control, testing, monitoring, and diagnosing functions, like automobile ignition systems, automobile-engine

    diagnosis, and production-line inspection duties. Cray Research and Control Data Inc.

    dominated the field of supercomputers, or the most powerful computer systems, through

    the 1970s and 1980s.

    In the early 1980s, however, the Japanese government announced a gigantic plan to design

    and build a new generation of supercomputers. This new generation, the so-called fifth

generation, is using new technologies in very large scale integration, along with new

    programming languages, and will be capable of amazing feats in the area of artificial

    intelligence, such as voice recognition.

    Progress in the area of software has not matched the great advances in hardware. Software

    has become the major cost of many systems because programming productivity has not

    increased very quickly. New programming techniques, such as object-oriented

    programming, have been developed to help relieve this problem. Despite difficulties with

    software, however, the cost per calculation of computers is rapidly lessening, and their

convenience and efficiency are expected to increase in the near future. The computer field

    continues to experience huge growth. Computer networking, computer mail, and electronic

    publishing are just a few of the applications that have grown in recent years. Advances in

    technologies continue to produce cheaper and more powerful computers offering the

    promise that in the near future, computers or terminals will reside in most, if not all homes,

    offices, and schools.