Parallelism and Locality in Matrix Computations
www.cs.berkeley.edu/~demmel/cs267_Spr09
Introduction
Jim Demmel
EECS & Math Departments, UC Berkeley
01/21/2008 CS267 - Lecture 1 2
Outline (of all lectures)
• Why all computers must be parallel processors
• Arithmetic is cheap, what costs is moving data
• Recurring computational patterns
• Dense Linear Algebra
• Sparse Linear Algebra
• How do I know I get the right answer?
01/21/2008 CS267 - Lecture 1 3
Units of Measure
• High Performance Computing (HPC) units are:
- Flop: floating point operation
- Flops/s: floating point operations per second
- Bytes: size of data (a double precision floating point number is 8)
• Typical sizes are millions, billions, trillions…
Mega   Mflop/s = 10^6  flop/sec   Mbyte = 2^20 = 1048576 ~ 10^6  bytes
Giga   Gflop/s = 10^9  flop/sec   Gbyte = 2^30 ~ 10^9  bytes
Tera   Tflop/s = 10^12 flop/sec   Tbyte = 2^40 ~ 10^12 bytes
Peta   Pflop/s = 10^15 flop/sec   Pbyte = 2^50 ~ 10^15 bytes
Exa    Eflop/s = 10^18 flop/sec   Ebyte = 2^60 ~ 10^18 bytes
Zetta  Zflop/s = 10^21 flop/sec   Zbyte = 2^70 ~ 10^21 bytes
Yotta  Yflop/s = 10^24 flop/sec   Ybyte = 2^80 ~ 10^24 bytes
• Current fastest (public) machine ~ 1.5 Pflop/s
- Up-to-date list at www.top500.org
01/21/2008 CS267 - Lecture 1 4
Outline (of all lectures)
• Why all computers must be parallel processors
• Arithmetic is cheap, what costs is moving data
• Recurring computational patterns
• Dense Linear Algebra
• Sparse Linear Algebra
• How do I know I get the right answer?
01/21/2008 CS267 - Lecture 1 5
Technology Trends: Microprocessor Capacity
2X transistors/Chip Every 1.5 years
Called “Moore’s Law”
Moore’s Law
Microprocessors have become smaller, denser, and more powerful.
Gordon Moore (co-founder of Intel) predicted in 1965 that the transistor density of semiconductor chips would double roughly every 18 months.
Slide source: Jack Dongarra
Performance Development
[Top500 performance-over-time chart: SUM, N=1, and N=500 lines on a log scale from 100 Mflop/s to 100 Pflop/s; N=500 grows from 400 MFlop/s to 17.08 TFlop/s, N=1 from 59.7 GFlop/s to 1.1 PFlop/s, and SUM from 1.17 TFlop/s to 22.9 PFlop/s. Source: www.top500.org]
01/21/2008 CS267 - Lecture 1 7
Parallelism Revolution is Happening Now
• Chip density continues to increase ~2x every 2 years
- Clock speed is not
- Number of processor cores may double instead
• There is little or no more hidden parallelism (ILP) to be found
• Parallelism must be exposed to and managed by software
Source: Intel, Microsoft (Sutter) and Stanford (Olukotun, Hammond)
01/21/2008 CS267 - Lecture 1 8
Outline (of all four lectures)
• Why all computers must be parallel processors
• Arithmetic is cheap, what costs is moving data
• Recurring computational patterns
• Dense Linear Algebra
• Sparse Linear Algebra
• How do I know I get the right answer?
01/21/2008 CS267 - Lecture 1 9
Motivation
• Most applications run at < 10% of the “peak” performance of a system
- Peak is the maximum the hardware can physically execute
• Much of this performance is lost on a single processor, i.e., the code running on one processor often runs at only 10-20% of the processor peak
• Most of the single processor performance loss is in the memory system
- Moving data takes much longer than arithmetic and logic
• To understand this, we need to look under the hood of modern processors
- We will first look at only a single “core” processor
- These issues will exist on processors within any parallel computer
• For parallel computers, moving data is also the bottleneck
01/21/2008 CS267 - Lecture 1 10
Outline
• Arithmetic is cheap, what costs is moving data
- Idealized and actual costs in modern processors
- Parallelism within single processors
- Memory hierarchies
- What this means for designing algorithms and software
01/21/2008 CS267 - Lecture 1 11
Outline
• Arithmetic is cheap, what costs is moving data
- Idealized and actual costs in modern processors
- Parallelism within single processors
- Memory hierarchies
- Temporal and spatial locality
- Basics of caches
- Use of microbenchmarks to characterize performance
- What this means for designing algorithms and software
01/21/2008 CS267 - Lecture 1 12
Memory Hierarchy
• Most programs have a high degree of locality in their accesses
- spatial locality: accessing things nearby previous accesses
- temporal locality: reusing an item that was previously accessed
• Memory hierarchy tries to exploit locality
[Diagram: processor (control, datapath, registers, on-chip cache) → second level cache (SRAM) → main memory (DRAM) → secondary storage (disk) → tertiary storage (disk/tape)]
Speed:  ~1 ns    ~10 ns    ~100 ns    ~10 ms    ~10 sec
Size:   ~B       ~KB       ~MB        ~GB       ~TB
01/21/2008 CS267 - Lecture 1 13
Processor-DRAM Gap (latency)
[Chart: relative performance vs. time, 1980–2000, log scale. CPU performance ("Moore's Law") improves ~60%/yr while DRAM latency improves only ~7%/yr, so the processor-memory performance gap grows ~50%/yr.]
• Memory hierarchies are getting deeper
- Processors get faster more quickly than memory
01/21/2008 CS267 - Lecture 1 14
Approaches to Handling Memory Latency
• Approaches to the memory latency problem (see the sketch after this list)
- Eliminate memory operations by saving values in small, fast memory (cache) and reusing them
- needs temporal locality in the program
- Take advantage of better bandwidth by getting a chunk of memory, saving it in small fast memory (cache), and using the whole chunk
- needs spatial locality in the program
- Take advantage of better bandwidth by allowing the processor to issue multiple reads to the memory system at once
- concurrency in the instruction stream, e.g. load a whole array, as in vector processors; or prefetching
- Overlap computation & memory operations
- Prefetching
• Bandwidth has improved more than latency
- 23% per year vs 7% per year
- Bandwidth is still improving more slowly than arithmetic (which improves ~60% per year)
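A minimal, hedged sketch of the last two ideas (issuing memory requests early so computation overlaps with memory traffic). It uses the GCC/Clang builtin __builtin_prefetch; the prefetch distance PF_DIST is an illustrative assumption that real codes tune per machine, and hardware prefetchers often do this automatically.

```c
#include <stddef.h>

/* Illustrative software prefetching: request a[i + PF_DIST] while summing a[i],
 * so the memory access for a later iteration overlaps with current arithmetic.
 * PF_DIST is a made-up tuning parameter for this sketch. */
#define PF_DIST 16

double sum_with_prefetch(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + PF_DIST < n)
            __builtin_prefetch(&a[i + PF_DIST], 0 /* read */, 1 /* modest temporal locality */);
        s += a[i];   /* arithmetic proceeds while the prefetched line is in flight */
    }
    return s;
}
```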
01/21/2008 CS267 - Lecture 1 15
Cache Basics
• Cache is fast (expensive) memory which keeps copy of data in main memory; it is hidden from software
- Simplest example: data at memory address xxxxx1101 is stored at cache location 1101
• Cache hit: in-cache memory access—cheap
• Cache miss: non-cached memory access—expensive
- Need to access next, slower level of cache
• Cache line length: # of bytes loaded together in one entry
- Ex: If either xxxxx1100 or xxxxx1101 is loaded, both are loaded
• Associativity
- direct-mapped: only 1 address (line) in a given range in cache (see the sketch after this list)
- Ex: Data stored at address xxxxx1101 stored at cache location 1101, in a 16-word cache
- n-way: n ≥ 2 lines with different addresses can be stored
- Ex: Up to 16 words with addresses xxxxx1101 can be stored at cache location 1101
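A minimal sketch of the direct-mapped idea, assuming an illustrative toy cache of 16 lines of 16 bytes each (not the configuration of any particular machine): the low-order address bits above the line offset pick the cache line, so addresses that agree in those bits conflict.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy direct-mapped cache model: 16 lines of 16 bytes (illustrative numbers). */
enum { LINE_BYTES = 16, NUM_LINES = 16 };

static unsigned cache_line_index(uintptr_t addr) {
    return (unsigned)((addr / LINE_BYTES) % NUM_LINES);  /* bits just above the offset */
}

int main(void) {
    uintptr_t a = 0x12345ABC, b = 0x6789FABC;   /* same low bits, different high bits */
    printf("line(a) = %u, line(b) = %u\n", cache_line_index(a), cache_line_index(b));
    /* Both map to the same line, so in a direct-mapped cache they evict each other. */
    return 0;
}
```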
01/21/2008 CS267 - Lecture 1 16
Why Have Multiple Levels of Cache?
• On-chip vs. off-chip- On-chip caches are faster, but limited in size
• A large cache has delays- Hardware to check longer addresses in cache takes more time
- Associativity, which gives a more general set of data in cache, also takes more time
• Some examples:- Cray T3E eliminated one cache to speed up misses
- IBM uses a level of cache as a “victim cache” which is cheaper
• There are other levels of the memory hierarchy- Register, pages (TLB, virtual memory), …
- And it isn’t always a hierarchy
01/21/2008 CS267 - Lecture 1 17
Experimental Study of Memory (Membench)
• Microbenchmark for memory system performance
• for array A of length L from 4 KB to 8 MB by 2x
    for stride s from 4 Bytes (1 word) to L/2 by 2x
      time the following loop (repeat many times and average)
        for i from 0 to L by s
          load A[i] from memory (4 Bytes)
[Diagram: one experiment = one strided sweep through the array with stride s]
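A hedged C sketch of the Membench loop above. The timer, repetition count, and the trick for keeping the compiler from deleting the loads are all simplified; a real microbenchmark is considerably more careful.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Simplified Membench-style microbenchmark: time strided loads for each
 * (array size, stride) pair and report the average cost per access. */
int main(void) {
    const size_t max_bytes = 8u << 20;            /* 8 MB, as on the slide */
    volatile int sink = 0;                        /* keeps the loads from being optimized away */
    int *A = malloc(max_bytes);
    if (!A) return 1;
    memset(A, 0, max_bytes);

    for (size_t bytes = 4u << 10; bytes <= max_bytes; bytes *= 2) {   /* 4 KB .. 8 MB */
        size_t n = bytes / sizeof(int);
        for (size_t stride = 1; stride <= n / 2; stride *= 2) {       /* 1 word .. L/2 */
            clock_t t0 = clock();
            for (int rep = 0; rep < 100; rep++)
                for (size_t i = 0; i < n; i += stride)
                    sink += A[i];                                     /* the timed load */
            double sec = (double)(clock() - t0) / CLOCKS_PER_SEC;
            size_t accesses = 100u * ((n + stride - 1) / stride);
            printf("size %8zu B  stride %8zu B: %.2f ns/load\n",
                   bytes, stride * sizeof(int), 1e9 * sec / (double)accesses);
        }
    }
    free(A);
    return 0;
}
```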
01/21/2008 CS267 - Lecture 1 18
Membench: What to Expect
• Consider the average cost per load
- Plot one line for each array length, time vs. stride
- Small stride is best: if cache line holds 4 words, at most ¼ miss
- If array is smaller than a given cache, all those accesses will hit (after the first run, which is negligible for large enough runs)
- Picture assumes only one level of cache
- Values have gotten more difficult to measure on modern procs
[Sketch: average cost per access vs. stride s; arrays with total size < L1 stay at the cache hit time, arrays with size > L1 rise toward memory time]
01/21/2008 CS267 - Lecture 1 19
Memory Hierarchy on a Sun Ultra-2i
Sun Ultra-2i, 333 MHz
[Membench plot of average access time vs. stride for each array length]
- L1: 16 KB, 2 cycles (6 ns), 16 B line
- L2: 2 MB, 12 cycles (36 ns), 64 byte line
- Mem: 396 ns (132 cycles)
- 8 K pages, 32 TLB entries
See www.cs.berkeley.edu/~yelick/arvindk/t3d-isca95.ps for details
01/21/2008 CS267 - Lecture 1 20
Memory Hierarchy on a Pentium III
Katmai processor on Millennium, 550 MHz
[Membench plot of average access time vs. stride for each array size]
- L1: 64 KB, 5 ns, 4-way?, 32 byte line?
- L2: 512 KB, 60 ns
01/21/2008 CS267 - Lecture 1 21
Memory Hierarchy on a Power3 (Seaborg)
Power3, 375 MHz
[Membench plot of average access time vs. stride for each array size]
- L1: 32 KB, 128 B line, 0.5-2 cycles
- L2: 8 MB, 128 B line, 9 cycles
- Mem: 396 ns (132 cycles)
01/21/2008 CS267 - Lecture 1 22
Stanza Triad – to measure prefetching
• Even smaller benchmark for prefetching
• Derived from STREAM Triad
• Stanza (L) is the length of a unit-stride run
    while i < arraylength
      for each L-element stanza
        A[i] = scalar * X[i] + Y[i]
      skip k elements
[Diagram: 1) do L triads, 2) skip k elements, 3) do L triads, … — each unit-stride run is a "stanza"]
Source: Kamil et al, MSP05
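A hedged C sketch of the kernel just described: unit-stride stanzas of triads separated by skipped gaps. The real benchmark times this for many stanza lengths L (see Kamil et al., MSP05); this is only the inner kernel.

```c
#include <stddef.h>

/* Stanza Triad kernel: do a unit-stride stanza of L triads, skip k elements,
 * repeat until the array is exhausted. Timing and parameter sweeps omitted. */
void stanza_triad(double *a, const double *x, const double *y,
                  size_t n, size_t L, size_t k, double scalar) {
    size_t i = 0;
    while (i < n) {
        size_t end = (i + L < n) ? i + L : n;
        for (; i < end; i++)
            a[i] = scalar * x[i] + y[i];   /* unit-stride stanza of triads */
        i += k;                            /* skip k elements (the gap interacts with prefetching) */
    }
}
```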
01/21/2008 CS267 - Lecture 1 23
Stanza Triad Results
• This graph (x-axis) starts at a cache line size (16 Bytes)
• If cache locality were the only thing that mattered, we would expect
- Flat lines equal to measured memory peak bandwidth (STREAM), as on the Pentium 3
• Prefetching gets the next cache line (pipelining) while using the current one
- This does not "kick in" immediately, so performance depends on L
01/21/2008 CS267 - Lecture 1 24
Outline
• Arithmetic is cheap, what costs is moving data
- Idealized and actual costs in modern processors
- Parallelism within single processors
- Memory hierarchies
- What this means for designing algorithms and software
- This is the main topic of these lectures
01/21/2008 CS267 - Lecture 1 25
What this means for designing algorithms and software
• Design goal should be to minimize the most expensive operation
- Minimize communication = moving data, either between levels of a memory hierarchy or between processors over a network
- An algorithm that is good enough today may not be tomorrow
- Communication cost is increasing relative to arithmetic
- Sometimes it helps to do more arithmetic in order to do less communication
• Rest of lectures address impact on linear algebra
- Many new algorithms, designed to minimize communication
- Proofs that communication is minimized
• Actual performance of a simple program can be a complicated function of the architecture
- Slight changes in the architecture or program change the performance significantly
- We would like simple models to help us design efficient algorithms and prove their optimality
- Can we automate algorithm design?
01/21/2008 CS267 - Lecture 1 26
Outline (of all four lectures)
• Why all computers must be parallel processors
• Arithmetic is cheap, what costs is moving data
• Recurring computational patterns
• Dense Linear Algebra
• Sparse Linear Algebra
• How do I know I get the right answer?
01/21/2008 CS267 - Lecture 1
The “7 Dwarfs” of High Performance Computing
• Phil Colella (LBL) identified 7 kernels out of which most large-scale simulation and data-analysis programs are composed:
1. Dense Linear Algebra
• Ex: Solve Ax=b or Ax = λx where A is a dense matrix
2. Sparse Linear Algebra
• Ex: Solve Ax=b or Ax = λx where A is a sparse matrix (mostly zero)
3. Operations on Structured Grids
• Ex: Anew(i,j) = 4*A(i,j) – A(i-1,j) – A(i+1,j) – A(i,j-1) – A(i,j+1) (see the sketch after this list)
4. Operations on Unstructured Grids
• Ex: Similar, but list of neighbors varies from entry to entry
5. Spectral Methods
• Ex: Fast Fourier Transform (FFT)
6. Particle Methods
• Ex: Compute electrostatic forces using Fast Multipole Method
7. Monte Carlo
• Ex: Many independent simulations using different inputs
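A hedged C sketch of the structured-grid example in item 3: one sweep of the 5-point stencil over the interior of an n-by-n grid stored in row-major order. Boundary handling and any iteration/convergence logic are intentionally omitted.

```c
/* One structured-grid sweep: Anew(i,j) = 4*A(i,j) - A(i-1,j) - A(i+1,j) - A(i,j-1) - A(i,j+1)
 * applied to the interior points of an n x n grid in row-major storage. */
void stencil_sweep(int n, const double *A, double *Anew) {
    for (int i = 1; i < n - 1; i++)
        for (int j = 1; j < n - 1; j++)
            Anew[i*n + j] = 4.0 * A[i*n + j]
                          - A[(i-1)*n + j] - A[(i+1)*n + j]
                          - A[i*n + (j-1)] - A[i*n + (j+1)];
}
```

Each output point touches only its four neighbors, which is what gives structured-grid codes their locality and their natural geometric decomposition for parallelism.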
01/21/2008 CS267 - Lecture 1
Motif/Dwarf: Common Computational Methods (Red Hot → Blue Cool)
[Heat-map table: rows are the 13 motifs (1 Finite State Mach., 2 Combinational, 3 Graph Traversal, 4 Structured Grid, 5 Dense Matrix, 6 Sparse Matrix, 7 Spectral (FFT), 8 Dynamic Prog, 9 N-Body, 10 MapReduce, 11 Backtrack/B&B, 12 Graphical Models, 13 Unstructured Grid); columns are application areas (Embed, SPEC, DB, Games, ML, HPC, Health, Image, Speech, Music, Browser); cell color shows how heavily each area uses each motif.]
01/21/2008 CS267 - Lecture 1 29
Programming Pattern Language 1.0, Keutzer & Mattson
[Diagram: the pattern language spans a Productivity Layer on top of an Efficiency Layer, organized as a sequence of design questions:
- Applications
- Choose your high level structure – what is the structure of my application? Guided expansion (e.g. pipe-and-filter, agent and repository, event based/implicit invocation, process control, layered systems, model-view controller, iterator, map reduce, arbitrary static task graph)
- Identify the key computational patterns – what are my key computations? Guided instantiation (e.g. graph algorithms, dynamic programming, dense and sparse linear algebra, unstructured and structured grids, graphical models, finite state machines, backtrack branch and bound, N-body methods, circuits, spectral methods)
- Choose your high level architecture – Guided decomposition (task decomposition ↔ data decomposition; group tasks, order groups, data sharing, data access)
- Refine the structure – what concurrent approach do I use? Guided re-organization (e.g. task parallelism, data parallelism, divide and conquer, geometric decomposition, pipeline, discrete event, event based)
- Utilize supporting structures – how do I implement my concurrency? Guided mapping (e.g. fork/join, CSP, master/worker, loop parallelism, BSP, distributed array, shared data, shared queue, shared hash table)
- Implementation methods – what are the building blocks of parallel programming? Guided implementation (e.g. thread and process creation/destruction, message passing, collective communication, barriers, mutex, semaphores, speculation, transactional memory, digital circuits)]
01/21/2008 CS267 - Lecture 1
Algorithms for N x N Linear System Ax=b
Algorithm         Serial         PRAM              Memory        #Procs
• Dense LU        N^3            N                 N^2           N^2
• Band LU         N^2            N                 N^(3/2)       N
• Jacobi          N^2            N                 N             N
• Explicit Inv.   N^2            log N             N^2           N^2
• Conj.Gradients  N^(3/2)        N^(1/2) * log N   N             N
• Red/Black SOR   N^(3/2)        N^(1/2)           N             N
• Sparse LU       N^(3/2)        N^(1/2)           N*log N       N
• FFT             N*log N        log N             N             N
• Multigrid       N              log^2 N           N             N
• Lower bound     N              log N             N
PRAM is an idealized parallel model with zero cost communication
01/21/2008 CS267 - Lecture 1
Algorithms for 2D Poisson Equation (N = n^2 vars)
Algorithm         Serial         PRAM              Memory        #Procs
• Dense LU        N^3            N                 N^2           N^2
• Band LU         N^2            N                 N^(3/2)       N
• Jacobi          N^2            N                 N             N
• Explicit Inv.   N^2            log N             N^2           N^2
• Conj.Gradients  N^(3/2)        N^(1/2) * log N   N             N
• Red/Black SOR   N^(3/2)        N^(1/2)           N             N
• Sparse LU       N^(3/2)        N^(1/2)           N*log N       N
• FFT             N*log N        log N             N             N
• Multigrid       N              log^2 N           N             N
• Lower bound     N              log N             N
PRAM is an idealized parallel model with zero cost communication
Reference: J.D., Applied Numerical Linear Algebra, SIAM, 1997.
01/21/2008 CS267 - Lecture 1
Algorithms for 2D (3D) Poisson Equation (N = n^2 (n^3) vars)
Algorithm         Serial               PRAM                        Memory              #Procs
• Dense LU        N^3                  N                           N^2                 N^2
• Band LU         N^2 (N^(7/3))        N                           N^(3/2) (N^(5/3))   N (N^(4/3))
• Jacobi          N^2 (N^(5/3))        N (N^(2/3))                 N                   N
• Explicit Inv.   N^2                  log N                       N^2                 N^2
• Conj.Gradients  N^(3/2) (N^(4/3))    N^(1/2) (N^(1/3)) * log N   N                   N
• Red/Black SOR   N^(3/2) (N^(4/3))    N^(1/2) (N^(1/3))           N                   N
• Sparse LU       N^(3/2) (N^2)        N^(1/2)                     N*log N (N^(4/3))   N
• FFT             N*log N              log N                       N                   N
• Multigrid       N                    log^2 N                     N                   N
• Lower bound     N                    log N                       N
PRAM is an idealized parallel model with zero cost communication
Reference: J.D. , Applied Numerical Linear Algebra, SIAM, 1997.
01/21/2008 CS267 - Lecture 1
Algorithms and Motifs
Algorithm          Motifs
• Dense LU Dense linear algebra
• Band LU Dense linear algebra
• Jacobi (Un)structured meshes, Sparse Linear Algebra
• Explicit Inv. Dense linear algebra
• Conj.Gradients (Un)structured meshes, Sparse Linear Algebra
• Red/Black SOR (Un)structured meshes, Sparse Linear Algebra
• Sparse LU Sparse Linear Algebra
• FFT Spectral
• Multigrid (Un)structured meshes, Sparse Linear Algebra
01/21/2008 CS267 - Lecture 1 34
Outline (of all four lectures)
• Why all computers must be parallel processors
• Arithmetic is cheap, what costs is moving data
• Recurring computational patterns
• Dense Linear Algebra
• Sparse Linear Algebra
• How do I know I get the right answer?
01/21/2008 CS267 - Lecture 1
For more information
• CS267
- Annual one-semester course on parallel computing at UC Berkeley
- All slides and video archived from Spring 2009 offering
- www.cs.berkeley.edu/~demmel/cs267_Spr09
- Google "parallel computing course"
- www.cs.berkeley.edu/~demmel/cs267
- 1996 version, but extensive on-line algorithmic notes
• Parallelism "boot camp"
- Second annual 3-day course at UC Berkeley
- parlab.eecs.berkeley.edu/2009bootcamp
• ParLab
- Parallel Computing Research Lab at UC Berkeley
- parlab.eecs.berkeley.edu
01/21/2008 CS267 - Lecture 1
EXTRA SLIDES FROM CS267 LECTURE 1
36
01/21/2008 CS267 - Lecture 1 37
Computational Science- Recent News
Nature, March 23, 2006
“An important development in sciences is occurring at the intersection of computer science and the sciences that has the potential to have a profound impact on science. It is a leap from the application of computing … to the integration of computer science concepts, tools, and theorems into the very fabric of science.” -Science 2020 Report, March 2006
01/21/2008 CS267 - Lecture 1 38
Drivers for Change
• Continued exponential increase in computational power → simulation is becoming the third pillar of science, complementing theory and experiment
• Continued exponential increase in experimental data → techniques and technology in data analysis, visualization, analytics, networking, and collaboration tools are becoming essential in all data-rich scientific applications
01/21/2008 CS267 - Lecture 1 39
Simulation: The Third Pillar of Science
• Traditional scientific and engineering method:
(1) Do theory or paper design
(2) Perform experiments or build system
• Limitations:
- Too difficult—build large wind tunnels
- Too expensive—build a throw-away passenger jet
- Too slow—wait for climate or galactic evolution
- Too dangerous—weapons, drug design, climate experimentation
• Computational science and engineering paradigm:
(3) Use high performance computer systems to simulate and analyze the phenomenon
- Based on known physical laws and efficient numerical methods
- Analyze simulation results with computational tools and methods beyond what is used traditionally for experimental data analysis
[Diagram: Simulation as the third pillar alongside Theory and Experiment]
01/21/2008 CS267 - Lecture 1 40
Computational Science and Engineering (CSE)
• CSE is a widely accepted label for an evolving field concerned with the science of and the engineering of systems and methodologies to solve computational problems arising throughout science and engineering
• CSE is characterized by
- Multi-disciplinary
- Multi-institutional
- Requiring high-end resources
- Large teams
- Focus on community software
• CSE is not “just programming” (and not CS)
• Fast computers necessary but not sufficient
• New graduate program in CSE at UC Berkeley (more later)
Reference: Petzold, L., et al., Graduate Education in CSE, SIAM Rev., 43(2001), 163-177
01/21/2008 CS267 - Lecture 1 41
SciDAC - First Federal Program to Implement CSE
[Images: Biology, Nanoscience, Combustion, Astrophysics, Global Climate]
• SciDAC (Scientific Discovery through Advanced Computing) program created in 2001
– About $50M annual funding
– Berkeley (LBNL+UCB) largest recipient of SciDAC funding
01/21/2008 CS267 - Lecture 1 42
Some Particularly Challenging Computations
• Science
- Global climate modeling
- Biology: genomics; protein folding; drug design
- Astrophysical modeling
- Computational chemistry
- Computational material sciences and nanosciences
• Engineering
- Semiconductor design
- Earthquake and structural modeling
- Computational fluid dynamics (airplane design)
- Combustion (engine design)
- Crash simulation
• Business
- Financial and economic modeling
- Transaction processing, web services and search engines
• Defense
- Nuclear weapons -- test by simulations
- Cryptography
01/21/2008 CS267 - Lecture 1 43
Economic Impact of HPC
• Airlines:
- System-wide logistics optimization systems on parallel systems.
- Savings: approx. $100 million per airline per year.
• Automotive design:
- Major automotive companies use large systems (500+ CPUs) for:
- CAD-CAM, crash testing, structural integrity and aerodynamics.
- One company has a 500+ CPU parallel system.
- Savings: approx. $1 billion per company per year.
• Semiconductor industry:
- Semiconductor firms use large systems (500+ CPUs) for
- device electronics simulation and logic validation
- Savings: approx. $1 billion per company per year.
• Securities industry (note: old data …)
- Savings: approx. $15 billion per year for U.S. home mortgages.
01/21/2008 CS267 - Lecture 1 44
$5B World Market in Technical Computing
[Stacked-area chart, 1998–2003, share of the technical computing market by application segment: Other, Technical Management and Support, Simulation, Scientific Research and R&D, Mechanical Design/Engineering Analysis, Mechanical Design and Drafting, Imaging, Geoscience and Geo-engineering, Electrical Design/Engineering Analysis, Economics/Financial, Digital Content Creation and Distribution, Classified Defense, Chemical Engineering, Biosciences]
Source: IDC 2004, from NRC Future of Supercomputing Report
01/21/2008 CS267 - Lecture 1 45
What Supercomputers Do
Introducing Computational Science and Engineering
Two Examples
- simulation replacing experiment that is too dangerous
- analyzing massive amounts of data with new tools
01/21/2008 CS267 - Lecture 1 46
Global Climate Modeling Problem
• Problem is to compute:
"weather" = f(latitude, longitude, elevation, time)
          = (temperature, pressure, humidity, wind velocity)
• Approach:
- Discretize the domain, e.g., a measurement point every 10 km
- Devise an algorithm to predict weather at time t+Δt given time t
• Uses:
- Predict major events, e.g., El Nino
- Use in setting air emissions standards
- Evaluate global warming scenarios
Source: http://www.epm.ornl.gov/chammp/chammp.html
01/21/2008 CS267 - Lecture 1 47
Global Climate Modeling Computation
• One piece is modeling the fluid flow in the atmosphere
- Solve Navier-Stokes equations
- Roughly 100 Flops per grid point with 1 minute timestep
• Computational requirements (see the worked arithmetic at the end of this slide):
- To match real time, need 5 x 10^11 flops in 60 seconds = 8 Gflop/s
- Weather prediction (7 days in 24 hours) → 56 Gflop/s
- Climate prediction (50 years in 30 days) → 4.8 Tflop/s
- To use in policy negotiations (50 years in 12 hours) → 288 Tflop/s
• To double the grid resolution, computation is 8x to 16x
• State of the art models require integration of atmosphere, clouds, ocean, sea-ice, land models, plus possibly carbon cycle, geochemistry and more
• Current models are coarser than this
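A hedged back-of-the-envelope check of the rates quoted above, assuming ~5 x 10^11 flops per simulated minute (the slide's figure); the results come out close to the slide's numbers, which used slightly different rounding.

```c
#include <stdio.h>

/* Sustained flop rates required for the scenarios on the slide,
 * assuming 5e11 flops are needed per simulated minute. */
int main(void) {
    double flops_per_sim_minute = 5e11;
    double realtime = flops_per_sim_minute / 60.0;                        /* 1 sim min per wall min */
    double weather  = realtime * 7.0;                                     /* 7 days in 24 hours */
    double climate  = realtime * (50.0 * 365.0) / 30.0;                   /* 50 years in 30 days */
    double policy   = realtime * (50.0 * 365.0 * 24.0) / 12.0;            /* 50 years in 12 hours */
    printf("real time:              %5.1f Gflop/s\n", realtime / 1e9);
    printf("weather (7 d in 24 h):  %5.0f Gflop/s\n", weather  / 1e9);
    printf("climate (50 y in 30 d): %5.1f Tflop/s\n", climate  / 1e12);
    printf("policy  (50 y in 12 h): %5.0f Tflop/s\n", policy   / 1e12);
    return 0;
}
```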
01/21/2008 CS267 - Lecture 1 48
High Resolution Climate Modeling on NERSC-3 – P. Duffy, et al., LLNL
U.S.A. Hurricane
Source: M.Wehner, LBNL
01/21/2008 CS267 - Lecture 1 50
NERSC User George Smoot wins 2006 Nobel Prize in Physics
Smoot and Mather 1992
COBE Experiment showed anisotropy of CMB
Cosmic Microwave Background Radiation (CMB): an image of the universe at 400,000 years
01/21/2008 CS267 - Lecture 1 51
The Current CMB Map
• Unique imprint of primordial physics through the tiny anisotropies in temperature and polarization.
• Extracting these Kelvin fluctuations from inherently noisy data is a serious computational challenge.
source J. Borrill, LBNL
01/21/2008 CS267 - Lecture 1 52
Evolution Of CMB Data Sets: Cost > O(Np^3)
Experiment          Nt       Np      Nb      Limiting Data   Notes
COBE (1989)         2x10^9   6x10^3  3x10^1  Time            Satellite, Workstation
BOOMERanG (1998)    3x10^8   5x10^5  3x10^1  Pixel           Balloon, 1st HPC/NERSC
WMAP (2001) (4yr)   7x10^10  4x10^7  1x10^3  ?               Satellite, Analysis-bound
Planck (2007)       5x10^11  6x10^8  6x10^3  Time/Pixel      Satellite, Major HPC/DA effort
POLARBEAR (2007)    8x10^12  6x10^6  1x10^3  Time            Ground, NG-multiplexing
CMBPol (~2020)      10^14    10^9    10^4    Time/Pixel      Satellite, Early planning/design
[Annotation: data compression]
01/21/2008 CS267 - Lecture 1 53
Which commercial applications require parallelism?
[Table: rows are the 13 motifs (1 Finite State Mach., 2 Combinational, 3 Graph Traversal, 4 Structured Grid, 5 Dense Matrix, 6 Sparse Matrix, 7 Spectral (FFT), 8 Dynamic Prog, 9 N-Body, 10 MapReduce, 11 Backtrack/B&B, 12 Graphical Models, 13 Unstructured Grid); columns are application areas (Embed, SPEC, DB, Games, ML, HPC)]
• Claim: parallel architecture, language, compiler … must do at least these well to run future parallel apps well
• Note: MapReduce is embarrassingly parallel; FSM embarrassingly sequential?
Analyzed in detail in “Berkeley View” reportwww.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html
01/21/2008 CS267 - Lecture 1
Which commercial applications require parallelism?
[Table: rows are the 13 motifs (1 Finite State Mach. … 13 Unstructured Grid); columns are application areas (Embed, SPEC, DB, Games, ML, HPC) plus the ParLab Applications (Health, Image, Speech, Music, Browser)]
ParLab Applications
01/21/2008 CS267 - Lecture 155
Compelling Laptop/Handheld Apps (David Wessel)
• Musicians have an insatiable appetite for computation + real-time demands
- More channels, instruments, more processing, more interaction!
- Latency must be low (5 ms)
- Must be reliable (No clicks)
1. Music Enhancer
- Enhanced sound delivery systems for home sound systems using large microphone and speaker arrays
- Laptop/Handheld recreate 3D sound over ear buds
2. Hearing Augmenter
- Laptop/Handheld as accelerator for hearing aid
3. Novel Instrument User Interface
- New composition and performance systems beyond keyboards
- Input device for Laptop/Handheld
Berkeley Center for New Music and Audio Technology (CNMAT) created a compact loudspeaker array: 10-inch-diameter icosahedron incorporating 120 tweeters.
01/21/2008 CS267 - Lecture 156
Coronary Artery Disease (Tony Keaveny)
Modeling to help patient compliance?
• 450k deaths/year, 16M symptomatic, 72M high BP
Massively parallel, real-time variations
• CFD + FE: solid (non-linear), fluid (Newtonian), pulsatile
• Blood pressure, activity, habitus, cholesterol
[Images: artery before and after]
01/21/2008 CS267 - Lecture 157
Content-Based Image Retrieval (Kurt Keutzer)
[Pipeline diagram: Query by example → Image Database (1000's of images) → Similarity Metric → Candidate Results → Relevance Feedback → Final Result]
• Built around key characteristics of personal databases
- Very large number of pictures (>5K)
- Non-labeled images
- Many pictures of few people
- Complex pictures including people, events, places, and objects
01/21/2008 CS267 - Lecture 158
Compelling Laptop/Handheld Apps (Nelson Morgan)
• Meeting Diarist
- Laptops/Handhelds at meeting coordinate to create speaker-identified, partially transcribed text diary of meeting
• Teleconference speaker identifier, speech helper
- L/Hs used for teleconference, identifies who is speaking, "closed caption" hint of what is being said
01/21/2008 CS267 - Lecture 1
Motif/Dwarf: Common Computational Methods (Red Hot → Blue Cool)
[Heat-map table repeated from earlier: the 13 motifs vs. application areas (Embed, SPEC, DB, Games, ML, HPC, Health, Image, Speech, Music, Browser)]
What do commercial and CSE applications have in common?
01/21/2008 CS267 - Lecture 1 60
Outline
• Why powerful computers must be parallel processors
• Large CSE problems require powerful computers
• Why writing (fast) parallel programs is hard
• Principles of parallel computing performance
• Structure of the course
Commercial problems too
Including your laptops and handhelds
all
But things are improving
01/21/2008 CS267 - Lecture 1 61
Principles of Parallel Computing
• Finding enough parallelism (Amdahl’s Law)
• Granularity
• Locality
• Load balance
• Coordination and synchronization
• Performance modeling
All of these things make parallel programming even harder than sequential programming.
01/21/2008 CS267 - Lecture 1 62
"Automatic" Parallelism in Modern Machines
• Bit level parallelism
- within floating point operations, etc.
• Instruction level parallelism (ILP)
- multiple instructions execute per clock cycle
• Memory system parallelism
- overlap of memory operations with computation
• OS parallelism
- multiple jobs run in parallel on commodity SMPs
Limits to all of these -- for very high performance, need user to identify, schedule and coordinate parallel tasks
01/21/2008 CS267 - Lecture 1 63
Finding Enough Parallelism
• Suppose only part of an application seems parallel
• Amdahl's law
- let s be the fraction of work done sequentially, so (1-s) is the fraction parallelizable
- P = number of processors
Speedup(P) = Time(1)/Time(P)
<= 1/(s + (1-s)/P)
<= 1/s
• Even if the parallel part speeds up perfectly performance is limited by the sequential part
• Top500 list: currently fastest machine has P~130K
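A minimal sketch of the formula above, with an assumed serial fraction of 1% chosen purely for illustration: even with the largest processor counts the speedup is capped at 1/s.

```c
#include <stdio.h>

/* Amdahl's law as stated above: speedup(P) <= 1 / (s + (1-s)/P) <= 1/s. */
static double amdahl_speedup(double s, double P) {
    return 1.0 / (s + (1.0 - s) / P);
}

int main(void) {
    double s = 0.01;                            /* example: 1% of the work is serial */
    double procs[] = {10, 100, 1000, 130000};   /* last value ~ fastest Top500 machine */
    for (int i = 0; i < 4; i++)
        printf("P = %8.0f: speedup <= %7.1f (limit 1/s = %.0f)\n",
               procs[i], amdahl_speedup(s, procs[i]), 1.0 / s);
    return 0;
}
```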
01/21/2008 CS267 - Lecture 1 64
Overhead of Parallelism
• Given enough parallel work, this is the biggest barrier to getting desired speedup
• Parallelism overheads include:
- cost of starting a thread or process
- cost of communicating shared data
- cost of synchronizing
- extra (redundant) computation
• Each of these can be in the range of milliseconds (=millions of flops) on some systems
• Tradeoff: Algorithm needs sufficiently large units of work to run fast in parallel (i.e. large granularity), but not so large that there is not enough parallel work
01/21/2008 CS267 - Lecture 1 65
Locality and Parallelism
• Large memories are slow, fast memories are small
• Storage hierarchies are large and fast on average
• Parallel processors, collectively, have large, fast cache
- the slow accesses to "remote" data we call "communication"
• Algorithm should do most work on local data
[Diagram: conventional storage hierarchy (Proc + Cache, L2 Cache, L3 Cache, Memory) replicated per processor, with potential interconnects between the memories]
01/21/2008 CS267 - Lecture 1 66
Processor-DRAM Gap (latency)
[Chart repeated from earlier: CPU performance ("Moore's Law") improves ~60%/yr, DRAM latency ~7%/yr, 1980–2000; the processor-memory performance gap grows ~50%/yr.]
Goal: find algorithms that minimize communication, not necessarily arithmetic
01/21/2008 CS267 - Lecture 1 67
Load Imbalance
• Load imbalance is the time that some processors in the system are idle due to
- insufficient parallelism (during that phase)
- unequal size tasks
• Examples of the latter
- adapting to "interesting parts of a domain"
- tree-structured computations
- fundamentally unstructured problems
• Algorithm needs to balance load
- Sometimes can determine work load, divide it up evenly, before starting
- “Static Load Balancing”
- Sometimes work load changes dynamically, need to rebalance dynamically
- “Dynamic Load Balancing”
01/21/2008 CS267 - Lecture 168
Parallel Software Eventually – ParLab view
• 2 types of programmers → 2 layers
• Efficiency Layer (10% of today's programmers)
- Expert programmers build libraries implementing motifs, "Frameworks", OS, ….
- Highest fraction of peak performance possible
• Productivity Layer (90% of today's programmers)
- Domain experts / naïve programmers productively build parallel applications by composing frameworks & libraries
- Hide as many details of machine, parallelism as possible
- Willing to sacrifice some performance for productive programming
• Expect students may want to work at either level
- In the meantime, we all need to understand enough of the efficiency layer to use parallelism effectively
01/21/2008 CS267 - Lecture 1 69
Outline
• Why powerful computers must be parallel processors
• Large CSE problems require powerful computers
• Why writing (fast) parallel programs is hard
• Principles of parallel computing performance
• Structure of the course
Commercial problems too
Including your laptops and handhelds
all
But things are improving
01/21/2008 CS267 - Lecture 1 70
Improving Real Performance
[Chart: Teraflops (log scale, 0.1 to 1,000), 1996–2004, showing the widening Performance Gap between Peak Performance and Real Performance]
• Peak Performance grows exponentially, a la Moore's Law
- In 1990's, peak performance increased 100x; in 2000's, it will increase 1000x
• But efficiency (the performance relative to the hardware peak) has declined
- was 40-50% on the vector supercomputers of 1990s
- now as little as 5-10% on parallel supercomputers of today
• Close the gap through ...
- Mathematical methods and algorithms that achieve high performance on a single processor and scale to thousands of processors
- More efficient programming models and tools for massively parallel supercomputers
01/21/2008 CS267 - Lecture 1 71
Performance Levels
• Peak advertised performance (PAP)
- You can't possibly compute faster than this speed
• LINPACK
- The "hello world" program for parallel computing
- Solve Ax=b using Gaussian Elimination, highly tuned
• Gordon Bell Prize winning applications performance
- The right application/algorithm/platform combination plus years of work
• Average sustained applications performance
- What one can reasonably expect for standard applications
When reporting performance results, these levels are often confused, even in reviewed publications
01/21/2008 CS267 - Lecture 1 72
Performance Levels (for example on NERSC-5)
• Peak advertised performance (PAP): 100 Tflop/s
• LINPACK (TPP): 84 Tflop/s
• Best climate application: 14 Tflop/s
- WRF code benchmarked in December 2007
• Average sustained applications performance: ? Tflop/s
- Probably less than 10% of peak!
• We will study performance
- Hardware and software tools to measure it
- Identifying bottlenecks
- Practical performance tuning (Matlab demo)
01/21/2008 CS267 - Lecture 1 73
Outline
• Why powerful computers must be parallel processors
• Large CSE problems require powerful computers
• Why writing (fast) parallel programs is hard
• Principles of parallel computing performance
• Structure of the course
Commercial problems too
Including your laptops and handhelds
all
But things are improving
01/21/2008 CS267 - Lecture 1 74
Course Mechanics
• Web page: www.cs.berkeley.edu/~demmel/cs267_Spr09
• Normally a mix of CS, EE, and other engineering and science students
• This class seems to be about:
- 43 grads + 5 undergrads from UCB
- Half CS, rest Biology, BioEng, BioPhys, Chemistry, Civil, EE, Materials, Mechanical, Physics
- Plus UC Davis, UC Merced, UC Santa Cruz
• Please fill out survey on web page (posted later today)
• Grading:
- Three programming assignments
- Final projects
- Could be parallelizing an application, building or evaluating a tool, etc.
- We encourage interdisciplinary teams, since this is the way parallel scientific software is generally built
01/21/2008 CS267 - Lecture 1 75
Rough List of Topics
• Basics of computer architecture, memory hierarchies, performance
• Parallel Programming Models and Machines
- Shared Memory and Multithreading
- Distributed Memory and Message Passing
- Data parallelism, GPUs
• Parallel languages and libraries
- Shared memory threads and OpenMP
- MPI
- Other Languages, Frameworks (UPC, CUDA, Cilk, Titanium, "Pattern Language")
• "Seven Dwarfs" of Scientific Computing
- Dense & Sparse Linear Algebra
- Structured and Unstructured Grids
- Spectral methods (FFTs) and Particle Methods
• 6 additional motifs
- Graph algorithms, Graphical models, Dynamic Programming, Branch & Bound, FSM, Logic
• General techniques
- Load balancing, performance tools
• Applications: Some scientific, some commercial (guest lecturers)
01/21/2008 CS267 - Lecture 1 76
Reading Materials
• What does Google recommend?
• Pointers on class web page
• Must read:
- "The Landscape of Parallel Computing Research: A View from Berkeley"
- http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf
• Some on-line texts:
- Demmel's notes from CS267 Spring 1999, which are similar to 2000 and 2001. However, they contain links to html notes from 1996.
- http://www.cs.berkeley.edu/~demmel/cs267_Spr99/
- My notes from Fall 2002
- http://www.nersc.gov/~simon/cs267/
- Ian Foster's book, "Designing and Building Parallel Programs".
- http://www-unix.mcs.anl.gov/dbpp/
• Potentially useful texts:
- "Sourcebook for Parallel Computing", by Dongarra, Foster, Fox, ..
- A general overview of parallel computing methods
- "Performance Optimization of Numerically Intensive Codes" by Stefan Goedecker and Adolfy Hoisie
- This is a practical guide to optimization, mostly for those of you who have never done any optimization
01/21/2008 CS267 - Lecture 1 77
Reading Materials (cont.)
• Recent books with papers about the current state of the art
- David Bader (ed.), “Petascale Computing, Algorithms and Applications”, Chapman & Hall/CRC, 2007
- Michael Heroux, Padma Ragahvan, Horst Simon (ed.),”Parallel Processing for Scientific Computing”, SIAM, 2006.
• More pointers will be on the web page
01/21/2008 CS267 - Lecture 1
Instructors
• Jim Demmel, EECS & Mathematics
• Horst Simon, LBNL & EECS
• GSI: Vasily Volkov, CS
• Contact information on web page- Office hours TBD
78
01/21/2008 CS267 - Lecture 1 79
What you should get out of the course
In depth understanding of:
• When is parallel computing useful?
• Understanding of parallel computing hardware options.
• Overview of programming models (software) and tools.
• Some important parallel applications and the algorithms
• Performance analysis and tuning
• Exposure to various open research questions
80
Extra slides
01/21/2008 CS267 - Lecture 1 81
Transaction Processing
• Parallelism is natural in relational operators: select, join, etc.
• Many difficult issues: data partitioning, locking, threading.
[Chart (Mar. 15, 1996): TPC-C throughput (tpmC, 0–25,000) vs. number of processors (0–120) for Tandem Himalaya, IBM PowerPC, DEC Alpha, SGI PowerChallenge, HP PA, and other systems]
01/21/2008 CS267 - Lecture 1 82
SIA Projections for Microprocessors
Compute power ~ 1/(Feature Size)^3
[Chart: feature size (microns) and millions of transistors per chip vs. year of introduction, 1995–2010, log scale from 0.01 to 1000; feature size shrinks while transistors per chip x 10^6 grows]
based on F.S. Preston, 1997
01/21/2008 CS267 - Lecture 1 83
Much of the Performance is from Parallelism
[Diagram: successive eras of parallelism — Bit-Level Parallelism, Instruction-Level Parallelism, Thread-Level Parallelism?]
01/21/2008 CS267 - Lecture 1 84
Performance on Linpack Benchmark
[Chart: Linpack Rmax per system (Gflops, log scale 0.1–100,000), June 1993 – June 2004, showing max, mean, and min Rmax; milestones annotated: ASCI Red, ASCI White, Earth Simulator, and Nov 2004: IBM Blue Gene/L, 70.7 Tflops Rmax. Source: www.top500.org]
01/21/2008 CS267 - Lecture 1 85
Performance Projection
[Chart: Top500 projection, 1993–2015, extrapolating the N=1, N=500, and SUM lines on a log scale from 100 Mflop/s to 1 Eflop/s, with annotated lags of 6-8 years and 8-10 years between the lines]
Slide by Erich Strohmaier, LBNL
01/21/2008 CS267 - Lecture 1 86
Performance Projection
[Chart: Top500 projection extended to 2025, with the N=1, N=500, and SUM lines on a log scale from 100 Mflop/s to 1 Eflop/s]
Slide by Erich Strohmaier, LBNL
01/21/2008 CS267 - Lecture 1 87
Concurrency Levels
[Chart: number of processors per system (1 to 1,000,000, log scale) in the Top500, June 1993 – June 2015]
Slide by Erich Strohmaier, LBNL
01/21/2008 CS267 - Lecture 1 88
Concurrency Levels - There is a Massively Parallel System Also in Your Future
[Chart: number of processors per system (1 to 100,000,000, log scale) in the Top500, extrapolated from June 1993 to June 2025]
Slide by Erich Strohmaier, LBNL
Supercomputing Today
• Microprocessors have made desktop computing in 2007 what supercomputing was in 1995.
• Massive Parallelism has changed the "high-end" completely.
• Most of today's standard supercomputing architectures are "hybrids", clusters built out of commodity microprocessors and custom interconnects.
• The microprocessor revolution will continue with little attenuation for at least another 10 years
• The future will be massively parallel, based on multicore
01/21/2008 CS267 - Lecture 1 90
Outline
• Why powerful computers must be parallel computers
• Large important problems require powerful computers
• Why writing (fast) parallel programs is hard
• Principles of parallel computing performance
• Structure of the course
Even computer games
Including your laptop and handhelds
all
But things are improving
01/21/2008 CS267 - Lecture 1 91
Is Multicore the Correct Response?
• Kurt Keutzer: “This shift toward increasing parallelism is not a triumphant stride forward based on breakthroughs in novel software and architectures for parallelism; instead, this plunge into parallelism is actually a retreat from even greater challenges that thwart efficient silicon implementation of traditional uniprocessor architectures.”
• David Patterson: “Industry has already thrown the hail-mary pass. . . But nobody is running yet.”
01/21/2008 CS267 - Lecture 1 92
Community Reaction
• Desktop/Consumer - Move from almost no parallelism to parallelism
- But industry is already betting on parallelism (multicore) for its future
• HPC
- Modest growth in parallelism is giving way to exponential growth curve
- Have Parallel programming tools and algorithms, but driven by experts (unlikely to be adopted by broader software development community)
• The first hardware is here, but have no consensus on hardware details or software model necessary to program it
- Reaction: Widespread Panic!
01/21/2008 CS267 - Lecture 1 93
The View from Berkeley: Seven Questions for Parallelism
• Applications:
1. What are the apps?
2. What are kernels of apps?
• Hardware:
3. What are the HW building blocks?
4. How to connect them?
• Programming Model / Systems Software:
5. How to describe apps and kernels?
6. How to program the HW?
• Evaluation:
7. How to measure success?
(Inspired by a view of the Golden Gate Bridge from Berkeley)
http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf
01/21/2008 CS267 - Lecture 1 94
Applications
• Applications:
1. What are the apps?
2. What are kernels of apps?
• Hardware:
3. What are the HW building blocks?
4. How to connect them?
• Programming Model / Systems Software:
5. How to describe apps and kernels?
6. How to program the HW?
• Evaluation:
7. How to measure success?
(Inspired by a view of the Golden Gate Bridge from Berkeley)
CS267 focus is here
01/21/2008 CS267 - Lecture 1 95
High-end simulation in the physical sciences = 7 numerical methods:
1. Structured Grids (including locally structured grids, e.g. Adaptive Mesh Refinement)
2. Unstructured Grids
3. Fast Fourier Transform
4. Dense Linear Algebra
5. Sparse Linear Algebra
6. Particles
7. Monte Carlo
Much Ado about Dwarves
• Benchmarks enable assessment of hardware performance improvements
• The problem with benchmarks is that they enshrine an implementation
• At this point in time, we need flexibility to innovate both implementation and the hardware they run on!
• Dwarves provide that necessary abstraction
Slide from “Defining Software Requirements for Scientific Computing”, Phillip Colella, 2004
Motifs
Map Reduce
01/21/2008 CS267 - Lecture 1 96
Do dwarfs work well outside HPC?
• Examine effectiveness of the 7 dwarfs elsewhere
1. Embedded Computing (EEMBC benchmark)
2. Desktop/Server Computing (SPEC2006)
3. Data Base / Text Mining Software
- Advice from Jim Gray of Microsoft and Joe Hellerstein of UC Berkeley
4. Games/Graphics/Vision
5. Machine Learning
- Advice from Mike Jordan and Dan Klein of UC Berkeley
• Result: Added 7 more dwarfs, revised 2 original dwarfs, renumbered list
01/21/2008 CS267 - Lecture 1 97
Destination is Manycore
• We need revolution, not evolution
• Software or architecture alone can’t fix parallel programming problem, need innovations in both
• “Multicore” 2X cores per generation: 2, 4, 8, …
• “Manycore” 100s is highest performance per unit area, and per Watt, then 2X per generation: 64, 128, 256, 512, 1024 …
• Multicore architectures & programming models good for 2 to 32 cores won't evolve to Manycore systems of 1000's of processors
• Desperately need HW/SW models that work for Manycore, or we will run out of steam (as ILP ran out of steam at 4 instructions)
01/21/2008 CS267 - Lecture 1 98
Units of Measure in HPC
• High Performance Computing (HPC) units are:
- Flop: floating point operation
- Flops/s: floating point operations per second
- Bytes: size of data (a double precision floating point number is 8)
• Typical sizes are millions, billions, trillions…
Mega   Mflop/s = 10^6  flop/sec   Mbyte = 2^20 = 1048576 ~ 10^6  bytes
Giga   Gflop/s = 10^9  flop/sec   Gbyte = 2^30 ~ 10^9  bytes
Tera   Tflop/s = 10^12 flop/sec   Tbyte = 2^40 ~ 10^12 bytes
Peta   Pflop/s = 10^15 flop/sec   Pbyte = 2^50 ~ 10^15 bytes
Exa    Eflop/s = 10^18 flop/sec   Ebyte = 2^60 ~ 10^18 bytes
Zetta  Zflop/s = 10^21 flop/sec   Zbyte = 2^70 ~ 10^21 bytes
Yotta  Yflop/s = 10^24 flop/sec   Ybyte = 2^80 ~ 10^24 bytes
• See www.top500.org for current list of fastest machines
page 99
30th List: The TOP10
Rank (prev)  Manufacturer  Computer                          Rmax [TF/s]  Installation Site                               Country  Year  #Cores
1            IBM           BlueGene/L, eServer Blue Gene     478.2        DOE/NNSA/LLNL                                   USA      2007  212,992
2            IBM           JUGENE, BlueGene/P Solution       167.3        Forschungszentrum Juelich                       Germany  2007  65,536
3            SGI           SGI Altix ICE 8200                126.9        New Mexico Computing Applications Center        USA      2007  14,336
4            HP            Cluster Platform 3000 BL460c      117.9        Computational Research Laboratories, TATA SONS  India    2007  14,240
5            HP            Cluster Platform 3000 BL460c      102.8        Swedish Government Agency                       Sweden   2007  13,728
6 (3)        Sandia/Cray   Red Storm, Cray XT3               102.2        DOE/NNSA/Sandia                                 USA      2006  26,569
7 (2)        Cray          Jaguar, Cray XT3/XT4              101.7        DOE/ORNL                                        USA      2007  23,016
8 (4)        IBM           BGW, eServer Blue Gene            91.29        IBM Thomas Watson                               USA      2005  40,960
9            Cray          Franklin, Cray XT4                85.37        NERSC/LBNL                                      USA      2007  19,320
10 (5)       IBM           New York Blue, eServer Blue Gene  82.16        Stony Brook/BNL                                 USA      2007  36,864
01/21/2008 CS267 - Lecture 1 100
New 100 Tflops Cray XT-4 at NERSC
NERSC is enabling new science
Cray XT-4 “Franklin”
19,344 compute cores
102 Tflop/sec peak
39 TB memory
350 TB usable disk space
50 PB storage archive
page 101
Performance Development
[Chart: Top500 performance, 1993–2007, log scale from 100 Mflop/s to 1 Pflop/s and beyond; N=500 grows from 0.4 GF/s to 4.005 TF/s, N=1 from 59.7 GF/s to 280.6 TF/s, and SUM from 1.167 TF/s to 4.92 PF/s; #1 systems annotated: Fujitsu 'NWT' NAL, Intel ASCI Red (Sandia), IBM ASCI White (LLNL), NEC Earth Simulator, IBM BlueGene/L]
01/21/2008 CS267 - Lecture 1 102
Signpost System in 2005
IBM BG/L @ LLNL
• 700 MHz
• 65,536 nodes
• 180 (360) Tflop/s peak
• 32 TB memory
• 135 Tflop/s LINPACK
• 250 m2 floor space
• 1.8 MW power
01/21/2008 CS267 - Lecture 1 103
Outline
• Why powerful computers must be parallel processors
• Large important problems require powerful computers
• Why writing (fast) parallel programs is hard
• Principles of parallel computing performance
• Structure of the course
Even computer games
Including your laptop
all
104
Why we need powerful computers
01/21/2008 CS267 - Lecture 1 105
New Science Question: Hurricane Statistics
[Table: simulated tropical-storm counts for 1979, 1980, 1981, 1982 vs. observations; values shown include >25, ~30 and an observed ~40 for the Northwest Pacific Basin, and ~6, ~12, ? for the Atlantic Basin]
Work in progress—results to be published
What is the effect of different climate scenarios on number and severity of tropical storms?
Source: M.Wehner, LBNL
01/21/2008 CS267 - Lecture 1 106
CMB Computing at NERSC
• CMB data analysis presents a significant and growing computational challenge, requiring
- well-controlled approximate algorithms
- efficient massively parallel implementations
- long-term access to the best HPC resources
• DOE/NERSC has become the leading HPC facility in the world for CMB data analysis
- O(1,000,000) CPU-hours/year
- O(10) Tb project disk space
- O(10) experiments & O(100) users (rolling)
source J. Borrill, LBNL
01/21/2008 CS267 - Lecture 1 107
Evolution Of CMB Satellite Maps
01/21/2008 CS267 - Lecture 1 108
Algorithms & Flop-Scaling
• Map-making
- Exact maximum likelihood: O(Np^3)
- PCG maximum likelihood: O(Ni Nt log Nt)
- Scan-specific, e.g. destriping: O(Nt log Nt)
- Naïve: O(Nt)
• Power Spectrum estimation
- Iterative maximum likelihood: O(Ni Nb Np^3)
- Monte Carlo pseudo-spectral:
- Time domain: O(Nr Ni Nt log Nt), O(Nr lmax^3)
- Pixel domain: O(Nr Nt)
- Simulations – exact simulation > approximate analysis!
[Annotation: the methods trade speed against accuracy]
01/21/2008 CS267 - Lecture 1 109
CMB is Characteristic for CSE Projects
• Petaflop/s and beyond computing requirements
• Algorithm and software requirements
• Use of new technology, e.g. NGF
• Service to a large international community
• Exciting science
01/21/2008 CS267 - Lecture 1110
Parallel Browser (Ras Bodik)
• Web 2.0: Browser plays role of traditional OS
- Resource sharing and allocation, Protection
• Goal: Desktop quality browsing on handhelds
- Enabled by 4G networks, better output devices
• Bottlenecks to parallelize
- Parsing, Rendering, Scripting
• "SkipJax"
- Parallel replacement for JavaScript/AJAX
- Based on Brown’s FlapJax
01/21/2008 CS267 - Lecture 1 111
More Exotic Solutions on the Horizon
• GPUs - Graphics Processing Units (e.g. NVidia)
- Parallel processor attached to main processor
- Originally special purpose, getting more general
• FPGAs – Field Programmable Gate Arrays
- Inefficient use of chip area
- More efficient than multicore now, maybe not later
- Wire routing heuristics still troublesome
• Dataflow and tiled processor architectures
- Have considerable experience with dataflow from 1980's
- Are we ready to return to functional programming languages?
• Cell
- Software controlled memory uses bandwidth efficiently
- Programming model not yet mature
page 112
Performance Development
[Chart: Top500 performance, 1993–2007, log scale from 100 Mflop/s toward 1 Pflop/s; N=500 grows from 0.4 GF/s to 5.9 TF/s, N=1 from 59.7 GF/s to 478.2 TF/s, and SUM from 1.167 TF/s to 6.97 PF/s]
See www.top500.org for latest data
113
Why powerful computers are parallel
circa 1991-2006
all (2007)
114
Tunnel Vision by Experts
“I think there is a world market for maybe five computers.”
Thomas Watson, chairman of IBM, 1943.
“There is no reason for any individual to have a computer in their home”
Ken Olson, president and founder of Digital Equipment Corporation, 1977.
“640K [of memory] ought to be enough for anybody.”Bill Gates, chairman of Microsoft,1981.
“On several recent occasions, I have been asked whether parallel computing will soon be relegated to the trash heap reserved for promising technologies that never quite make it.”
Ken Kennedy, CRPC Director, 1994
Slide source: Warfield et al.
115
Microprocessor Transistors per Chip
[Two charts: (left) transistors per chip, 1970–2005, log scale from 1,000 to 100,000,000, with i4004, i8080, i8086, i80286, i80386, R2000, R3000, Pentium, and R10000 marked; (right) clock rate (MHz), 1970–2000, log scale from 0.1 to 1000]
• Growth in transistors per chip
• Increase in clock rate
116
Impact of Device Shrinkage
• What happens when the feature size (transistor size) shrinks by a factor of x?
• Clock rate goes up by x because wires are shorter
- actually less than x, because of power consumption
• Transistors per unit area goes up by x^2
• Die size also tends to increase
- typically another factor of ~x
• Raw computing power of the chip goes up by ~ x^4 !
- typically x^3 is devoted to either on-chip parallelism (hidden parallelism such as ILP) or locality (caches)
• So most programs run x^3 times faster, without changing them
117
But there are limiting forces
• Moore's 2nd law (Rock's law): costs go up
[Image: demo of 0.06 micron CMOS. Source: Forbes Magazine]
• Yield - What percentage of the chips are usable?
- E.g., Cell processor (PS3) is sold with 7 out of 8 "on" to improve yield
Manufacturing costs and yield problems limit use of density
118
Power Density Limits Serial Performance
119
Parallelism in 2009?
• These arguments are no longer theoretical
• All major processor vendors are producing multicore chips
- Every machine will soon be a parallel machine
- To keep doubling performance, parallelism must double
• Which commercial applications can use this parallelism?
- Do they have to be rewritten from scratch?
• Will all programmers have to be parallel programmers?
- New software model needed
- Try to hide complexity from most programmers – eventually
- In the meantime, need to understand it
• Computer industry betting on this big change, but does not have all the answers
- Berkeley ParLab established to work on this
120
More Limits: How fast can a serial computer be?
• Consider the 1 Tflop/s sequential machine:
- Data must travel some distance, r, to get from memory to processor.
- To get 1 data element per cycle, this means 10^12 times per second at the speed of light, c = 3x10^8 m/s. Thus r < c/10^12 = 0.3 mm.
• Now put 1 Tbyte of storage in a 0.3 mm x 0.3 mm area:
- Each bit occupies about 1 square Angstrom, or the size of a small atom.
• No choice but parallelism
[Diagram: 1 Tflop/s, 1 Tbyte sequential machine with r = 0.3 mm]
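A small sketch of the arithmetic behind this argument, under its stated assumptions (one operand reaching the processor per cycle at 10^12 cycles per second, 1 TB of storage packed into the resulting square):

```c
#include <math.h>
#include <stdio.h>

/* Speed-of-light limit for a hypothetical 1 Tflop/s, 1 Tbyte serial machine. */
int main(void) {
    double c = 3e8;                         /* speed of light, m/s */
    double rate = 1e12;                     /* 1 Tflop/s -> 10^12 data deliveries per second */
    double r = c / rate;                    /* max distance data can travel per cycle */
    double bits = 8e12;                     /* 1 TB of storage in bits */
    double area_per_bit = (r * r) / bits;   /* pack all bits into an r x r square */
    printf("r = %.2g m (= %.1f mm)\n", r, r * 1e3);
    printf("area per bit = %.2g m^2, side = %.2g m (~1 Angstrom)\n",
           area_per_bit, sqrt(area_per_bit));
    return 0;
}
```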
01/21/2008 CS267 - Lecture 1 121
Outline
• Arithmetic is cheap, what costs is moving data
- Idealized and actual costs in modern processors
- Parallelism within single processors
- Memory hierarchies
- What this means for designing algorithms and software
01/21/2008 CS267 - Lecture 1 122
Idealized Uniprocessor Model
• Processor names bytes, words, etc. in its address space
- These represent integers, floats, pointers, arrays, etc.
• Operations include
- Read and write into very fast memory called registers
- Arithmetic and other logical operations on registers
• Order specified by program
- Read returns the most recently written data
- Compiler and architecture translate high level expressions into "obvious" lower level instructions
- Hardware executes instructions in order specified by compiler
• Idealized Cost
- Each operation has roughly the same cost (read, write, add, multiply, etc.)
• Example: A = B + C becomes
    Read address(B) to R1
    Read address(C) to R2
    R3 = R1 + R2
    Write R3 to Address(A)
01/21/2008 CS267 - Lecture 1 123
Uniprocessors in the Real World
• Real processors have
- registers and caches
- small amounts of fast memory (and more of slow memory)
- store values of recently used or nearby data
- different memory ops can have very different costs
- parallelism
- multiple "functional units" that can run in parallel
- different orders, instruction mixes have different costs
- pipelining
- a form of parallelism, like an assembly line in a factory
• Why is this your problem?
- In theory, compilers understand all of this and can optimize your program; in practice they don't.
- Even if they could optimize one algorithm, they won’t know about a different algorithm that might be a much better “match” to the processor
01/21/2008 CS267 - Lecture 1 124
Outline
• Arithmetic is cheap, what costs is moving data
- Idealized and actual costs in modern processors
- Parallelism within single processors
- Hidden from software (sort of)
- Pipelining
- SIMD units
- Memory hierarchies
- What this means for designing algorithms and software
01/21/2008 CS267 - Lecture 1 125
What is Pipelining?
Dave Patterson's laundry example: 4 people doing laundry; wash (30 min) + dry (40 min) + fold (20 min) = 90 min latency
• In this example:
- Sequential execution takes 4 * 90 min = 6 hours
- Pipelined execution takes 30 + 4*40 + 20 = 3.5 hours
• Bandwidth = loads/hour
• BW = 4/6 l/h w/o pipelining
• BW = 4/3.5 l/h w pipelining
• BW <= 1.5 l/h w pipelining, more total loads
• Pipelining helps bandwidth but not latency (90 min)
• Bandwidth limited by slowest pipeline stage
• Potential speedup = Number pipe stages
[Diagram: Gantt chart of laundry loads A–D from 6 PM to 9 PM, with stages of 30, 40, 40, 40, 40, 20 minutes]
01/21/2008 CS267 - Lecture 1 126
Example: 5 Steps of MIPS Datapath
(Figure 3.4, Page 134, CA:AQA 2e by Patterson and Hennessy)
[Pipeline diagram: Instruction Fetch → Instruction Decode / Register Fetch → Execute / Address Calc → Memory Access → Write Back, with pipeline registers IF/ID, ID/EX, EX/MEM, MEM/WB between the stages]
• Pipelining is also used within arithmetic units
– a fp multiply may have latency 10 cycles, but throughput of 1/cycle
01/21/2008 CS267 - Lecture 1 127
SIMD: Single Instruction, Multiple Data
• Scalar processing
- traditional mode
- one operation produces one result
[Diagram: X + Y → one result]
• SIMD processing
- with SSE / SSE2
- SSE = streaming SIMD extensions
- one operation produces multiple results
[Diagram: (x3,x2,x1,x0) + (y3,y2,y1,y0) → (x3+y3, x2+y2, x1+y1, x0+y0) in one operation]
Slide Source: Alex Klimovitski & Dean Macri, Intel Corporation
01/21/2008 CS267 - Lecture 1 128
SSE / SSE2 SIMD on Intel
• SSE2 data types: anything that fits into 16 bytes, e.g.,
- 16x bytes
- 4x floats
- 2x doubles
• Instructions perform add, multiply etc. on all the data in this 16-byte register in parallel
• Challenges:
- Need to be contiguous in memory and aligned
- Some instructions to move data around from one part of register to another
• Similar on GPUs, vector processors (but many more simultaneous operations)
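A minimal sketch of "one operation produces multiple results" using the Intel SSE intrinsics from <xmmintrin.h> (compilers usually auto-vectorize such loops, so explicit intrinsics are shown only for illustration):

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics: __m128, _mm_loadu_ps, _mm_add_ps, ... */

int main(void) {
    float x[4] = {1, 2, 3, 4}, y[4] = {10, 20, 30, 40}, z[4];
    __m128 vx = _mm_loadu_ps(x);         /* unaligned load of x0..x3 into one 16-byte register */
    __m128 vy = _mm_loadu_ps(y);
    __m128 vz = _mm_add_ps(vx, vy);      /* one instruction performs four additions */
    _mm_storeu_ps(z, vz);
    printf("%g %g %g %g\n", z[0], z[1], z[2], z[3]);
    return 0;
}
```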
01/21/2008 CS267 - Lecture 1 129
What does this mean to you?
• In addition to SIMD extensions, the processor may have other special instructions
- Fused Multiply-Add (FMA) instructions:
    x = y + c * z
  is so common that some processors execute the multiply/add as a single instruction, at the same rate (bandwidth) as + or * alone
• In theory, the compiler understands all of this
- When compiling, it will rearrange instructions to get a good "schedule" that maximizes pipelining, uses FMAs and SIMD
- It works with the mix of instructions inside an inner loop or other block of code
• But in practice the compiler may need your help
- Choose a different compiler, optimization flags, etc.
- Rearrange your code to make things more obvious
- Use special functions ("intrinsics") or write in assembly
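A hedged sketch of the x = y + c*z pattern, written two ways: as an ordinary expression the compiler may contract into an FMA when scheduling the inner loop, and via the C99 fma() library function, which computes the multiply-add with a single rounding.

```c
#include <math.h>
#include <stdio.h>

/* The slide's x = y + c*z pattern in a loop: a natural candidate for FMA. */
void axpy(int n, double c, const double *z, const double *y, double *x) {
    for (int i = 0; i < n; i++)
        x[i] = y[i] + c * z[i];          /* compiler may emit one fused multiply-add per element */
}

int main(void) {
    double y = 1.0, c = 3.0, z = 2.0;
    /* fma(a, b, d) computes a*b + d with a single rounding (C99 <math.h>). */
    printf("separate ops: %g, fma(): %g\n", y + c * z, fma(c, z, y));
    return 0;
}
```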