Aiming at the Natural Equilibrium of Planet Earth Requires Reinventing Computing
Reiner Hartenstein, IEEE Fellow
TU Kaiserslautern
ISCAS 2011
© 2010, reiner@hartenstein.de http://hartenstein.de
(Preface) Without Computers?
Lufthansa anno 1960 (Business Information System)
(Preface) Very Important Future Applications
The World Economic Forum: replacing bureaucracies by mass collaboration
http://www.macrowikinomics.com
Other applications: see Cyber-Physical Systems
Preface
Enormous trouble in computing:
• Long-term programming crisis
• Keynotes and panel discussions booming
• Excessive power consumption
Outline (1)
• Energy consumption of computers
• Toward exascale computing
• The von Neumann syndrome
• We need to reinvent computing
• Conclusions

Beyond Peak Oil
„6 more Saudi Arabias needed for demand predicted for 2030“ [Fatih Birol, Chief Economist, IEA]
https://www.theoildrum.com/
Saudi Arabia
How many more Saudi Arabias needed?
Rio de Janeiro
Power Consumption of the Internet
Power consumption by the internet: ×30 by 2030 if trends continue
[G. Fettweis, E. Zimmermann: ICT Energy Consumption – Trends and Challenges; WPMC'08, Lapland, Finland, 8–11 Sep 2008]
[Randy Katz: IEEE Spectrum, Feb. 2009]
More than 6 Saudi Arabias!
Google data center at the Columbia River
Soon 8 billion smart wireless devices
More Google Data Centers
Google causing 2% of electricity consumption worldwide?
[datacenterknowledge.com]
Electricity Bill: a Key Issue
„The possibility of computer equipment power consumption spiraling out of control could have serious consequences for the overall affordability of computing.” [L. A. Barroso, Google]
• Already in 2005, Google’s electricity bill was higher than the value of its equipment.
• The cost of a Google data center is dominated by the monthly power bill.
• Patent for water-based data centers
• FERC: Google going to sell electricity
The World's Largest Data Center
[datacenterknowledge.com]
Microsoft Data Center at Quincy
[datacenterknowledge.com]
About 2000 data centers worldwide
[datacenterknowledge.com]
Outline (2)
• Energy consumption of computers
• Toward exascale computing
• The von Neumann syndrome
• We need to reinvent computing
• Conclusions
Multicore: Break-through or Breakdown?
(chart: relative performance vs. year, 1994–2030; x86 von-Neumann-only parallelism grows much slower than Moore's law, and since the beginning of the multicore era lies far below the performance growth needed)
„forcing a historic transition to a parallel programming model yet to be invented“ [David Callahan, Microsoft distinguished engineer]
„Intel has thrown a Hail Mary pass“ [Dave Patterson]
John Hennessy: „… I would be panicking …“
Exascale affordable?
Exa-scale (10^18 computations/second) expected by 2018 [several sources]
Estimated power of a single supercomputer: 250 MW – 10 GW (2× New York City: 16 million people)
Supercomputers: no Computers?
„In my opinion, the largest supercomputers at any time, including the first exaflops, should not be thought of as computers. …“ [Andrew Jones, Vice President, Numerical Algorithms Group]
Supercomputers as Scientific Instruments
„… Their usage patterns and scientific impact are closer to major research facilities such as CERN, ITER, or Hubble.“ [Andrew Jones, Vice President, Numerical Algorithms Group]
No reason to solve the power problem?
CERN (1)
CERN (2)
Hubble
Learning how to go Exascale
CACHES 2011: 1st International Workshop on Characterizing Applications for Heterogeneous Exascale Systems; June 4, 2011, held in conjunction with ICS 2011, the 25th International Conference on Supercomputing, May 31 – June 4, 2011, Loews Ventana Canyon Resort, Tucson, Arizona
Outline (3)
• Energy consumption of computers
• Toward exascale computing
• The von Neumann syndrome
• We need to reinvent computing
• Conclusions
Potential of RC
Reconfigurable Computing offers an overwhelming reduction of electricity consumption as well as massive speed-up factors … explained by the von Neumann syndrome.
Speed-up factors are not new: by avoiding the von Neumann paradigm
(chart: speed-up factor, 1 to 1,000,000, vs. year, 1985–2010; doubled every 4 months)
• DSP and wireless: FFT 100; Reed-Solomon decoding 2400; Viterbi decoding 400 and 1000; MAC
• Bioinformatics: molecular dynamics simulation 88; BLAST 52; protein identification 40; Smith-Waterman pattern matching 288; DNA sequencing 8723
• Astrophysics: GRAPE 20
• Image processing, pattern matching, multimedia: SPIHT wavelet-based image compression 457; real-time face detection 6000; video-rate stereo vision 900; pattern recognition 730; CT imaging 3000
• Crypto: 1000; DES breaking 28500
• PISA project: >15000
Energy saving factors: ~10% of the speed-up
(chart: speed-up factor, 100 to 10^6, vs. year, 1995–2010; doubles every 4 months; same application domains as the previous slide)
Power save factors obtained, e.g.: DNA sequencing: speed-up 8723, power save 779; DES breaking: speed-up 28500, power save 3439
RC*: Demonstrating the Intensive Impact
[Tarek El-Ghazawi et al.: IEEE COMPUTER, Feb. 2008]
SGI Altix 4700 with RC 100 RASC compared to a Beowulf cluster:

Application                   Speed-up   Power savings   Cost savings   Size savings
DNA and protein sequencing       8723         779             22            253
DES breaking                    28514        3439             96           1116

Much less equipment needed, massively saving energy: the power savings are about 9% and 12% of the respective speed-ups.
*) RC = Reconfigurable Computing
(see also: taxonomy of HPRC design flows)
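The „~10% of the speed-up“ rule of thumb can be sanity-checked directly from the two table rows; a minimal sketch (numbers are the table's, everything else is illustrative):

```python
# Power-saving factor as a fraction of the speed-up factor,
# for the two applications in the El-Ghazawi table.
results = {
    "DNA and protein sequencing": {"speedup": 8723, "power_save": 779},
    "DES breaking": {"speedup": 28514, "power_save": 3439},
}

for name, r in results.items():
    ratio = r["power_save"] / r["speedup"]
    print(f"{name}: power saving is {ratio:.1%} of the speed-up")
```

The ratios come out near 9% and 12%, matching the slide's annotations.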
Drastically less Equipment needed
For instance: a hangar full of racks replaced by a single rack (or even ½ rack) without air conditioning.
The Reconfigurability Paradox
• Routing congestion
• Lower clock speed
• Massive reconfigurability overhead
• Massive wiring overhead
Orders of magnitude better performance from a massively worse, area-inefficient technology?
More power for creating foam than for accelerating the vessel? Because of the von Neumann syndrome.
von Neumann Syndrome
[Lambert M. Surhone, Mariam T. Tennoe, Susan F. Hennessow (ed.): Von Neumann Syndrome; Betascript Publishing 2011]
von Neumann Model Critics
„The von Neumann Syndrome“ [C. V. „RAM“ Ramamoorthy 2007; UC Berkeley]
Nathan's Law: „Software is a gas. It expands to fill all its containers …“ [Nathan Myhrvold, Microsoft ex-CTO], „even fills the internet“ and the clouds:

year   system            SLOC (millions)
2001   Windows XP              40
2005   Mac OS X 10.4           86
2007   SAP NetWeaver          238

Critique of von Neumann is not new („incompetent programmers“): E. Dijkstra 1968; J. Backus 1978; Arvind 1983; Peter G. Neumann 1985–2003; L. Savain 2006.
Software disaster reports: N. N. 1995: The Standish Group Report; Robert N. Charette 2005: Why Software Fails, IEEE Spectrum; Anthony Berglas 2008: Why it is Important that Software Projects Fail
„GP Processors are inefficient“
All hardware but the ALU is overhead: ×20 inefficiency (data cache), and this ×20 is just one of several overhead layers.
[R. Hameed et al.: Understanding Sources of Inefficiency in General-Purpose Chips; 37th ISCA, June 19–23, 2010, St. Malo, France]
„The Memory Wall“ (coined by Sally McKee)
Patterson's Law: the processor–memory performance gap grows 50% per year
(chart: performance, 1 to >1000, 1980–2008; CPU 60%/year vs. DRAM 7%/year)
„The overwhelming problem is data moving complexity, not processor performance.“ [Dr. Djordje Maric, ETH Zurich]
Through-Silicon Via (TSV): reducing the memory wall?
TSVs reduce power consumption by 75% [Wally Rh., Micro News 2/28/2011]
SiP: multiple dice; PoP: package on package; PiP: package in package; TSV: through-silicon via
Massive Overhead Phenomena
Overhead of the von Neumann machine, each item driven by the instruction stream:
• proportionate to the number of processors: instruction fetch; state address computation; data address computation; data meet PU + other overhead; i/o to/from off-chip RAM
• overproportionate to the number of processors: inter-PU communication; message passing overhead; transactional memory overhead; multithreading overhead etc.
von Neumann Overhead vs. Reconfigurable Computing

overhead                        von Neumann machine   data-stream machine
instruction fetch               instruction stream    none*
state address computation       instruction stream    none*
data address computation        instruction stream    none*
data meet PU + other overhead   instruction stream    none*
i/o to/from off-chip RAM        instruction stream    none*
inter-PU communication          instruction stream    none*
message passing overhead        instruction stream    none*
transactional memory overhead   instruction stream    none*
multithreading overhead etc.    instruction stream    none*

*) configured before run time: no instruction fetch at run time
Outline (4)
• Energy consumption of computers
• Toward exascale computing
• The von Neumann syndrome
• We need to reinvent computing
• Conclusions
Putting Old Ideas Into Practice
„The biggest payoff will come from putting old ideas into practice (POIIP) and teaching people how to apply them properly.“ [David Parnas]
[Software Engineering Notes, vol. 24, no. 3, May 1999; http://www.acm.org/sigsoft/SEN/parnas.html]
Mike Flynn's Taxonomy
[M. J. Flynn: “Very high-speed computing systems”; Proc. IEEE, Vol. 54, No. 12, pp. 1901–1909, Dec. 1966]
Diana's Extended Taxonomy
[D. Göhringer, M. Hübner, T. Perschke, J. Becker: “A Taxonomy of Reconfigurable Single/Multi-Processor Systems-on-Chip”; International Journal of Reconfigurable Computing, Hindawi, Special Issue: Selected Papers from ReCoSoC 2008, 2009]
I: instruction stream; D: data stream
4 × SISD / 4 × SIMD / 4 × MIMD:
• rSI: I can be reconfigured at run time, e.g. RISP
• rSD: can exchange data memory or datapath
• rSIrSD: both possible
• rMD (SIMD): SIMD processors can exchange their data memories or reconfigure their datapaths
• rSIrMD: can reconfigure both D and I at run time
• rMI: MPSoCs with reconfigurable I
• rMD (MIMD): MPSoCs with reconfigurable D
• rMIrMD: supports both

„But you can't implement decisions!“
POIIP: Software to Configware Migration
S = R + (if C then A else B endif);
The decision box turns into a (de)multiplexer: both branches are present as wires, and C merely selects one of them, as a section of a very large pipe network.
decision box: condition C chooses branch A or B
(de)multiplexer: output = A if C else B
[C. G. Bell et al.: IEEE Trans. C-21/5, May 1972; W. A. Clark: 1967 SJCC, AFIPS Conf. Proc.]
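The decision-box-to-multiplexer migration can be sketched in a few lines; this is illustrative only (the function names are made up), but it shows that the control-flow view and the structural view compute the same S:

```python
# Software view: a control-flow decision, executed by an instruction stream.
def software_version(r, c, a, b):
    if c:
        s = r + a
    else:
        s = r + b
    return s

# Configware view: both operands are always present on wires; the decision
# box becomes a multiplexer that merely selects one of them. No branch and
# no instruction fetch at run time: the structure is configured beforehand.
def mux(sel, a, b):
    return a if sel else b

def configware_version(r, c, a, b):
    return r + mux(c, a, b)
```

Both versions agree for every input, which is exactly why the decision box may be replaced by a (de)multiplexer stage in a pipe network.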
POIIP: Loop to Pipe Mapping
loop: the loop body maps onto a (reconfigurable) DataPath Unit (rDPU)
pipeline: a chain of rDPUs, transport-triggered
complex loop body / nested loops: a complex rDPU, or a pipe network inside an rDPU, up to a complex pipe network
example (source: MIT StreamIT), replacing CPU + memory: FMDemod → Split → LPF1/LPF2/LPF3 and HPF1/HPF2/HPF3 → Gather → Adder → Speaker
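A minimal sketch of loop-to-pipe mapping (the stage functions are hypothetical stand-ins for rDPUs; Python generators model the data stream):

```python
# Loop view: n time steps on one CPU, the loop body reused each iteration.
def loop_version(xs):
    out = []
    for x in xs:
        out.append((x * 3) + 1)   # loop body: two operations in sequence
    return out

# Pipe view: each operation of the loop body becomes its own rDPU-like
# stage; data streams through the chain, with no instruction fetch.
def stage_mul3(stream):
    for x in stream:
        yield x * 3

def stage_add1(stream):
    for x in stream:
        yield x + 1

def pipeline_version(xs):
    return list(stage_add1(stage_mul3(iter(xs))))
```

In hardware, the two stages work concurrently on successive data items, so the pipeline delivers one result per step once filled.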
POIIP: Loop to Pipe Mapping on „platform FPGAs“
Imperative Language Twins

language category           Software Languages (von Neumann)                Flowware Languages (anti machine)
both                        deterministic, procedural sequencing: traceable, checkpointable
operation sequence driven   read next instruction, goto (instr. addr.),     read next data item, goto (data addr.),
by                          jump (to instr. addr.), instruction loop,       jump (to data addr.), data loop, loop
                            loop nesting; no parallel loops, no escapes;    nesting, parallel loops, escapes, data
                            instruction stream branching                    stream branching
state register              program counter                                 data counter(s), address computation by
                                                                            multiple GAGs
address computation         massive memory cycle overhead                   overhead avoided
instruction fetch           memory cycle overhead                           overhead avoided
parallel memory bank access interleaving only                               no restrictions
language features           control flow + data manipulation                data streams only (no data manipulation):
                                                                            very easy to learn; more simple, yet a much
                                                                            more powerful parallelism solution

Anti machine: [COMP-EURO ’89]; MoPL: [FPL ’94, Prague]
A Heliocentric CS Model needed
PE, Program Engineering: the generalization of Software Engineering:
• SE, Software Engineering: instruction streams, executed by the CPU
• FE, Flowware Engineering: data streams*, driven by auto-sequencing memory (asM)
• CE, Configware Engineering: structures, the pipe network model: rDPU (reconfigurable Data Path Unit), rDPA (reconfigurable Data Path Array)
*) do not confuse with „dataflow“!
A Clean Terminology, please

program source   compilation result
Software         instruction streams
Flowware         data streams
Configware       datapath structures configured
Outline (5)
• Energy consumption of computers
• Toward exascale computing
• The von Neumann syndrome
• We need to reinvent computing
• Conclusions
Absurdly incomprehensible abstractions are the problem in „standard“ languages.
We need model-based abstractions at the algorithmic level, plus locality awareness!
„[For architecture design & debug] concurrency models can operate at component architecture level rather than programming languages.“ [E. A. Lee]
[E. A. Lee: Are new languages necessary for multicore? 2007]
[E. A. Lee: The problem with threads; IEEE Computer, 2006]
Higher Abstraction Levels
Nick Tredennick: „Efforts to extend standards-based, serial programming languages with features to describe parallel constructs are likely to fail. What's more likely to succeed are languages that raise the level of abstraction in algorithm description.“
Mauricio Ayala-Rincón: Term Rewriting Systems (TRS) may raise the abstraction level up to math formulae; TRS are powerful for better language design and design space exploration.
Conclusions
• Twin-paradigm skills and basic hardware knowledge are essential qualifications for programmers.
• We urgently need a fundamental CS education and research revolution for dual-rail thinking.
• Since we have to re-write software anyway, we should do it twin-paradigm.
• We need a tool flow and education efforts supporting a twin-paradigm approach and locality awareness.

We need „une levée en masse“!
Don't worry! Thank You very much!
(too many panels and keynotes?)
END
Time to Space Mapping

time domain (procedure domain)                      space domain (structure domain)
program loop: n time steps, 1 CPU                   pipeline: 1 time step, n DPUs
Bubble Sort: n × k time steps,                      Shuffle Sort: k time steps,
  1 „conditional swap“ unit                           n „conditional swap“ units
time algorithm                                      space (or space/time) algorithm
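The time-to-space idea can be sketched as two sorters; since the slide does not spell out the shuffle-sort network, an odd-even transposition network stands in here for the space algorithm with n conditional-swap units:

```python
def bubble_sort(xs):
    """Time algorithm: one conditional-swap unit reused over ~n*n time steps."""
    xs = list(xs)
    n = len(xs)
    for _ in range(n):
        for i in range(n - 1):
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

def transposition_network_sort(xs):
    """Space algorithm: in each of n steps, a row of conditional-swap units
    fires on disjoint pairs, so all swaps of one step can run in parallel."""
    xs = list(xs)
    n = len(xs)
    for step in range(n):
        for i in range(step % 2, n - 1, 2):  # disjoint pairs: parallel in hardware
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs
```

Both produce the same sorted result, but the network version needs only n time steps because each step's conditional swaps are independent units in space.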
Example: Architecture instead of Synchronization: „Shuffle Sort“
Direct time-to-space mapping causes accessing conflicts.
A better architecture instead of complex synchronisation: half the number of conditional-swap blocks, plus up-and-down movement of the data (shuffle function) – no von Neumann syndrome!
Understanding Complex Heterogeneous Systems [Ed Lee]
• Layers of abstraction and automatic parallelization hide critical sources of, and limits to, efficient parallel execution.
• Efficient distribution of tasks is memory-limited.
• Internode communication reduces computational efficiency.
• We must change how programmers think; essential: awareness of locality.
• Focus on memory mapping issues and transfer modes to detect overhead and bottlenecks.
• Understanding streams through complex fabrics is needed.
Vertical Disintegration: 1960 → 200X
[courtesy Manfred Glesner]
Market Complexity
[Source: Gartner]
Taxonomy of Twin-Paradigm Programming Flows (HPRC)
[E. El-Araby et al.: Comparative Analysis of High Level Programming for Reconfigurable Computers: Methodology and Empirical Study; Proc. SPL2007, Mar del Plata, Argentina, Feb. 2007]
„The nroff of EDA“ [R. N.] [courtesy Richard Newton]
HLL Programming Models

Some hardware description languages: DeFacto; Galadriel & Nenya; MATCH

Some programming languages

Some languages for parallelism

More Languages: some functional languages; some data stream languages
Why Computers are important
[R. Rajkumar, I. Lee, L. Sha, J. Stankovic: Cyber-Physical Systems: The Next Computing Revolution; DAC 2010]
Science alone?
See the claims by Andrew Jones, …
Mobile Communication
[A. Fehske, J. Malmodin, G. Biczók, G. Fettweis: The Global Footprint of Mobile Communications – The Ecological and Economic Perspective; IEEE Communications Magazine, Aug 2011]

                                                   2007    2014    2020
Worldwide radio base station sites* (millions)      3.3     7.6    11.2
Average power consumption per site (kW)             1.7     1.3     1.1
Total power consumption of all sites (GW)           5.6    10      12.5
Total global RAN energy consumption (TWh)          49      84      99
Total subscriptions expected (billions)                     6       9
Broadband subscriptions expected (billions)                 2
Video streams (%)                                          66      90
Share of mobile data in total mobile traffic (%)   37.5    98      99.6
*) all standards

Data transmission speed grows by a factor of ten every five years (cellular, local and personal area networks). Technologies to reduce energy consumption are a key enabler.
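As a sanity check on the table's units (site counts and per-site powers are the table's own figures), the totals work out in gigawatts, not terawatts, and the 2007 annual RAN energy reproduces exactly; later years differ slightly, presumably from rounding within the source:

```python
# Cross-check: total site power = sites x power per site;
# annual RAN energy = total power x hours per year.
HOURS_PER_YEAR = 8760

years = {
    2007: {"sites_millions": 3.3, "kw_per_site": 1.7},
    2014: {"sites_millions": 7.6, "kw_per_site": 1.3},
    2020: {"sites_millions": 11.2, "kw_per_site": 1.1},
}

for year, d in years.items():
    total_gw = d["sites_millions"] * 1e6 * d["kw_per_site"] * 1e3 / 1e9
    annual_twh = total_gw * 1e9 * HOURS_PER_YEAR / 1e12
    print(f"{year}: {total_gw:.2f} GW total, {annual_twh:.0f} TWh/year")
```

For 2007 this gives 5.61 GW and 49 TWh/year, matching the table's 5.6 and 49.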
Undersea Cable
• Google: 9,620 km submarine cable Japan–US; first use Feb 21, 2011
• Five fiber pairs deliver up to 4.8 terabits per second (Tbps); each pair has one fiber in each direction
• >100 kilometers between repeaters; wavelength-division multiplexing dramatically increases fiber capacity
• Repeater laser power consumption <25 W; <1000 repeaters: <25 kW
• Power consumption of fabrication and of cable-laying ships is much higher