December 22, 2004, National Taiwan Normal University, Taipei, Taiwan, Republic of China
Brief Outline of the Earth Simulator and Our Research Activities
and a Lesson Learnt from the Past Three Years
大淵 済 (Wataru Ohfuchi)
[email protected]
Earth Simulator Center
Japan Agency for Marine-Earth Science and Technology
and
Atmospheric and Oceanic Simulation Group and AFES Working Team
Earth Simulator Building
Inside the Earth Simulator Building
[Floor plan: PN cabinets (320) and IN cabinets (65) on a 50 m × 65 m (55 yd × 71 yd) floor, with a double floor for cables, power supply system, cartridge tape library system, magnetic disk system, air conditioning system, and seismic isolation system.]
Comparison of PN Size
[Figure: one NEC SX-4 node (about 6 m × 7 m; peak performance 64 Gflops; electric power about 90 kVA; air cooling) compared with one Earth Simulator PN cabinet (about 100 cm × 70 cm; peak performance 64 Gflops; electric power about 8 kVA; air cooling).]
Configuration of the Earth Simulator
[Diagram: Processor Node #0 through Processor Node #639, each containing Arithmetic Processors #0 through #7 attached to 16 GB of shared memory, all connected through the Interconnection Network (full crossbar switch).]
• Peak performance/AP: 8 Gflops
• Peak performance/PN: 64 Gflops
• Shared memory/PN: 16 GB
• Total number of APs: 5,120
• Total number of PNs: 640
• Total peak performance: 40 Tflops
• Total main memory: 10 TB
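As a quick sanity check on the aggregate figures, here is a minimal sketch in plain Python (the constant names are mine, not ES terminology) deriving the totals from the per-node specs:

```python
# Per-node specifications of the Earth Simulator (from the slide above).
GFLOPS_PER_AP = 8      # peak performance per arithmetic processor (AP)
APS_PER_PN = 8         # arithmetic processors per processor node (PN)
MEM_PER_PN_GB = 16     # shared memory per processor node
NUM_PN = 640           # processor nodes in the full system

total_aps = NUM_PN * APS_PER_PN                    # 5120 APs
peak_tflops = total_aps * GFLOPS_PER_AP / 1000.0   # 40.96 Tflops, quoted as "40"
total_mem_tb = NUM_PN * MEM_PER_PN_GB / 1024.0     # exactly 10 TB

print(f"{total_aps} APs, {peak_tflops:.2f} Tflops peak, {total_mem_tb:.0f} TB memory")
```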
Connection among Nodes
[Diagram: 640 PNs (PN #0 through PN #639, housed in 320 cabinets) wired to 128 crossbar switches (XSW #0 through XSW #127, housed in 64 cabinets) plus two crossbar control units (XCT #0 and XCT #1).]
PN-IN electric cables: 640 × 130 = 83,200
Electric Cables under the Floor
An Overview of AFES (AGCM for the Earth Simulator)
• Primitive equation system (hydrostatic approximation)
  – Valid (arguably) down to 10 km (T1279)
• Spectral Eulerian
• Physical processes
  – Cumulus parameterizations (A-S, Kuo, MCA, Emanuel)
  – Radiation (mstranX: Sekiguchi et al. 2004)
  – Surface model: MATSIRO (Takata et al. 2004)
  – etc.
• Adapted from CCSR/NIES AGCM 5.4.02
  – Center for Climate System Research, the University of Tokyo
  – Japanese National Institute for Environmental Studies
  – Rewritten totally from scratch in FORTRAN 90 with MPI and microtasking
Scalability of the T1279L96 AFES
[Plot: sustained performance (Tflops) versus number of CPUs, 0 to 5120. Measured points: 3.86 Tflops (75.3% of peak) on 640 CPUs, 7.61 Tflops (74.3%) on 1280 CPUs, 14.50 Tflops (70.8%) on 2560 CPUs, and 26.58 Tflops (64.9%) on 5120 CPUs; peak and sustained curves shown.]
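The efficiencies annotated on the plot follow from the 8 Gflops peak per CPU. A small plain-Python sketch reproduces them (pairing the CPU counts with the quoted Tflops is my inference from those percentages):

```python
# (#CPUs, sustained Tflops) read off the T1279L96 AFES scalability plot.
runs = [(640, 3.86), (1280, 7.61), (2560, 14.50), (5120, 26.58)]
GFLOPS_PER_CPU = 8  # peak performance per arithmetic processor

for ncpu, sustained in runs:
    peak = ncpu * GFLOPS_PER_CPU / 1000.0  # peak Tflops at this CPU count
    print(f"{ncpu:4d} CPUs: {sustained:5.2f}/{peak:5.2f} Tflops = {sustained/peak:.1%}")
# Prints ~75.4%, 74.3%, 70.8%, 64.9% -- matching the plot annotations
# to within rounding of the quoted sustained Tflops.
```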
AFES won the Gordon Bell Award for Peak Performance!!!
Meso-scale-Resolving T1279L96 Simulations
• Typhoons, wintertime cyclogenesis, and the Baiu-Meiyu front
  – Interactions between large-scale circulations and meso-scale phenomena
  – Self-organization of meso-scale circulations in a larger circulation field
• Short-term (10 days to 2 weeks)
  – CPU power is NOT a problem; data size (~terabytes) is the problem
10-km Mesh Global Simulations
Typhoons over Western Pacific
Winter Cyclogenesis over Japan
Baiu-Meiyu Front over Japan
But, So What?
[Figure: ∇θe at 850 hPa]
Our ES Project 2004
• Mechanism and predictability of atmospheric and oceanic variations induced by interactions between the large-scale field and meso-scale phenomena
  – Project leader: Wataru Ohfuchi
  – “FES” models + THORPEX
• AFES
  – Sub-project leader: Takeshi Enomoto (ESC)
  – AGCM
• OFES
  – Sub-project leaders: Hideharu Sasaki (ESC), Hirofumi Sakuma (FRCGC), Yukio Masumoto (FRCGC/U. Tokyo)
  – MOM3-based OGCM
• CFES
  – Sub-project leader: Nobumasa Komori (ESC)
  – Coupled model: AFES + OIFES (OFES + IARC sea ice model)
• THORPEX
  – Sub-project leader: Tadashi Tsuyuki (NPD, JMA)
  – High-resolution singular vector method and predictability
Summary
• With the combination of the ES and models well optimized for its architecture, it is now possible, for the first time in the history of computational atmospheric and oceanic sciences and geophysical fluid dynamics, to conduct meaningful (ultra-)high-resolution global simulations.
• Interaction between meso-scale phenomena and larger-scale circulation can be studied.
• Scientifically new knowledge and contributions to society are expected.
A Possible Future Direction of High Performance Computing in Atmospheric and Oceanic Sciences: A Lesson Learnt from the Past Three Years with the Earth Simulator
• The Earth Simulator was, unfortunately, not perfect, of course.
• What I foresee as a future modeling strategy.
• What I foresee as future HPC in AOS.
• This will be published in “Advances in Science: Earth Science”, edited by Prof. Peter Sammonds, in the Royal Society’s Philosophical Transactions (2005).
How Many Points Are There in the 10-km Mesh AGCM?
• T1279L96
  – Spherical harmonics up to wavenumber 1279 with the so-called “triangular truncation”.
  – 3840 (longitude) × 1920 (latitude) × 96 (layers) = …
  – ~700 M points…
• Assume double precision (8 B) and 100 variables…
  – ~560 GB
  – Actually, the T1279L96 AFES needs about 1.2 TB of memory (see the sketch below).
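A back-of-the-envelope sketch of the arithmetic above, in plain Python (variable names are mine):

```python
# T1279L96 grid of the 10-km mesh AFES.
NLON, NLAT, NLEV = 3840, 1920, 96

points = NLON * NLAT * NLEV     # 707,788,800, i.e. ~700 M grid points
naive_bytes = points * 8 * 100  # double precision x 100 variables
print(f"{points:,} points -> ~{naive_bytes / 1e9:.0f} GB")  # ~566 GB (~560 GB on the slide)
# The real T1279L96 AFES needs ~1.2 TB, roughly double the naive estimate;
# presumably work arrays, spectral-space fields, and communication buffers
# account for the difference.
```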
How Much Data Are We Producing with the 10-km Mesh AGCM?
• One 3-D snapshot: 2.6 GB.
• Oh, we need 6-hourly output!!! Ten 3-D variables!!! For one day…
  – 2.6 GB × 4/day (6-hourly) × 10 variables = 104 GB/day.
• Oh, we want to integrate for 10 days… ~1 TB.
• Oh, we are climatologists!!! We want to integrate for 10,000 days… ~1 PB.
• Oh, we need ten sensitivity tests!!! 10 PB. (The arithmetic is spelled out below.)
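The same escalation as a plain-Python sketch, using the numbers straight from the bullets above:

```python
# Output volume of the 10-km mesh AGCM.
SNAPSHOT_GB = 2.6   # one 3-D variable, one snapshot
PER_DAY = 4         # 6-hourly output
N_VARS = 10         # ten 3-D variables

daily_gb = SNAPSHOT_GB * PER_DAY * N_VARS                       # 104 GB/day
print(f"10 days     : ~{daily_gb * 10 / 1e3:.1f} TB")           # ~1 TB
print(f"10,000 days : ~{daily_gb * 10_000 / 1e6:.1f} PB")       # ~1 PB
print(f"x10 tests   : ~{daily_gb * 10_000 * 10 / 1e6:.0f} PB")  # ~10 PB
```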
Future HPC in the World (in 2010…)
• VERY unfortunately, the HPC hardware business is currently dominated by the IMPERIAL JAPAN and the US of A!!!
• Suppose the emerging supercomputing country, Taiwan, Republic of China, takes over the submerging Japan and USA within a few years.
• The Earth System Simulator, the National Taiwan Normal University:
  – 1 Pflops machine (25 times larger than the ES).
  – 1 exabyte of hard disk/PROJECT!!!
  – 1 zettabyte of long-term storage/PROJECT!!!
So, What Can We Do with the Earth System Simulator at the National Taiwan Normal University in 2010?
• When you increase the “resolution” by a factor of two…
  – 2 (longitude) × 2 (latitude) × 2 (vertical levels) × 2 (time steps) = 16, which already eats up most of the ~25× speedup.
• The current biggest global atmospheric simulation project on the ES is ~3-km mesh (nonhydrostatic).
  – So, a ~1.5-km mesh simulation (see the sketch below).
  – Sorry, it’s not “cloud resolving” yet!!!
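The factor-of-16 estimate generalizes: refining all four dimensions by a factor r multiplies the cost by r to the fourth power. A one-function sketch (my notation):

```python
def cost_factor(r: float) -> float:
    """Cost multiplier for refining longitude, latitude, vertical levels,
    and the time step of a simulation, each by a factor r."""
    return r ** 4

print(cost_factor(2))  # 16  -- barely fits in a machine ~25x faster than the ES
print(cost_factor(4))  # 256 -- a second doubling is already out of reach
```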
We Need to Think Better Than That!!!
• Multi-scale modeling (a schematic sketch follows this list).
• “Stand-alone” global “hydrostatic” model.
• “Stand-alone” regional nonhydrostatic model.
  – As “super-parameterization”.
• “Stand-alone” 3-D turbulence model.
• “Super-parameterization”-like link between these models.
• We may have to go down to “explicit” cloud physics.
• Of course, 3-D radiation!!!
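A purely schematic sketch of how such a hierarchy might be wired together; every class and function name here is hypothetical, invented for illustration, and not AFES (or any real model) code:

```python
# Hypothetical multi-scale coupling loop in the spirit of the list above:
# a global hydrostatic model whose cumulus scheme is replaced by one small
# regional nonhydrostatic model per grid column ("super-parameterization").

class GlobalHydrostaticModel:
    def __init__(self, ncols: int):
        self.state = [0.0] * ncols          # toy large-scale state per column

    def step(self, dt: float) -> None:      # advance the large-scale dynamics
        self.state = [s + 0.01 * dt for s in self.state]

    def column_forcing(self, i: int) -> float:
        return self.state[i]                # forcing handed to the embedded model

    def apply_tendency(self, i: int, t: float, dt: float) -> None:
        self.state[i] += t * dt             # feedback from the embedded model

class RegionalNonhydrostaticModel:
    def step(self, forcing: float, dt: float) -> float:
        return -0.1 * forcing               # toy cloud-scale response

def run(nsteps: int, dt: float) -> None:
    gcm = GlobalHydrostaticModel(ncols=4)
    crms = [RegionalNonhydrostaticModel() for _ in range(4)]
    for _ in range(nsteps):
        gcm.step(dt)
        for i, crm in enumerate(crms):      # the super-parameterization link
            tendency = crm.step(gcm.column_forcing(i), dt)
            gcm.apply_tendency(i, tendency, dt)

run(nsteps=10, dt=1.0)
```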
Conclusions 1
• HPC is not only a number-crunching capability.
  – Linpack has become totally obsolete.
• Data handling is much much much much much much… MORE important. We are already in the middle of the storm of data!
  – Hard disk.
  – Long-term data storage.
  – Software.
• But still we need to think much better.
  – Just increasing resolution does not seem to lead to a breakthrough.
Conclusions 2
• A HUGE HPC system should be used as a whole.
  – The ES consists of 640 nodes.
  – Sorry, those jobs that require fewer than ~320 nodes should go away.
• “Expensive” vs. “cheapo”
  – Vector vs. scalar?
  – We need to think about “cost effectiveness”.
  – It may depend on the problem.
• GRID?
  – Probably very good for data sharing.
  – Simulations?
• We need to integrate science and engineering.
  – At least we need to understand both and have “strong” opinions.