November 2014 - eehpcwg.llnl.gov/documents/conference/sc14/bof_sc14_bof_trends.pdf

Page 1

synergy.cs.vt.edu

Wu  Feng  16th

November  2014  

Page 2

The Ultimate Goal of “The Green500 List”

• Raise awareness of the energy efficiency of supercomputing.
  – Drive energy efficiency as a first-order design constraint (on par with performance).

• Encourage fair use of the list rankings to promote energy efficiency in high-performance computing systems.

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Page 3

Agenda    

• Overview of the Green500 (Wu Feng)
• Methodologies for Measuring Power (Erich Strohmaier)
• Re-Visiting Power Measurement for the Green500 (Thomas Scogland)
• The 16th Green500 List (Wu Feng)
  – Trends and Evolution
  – Awards
• A Talk from the #1 Supercomputer on the Green500

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Page 4

Brief History: From Green Destiny to The Green500 List

2/2002: Green Destiny (http://sss.lanl.gov/ → http://sss.cs.vt.edu/)
  – “Honey, I Shrunk the Beowulf!” 31st Int’l Conf. on Parallel Processing, August 2002.

4/2005: Workshop on High-Performance, Power-Aware Computing
  – Keynote address generates initial discussion for the Green500 List

4/2006 and 9/2006: Making a Case for a Green500 List
  – Workshop on High-Performance, Power-Aware Computing
  – Jack Dongarra’s CCGSC Workshop: “The Final Push” (Dan Fay)

9/2006: Founding of Green500: Web Site and RFC (Chung-Hsing Hsu)
  – http://www.green500.org/ generates feedback from hundreds

11/2007: Launch of the First Green500 List (Kirk Cameron)
  – http://www.green500.org/lists/green200711

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Page 5

Evolution of The Green500

• 11/2009: Experimental Lists Created
  – Little Green500: more focus on LINPACK energy efficiency than on LINPACK performance, in order to foster innovation
  – HPCC Green500: alternative workload (i.e., HPC Challenge benchmarks) to evaluate energy efficiency
  – Open Green500: enabling alternative, innovative approaches for LINPACK to improve performance and energy efficiency, e.g., mixed precision
• 11/2010: First Green500 Official Run Rules Released
• 11/2010: Open Green500 Merged into Little Green500
• 06/2011: Collaborations Begin on Methodologies for Measuring the Energy Efficiency of Supercomputers
• 06/2013: Adoption of New Power Measurement Methodology (EE HPC WG, The Green Grid, Top500, and Green500)

The Green500 BoF, SC|14, Nov. 2014  POC: [email protected]

Page 6

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Evolution of the Power Profile of the HPL Core Phase

Page 7

Legacy Assumptions

• Measuring a small part of a system and scaling it up does not introduce too much error
• The power draw of the interconnect fabric is not significant compared to that of the compute system
• The workload phase of HPL will look similar on all HPC systems

These assumptions need to be re-visited.
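To make the first assumption concrete, here is a minimal sketch (in Python, with made-up node counts and readings) of the kind of linear extrapolation a partial measurement implies:

```python
# Hypothetical illustration of the legacy "measure a fraction, scale it up" assumption.
# All numbers are invented for illustration; they are not from any Green500 submission.

measured_nodes = 16          # nodes actually instrumented
total_nodes = 1024           # nodes in the full system
measured_avg_power_w = 4800  # average power over the instrumented nodes, in watts

# Naive linear extrapolation to the whole machine
estimated_system_power_w = measured_avg_power_w * (total_nodes / measured_nodes)
print(f"Estimated system power: {estimated_system_power_w / 1000:.1f} kW")

# Any per-node variation, and any unmeasured subsystem such as the interconnect
# fabric, gets scaled up by the same factor (here 64x), which is why these
# assumptions are being re-visited.
```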

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Page 8

Methodologies for Measuring Power
Erich Strohmaier

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Page 9

Re-Visiting Power Measurement for the Green500
Tom Scogland

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Page 10

Trends: How Energy Efficient Are We?

[Figure: scatter plot of energy efficiency (MFLOPS/W, axis 0 to 4000) for every system on each Green500 list release from 2007_11 through 2014_11.]

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Page 11

[Figure: efficiency (MFLOPS/W, axis 0 to 4000) vs. list release (2008 through 2014) for the systems at Green500 ranks 1, 10, 100, and 500.]

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Trends:    How  Energy  Efficient  Are  We?  

Page 12

Trends  in  Power:    Max,  Mean,  Median,  Min  

The Green500 BoF, SC|14, Nov. 2014 POC: [email protected]

[Figure: four panels showing the maximum, mean, median, and minimum system power across list releases 2009_11 through 2014_11.]

Page 13

[Figure: two panels vs. list release (2008_06 through 2014_06): energy efficiency (MFLOPS/Watt, axis 0 to 4000) and performance efficiency (% of peak MFLOPS, axis 25 to 100), broken out by machine type (Heterogeneous, Homogeneous, All); annotations mark Cell, GPU, and Xeon Phi systems.]

Trends:    Energy  vs  Performance  Efficiency  

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  


Page 14

Trends  Towards  Exascale  

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Page 15

Exascale Computing Study: Technology Challenges in Achieving Exascale Systems

• Goal
  – “Because of the difficulty of achieving such physical constraints, the study was permitted to assume some growth, perhaps a factor of 2X, to something with a maximum limit of 500 racks and 20 MW for the computational part of the 2015 system.”
• Realistic Projection?
  – “Assuming that Linpack performance will continue to be of at least passing significance to real Exascale applications, and that technology advances in fact proceed as they did in the last decade (both of which have been shown here to be of dubious validity), then […] an Exaflop per second system is possible at around 67 MW.”

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Page 16

Trends:    Zoomed  View  of  Nov.  2014  (By  Machine  Type)  

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

[Figure: Nov. 2014 systems plotted as power (kW, log10) vs. efficiency (MFLOPS/Watt, log10), colored by machine type (AMD GPU, Blue Gene/Q, Intel MIC, NVIDIA GPU, Other) and marked by heterogeneity (0/1).]

Page 17

[Figure: power extrapolated to an exaflop (gigawatts, axis 0 to 5) vs. list release year (2008 through 2015) for the top-ranked system on the Green500 and on the Top500; the most recent Green500 point is annotated at 190 MW.]

Trends: Extrapolating to Exaflop

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  
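The extrapolation in the figure above is simple arithmetic: divide one exaflop by a measured efficiency. A small sketch of that calculation; the 5.3 GFLOPS/W input reproduces the ~190 MW annotation, and the other values relate it to the 20 MW and 67 MW figures from the Exascale Computing Study:

```python
# Power required to sustain one exaflop (1e18 FLOPS) at a given energy efficiency.

EXAFLOP = 1e18  # FLOPS

def power_at_exaflop_mw(gflops_per_watt: float) -> float:
    """Megawatts needed for 1 EFLOPS at the given efficiency (GFLOPS/W)."""
    return EXAFLOP / (gflops_per_watt * 1e9) / 1e6

print(power_at_exaflop_mw(5.3))   # ~189 MW, roughly the latest Green500 leader
print(power_at_exaflop_mw(4.02))  # ~249 MW, the 4.02 GFLOPS/W system cited later
# Inverting the relation: a 67 MW exaflop machine needs ~15 GFLOPS/W,
# and the 20 MW target needs 50 GFLOPS/W.
print(EXAFLOP / (67e6 * 1e9), EXAFLOP / (20e6 * 1e9))
```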

Page 18

Evolution of the Green500

• Methodologies for Measuring Power
  – Collaboration between EE HPC WG, Green Grid, TOP500, and Green500, started in June 2011
• Research, Evaluation, Improvement, and Convergence on
  – Metrics
  – Methodologies
  – Workload

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Page 19

Evolution: Green500 Methodology

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Aspect 1: Time Fraction & Granularity
Aspect 2: Machine Fraction
Aspect 3: Subsystems Measured: [Y] Compute nodes, [ ] Interconnect network, [ ] Storage, [ ] Storage network, [ ] Login/Head nodes

Level 0: Derived numbers
Level 1: 20% of run, 1 average power measurement; (larger of) 1/64 of machine or 1 kW
Level 2: 100% of run, at least 100 average power measurements; (larger of) 1/8 of machine or 10 kW
Level 3: 100% of run, at least 100 running total energy measurements; whole machine
Level 1+? 1/16 of machine + network
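The practical difference between the levels above is what gets averaged or integrated. A minimal sketch, assuming hypothetical power samples and a fixed sampling interval, of how average power (Levels 1 and 2), total energy (Level 3), and the resulting MFLOPS/W figure would be computed:

```python
# Hypothetical power samples (watts) taken at a fixed interval during an HPL run.
# Level 1 uses one average over >= 20% of the run; Level 2 uses >= 100 averaged
# power measurements over the whole run; Level 3 keeps a running total of energy.

samples_w = [10500, 11200, 11800, 12100, 11900, 11400, 10800, 10300]
interval_s = 60.0  # seconds between samples (hypothetical)

# Level 1/2 style: average power over the sampled window
avg_power_w = sum(samples_w) / len(samples_w)

# Level 3 style: accumulate energy, then divide by elapsed time
energy_j = sum(p * interval_s for p in samples_w)
avg_power_from_energy_w = energy_j / (interval_s * len(samples_w))

# Efficiency as ranked by the Green500: achieved LINPACK MFLOPS per watt
rmax_mflops = 40_000_000  # e.g., a 40 TFLOPS run, in MFLOPS (hypothetical)
print(f"Average power: {avg_power_w:.0f} W (from energy: {avg_power_from_energy_w:.0f} W)")
print(f"Efficiency: {rmax_mflops / avg_power_w:.0f} MFLOPS/W")
```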

Page 20

Where Are We Now?

• Feedback from the Community
  – Include cooling and associated infrastructure in the power measurement
    • Focus on the energy efficiency of the machine itself
    • PUE: a measure of how efficiently a datacenter uses its power
      – Total Facility Power / IT Equipment Power (see the sketch below)
  – Software-based tuning for energy efficiency
    • Is it rewarding software innovation or gaming the system?
  – Phase out Level 0 (via more reporting)
    • Why does it exist?
    • Incentivize non-reporting institutions to report.
  – Phasing out of Level 1 → adoption of a Level 1+ or Level 2

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  
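Since PUE comes up in the community feedback above, here is a small worked sketch of the ratio (the power numbers are hypothetical; they are chosen so the result matches the 1.09 PUE cited for TSUBAME-KFC later in the deck):

```python
# PUE = Total Facility Power / IT Equipment Power (dimensionless, >= 1.0).
# Values below are hypothetical and chosen only to illustrate the ratio.

it_equipment_power_kw = 100.0    # servers, storage, network
cooling_and_overhead_kw = 9.0    # cooling, pumps, UPS losses, lighting, ...
total_facility_power_kw = it_equipment_power_kw + cooling_and_overhead_kw

pue = total_facility_power_kw / it_equipment_power_kw
print(f"PUE = {pue:.2f}")  # 1.09: 9% overhead on top of the IT load

# Note: the Green500 efficiency metric itself focuses on the machine (IT) power,
# which is why facility-level PUE appears here as community feedback rather than
# as part of the ranking metric.
```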

Page 21

Where Do We Want To Go?

• Continue building upon the momentum of the green HPC movement …
  – Current Buy-In: ~60% of Green500 are submitted rather than derived (Level 0) numbers
• Three aspects
  – Methodologies: different levels of measurement methodologies
    • Level 0 and Level 1 → Level 2 and Level 3
  – Metrics: FLOPS/W → ???
  – Workloads: LINPACK → ???

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]  

Page 22

Top  10  of  the  Green500    

Page 23

TSUBAME-KFC (Kepler Fluid Cooling): An Ultra-Green Supercomputer Testbed at Tokyo Tech

• GRC oil-submersion rack: processors 40~70℃ ⇒ oil <40℃
• Heat exchanger: oil <40℃ ⇒ water <35℃
• Cooling tower: water <35℃ ⇒ outside (heat dissipation to outside air)
• Container: 20-foot container (16 m2)
• High-density compute nodes with latest accelerators: 40 NEC/SMC 1U servers, 2 IvyBridge CPUs + 4 K20X GPUs per node
• Peak performance: 217 TFlops (DP)
• World's top efficiency, >4 GFlops/W: Green500 #1 in Nov. 2013 and Jun. 2014!
• PUE of 1.09!

Page 24

GSIC Center, Tokyo Institute of Technology

3rd

November 2014

Page 25

The Green500 BoF, SC|14, Nov. 2014 POC: [email protected]

Submersion Liquid Cooling Supercomputer 「ExaScaler-1」 and Submersion Liquid Cooling Tank Rack System 「ESLC-8」

ExaScaler-1 Complete System (ExaScaler, Inc.)
• System Configuration: Submersion Tank Rack System ESLC-8 (ExaScaler, Inc.); 8 units per tank rack; unit motherboard X9DRG-HTF (Supermicro); single-phase direct cooling; coolant Fluorinert (3M); cooling capacity 13,800 kcal/h (at 25 °C water); pump capacity 100 L/min.
• Main Manycore Processing System: PEZY-SC processor (PEZY Computing, K.K.); 1,024 cores per processor; 65,536 system cores in total; peak processor performance 1.5 TFLOPS (double precision) @ 733 MHz; total system peak performance 96 TFLOPS with 64 PEZY-SC; system memory DDR3/1,333 MHz; total system memory 2 TB (32 GB per PEZY-SC); PCIe 3.0 x16 (32 buses).
• Host System: Xeon E5-2660 v2 (Intel), 10 cores, 2.2 GHz; 16 processors; system memory DDR3/1,866 MHz; total system memory 2 TB (128 GB per Xeon).
• Interconnect: MCX354A-FCBT card (Mellanox); InfiniBand FDR x4 (56 Gb/s); 8 or 16 ports (dual port) / 8 or 16 in use; SX6025 switch (Mellanox) with 36 ports.

ESLC-8 Submersion Cooling System (ExaScaler, Inc.)
• System Configuration: 8 units per tank rack; unit motherboard X9DRG-HTF (Supermicro); single-phase direct cooling; coolant Fluorinert (3M), approx. 200 L; cooling capacity 13,800 kcal/h (at 25 °C water); pump capacity 100 L/min.
• GPU Board: maximum board length 280 mm; 4 GPU boards per unit; 32 GPU boards in total; maximum power consumption 350 W per GPU board; total unit power capacity 1,800 W; PCIe 3.0 x16 (32 buses).
• Host System (a Xeon E5-2xxx v3 with DDR4 configuration will be offered in Q2/15): Xeon E5-2xxx v2 family (Intel), 4 to 10 cores, 1.7 to 3.5 GHz; 16 processors; system memory DDR3/1,866 MHz; total system memory up to 2 TB (128 GB per Xeon).
• Interconnect (optional): MCX354A-FCBT card (Mellanox); InfiniBand FDR x4 (56 Gb/s); 8 or 16 ports (dual port) / 8 or 16 in use.

Highlights: 56 nozzles to blow coolant up; 1,024-core, 1.5 TFLOPS processor “PEZY-SC”; dual-processor module card + base board; submersion liquid cooling tank rack “ESLC-8” with 8 units (2 Xeons + 8 PEZY-SC on X9DRG-HTF); multiple tank rack systems with 16U. The ESLC-8 tank rack system is also ready for NVIDIA/AMD/Intel GPGPUs; any GPGPU can be equipped instead.

Ranked at # of the Top500 with TFLOPS (Suiren, a 4-tank-rack system at the KEK Computing Research Center); ranked #2 on the Green500 with 4.02 GFLOPS/W.

Page 26

High Energy Accelerator Research Organization KEK

2nd

November 2014

Page 27


The Lattice-CSC Cluster at GSI

Lattice-CSC (at GSI):
• Built for lattice-QCD simulations.
• Quantum Chromodynamics (QCD) is the physical theory describing the strong force.
• Very memory intensive.

160 compute nodes:
• 4 × AMD FirePro S9150 GPUs
• ASUS ESC4000 G2S server
• 2 × Intel 10-core Ivy Bridge CPUs
• 256 GB DDR3-1600, 1.35 V
• FDR InfiniBand

Installation ongoing, 56 nodes ready. Green data center at GSI, Darmstadt, Germany.

Page 28

GSI Helmholtz Center

1st

November 2014

Page 29

Acknowledgements  

• Key Contributors
  – Balaji Subramaniam
  – Thomas Scogland
  – Vignesh Adhinarayanan
• YOU!
  – For your contributions to raising awareness of the energy efficiency of supercomputing systems

The  Green500  BoF,  SC|14,  Nov.  2014  POC:  [email protected]