Application Scaling




Page 1:

Application Scaling

Page 2:

Application Scaling
• Doug Kothe, ORNL
• Paul Muzio, CUNY
• Jonathan Carter, LBNL
• Bronis de Supinski, LLNL
• Mike Heroux, Sandia
• Phil Jones, LANL
• Brent Leback, The Portland Group
• Piyush Mehrotra, NASA Ames
• John Michalakes, NCAR
• Nir Paikowsky, ScaleMP
• Galen Shipman, ORNL
• Trey White, ORNL

Page 3:

Application Scaling

• Are there domain-specific performance metrics for science and engineering HPC applications that are more appropriate than the canonical percent of peak?

• Or more appropriate than parallel efficiency and scalability?
• If so, what are they?
• Are these metrics driving weak or strong scaling, or both? (Standard formulations of these metrics are sketched below.)
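For reference, the canonical metrics named above are usually defined as follows (standard textbook formulations, with T(P) the time to solution on P processors and R a sustained or peak flop rate; these are generic definitions, not taken from any specific application):

  \text{percent of peak} = \frac{R_{\text{sustained}}}{R_{\text{peak}}} \times 100\%

  E_{\text{strong}}(P) = \frac{T(1)}{P\,T(P)}   (parallel efficiency at fixed total problem size)

  E_{\text{weak}}(P) = \frac{T(1)}{T(P)}   (efficiency at fixed problem size per processor)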

Page 4:

Application Scaling

• What is the role of local (node-based) floating-point accelerators (e.g., Cell or GPUs) for key science and engineering applications in the next 3-5 years?

• Is there unexploited or unrealized concurrency in the applications you are familiar with?

• If so, what is it, and where?

Page 5:

Application Scaling

• Should applications continue with current programming models (Fortran, C, C++, PGAS, etc.) and paradigms (e.g., flat MPI or hybrid MPI/OpenMP) over the next decade? (A minimal hybrid MPI/OpenMP sketch follows this list.)

• If not, what needs to change?
• How might HPC system-attribute priorities change over the next decade for the science and engineering applications you are familiar with?
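As a concrete reference for the "hybrid MPI/OpenMP" paradigm named above, here is a minimal sketch in C. It is illustrative only: the local array size, the vector update, and the final reduction are placeholder work, not taken from any of the applications discussed.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;
    const int n = 1000000;   /* local problem size per rank (illustrative) */

    /* Request thread support so OpenMP threads can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double *x = malloc(n * sizeof(double));
    double local_sum = 0.0, global_sum = 0.0;

    /* Node-level concurrency: OpenMP threads share this rank's memory. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < n; i++) {
        x[i] = (rank + i) * 1.0e-6;
        local_sum += x[i];
    }

    /* Inter-node concurrency: MPI reduction across ranks. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d ranks x %d threads, sum = %g\n",
               nranks, omp_get_max_threads(), global_sum);

    free(x);
    MPI_Finalize();
    return 0;
}

Built with, e.g., mpicc -fopenmp and run with one rank per node, the OpenMP threads supply the intra-node concurrency while MPI handles communication between nodes; a flat-MPI version of the same code would instead place one rank on every core.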

Page 6:

Application Scaling
• Attributes to consider are (a simple model relating several of them follows the list):

– node peak flops
– mean time to interrupt
– wide-area network bandwidth
– node memory capacity
– local storage capacity
– archival storage capacity
– memory latency
– interconnect latency
– disk latency
– interconnect bandwidth
– memory bandwidth
– disk bandwidth
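One simple way to see how several of these attributes enter time to solution is the standard latency-bandwidth model for moving n bytes across any one level of the hierarchy; this is a generic textbook model, not one proposed in these slides:

  T(n) \approx \alpha + \frac{n}{\beta}

where \alpha is the latency of that level (memory, interconnect, or disk latency above) and \beta is its bandwidth, while the compute portion of a kernel is bounded by the node peak flops.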