SC10 Diary

1. SC10
Guy Tel-Zur
November 2010

2. My Own Diary
A subjective impression from SC10
3. Outline
The Tutorials
Plenary Talks
Papers & Panels
The Top500 list
The Exhibition
http://sc10.supercomputing.org/
4. Day 0 - Arrival
US Airways' entertainment system is running Linux!
5. 6. 7. A lecture by Prof. Rubin Landau on Computational Physics at the Education Track
3:30PM - 5:00PM, Communities / Education, Physics: Examples in Computational Physics, Part 2, Rubin Landau, Room 297
Although physics faculty are incorporating computers to enhance physics education, computation is often viewed as a black box whose inner workings need not be understood. We propose to open up the computational black box by providing Computational Physics (CP) curricula materials based on a problem-solving paradigm that can be incorporated into existing physics classes, or used in stand-alone CP classes. The curricula materials assume a computational science point of view, where understanding of the applied math and the CS is also important, and usually involve a compiled language in order for the students to get closer to the algorithms. The materials derive from a new CP eTextbook available from Compadre that includes video-based lectures, programs, applets, visualizations and animations.
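To give a flavor of the problem-solving paradigm the materials are built around, here is a minimal sketch of my own (not taken from the eTextbook), written in Python rather than a compiled language: an Euler-method integration of a ball falling with linear air drag.

    # Illustrative sketch only -- not material from Landau's CP eTextbook.
    # Euler integration of a ball falling with linear air drag, the kind of
    # small, algorithm-centered exercise the curriculum emphasizes.
    g = 9.81      # gravitational acceleration [m/s^2]
    k = 0.1       # drag coefficient per unit mass [1/s]
    dt = 0.01     # time step [s]

    t, v, y = 0.0, 0.0, 100.0    # start at rest, 100 m above the ground
    while y > 0.0:
        a = -g - k * v           # acceleration with linear drag
        v += a * dt              # Euler update of the velocity
        y += v * dt              # Euler update of the position
        t += dt

    print(f"The ball reaches the ground after about {t:.2f} s")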
8. Eclipse PTP
At last, FORTRAN has an advanced, free IDE!
PTP - Parallel Tools Platform
http://www.eclipse.org/ptp/
9. Elastic-R
10. 11. 12. 13. Visit
14. Python for Scientific Computing
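A one-screen taste of what a tutorial like this covers (my own sketch, not the tutorial's material): NumPy lets whole-array expressions replace explicit Python loops, pushing the inner loop into compiled code.

    # My own minimal sketch, not material from the SC10 tutorial.
    # NumPy evaluates whole-array expressions in compiled code, so they
    # replace slow, explicit Python loops over the elements.
    import numpy as np

    x = np.linspace(0.0, 2.0 * np.pi, 100_000)

    # Vectorized: a single array expression
    y_vec = np.sin(x) * np.exp(-x / 10.0)

    # Equivalent, but much slower, element-by-element Python loop
    y_loop = [np.sin(v) * np.exp(-v / 10.0) for v in x]

    print(np.allclose(y_vec, y_loop))    # True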
15. Amazon Cluster GPU Instances provide 22 GB of memory, 33.5 EC2 Compute Units, and utilize the Amazon EC2 Cluster network, which provides high throughput and low latency for High Performance Computing (HPC) and data intensive applications. Each GPU instance features two NVIDIA Tesla M2050 GPUs, delivering peak performance of more than one trillion double-precision FLOPS. Many workloads can be greatly accelerated by taking advantage of the parallel processing power of hundreds of cores in the new GPU instances. Many industries including oil and gas exploration, graphics rendering and engineering design are using GPU processors to improve the performance of their critical applications.
Amazon Cluster GPU Instances extend the options for running HPC workloads in the AWS cloud. Cluster Compute Instances, launched earlier this year, provide the ability to create clusters of instances connected by a low latency, high throughput network. Cluster GPU Instances give customers with HPC workloads an additional option to further customize their high performance clusters in the cloud. For those customers who have applications that can benefit from the parallel computing power of GPUs, Amazon Cluster GPU Instances can often lead to even further efficiency gains over what can be achieved with traditional processors. By leveraging both instance types, HPC customers can tailor their compute cluster to best meet the performance needs of their workloads. For more information on HPC capabilities provided by Amazon EC2, visit aws.amazon.com/ec2/hpc-applications.
Amazon Cluster GPU Instances
Not SC10 Related
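The "more than one trillion double-precision FLOPS" figure quoted above is easy to sanity-check: NVIDIA rates each Tesla M2050 at roughly 515 GFLOPS of peak double-precision performance (that number is mine, from NVIDIA's spec sheet, not from the Amazon announcement), so two of them per instance give about 1.03 TFLOPS.

    # Back-of-the-envelope check of the Cluster GPU instance's peak
    # double-precision rate. The per-GPU figure is NVIDIA's rated peak,
    # quoted from memory, not taken from the Amazon announcement above.
    M2050_PEAK_DP_GFLOPS = 515     # peak double precision per Tesla M2050
    GPUS_PER_INSTANCE = 2          # two M2050s per Cluster GPU instance

    peak_tflops = GPUS_PER_INSTANCE * M2050_PEAK_DP_GFLOPS / 1000.0
    print(f"Peak DP per instance: {peak_tflops:.2f} TFLOPS")   # ~1.03 TFLOPS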
16. The Top500
17. 18. 19. 20. 21. 22. World's #1
China's National University of Defense Technology's Tianhe-1A supercomputer has taken the top ranking from Oak Ridge National Laboratory's Jaguar supercomputer on the latest Top500 ranking of the world's fastest supercomputers. The Tianhe-1A achieved a performance level of 2.67 petaflops, while Jaguar achieved 1.75 petaflops. The Nebulae, another Chinese-built supercomputer, came in third with a performance of 1.27 petaflops. "What the Chinese have done is they're exploiting the power of [graphics processing units], which are...awfully close to being uniquely suited to this particular benchmark," says University of Illinois Urbana-Champaign professor Bill Gropp. Tianhe-1A is a Linux computer built from components from Intel and NVIDIA. "What we should be focusing on is not losing our leadership and being able to apply computing to a broad range of science and engineering problems," Gropp says. Overall, China had five supercomputers ranked in the top 100, while 42 of the top 100 computers were U.S. systems.
23. The Top 10
24. 25. 26. http://www.green500.org
27. Talks
28. SC10 Keynote Lecture: Clayton M. Christensen - Harvard Business School
29. How to Create New Growth in a Risk-Minimizing Environment
Disruption is the mechanism by which great companies continue to succeed and new entrants displace the market leaders. Disruptive innovations either create new markets or reshape existing markets by delivering relatively simple, convenient, low-cost innovations to a set of customers who are ignored by industry leaders. One of the bedrock principles of Christensen's disruptive innovation theory is that companies innovate faster than customers' lives change. Because of this, most organizations end up producing products that are too good, too expensive, and too inconvenient for many customers. By only pursuing these "sustaining" innovations, companies unwittingly open the door to "disruptive" innovations, be it "low-end disruption" targeting overshot, less-demanding customers or "new-market disruption" targeting non-consumers.
1. Many of today's markets that appear to have little growth remaining actually have great growth potential through disruptive innovations that transform complicated, expensive products into simple, affordable ones.
2. Successful innovation seems unpredictable because innovators rely excessively on data, which is only available about the past. They have not been equipped with sound theories that would allow them to see the future perceptively. This problem has been solved.
3. Understanding the customer is the wrong unit of analysis for successful innovation. Understanding the job that the customer is trying to do is the key.
4. Many innovations that have extraordinary growth potential fail, not because of the product or service itself, but because the company forced it into an inappropriate business model instead of creating a new, optimal one.
5. Companies with disruptive products and business models are the ones whose share prices increase faster than the market over sustained periods.
30. SC10 Keynote Speaker
31. High-End Computing and Climate Modeling: Future Trends and Prospects
SESSION: Big Science, Big Data II
Presenter(s): Phillip Colella
ABSTRACT: Over the past few years, there has been considerable discussion of the change in high-end computing, due to the change in the way increased processor performance will be obtained: heterogeneous processors with more cores per chip, deeper and more complex memory and communications hierarchies, and fewer bytes per flop. At the same time, the aggregate floating-point performance at the high end will continue to increase, to the point that we can expect exascale machines by the end of the decade. In this talk, we will discuss some of the consequences of these trends for scientific applications from a mathematical algorithm and software standpoint. We will use the specific example of climate modeling as a focus, based on discussions that have been going on in that community for the past two years.
Chair/Presenter Details:
Patricia Kovatch (Chair) - University of Tennessee, Knoxville
Phillip Colella - Lawrence Berkeley National Laboratory
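The "fewer bytes per flop" trend Colella mentions can be made concrete with the machine-balance ratio, sustainable memory bandwidth divided by peak floating-point rate; the numbers in the sketch below are hypothetical, chosen only to show the calculation.

    # Machine balance = memory bandwidth / peak floating-point rate.
    # The figures below are hypothetical, picked only to illustrate why
    # "fewer bytes per flop" squeezes bandwidth-bound scientific codes.
    def bytes_per_flop(bandwidth_gb_s, peak_gflops):
        return bandwidth_gb_s / peak_gflops

    older_node = bytes_per_flop(bandwidth_gb_s=10.0, peak_gflops=10.0)    # 1.0 B/flop
    newer_node = bytes_per_flop(bandwidth_gb_s=50.0, peak_gflops=500.0)   # 0.1 B/flop

    print(f"older node: {older_node:.2f} bytes/flop")
    print(f"newer node: {newer_node:.2f} bytes/flop")
    # A kernel that needs about 1 byte of memory traffic per flop runs at
    # memory speed on the newer node, reaching only ~10% of its peak.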
32. Prediction of Earthquake Ground Motions Using Large-Scale Numerical Simulations
SESSION: Big Science, Big Data II
Presenter(s): Tom Jordan
ABSTRACT: Realistic earthquake simulations can now predict strong ground motions from the largest anticipated fault ruptures. Olsen et al. (this meeting) have simulated an M8 wall-to-wall earthquake on the southern San Andreas fault up to 2 Hz, sustaining 220 teraflops for 24 hours on 223K cores of NCCS Jaguar. Large simulation ensembles (~10^6) have been combined with probabilistic rupture forecasts to create CyberShake, a physics-based hazard model for Southern California. In the highly-populated sedimentary basins, CyberShake predicts long-period shaking intensities substantially higher than empirical models, primarily due to the strong coupling between rupture directivity and basin excitation. Simulations are improving operational earthquake forecasting, which provides short-term earthquake probabilities using seismic triggering models, and earthquake early warning, which attempts to predict imminent shaking during an event. These applications offer new and urgent computational challenges, including requirements for robust, on-demand supercomputing and rapid access to very large data sets.
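Sustaining 220 teraflops for 24 hours is a staggering amount of arithmetic; a quick back-of-the-envelope calculation (mine, not the authors') puts the M8 run near 2×10^19 floating-point operations.

    # Rough total operation count for the M8 run quoted above:
    # 220 teraflop/s sustained over 24 hours of wall-clock time.
    sustained_flop_rate = 220e12    # flop/s
    seconds = 24 * 3600             # 24 hours

    total_ops = sustained_flop_rate * seconds
    print(f"~{total_ops:.2e} floating-point operations")    # ~1.90e+19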
33. Panel
34. Exascale Computing Will (Won't) Be Used by Scientists by the End of This Decade
EVENT TYPE: Panel
Panelists: Marc Snir, William Gropp, Peter Kogge, Burton Smith, Horst Simon, Bob Lucas, Allan Snavely, Steve Wallach
ABSTRACT: DOE has set a goal of exascale performance by 2018. While not impossible, this will require radical innovations. A contrarian view may hold that technical obstacles, cost, limited need, and inadequate policies will delay exascale well beyond 2018. The magnitude of the required investments will lead to a public discussion for which we need to be well prepared. We propose to have a public debate on the proposition "Exascale computing will be used by the end of the decade", with one team arguing in favor and another team arguing against. The arguments should consider technical and non-technical obstacles and use cases. The proposed format is: (a) introductory statements by each team; (b) Q&As where each team can put questions to the other team; (c) Q&As from the public.