
EPCC News, Issue 76, Winter 2014

The newsletter of EPCC, the supercomputing centre at the University of Edinburgh

In this issue

Digital health: how innovation can improve healthcare for us all
The science of sound and hearing
Reducing the environmental costs of HPC
Why we need Women In HPC

Also in this issue

Investigating liquid crystals


From the Directors

By the time you read this, details of the forthcoming ARCHER upgrade will have been released. This upgrade, increasing the size of ARCHER by around 60%, provides immediate extra HPC capability to the scientific community on an accelerated timetable from the original plans. There are many quite radical HPC technologies on the horizon; however, waiting for these to emerge would not have provided the capability our users need now, and we fully support EPSRC's decision to accelerate the upgrade of ARCHER.

We've also installed an SGI UV2000 system, our first large-scale shared-memory system for a number of years (see below). The Scottish Funding Council has funded this 8 Terabyte system to support the work of its Innovation Centres, in particular the Digital Health Institute hosted at the University of Edinburgh. With our hosting of the Farr Institute of Health Informatics Research computer system, in collaboration with Dundee University, EPCC is seeing a big increase in its engagement with health-related projects: a fascinating new area.

As part of this strategy, we have led the development of the LifeKIC proposal (see p21). This is a €428 million bid to the European Institute of Innovation & Technology, certainly our largest European bid ever. If successful, the University will lead a network of six European regions developing innovative solutions to the many challenges that complex demographic changes pose for societies in Europe and around the globe.

Alison Kennedy & Mark Parsons
EPCC Executive Directors
[email protected]
[email protected]

Meet our new Ultra system

We have added a new supercomputer to our portfolio. Working in collaboration with the Digital Health Institute, we have acquired an SGI UV2000 system.

Unlike many of our existing HPC resources, Ultra is not a cluster: a single Linux operating system controls all 512 compute cores and 8TB of memory. This offers many advantages to researchers and opens up new possibilities. Suddenly we can run a large code without complex parallelisation!

There is no doubt that Ultra is an interesting system that packs a lot of new technology inside. Now that the acceptance tests have been completed and the system is up and running, our next challenge is to do some real science on it!

Maciej Olchowik
[email protected]

Contents

3   Events: EASC 2015
3   Staff profile: Meet Andy, our new Projects Officer
4   Support for science: Nu-FuSE project review
6   Support for science: NESS, synthesising sound
8   Support for science: Auditory project, modelling human hearing
10  HPC research: ECO2Clouds, energy efficiency in HPC
12  Optimisation: Intel Parallel Computing Centre
14  Energy efficiency: Adept, improving energy use in HPC
16  Support for science: Modelling liquid crystals
17  Support for science: CP2K-UK, HPC for chemists
18  Training: MSc in HPC Summer Projects Programme
19  Support for science: ExTASY, modelling biomolecules
19  HPC research: CRESTA exascale project review
20  Events: Women In HPC, SC'14 workshop
21  Digital health bid: Better healthcare for all
22  Training: PRACE Summer of HPC

Contact us
www.epcc.ed.ac.uk
[email protected]
+44 (0)131 650 5030

EPCC is a supercomputing centre based at The University of Edinburgh, which is a charitable body registered in Scotland with registration number SC005336.


EASC 2015: Solving Software Challenges for Exascale

Exascale Applications and Software Conference
Edinburgh, UK, 21-23 April 2015

This event will bring together all the stakeholders involved in solving the software challenges of the exascale: from application developers, through numerical library experts, programming model developers and integrators, to tools designers.

Call for abstracts

Authors are invited to present research and experience in all areas associated with developing applications for exascale and the associated tools, software programming models and libraries.

How to Participate

Work can be presented as a talk or as a poster at the conference. A peer-review process will be used to select abstracts. Following the conference, authors of talks will have the opportunity to submit a paper for publication in the conference proceedings.

See the website for full details: easc2015.ed.ac.uk

Alan Gray
[email protected]

STAFF PROFILE

Projects Officer Andy McDonald talks about his work at EPCC.

I joined EPCC in December 2013 to take up the new role of Projects Officer, assisting Project Managers with the financial and administrative management of several leading EC-funded Framework 7 and UK Research Council projects.

Currently most of my time is split between two projects that are at different but equally exciting points in their lifecycles: CRESTA and FORTISSIMO.

CRESTA (see p19) is now midway through its final year, whereas FORTISSIMO has completed its second call for experiments. It looks likely that there will be over fifty experiments involving well over a hundred partners within the next few months. I'm looking forward to the challenges that a project of this size and ambition will bring.

I enjoy working at EPCC: my interest in computing began in 1983, when I was given a 16KB ZX Spectrum as a Christmas present. Many long nights were spent laboriously typing code on the Spectrum's less-than-perfect rubber keyboard, only to then spend even more time debugging! (Allinea wasn't around then.)

Whilst my hands-on experience of programming never went beyond an entry-level understanding of BASIC, it helped to foster an interest in all things computing that continues to this day.

To now find myself working in one of the world's leading HPC centres still brings a smile to my face, and I don't see that changing for a long time to come.

Andy McDonald [email protected]


Nu-FuSE: three years of exascale

The Nu-FuSE (Nuclear Fusion Simulations at Exascale) project was a three-year, G8-funded international research project investigating the challenges and requirements of fusion simulation at exascale. The project had a range of scientific and technical successes, which can be explored in more detail on the Nu-FuSE website; here are two particularly important scientific outcomes.

Adrian Jackson
[email protected]

Nu-FuSE aimed to significantly improve computational modelling capabilities for fusion, and fusion-related sciences, enhancing the predictive capabilities needed to address the key physics challenges of a new generation of fusion systems.

The project focussed on three specific scientific areas (fusion plasmas, radiation damage in materials, and the plasma edge) that require extreme-scale computing across a range of simulation codes, and that would benefit from interdisciplinary research engaging applied mathematicians, computer scientists, and science specialists.

For over a decade, both experimental observations and theoretical simulations of turbulent losses in fusion-grade tokamak plasmas have indicated that energy confinement degrades as the size of the tokamak increases, in the so-called “Bohm regime”. However, such simulations have also predicted that for sufficiently large tokamaks there will be a turnover point into a “Gyro-Bohm regime”, where the losses become independent of system size. For the next generation of fusion reactors it is of key importance that systems can operate in this favourable regime.

During Nu-FuSE, a number of plasma simulation codes were optimised and improved to enable them to scale to very large numbers of cores, and therefore to undertake longer, larger and more detailed plasma simulations.

One such code, GTC-P, has been scaled to over 1.5 million cores. Using it, we have been able to undertake large simulations showing that the magnitude of turbulent losses in the Gyro-Bohm regime can be up to 50% lower than indicated by earlier, much lower-resolution simulations, and that the Bohm to Gyro-Bohm transition is much more gradual as the plasma size increases. This finding was made possible only by going to the high-resolution, long-timescale simulations needed to achieve the physics fidelity enabled by computing at extreme scales.

Commercial fusion reactors will produce energy in the form of very high-energy neutrons which can cause enormous radiation damage to the structure of such systems.

Since this damage is inevitable, what is needed is a new generation of materials that can self-heal. However, different materials can have hugely different responses to neutron irradiation, and there is no macroscopic theory for these differences. Since experiments with irradiated samples are costly, screening via computer simulations is an essential part of the process. Moreover, with millions of atoms involved, codes that can exploit exascale systems are essential.

One class of materials which shows good radiation resistance is the so-called “ODS steels”. Through simulations undertaken in the project we have been able to identify that this is because of catalytic recombination of topological defects at the interface between iron and yttrium oxide particles. Now that the mechanism is understood, a systematic pathway to improving on existing materials can be devised.

EPCC was heavily involved in Nu-FuSE, particularly in working on optimising plasma simulation codes, and on porting materials simulation codes to GPU systems and optimising the performance on the GPU hardware.

Part of the Nu-FuSE work undertaken by EPCC won a recent HPC Innovation Award, as discussed on the EPCC website:

www.epcc.ed.ac.uk/news/epcc-wins-hpc-innovation-excellence-award

Figure 1 (top of opposite page): Electrostatic potential fluctuations in an annular region at mid-radius in the MAST tokamak, from a gyrokinetic simulation of the saturated turbulence using the GS2 code. A wedge of plasma has been removed from the visualisation to show the nature of the fluctuations inside the annulus.

Figure 2 (opposite page): Nanoparticles of yttria self-assemble in a steel matrix. The Kimura group has shown that they reduce radiation damage; our simulations show this is due to catalytic healing.

For further information on the work undertaken by Nu-FuSE, visit the project website:

www.nu-fuse.com


A model of the ITER Tokamak. Image: ITER.


Simulating the acoustics of 3D rooms

The NESS project is developing next-generation sound synthesis techniques based on physical models of acoustical systems. One key system targeted by NESS is the acoustics of 3D rooms.

Brian Hamilton
Reid School of Music, University of Edinburgh
[email protected]

Computer simulation of 3D room acoustics has many practical applications, such as the design of concert halls, virtual reality systems, and artificial reverberation effects for electroacoustic music and video games.

The sheer size of some listening spaces (eg concert halls), and the desire to represent sound waves up to the limit of human hearing (20kHz), has meant that the computational complexity of this problem has been out of the reach of grid-based methods (eg finite difference methods) for some time.

To reproduce the acoustics of the Royal Albert Hall (86,650 m3) up to the limit of human hearing, a grid-based method using the simplest time-integration methods would require, by sampling considerations alone, at least 1TB of memory (approx. 1cm mesh resolution, single precision) and nearly 6 petaFLOPS for real-time output. A smaller example is the Usher Hall in Edinburgh (15,700 m3), which requires 190GB of memory and 1 petaFLOPS for real-time output: only manageable by a supercomputer like ARCHER, the national HPC service hosted by EPCC.

Albert Hall by jim crossley, Flickr.

Because of the size of these computational loads, commercial packages for the simulation of room acoustics employ simplifying assumptions that allow the use of cheaper ray-based techniques borrowed from the graphics community, but these do not capture essential details of room acoustics, such as wave diffraction and interference.

Wave-based numerical methods, such as finite differences, promise to provide all these details. They also allow for the embedding of virtual instruments in a 3D room, with two-way coupling between the acoustic field in air and the virtual instrument (see blog post [1] and EPCC News 71 [2]).

Simulations of concert halls at full audio rates may be beyond grid-based methods on commodity hardware for some years yet, but current personal computing hardware is sufficient for smaller spaces at audio rates, or for concert-hall-sized spaces at lower sample rates. Even relatively small simulations, however, may require teraFLOPs of computation for each second of output, leading to long simulation times.

Fortunately, the finite difference algorithms commonly used for these 3D simulations are excellent candidates for HPC on GPUs or multi-core CPUs. These algorithms belong to the class of explicit methods, meaning that each point-wise update in the algorithm may be computed in parallel. Also, the stencil operation at each point is conducive to memory-coalesced reads with neighbouring points (threads), which is essential for GPU speed-ups. Without parallelisation, one second of audio output from a room of only 100m3 would require nearly 10 core-hours of computation on a current desktop PC. With parallelisation, this calculation can be reduced to tens of minutes: not suitable for real-time use, but good enough for offline applications. Typically, with GPU acceleration one can expect at least a 10-times speed-up over serial CPU codes. Speed-ups of 40-70 times are common with a professional-grade NVIDIA card, such as the Tesla K20 GPU cards employed in the NESS project and hosted at EPCC. Further speed-ups can be achieved by using multiple GPU cards in parallel (see study [3] by Craig J. Webb and Alan Gray).
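To make the structure of such an explicit update concrete, the sketch below shows the standard second-order finite difference update for the 3D wave equation on a regular grid, parallelised over grid points with OpenMP. It is an illustrative example only, not NESS code: the array names, the IDX indexing macro and the Courant number lambda are assumptions made for the sketch.

/* Minimal sketch of an explicit FDTD update for the 3D wave equation
 * (illustrative only, not NESS project code). u1 and u0 hold the two
 * previous time levels; u2 receives the new one. lambda = c*dt/dx must
 * satisfy the 3D stability condition lambda <= 1/sqrt(3). */
#include <stddef.h>

#define IDX(x, y, z, Ny, Nz) (((size_t)(x) * (Ny) + (y)) * (Nz) + (z))

void fdtd_step(double *u2, const double *u1, const double *u0,
               int Nx, int Ny, int Nz, double lambda)
{
    const double l2 = lambda * lambda;
    /* Each interior point depends only on the previous two time levels,
     * so every update is independent and can be computed in parallel. */
    #pragma omp parallel for collapse(2)
    for (int x = 1; x < Nx - 1; x++)
        for (int y = 1; y < Ny - 1; y++)
            for (int z = 1; z < Nz - 1; z++) {
                double lap = u1[IDX(x+1,y,z,Ny,Nz)] + u1[IDX(x-1,y,z,Ny,Nz)]
                           + u1[IDX(x,y+1,z,Ny,Nz)] + u1[IDX(x,y-1,z,Ny,Nz)]
                           + u1[IDX(x,y,z+1,Ny,Nz)] + u1[IDX(x,y,z-1,Ny,Nz)]
                           - 6.0 * u1[IDX(x,y,z,Ny,Nz)];
                u2[IDX(x,y,z,Ny,Nz)] = 2.0 * u1[IDX(x,y,z,Ny,Nz)]
                                       - u0[IDX(x,y,z,Ny,Nz)] + l2 * lap;
            }
}

On a GPU the same update becomes one thread per grid point, with neighbouring threads reading neighbouring memory locations, which is what makes the coalesced reads mentioned above possible.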

Another important consideration is the accuracy of these numerical methods. Approximation errors may require that the grid resolution be set higher than the bare minimum, and this has significant impacts on computational costs. Generally, as the grid resolution in 3D is increased, the memory requirements grow cubically and the required FLOPs grow quartically (halving the grid spacing gives eight times as many points, and stability forces the time step to shrink in proportion to the spacing, so roughly sixteen times the arithmetic). While any level of accuracy may be achieved by setting the grid resolution high enough, this strategy quickly becomes impractical for 3D rooms. As such, much of the research in this area, and within the NESS project itself, is focussed on improving the cost-effectiveness of these algorithms. Often a trade-off arises between increased accuracy and the ease of implementing suitable boundary conditions for room acoustics.

One class of algorithms, namely implicit methods, is known to offer higher accuracy at the cost of solving a linear system of equations at each time-step. Implicit methods are often formulated as alternating direction implicit (ADI) schemes, which allow the use of direct linear system solvers, ultimately based on Gaussian elimination. However, the incorporation of boundary conditions suitable for room acoustics into ADI schemes remains an open problem.

Another approach, common in CFD applications, is to resort to iterative methods for the linear system of equations. The Jacobi iterative method turns out to be a simple extension of explicit updates, and is thus straightforward to parallelise on a GPU.

Recently we investigated this approach (see study[4]) on 3D room acoustics with boundary conditions for simplified room geometries. It was found that such implicit methods are suitable for GPU implementations and are more cost effective than explicit counterparts when high accuracy is desired.
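To illustrate why the Jacobi method is such a natural fit here, the sketch below shows one Jacobi sweep for a generic 7-point linear system of the kind an implicit time step produces. The constant diagonal and off-diagonal coefficients and the indexing macro are invented for the example; this is not the scheme studied in [4].

/* One Jacobi sweep for a 7-point linear system A*u = b arising from an
 * implicit time step (illustrative coefficients, not the scheme in [4]).
 * Structurally it is the same as an explicit stencil update, so it maps
 * to GPUs or OpenMP in the same way; several sweeps are performed per
 * time step until the iteration has converged well enough. */
#include <stddef.h>

#define IDX(x, y, z, Ny, Nz) (((size_t)(x) * (Ny) + (y)) * (Nz) + (z))

void jacobi_sweep(double *unew, const double *uold, const double *b,
                  int Nx, int Ny, int Nz, double diag, double offd)
{
    #pragma omp parallel for collapse(2)
    for (int x = 1; x < Nx - 1; x++)
        for (int y = 1; y < Ny - 1; y++)
            for (int z = 1; z < Nz - 1; z++) {
                size_t i = IDX(x, y, z, Ny, Nz);
                double nbr = uold[IDX(x+1,y,z,Ny,Nz)] + uold[IDX(x-1,y,z,Ny,Nz)]
                           + uold[IDX(x,y+1,z,Ny,Nz)] + uold[IDX(x,y-1,z,Ny,Nz)]
                           + uold[IDX(x,y,z+1,Ny,Nz)] + uold[IDX(x,y,z-1,Ny,Nz)];
                unew[i] = (b[i] - offd * nbr) / diag;   /* solve the i-th row for u[i] */
            }
}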

Future work will focus on hybridising high-accuracy methods with finite volume techniques, which become necessary for modelling irregular room geometries.

[1] http://www.epcc.ed.ac.uk/blog/2013/08/05/next-generation-sound-synthesis

[2] http://www.epcc.ed.ac.uk/sites/default/files/EPCCNews71_web_0.pdf

[3] http://scitation.aip.org/content/asa/journal/poma/19/1/10.1121/1.4800534

[4] http://www.dafx14.fau.de/papers/dafx14_brian_hamilton_revisiting_implicit_finit.pdf

Time evolution of a 3D acoustic field in a room with obstacles.

NESS is a five-year European Research Council-funded project currently in its third year.

It is an exploratory project, concerned entirely with synthetic sound, in particular, numerical simulation techniques for physical modelling sound synthesis.

The aim of the project is to explore numerical techniques, especially finite difference time domain methods, for a variety of instrument families. As such methods are numerically intensive, part of the project is devoted to looking at implementations on parallel architectures (multicore processors and general purpose graphics processing units).

EPCC manages NESS, and ports Matlab codes written by the Acoustics PhD students to C and CUDA.

www.ness-music.eu


Using HPC to understand human hearing

Dr Michael Newton
Acoustics & Audio Group, University of Edinburgh
[email protected]

Above: The response of a cochlea model to a single-frequency input stimulus, plotted in both time and space. The plot reveals the way a travelling wave forms along the cochlea which, upon reaching its ‘best place’, causes a maximal cochlear vibration at that spatial location. The cochlear motion beyond this location falls away rapidly, somewhat akin to a wave breaking on a beach. The cochlea is thus able to act like a kind of ‘mechanical’ frequency analyser, providing the brain with remarkably accurate time and frequency information from the very moment of transduction.

Opposite page: A finite difference time domain simulation of the cochlear travelling wave. A single-frequency tone is used to stimulate the cochlea via the ear drum (left side), and this wave propagates until it finds its ‘best place’ along the cochlear partition, the local properties of which closely match the stimulus frequency. This place-frequency mapping allows the cochlea to function as a kind of frequency analyser, amongst other things.

The Auditory pilot project, involving EPCC and the University's Acoustics and Audio Group, sought to use HPC to achieve faster run times for computational models of the human hearing organ. Such models are routinely solved in commercial environments such as Matlab, and a single simulation run can take many hours to complete. The Auditory project investigated ways in which these models might harness the power of HPC to reduce run times, providing greater opportunities for the rapid development and use of such models in a range of research and clinical environments.

The human ear is tasked with converting acoustical sound waves in the environment into neural signals that can be interpreted by the brain. Sound arrives first at the outer ear, or pinna, which forms the most obvious visible component of the auditory system. The sound then travels into the ear canal and is focused onto the ear drum, or tympanic membrane, which forms part of the middle ear. This membrane helps to efficiently transfer acoustical vibrations in air into vibrations within the fluid-filled cochlea, or inner ear, which is coupled to the ear drum via the three smallest bones in the human body. The process of transduction itself, where mechanical vibration is converted into electrical signals that are carried to the brain via neurons, happens within the cochlea. This step is the most complex and remarkable feature of this acoustical chain.

The Auditory project was concerned with speeding up computational simulations of the acoustical-neural transduction process that takes place within the cochlea.

This process relies upon the extraordinary morphology of the cochlea, which consists of a spiralled, fluid-filled tubular structure approximately 35mm long, bisected along its internal length by the cochlear partition. The cochlear partition is formed from a complex array of membranes and cells, but can be thought of as a ribbon-like structure that is stiff along its length and compliant in the transverse direction. It responds to pressure waves within the cochlea in a manner somewhat akin to a line of buoys bobbing up and down on the sea surface. One feature of this unique morphology is the variation in the physical properties of the ‘ribbon’ along the cochlear length, which allows a precise place-frequency mapping to take place for incoming sound waves, and leads to the cochlea's key role as a kind of frequency analyser. Having a good model of this mechanical structure, and understanding its role in the transduction process, lies at the heart of unpicking the subtleties of cochlear function.

One approach taken in modeling the cochlear transduction process is to discretise the cochlear partition along its length, and to represent each of these discrete ‘elements’ using a set of carefully tuned mechanical oscillators. The whole family of elements that make up the digital cochlea, perhaps 4000 or more, can then be coupled together by acoustical equations that describe sound wave propagation within the surrounding cochlear fluid. The place-frequency mapping seen in the real cochlea can be mimicked by careful tuning of each of the individual elements.

The computational models that result from describing the preceding system consist of a large set of coupled differential equations. Various methods exist for solving these kinds of systems, including commercial solvers built in to platforms such as Matlab.

The Auditory project explored a range of alternative environments, including the Open Source PETSc library. PETSc has functionality that is broadly similar to the Matlab solvers, but which allows greater control over the underlying solver routines, as well as extension to parallel architectures. This can come at the cost of reduced ease of use, compared to Matlab. At the other end of the solver spectrum, the project also explored the possibility of designing hand-coded algorithms, such as by using Finite Difference Time Domain techniques, to solve the cochlear equations. This latter approach has proved of particular interest, with some promising early results.
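For illustration only, the sketch below shows the general shape of such a hand-coded explicit time-stepping loop for a chain of damped, tuned oscillators with nearest-neighbour coupling. The parameters, the coupling term and the uniform drive are invented for the example and are a stand-in for, not a copy of, the project's cochlear model.

/* Illustrative explicit time stepping for a bank of N damped oscillators,
 * each tuned to its own frequency omega[i] and weakly coupled to its
 * neighbours. This is a schematic stand-in for a real cochlear model. */
void step_oscillators(double *y, double *v, double *accel_buf,
                      const double *omega, int N, double dt,
                      double damping, double coupling, double drive)
{
    for (int i = 0; i < N; i++) {
        /* nearest-neighbour coupling stands in for the fluid coupling */
        double left  = (i > 0)     ? y[i - 1] : 0.0;
        double right = (i < N - 1) ? y[i + 1] : 0.0;
        accel_buf[i] = -omega[i] * omega[i] * y[i]   /* tuned stiffness   */
                       - damping * v[i]              /* viscous damping   */
                       + coupling * (left + right - 2.0 * y[i])
                       + drive;                      /* stimulus pressure */
    }
    for (int i = 0; i < N; i++) {   /* update once all accelerations are known */
        v[i] += dt * accel_buf[i];  /* semi-implicit (symplectic) Euler step   */
        y[i] += dt * v[i];
    }
}

Because each time step touches every element with a fixed, local pattern, loops like this parallelise straightforwardly, which is what makes the hand-coded route attractive compared with a general-purpose black-box solver.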

The Auditory project provided a valuable environment within which a range of approaches to solving cochlear models could be explored. Knowledge gained will now be put to use in further development of these models, and will inform forthcoming external grant applications.

It is hoped that successful development of HPC tools tuned specially for such models may one day lead to their use as diagnostic tools within clinical environments.

HPC challenges

“This project was particularly challenging as there is no direct replacement for the ordinary differential equation solver used by the Matlab code. Our initial profiling of the Matlab code showed that almost the entire runtime (95% or more) was taken up with this solver and thus any attempts to parallelise the code would require a C/C++ replacement to be found.

“Whilst solvers for ordinary differential equations are relatively commonplace, the number of possibilities is greatly reduced when a non-zero right-hand side with a time- and space-dependent mass matrix is present. This was the case with the Matlab code.

“We found just two possible replacement solvers, PETSc and SUNDIALS. We focussed our efforts on PETSc as it allows for parallel computations using MPI.”

Fiona Reid

Applications Consultant, EPCC
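For readers unfamiliar with PETSc, the skeleton below sketches how a system of the form M(t) du/dt = f(t,u) can be posed to PETSc's TS time-stepper through the implicit residual F(t,u,u') = M(t)u' - f(t,u) = 0. The residual routine is left as a placeholder, and the vector size and time step are invented; this is not the Auditory project's code.

/* Sketch only: posing M(t) du/dt = f(t,u) with PETSc's TS integrator.
 * The residual callback evaluates F(t,u,udot) = M(t)*udot - f(t,u). */
#include <petscts.h>

static PetscErrorCode FormIFunction(TS ts, PetscReal t, Vec U, Vec Udot,
                                    Vec F, void *ctx)
{
    /* Placeholder: a real routine would assemble M(t)*Udot - f(t,U)
     * for the cochlear equations using the model context in ctx. */
    return 0;
}

int main(int argc, char **argv)
{
    TS  ts;
    Vec u;
    PetscInitialize(&argc, &argv, NULL, NULL);

    VecCreate(PETSC_COMM_WORLD, &u);
    VecSetSizes(u, PETSC_DECIDE, 4000);     /* e.g. a few thousand elements */
    VecSetFromOptions(u);

    TSCreate(PETSC_COMM_WORLD, &ts);
    TSSetIFunction(ts, NULL, FormIFunction, NULL);  /* implicit form F = 0 */
    TSSetType(ts, TSBEULER);                /* or chosen at run time via -ts_type */
    TSSetTimeStep(ts, 1e-6);
    TSSetFromOptions(ts);                   /* end time etc from command-line options */
    TSSolve(ts, u);

    TSDestroy(&ts);
    VecDestroy(&u);
    return PetscFinalize();
}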

This work was funded by the College of Humanities & Social Sciences Challenge Investment Fund.


Greener Clouds

Clouds are often referred to as a “green technology”, but it has been estimated that the energy consumption of Cloud computing will triple by 2020 if energy solutions are not applied.

Kostas Kavoussanakis, [email protected]
Gareth Francis, [email protected]

In addition to optimising energy use, Cloud providers are working to minimise their CO2 output. ECO2Clouds [1] is an EU-funded project that has investigated how to reduce CO2 emissions by managing the use of computing resources over federated Clouds. As the project draws to an end, we review its achievements.

ECO2Clouds developed a carbon-aware Scheduler that decides on which site and on which Physical Host to deploy an application in order to minimise the CO2 emissions associated with it, while continuing to satisfy performance requirements. It also developed an Application Controller that analyses the status of applications and enacts run-time adaptation actions as required.

EPCC led the design of the ECO2Clouds architecture, which specifies, amongst other aspects, what a Cloud must offer to make it suitable for optimisations like ECO2Clouds. Broadly speaking, such Clouds need to provide information about physical host and VM status; resource management, including Virtual Machine (VM) migration; and numerous metrics. These are not commonly available on Cloud sites.

ECO2Clouds was piloted with BonFIRE [2], the multi-Cloud testing and experimentation facility. EPCC runs a BonFIRE site, so the metrics work was of particular importance to us. As discussed in EPCC News 74, we developed the Monitoring Collector component that gathers data from the sites for the Scheduler to use. We were also heavily involved in defining and implementing the metrics at site and VM level. BonFIRE has a strong culture of providing metrics for its users, and these were enhanced with new ones from ECO2Clouds; the ECO2Clouds Scheduler, for example, uses them to inform deployment decisions. BonFIRE has benefitted directly from the metrics and instrumentation developed by ECO2Clouds: it has further distanced itself from public Cloud offerings and established itself as a state-of-the-art resource for services experimentation.

New metrics

• Managed power distribution units (PDUs): deployed across the BonFIRE sites, they provide accurate power-consumption information for each Physical Host. These values can be accessed by users.

• VM metrics, especially VM Power Consumption: a metric derived from the power measured at the managed PDUs and the CPU utilisation of the VM as seen by the Physical Host (a simplified sketch of this kind of apportioning follows this list).

• Availability of Physical Hosts: defined for ECO2Clouds, and already used daily by the Fed4FIRE project [3].

• National Grid details of the energy mix of the power used: the combination of power consumed and energy mix allows ECO2Clouds and other BonFIRE clients to research and optimise resource allocation for CO2. This approach can be extended to classes of fuels, eg nuclear, gas and renewables.
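To give a flavour of how such a derived metric can be computed, here is a simplified sketch that apportions a host's PDU-measured power to a VM in proportion to its CPU use. The split into an assumed idle baseline plus a dynamic remainder is a convenient assumption for the example, not the formula defined by ECO2Clouds.

/* Simplified, illustrative apportioning of PDU-measured host power to a VM.
 * An assumed idle baseline is shared equally between the VMs on the host
 * and the dynamic remainder is split by CPU utilisation; this is an example
 * heuristic, not the metric actually defined by ECO2Clouds. */
double vm_power_estimate(double host_power_w,   /* measured at the managed PDU     */
                         double host_idle_w,    /* assumed idle baseline for host  */
                         double vm_cpu_share,   /* VM's share of host CPU, 0..1    */
                         int    n_vms_on_host)
{
    if (n_vms_on_host < 1) n_vms_on_host = 1;
    double dynamic_w = host_power_w - host_idle_w;
    if (dynamic_w < 0.0) dynamic_w = 0.0;       /* guard against noisy readings    */
    return host_idle_w / n_vms_on_host + dynamic_w * vm_cpu_share;
}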

A recent development by EPCC, in collaboration with Inria (the French Institute for Research in Computer Science and Automation), has been the design and prototype implementation of a mechanism that allows sites to tell users how much of the site's total CO2 emissions can be attributed to each user's virtual resources, and to forecast how this figure is likely to change. The motivation for this work is to use the knowledge gained from ECO2Clouds to produce an architecture designed for production Cloud environments rather than Cloud test beds. The concept is that multiple Clouds would provide information to the user across three timescales:

1. Non-committing, long-term forecasts of carbon footprint for virtual resources with specified characteristics

2. Short-term future (eg 24 hours), committing quotes for resources with specified characteristics

3. Past emissions for a running or deleted resource.

All participating sites would need to expose a common API, but each site is wholly responsible for its own service. The details of the Cloud infrastructure, algorithms, and information used to produce the forecasts are all specific to the site and are not exposed. This would allow, for example, commercial infrastructure providers to offer such a service without revealing details about the underlying infrastructure, or for new information sources to be incorporated seamlessly. Subject to suitable standardisation, this mechanism could underpin competition between sites on the basis of CO2 emissions. This could also potentially extend to renewable-energy use and other green characteristics.

In summary, EPCC’s engagement in ECO2Clouds has been very successful. It has exposed the Centre to technologies and methodologies which are in line with the University’s endeavours for social responsibility and sustainability. It complements related activities, like ADEPT [4], approaching the subject of energy for computation from the HPC perspective. Notably, ECO2Clouds results are already part of our long-standing BonFIRE offering, which highlights the relevance of ECO2Clouds for EPCC.

ECO2Clouds was partially funded by the European Commission under the 7th Framework Programme.

[1] www.epcc.ed.ac.uk/projects-portfolio/eco2clouds-experimental-awareness-co2-federated-cloud-sourcing

[2] www.epcc.ed.ac.uk/bonfire-testbed

[3] www.epcc.ed.ac.uk/projects-portfolio/fed4fire

[4] www.epcc.ed.ac.uk/projects-portfolio/adept


Project website: eco2clouds.eu


The Intel Parallel Computing Centre (IPCC) at EPCC was established to optimise codes for Intel processors, and particularly to port and optimise scientific simulation codes for Intel Xeon Phi co-processors. This Grand Challenges Optimisation Centre has made significant progress in recent months.

EPCC runs the ARCHER supercomputer for EPSRC and the other UK research funding councils. As ARCHER contains a large number of Intel Xeon processors (although no accelerators or co-processors), we have a strong focus on ensuring that scientific simulation codes are highly optimised for these processors too. The IPCC work at EPCC has therefore concentrated on improving the performance of a range of codes that are heavily used for computational simulation in the UK, on both Intel Xeon and Intel Xeon Phi processors.

Adrian Jackson
[email protected]

The Intel Xeon Phi co-processor is an interesting piece of hardware, with characteristics of both standard processors and accelerators (such as GPUs).

Unlike GPUs, the Xeon Phi uses fully functional processor cores (Intel Pentium P54C) with cache coherency which, whilst simplistic and low-powered compared to modern processor cores, are able to run full programs and even operating systems. These cores have been augmented with a large (512-bit) vector unit, meaning that in each clock cycle a core can operate on 8 double-precision numbers at once. Furthermore, the cores have hyperthreading (symmetric multi-threading) functionality, enabling each core to efficiently host 4 virtual threads. A Xeon Phi generally has 60 of these cores on a single co-processor, along with 6-16GB of fast GDDR5 memory, depending on the model. However, the cores run slower than modern processors, generally clocked at around 1GHz.

For our IPCC collaboration, which is initially scheduled to run for two years, we are looking at optimising four scientific simulation programs: CP2K, GS2, COSA and CASTEP. During the first six months of the work we have been actively working on CP2K and GS2.

CP2K is a materials science simulation code that EPCC has a long history of working with (for example, see page 17), and we previously carried out a port of CP2K to Xeon Phi co-processors through the PRACE project. That work found that, whilst it is possible to compile and run CP2K as a native application on the Xeon Phi, the performance did not match that of running it on standard Intel processors. However, we have a number of possible avenues for improving that performance, particularly around the vectorisation of the code by the compiler. As the Xeon Phi has very wide vector units but a low clock speed, getting good performance from the co-processor requires ensuring that any code running on it vectorises efficiently.
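As a generic illustration of the kind of loop a compiler can vectorise well (this is not CP2K code), consider the sketch below: unit-stride access and independent iterations let the compiler, here nudged with an OpenMP 4.0 simd directive, fill the 512-bit vector units with 8 double-precision operations per instruction.

/* Illustrative vectorisable kernel (not CP2K code): unit-stride arrays and
 * independent iterations allow the compiler to use the 512-bit vector
 * units, processing 8 doubles per instruction on the Xeon Phi. */
void scaled_sum(double *restrict y, const double *restrict a,
                const double *restrict b, double alpha, int n)
{
    #pragma omp simd
    for (int i = 0; i < n; i++)
        y[i] = alpha * a[i] + b[i];
}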

Optimising the performance of GS2 on the Xeon Phi

GS2 is an initial-value flux-tube gyrokinetic plasma simulation code, written in Fortran and parallelised using MPI. It has recently been optimised for large-scale simulations; however, the nature of the simulations undertaken means that it is heavily dominated by MPI communications when using large numbers of processes. Therefore, if we want to port GS2 to the Xeon Phi, we need to address the problem of the MPI cost increasing as we go to large process counts. The natural solution to this challenge is a hybrid parallelisation: adding OpenMP to the existing MPI parallelisation enables the use of a higher number of cores without requiring larger numbers of MPI processes to be employed (which would require splitting the simulation domain across more processes and thus entail more MPI communications).
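The overall shape of such a hybrid scheme is sketched below in schematic form; it is not taken from GS2. Each MPI rank owns part of the domain and spawns OpenMP threads for the local work, so the number of cores in use can grow without multiplying the number of communicating processes.

/* Schematic hybrid MPI+OpenMP structure (not GS2 source): each MPI rank
 * owns a portion of the domain and uses OpenMP threads for the local work,
 * so core counts can grow without adding more communicating ranks. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;
    /* Request threaded MPI: threads do the computation, and only the
     * master thread of each rank performs MPI calls in this scheme. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel
    {
        #pragma omp for
        for (int i = 0; i < 1000000; i++) {
            /* ... local grid work for this rank's portion goes here ... */
        }
        #pragma omp master
        {
            /* ... halo exchange with neighbouring ranks via MPI ... */
        }
    }

    if (rank == 0)
        printf("ran with %d MPI ranks x %d OpenMP threads\n",
               nranks, omp_get_max_threads());
    MPI_Finalize();
    return 0;
}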

We have implemented a hybrid version of GS2, and explored the performance of the parallelisation versus the original MPI code. We have also run the hybrid code on a Xeon Phi co-processor (using a different, smaller, testcase to fit into the memory available on a single Xeon Phi).

When we ran the pure MPI code on the Xeon Phi, it performed, at best, 1.5x slower than the same code running on one processor of the host machine using 8 MPI processes (the host machine has two eight-core Intel Xeon processors and two Xeon Phi co-processor cards), and 2.3x slower than the same code using both host processors (16 MPI processes). However, this compared 236 MPI processes on the Xeon Phi with 16 MPI processes on the Xeon processors, an unfair comparison because the large number of MPI processes on the Xeon Phi creates higher communication overheads in GS2.

To reduce the number of MPI processes required on the Xeon Phi, we ran the hybrid version of GS2 using 3 OpenMP threads per MPI process. This enabled us to use a third of the MPI processes, and gave performance on a par with one host processor, and 1.5x slower than the two-processor result.

Whilst not ideal, this performance is still good progress, and gives us hope of optimising GS2 on the Xeon Phi so that it runs faster than two standard processors.

Figure 1: Scaling of the GS2 MPI and hybrid versions on ARCHER. The plot shows run time in minutes against the number of ARCHER nodes (24 cores per node), for the pure MPI code and for hybrid runs with 2, 3, 4 and 6 OpenMP threads per MPI process.

Scaling on ARCHER

Figure 1 above shows the scaling we get on ARCHER for a representative simulation case.

We can observe that at lower core counts (256 and 512 cores) the original MPI parallelisation out-performs the hybrid code. However, when scaling beyond those core counts the hybrid version performance is better, with up to 25% quicker simulations than the pure MPI version. It is also clear from the graph that, whilst the hybrid code is faster at higher core counts, the parallelisation is only more efficient when using small numbers of OpenMP threads, with 2 or 3 OpenMP threads per MPI process giving optimal performance for this case.


Energy-efficiency in parallel computing

The Adept project is addressing the challenge of the energy-efficient use of parallel technologies. It builds on the expertise of software developers from high-performance computing (HPC) in exploiting parallelism for performance, and on the expertise of Embedded systems engineers in managing energy usage. Adept is developing a tool to guide software developers, helping them model and predict the power consumption and performance of parallel software and hardware.

Michele Weiland
[email protected]

Adept project meeting at Uppsala University, March 2014.

The low-tech approach to predicting energy usage!

The strength of the HPC world lies primarily in software application parallelisation: concurrent computation is used to reduce the overall time an application requires to run to completion. As a result, HPC software developers are also experts in parallel performance analysis and performance optimisation. The Embedded systems sector, on the other hand, excels in its management of energy usage, because it is often constrained by fixed power and energy budgets.

The strengths of one sector are the relative weaknesses of the other. In HPC, power management and power efficiency are in their infancy but becoming important, with HPC systems requiring more and more power. Embedded programmers are increasingly forced to use parallel computing techniques that are more familiar to HPC programmers, because recent advances in low-power multi-core processors have widened the choice of hardware architectures for Embedded systems while energy and power budgets continue to be constrained.

Adept is investigating the implications of parallelism in programming models and algorithms, as well as the choice of hardware, on energy and power consumption.

It is important to gain a clear understanding of how factors such as redundant computations or algorithmic choices affect the power profile of a parallel application, and how this profile can be modified in a predictable way by off-loading compute-intensive parts of an application to low-power hardware. We are progressing the state of the art in application profiling, performance modelling and energy-usage modelling in order to build a tool that integrates performance and energy-consumption modelling for parallel Embedded and HPC systems.

Adept is advancing knowledge of how parallel software and hardware use power. Being able to reduce the amount of power required to run large-scale applications on an HPC system will have a significant impact on the total cost of ownership and on the carbon footprint of such a system.

Adept also aims to increase programmer productivity by creating a tool that will be able to rapidly predict both the performance and the power usage of parallel systems, greatly reducing the need for speculative implementations to answer “what if?” questions during the software development process. This will enable developers to make informed decisions about hardware and software implementation that are economically viable in terms of performance and cost.

Adept benchmarks

Before implementing energy-efficient algorithms or choosing the most power-performance efficient hardware, it is first necessary to understand and quantify the factors that dictate power consumption.

Benchmarks can be used to gain a deeper understanding of how implementation and architecture choices can impact on the overall efficiency of software. If we can identify the power usage profiles of computational patterns, it will become possible to optimise for energy and power in the same way we optimise for performance today.

Adept has designed and implemented a set of benchmarks that can be used to measure both performance and power usage on a wide range of hardware architectures. The benchmarks are representative of the whole spectrum of parallel computing (from Embedded to HPC) and provide measurements that can be interpreted on a wide range of platforms. They also cover different levels of compute granularity, from single instructions and operations up to specific computational patterns in the form of kernels.
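As a flavour of what the finest level of granularity might look like, here is a generic micro-benchmark sketch (not one of the Adept benchmarks): it times a single repeated arithmetic operation, the kind of measurement that can then be set against power readings taken over the same interval by external power instrumentation.

/* Generic micro-benchmark sketch (not an Adept benchmark): time a simple
 * operation repeated many times, so the result can be correlated with
 * power readings taken over the same interval by external meters. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

int main(void)
{
    const long iters = 200000000L;
    volatile double acc = 0.0;      /* volatile stops the loop being optimised away */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++)
        acc += 1.000000001 * (double)i;   /* the measured operation */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    printf("%ld iterations in %.3f s (%.1f ns each), acc=%g\n",
           iters, secs, secs / iters * 1e9, acc);
    return 0;
}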

More details can be found in our paper “Benchmarking for power consumption monitoring”, which was published as part of the International Conference on Energy-Aware High Performance Computing: http://link.springer.com/article/10.1007/s00450-014-0260-1

Find out more Website: www.adept-project.eu

Twitter: @adept_project

ADEPT project partners: Uppsala University, Sweden; Ericsson AB, Sweden; Alpha Data Parallel Systems Ltd, UK; University of Ghent, Belgium.

The project started in September 2013 and will run for 36 months.

Visualising CPU and memory power measurements using the Adept Visualisation Tool.


Modelling liquid crystals

When colloidal nanoparticles are mixed into a liquid crystal, they disrupt the orientational order in the fluid and create defects close to the particle’s surface where the liquid-crystalline order is strongly suppressed. To minimise the energetic costs of forming these defects, it is generally advantageous for defects to be shared between nanoparticles. The particles have a generic tendency to aggregate, and these aggregates may further self-assemble into more complex structures.

The surprising wealth of colloidal-blue phase composites that was found is the consequence of a complex competition between two self-assembly principles, namely templating according to the structure of the liquid-crystalline host material and aggregation of the dispersed particles. Which of these factors dominates depends on the strength and type of surface interactions. For low surface interaction strengths the templating principle dominates. Colloidal crystals, gels or twisted rings can then arise. For large surface interaction strengths, interparticle attractions defeat the templating tendency and we predict disconnected aggregates, percolating gels, or helical undulating colloidal ropes. The variation of structure seen in the simulations suggests that some careful characterisation of parameters is required to understand the final outcome and to be able to reproduce it with a given experimental protocol.

Besides being of fundamental interest to material science, these structures exhibit tunable elastic and optical properties and offer exciting prospects for applications. Many of these new materials are metastable: two or more structures can arise under identical thermodynamic conditions. This new class of soft materials thus offers a route to switchable, multistable devices that consume energy only during the switching process and which are interesting for metamaterials, biosensors, transformational optical devices or energy-efficient optical technologies such as smart glass and e-paper.

Oliver Henrich EPCC Advanced Fellow [email protected]

Liquid crystals have caused a major technological revolution and are now widely used in displays and photonic devices. EPCC is collaborating with Edinburgh researchers to investigate their potential.

Research paper

In a recent publication, a team of researchers at EPCC and the Institute for Condensed Matter & Complex Systems (School of Physics & Astronomy, University of Edinburgh), showed for the first time by large-scale computer simulation that mixtures of colloidal particles and so-called chiral nematic liquid crystals or blue phases can give rise to new regular structures.

These structures can be tuned via particle concentration, by varying the surface interactions of the liquid crystal host with both the particles and confining walls, by the formation protocol or by external electric fields.

K. Stratford, O. Henrich, J.S. Lintuvuori, M.E. Cates, D. Marenduzzo, Nature Communications 5, 3954 (2014).

Pebble smartwatches: examples of the use of e-paper.

Left: Confined blue phase structures. Depending on the colloid volume fraction and the interaction between the liquid crystal and the confining walls and particle surfaces, different morphologies emerge that give rise to new soft materials with tunable properties.


HPC for chemists

EPCC created the CP2K-UK network to support the growing user and developer communities of CP2K [1], one of the UK's most popular codes for materials science simulation. The network has now been running for over a year and we have held several popular community events.

Iain Bethune
[email protected]

In January 2014, the first in a series of annual CP2K user meetings [2] was attended by over 50 people, from new PhD students to professors, and from expert CP2K users to complete novices.

A frequent comment at the user meeting was the need for CP2K-specific training, so we were pleased to be able to offer a CP2K day as part of a wider Materials Modelling Packages course in April, and a dedicated two-day CP2K training course in August. The two-day programme was packed with lectures and lab sessions, including running parallel jobs on ARCHER, so allowing many people to try the service for the first time.

In addition to supporting CP2K users, we also hope to reduce the complexity barrier for new developers. EPCC developers have implemented a tool which automatically generates doxygen [3] documentation directly from the existing code. This makes it easier for developers to add relevant comments and maintain them as code is refactored. During the process we found and fixed many instances where ‘code rot’ had set in and the existing documentation was no longer relevant, or worse, was downright misleading!

In our ongoing campaign to improve code quality, we have introduced automatic regression testing with the Intel compiler suite, which has flushed out several bugs in both the CP2K code and Intel's compilers. We have also been working with our trainee CP2K developer Dr Lianheng Tong at King's College London, who has added new functionality for Langevin Dynamics, as well as contributing many bug fixes and minor improvements.

We are planning another user group meeting for 2015, along with further training courses. If you would like to be kept informed, please contact [email protected] to be added to our email notification list.

[1] www.cp2k.org

[2] www.epcc.ed.ac.uk/content/cp2k-uk-workshop-2014

[3] doxygen.cp2k.org


Prof. Stuart Macgregor's group at Heriot-Watt University undertook CP2K training and applied it to understanding the behaviour of rhodium alkane σ-complexes in a solid-state environment, representative of their potential use in solid-gas catalytic processes¹. They have since carried out structural optimisation and vibrational analysis calculations for a 650-atom system, which would have been unfeasible using other codes.

‘We have benefitted enormously from the technical support of both EPCC and NSCCS staff. We are now able to explore new vistas in modelling solid state organometallic chemistry.’

1. S. D. Pike, A. L. Thompson, A. G. Algarra, D. C. Apperley, S. A. Macgregor, A. S. Weller, Science, 2012, 337, 1648.

Image: Paul Dodds



MSc in High Performance Computing
Summer Projects Programme

The School of Physics & Astronomy actively engages with a wide range of predominantly local companies to offer students the opportunity to undertake their dissertation project in collaboration with industry. The participating MSc programmes are High Performance Computing, High Performance Computing with Data Science, Mathematical Physics and Theoretical Physics.

Maureen Simpson
[email protected]

The Programme benefits companies, students and the School. For the companies, it is a unique opportunity to draw on the expertise of the School and our highly motivated postgraduate students. For the students, tackling a real-world industry project helps to increase their employability. And of course the School benefits by building relationships with a wide range of local companies, thereby growing the network of companies we already engage with.

For 2013-14, a total of 25 industrial projects were offered by 13 companies, with students placed at the companies whose logos are displayed on this page. This year, the students who undertook industry projects had the opportunity to win the Summer Industry Project Prize, kindly donated by Scottish Equity Partners; the winner will be announced in the next issue of EPCC News.

As an alternative to undertaking an industry project, our students can opt to do in-house projects or participate in the annual HPCAC-ISC Student Cluster Competition.

This year, four of our MSc in HPC students achieved the highest-ever LINPACK score in the Student Cluster Competition at ISC14.

A big thank you to our sponsors Boston Limited and CoolIT Systems, and congratulations to Team EPCC (see pic): Emmanouil Farsarakis, Konstantinos Mouzakitis, Georgios Iniatis and Chenhui Quan; and their EPCC mentor, Xu Guo.

MSc in High Performance Computing: www.epcc.ed.ac.uk/msc
MSc Summer Projects: www.msc-projects.ph.ed.ac.uk

Team EPCC, which achieved the highest-ever LINPACK score in the Student Cluster Competition at ISC14.


CRESTA: co-design for the exascale future

The CRESTA project has now been running for three years and will complete at the end of 2014. During this time the 13 project partners have been busy developing software and applications, and carrying out research into solving the problems of utilising future exascale platforms.

Lorna Smith
[email protected]

This is an exciting time for CRESTA. Our software collection is being delivered and provides an excellent set of software tools. Not only are these tools designed to exploit current multi-petascale systems, they have been created with the future in mind, with roadmaps to take them through to the exascale era.

CRESTA has also been preparing and developing six key applications for exascale, all with strong science drivers. In addition to delivering new software releases, these codes have been preparing scientific demonstrators, which show the underlying science need and the resulting socio-economic impact.

Associated with the software collection and application software releases has been a programme of research into the exascale challenge. We have investigated solutions to the anticipated problems and our results are now being published at conferences and in scientific journals.

Co-design has been an integral part of the project, with our application developers working closely with our software developers; the outputs of both have benefitted from this close integration.

As we reach the final stages it is great to see pilot projects beginning to exploit our software. These range from individual user groups at our HPC centres through to larger nationally and European funded projects. This, coupled with existing partner commitments, will ensure that CRESTA's software outputs are sustained beyond the end of the project.

Find out more on our website: www.cresta-project.eu

More effective sampling of biomolecules

The first release of the ExTASY toolkit can improve the modelling and analysis of complex molecules.

Iain Bethune
[email protected]

ExTASY toolkit Version 0.1 brings together two of the three aspects of our vision: support for high-performance execution of large ensemble Molecular Dynamics (MD) calculations, and using MD analysis tools to promote improved sampling of complex macromolecules.

When attempting to sample the complete behaviour of a new molecule (eg a protein) with MD, it is difficult, without a priori knowledge, to know whether the whole range of motion has been observed, or whether the calculation is stuck in a ‘local minimum’ and is missing aspects of the molecule's function.

In this release we provide support for using the GROMACS and AMBER MD programs, coupled with the LSDMap and COCO analysis tools.

The toolkit implements simple workflows, which first execute a large ensemble of MD jobs and then analyse the resulting trajectories to generate new starting points for further MD, allowing the calculation to escape from ‘local minima’ and cover the full range of motion of the molecule. Repeating this loop several times can allow full sampling to be completed 100-1000x faster than with brute-force MD calculations.

The ExTASY (ExTensible Toolkit for Advanced Sampling and analYsis) toolkit includes full installation instructions, documentation and a tutorial. It can be downloaded for free from our website:

www.extasy-project.org

Above: ensemble simulations and ExTASY analysis of the major urinary protein reveal it can adopt three subtly distinct conformational states. The rarest, but still important, of these (right) features a collapsed central ligand binding cavity (ringed). This had never been seen before. Image: Charles Laughton, University of Nottingham.

Iain Bethune [email protected]

The first release of the ExTASY toolkit can improve the modelling and analysis of complex molecules.


Jesmin Jahan Tithi, Stony Brook University

Working in HPC is fun. I can spend night after night designing, implementing and optimising parallel algorithms for different kinds of parallel platforms to beat existing approaches. When I get up in the morning, I feel passionate about starting from the point where I left off late the night before!

My goal is to make the process of generating high-performance applications easy for general computational scientists by sharing my knowledge of algorithm engineering and possibly incorporating that in some automated systems.

HPC is one of the most demanding research areas. It is important to bring in more women because we make up less than 2% of this community. WHPC can work towards improving that by providing a friendly platform where women can present their work in this highly competitive area. It can also foster better research collaborations, mentorship and even job opportunities.

WHPC has taken its first step. I am really glad to be part of it. I wish it all success.

Ivy Bo Peng, KTH Royal Institute of Technology, Sweden

It is a great honour to be able to contribute to the WHPC SC’14 workshop. As an early-stage researcher, I understand the importance of feeling inspired and encouraged to step out and step up.

WHPC provides a valuable way for female HPC researchers to be visible and sends a positive message to other young researchers. I appreciate WHPC’s support at this stage of my career and I want to extend this support to other female HPC researchers.

The gender imbalance is noticeable in most science and technology seminars. This imbalance can pressurise female students and may prevent women from entering the field. That is why we need more women in HPC: to bring variety and to change the current situation.

The WHPC workshop provides us with a good opportunity to showcase our work and helps female HPC researchers feel less isolated. By motivating more women to enter HPC, together we can change the current situation.

Why we need Women In HPC

The WHPC workshop will include presentations and posters from leading early-career women working with HPC. There will also be a panel discussion with the audience on how the situation for women at all stages of their HPC career can be improved. We are delighted that Prof. Barbara Chapman of the University of Houston will open the workshop by discussing her experiences of working in HPC.

Women in HPC will be a recurring theme throughout SC'14, starting with a community hub hosted by the Intel booth in the exhibition hall at midday on Tuesday 18 November, followed by the WHPC BoF on the Tuesday evening, and finishing with the workshop on Friday morning.

The Women in HPC (WHPC) workshop takes place on November 21 at SC’14. We asked our speakers why it’s important to improve the representation of women in the industry.

Toni Collis [email protected]

Ivy Bo Peng (top) and Jesmin Jahan Tithi.


Funded by the EIT through Horizon 2020, LifeKIC, as the Scotland-led KIC is called, will support the EIT and EU's goal of delivering a triple win: increased healthy life years; economic growth and increased competitiveness; and sustainable health and care. LifeKIC aims to catalyse a revolution in the delivery of health and care throughout an individual's lifespan, driven by the real needs of society and the harsh realities of providing sustainable health and care in Europe today.

Throughout 2014, Mark Parsons and Francis Wray of EPCC and Stuart Anderson of the School of Informatics, along with Scottish partners including NHS Scotland, Scotland Europa, Scottish Enterprise and the Scottish Government, have been working to build this complex bid. It brings together partners in the UK, Denmark, Germany, Italy, the Netherlands and Spain to act as Co-Location Centres (a key part of a KIC), complemented by Affiliate Regions in France and Poland/Czech Republic.

In June, LifeKIC and the Digital Health Institute organised an evening reception at the Scottish Parliament to rally support for the bid. The reception was hosted by Aileen McLeod MSP, and speakers included Alex Neil MSP (Cabinet Secretary for Health and Wellbeing), Prof. George Crooks (Medical Director of NHS24 and Chairman of the Digital Health Institute), and Prof. Sir Timothy O'Shea (Principal of the University of Edinburgh). Guests included MSPs, business representatives and members of the LifeKIC consortium who were attending a bid meeting in Edinburgh.

Prof. Mark Parsons said: “People are living for longer, but this often means that they experience complex illnesses late in life. We must plan effectively to manage the demand this places on health and care resources, to ensure adequate care for all. Scotland is well placed to tackle this challenge, and winning support for our bid could help us deliver effective results for citizens across Europe.”

Our bid was submitted in September and we now have to wait until mid-December to hear whether we’ve been successful. If we are, the 7–10-year project will establish itself in 2015 and open its doors in 2016. The UK Co-Location Centre will be hosted in Edinburgh along with the overall LifeKIC headquarters, providing us with a fantastic opportunity to tackle a key societal challenge of our age.

Maureen Simpson [email protected]

University-led €428m bid could boost Scotland’s growing reputation for innovation in health and care

The University of Edinburgh is leading an international bid to secure European funding to support the establishment of a European Institute of Innovation & Technology (EIT) “Knowledge and Innovation Community” (KIC) focused on healthy living and active ageing.

For more information about LifeKIC and the vision behind it see: www.lifekic.eu

For more information about the Digital Health Institute see: www.dhi-scotland.com

Prof. Sir Timothy O’Shea, Principal of the University of Edinburgh.


Rajagopal, a Master's student on the High Performance Computing specialisation at the Universitat Politecnica de Catalunya (UPC-BarcelonaTech), joined us to work on methods for visualising the energy usage and energy efficiency of computers as part of the Adept project.

Adept is a 3-year, EC-funded FP7 project concerned with measuring, analysing and predicting the power usage of computer systems (see page 14). Led by EPCC, it involves partners in academia and industry from Belgium, Sweden and the UK.

During his time at EPCC, Rajagopal designed and built a tool to visualise energy data collected from the monitoring systems attached to different machines. While much emphasis is placed on methods for collecting the data, distilling its sheer volume into more useful, higher-level information is often overlooked.

Rajagopal has given the project an excellent starting point for a custom application that will run cross-platform (including on mobile devices) and allow near-real-time data monitoring and metric evaluation.
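
As a rough sketch of what turning raw measurements into higher-level information can involve, the example below integrates a stream of timestamped power samples into cumulative energy, the kind of derived metric a visualisation front-end might plot. The two-column CSV layout and its column names are assumptions made purely for illustration; they are not the format used by the Adept monitoring systems.

```python
import csv
from io import StringIO

# Minimal sketch: turn timestamped power samples (watts) into cumulative energy
# (joules) using the trapezoidal rule. The CSV layout below is a made-up example,
# not the Adept monitoring format.

SAMPLE = """time_s,power_w
0.0,95.2
1.0,141.7
2.0,139.9
3.0,98.4
"""

def cumulative_energy(samples):
    """Yield (time, energy-so-far) pairs from (time, power) samples."""
    energy, prev_t, prev_p = 0.0, None, None
    for t, p in samples:
        if prev_t is not None:
            energy += 0.5 * (p + prev_p) * (t - prev_t)   # trapezoidal rule
        prev_t, prev_p = t, p
        yield t, energy

if __name__ == "__main__":
    reader = csv.DictReader(StringIO(SAMPLE))
    readings = [(float(r["time_s"]), float(r["power_w"])) for r in reader]
    for t, e in cumulative_energy(readings):
        print(f"t = {t:4.1f} s   energy = {e:7.1f} J")
```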

PRACE Summer of HPC

This summer we hosted two students, Rajagopal and Nedim, who joined us for nine weeks as part of the Summer of HPC programme.

Summer of HPC allows students from across Europe to visit HPC centres in other countries to work on an HPC project. Not only do the students gain experience of and enthusiasm for the field, the work done on the projects is itself useful and important.

The programme has run each summer for a number of years and is always very popular; many of the participants go on to study and work in HPC, drawing on their experiences of the programme.

Nick Brown [email protected]

Rajagopal in front of the Half Moon Battery of Edinburgh Castle.


Nedim joined us from Bosnia, where he has just finished his engineering degree, to improve our dinosaur-racing outreach demonstration.

Initially created as part of the Summer of HPC 2013 programme, the demo allows the public to design their own dinosaur and, once its movement has been simulated on a supercomputer, race it against other people's creations. Because of its popularity at our outreach events over the past year, we were very glad to have a keen student come along to make further improvements to it.

Nedim initially concentrated on integrating additional dinosaur models (a Tyrannosaurus rex and an Edmontosaurus) into the demo. The existing application only supported one creature, an Argentinosaurus, and more dinosaurs provide more flexibility in designing creatures and keep the demo fresh. These additions were unveiled at the British Science Festival in early September and proved very popular with visitors, especially the T-Rex.

Nedim's main area of interest is at the architecture and systems level, and the other part of his project was to build a cluster of Raspberry Pis to simulate the dinosaurs. This is important because at outreach events we often do not have an internet connection capable of transferring the substantial amount of movement data from ARCHER.

His work has not only made the demo available at more events, it also provides a direct link with supercomputing: the public can actually see the parallel machine that is running the simulation.

Next steps

Both Rajagopal and Nedim have now returned home. Their summer projects have been very successful and the work they have done will benefit EPCC and the HPC community for a long time to come.

Both are planning to continue in the field of HPC. After Rajagopal completes his Master's programme, he aims to do a PhD exploring new technologies and methodologies for advancing computer architecture, while Nedim plans to set up and administer a medium-sized HPC cluster and hopes to do a PhD in this field in the future.

PRACE Summer of HPC

Summer of HPC is a PRACE programme that offers summer placements at HPC centres across Europe to late-stage undergraduate and early-stage postgraduate students. Up to ten top applicants from across Europe spend two months working on projects related to PRACE technical or industrial work, ideally producing a visualisation or video of their results.

Read more on the website:

https://summerofhpc.prace-ri.eu

Opposite: an image from our dino demo. Top left: our HPC-class Intel server with an Nvidia K40 GPU installed. Bottom left: an ODroid XU+E System-on-Chip.

Above: A selection of the machines from the Adept testbed.


EPCC is one of Europe's leading supercomputing centres and operates ARCHER, a 118,080-core Cray XC30 system.

ARCHER is the UK's academic High Performance Computing system.

These programmes equip participants with the multidisciplinary skills and knowledge to lead the way in the fields of High Performance Computing and Data Science.

Through our strong links with industry, we also offer our students the opportunity to undertake their Master’s dissertation with one of a wide range of local companies.

Scholarships available for 2015/16

The University of Edinburgh is consistently ranked among the top 50 universities in the world*.

*Times Higher World University Ranking

Postgraduate Master’s Degrees in High Performance Computing

www.epcc.ed.ac.uk/msc

These MSc programmes are offered by EPCC, an institute at the University of Edinburgh.