
Page 1: Smart Energy Systemsbal/NWO-SES-Bal.pdf · highest among Dutch computer scientists. He gave invited/keynote lectures at Euro-Par 2009, CLADE 2009, PDMC 2008, UKPEW 2008, PCGRID 2008,


Smart Energy Systems Application for a detailed proposal (full proposal) 1.a. Project title Energy-efficient cloud computing using hardware diversity and elastic scalability

1.b. Project acronym (if applicable) GreenClouds 1.c. Principal applicant Prof.dr.ir. H.E. Bal

2. Summaries The GreenClouds project studies how to reduce the energy footprint of modern High Performance Computing systems (like Clouds) that are distributed, elastically scalable, and contain a variety of hardware (accelerators and hybrid networks). The project takes a system-level approach and studies the problem of how to map high-performance applications onto such distributed systems, taking both performance and energy consumption into account. We will explore three ideas to reduce energy:

• Exploit the diversity of computing architectures (e.g. GPUs, multicores) to run computations on those architectures that perform them in the most energy-efficient way;

• Dynamically adapt the number of resources to the application needs accounting for computational and energy efficiency;

• Use optical and photonic networks to transport data and computations in a more energy-efficient way.

The project will create the GreenClouds Knowledge Base System (GKBS) based on semantic web technology, which will provide detailed information on the energy characteristics of various applications (e.g., obtained from previous execution runs) and the different parts of the distributed system, including the network. Also, the project will study a broad range of applications and determine which classes of applications can reduce their energy consumption using accelerators. Finally, it will study energy reductions through dynamic adaptation of computing and networking resources. The project will make extensive use of the DAS-4 infrastructure, which is a wide-area testbed for computer scientists, to be equipped with many types of accelerators, a photonic network, and energy sensors. The results of the project will be utilized by the SARA national HPC center that operates a

supercomputer, clusters, accelerator systems, and an HPC cloud. Today, the costs of energy over the lifetime of these systems already exceed their acquisition costs, so reducing energy is vitally important for centers like SARA. Moreover, the results will also be utilized in DAS-4 itself.

3. Classification (please select one of the two options)
( ) Compartment 1: Application oriented research
(X) Compartment 2: Curiosity-driven research

4. Composition of the research team (maximum 1200 words)

a. principal applicant / contact

name, title(s): Henri E. Bal, prof.dr.ir. (male)

university: Vrije Universiteit

e-mail: [email protected]

scientific discipline: Computer Science

short resume: Please include a short resume (max 400 words)

will act as promoter of: Ph.D.-2


b. co-applicant(s) / contact (when applicable)

name, title(s): Cees Th. A. M. de Laat, prof.dr.ir. (male)

university: Universiteit van Amsterdam

e-mail: [email protected]

scientific discipline: Computer Science

short resume: Please include a short resume (max 400 words)

will act as promoter of: Ph.D.-1

c. other industrial/societal partners

name and title: Axel Berg, dr.ir.
specialization: HPC
employment/institute: SARA

d. Does the local authority support your application? (X) Yes ( ) No

(Did you inform your superior and does your institute/university accept the conditions for support by NWO?)

Relevant authority: Dean of the Faculty of Sciences of the Vrije Universiteit

Name: J. van Mill

Position: Professor

The proposed work will be carried out at the VU and the UvA in the groups of prof. Bal and prof. de Laat. These groups will collaborate closely with the other groups in the DAS-4 project and with the SARA compute center for proof-of-concept environments, testing, and deployment of the results. Both groups also collaborate with SURFnet and utilize the optical-photonic and electrical switching (test-) network of SURFnet7 and its international connections to carry out experiments.

Short resume of Henri Bal

Prof.dr.ir. Henri Bal received an M.Sc. in mathematics from the Delft University of Technology in 1982 and a Ph.D. in Computer Science from the Vrije Universiteit in Amsterdam in 1989. His research interests include parallel and distributed programming and applications, grid and cloud computing, networking, and e-Science. At present, he is a full professor at the Faculty of Sciences of the Vrije Universiteit, where he heads a research group on High Performance Distributed Computing (HPDC). He is the author of three books, on Distributed Systems, Programming Languages, and Compilers. He was program chair of several conferences, including CCGrid 2002 (2nd IEEE International Symposium on Cluster Computing and the Grid) and HPDC 2005 (15th IEEE International Symposium on High Performance Distributed Computing).

He is a member of the Steering Committees of CCGrid and HPDC and a member of the editorial boards of three journals: Software: Practice and Experience, the Journal of Grid Computing, and Future Generation Computer Systems (FGCS). He is adjunct director of the Dutch “Virtual Laboratories for e-Science” (VL-e) BSIK project. Bal’s h-index (based on Google Scholar, as is customary in Computer Science) is 39, one of the highest among Dutch computer scientists. He gave invited/keynote lectures at Euro-Par 2009, CLADE 2009, PDMC 2008, UKPEW 2008, PCGRID 2008, Grids@large 2005, PASA 2004, Euro-Par 2003, and PDP 2003. Bal’s group won two first prizes at DACH 2008 (First International Data Analysis Challenge for Finding Supernovae) and first prizes at both SCALE 2008 (IEEE International Scalable Computing Challenge, at CCGrid’08) and SCALE 2010 (at CCGrid’10); see http://www.cs.vu.nl/ibis/awards.html.

Bal received an NWO-Pionier award in 1993 and numerous other grants. His group has obtained 6.8 MEuro in external funding since 2002. Bal was the coordinator of the NWO/M proposals DAS-2, DAS-3, and DAS-4, for which the ASCI research school obtained about 3 MEuro in funding in total. He has been the promoter of 14 Ph.D. students, including dr. Werner Vogels, the current CTO and VP of Amazon.com, and Willem de Bruijn, winner of the EuroSys 2010 Roger Needham award. At present, Bal’s group consists of 2 Associate Professors, 1 temporary Assistant Professor, 5 postdocs, 10 Ph.D. students, and 3 scientific programmers. http://www.cs.vu.nl/~bal/


Short resume of Cees de Laat

Prof.dr.ir. Cees de Laat received an M.Sc. in physics from the Delft University of Technology in 1983 and a Ph.D. in physics from the same university in 1988. From 1988 until 2001 he was an assistant professor in the computer physics group at Utrecht University. He became an associate professor at the University of Amsterdam in 2001 and, since January 2010, is a full professor heading the System and Network Engineering (SNE) group there. Research in his group covers optical and switched networking for Internet transport of massive amounts of data in TeraScale e-Science applications, Semantic Web techniques to describe networks and associated e-infrastructure resources, distributed cross-organization authorization architectures, and security and privacy of information in distributed environments. With SURFnet he develops and implements projects in the SURFnet7 Research on Networks. He works together with the University of California, San Diego (UCSD) in the NSF-funded OptIPuter and GreenLight projects, which aim to create collaborative environments for scientists and to study the energy and carbon footprint of such infrastructure.

Prof. de Laat served as a steering group member of the Open Grid Forum, for which he received a leadership award in 2007. He is a member of the board of directors of the Dutch “Virtual Laboratories for e-Science” (VL-e) BSIK project. He serves in the Open Grid Forum as IETF Liaison, is co-chair of the Grid High Performance Networking Research Group (GHPN-RG), chairs GridForum.nl, and is a board member of ISOC.nl. He is co-founder and organizer of four past workshops of the Global Lambda Integrated Facility (GLIF) and a founding member of CineGrid.org. The group participates in the ASCI research school and is one of the principal leaders of the DAS-4 project and its interaction with the GigaPort Research on Networks. De Laat gave invited keynotes and lectures at, among others, OnVector, CineGrid, Internet2, ISOC, ONT (Optical Network Transport), OFC, OGF, GLIF, Terena, and NorduNet conferences and workshops. He is a member of the editorial board of the FGCS journal. De Laat and his group have obtained about 9.1 MEuro in external funding since 2001. At present the group consists of one full and one part-time professor, 6 postdocs, 2 Ph.D. students, and 4 scientific programmers. http://www.science.uva.nl/~delaat

5. Research school
ASCI (Advanced School for Computing and Imaging)

6. Content of the proposed project (5000 words for sections 6, 7 and 8)
Keywords: distributed systems, optical networks, grids, clouds, GPUs, multicores, high-performance computing

6.a. Scientific aspect

Research question and intended results

As the ICT industry overtakes the airline industry in energy consumption and pollution, it becomes vitally important to make computing systems much more energy-aware [4]. High-performance computing (HPC) systems like supercomputers and grids consume large amounts of energy [11]. Following current technology trends, supercomputers with exascale performance (1000 Petaflop/s) are predicted to consume 50-100 MW, which will be neither deployable nor affordable. Fortunately, many opportunities exist to reduce HPC energy consumption. In this proposal, we look at global, system-level optimizations for mapping applications onto distributed computing systems. This work will be complementary to other, more localized optimizations. In particular, we explore three promising ideas, concerning the types, the number, and the interconnections of the resources:

• Exploit the diversity of computing architectures (e.g. GPUs, multicores) to run computations on those architectures that perform them in the most energy-efficient way;

• Dynamically adapt the number of resources to the application needs accounting for computational and energy efficiency;

• Use optical and photonic networks to transport data and computations in a more energy-efficient way.

These ideas follow three important trends in HPC. The first trend is novel computer architectures, like accelerators and energy-efficient multicores. The most prominent examples of accelerators are graphics processing units (GPUs), which not only give much better performance for many applications but can also be vastly more energy-efficient [17,27].


This hardware diversity can thus be exploited to drastically cut down energy consumption for many applications. Second, grid and cloud applications are becoming more flexible and malleable, because these environments are inherently dynamic. Supercomputing applications often assume a fixed (static) number of resources, but for grids several programming systems [5,19,26,35,37] exist that can adapt the set of resources dynamically to the needs of the application and the availability of resources. For compute clouds (e.g., Amazon EC2, SARA’s HPC Cloud [31]), “elastic scalability” is the ability to dynamically scale up and down. This is essential, as cloud users pay for each resource. We intend to exploit this increasing flexibility to reduce energy consumption. For example, if a communication-intensive application has mediocre computational efficiency, it may run only marginally slower on fewer nodes, so we can remove resources from the running program. Likewise, resources can be added during computation phases where they can be used more efficiently. Making such decisions requires complex policies. The third trend is hybrid networks, i.e., networks where packet-switched and circuit-switched paths coexist. Routers, switches, and photonic components are all present in such an infrastructure [8,20]. This model has become mainstream in commercial, education, and research networks. Just like in computing, there is a tradeoff between functionality, flexibility, capacity, and energy usage. In this sense routers

compare to supercomputers, Ethernet switches to grids, while lambdas and photonic switches compare to GPUs. The energy footprint of photonic switches is just a few percent of that of Ethernet switches, which in turn consume about 10% of the energy of full Internet routers, all at the same throughput. The hybrid model allows energy savings when one can bypass routing and switching devices and deliver data as long as possible using photons, that is, when the data transport can be kept at the lowest layer without manipulation at upper layers by energy-hungry routers. Hybrid networks offer a greater degree of programmability than traditional network architectures [16]. This enables a system-level approach to optimizing the energy footprint of an entire distributed data-processing system, including the communication component. Our research question thus is: how to map high-performance applications onto a hybrid distributed (networked) computing system, taking both performance and energy consumption into account. We focus on computing systems containing a diversity of machines that can be scaled elastically. We expect that many future computing infrastructures will follow this model. Current compute clouds already are elastically scalable, and several include accelerators. Also, we are building a prototype of such a distributed system, called DAS-4 (see Figure 1), funded by NWO/M.

Figure 1: Schematic representation of a DAS-4 cluster
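The layer tradeoff discussed above can be made concrete with a back-of-the-envelope sketch. Only the ratios follow the text (a photonic switch at a few percent of an Ethernet switch, which in turn draws roughly 10% of a full router, at the same throughput); the absolute joules-per-gigabyte baseline and the path shapes are hypothetical placeholders.

```python
# Back-of-the-envelope energy comparison of the network layers discussed
# above. Only the ratios come from the text; the absolute J/GB baseline
# is a hypothetical placeholder.

ROUTER_J_PER_GB = 10.0                       # hypothetical baseline
SWITCH_J_PER_GB = 0.10 * ROUTER_J_PER_GB     # ~10% of a router
PHOTONIC_J_PER_GB = 0.03 * SWITCH_J_PER_GB   # "a few percent" of a switch

def transfer_energy(gigabytes, path):
    """Energy (J) to move `gigabytes` over a path given as
    (joules_per_gb, device_count) pairs."""
    return gigabytes * sum(j_per_gb * n for j_per_gb, n in path)

# Same 100 GB transfer: a fully routed path versus a photonic bypass that
# keeps the data at the lowest layer for most hops.
routed = transfer_energy(100, [(ROUTER_J_PER_GB, 2), (SWITCH_J_PER_GB, 4)])
bypass = transfer_energy(100, [(SWITCH_J_PER_GB, 2), (PHOTONIC_J_PER_GB, 4)])
```

With these illustrative ratios the bypass path costs an order of magnitude less energy for the same transfer, which is exactly the system-level saving the hybrid model targets.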

To address the research question, we propose two complementary research paths, carried out by two Ph.D. students (supported by a programmer). First, we need a system that provides detailed information on the energy characteristics of various applications (e.g., obtained from previous execution runs) and of the different parts of the distributed system. We call this the GreenClouds Knowledge Base System (GKBS). Decision making can only be optimal when information is complete, coherently organized, and easy to use. The challenge is to capture the relations between energy components, application components, and infrastructure resources, and to present them in a cohesive way. For this purpose, we propose to use the Semantic Web, which provides the right framework to organize such information optimally and to take “intelligent” decisions based on logical relations between resources. Also, in grids and clouds many parties own information on the available resources. This requires a distributed information system in which different information owners exchange data to produce a final coherent system. Second, we need much more insight into the energy gains that novel architectures can achieve, in particular into which classes of applications benefit from these architectures. Moreover, we need policies and dynamic scheduling algorithms that use the GKBS and the application requirements to decide how to execute programs such that both their execution times and energy consumption improve. The knowledge base, insights, policies, and the empirical evaluation will be the main results of our project.

Research method

Our research will make extensive use of DAS-4, which is designed specifically as a computer science testbed for doing controlled experiments that give more reproducible behavior for performance and energy consumption. DAS-4 consists of six similar clusters at different locations, connected by multiple dedicated optical 10 Gb/s links provided by SURFnet. The system will contain a wide variety of accelerators and will be equipped with detailed energy sensors. The system will be built in two phases (see Figure 1). First, the basic clusters will be built, containing a few dozen currently available GPUs (from NVIDIA and ATI), a few energy-efficient multicores (e.g., Sun Niagara and AMD Magny-Cours), and network connectivity similar to that of DAS-3. This phase will be completed in 2010 (the awarding decision for the EU tender was taken in May 2010). In the second phase (2010-2012), additional (newer) GPUs, multicores, and photonic devices will be purchased, and dedicated hybrid connections via SURFnet-7 will be realized. Below, we describe the research of the PhD students and the programmer.

PhD student 1 (UvA)
The ultimate goal of the research of PhD-1 is to define and create the GreenClouds Knowledge Base System (GKBS). We envision three main studies to achieve this goal: the definition of the GreenClouds ontologies in the form of RDF (Resource Description Framework) and OWL (Web Ontology Language) schemas; the creation of a software framework that feeds real-time data into the GKBS schemas; and, finally, the integration of hybrid and photonic networks in the service-delivery scheme by offering applications a greater degree of network programmability.

The Semantic Web provides an effective mechanism to organize and define information models. The presence of logical relations between the described resources offers powerful tools to applications and constitutes a lingua franca for different sorts of applications. We intend to use ontologies to express the dimensions of energy and computation, together with all the other resources and application-relevant parameters present in the GreenClouds infrastructure. The ontologies support searching for an ensemble of resources that satisfies an application’s requirements, and applications can apply richer reasoning to select and combine resources. The effort of the PhD student is grounded in existing work, providing a solid starting base. The SNE group showed the feasibility of a declarative approach to path finding in multi-layer, multi-domain networks [40]. The ontology that supports this is NDL, the Network Description Language [14]. NDL is a Semantic Web information system tailored to hybrid networks. NDL uses RDF and describes in various schemata the components in the network, the topology, the devices, and the possible technical configurations. Path-finding algorithms use the information expressed in the NDL schemas to determine end-to-end communication paths. PhD-1 will extend the NDL ontology to express the GreenClouds resources and provide the basic building blocks for self-adaptivity and dynamic optimization of the system. The challenge is to identify the metrics and parameters that play a role in application decisions, in particular when optimizing the total energy consumption of the system. Simulations will be necessary to experiment with scaling and with different reasoning and allocation algorithms in a dynamically changing, elastic infrastructure. This research must be carefully aligned with that of PhD-2, who will, in his second research line,

look at elastic scalability. The GreenClouds ontologies must support, and potentially trigger, the decision to change the number and the geographic distribution of the resources devoted to an application. For example, they could trigger the relocation of computation and data virtual machines in a cloud to other geographic locations over the photonic network if that increases efficiency, a technique demonstrated in 2005 by a team including the SNE group [36]. It is important that the resulting schemas be extensible beyond our system and usable by everyone working with cloud environments. PhD-1 will therefore perform ontology alignments with models developed elsewhere, to avoid duplication and to strengthen the knowledge base for GreenClouds applications. Other projects, such as the EU-funded NOVI (Networking Innovations Over Virtualized Infrastructures) project, in which we participate, look at the combination of virtualized environments (such as clouds) and dynamic networks. NOVI also uses resource description models, but it does not consider energy constraints. The GreenClouds schemas can therefore be used to complement and enrich the NOVI work.
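As a minimal illustration of the kind of knowledge the GKBS could expose, resources and applications can be described as subject-predicate-object triples, in the spirit of RDF/NDL, and queried when deciding where to run a computation. All resource names and property identifiers below are invented for the sketch; they are not the actual NDL or GreenClouds schema.

```python
# Toy triple store sketching the GKBS information model: energy properties
# of resources plus application/architecture relations, queryable at
# scheduling time. Names and predicates are hypothetical, not real NDL.

GKBS = [
    ("das4:gpu-node-7",  "gc:architecture",  "GPU"),
    ("das4:gpu-node-7",  "gc:gflopsPerWatt", 1.5),
    ("das4:cpu-node-12", "gc:architecture",  "multicore"),
    ("das4:cpu-node-12", "gc:gflopsPerWatt", 0.4),
    ("app:correlator",   "gc:runsWellOn",    "GPU"),
    ("app:correlator",   "gc:runsWellOn",    "multicore"),
]

def objects(subject, predicate):
    """All objects of triples matching (subject, predicate, ?)."""
    return [o for s, p, o in GKBS if s == subject and p == predicate]

def best_resource(app):
    """Pick the known resource with the highest GFLOPS/Watt among the
    architectures this application is known to run well on."""
    archs = set(objects(app, "gc:runsWellOn"))
    candidates = [s for s, p, o in GKBS
                  if p == "gc:architecture" and o in archs]
    return max(candidates, key=lambda r: objects(r, "gc:gflopsPerWatt")[0])
```

A real GKBS would express the same relations in RDF/OWL schemas and feed them with live measurement data, but the selection logic follows this pattern: logical relations between applications and resources drive an energy-aware choice.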


Finally, PhD-1 will study the integration of photonic networks for the creation of “green” services by means of network programmability. This will be accomplished in the second phase of DAS-4, when the clusters will have dedicated photonic connections via the SURFnet network. The general research question is whether the use of dedicated communication paths in clouds can make applications more effective while reducing the energy consumption of the system. The explicit inclusion of the energy use of the network component is therefore an interesting and novel aspect of this work.

Ph.D. student 2 and programmer (VU)
PhD-2 will study how to use the GKBS to map applications to a hybrid distributed system and reduce energy consumption by exploiting hardware diversity and elastic scalability. We will first try to obtain a better understanding of the actual power consumption of applications on different parallel architectures, which often differs considerably from the theoretical consumption. A good starting point is the work on the LOFAR correlation algorithms by van Nieuwpoort (a current member of our group) and Romein [27]. They implemented an algorithm for correlating radio-astronomy signals on a range of architectures, including a multicore CPU (Intel i7), the IBM Blue Gene/P supercomputer, two GPU accelerators, and the Cell/B.E., and then made detailed measurements of the theoretical and actual performance and power dissipation. The results show that the GPUs and the Cell can obtain a factor of 3-5 better GFLOPS/Watt ratio than the Intel CPU, even though the absolute power consumption of GPUs is high. The results also show that the relation between theoretical and actual performance and power dissipation is complicated and depends heavily on the memory system. We will generalize this work in two ways: more applications and more architectures. Most importantly, we want to cover a much larger variety of applications and set up an application database. The VU group has already started working on GPU implementations of Multimedia Content Analysis (MMCA) algorithms (from the MultimediaN BSIK program) and on several different signal-processing algorithms for LOFAR. Likewise, we are collaborating with the astronomy group of prof. Simon Portegies Zwart (Leiden University), which has high-performance GPU implementations of N-body simulations. We will also use the requested programmer and other members of the HPDC group to develop additional applications, and we will use external applications, which are becoming more widely available. In fact, two members of our group are currently organizing a workshop (A4MMC, June 2010) that aims to set up a pool of real-life multicore and GPU applications. Another group member participates in an EU COST action that sets up an applications catalog for many domains, including GPU applications. Besides studying more applications, we will also study the behavior of available algorithms on a larger variety of accelerators, which will become available on DAS-4. This set will include newer GPUs and energy-efficient multicore machines like the Sun Niagara. DAS-4 will be equipped with Power Distribution Units that can accurately measure the power consumption of each node.

We will analyze the performance and power dissipation of the various applications on the different architectures, aiming to better understand the actual results and how they differ from the theoretical hardware specifications. The goal is not to design a precise performance prediction model, but rather to identify which classes of applications and algorithms are more energy-efficient on accelerators than on general-purpose CPUs, and to identify the performance bottlenecks. We aim to analyze a comprehensive set of applications, covering as much of the large-scale application spectrum as possible. We separate these applications into classes, for example using a similarity-based approach [2] augmented with a more detailed metric-based approach [1]. From each class, we extract one representative application (either synthetic or real-life) and analyze its performance and energy behavior on current platforms (with and without accelerators), aiming to derive a set of (mostly empirical) energy-efficiency rules. We expect these rules to differ per application class and per target platform. Once these rules are available, we aim for a model-based approach to mapping and scheduling applications.
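The kind of empirical energy-efficiency rule we aim for can be sketched as follows. The sustained-GFLOPS figures echo the LOFAR correlator study cited above; the power draws are hypothetical placeholders, chosen only so that the GPU comes out roughly a factor 4 ahead of the Blue Gene/P node, as reported.

```python
# Sketch of deriving a per-class energy-efficiency rule from measurements.
# Sustained GFLOPS follow the LOFAR correlator study cited in the text;
# the Watt figures are hypothetical placeholders for illustration.

measurements = {
    # architecture        (sustained GFLOPS, measured Watts -- placeholder)
    "NVIDIA Tesla C1060": (300.0, 236.0),
    "Blue Gene/P node":   (13.0,  40.0),
    "Intel i7":           (30.0,  130.0),
}

# GFLOPS/Watt: the energy-efficiency metric used throughout this proposal.
efficiency = {arch: gflops / watts
              for arch, (gflops, watts) in measurements.items()}

def most_efficient(eff=efficiency):
    """Empirical rule: run this application class on the architecture
    with the best measured GFLOPS/Watt."""
    return max(eff, key=eff.get)
```

In the full system this table would be populated per application class from the DAS-4 energy sensors and stored in the GKBS, rather than hard-coded.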

The second research direction of PhD-2 is to adapt the number of resources used for a given application. To a certain extent, this allocation can be done statically, especially if detailed performance and energy data are available. A more flexible approach is to determine or adjust the number of resources dynamically, provided the programming system allows this. We have developed two such programming systems ourselves. Satin is a divide-and-conquer system that can dynamically detect the efficiency of the program (by monitoring its communication overhead and idle time) and adjust the number of resources accordingly [26]. We have many applications available for Satin, such as N-body simulations, MMCA programs, gene sequencing, and a SAT solver. Also, we are working on integrating GPU support into Satin. We can thus do experiments to determine the efficiency and power consumption of Satin programs on different numbers of processors and accelerators, and derive policies for dynamic adaptation. The second dynamic system is the BaTS Bag-of-Tasks (BoT) scheduler [30]. It is designed only for BoT applications (like parameter sweeps), but it can execute such applications efficiently on different types of resources and without any prior performance information. It allocates and schedules resources dynamically, taking multiple objectives into account, and dynamically learns how well the tasks perform on different resources [30]. The scheduler heuristically minimizes execution time while respecting a given financial (cloud) budget. We will extend BaTS by introducing energy as an additional objective, allowing us to execute an important class of applications in a more energy-efficient way. Both Satin and BaTS will use the GKBS, which will contain information about execution runs (number and types of nodes, runtime, energy consumption, parameter sizes, etc.). The GKBS supports a continuous feedback loop between the execution results and the decision-making algorithms, requiring close cooperation between the two students. Satin and BaTS also work on geographically distributed systems consisting of multiple clusters, such as DAS-4. We will therefore also do distributed experiments that take the energy consumption of the clusters and the wide-area network into account and try to reduce the overall energy consumption. This latter work will be done in close collaboration with PhD-1. The applications include distributed supercomputing (i.e., running parallel programs on multiple clusters [24]) and data-intensive computing (i.e., accessing large remote data sets). The SNE group recently obtained funding for a digital cinema project named CineGrid. The distributed processing of a digital-movie creation workflow is also an ideal application candidate for GreenClouds.
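The Satin-style adaptation described above can be caricatured as a simple feedback rule: monitor the fraction of time spent computing versus communicating or idling, then grow or shrink the node set. This is an illustrative sketch, not Satin's actual policy; the thresholds are hypothetical tuning parameters.

```python
# Minimal sketch (not the actual Satin implementation) of an
# efficiency-driven elastic-scaling policy: measure how much of the
# interval was spent doing useful work, then adapt the node count.
# Thresholds are hypothetical tuning parameters.

LOW_EFFICIENCY = 0.5    # below this, extra nodes mostly waste energy
HIGH_EFFICIENCY = 0.85  # above this, the run could absorb more nodes

def parallel_efficiency(busy_time, comm_time, idle_time):
    """Fraction of the monitoring interval spent on useful computation."""
    total = busy_time + comm_time + idle_time
    return busy_time / total if total else 0.0

def adapt(nodes, busy_time, comm_time, idle_time):
    """Return the node count to use in the next monitoring interval."""
    eff = parallel_efficiency(busy_time, comm_time, idle_time)
    if eff < LOW_EFFICIENCY and nodes > 1:
        return nodes // 2   # scale down: save energy, little slowdown
    if eff > HIGH_EFFICIENCY:
        return nodes * 2    # scale up: current nodes are used efficiently
    return nodes
```

An energy-aware version of BaTS would fold a similar signal into its multi-objective heuristic, trading predicted runtime against both the financial budget and the measured energy cost per task.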

Significance of the proposed research
Cloud computing is a major research topic, as evidenced by the recent EU Expert Group report [9], which emphasizes the importance of energy-proportional computing. Major companies (Amazon, Google, IBM, Sun, Microsoft) are working on cloud computing infrastructures, and experts agree that clouds will change the way we do computing. Reducing the energy consumption of clouds will therefore be highly significant. This also applies to networks, as their energy consumption is expected to increase substantially (see [38] and Section 6c). Accelerators like GPUs are expected to become mainstream for high-performance computing, as they obtain an outstanding price/performance ratio for many applications. It is therefore crucially important to better understand which classes of applications can be made more energy-efficient with this technology. Finally, the new insights we will obtain into making applications elastically scalable and energy-aware are highly relevant for future research projects.

6.b. Innovation
Although several other projects try to reduce energy consumption, we believe we are the first to study the effects of elastic scalability, hardware diversity, and hybrid networks in a distributed setting, using a state-of-the-art infrastructure (DAS-4) especially designed for such experiments. Our project takes a system-level approach: rather than performing local optimizations (e.g., reducing clock speeds), we look at the behavior of the overall system. We use a novel Semantic Web based information system to describe the resource characteristics, and we develop new heuristics to scale applications up or down, to move computations to special hardware (accelerators), and to offload data communication to dedicated networks (photonic paths). This system-level approach supplements the local optimizations.

6.c. Relevance
Our proposal is primarily relevant for the research line “Energy reduction in processing and storing of information”: we provide the “energy-efficient cloud computing mechanisms” requested in the Call. Our work is also relevant for the “Energy reduction in communication networks” research line, in particular for “technologies for provisioning of capacity and of QoS” and “always-on or event-controlled access connections”. In general, reducing the energy consumption of clouds offers long-term utilization potential. We focus on several compute-intensive (and thus energy-intensive) applications, including N-body galaxy simulations, radio astronomy, and multimedia content analysis, which have a broad range of end-users.

We expect that our ideas are relevant for many other HPC applications as well. In addition, we study the generic class of bag-of-tasks applications, which is widely used. An accurate quantitative analysis of the expected benefits is difficult, but we can make some meaningful observations. The energy efficiency of accelerators depends strongly on the performance that applications achieve on them. For GPUs, many applications achieve very high speedups. A study of the Lofar correlator algorithms [27] shows that a GPU (NVIDIA Tesla C1060) obtains a measured performance of 300 GFLOPS whereas an energy-efficient Blue Gene/P node achieves 13 GFLOPS, resulting in a factor of 4 better overall energy efficiency for the GPU. Huang et al. [17] performed a similar energy-efficiency study of a biological code (GEM) on GPUs. Our initial experiments [39] with a multimedia application (line detection) on SARA's LISA GPU cluster are also very promising, giving large speedups (e.g., a factor of 387 on 8 NVIDIA Tesla GPUs compared to a sequential program). The group of prof. Portegies Zwart has obtained a two-order-of-magnitude improvement for N-body tree-codes on GPUs [13]. GPUs are also used in many other application areas, including bioinformatics, chemistry, medical imaging, data mining, and finance [28]. The energy savings resulting from dynamic scheduling algorithms also depend on the application. Our techniques will reduce the number of resources for applications that do not scale well beyond a certain point. Optimization of network usage can play an important role in total energy reduction. A recent study puts the annual energy consumption of network devices at 22 GW out of 156 GW for the total ICT sector in 2007 [38]. Despite being the smallest contributor to the total ICT energy bill as of 2007, the network component was expected to grow at a 12% yearly rate and become one of the largest by 2020. A possible reduction technique is to move away from IP packet forwarding and thereby eliminate power-hungry routers and switches [15]. The authors of [18] indicate that optical switching provides energy benefits. This is the approach we take in GreenClouds, using dedicated photonic paths for data transport.

6.d. Utilisation (maximum 400 words)
We will utilize the results in DAS-4, which will be operational from mid-2010 to 2014 and will have over 100 users. Since DAS-4 will be instrumented with energy sensors, we can estimate the resulting energy savings. In addition, our work will be utilized by the SARA HPC center, as described below. The energy consumption of HPC systems at SARA has increased rapidly over the years. Today, the cost of energy over the lifetime of a system already exceeds its acquisition cost. HPC data center facilities need continuous expansion to meet increasing energy demands, the running costs of supercomputing facilities are exploding, and the provision of HPC services is becoming unaffordable. Whole-chain thinking is necessary, all the way from the data center to the application, and research on energy efficiency is required across all aspects of that chain. The results of this proposal are therefore highly relevant to HPC centers. On average, HPC centers currently use 2-15 MW of power. Energy-efficiency gains resulting from this proposal would have a direct and relevant impact on the energy use of these centers. An efficiency improvement of, say, 15% would result in a power saving of 300-2,250 kW, which today roughly corresponds to yearly direct cost reductions between €300,000 and €2,250,000 per center.
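The figures quoted above can be reproduced with back-of-the-envelope arithmetic. The power draws assumed here (roughly 188 W for a Tesla C1060 card and 31 W for a Blue Gene/P node) are our own illustrative assumptions, not measurements from [27], and the electricity price of €0.114/kWh is chosen to match the cost range quoted above:

```python
# Back-of-the-envelope checks for the figures quoted in the text.
# The power draws (188 W, 31 W) and the electricity price (0.114 euro/kWh)
# are assumptions chosen for illustration.

def gflops_per_watt(gflops, watts):
    return gflops / watts

gpu_eff = gflops_per_watt(300, 188)   # NVIDIA Tesla C1060, 300 GFLOPS [27]
bgp_eff = gflops_per_watt(13, 31)     # one Blue Gene/P node, 13 GFLOPS [27]
print(round(gpu_eff / bgp_eff, 1))    # ~3.8, i.e. roughly a factor of 4

def yearly_savings_eur(center_mw, fraction_saved, eur_per_kwh=0.114):
    """Yearly cost reduction for an HPC center of a given power draw."""
    saved_kw = center_mw * 1000 * fraction_saved
    return saved_kw * 8760 * eur_per_kwh  # 8760 hours per year

print(round(yearly_savings_eur(2, 0.15)))    # 300 kW saved  -> ~3e5 euro
print(round(yearly_savings_eur(15, 0.15)))   # 2250 kW saved -> ~2.25e6 euro
```

Under these assumptions the computation reproduces both the "factor 4" energy-efficiency claim from [27] and the €300,000-€2,250,000 yearly savings range stated above.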

We will thus collaborate closely with the national HPC center SARA. Different computing resources (HPC systems, accelerator systems, the HPC Cloud environment) at SARA will be used for testing the energy-efficiency concepts and application performance in relation to energy use. Models and mechanisms will be tested in proof-of-concept environments in collaboration with SARA, to ensure experience and technology transfer to the HPC center and, ultimately, uptake and deployment of the technology in the standard HPC and cloud production service environment. SARA is currently collaborating closely with HPC vendors like IBM on the energy efficiency of data centers, systems, and applications. These collaborations are essential in facilitating and stimulating discussions on the eventual uptake of the results of this proposal in HPC vendors' products.

6.e. Positioning of the project proposal
Our proposal is unique in taking a clear system-level approach combined with a new distributed testbed (DAS-4) especially designed for controlled performance/energy experiments. Many other Green IT projects study lower-level optimizations and are complementary to our work. Other projects try to reduce the power of CPUs or disks when their load decreases. An example is [6], which combines dynamic voltage and frequency scaling with dynamic concurrency throttling. Also, several new processors are being designed to obtain better energy efficiency; the Sun Niagara, for example, uses many simple cores and thread-level parallelism, rather than speculative and thus wasteful instruction-level parallelism [21]. Reducing the number of resources to save energy has also been studied for multiprocessors [6,25,33]. Moreover, intelligent schedulers exist that minimize energy consumption, such as the IANOS (www.ianos.org) grid scheduler and the framework of [22], which distributes load across multiple data centers. Several papers have studied how to use GPUs for Green IT. The SPRAT framework [34] can dynamically switch between CPUs and GPUs, but this system is limited to streaming applications. Ma et al. [23] have designed a detailed statistical model for the power consumption of GPUs. Application experiments are reported in [10,32]. Several other testbeds exist. Nautilus [3] is a testbed for green supercomputing. Green Flash is a green supercomputer for climate predictions [29]. The French Grid'5000 system has also been instrumented with energy sensors, which are used to power off unused nodes [7]. In comparison, DAS-4 provides a unique opportunity to study the impact of hardware diversity and photonic paths in energy-saving schemes. To the best of our knowledge, we are the first to perform a systematic study on a range of applications, addressing the system-level problem of mapping applications onto an entire distributed system with diverse hardware (networks and accelerators) in an energy-efficient way.


The research will be embedded in

- the HPDC group of the Computer Systems section of the Vrije Universiteit; this section obtained the maximum possible score (excellent on all aspects) at the 2009 QANU research assessment. The research is part of the VU Network Institute.

- the SNE group of the University of Amsterdam, which is part of the Informatics Institute (IvI).

The groups together have over 30 researchers, and both have excellent track records in distributed systems, networking, and e-Science. The HPDC group contains two postdocs (van Nieuwpoort and Varbanescu) who are specialists in accelerators. The two groups have collaborated in three DAS projects, the BSIK project VL-e, and the NWO-GLANCE project StarPlane. The groups collaborate with computer scientists from TU Delft, Leiden, VU, and UvA, and with application scientists from life sciences (VU), multimedia (UvA), astronomy (Astron, Leiden), medicine (VUmc), and many other areas. They collaborate with SURFnet on networking and with SARA on computing and cloud infrastructures. The groups also have many international collaborations on grids and clouds, in EU projects like CoreGrid, GridLab, XtreemOS, GEANT3, Geysers, NOVI, and Contrail, with CALIT2 (http://greenlight.calit2.net/), and with projects like Grid'5000, CineGrid, OptiPuter, Terabit LAN, and Lambdagrid. Both groups also have strong ties with education and with industrial partners (through joint research in the past).

7. Description of the proposed plan of work
The work of PhD-1 consists of three main tasks: task 1-1 is the definition of the GreenClouds ontologies, task 1-2 is research on the GreenClouds Knowledge Base System (GKBS), and task 1-3 is the integration of hybrid and photonic networks. PhD-2 has two research directions: task 2-1 concerns accelerators (diversity) and task 2-2 concerns elastic scalability. The programmer supports the PhD students (especially PhD-2).

       | PhD-1                                          | PhD-2                                                                   | Programmer
Year-1 | 1-1 Study ontology                             | 2-1 Study/classify accelerator applications                             | 2-1 Port more accelerator applications
Year-2 | 1-1 Finish ontology; 1-2 Start GKBS design     | 2-1 Finish accelerator work                                             | 2-1 Port more applications; 2-2 Adapt Satin
Year-3 | 1-2 GKBS; 1-3 Add hybrid networks              | 2-2 Elastic scalability using Satin; 2-2 Elastic scalability using BaTS | 2-2 Adapt BaTS; 2-2 Prepare wide-area experiments
Year-4 | 1-3 Wide-area network experiments; PhD thesis  | 2-2 Wide-area experiments with Satin & BaTS; PhD thesis                 | -

In year 1 the work will be mainly devoted to tasks 1-1 and 2-1. PhD-1 will start with the definition of an ontology. This requires the student to become familiar with the existing work in the area, in particular NDL, as well as with basic tools for ontology creation and definition. At the same time, the student needs to become familiar with DAS-4 and its resources. The design of the schemas in this first year should be an iterative process in which all GreenClouds researchers provide feedback on the validity of the proposed model. PhD-2 will first investigate which classes of applications are suitable for which accelerator architectures, identify the bottlenecks, and estimate the energy gains. To prepare the experiments, PhD-2 will take existing GPU applications (Lofar, MMCA), implement them on different accelerators available at DAS-4 or SARA (at least GPUs from both NVIDIA and ATI) and multicores (probably AMD Magny-Cours and Sun Niagara), and compare their performance and energy consumption against those on normal CPUs. Next, PhD-2 will investigate how to classify applications (using ideas from [1,2]), with the goal of obtaining a model-based approach to map and schedule applications. For this purpose, the programmer and PhD-2 will build a more extensive database of applications from the HPDC group and external sources.

In year 2, PhD-1 will produce a more tailored set of schemas (the ontology) based on the feedback. He will also start the work to couple the ontologies and monitored data, in order to provide up-to-date information to applications. PhD-2 will continue task 2-1, while the programmer will also work on task 2-2, starting by adapting Satin for energy optimization.

In year 3, the research of PhD-1 focuses on the completion of the GKBS and its software framework. As DAS-4 will by then have finished its second phase (2012), PhD-1 will also begin the integration of the photonic network into the fabric of GreenClouds services (task 1-3). Simulations with the framework will give experimental results on scalability and robustness while searching for the best path-finding and resource-allocation (reasoning) algorithms. One of the goals is to demonstrate, together with PhD-2, the use of real-time ontology information and resource optimization. To this end, PhD-2 will use Satin and BaTS (in combination with the GKBS) to study elastic scalability. Performance experiments will be done on DAS-4 and on SARA resources. The programmer will adapt the Bag-of-Tasks scheduler and prepare the wide-area experiments.
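The elastic-scalability experiments rest on a simple observation: when speedup is sublinear, adding nodes past some point costs energy without buying much time. A toy model illustrates how GKBS feedback could drive the node-count decision; the Amdahl-style 5% serial fraction and the power figures are invented for this sketch, whereas in GreenClouds they would come from GKBS execution records.

```python
# Toy model of choosing a node count that minimizes energy rather than
# runtime. The serial fraction (5%) and power figures are invented; in
# GreenClouds they would come from GKBS execution records.

def runtime(n_nodes, t_seq=3600.0, serial_frac=0.05):
    """Amdahl-style runtime in seconds on n_nodes."""
    return t_seq * (serial_frac + (1 - serial_frac) / n_nodes)

def energy_kj(n_nodes, node_watts=250.0, shared_watts=2000.0):
    """Energy in kJ: active nodes plus a fixed shared-infrastructure draw."""
    return (n_nodes * node_watts + shared_watts) * runtime(n_nodes) / 1000

candidates = range(1, 65)
fastest = min(candidates, key=runtime)     # runtime keeps improving up to 64 nodes
greenest = min(candidates, key=energy_kj)  # but the energy optimum is far smaller
print(fastest, greenest)
```

In this model the runtime-optimal allocation uses all 64 nodes, while the energy-optimal allocation is an order of magnitude smaller, which is exactly the trade-off the Satin and BaTS extensions are meant to navigate at run time.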

In year 4, PhD-1 and PhD-2 will work together on wide-area experiments using distributed applications that use multiple clusters and that try to reduce the total energy (computing plus networking). Both students will finalize their work and write their dissertations. Throughout the project, the students will write and contribute to scientific articles and demonstrations at international venues highlighting their research. We are also keen on making our work known to the international research community, and the students will be required to present their results at relevant conferences and in relevant journals.

8. Expected use of instrumentation

We will use the NWO/M-funded DAS-4 system, a computer science testbed for doing controlled experiments on performance and energy consumption. DAS-4 will contain six clusters at different locations, connected by optical 10 Gb/s links provided by SURFnet. The system will have a variety of accelerators and energy sensors. In addition, various (super)computing, accelerator, and cloud environments at SARA will be used as proof-of-concept, test, and reference systems.

9. Literature

- [1] Alexander S. van Amesfoort, Ana Lucia Varbanescu, and Henk J. Sips: Metrics to Characterize Application Behavior for Parallel Programming, 15th Workshop on Compilers for Parallel Computing, July 2010, Vienna, Austria (available from http://www.pds.ewi.tudelft.nl/~varbanescu/AppCharSamples)

- [2] Krste Asanovic, Rastislav Bodik, James Demmel, Tony Keaveny, Kurt Keutzer, John Kubiatowicz, Nelson Morgan, David Patterson, Koushik Sen, John Wawrzynek, David Wessel, Katherine Yelick: A View of the Parallel Computing Landscape, Comm. ACM, Vol. 52, No. 10 (Oct. 2009), pp. 56-67.

- [3] Bartosz Borucki, Maciej Cytowski and Maciej Remiszewski, Nautilus - A Testbed for Green Scientific Computing, ERCIM Newsletter Towards Green ICT, Oct. 2009, see http://ercim-news.ercim.eu/en79

- [4] Brown D. and Reams C.: Toward Energy-efficient Computing, ACM Queue, Vol. 8, No. 2, February 2010, http://queue.acm.org/detail.cfm?id=1730791

- [5] J. Buisson, O.O. Sonmez, H.H. Mohamed, W. Lammers, and D.H.J. Epema: Scheduling Malleable Applications in MultiCluster Systems, IEEE Cluster 2007, Austin, Texas, Sept. 2007.

- [6] M. Curtis-Maury, A. Shah, F. Blagojevic, D.S. Nikolopoulos, B.R. de Supinski, and M. Schulz: Prediction Models for Multi-dimensional Power-performance Optimization on Many Cores, Proc. of the 17th Int. Conference on Parallel Architectures and Compilation Techniques, pp. 250-259, 2008.

- [7] Georges Da-Costa, Jean Patrick Gelas, Yiannis Georgiou, Laurent Lefevre, Anne-Cecile Orgerie, Jean-Marc Pierson, and Olivier Richard: The GREEN-NET Framework: Energy Efficiency in Large Scale Distributed Systems, The Fifth Workshop on High-Performance, Power-Aware Computing, May 2009, Rome, Italy.

- [8] Tom DeFanti, Cees de Laat, Joe Mambretti, Kees Neggers, Bill St. Arnaud: TransLight: a global-scale LambdaGrid for e-science, Communications of the ACM, Volume 46, Issue 11 (Nov. 2003), pp. 34-41.

- [9] EU Expert Group: The Future of Cloud Computing: Opportunities for European Cloud Computing beyond 2010, Jan. 2010, see http://cordis.europa.eu/fp7/ict/ssai/events-20100126-cloud-computing_en.html

- [10] X. Feng, R. Ge, and K.W. Cameron: Power and Energy Profiling of Scientific Applications on Distributed Systems, Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium, April 2005.

- [11] Wu-chun Feng and Thomas Scogland: The Green500 List: Year One, The Fifth Workshop on High-Performance, Power-Aware Computing, May 2009, Rome, Italy.

- [12] Tiago Fioreze: Self management of hybrid optical and packet switched networks, CTIT Ph.D. thesis series no. 09-163, University of Twente, Enschede, ISSN 1381-3617, ISBN 978-90-365-29966-2.

- [13] Evghenii Gaburov, Jeroen Bedorf, and Simon Portegies Zwart: Gravitational Tree-Code on Graphics Processing Units: Implementation in CUDA, Int. Conf. on Computational Science 2010, Amsterdam, The Netherlands.


- [14] Paola Grosso, Li Xu, Jan-Philip Velders, Cees de Laat: StarPlane - A National Dynamic Photonic Network Controlled by Grid Applications, Emerald Journal of Internet Research, Vol. 17, Issue 5, 2007, pp. 546-553.

- [15] M. Gupta and S. Singh: Greening of the Internet, SIGCOMM’03, Aug. 2003, Karlsruhe, Germany.

- [16] Jeroen van der Ham, Freek Dijkstra, Paola Grosso, Ronald van der Pol, Andree Toonk, Cees de Laat: A distributed topology information system for optical networks based on the semantic web, Optical Switching and Networking, Volume 5, Issues 2-3, June 2008, pp. 85-93.

- [17] Song Huang, Shucai Xiao, and Wu-chun Feng: On the Energy Efficiency of Graphics Processing Units for Scientific Computing, The Fifth Workshop on High-Performance, Power-Aware Computing, May 2009, Rome, Italy.

- [18] H. Imaizumi, H. Morikawa: Directions towards Future Green Internet, in Proceedings of the 12th International Symposium on Wireless Personal Multimedia Communications (WPMC'09), Sendai, Sep. 2009.

- [19] Kale, L. V., Kumar, S., and DeSouza, J.: A malleable-job system for timeshared parallel machines, 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGRID'02), Berlin, Germany, pp. 230-237, 2002.

- [20] Cees de Laat, Paola Grosso: Lambda Grid developments, History - Present - Future, Lighting the Blue Touchpaper for UK e-Science - Closing Conference of ESLEA Project, Proceedings of Science, June 2007.

- [21] James Laudon: Performance/Watt: the New Server Focus, ACM SIGARCH Computer Architecture News, Vol. 33, No. 4 (Nov. 2005), pp. 5-13

- [22] Kien Le, Ricardo Bianchini, Margaret Martonosi and Thu D. Nguyen: Cost- and Energy-Aware Load Distribution Across Data Centers, SOSP Workshop on Power Aware Computing and Systems (HotPower '09), Oct. 2009

- [23] Xiaohan Ma, Mian Dong, Lin Zhong , and Zhigang Deng: Statistical Power Consumption and Analysis for GPU-based Computing, SOSP Workshop on Power Aware Computing and Systems (HotPower '09), Oct. 2009

- [24] J. Maassen, K. Verstoep, H.E. Bal, P. Grosso, and C. de Laat: Assessing the Impact of Future Reconfigurable Optical Networks on Application Performance, Sixth High-Performance Grid Computing Workshop (HPGC 2009), with the 23rd International Parallel & Distributed Processing Symposium (IPDPS 2009), Rome, Italy, May 2009.

- [25] Andreas Merkel, Jan Stoess, and Frank Bellosa: Resource-conscious Scheduling for Energy Efficiency on Multicore Processors, Proc. of the 5th ACM SIGOPS EuroSys Conference (EurosSys'10), Paris, France, April 2010

- [26] Rob V. van Nieuwpoort, Gosia Wrzesinska, Ceriel J.H. Jacobs and Henri E. Bal: Satin: a High-Level and Efficient Grid Programming Model, ACM Transactions on Programming Languages and Systems (TOPLAS), Volume 32, Issue 3, 2010, see http://www.cs.vu.nl/~bal/Papers/toplas-satin2010.pdf

- [27] Rob V. van Nieuwpoort and John W. Romein: Correlating Radio Astronomy Signals with Many-Core Hardware, International Journal of Parallel Programming, 2010.

- [28] http://www.nvidia.com/object/tesla_computing_solutions.html

- [29] Leonid Oliker: Green Flash: Designing an energy efficient climate supercomputer, 23rd International Parallel & Distributed Processing Symposium (IPDPS 2009), Rome, Italy, May 2009.

- [30] Ana-Maria Oprescu and Thilo Kielmann: Bag-of-Tasks Scheduling under Time and Budget Constraints, Department of Computer Science, Vrije Universiteit, Amsterdam, The Netherlands, Febr. 2010, see http://www.cs.vu.nl/~kielmann/papers/bats.pdf

- [31] http://www.hpcinthecloud.com/features/SARA-Opens-Gate-for-HPC-Cloud-Researchers-93287464.html

- [32] S. Song, R. Ge, X. Feng, K. W. Cameron: Energy Profiling and Analysis of the HPC Challenge Benchmarks, The International Journal of High Performance Computing Applications, Vol. 23, No. 3, pp. 265-276, 2009.

- [33] M.A. Suleman, M.K. Qureshi, Y.N. Patt: Feedback-driven threading: power-efficient and high-performance execution of multi-threaded workloads on CMPs, Proceedings of the 13th Int. Conf. on Architectural Support for Programming Languages and Operating Systems, pp. 277-286, 2008

- [34] H. Takizawa, K. Sato, and H. Kobayashi: SPRAT: Runtime Processor Selection for Energy-aware Computing, Proc. 2008 IEEE Int. Conference on Cluster Computing, pp. 386-393, 2008.

- [35] Taura, K., Endo, T., Kaneda, K., and Yonezawa, A.: Phoenix: a parallel programming model for accommodating dynamically joining/leaving resources, ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP 2003), pp. 216-229.

- [36] F. Travostino, P. Daspit, L. Gommans, C. Jog, C.T.A.M. de Laat, J. Mambretti, I. Monga, B. van Oudenaarde, S. Raghunath and P.Y. Wang: Seamless Live Migration of Virtual Machines over the MAN/WAN, Future Generation Computer Systems, Volume 22, Issue 8, Oct. 2006, pp. 901-907.

- [37] Vadhiyar, S. S. and Dongarra, J. J.: Self adaptivity in Grid computing, Concurrency and Computation: Practice and Experience 17, 2-4, 235-257, 2005.

- [38] W. Vereecken, L. Deboosere, D. Colle, B. Vermeulen, M. Pickavet, B. Dhoedt, and P. Demeester: Energy Efficiency in Telecommunication Networks, Proceedings of NOC2008, the 13th European Conference on Networks and Optical Communications, 2008, pp. 44-51.

- [39] Ben van Werkhoven, Jason Maassen, and Frank J. Seinstra: Towards User Transparent Parallel Multimedia Computing on GPU-clusters, 1st Workshop on Applications for Multi and Many Core Processors (in conjunction with ISCA 2010), June 2010.

- [40] Li Xu , Freek Dijkstra , Damien Marchal , Arie Taal , Paola Grosso and Cees de Laat: A Declarative Approach to Multi-Layer Path Finding Based on Semantic Network Descriptions, Proceedings of the 13th Conference on Optical Network Design and Modeling, Feb. 2009 (ONDM09) ISBN: 978-1-4244-4187-7

10. Budget

Component A (to be funded from grant)

(1) Staff (see tables 1 and 2) No FTE x amount

a) appointment of PhD student(s) 2 FTE x € 200,013 = € 400,026

b) appointment of post-doc(s) ........ FTE x € ........ = €

f) appointment of other (new) staff ........FTE x € ........ = €

scientific programmer, 3 year, university level 1 FTE x € 210,605 = € 210,605

Subtotal, staff = € 610,631

(2) Other

a) materials and internal travel *) ........ FTE x € ........ = € 1,000

b) international travel budget *) ........ FTE x € ........ = €

c) project-related equipment/software *) ........ FTE x € ........ = €

d) (foreign) visiting researchers ........ FTE x € ........ = €

2 x PhD student (AiO) bench fee = € 10,000

Subtotal, other = € 11,000

Component A: subtotal (1) + (2) = € 621,631

Component B (4) Financial contribution -/- = -/- €

Total grant = € 621,631

Component B (contribution from industry or societal partners)

(1) Capitalisation of hours to be worked, plus 50% overheads = €

(2) Cost of materials and aids used = €

(3) Use of equipment and machinery = €

.............. +

Total Component B = €

TOTAL PROJECT BUDGET (= Component A +B) = €

*) For compartment 1 these are without V.A.T., for compartment 2 these amounts are including V.A.T.