
www.chain-project.eu [email protected] Grant Agreement n. 306819

Project acronym : CHAIN-REDS

Project full title : Co-ordination & Harmonisation of Advanced e-INfrastructures for Research and Education Data Sharing

Grant agreement : 306819

Start date : December 1, 2012

Duration : 30 months

Programme : 7th Framework Programme (FP7)

Theme : Capacities specific program

Thematic area : Research Infrastructures

Funding scheme : Support action

Call identifier : FP7–INFRASTRUCTURES–2012-1

Project coordinator : Federico Ruggieri (INFN)

D3.4–Interoperation report

Deliverable Status : Draft
File Name : CHAIN-REDS-D3.4-d
Due Date : 30/04/2015
Submission Date : 31/05/2015
Dissemination Level : Public
Author(s) : CHAIN-REDS consortium (please refer to list of contributors on next page)

© Copyright 2012-2015 The CHAIN-REDS Consortium

INFN Istituto Nazionale di Fisica Nucleare - Italy
CIEMAT Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas - Spain
GRNET Greek Research and Technology Network S.A. - Greece
CESNET Zajmove Sdruzeni Pravnickych Osob - Czech Republic
UBUNTUNET The Ubuntunet Alliance for Research and Education Networking - Malawi
CLARA Cooperacion Latinoamericana de Redes Avanzadas - Uruguay
IHEP Institute of High Energy Physics, Chinese Academy of Sciences - China
ASREN Arab States Research and Education Network - Jordan
SIGMA Sigma Orionis - France
CDAC Centre for Development of Advanced Computing - India


Disclaimer

More details on the copyright holders can be found at www.chain-project.eu. CHAIN-REDS (“Co-ordination & Harmonisation of Advanced e-Infrastructures for Research and Education Data Sharing”) is a project co-funded by the European Union in the framework of the 7th FP for Research and Technological Development, as part of the “Capacities specific program - Research Infrastructures FP7–INFRASTRUCTURES–2012-1”. For more information on the project, its partners and contributors visit www.chain-project.eu. You are permitted to copy and distribute verbatim copies of this document containing this copyright notice, but modifying this document is not allowed. You are permitted to copy this document in whole or in part into other documents if you attach the following reference to the copied elements: "Copyright (C) 2013 - CHAIN-REDS Consortium - www.chain-project.eu". The information contained in this document represents the views of the CHAIN-REDS Consortium as of the date they are published. The CHAIN-REDS Consortium does not guarantee that any information contained herein is error-free or up to date. THE CHAIN-REDS CONSORTIUM MAKES NO WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, BY PUBLISHING THIS DOCUMENT.

Revision Control

Issue | Date | Comment | Author
a | 27/1/2015 | ToC and first round of contributions | O. Prnjat, C. Kanellopoulos, K. Koumantaros, E. Athanasaki
b | 25/4/2015 | Contributions to all sections | Gang Chen, Luis Núñez, Subrata Chattopadhyay, Wenjing Wu, Aouaouche Elmaouhab, Giuseppe La Rocca, Rafael Mayo-García
c | 19/05/2015 | Elaboration on the MoUs, inclusion of the MoUs in Appendix | O. Prnjat, E. Athanasaki
d, e | 28/5/2015 | Final edits and contributions | O. Prnjat, C. Kanellopoulos, K. Koumantaros, E. Athanasaki


Abstract The CHAIN-REDS project aims at promoting and supporting technological and scientific collaboration across e-Infrastructures established and operated in various continents, in order to define a path towards a global e-Infrastructure ecosystem that will allow Virtual Research Communities (VRCs), research groups and even single researchers to access and efficiently use worldwide distributed resources (i.e., computing, storage, data, services, tools, applications).

Work Package 3, Interoperation and coordination of e-Infrastructures, supports the interoperation of Grids in Europe and other world regions through support for Regional Operations Centres (ROCs) in terms of functionality requirements, structure and guidelines; and also looks into solutions for standardised access to heterogeneous Distributed Computing Infrastructures (DCIs).

In this deliverable, we present the status of the operations in the regions, as related to the European Grid Infrastructure (EGI), and also discuss the long-term MoUs between EGI and other regional infrastructures. We also present the CHAIN-REDS approach to cloud computing as well as one-stop-shop solutions to support heterogeneous DCI environments. Finally, we describe the key role of WP3 in the operational support conducted for the selected use-cases / success stories.


Table of contents

ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
PURPOSE
GLOSSARY
1 INTRODUCTION
2 STATUS OF THE GRID OPERATIONS IN THE REGIONS
2.1 AFRICA-ARABIA
2.2 ASIA-PACIFIC
2.3 CHINA
2.4 INDIA
2.5 LATIN AMERICA
2.6 OVERVIEW OF THE MOUS
3 CLOUD COMPUTING AND HETEROGENEOUS DCI ACCESS
3.1 CHAIN-REDS CLOUD TEST-BED
3.2 CHAIN-REDS SCIENCE GATEWAY FOR SINGLE-STOP SHOPPING ACCESS
4 OPERATIONAL SUPPORT FOR THE USE-CASES
4.1 ABINIT
4.2 TREETHREADER
4.3 GROMACS
4.4 LAGO
4.5 APHRC
5 CONCLUSION
6 ANNEX I: MOUS
6.1 MOU EGI – AFRICA-ARABIA ROC
6.2 MOU EGI – CHINA ROC
6.3 MOU EGI – GARUDA (INDIA)
6.4 MOU EGI – ROC-LA
6.5 MOU EGI – ASIA-PACIFIC-ROC
7 ANNEX II: REVIEW OF HPC INSTALLATIONS IN THE REGIONS
7.1 ARABIA
7.2 CHINA
7.3 INDIA
7.4 LATIN AMERICA
7.5 SUMMARY


List of Figures

Figure 1: Structure of CNGrid Monitoring & Accounting System
Figure 2: Monitoring groups and monitoring subjects
Figure 3: GARUDA and EGI helpdesks
Figure 4: GARUDA Accounting Architecture
Figure 5: Paryavekshanam architecture
Figure 6: EGI Monitoring architecture
Figure 7: Collaboration of Resource Infrastructure Providers with EGI
Figure 8: CHAIN-REDS cloud testbed
Figure 9: CHAIN-REDS’ MyCloud
Figure 10: The reference model for the CHAIN-REDS Science Gateway
Figure 11: View of DCI Engine with the list of JSAGA adaptors
Figure 12: Location of sites running jobs from CHAIN-REDS use cases
Figure 13: Status of the ABINIT installation in LA, Arab and EU infrastructures
Figure 14: Wall Clock Time used in hours
Figure 15: DCIs used for TreeThreader
Figure 16: TreeThreader CPU hrs & executed jobs on CHAIN-REDS cloud
Figure 17: Status of the inter-continental infrastructure for GROMACS
Figure 18: Wall Clock Time used in hours
Figure 19: Wall Clock Time used in hours
Figure 20: LAGO-Corsika end time in hh:mm:ss
Figure 21: Sunway/Bluelight supercomputer in NSC-JN
Figure 22: Utilization in PARAM Yuva II


List of Tables

Table 1: Status of the ROC action plans
Table 2: AA ROC Resource Centres
Table 3: Core Services of the AA ROC
Table 4: Advanced services operated by AAROC
Table 5: AP-ROC Resource Centres
Table 6: CHINA ROC Resource Centres
Table 7: CAS@home Resource Centres
Table 8: CNGrid Resource Centres
Table 9: GARUDA Resource Centres
Table 10: GARUDA - EGI policies comparison
Table 11: GARUDA - EGI information system comparison
Table 12: ROC LA Resource Centres
Table 13: The brief information of national supercomputing centers in China


Purpose

CHAIN-REDS is an FP7 project co-funded by the European Commission (DG CONNECT). It started on December 1st, 2012 and aims at promoting and supporting technological and scientific collaboration across different e-Infrastructures established and operated in various continents, in order to define a path towards a global e-Infrastructure ecosystem that will allow Virtual Research Communities (VRCs), research groups and even single researchers to access and efficiently use worldwide distributed resources, i.e. computing, storage, data, services, tools and applications.

The purpose of this deliverable is to present the status of the operations in the regions, as related to the European Grid Infrastructure (EGI), and to present the CHAIN-REDS approach to cloud computing as well as one-stop-shop solutions to support heterogeneous DCI environments. Finally, the key role of WP3 in the operational support conducted for the selected use-cases / success stories is described.


Glossary

AAROC Africa-Arabia Regional Operations Centre
APEL Accounting Processor for Event Logs
APGridPMA Asia-Pacific Grid Policy Management Authority
API Application Programming Interface
APROC Asia-Pacific Regional Operations Centre
ASGCA Certification Authority in Taiwan
AUP Acceptable Use Policy
CA Certification Authority
CAS@home Computing platform at the computer centre of IHEP, CAS
CDMI Cloud Data Management Interface
CERN European Organization for Nuclear Research
CHAIN Co-ordination and Harmonisation of Advanced e-Infrastructures
CHAIN-REDS Co-ordination and Harmonisation of Advanced e-Infrastructures for Research and Education Data Sharing
CNGrid China National Grid
CNRST Centre National pour la Recherche Scientifique et Technique in Morocco
CSGF The Catania Science Gateway Framework
CSIR Council of Scientific and Industrial Research, South Africa
DCI Distributed Computing Infrastructure
DCMI Dublin Core Metadata Initiative
DNS Domain Name Server
DevOps Development and operations
DoW Description of Work – Annex I to the GA
DR Data Repository
EC European Commission
EGI European Grid Initiative
EGI.eu A not-for-profit foundation established to coordinate and manage the European Grid Infrastructure (EGI)
EGI-InSPIRE European Grid Initiative - Integrated Sustained Pan-European Infrastructure
EIRO European Intergovernmental Research Organizations
EMI European Middleware Initiative
EPIKH Exchange Programme to advance e-Infrastructure Know-How
EUGridPMA Europe, Middle-East, and Africa Grid Policy Management Authority
FP7 European Commission's Framework Programme Seven
GA Grant Agreement
Ganglia Monitoring system
GARUDA National Grid Initiative in India
GÉANT The pan-European data network dedicated to the research and education community
GGUS Global Grid User Support
GIIS Grid Index Information Services
GLUE EGI information system schema
GOCDB Grid Operations DataBase
GOS CNGrid middleware
GRIS Grid Resource Information Services
GSTAT EGI monitoring system
HPC High Performance Computing
IdF Identity Federation
IGCA GARUDA (Indian) Certification Authority
IGE Initiative for Globus in Europe
IGTF Interoperable Global Trust Federation
IVOA International Virtual Observatory Alliance
KB Knowledge Base
KLIOS Knowledge Linking and sharIng in research dOmainS
MoU Memorandum of Understanding
Nagios Monitoring application
NGI National Grid Initiative
NREN National Research and Education Network
NSCC-JN National Supercomputing Center in Jinan
NSCC-SZ National Supercomputing Center in Shenzhen
NSCC-TJ National Supercomputing Center in Tianjin
OADR Open Access Data Repository
OAI-PMH Open Archives Initiative Protocol for Metadata Harvesting
OCCI Open Cloud Computing Interface
OLA Operational Level Agreement
OpenStack Cloud computing software platform
OWL Ontology Web Language
RDF Resource Description Framework
ROC Regional Operation Centre
ROCLA Latin America Regional Operation Centre
ROCOps Regional Operations
RT Request Tracker
SAM Service Availability Monitoring
SCCAS Supercomputing Center of the Chinese Academy of Sciences
SG Science Gateway
SLURM Simple Linux Utility for Resource Management
SOAP Simple Object Access Protocol
SSC Shanghai Supercomputer Center
Synnefo GRNET's own cloud management framework, compatible with OpenStack
VO Virtual Organisation
VOMS Virtual Organisation Membership Service
VPN Virtual Private Network
VRC Virtual Research Community
WCT Wall Clock Time
WP Work Package
XML Extensible Markup Language


1 Introduction The CHAIN-REDS project aims at promoting and supporting technological and scientific collaboration across e-Infrastructures established and operated in various continents, in order to define a path towards a global e-Infrastructure ecosystem that will allow Virtual Research Communities (VRCs), research groups and even single researchers to access and efficiently use worldwide distributed resources (i.e., computing, storage, data, services, tools, applications).

Work Package 3, Interoperation and coordination of e-Infrastructures, supports the interoperation of Grids in Europe and other world regions through support for Regional Operations Centres (ROCs) in terms of functionality requirements, structure and guidelines; and also looks into solutions for standardised access to heterogeneous Distributed Computing Infrastructures (DCIs).

In this deliverable, we present the status of the operations in the regions, as related to the European Grid Infrastructure (EGI), and also discuss the long-term MoUs between EGI and other regional infrastructures. Next we present the CHAIN-REDS approach to cloud computing as well as one-stop-shop solutions to support heterogeneous DCI environments, including Grid, cloud and HPC installations. Finally, we describe the key role of WP3 in the operational support conducted for the selected use-cases / success stories.


2 Status of the Grid operations in the regions

The overall status of the initial action plan is shown in Table 1 below. All actions have been accomplished; the Africa-Arabia ROC continues to use the EGI Catch-All Certification Authority, alongside several individual national-level CAs.

Table 1: Status of the ROC action plans

Action AAROC APROC ChinaROC GARUDA ROCLA Designate a person that will have the role of the ROC Manager √ √ √ √

Designate a person that will have the role of the Security Officer √ √ √ √

Sign MoU with EGI.eu as an Integrated Resource Infrastructure Provider √ √ √ √

Update the information on the ROC website √

Investigate the compatibility with the EGI policies √

Set up dedicated Support Unit in GGUS √ √ √

Register with EGI.eu as a Peer Resource Infrastructure Provider √

Adopt and employ Operational Policies and Procedures √ √

Create a new Regional Operations Centre in the EGI.eu GOCDB and register production Sites √ √ √

Setup and Operate a Grid Monitoring Service √ √

Investigate integration with the EGI Monitoring Framework

Publish accounting records to the EGI.eu Accounting System from all certified Resource Centres

Investigate integration with the EGI Accounting System √

Investigate the publishing of Service Information using Glue 1.3 or Glue 2.0 √

Provide IGTF Accredited Certificate Services that will cover the ROC √ (partial)

Execute jobs through a Scientific Gateway in Catania √


2.1 Africa-Arabia

The Africa-Arabia Regional Operations Centre has an MoU with EGI.eu as a Peer Resource Infrastructure Provider. The ROC is a collaboration between the regional network alliances covering the Sub-Saharan African region and the Arabian and North-African states: the Ubuntunet Alliance 1 and ASREN 2 respectively. The ROC is implemented via an MoU between the CSIR Meraka Institute and EGI.eu, and operationalised via a collaboration between CNRST in Morocco and the South African National Grid in South Africa, which represent the NGIs in these countries. Core services which enable the interoperability between NGIs, both within the region and between the region and others (including Europe), are operated in a distributed fashion between participating members of the ROC, mostly in South Africa and Morocco. Due to the nature of the agreement, in many cases (South Africa, Kenya, Tanzania, Nigeria, Jordan and Algeria) these activities are conducted in close collaboration with, or directly by, the NRENs of the respective countries.

Structure, Composition and Activities of the ROC
The ROC architecturally follows a hub-and-spoke model, with a set of core services provided by the ROC directly and site services provided by the actual resource providers. Resource providers, also known as sites, provide the computing, data and, in some cases, the man-power of the ROC, while the Resource Infrastructure Provider operates the core services and coordinates activities.

The ROC has both operational and developmental activities, which are coordinated in frequent contact and communication with EGI.eu and its member NGIs. Operations cover the responsibility of maintaining the availability of a suite of services and are typically the collective responsibility of the Africa-Arabia Regional Operations (AAROCOps) Team. The members of AAROCOps are employed by their local institutes. Beyond the basic operations, there are a few further roles, including site operators, infrastructure developers, service developers, international liaisons, user community contact points, and policy directors, which form the periphery of the ROC community. These are constituted according to the needs of the moment, often via agreements with the Virtual Research Communities which the ROC serves via its sites. However, the core roles of ROC manager, deputy managers, and security officers are stipulated in the MoU with EGI. The ROC manager is Bruce Becker (CSIR) and the Security Officer is Roderick Mooi (CSIR). Deputies are Bouchra Rahim (CNRST) and Ouafa Bentaleb (DZ-Grid).

1 See http://www.ubuntunet.net/members for the membership map.

2 ASREN members are given in detail at http://asrenorg.net/article/25418/National-Networks and cover most countries from the Arabian peninsula across North Africa to Morocco.


Although the specific activities conducted by members of the ROC may differ slightly from site to site, depending on precise local needs, the ROC enables certain collective activities beyond the operation of a single platform. The most important of these is perhaps the development of the infrastructure in terms of site-independent or catch-all services which link users to resources. The delivery of these services is done in a continuously tested way, ensuring that the ROC remains open to innovation and to requests for new applications, services and use cases. The adoption of a DevOps paradigm in the region has enabled the ROC both to deliver relevant technologies (cloud-based or web-enabled) via gateways to the grid, and to harness the communities of practice present in the region, giving them access to a powerful platform.

The sites of the ROC
AAROC has two kinds of compute sites within its ambit, both of which are registered in the GOCDB and monitored, and which can be described according to their operational quality. Production sites are those which have signed the Operating Level Agreement and are expected to provide corresponding levels of service, whilst other sites have no desire or capacity to do so but are nevertheless collaborating and sharing resources in the ROC. This may be due to several legitimate factors, including the timescale at which resources are provided (i.e. they may be transient). Their inclusion in the operations database and their usage of the core services, however, provide them and the users they serve a means to interoperate with peer infrastructures, demonstrating once more the value of the ROC.

Since the latter sites are more dynamic and heterogeneous than the former, we present here only those sites which have agreed to the EGI OLA and are 100% interoperable with EGI and peer infrastructures.

Table 2: AA ROC Resource Centres

#/# Site Name CPUs Storage

1 ZA-MERAKA (core services) 48 20 TB

2 ZA-WITS-CORE 156 7.4 TB

3 ZA-UJ 188 2.0 TB

4 ZA-UFS 768 -

5 ZA-CHPC 560 40 TB

6 ZA-UCT-ICTS 24 -

7 DZ-01-ARN 42 2 TB

8 MA-CNRST 36 0.4 TB


Services offered and employed by the ROC
The MoU with EGI makes provision for access to EGI central services by Africa-Arabia sites via the ROC. The services used by the operations team are:

GOCDB: The sites are registered along with their technical contacts and service details in the operations databases.

GGUS: There is an operational support unit in GGUS which allows issues to be assigned to the ROC and associated sites, as well as allowing the sites to escalate issues to 2nd or 3rd-level support.

Operations Portal: AAROC managers have access to the EGI Operations Portal. This allows ROC-level monitoring and provides a means to check that the OLA is being respected.

Catch-All services: AAROC users and operators have access to the EGI Catch-All services. These include core grid services, but also adjunct services such as the site certification tool and the catch-all Certificate Authority and VOMS servers.

Beyond these services operated by EGI, AAROC provides a set of core services. These are summarised below; primary instances are designated in bold.

Table 3: Core Services of the AA ROC

Service | Location(s) | Description
Top-BDII server | ZA-MERAKA (Pretoria), MA-01-CNRST (Rabat) | The top-level information index necessary for service discovery and monitoring
WMS server | ZA-MERAKA (Pretoria), MA-01-CNRST (Rabat) | The workload management service used to match job requests with services at the sites
LFC | MA-01-CNRST (Rabat) | Logical File Catalogue needed to provide an abstract interface to distributed permanent data storage facilities
SAM-NAGIOS | ZA-MERAKA (Pretoria), MA-01-CNRST (Rabat) | Service Availability and Monitoring service
VOMS | ZA-UFS (Bloemfontein) | Virtual Organisation Membership Service hosting the Catch-All VO services of the ROC
ARGUS | ZA-MERAKA (Pretoria) | ROC-level authorisation server for grid services

Furthermore, there are several advanced services which are not necessarily grid- or cloud-related but are used by the user and technical communities that the ROC serves. These are shown below.

Table 4: Advanced services operated by AAROC

Service | Location | Description
CVMFS | ZA-UFS (Bloemfontein) | Application delivery platform
Jenkins | ZA-UFS (Bloemfontein) | The continuous integration platform used for porting new applications to the infrastructure
Dev site | ZA-MERAKA (Pretoria) | An IaaS cloud site used for the development and integration of new services
Code repositories | Github: http://github.com/AAROC | The AAROC service descriptions, configurations and state expressions are implemented with Puppet and Ansible. This DevOps code is kept in one of several collaborative code repositories, along with the source code necessary for the operation of the other services mentioned above
Messaging platform | Slack: https://africa-arabia-roc.slack.com | Real-time collaboration and information sharing is done with Slack, a messaging platform which has been integrated with the automation and other tools mentioned above

2.2 Asia-Pacific

The ROC manager and Security Officer is Eric Yen.


The region is supported by the AP ROC3, which has been created as a coordination and support point for sites in the Asia-Pacific region. The AP ROC is one of the oldest Regional Operation Centres and supports sites in many countries of the region.

There are 27 Sites running operationally within the AP-ROC. All sites are running the EMI middleware.

Table 5: AP-ROC Resource Centres

#/# Site CPUs Storage

1 Australia-ATLAS 920 1.05 PB

2 HK-HKU-CC-01 8 528 GB

3 IN-DAE-VECC-02 888 0 GB

4 INDIACMS-TIFR 320 1.0 PB

5 IR-IPM-HEP 16 25 TB

6 JP-KEK-CRC-02 3504 2.1 PB

7 KR-KISTI-GSDC-01 2688 52 GB

8 KR-KNU-T3 226 0 GB

9 KR-UOS-SSCC 60 275 TB

10 LCG-KNU 484 664 TB

11 MY-UPM-BIRUNI-01 344 3.95 TB

12 NCP-LCG2 524 189 TB

13 PK-CIIT 112 0 GB

14 T2-TH-ALICE-NSTDA 60 0 GB

15 T2-TH-CUNSTDA 200 110 TB

16 T2-TH-SUT 320 0 GB

17 T3-TH-CHULA 32 5 TB

3 http://aproc.twgrid.org/


18 Taiwan-LCG2 3384 6.183 PB

19 TH-NECTEC-LSR 1 2.1 PB

20 TOKYO-LCG2 3072 3.16 TB

21 TW-FTT 1068 0 GB

22 TW-NCUHEP 2 0 GB

23 TW-NTU-HEP 2288 171.03 TB

The AP-ROC has interoperated with EGI as an Integrated Resource Infrastructure Provider, with a signed MoU, since before the start of CHAIN-REDS.

Most of the countries within the AP-ROC have already set up their own Certification Authorities. The CA in Taiwan, ASGCA, also operates as a Catch-All Certification Authority, which means that it can provide certificates to users in the Asia-Pacific region who do not have access to a local/national CA.

AP-ROC is using GGUS as the main Helpdesk service and a dedicated Support Unit has already been set up. Furthermore, AP-ROC is already registered in the GOCDB. For monitoring the infrastructure, AP-ROC is using the SAM framework in combination with Nagios and GSTAT. Monitoring results from the AP region are properly published on the EGI Message Broker Network and Availability & Reliability statistics are calculated on a monthly basis.

Regarding Accounting, AP-ROC runs APEL on all sites and accounting information is properly published to the EGI Accounting Portal.

2.3 China

There are three providers operating within China: the CHINA ROC, CAS@home and CNGrid.

For the CHINA ROC, the ROC manager is Yan Xiaofei and the Security Officer is Kan Bowen.

There is one Resource Centre, BEIJING-LCG2, operated by IHEP.

Table 6: CHINA ROC Resource Centres

#/# Site CPUs Storage

1 BEIJING-LCG2 1088 544 TB


The China ROC has a Memorandum of Understanding with EGI.eu as an Integrated Resource Provider. The ROC is registered in the EGI.eu GOCDB and there is a web portal for the CHINA ROC4. A dedicated GGUS support unit has been set up. The Resource Centre within the ROC publishes accounting information to the EGI.eu Central Accounting System. IHEP has been operating its own Certification Authority (IHEP CA), which has been accredited by the EUGridPMA and is now a member of the APGridPMA. The ROC maintains its own instance of an EGI SAM/Nagios box that monitors the sites' services.

CAS@home is a high throughput oriented volunteer computing platform which offers free high throughput computational power to the scientific computing community. CAS@home is located and operated at the computer centre of IHEP, CAS.

For CAS@home, both the manager and the Security Officer is Wenjing Wu.

Table 7: CAS@home Resource Centres

#/# Site CPUs Storage

1 CAS@home ~2000 2TB

China National Grid (CNGrid) is a federation of 14 High Performance Computing Resource Centres, including several national supercomputing centers: the Supercomputing Center of the Chinese Academy of Sciences (SCCAS), the Shanghai Supercomputer Center (SSC), the National Supercomputing Center in Tianjin (NSCC-TJ), the National Supercomputing Center in Shenzhen (NSCC-SZ) and the National Supercomputing Center in Jinan (NSCC-JN). The middleware originally used by all CNGrid sites was GOS. During the 12th Five-Year Plan of China, SCE was selected as the operating middleware in CNGrid. SCE is a middleware developed by the Supercomputing Center of the Chinese Academy of Sciences. Since 2010, SCE has been deployed to build the three-tier Supercomputing Environment of CAS, also known as ScGrid. Using the SCE middleware, ScGrid has connected HPC resources from more than 30 institutes in CAS. ScGrid has been providing services to over 400 users from universities and institutes across China for more than 5 years, amounting to around 100M CPU hours.

SCCAS is the Operation Center for CNGrid; the Operation Manager and Security Officer is Haili Xiao.

CNGrid cooperates with the China ROC and acts as a sub-ROC for its sites from the point of view of ticket handling, and is thus effectively “behind” the China ROC.

4 http://www.china-roc.cn


Table 8: CNGrid Resource Centres

#/# Site Performance Storage

1 SCCAS 157TF/300TF 1.4PB

2 SSC 200TF 0.6 PB

3 NSCC-TJ 1PF/3.7PF 2 PB

4 NSCC-SZ 716TF/1.3PF 9.2 PB

5 NSCC-JN 1.1PF 2 PB

6 THU 104TF/64TF 1PB

7 IAPCM 40TF 80TB

8 USTC 10TF 50TB

9 XJTU 5TF 50TB

10 SIAT 30TF/200TF 1PB

11 HKU 23TF/7.7TF 130TB

12 SDU 10TF 50TB

13 HUST 3TF 22TB

14 GSCC 13TF/28TF 40TB

Both for Accounting and Monitoring, CNGrid Resource Centres use an in-house developed monitoring tool called CNGridEye that is tightly integrated with the GOS middleware. Access to the CNGrid infrastructure is based on X.509v3 certificates provided by an internal Certification Authority. As CNGrid is a closed system, the CNGrid Certification Authority is internal to the system and does not participate in cross-trust federations like the IGTF.

Since the security and accounting systems of CNGrid and the European Grid differ, it is not easy to find a set of common tools; nevertheless, it is necessary to connect the European Grid infrastructure for e-Science to China and vice versa. A gateway was thus designed as a bridge between the two infrastructures. The gateway also has to provide authentication and authorization functionalities. To reach this goal, two different security modules were defined: the Identity Mapping Service and the Security Token Service. The Identity Mapping Service is used to map identities between heterogeneous identity management infrastructures. The Security Token Service is used to centrally store security tokens in order to allow users from different domains to retrieve the corresponding tokens. The relevant set of CNGrid security policies is as follows:

• Registration Process. Before using CNGrid, a new user must register an account online and provide personal information and all resource requirements, including memory, disk, CPU hours, software, etc. CNGrid operators will check this information and grant resource access to the user.

• Acceptable Usage. The CNGrid Service Agreement is a document describing the rules that govern the behaviour of users when they are using CNGrid resources. Users must accept this document before registering an account.

• Incident Response. Incident policies are defined in the CNGrid monitoring system. When an incident occurs, CNGrid administrators receive a notification and handle the situation in a coordinated way that minimizes damage and reduces recovery time and costs.
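As an illustration of the gateway's Identity Mapping Service described above, below is a minimal sketch of a DN-to-account lookup. The class, method and account names are hypothetical; the actual gateway implementation is not detailed in this report.

# Minimal sketch of an identity-mapping lookup, as performed by the
# gateway's Identity Mapping Service. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class MappedIdentity:
    grid_dn: str         # X.509 subject DN used on the EGI side
    cngrid_account: str  # local account name on the CNGrid side

class IdentityMappingService:
    def __init__(self):
        # In a real deployment this table would live in a protected
        # database maintained by the gateway operators.
        self._table: dict[str, MappedIdentity] = {}

    def register(self, grid_dn: str, cngrid_account: str) -> None:
        self._table[grid_dn] = MappedIdentity(grid_dn, cngrid_account)

    def map_identity(self, grid_dn: str) -> str:
        """Return the CNGrid account for a given X.509 DN, or raise."""
        try:
            return self._table[grid_dn].cngrid_account
        except KeyError:
            raise PermissionError(f"No CNGrid mapping for {grid_dn}")

# Example use with an illustrative DN and account name
ims = IdentityMappingService()
ims.register("/C=IT/O=INFN/CN=Jane Researcher", "cngrid_user042")
print(ims.map_identity("/C=IT/O=INFN/CN=Jane Researcher"))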

CNGrid operates its own set of operations tools as follows. The main operational components/functions are:

• Helpdesk
• Accounting system
• Monitoring system
• Information system

Regarding the Helpdesk, the GGUS support unit of the China ROC acts as a communication bridge with the CNGrid support unit by forwarding relevant information. The following use cases demonstrate the agreed workflows.

• Scenario 1, External User: when an external user has an issue with CNGrid sites, he or she creates a ticket in GGUS, asking for it to be assigned to the China ROC with a note to be redirected to CNGrid. The CNGrid operations centre takes over and replies to the user via the China ROC support unit.

• Scenario 2, CNGrid User: when a CNGrid user has an issue with an EGI site, he or she has to contact the CNGrid support staff and ask them to open a ticket in GGUS with the relevant information, or use GGUS directly to submit a ticket providing the relevant information.

Regarding the Accounting system, CNGrid uses an internally developed tool to retrieve accounting information from the log files of the batch systems (LSF, Torque, Slurm) at all HPC sites in CNGrid and store it in a unified format.
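The record format of the CNGrid tool is not specified here; as a rough illustration of the approach, the following sketch parses a Torque/PBS-style accounting line (job-end "E" record) into a unified record. The unified field names, the sample site name and the exact log layout are assumptions for illustration only.

# Rough sketch: normalise a Torque/PBS accounting "E" (job end) record
# into a unified accounting dict. The unified field names are invented
# for illustration; the real CNGrid tool's format is not public.
def parse_pbs_end_record(line: str) -> dict:
    timestamp, rec_type, job_id, attrs = line.strip().split(";", 3)
    if rec_type != "E":                  # only job-end records carry usage
        raise ValueError("not a job end record")
    fields = dict(kv.split("=", 1) for kv in attrs.split() if "=" in kv)
    h, m, s = map(int, fields["resources_used.walltime"].split(":"))
    return {
        "site": "EXAMPLE-SITE",          # would come from configuration
        "local_job_id": job_id,
        "user": fields.get("user"),
        "queue": fields.get("queue"),
        "wall_seconds": h * 3600 + m * 60 + s,
        "end_time": timestamp,
    }

sample = ("04/15/2015 10:23:45;E;1042.head.example.cn;"
          "user=alice queue=batch resources_used.walltime=01:02:03")
print(parse_pbs_end_record(sample))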


Regarding the Monitoring System, the analysis is as follows. The monitoring system that runs at CNGrid has two tiers; the system structure is shown in Figure 1. The Main Center runs as the main tier, providing the main monitoring service. It not only monitors the SCE service and grid server running at the Central Site, but also collects sub-tier status and shows all the statuses together on a web page. There are two sub-tiers, the CAS site and the National site. Each site monitors its own clusters and reports its status to the Main Center.

Figure 1: Structure of CNGrid Monitoring & Accounting System

The monitoring system is based on Nagios, with several service groups defined in it. Some dedicated agents have been developed as Nagios plugins. All these agents fetch both device and service status. Figure 2 shows the monitoring groups and monitoring subjects collected by Nagios running at the sub-tiers. There are two metrics defined by the monitoring system. The first one is “Cluster Status”, which includes three concepts (a minimal agent sketch follows the list below):

• CPU Utilization: (the number of cores running jobs) / (the number of available cores)

• Node Utilization: (the number of nodes running jobs) / (the number of available nodes)

• Job counts, user counts and core counts

The second metric defined is “Disk Usage”:

• Disk Usage: (the used disk capacity) / (the total disk capacity)
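As referenced above, a minimal sketch of such a Nagios agent reporting the CPU Utilization concept might look as follows. The core-counting helpers are hypothetical placeholders for queries against the local batch system, and the thresholds are illustrative only; the sketch follows the standard Nagios plugin convention of exit codes 0 (OK), 1 (WARNING) and 2 (CRITICAL).

#!/usr/bin/env python3
# Minimal sketch of a Nagios-style check for the CPU Utilization metric
# defined above. count_busy_cores()/count_available_cores() are
# hypothetical stand-ins for queries against the local batch system.
import sys

def count_busy_cores() -> int:
    return 1520   # placeholder: cores currently running jobs

def count_available_cores() -> int:
    return 2048   # placeholder: total usable cores

def main() -> int:
    busy, avail = count_busy_cores(), count_available_cores()
    if avail == 0:
        print("CRITICAL - no available cores reported | util=0")
        return 2
    util = busy / avail
    msg = f"CPU_UTILIZATION {util:.1%} ({busy}/{avail} cores)"
    # Threshold is illustrative only.
    if util > 0.95:
        print(f"WARNING - {msg} | util={util:.3f}")
        return 1
    print(f"OK - {msg} | util={util:.3f}")
    return 0

if __name__ == "__main__":
    sys.exit(main())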


Figure 2: Monitoring groups and monitoring subjects

The monitoring system of the GOS grid is also used to collect accounting/usage data; it is a rather simpler and more centralized system than the distributed APEL system that EGI has in place. The GOS system is radically different from the grid middleware used by EGI (gLite etc.).

The Information system of CNGrid is a module of the SCE middleware. The nodes in clusters periodically send their information to the information server, and the server processes the received information, unifies the format from the different batch systems and stores the information in databases for further queries.

When comparing the CNGrid services with the EGI services, we find quite a few similarities, even though the CNGrid approach is rather different: the EGI tools are standalone, while the CNGrid tools are tightly integrated with the GOS middleware. Both sets of services rely on Nagios for the scheduling and execution of the monitoring probes and as a result they implement fully compatible interfaces. EGI has a distributed architecture, whereas the GOS grid has a strictly hierarchical structure where the main node handles user registration, job management, accounting and monitoring.

CNGrid can take advantage of a Science Gateway that can hide the operational and technical differences between CNGrid and EGI.


2.4 India

The ROC manager is M. Divya and the Security Officer is M. Gokulatheerthan.

The National Grid Initiative in India is called GARUDA. Currently there are 8 Resource Centres operating within GARUDA, running a mixture of Globus Toolkit versions 4.0.7 and 4.0.8.

Table 9: GARUDA Resource Centres

#/# Site CPU Storage

1 CDAC-BLR 320 -

2 IISC-BLR 64 -

3 CDAC-CHE 320 -

4 JNU-DEL 48 -

5 IIT-GU 128 -

6 CDAC-HYD 320 -

7 CDAC-PUN 4608 -

8 PRL-AHM 16 -

GARUDA has an MoU with EGI.eu as a Peer Resource Infrastructure Provider5, which means that GARUDA has developed its own policies and operates its own operations tools and a nation-wide certification authority, the IGCA. The IGCA is a member of the Interoperable Global Trust Federation (IGTF) through the Asia-Pacific Grid Policy Management Authority (APGridPMA). The requirement for Peer Resource Infrastructure Providers, from the EGI perspective, is that they have a set of policies which address the following topics:

• Registration Process. A peer infrastructure must have a policy document describing the requirements a new user must meet in order to be granted access to the resources of the infrastructure.

• Acceptable Usage. The Acceptable Use Policy (AUP) is a document describing the rules that govern the behaviour of the users when they are using the resources of the infrastructure.

• Operational Security. Operational security policies regulate how the infrastructure and its services are operated in a secure manner. Especially in distributed infrastructures, where services and infrastructure elements can belong to a number of administrative domains, it is very important to clearly define the common set of rules that apply to the operators of these services.

5 https://www.egi.eu/news-and-media/newsfeed/news_2014_025.html

• Incident Response. This policy defines how the infrastructure provider addresses and manages the aftermath of security incidents. The goal is to handle the situation in a coordinated way that limits damage and reduces recovery time and costs.

Using the published EGI policies as the reference, we have reviewed the GARUDA policies in order to assess whether all topics are addressed; the table below provides the results of this comparison.

Table 10: GARUDA - EGI policies comparison

#/# | Policy | GARUDA Document | EGI Document
1 | Acceptable Use Policy | GARUDA AUP6 | EGI AUP7
2 | Security Incident Response Policy | GARUDA Policy Framework (internal confidential document) | EGI Incident Response Policy8
3 | Service Operations Security Policy | GARUDA Policy Framework | EGI Service Operations Security Policy9
4 | Grid Security Policy | GARUDA Policy Framework | EGI Grid Security Policy10
5 | VO Operations Policy | GARUDA Policy Framework | EGI Virtual Organizations Operations Policy11
6 | VO Registration Security Policy | GARUDA Policy Framework | EGI Virtual Organization Registration Security Policy12

6 http://portal.garudaindia.in/gap2/GARUDA-Policy.html
7 https://documents.egi.eu/public/ShowDocument?docid=74
8 https://documents.egi.eu/public/ShowDocument?docid=82
9 https://documents.egi.eu/public/ShowDocument?docid=1475
10 https://documents.egi.eu/public/ShowDocument?docid=86
11 https://documents.egi.eu/public/ShowDocument?docid=77
12 https://documents.egi.eu/public/ShowDocument?docid=78

Although the Globus Toolkit versions used are rather old, GARUDA has developed a rich set of higher-level middleware services that sit on top of Globus. The effort that would be required to migrate to a newer version of the Globus Toolkit is not justified by the benefits such a migration would provide.

As mentioned before, GARUDA, as a Peer Resource Infrastructure Provider, operates its own set of operations tools. The main operational components/functions that can be found in a Resource Infrastructure providing Grid services are the:

• Helpdesk
• Accounting system
• Information system
• Monitoring system

GARUDA is using Request Tracker (RT) to provide Helpdesk services to its users. RT is commonly used by European Resource Infrastructure Providers and there are guidelines on how to integrate13 an RT instance with the SOAP interface provided by GGUS. GARUDA has created a dedicated Support Unit in GGUS and has integrated the GARUDA Request Tracker with GGUS.

Figure 3: GARUDA and EGI helpdesks

There are two possible scenarios implemented:

• Scenario 1: A ticket is created in GGUS, where the ticket submitter assigns the ticket to the NGI_GARUDA support unit using the “Assign to ROC/NGI” field of the GGUS “Submit ticket” form. In this scenario, a corresponding ticket must be created in the appropriate queue of the GARUDA RT.

13 https://wiki.egi.eu/wiki/GGUSRT:GGUS-RT_Interface_Task_Force


• Scenario 2: A ticket is created in the GARUDA RT, where the ticket submitter assigns the ticket to the GGUS support unit. In this scenario, a corresponding ticket is created in GGUS.

In both scenarios, when a ticket is resolved or re-opened in the GARUDA RT (respectively GGUS), the corresponding ticket is also resolved or re-opened in GGUS (respectively the GARUDA RT).
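A bridge of this kind typically polls one system and mirrors state changes into the other. The sketch below illustrates the synchronisation logic only; the client class, method names, state mapping and ticket fields are hypothetical placeholders rather than the actual GGUS SOAP API, for which the GGUS-RT interface guidelines should be consulted.

# Illustrative sketch of RT -> GGUS ticket-state mirroring. Endpoint,
# method and field names are hypothetical placeholders; consult the
# GGUS-RT interface guidelines for the real SOAP interface.
STATE_MAP = {"resolved": "solved", "open": "in progress"}  # RT -> GGUS

class FakeGGUSClient:
    """Stand-in for a SOAP client bound to the GGUS interface."""
    def set_ticket_status(self, ggus_id: str, status: str) -> None:
        print(f"GGUS ticket {ggus_id} -> {status}")

def sync_rt_to_ggus(rt_tickets: list[dict], ggus: FakeGGUSClient) -> None:
    """Mirror RT state changes onto the linked GGUS tickets."""
    for t in rt_tickets:
        ggus_id = t.get("ggus_id")   # link stored when the ticket was bridged
        if ggus_id and t["status"] in STATE_MAP:
            ggus.set_ticket_status(ggus_id, STATE_MAP[t["status"]])

# Example: one resolved RT ticket that is linked to GGUS ticket 98765
sync_rt_to_ggus([{"id": "RT-1", "status": "resolved", "ggus_id": "98765"}],
                FakeGGUSClient())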

For Accounting, GARUDA uses an internally developed tool that extracts accounting records from the batch system log file of each cluster. The Job Accounting Tool (JAT) is a web portal which maintains information about all jobs executed in GARUDA. It captures all information related to jobs submitted to GARUDA through the grid (GridWay) or at cluster level (PBS) by any user.

Figure 4: GARUDA Accounting Architecture

Although the exchange of accounting records is not required between peer infrastructure providers, it is required that resource usage is accounted for in each peering infrastructure, which is the case here.

The Information system in GARUDA has been set up in a hierarchical manner, very similar to EGI, to facilitate easy querying of information by different tools and users, and also to allow easy publishing of information from different information providers. It is based on the Globus MDS and uses the Grid Index Information Services (GIIS) at the site level and the Grid Resource Information Services (GRIS) at the resource level. The GARUDA Information service consists of a region-wise aggregation layer, which takes care of indexing the information from the different cluster head nodes in that region. Each head node runs an Index Service (powered by GLUE schema version 1.1), which collects and indexes the information from the information providers. All region-wise Index Services publish their information to a centralized Information Server. At the lower level, the Ganglia information provider is used. This information includes: basic host data (name, ID), memory size, OS name and version, file system data, processor load, and other basic cluster data.

Table 11: GARUDA - EGI information system comparison

Aspect | GARUDA | EGI
Components of Information System | Based on GRIS and GIIS for GT2.x systems; based on MDS4 Web Services for GT4.x systems | Based on GRIS, GIIS and BDII
Information System Hierarchy | GIIS at Site level and GRIS at Resource level | BDII at Site and VO levels, GRIS at Site level and GIIS at Resource level
Information Publishing | Published using the GLUE 1.1 schema for GT2.x systems; published using the XML representation of the GLUE 1.1 schema for GT4.x systems | Published using the GLUE 2.x schema
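On the EGI side, the GLUE information summarised above is published by the BDII over LDAP. As a minimal sketch, assuming a reachable top-BDII (the hostname below is a placeholder) and the ldap3 Python library, the computing elements it publishes can be listed as follows:

# Minimal sketch: query the GLUE 1.x information published by an EGI
# top-BDII over LDAP (anonymous bind, conventional port 2170).
# "topbdii.example.org" is a placeholder hostname.
from ldap3 import Server, Connection, ALL

server = Server("ldap://topbdii.example.org:2170", get_info=ALL)
conn = Connection(server, auto_bind=True)   # BDII allows anonymous reads

# List the computing elements and their status from the GLUE 1.x tree.
conn.search(
    search_base="o=grid",
    search_filter="(objectClass=GlueCE)",
    attributes=["GlueCEUniqueID", "GlueCEStateStatus"],
)
for entry in conn.entries:
    print(entry.GlueCEUniqueID, entry.GlueCEStateStatus)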

For infrastructure and service monitoring, the GARUDA Resource Centres use a mixture of Nagios, Ganglia and an internally developed tool called Paryavekshanam. These monitoring tools are accessible only inside the GARUDA Virtual Private Network (VPN). The key benefits of these tools are the central monitoring of the GARUDA infrastructure and support for notifying the administrators in case of resource failures.


Figure 5: Paryavekshanam architecture

A dynamic report generation facility is provided for the statistical viewing of data from a managerial point of view. Whenever a new site is added to the grid, the site administrators can add it to the monitoring tool through an easy-to-use interface. At the core of the monitoring tool lies the Nagios open-source software. Nagios is configured to monitor all the critical entities, such as computing resources, storage resources, network, services and other system metrics of the GARUDA infrastructure.

When comparing the GARUDA monitoring service with the EGI monitoring service, we find many similarities. Both services rely on Nagios for the scheduling and execution of the monitoring probes and as a result they implement fully compatible interfaces. Both systems support the notification of administrators on service failures, but while in GARUDA the Nagios tool is responsible for dispatching the notifications, in EGI an external tool, the Operations Portal, handles notifications. In both cases, site administrators have to register their resources in order for them to be monitored by the central monitoring tools. In the case of GARUDA, the site administrators register their resources directly in the monitoring tool, while in EGI site administrators have to register their resources in the Grid Operations DataBase (GOCDB) and the monitoring system is automatically configured using that information. Finally, in both cases the monitoring system provides information about the status of the services along with computed percentages for the availability and reliability of each service. In GARUDA, the Nagios tool provides availability and reliability, while in EGI there is an external component which computes the availability and reliability of the systems, taking into account the complex EGI topology and various other business requirements.


Figure 6: EGI Monitoring architecture

In general, the GARUDA and EGI monitoring systems have the same set of capabilities but, due to the different size and complexity of the two infrastructures, the tools also differ in their complexity and componentization.

2.5 Latin America

The ROC Manager is Renato Santana, the ROC Deputy Managers are Andres Holguin and Luciano Diaz, and the Security Officer is Andres Holguin.

In Latin America, 7 countries (ar, br, cl, co, cu, mx, pe) contribute to the main CERN experiments, and 6 Tier-2 sites distribute and synchronise datasets with the rest of the Tier architecture. ROC-LA mainly supports the computing and data infrastructure for the CERN experiments.

The region is supported by the Latin America ROC14, which has been created as a coordination and support point for all sites of the region. Sites that used to be part of the IGALC ROC are going to migrate to ROC-LA, but much of the operation remains in the HEP field. Recently, thanks to the participation of the Astroparticle community, represented by three important observatories (the Pierre Auger Cosmic Ray Observatory15, the Latin American Giant Observatory16 and the High Altitude Water Cherenkov Gamma Ray Observatory17), new resources are being added to the ROC-LA.

14 http://www.roc-la.org/home/

15 https://www.auger.org

16 http://lagoproject.org

The sites listed below are currently running operationally within ROC-LA. All sites run the EMI middleware. New ROC-LA sites from the astroparticle community are planned to become operational by the end of 2015.

Table 12: ROC LA Resource Centres

# Site CPUs Storage
1 CBPF 104 8 TB
2 EELA-UTFSM 428 100 TB
3 ICN-UNAM 144 4 TB
4 ATLAND 248 3 TB
5 Cinvestav 32 42 TB
6 SAMPA 64 143 TB

ROC LA already interoperates with EGI as an Integrated Resource Infrastructure Provider, and an MoU has been signed with the support of CHAIN-REDS. All the countries supported by ROC LA have set up their own Certification Authorities, which are accredited by the IGTF.

ROC LA uses GGUS as the main Helpdesk service, and a dedicated Support Unit is in place. Furthermore, ROC LA is registered in the GOCDB. For monitoring the infrastructure, ROC LA uses the Nagios-based SAM framework. Monitoring results from the LA region are properly published on the EGI Message Broker Network, and Availability & Reliability statistics are calculated on a monthly basis.

Regarding accounting, ROC LA runs APEL on all sites, and the accounting information is properly published to the EGI Accounting Portal.

2.6 Overview of the MoUs

As discussed in CHAIN-REDS D3.1, the European Grid Infrastructure comprises a number of Infrastructure Providers. The National Grid Initiatives (NGIs) and EIROs are the cornerstone of EGI. The NGIs are the first type of infrastructure provider: they supply the main resource infrastructures, on top of which the European grid services are built.

17 http://www.hawc-observatory.org


The second type of infrastructure provider consists of external providers that are not direct members of the EGI collaboration but have a clear relationship with EGI, defined by a bilateral Memorandum of Understanding. These are called Integrated Resource Infrastructure Providers and usually share the same operations tools, procedures and policies as EGI. Examples of regional infrastructures that can be identified as external Integrated Resource Infrastructure Providers are the grid initiatives in Austria, Canada and Ukraine, which are not direct members of the EGI consortium, and the regional infrastructures brought on board by the CHAIN-REDS project.

The MoU is signed between EGI.eu and a legal entity representing the whole Resource Infrastructure. Typically, the participants of the Resource Infrastructure have an internal agreement, in the form of an internal MoU, that defines the responsibilities of each partner within the Resource Infrastructure and that recognises the representation of the Resource Infrastructure by a single legal entity. The MoUs are deliberately kept lightweight, since the number of organisations behind the two signatories is considerable.

External providers that are not direct members of the EGI collaboration and that have their own operations tools and procedures comprise the third type of Infrastructure Provider. These are essentially Peer Resource Infrastructure Providers, and they cooperate loosely with EGI by implementing a compatible set of policies.

Figure 7: Collaboration of Resource Infrastructure Providers with EGI18

18 http://indico3.twgrid.org/indico/contributionDisplay.py?sessionId=75&contribId=270&confId=370


Integrated Resource Infrastructure Provider MoUs have been signed between EGI and the Africa&Arabia ROC, the China ROC, ROC-LA and AP-ROC, while a Peer Resource Infrastructure Provider MoU has been signed with GARUDA (India).

MoUs set the ground for long-term collaboration between EGI and the regions. This interoperation aims to fulfil the following requirements (as stated in the MoUs), thus providing a full spectrum of intercontinental service support:

1. To provide Local and Global operational services as needed to support the international collaborations in this context;

2. To subscribe to a mandatory set of policies, procedures and OLAs;

3. To comply with the operations interfaces required by the EGI Operations Architecture, which are needed to ensure seamless and interoperable access to resources;

4. To be able to participate as an observer in the Operations Management Board and contribute to the EGI operations agenda;

5. To participate and be represented in the Security Policy Group to comment on the development of the security policy fabric of the infrastructure.

Joint action plans include specific actions related to the CHAIN-REDS programme of work (such as setting up the operational collaboration, which has been completed), but also a set of continuous actions for daily interoperation, unbounded by any time limit and typically including participation in operational boards and use of resources and EGI operational services. For further region-specific details of the MoUs, please refer to the text in the Annex to this deliverable. The duration of the MoUs is indefinite, as stated in Article 9: "This MoU will start when signed by the authorised representatives of the Parties and shall remain until completion of the activities identified in Article 4 (Joint Work Plan), or upon termination of the projects in which the Parties participate, or upon three (3) months prior written notice by one Party to the other. In the event of termination, the parties shall endeavour to reach agreement on terms and conditions to minimise negative impacts on the other Party. In the event of the continuation of the present cooperation, the Agreement may be extended and/or amended by mutual agreement in writing." As discussed above, the MoUs contain a set of continuous actions; they are not tied to the CHAIN-REDS project but are established between EGI and the bodies responsible for the Regional Operations Centres. In the following text, an assessment by EGI itself of the resources that EGI will provide beyond CHAIN-REDS to support the MoUs is given.

Through the established MoUs, EGI.eu commits to provide the human and technical services necessary for the involved e-Infrastructures to work as an international federation. On the other side, the e-Infrastructures commit to comply with jointly defined policies and to provide an integration layer, in terms of middleware and human coordination, that allows them to work in the federation, as discussed in the MoUs. The human coordination functions of EGI in the areas of operations, outreach and user support, technical development and policy are offered to the partners. The EGI.eu federation services offered, valued at approximately 5 MEuro/year, are sustained by EGI Council fees, in-kind contributions of the EGI Council members and project funding, and they allow the sharing and efficient use of locally operated ICT infrastructures by international research collaborations. Through its projects, EGI technically advances these services and evolves its service portfolio; the partners engaged through the MoUs directly benefit from this, as innovation is made available to them through the federation services. Access to European and international ICT capabilities federated via EGI is regulated through access policies on a per-user-community basis. EGI engages with international user communities by establishing Service Level Agreements that guarantee resource pledges, quality of service and human support to them.


3 Cloud computing and heterogeneous DCI access

3.1 CHAIN-REDS cloud test-bed

Since the very beginning, the CHAIN-REDS project has been committed to fostering the adoption of standards that can help achieve interoperability across various kinds of e-Infrastructures. In particular, to demonstrate cloud interoperability, the most important standards identified and proposed by the project are:

• CDMI (Cloud Data Management Interface): a standard defined by SNIA that describes the functional interface to create, retrieve, update and delete data elements from the cloud.

• OCCI (Open Cloud Computing Interface): a standard defined by the Open Grid Forum that comprises a set of APIs for all kinds of management tasks, including deployment and automatic scaling (see the sketch after this list).
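
As a concrete illustration of the kind of management task OCCI standardises, the following minimal sketch creates a compute resource through the OCCI 1.1 HTTP text rendering. The endpoint URL, authentication token and attribute values are illustrative assumptions, not details of the CHAIN-REDS testbed:

```python
# Minimal sketch: creating a compute resource via the OCCI 1.1 HTTP text
# rendering. The endpoint URL, authentication token and attribute values
# are illustrative assumptions.
import requests

ENDPOINT = "https://cloud.example.org:8787/compute/"  # hypothetical OCCI endpoint

headers = {
    "Content-Type": "text/occi",
    "X-Auth-Token": "REPLACE_ME",  # hypothetical token-based authentication
    # The Category header identifies the OCCI kind being instantiated.
    "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
    # Attributes of the new resource travel in X-OCCI-Attribute headers.
    "X-OCCI-Attribute": 'occi.core.title="test-vm", occi.compute.cores=2, occi.compute.memory=4.0',
}

response = requests.post(ENDPOINT, headers=headers, timeout=30)
response.raise_for_status()
# On success the server returns the location of the new compute resource.
print("Created:", response.headers.get("Location", response.text))
```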

In this regard, the CHAIN-REDS Cloud Testbed has been created in the framework of the project to demonstrate standard-based interoperability and interoperation. In this implementation, the Cloud Testbed has been organised as a "virtual cloud" composed of resources belonging to different worldwide cloud providers. The driving idea that inspired the creation of this Cloud Testbed is a natural extension to clouds of the "Grid-born" concept of Virtual Organisations (VOs) being authorised on Grid resources, with VO managers managing the users belonging to the VO through the VOMS service. Managers of cloud sites pledge part of their resources to a project/initiative/organisation, and the ensemble of these resources is organised in a "personal virtual cloud" that the cloud tenant of the project/initiative/organisation can manage.

As shown in Figure 8, the CHAIN-REDS Cloud Testbed is currently composed of resources belonging to 12 different sites from 8 countries, one of which is owned by an SME located in Egypt. The number of cloud sites registered in the Cloud Testbed has recently been extended with the Chinese cloud, based on the OpenStack cloud stack, and with a cloud site in Algeria.


Figure 8: CHAIN-REDS cloud testbed

As indicated in Figure 8, 5 sites also belong to the EGI Federated Cloud, and 3 different, well-known cloud stacks have been used, namely Synnefo, OpenStack and OpenNebula.

From the technical point of view, the CHAIN-REDS cloud testbed is managed by a specific service integrated in the CHAIN-REDS Science Gateway, called MyCloud, which uses the CLoud-Enabled Virtual EnviRonment (CLEVER) to orchestrate the cloud services through their OCCI-compliant and rOCCI-enabled interfaces. A view of the CHAIN-REDS Cloud testbed is shown in Figure 9.


Figure 9: CHAIN-REDS’ MyCloud

The graphic user interface is very intuitive and includes point & click and drag & drop functionalities. The current implementation allows:

• Federated authentication (inherited from the CSGF);
• Fine-grained authorisation (inherited from the CSGF);
• Single/multi-deployment of VMs on a cloud and across clouds;
• Single/multi-move of VMs across clouds;
• Single/multi-deletion of VMs on a cloud and across clouds;
• SSH connection to VMs;
• Direct web access to VMs hosting web services.

The VMs are all made to belong to the same domain thanks to a function that allows MyCloud to update the dynamic DNS of the chain-project.eu domain whenever a VM is instantiated or killed.
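
The deliverable does not detail the DNS mechanism itself; dynamic updates of this kind are commonly performed with RFC 2136 UPDATE messages. A minimal sketch using the dnspython library, in which the TSIG key, DNS server address, host label and IP address are all hypothetical:

```python
# Sketch of an RFC 2136 dynamic DNS update, as commonly used to register a
# freshly instantiated VM under a shared domain. The key name/secret, server
# address, host label and IP address are illustrative assumptions.
import dns.query
import dns.rcode
import dns.tsigkeyring
import dns.update

keyring = dns.tsigkeyring.from_text({
    "mycloud-key.": "bXlzZWNyZXRrZXk="  # hypothetical base64 TSIG secret
})

# Add (or refresh) an A record for the new VM inside chain-project.eu.
update = dns.update.Update("chain-project.eu", keyring=keyring,
                           keyname="mycloud-key.")
update.replace("vm-0042", 300, "A", "192.0.2.17")  # label, TTL, type, address

response = dns.query.tcp(update, "203.0.113.53", timeout=10)  # hypothetical server
print("Update rcode:", dns.rcode.to_text(response.rcode()))
```

Deleting the record when a VM is killed would use update.delete("vm-0042") in the same way.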

3.2 CHAIN-REDS Science Gateway for single-stop shopping access

As discussed in this deliverable, there are different types of Distributed Computing Infrastructures (DCIs) and, for researchers in need of huge computing resources, selecting and accessing Grid, cloud or High-Performance Computing (HPC) resources can be complex and time-consuming. CHAIN-REDS aims to provide a Science Gateway-based solution offering a single-stop-shopping infrastructure for scientific users.

In the recent past, interesting developments have been independently carried out by the Grid community with Science Gateways and by the National Research and Education Networks with Identity Federations: on one side to ease the access and use of Grid infrastructures, and on the other side to increase the number of users authorised to access network-based services. Science Gateways are basically portals that interface different types of DCIs to offer a single access point to researchers, making it easier for scientists to access remote computing facilities. Over recent years, Science Gateways have proven to be fertile ground for e-Infrastructure research while at the same time substantially increasing the usage and accessibility of Distributed Computing Infrastructures all around the world for scientists and educators.

In this regard, the CHAIN-REDS Science Gateway has been built in the context of the EU co-funded CHAIN and CHAIN-REDS projects to demonstrate how the Science Gateway paradigm and the adoption of standards can make e-Infrastructures worldwide, based on different middleware and architectures (Grid, HPC, Cloud or simply local clusters), interoperable with each other at the user application level.

The CHAIN-REDS Science Gateway is fully based on different open-source standards and frameworks, such as the Catania Science Gateway Framework (CSGF), SAML, SAGA and OCCI (to name a few). As already stated in D5.2, the combined use of the OCCI and SAGA standards has made it possible to demonstrate interoperability among local clusters, Grid and cloud infrastructures.

The reference model of the CHAIN-REDS Science Gateway is shown in Figure 10. Users belonging to different organisations, and having different roles and privileges within the community the Science Gateway is developed for, can access different applications and run them in a seamless way on e-Infrastructures where different middleware might be deployed.


Figure 10: The reference model for the CHAIN-REDS Science Gateway

The Science Gateway is built within the Liferay web portal framework and portlet container and is fully compliant with the JSR 286 standard. The core of the CHAIN-REDS Science Gateway is represented by the Grid & Cloud Engine, whose architecture is shown in Figure 11. For additional details, we refer the reader to deliverable D3.2.


Figure 11: View of DCI Engine with the list of JSAGA adaptors

The Engine exposes services to transparently execute jobs on different DCIs and to seamlessly manage virtual machines across cloud infrastructures compliant with the OCCI standard.

During the project, the CHAIN-REDS Science Gateway has proven to be an invaluable platform to: (i) seamlessly access various e-Infrastructures in a transparent way for the end-users, including Grid, Cloud and HPC resources; and (ii) support small groups or even single researchers that do not belong to large VRCs. For these reasons, the CHAIN-REDS project has been committed, since the beginning, to fostering the adoption of the Science Gateway paradigm in all the regions targeted by the project. A status report on the deployment of SGs in all the regions has already been given in D5.2.

Regarding the process by which application code can access computing resources (including EGI resources) via the Science Gateway, all the documentation is available on the project website through the central page. In more detail, the training material is available at http://www.chain-project.eu/wiki-page (for users) and http://www.chain-project.eu/admin-training-materials (for system administrators). Additionally, SG guidelines are available at http://goo.gl/cAC3dD and a how-to at http://www.catania-science-gateways.it/documents. The wiki is the best format for such a document: any printed version would become obsolete the same day it is published.

The documentation also includes links to downloadable libraries and examples: all the available materials link to the software repository http://sourceforge.net/projects/ctsciencegtwys/, which contains the code of all the portlets developed so far.

The Catania Science Gateway Framework will be further developed in the context of the INDIGO-DataCloud and Sci-GaIA projects. EGI is agnostic with respect to any particular framework, but INFN has funds in EGI-Engage to customise the CSGF for the Long Tail of Science Portal.

A comparison with other available Science Gateway frameworks (e.g., WS-PGRADE, the Vine Toolkit, XSEDE gateways) has been carried out by the EGI Science Gateway Task Force; their report is cited in the previous CHAIN-REDS deliverable D5.2.


4 Operational support for the use-cases

Key effort has been invested in WP3 in order to support the 5 selected use-cases on the intercontinental infrastructures involved in CHAIN-REDS. A snapshot of the locations of the DCI sites for all use-cases is shown in the figure below.

Figure 12: Location of sites running jobs from CHAIN-REDS use cases

4.1 ABINIT

In this section, we focus on the work carried out by the project to support ab-initio calculations in the fields of quantum chemistry and the physics of materials. Currently, there is strong interest in using the ABINIT code in Algeria and in other parts of the Arab region, but the computing resources available for computation are very limited. The CHAIN-REDS project supported this VRC by providing the technical know-how and expertise to set up an intercontinental virtual environment for the ABINIT community.

In order to better support the deployment of the ABINIT code on DCIs, a team of application experts from the Arab region attended the CHAIN-REDS Science Gateway porting school held in Catania in June 2014. This event was a good opportunity for application experts to work closely with Grid and Cloud experts, with the aim of making this code available at large scale.

A lot of work was done during this event. First and foremost, the ABINIT package (v7.6.4) with OpenMPI (v1.8.1) support, along with some additional libraries (ATLAS, PATCH, FFTW) and the GCC compiler (v4.4.7), was successfully compiled, installed and tested on different DCIs. Thanks to this work, researchers can now count on a bigger pool of resources across 3 continents and can run both the sequential and the MPI-based versions of ABINIT on different computing infrastructures, as shown in Figure 13 below.

Figure 13: Status of the ABINIT installation in LA, Arab and EU infrastructures

To set up a virtual environment for the ABINIT application, the following CHAIN-REDS promoted services have been used:

• Several services from ROC-LA, the Arab & Africa Grid and European ROCs;

• Authentication and Authorization services via Identity Federations to access the Science Gateway;

• The Catania Science Gateway Framework (CSGF), the framework that changes the way e-Infrastructures are used and exploited;

• The eTokenServer, a standard-based solution developed by INFN for the central management of robot certificates and the provisioning of proxies, giving seamless and secure access to computing e-Infrastructures based on local, Grid and Cloud middleware supporting X.509 standard authorization;

• The gLibrary Metadata Server and its REST APIs, an INFN solution offering a robust, secure and easy-to-use system to handle digital assets stored on Grid files.

Another important topic for the CHAIN-REDS project in completing this work was the possibility to provide transparent access to DCIs. In this regard, the CHAIN-REDS project adopted the JSR 286 standard, also known as "portlet 2.0", and relied on the Catania Science Gateway Framework, which makes it easier to access and exploit different DCIs. Recently, this support has been further enhanced with the adoption of the gLibrary Data Management APIs to register big output files on the EMI-3 DPM Storage Element with a WebDAV interface. Lastly, a dedicated portlet has also been implemented to access the gLibrary Metadata Server and download ABINIT results. Today the application is integrated to take advantage of DCI technologies and supports the day-by-day work of researchers in the Arab/Middle East region. The application is available both in the CHAIN-REDS and in the ARN Science Gateways.

Figure 14 shows the Wall Clock Time (WCT) used by the VRC to run ABINIT simulations on the DCIs.

Figure 14: Wall Clock Time used in hours

4.2 TreeThreader

When the TreeThreader use case was selected by CHAIN-REDS, the code was running on a volunteer computing infrastructure deployed by CAS@home19. In order to enhance the impact of the application, several modifications were implemented so that the code could run on nodes operating under the umbrella of the CHAIN-REDS cloud test-bed.

This capability was implemented by customising the BOINC client. Configuration files were specifically created for the BOINC client, so that the client knows which BOINC server to connect to, which user account to run on behalf of, and which application to fetch jobs for. Finally, a Scientific Linux image (SLC6.5-based, qcow2-formatted) with a minimal set of software and libraries was also customised to ease the installation of the BOINC client. With the customised image, any site running a cloud computing infrastructure can launch virtual machines; once a virtual machine is up and running, the BOINC client starts at machine start-up and fetches TreeThreader jobs from the CAS@home server.

19 http://casathome.ihep.ac.cn/
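
The configuration files themselves are not reproduced in this report. As a hedged illustration, a headless BOINC client is typically pre-attached to a project through an account file that the client reads at start-up; in the sketch below, the data directory and the account key are assumptions, while the project URL is the CAS@home one:

```python
# Sketch: pre-attaching a BOINC client to the CAS@home project by writing
# an account file into the client's data directory before start-up.
# The data directory and authenticator (account key) are assumptions.
import pathlib
import subprocess

BOINC_DIR = pathlib.Path("/var/lib/boinc-client")  # typical Linux data dir
PROJECT_URL = "http://casathome.ihep.ac.cn/"
AUTHENTICATOR = "REPLACE_WITH_ACCOUNT_KEY"         # hypothetical key

# BOINC derives the account file name from the project's master URL.
account_file = BOINC_DIR / "account_casathome.ihep.ac.cn.xml"
account_file.write_text(
    "<account>\n"
    f"    <master_url>{PROJECT_URL}</master_url>\n"
    f"    <authenticator>{AUTHENTICATOR}</authenticator>\n"
    "</account>\n"
)

# Start the client; it reads the account file and begins fetching jobs.
subprocess.run(["systemctl", "start", "boinc-client"], check=True)
```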

To track this cloud platform, monitoring webpages20,21 were implemented as well, and an installation guide was drafted. As a consequence, a Chinese cloud node was installed to accept TreeThreader jobs for testing purposes; later on, it went into production.

As shown in Figure 15, TreeThreader currently runs on two kinds of infrastructure: a volunteer computing platform and the cloud test-bed. The volunteer computing platform mainly consists of personal desktops, laptops and even smartphones, while the cloud test-bed consists of 27 cloud nodes provided by different CHAIN-REDS sites, 20 in Chinese sites and 7 in European ones.

Figure 15: DCIs used for TreeThreader

Currently, CHAIN-REDS is supporting TreeThreader from the e-Infrastructure point of view, with sites configured in China, Italy, Greece and the Czech Republic. The Chinese site counts on 20 cloud nodes managed with OpenStack, which are also integrated in the CHAIN-REDS cloud testbed. Regarding the European sites, TreeThreader is deployed on HG-09-Okeanos, which runs Synnefo (GRNET's own cloud management framework, compatible with OpenStack), on CESNET-Metacloud (OpenNebula) and on PRISMA-INFN-BARI (OpenStack), which are all part of the CHAIN-REDS cloud test-bed.

20 http://casathome.ihep.ac.cn/casstats/monitor_cloud.html

21 http://casathome.ihep.ac.cn/casstats/monitor.html

CHAIN-REDS is strongly supporting TreeThreader and is currently one of the top users22, with an average of 389 recent BOINC credits. As of April 2015, the CHAIN-REDS cloud infrastructure has provided the TreeThreader scientists with 5,536 CPU hours and 3,062 successfully executed jobs (see Figure 16).

Figure 16: TreeThreader CPU hrs & executed jobs on CHAIN-REDS cloud

The main CHAIN-REDS service used is cloud orchestration. In this regard, it is important to note that TreeThreader is now able to run on a cloud infrastructure that is part of the CHAIN-REDS cloud test-bed and, potentially, could easily be integrated in the EGI Federated Cloud test-bed if desired. In addition, the cloud nodes being used profit from the China-ROC portfolio of services and are properly monitored.

22 http://casathome.ihep.ac.cn/top_users.php

The TreeThreader community of users is provided with its own front-end for the submission of jobs, so no actions for integration with the project Science Gateway have been carried out. The easiest way to integrate the CHAIN-REDS cloud testbed resources was to run a customised BOINC client on the cloud nodes (as previously described): the BOINC client connects to the CAS@home server to fetch the jobs submitted through the web portal.

4.3 GROMACS

This section describes the activities carried out by the project to support the work of researchers interested in observing the molecular activity of various bio-molecules using GROMACS, a software package for molecular dynamics simulations. These kinds of studies place huge computational demands. The CHAIN-REDS project supported this VRC by providing the technical know-how and expertise to facilitate the deployment of GROMACS on an inter-continental scale and to allow the Indian VRC to collaborate with the global community. To support this deployment, the GROMACS (v4.6.5) threaded version was successfully installed on DCIs, and a JSR 286 portlet was developed and installed in the CHAIN-REDS Science Gateway using the Catania Science Gateway Framework (CSGF). The portlet has been configured to retrieve the .trr files produced during the simulations and to set the MaxWallClockTime per simulation. Each GROMACS simulation usually produces about 2 GB of compressed data, which are registered on an EMI-3 DPM Storage Element with a WebDAV interface using the gLibrary DM APIs.
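
The registration step itself is not shown in this report; as a rough illustration, storing an output file on a DPM storage element through its WebDAV interface amounts to an HTTP PUT authenticated with a (robot) proxy certificate. In the sketch below, the storage endpoint, file paths and proxy location are hypothetical:

```python
# Sketch: uploading a compressed GROMACS trajectory to a DPM storage
# element through its WebDAV interface. Endpoint, paths and proxy file
# are illustrative assumptions.
import requests

DAV_URL = ("https://se01.example.org/dpm/example.org/home/"
           "vo.chain-project.eu/gromacs/run42.trr.gz")   # hypothetical URL
PROXY = "/tmp/x509up_u1000"  # hypothetical VOMS proxy (cert + key in one file)

with open("run42.trr.gz", "rb") as payload:
    resp = requests.put(
        DAV_URL,
        data=payload,
        cert=PROXY,                                  # client authentication
        verify="/etc/grid-security/certificates",    # CA directory
        timeout=300,
    )
resp.raise_for_status()
print("Stored:", DAV_URL)
```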

Here follows the complete list of CHAIN-REDS services that have been used to support the deployment of the GROMACS application:

• Several services from ROC-LA, the Arab & Africa ROC, and a few European ROCs;

• Authentication and Authorization services via Identity Federations to access the Science Gateway;

• The Catania Science Gateway Framework (CSGF), the framework that changes the way e-Infrastructures are used and exploited;

• The eTokenServer, a standard-based solution developed by INFN for the central management of robot certificates and the provisioning of proxies, giving seamless and secure access to computing e-Infrastructures based on local, Grid and Cloud middleware supporting X.509 standard authorization;

• The gLibrary Metadata Server and its REST APIs, an INFN solution to offer a robust, secure and easy-to-use system to handle digital assets stored on Grid files.


Thanks to the technical support of the CHAIN-REDS project, the status of the inter-continental infrastructure configured for the GROMACS application is as shown in Figure 17 below.

Figure 17: Status of the inter-continental infrastructure for GROMACS

Figure 18 shows the Wall Clock Time (WCT) used by the VRC to run GROMACS simulations on the DCIs.


Figure 18: Wall Clock Time used in hours

4.4 LAGO

The LAGO association23 was established before the start of CHAIN-REDS as an extended astroparticle observatory at global scale that makes use of computation. Nevertheless, its access to resources was scarce while, on the contrary, its potential to deploy several services was huge.

CHAIN-REDS supported LAGO in several ways. It first provided the association with a Grid-enabled version of the Corsika code that could be used beyond local clusters. It also ported to the Grid the three LAGO-adapted versions of Corsika, which correspond to three different energy ranges of the measured cosmic rays. To enhance the use of these versions, the corresponding portlet to run them via the CHAIN-REDS Science Gateway was implemented; this portlet is now being improved with better storage and job execution/submission capabilities (parameter-sweep submission).

CHAIN-REDS has also supported LAGO in the classification of its datasets (already stored and measured daily) and is providing the association with a structure to properly assign Persistent Identifiers (PIDs) to these datasets in a way that is useful to the LAGO researchers.

Overall, LAGO covers the whole cycle proposed by the CHAIN-REDS DART challenge (see D4.3 and D4.424 for further information about DART).

23 http://lagoproject.org/

24 http://www.chain-project.eu/deliverables


The process described above is covered by a Grid e-Infrastructure. Implementation tests have been made on the EGI infrastructure using a general-purpose VO; currently, a fully dedicated VO called lagoproject has been configured and is in the process of being accredited as part of EGI as well. Once this is done, its acceptance at several sites will be promoted. The CIEMAT-TIC sites have already configured their queue lists to run jobs submitted to lagoproject.

In the meantime, RedCLARA has supported LAGO with the installation of lagoproject at ROC-LA associated sites too. This has been accepted, and LAGO-Corsika jobs will run on Latin American sites in the near future, thereby also becoming integrated in the federated e-Infrastructure promoted by CHAIN-REDS.

In addition, CHAIN-REDS is supporting LAGO with the Handle service provided by GRNET25 to assign PIDs to the association's datasets.
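
For illustration, minting a PID through a Handle-based service of this kind usually amounts to a single authenticated REST call. The sketch below follows the general shape of the EPIC REST API; the exact endpoint path, Handle prefix, credentials and dataset URL are assumptions, not details of the GRNET deployment:

```python
# Sketch: registering a PID for a dataset via a Handle/EPIC-style REST API.
# The endpoint path, prefix, suffix, credentials and target URL are all
# illustrative assumptions; only the service host comes from the text.
import json
import uuid

import requests

SERVICE = "https://epic.grnet.gr/api/v2/handles"  # assumed API base path
PREFIX = "11239"                                  # hypothetical Handle prefix
suffix = str(uuid.uuid4())                        # unique suffix for the new PID

# A Handle record is a list of typed values; here just a URL pointing
# at the dataset's landing page (hypothetical).
record = [{"type": "URL", "parsed_data": "http://lagoproject.org/data/run-2015-04"}]

resp = requests.put(
    f"{SERVICE}/{PREFIX}/{suffix}",
    auth=("lago-user", "REPLACE_ME"),             # hypothetical credentials
    headers={"Content-Type": "application/json"},
    data=json.dumps(record),
    timeout=30,
)
resp.raise_for_status()
print(f"Registered PID {PREFIX}/{suffix}")
```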

In summary, LAGO makes use of most of the CHAIN-REDS promoted services: ROC services via ROC-LA and other European ROCs, their associated Grid sites to execute jobs and provide storage capacity, PID services via EPIC-GRNET, the project Science Gateway, where LAGO-Corsika jobs can be submitted and monitored, and Authentication and Authorization services via Identity Federations to access the Science Gateway. Moreover, Cloud services will be accessed in the near future, once FedCloud is fully operational, by migrating the current Grid sites to cloud-based ones.

Regarding monitoring graphs showing the usage of the infrastructure by this specific application, the reader can find below some statistics related to the jobs managed by the CHAIN-REDS Science Gateway until May 2015.

25 http://epic.grnet.gr/


Figure 19: Wall Clock Time used in hours

One LAGO observatory requires 60-65 jobs, corresponding to different energy and atomic parameters, to simulate one cosmic-ray cluster. Such a simulation typically lasts 24 hours on a local cluster; Figure 20 depicts the end times obtained using the Grid. The data shown correspond to the jobs needed to simulate one detected cosmic-ray cluster.

Figure 20: LAGO-Corsika end time in hh:mm:ss

4.5 APHRC

APHRC is an entirely data-centric use case and, as such, is described in detail in the final deliverable of WP4.



5 Conclusion

The CHAIN-REDS project has supported the full functional establishment of the Regional Operations Centres (ROCs) in the target world regions and has facilitated long-term agreements between the European Grid Infrastructure and the ROCs in the target regions via MoUs that last beyond the project lifetime and are explicitly supported by EGI.

Next, the project has put in place the Cloud Testbed to demonstrate standard-based interoperability and interoperation (based on the OCCI and CDMI standards and fully compatible with the pan-European cloud federation initiatives of EGI). The CHAIN-REDS Cloud Testbed is currently composed of resources belonging to 12 different sites from 8 countries, including China, South Africa and Algeria.

Regarding single-stop-shopping access to heterogeneous e-Infrastructures, the CHAIN-REDS Science Gateway shows how standard adoption can make e-Infrastructures worldwide, based on different middleware and architectures (Grid, High-Performance Computing, Cloud or simply local clusters), interoperable with each other at the user application level. The CHAIN-REDS Science Gateway has proven to be an invaluable platform to: (i) seamlessly access various e-Infrastructures in a transparent way for the end-users; and (ii) support small groups or even single researchers that do not belong to large VRCs. The guidelines are available on the CHAIN-REDS website.

Finally, key effort has been invested by the project to support the 5 selected use-cases on the intercontinental infrastructures involved in CHAIN-REDS. This deliverable has presented details of the operational support and of the e-Infrastructure monitoring and usage by the applications.


6 Annex I: MoUs

6.1 MoU EGI – Africa-Arabia ROC



6.2 MoU EGI – China ROC

Memorandum of Understanding between EGI.eu and IHEP

Resource Infrastructure Provider MoU

26/03/2014


Table of Contents

Background
Article 1: Purpose
Article 2: Definitions
Article 3: Infrastructure Composition
Article 4: Joint Work Plan
Article 5: Communication
Article 6: Participation in EGI.eu Groups
Article 7: Intellectual Property Rights, Jointly Owned Results and License
Article 8: Funding
Article 9: Starting Date, Duration and Termination
Article 10: Amendments
Article 11: Annexes
Article 12: Language
Annex 1 – EGI.eu Description
Annex 2 – Description of Institute of High Energy Physics
Annex 3 – Detailed Contact List


BACKGROUND

European Grid Initiative Foundation – EGI.eu

The Stichting European Grid Initiative Foundation (hereafter referred to as "EGI.eu") has been created under Dutch law with the mission to create and maintain a pan-European Grid Infrastructure in collaboration with its Participants, i.e. the National Grid Initiatives (NGIs) and Associated Participants (e.g. European International Research Organisations - EIROs), in order to guarantee the long-term availability of a generic e-infrastructure for all European research communities and their international collaborators. In its role of coordinating grid activities between European NGIs, EGI.eu will: 1) operate a secure integrated production grid infrastructure that seamlessly federates resources from providers around Europe; 2) coordinate the support of the research communities using the European infrastructure coordinated by EGI.eu; 3) work with software providers within Europe and worldwide to provide high-quality innovative software solutions that deliver the capability required by our user communities; 4) ensure the development of EGI.eu through the coordination of and participation in collaborative research projects that bring innovation to European Distributed Computing Infrastructures (DCIs). A summary of EGI.eu is attached as Annex 1.

Institute of High Energy Physics - Chinese Academy of Sciences – IHEP

IHEP, founded in 1973, is the leading high energy physics laboratory in China, being involved in high energy, cosmic ray and accelerator physics and technologies, as well as radiation technologies and applications. It is staffed with over 1100 physicists and engineers. IHEP participates in the ARGO-YBJ Cosmic Ray Experiment at Yangbajing, Tibet, an Italian-Chinese collaboration dealing with the study of Extensive Air Showers. IHEP is a member of the LHC ATLAS and CMS experiments, contributing to detector research, and is involved in the LHC physics analyses as a Tier-2 Centre. IHEP is a member of the BES experiment focusing on tau/charm physics at BEPC (Beijing Electron-Positron Collider) at IHEP, with 30 institutes and universities involved in the project. IHEP is one of the pioneers in China in computing and networking, having set up the first Internet link connected to the international network in the 1980s. The Institute built and operates high-performance computing infrastructures for HEP and cooperates with the IT divisions of laboratories like CERN, KEK and SLAC. In 2003 IHEP joined the LCG Computing Grid for the LHC experiments and its WLCG site merged into the global LCG system. IHEP is also building a grid-based computing system for ARGO-YBJ. IHEP helped Peking University and Shandong University to build their LCG systems and provides training and support services. IHEP built a certification authority (CA), accredited by EUGridPMA. In the framework of CHAIN-REDS, IHEP is in charge of establishing and operating CHINA-ROC, the regional operation centre of the Grid infrastructure.


ARTICLE 1: PURPOSE The purpose of this Memorandum of Understanding1 (MoU) is to define a non-binding framework of collaboration between EGI.eu and the Institute of High Energy Physics, IHEP (hereafter also referred to as “the Party” or the “Parties”). The Parties recognise, by this MoU, the opening of a wider and longer-term cooperation in activities that will bring visible benefits.

ARTICLE 2: DEFINITIONS For the purpose of this MoU, the definitions in the EGI glossary are relevant (http://go.egi.eu/glossary).

ARTICLE 3: INFRASTRUCTURE COMPOSITION The Institute of High Energy Physics acts as the China-ROC coordinator and represents the Grid infrastructure at the Computing Centre of IHEP. IHEP, as the China-ROC operator, will support other resource centres in the country that want to join the China-ROC national infrastructure and interoperate with EGI, as well as support the research communities related to IHEP and its collaborating centres.

• Institute of High Energy Physics, Beijing, China (BEIJING-LCG2)

ARTICLE 4: JOINT WORK PLAN The parties contribute to enable the vision of providing European and Chinese scientists with an international collaboration for sustainable distributed computing services to support their work. In this broad context, the specific goals of the collaboration are:

1. To provide Local and Global operational services as needed to support the international collaborations in this context;

2. To subscribe to a mandatory set of policies, procedures and OLAs;

3. To comply with the operations interfaces required by the EGI Operations Architecture2, which are needed to ensure seamless and interoperable access to resources;

4. To be able to participate as an observer in the Operations Management Board and contribute to the EGI operations agenda;

5. To participate and be represented in the Security Policy Group to comment on the development of the security policy fabric of the infrastructure.

As coordinating party to the MoU, IHEP reserves the right to delegate the work described below to the Regional Operations Centre. The specific activities to be carried out in the framework of the collaboration are3:

1 An MoU is a written agreement that clarifies relationships and responsibilities between two or more parties that share services, clients, and resources.

2 EGI Operations Architecture: Infrastructure Platform and Collaboration Platform Integration, EGI-InSPIRE Deliverable D4.6, March 2012 (https://documents.egi.eu/document/1309)

3 Party leading the activity is underlined.

WP1: Participation in the EGI.eu operation policy groups
Parties involved: EGI.eu (Contact: Senior Operations Manager); IHEP
Description of work: Operations experts from participating institutes to be regularly represented in the Operations Management Board, to provide the requirements necessary to drive the evolution of the operations architecture and, more generally, to provide feedback through attendance at meetings, questionnaires and e-mail. IHEP to participate regularly in the SPG meetings, with the status of voting member, contributing to the development of the security policies that ensure a secure distributed computing infrastructure.
Expected outcome: Participation in OMB work. The China-ROC Operations Manager and the China-ROC Security Manager/Officer are already appointed and contributing to the OMB. Performance of the Operations Centre is assessed on an annual basis by EGI.eu.

WP2: Global services
Parties involved: EGI.eu (Contact: Senior Operations Manager); IHEP
Description of work: Identify a set of EGI.eu Global services that China-ROC is interested in using on the ROC, according to CHAIN-REDS deliverable D3.14, together with the respective guaranteed quality parameters that EGI.eu commits to provide.
Expected outcome:
• A2.1: EGI.eu to define the SLA for the Global services offered to IHEP, to be released and approved (Leader: Peter Solagna).
• IHEP will make use of GOCDB, GGUS, the Accounting Service and the Grid Monitoring Services.

4 http://documents.ct.infn.it/record/556/files/CHAIN-REDS-D3.1-i.pdf

WP3: Local services
Parties involved: EGI.eu (Contact: Senior Operations Manager); China-ROC (IHEP)
Leading partner: IHEP
Description of work: Identify a set of IHEP local services and the respective minimum quality of service that the Party commits to provide to EGI.eu in order to be part of EGI.
• A3.1: IHEP to adopt and employ Operational Policies and Procedures (AP-CHINAROC-7).

WP4: Integration
Parties involved: EGI.eu (M. Krakowian, EGI.eu); IHEP
Leading partner: EGI COD
Description of work: The IHEP infrastructure to be supported, validated and integrated by EGI.eu within EGI according to the established procedure.
Expected outcome: IHEP enters the EGI production infrastructure.
• A4.1 (xx/xxxx): IHEP, as the Resource Infrastructure Provider, carries out integration at the operational level with EGI.eu.
• A4.2: IHEP to set up and operate a Grid Monitoring Service (AP-CHINAROC-5).
• A4.3: IHEP to create a new Operations Centre in the EGI.eu GOCDB and transfer Resource Centres from ROC Canada to the newly established Operations Centre (AP-CHINAROC-6).
• A4.4: IHEP to set up a dedicated Support Unit in GGUS (AP-CHINAROC-8).

WP5: Reporting
Parties involved: EGI.eu (Contact: Senior Operations Manager); IHEP
Description of work: As part of the EGI quality assurance procedures, the performance of the services provided by the China-ROC Resource Centres (site-level grid services) and the performance of the core services provided by China-ROC (operations-centre-level services) are reported on a monthly basis. Reports are produced by EGI.eu and are accessible on the EGI wiki: https://wiki.egi.eu/wiki/Performance
The China-ROC resource centres agree to adhere to the minimum service level targets defined in the Resource Centre OLA5 and the Resource Infrastructure Provider OLA6 (AP-CHINAROC-7). China-ROC installed capacity and utilization are also assessed yearly as part of the annual assessment of EGI. EGI.eu performs this assessment, which is publicly available in the EGI document DB.
Expected outcome:
• A5.1 (every year): Annual report on the performance of China-ROC Local services (Resource Centre services and NGI services) (Leader: Peter Solagna).
• A5.2 (every year): Annual assessment of services and of installed capacity and utilization (Leader: Peter Solagna).

The EGI.eu Strategy and Policy Team (SPT) will coordinate the periodic review of the progress of the activities defined above, follow up the milestones and distribute reports to both Parties. Special meetings between the points of contact designated under Article 5 (Communication) shall be held, as often as necessary, to examine progress in the implementation of this Agreement.

ARTICLE 5: COMMUNICATION The Parties shall keep each other informed of all their respective activities and of their progress, and shall consult regularly on areas offering potential for cooperation. Each Party shall designate a "point of contact" responsible for monitoring the implementation of this MoU and for taking measures to assist in the further development of cooperative activities. Such points of contact shall be the ordinary channel for the Parties' communication of proposals for cooperation. The primary point of contact for each Party is:

EGI.eu: Operations Centre. E-mail: operations (at) egi.eu
IHEP: ROC Manager. E-mail: roc-manager (at) ihep.ac.cn

Questions of principle or problems that cannot be solved at the primary contact level are escalated to the EGI.eu Director, director (at) egi.eu, and the Director of the IHEP Computing Centre, Gang Chen (Gang.Chen (at) ihep.ac.cn). Should there be, on rare occasions, a need to escalate beyond this competency area, the Director of the Institute should be contacted: yfwang (at) ihep.ac.cn

ARTICLE 6: PARTICIPATION IN EGI.EU GROUPS IHEP agrees to name a technical representative (with deputy) for the EGI OMB. IHEP may be asked to nominate representatives to serve on other policy groups as appropriate. Sites included in China-ROC may nominate representatives to serve on or participate in user support or technical work groups, as well as EGI Virtual Teams, as appropriate.

ARTICLE 7: INTELLECTUAL PROPERTY RIGHTS, JOINTLY OWNED RESULTS AND LICENSE

A. INTELLECTUAL PROPERTY RIGHTS AND LICENSE

1. “Intellectual Property Rights” shall mean all intellectual creations including but not limited to

5 Resource Centre Operational Level Agreement: http://documents.egi.eu/document/31
6 Resource Infrastructure Provider Operational Level Agreement: http://documents.egi.eu/document/463


inventions, know-how, layouts, drawings, designs, specifications, computer programs, reports, processes, protocols, calculations and any other matter protected by intellectual property rights, whether registered or not, including patents, registered designs, copyrights, design rights and all similar proprietary rights and applications for protection thereof.

2. Intellectual property rights generated by a Party under this MoU shall be the property of that Party, who shall be free to protect, transfer and use such Intellectual Property Rights as it deems fit.

3. Notwithstanding the foregoing, each Party shall grant the other a non-exclusive, royalty-free, perpetual license to use the Intellectual Property Rights generated by it under this MoU for use within its project or for the exploitation of the results thereof. Such license shall include the right to grant sublicenses to the entities involved in the project.

B. JOINTLY OWNED RESULTS

1. Results that were jointly generated by both Parties will be jointly owned by the Parties (hereinafter referred to as “Jointly Owned Results”), and each of the Parties shall be free to use these Jointly Owned Results as it sees fit, without owing the other Party any compensation or requiring the consent of the other Party. Each Party, therefore, for example and without limitation, has the transferable right to grant non-exclusive, further transferable licenses under such Jointly Owned Results to third parties. Each Party shall be entitled to disclose such Jointly Owned Results without restrictions, unless such Jointly Owned Results contain a Joint Invention, in which case no disclosure must be made prior to the filing of a priority application.

2. With respect to any joint invention resulting from this MoU (i.e. any invention jointly made by employees of both Parties), the features of which cannot be separately applied for as Intellectual Property Rights and which are eligible for statutory protection requiring an application or registration (herein referred to as a “Joint Invention”), the Parties shall agree on which Party will carry out any filing, as well as any further details with regard to prosecuting and maintaining the relevant patent applications.

ARTICLE 8: FUNDING

Each Party shall bear the costs of discharging its respective responsibilities under this MoU, including travel and subsistence of its own personnel and transportation of goods and equipment and associated documentation, unless otherwise agreed in this MoU. Each Party shall make available free of charge to the other Party any office/meeting space needed for the joint activities.

The Parties' obligations hereunder are subject to their respective funding procedures and the availability of appropriated funds. Should either Party encounter budgetary problems in the course of its respective internal procedures that may affect the activities carried out under this MoU, that Party shall notify and consult with the other Party in a timely manner in order to minimise the negative impact of such problems on the cooperation. The Parties shall jointly look for mutually agreeable solutions.

In order to reduce the impact of travel costs, face-to-face meetings should be co-located with other events that participants are likely to attend. Meetings via teleconference should be considered when the nature of the discussion does not strictly require face-to-face presence.

ARTICLE 9: STARTING DATE, DURATION AND TERMINATION

This MoU will start when signed by the authorised representatives of the Parties and shall remain in force until completion of the activities identified in Article 4 (Joint Work Plan), or upon termination of the projects in which the Parties participate, or upon three (3) months prior written notice by one Party to the other. In the event of termination, the Parties shall endeavour to reach agreement on terms and conditions to minimise negative impacts on the other Party. In the event of the continuation of the present cooperation, the Agreement may be extended and/or amended by mutual agreement in writing.

ARTICLE 10: AMENDMENTS

The MoU is subject to updates and modifications that can be triggered by changes in either Party's organizational model. The MoU may be amended by written agreement of the Parties. Amendments shall be valid only if signed by the authorised representatives of the Parties.

ARTICLE 11: ANNEXES

Annexes 1, 2, 3, 4 and 5 attached hereto have the same validity as this MoU and together constitute the entire understanding and the rights and obligations covering the cooperation accepted by the Parties under this MoU. Annexes may be amended following the provisions of Article 10 (Amendments).

ARTICLE 12: LANGUAGE

The language for this MoU, its interpretation and all cooperative activities foreseen for its implementation, is English.


Annex 1 – EGI.eu Description

To support science and innovation, a lasting operational model for e-Infrastructure is needed – both for coordinating the infrastructure and for delivering integrated services that cross national borders. The objective of EGI.eu (a not-for-profit foundation established under Dutch law) is to coordinate and manage the European Grid Infrastructure (EGI) federation on behalf of its members: National Grid Initiatives (NGIs) and European International Research Organisations (EIROs), to help guarantee the long-term availability of a generic e-Infrastructure for all European research communities and their international collaborators. Services provided by EGI.eu to the wider EGI community:

• Oversee the operations of EGI to guarantee the integration of resources from providers around Europe into a seamless and secure e-Infrastructure.
• Coordinate the support provided to EGI’s user communities.
• Work with technology providers to source high-quality and innovative software solutions to answer users’ requirements.
• Represent the EGI federation in the wider Distributed Computing Infrastructures (DCI) community through coordination and participation in collaborative projects.
• Coordinate the external services provided by partners in the community.
• Steer the evolution of EGI’s policy and strategy development.
• Organise EGI’s flagship events and publicise the community’s news and achievements.

EGI.eu supports a federation of high-performance computing (HPC), high-throughput computing (HTC) and cloud resources. EGI.eu is also ideally placed to integrate new Distributed Computing Infrastructures (DCIs) such as clouds, supercomputing networks and desktop grids, to benefit the user communities within the European Research Area. EGI collects user requirements and provides support for the current and emerging user communities. Support is also given to the current heavy users of the infrastructure, such as high energy physics, computational chemistry and life sciences, as they move their critical services and tools from a centralised support model to one driven by their own individual communities. The EGI community is a federation of independent national and community resource providers, whose resources support specific research communities and international collaborators both within Europe and worldwide. EGI.eu, coordinator of EGI, brings together partner institutions established within the community to provide a set of essential human and technical services that enable secure integrated access to distributed resources on behalf of the community. The production infrastructure supports Virtual Research Communities – structured international user communities – that are grouped into specific research domains. VRCs are formally represented within EGI at both a technical and strategic level. Further information (e.g. governance; services) can be found at: www.egi.eu/about/EGI.eu


Annex 2 – Description of the Institute of High Energy Physics

The Institute of High Energy Physics (IHEP) is the largest comprehensive fundamental research centre in China. The major research fields of IHEP are particle physics, accelerator physics and technologies, and radiation technologies and applications, including the following leading research areas:

• Particle physics experiments: BES, neutrino experiments, experiments at LHC and B-factories…
• Theoretical physics: particle physics, medium and high energy nuclear physics, cosmology, field theory…
• Particle astrophysics: cosmic rays, astrophysics experiments…
• Accelerator physics and technology: high luminosity e+e– collider, high power proton accelerator, accelerator applications…
• Synchrotron radiation: technology and application;
• Nuclear analytical techniques and applications;
• Free electron laser;
• Nuclear detectors and fast electronics;
• Computing and network applications;
• Radiation protection.

The main scientific facilities at IHEP are:

• Upgraded Beijing Electron Positron Collider • Beijing Spectrometer (BES) • Beijing Synchrotron Radiation Facility • Daya Bay Reactor Neutrino Experiment • Yangbajing International Cosmic Ray Observatory in Tibet • China Spallation Neutron Source in Dongguan, Guangdong (under construction) • Hard X-Ray Modulation Telescope (under construction)

IHEP has extensive cooperation with all high energy physics laboratories and participates in many important particle physics experiments in the world.

As of April 2013, over 1390 employees work at the Institute of High Energy Physics, among whom 1100 are scientists and engineers, including 7 CAS academicians, 2 CAE academicians, 8 chief scientists of Project 973, 45 winners of the CAS Hundred Talents Scheme and 18 winners of the Outstanding Youth Fund. In addition, there are over 460 post-graduates and over 50 post-doctorates on site.


Annex 3 – Detailed Contact List

Signing Authority
  EGI.eu: Director, Yannick Legré ([email protected])
  IHEP: Institute Director, Yifang Wang ([email protected])

MoU Contact Point
  EGI.eu: Strategy and Policy Manager, Sergio Andreozzi ([email protected])
  IHEP: Computing Center Director, Gang Chen ([email protected])

User Support
  EGI.eu: Senior Operations Manager, Peter Solagna ([email protected])
  IHEP: China-ROC User Support team ([email protected])

Infrastructure Operations
  EGI.eu: Senior Operations Manager, Peter Solagna ([email protected])
  IHEP: China-ROC Operations Manager, Xiaofei Yan ([email protected])

Technical Coordination
  EGI.eu: Technical Manager, Michel Drescher ([email protected])
  IHEP: China-ROC Operations Manager, Jingyan Shi ([email protected])

Dissemination
  EGI.eu: Communications Manager, Neasan O'Neill ([email protected])
  IHEP: Director’s Office, Wei Meng ([email protected])

These contact points may be the same person. The EGI.eu Strategy and Policy Team ([email protected]) is to be notified regarding any changes to the contact list.


6.3 MoU EGI – Garuda (India)


Memorandum of Understanding

Between

EGI.eu and C-DAC

Peer Resource Infrastructure Provider MoU


Table of Contents

Scope and background for this MoU
Article 1: Joint Work Plan
Article 2: Communication
Article 3: Participation in EGI.eu groups
Article 4: Funding
Article 5: Entry into force, duration and termination
Article 6: Amendments
Article 7: Language
Article 8: Confidentiality
Article 9: Force Majeure
Article 10: Intellectual Property Rights
Article 11: Disputes
Article 12: Indemnity
Article 13: Assignment and Transfer
Article 14: Severability
Article 15: Limitation of Liability
Article 17: No Partnership
Article 18: Entire MoU
Article 19: Headings
Annex – Detailed Contact List


SCOPE AND BACKGROUND FOR THIS MOU

This MoU is made and executed on the 28th day of May 2014, by and between the Stichting European Grid Initiative Foundation, registered under Dutch law and having its registered office at Science Park 140 (hereinafter referred to as “EGI.eu”, which reference shall, unless repugnant to the context, include its successors and permitted assignees),

And

the Centre for Development of Advanced Computing, a Scientific Society of the Ministry of Information Technology, Government of India, registered under the Societies’ Registration Act 1860 and the Bombay Public Trust Act 1950, having its registered address at University Campus, Pune 411 007, and having one of its units at C-DAC, Bangalore (hereinafter referred to as “C-DAC”, which expression shall, where the context admits, include its successors or assignees).

BACKGROUND:

Stichting European Grid Initiative Foundation

The Stichting European Grid Initiative Foundation (hereafter referred to as “EGI.eu”) has been created under Dutch law with the mission to create and maintain a pan-European Grid Infrastructure in collaboration with its Participants, i.e. the National Grid Initiatives (NGIs) and Associated participants (e.g. European International Research Organisations - EIROs), in order to guarantee the long-term availability of a generic e-infrastructure for all European research communities and their international collaborators. In its role of coordinating grid activities between European NGIs, EGI.eu will: 1) operate a secure integrated production grid infrastructure that seamlessly federates resources from providers around Europe; 2) coordinate the support of the research communities using the European infrastructure coordinated by EGI.eu; 3) work with software providers within Europe and worldwide to provide high-quality innovative software solutions that deliver the capability required by our user communities; 4) ensure the development of EGI.eu through the coordination and participation in collaborative research projects that bring innovation to European Distributed Computing Infrastructures (DCIs). More information on EGI can be found at www.egi.eu/about/EGI.eu

Centre for Development of Advanced Computing (C-DAC)

C-DAC is a premier research and development organization engaged in cutting-edge technology design, development and deployment of products and solutions in the area of electronics and Information Technology. As a Scientific Society under the Department of Electronics and Information Technology, Ministry of Communications and Information Technology, Government of India, C-DAC is also spearheading the national initiatives in IT to meet the technological needs of the country, including the GARUDA grid project. C-DAC will participate in all project activities, bringing the experience of GARUDA and representing the Indian research community. C-DAC will also disseminate the activities and achievements of the project in India. More information about C-DAC can be found at www.cdac.in.

In order to provide researchers with a secure and consistent experience for those Virtual Organisations (VOs) wishing to utilize resources in both the GARUDA and EGI cyber-infrastructures, the C-DAC and EGI teams will work towards cooperative communication goals and a persistent set of interoperable services. C-DAC is already participating in an FP7-funded project entitled CHAIN-REDS (Co-ordination & Harmonisation of Advanced e-Infrastructures for Research and Education Data Sharing), in which the interoperability and interoperation of the European grid (EGI) and the Indian grid (GARUDA) are being explored.


Both Parties, recognizing the need for and the importance of their collaboration, have decided to enter into this non-binding MoU on the following terms and conditions.

ARTICLE 1: JOINT WORK PLAN

The specific goals of the collaboration are:

1. To enhance the interoperation capacities of both infrastructures, with a focus on Virtual Organisations using, or interested in using, both infrastructures;
2. To provide Local and Global operational services as needed to support the members of such Virtual Organisations;
3. To cooperate and exchange information about common operation activities;
4. To participate in the Security Policy Group to contribute to the development of the security policy fabric of the infrastructure;
5. This MoU being a broad base for operational methodology, some of the operations could be brought under the purview of specifically drawn-up agreements, agreed on a case-by-case basis in writing and signed between the Parties, specifying in detail the timeline for the various agreed activities, the responsibilities of the Parties, finance, IPR ownership, commercial terms, etc.

As coordinating party to the MoU, C-DAC reserves the right to delegate work described below to the Regional Operations Centre.

The specific activities to be carried out in the framework of the collaboration are1:

AP1 Participation in the EGI.eu policy groups

Parties Involved: EGI.eu (Contact: EGI Chief Operations Officer); C-DAC

Description of work: Operations experts from participating institutes to be regularly represented in the Operations Management Board, to provide requirements necessary to drive the evolution of the operations architecture and generally to provide feedback through attendance at meetings, questionnaires and e-mail. C-DAC to regularly participate in the SPG meetings, or to follow the discussion on the mailing list, to help ensure a secure distributed computing infrastructure.

AP2 Global services

Parties Involved: EGI.eu (Contact: EGI Chief Operations Officer); C-DAC

Description of work: Create a dedicated Support Unit in GGUS and implement proper mechanisms to integrate the accounting systems, the publishing of service information and the monitoring framework with EGI, to support a seamless data flow between both entities.

1 The Party leading the activity is underlined.
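In practice, the accounting integration sketched in AP2 means publishing per-job usage records from GARUDA sites into the central EGI accounting repository. The snippet below is a schematic illustration only: the key-value layout and field names are loosely modelled on APEL-style job records, and the site and VO names are hypothetical; the normative message schema and transport are defined by the EGI accounting tooling.

```python
"""Schematic example of an accounting usage record, as exchanged when a
grid infrastructure publishes job-level usage into a central accounting
repository. Field names and values are illustrative, not normative."""

def format_usage_record(site, job_id, user_dn, vo, wall_s, cpu_s, start, end):
    # One record = a block of key/value lines; a publisher would batch
    # many of these into a single message for the central repository.
    fields = {
        "Site": site,
        "LocalJobId": job_id,
        "GlobalUserName": user_dn,
        "VO": vo,
        "WallDuration": wall_s,   # seconds of wall-clock time
        "CpuDuration": cpu_s,     # seconds of CPU time
        "StartTime": start,       # epoch seconds
        "EndTime": end,
    }
    return "\n".join(f"{k}: {v}" for k, v in fields.items())

if __name__ == "__main__":
    print(format_usage_record(
        site="IN-CDAC-GARUDA-01",                 # hypothetical site name
        job_id="4217",
        user_dn="/C=IN/O=GARUDA/CN=Example User",  # hypothetical user DN
        vo="garuda.vo.example",                    # hypothetical VO name
        wall_s=5400, cpu_s=5100,
        start=1400000000, end=1400005400,
    ))
```

A publisher would batch many such records into a single message and send them over the agreed transport; the same pattern of local collection and periodic publication to a central repository applies to the service-information and monitoring flows mentioned above.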

The EGI.eu Strategy and Policy Team (SPT) will coordinate the periodic review of the progress of the activities defined above, follow up the milestones and distribute reports to both Parties. Special meetings between the points of contact designated under Article 2 (Communication) shall be held, as often as necessary, to examine the progress in the implementation of this Agreement.

ARTICLE 2: COMMUNICATION

The Parties shall keep each other informed on all their respective activities and on their progress and shall consult regularly on areas offering potential for cooperation. Each Party shall designate a “point of contact” to be responsible for monitoring the implementation of this MoU and for taking measures to assist in the further development of cooperative activities. Such points of contact shall be the ordinary channel for the Parties' communication of proposals for cooperation.

The primary point of contact for each Party is:
EGI.eu: Operations Centre. E-mail: operations (at) egi.eu
C-DAC: GARUDA operations in-charge. E-mail: [email protected]

Questions of principle or problems that cannot be solved at the primary contact level are escalated to the EGI.eu Managing Director, director (at) egi.eu, and the C-DAC Associate Director, subratac (at) cdac.in. Should there be, on rare occasions, a need to escalate beyond this competency area, the Executive Director of the Institute should be contacted: sarat (at) cdac.in.

ARTICLE 3: PARTICIPATION IN EGI.EU GROUPS

C-DAC may be asked to nominate representatives to serve on other policy groups as appropriate.

ARTICLE 4: FUNDING

Each Party shall bear the costs of discharging its respective responsibilities under this MoU, including travel and subsistence of its own personnel and transportation of goods and equipment and associated documentation, unless otherwise agreed in this MoU. Each Party shall make available free of charge to the other Party any office/meeting space needed for the joint activities.

The Parties' obligations hereunder are subject to their respective funding procedures and the availability of appropriated funds. Should either Party encounter budgetary problems in the course of its respective internal procedures that may affect the activities carried out under this MoU, that Party shall notify and consult with the other Party in a timely manner in order to minimise the negative impact of such problems on the cooperation. The Parties shall jointly look for mutually agreeable solutions.

In order to reduce the impact of travel costs, face-to-face meetings should be co-located with other events that participants are likely to attend. Meetings via teleconference should be considered when the nature of the discussion does not strictly require face-to-face presence.

This MoU does not create any financial or funding obligations upon the Parties except as mentioned in this article of the MoU.

ARTICLE 5: ENTRY INTO FORCE, DURATION AND TERMINATION

This MoU will enter into force when signed by the authorised representatives of the Parties and shall remain in force for a period of four (4) years, unless terminated by the Parties as per the provisions of this MoU. Either Party may terminate this MoU upon three (3) months' prior written notice to the other, sent by air mail. In the event of termination, the Parties shall endeavour to reach agreement on terms and conditions to minimise negative impacts on the other Party. In the event of the continuation of the present cooperation, the Agreement may be extended and/or amended by mutual agreement in writing.

ARTICLE 6: AMENDMENTS

The MoU may be amended by written agreement of the Parties. Amendments shall be valid only if signed by the authorised representatives of the Parties.

ARTICLE 7: LANGUAGE

The language for this MoU, its interpretation and all cooperative activities foreseen for its implementation, is English.

ARTICLE 8: CONFIDENTIALITY

Both Parties shall take all reasonable care to ensure that the intellectual property, privacy and confidentiality of any information (including but not limited to software, designs, datasets, etc.) from the other Party (and other institutions, as applicable) are not compromised.

Each Party will treat as confidential all Confidential Information of the other Party and shall not disclose such Confidential Information to any third party without the prior written consent of the other Party. Without limiting the foregoing, each of the Parties will use at least the same degree of care with respect to the Confidential Information that such Party uses to prevent the disclosure of its own confidential information of like importance. Each Party will promptly notify the other Party of any actual or suspected misuse or unauthorized disclosure of the other Party's Confidential Information.

Exceptions: Notwithstanding the above, neither Party will have liability to the other with regard to any Confidential Information of the other which the receiving Party can demonstrate:

• was in the public domain at the time it was disclosed or has entered the public domain through no fault of the receiving Party;
• was known to the receiving Party, through no breach of any other confidentiality MoU, at the time of disclosure, as evidenced by the receiving Party’s files/documents in existence at the time of disclosure;
• was independently developed by the receiving Party, as evidenced by the receiving Party’s files/documents in existence at the time of disclosure;
• is disclosed by the disclosing Party to any third party without confidentiality obligations similar to those contained in this MoU; or
• is disclosed pursuant to the order or requirement of a court, administrative agency, or other governmental body, provided, however, that the receiving Party will provide prompt notice thereof to the disclosing Party prior to any disclosure, to enable the disclosing Party to seek a protective order or otherwise prevent or restrict such disclosure.

If a receiving Party claims that Confidential Information falls under one of the above exceptions, such receiving Party has the burden of establishing the fact of such exception by clear and convincing evidence.

ARTICLE 9: FORCE MAJEURE

Neither Party to this MoU shall be liable to the other Party for any delay or failure on its part in performing any of its obligations under this MoU resulting from any cause beyond its reasonable control, including but not limited to strikes, riots, civil commotion or other concerted actions of workmen, material shortages, fire, floods, explosions, acts of God, acts of state, war, enemy action or terrorist action.

ARTICLE 10: INTELLECTUAL PROPERTY RIGHTS

All Intellectual Property (including but not limited to trade secrets, copyrights and patents, if any) of either Party in existence on the effective date shall remain the property of its respective owner. Ownership of Intellectual Property developed or created by or for a Party after the effective date, as part of the delivery of the services or performance under this MoU, shall be decided on a case-by-case basis, depending on the contribution of each Party to its development, in a separate written agreement signed between the Parties.

ARTICLE 11: DISPUTES

This MoU is based both on the immediate benefits and on developing and building enduring relationships serving and safeguarding the commercial interests, as well as the standing in the world of Information Technology, of the Parties hereto. Hence any question, doubt or dispute arising out of the interpretation of any term or usage herein, or on the implementation and functioning of the various understandings forming a part of this MoU, shall be resolved by the Heads of the two organizations, viz. the Executive Director, C-DAC, and the Managing Director, EGI.eu, or their authorized representatives for the purpose mentioned herein, by discussions and negotiations based on consensus, in the spirit of developing and strengthening the mutual relationships. The decision so reached shall be final and binding on both Parties. The Parties do not intend to create any legal relationship or obligations under this MoU.

ARTICLE 12: INDEMNITY

Each Party shall keep the other Party, its affiliates, officers, directors, employees, agents, representatives and customers indemnified and harmless from and against any and all costs, liabilities, losses and expenses (including, but not limited to, reasonable attorneys' fees) arising out of any claim, suit, action or proceeding (each, an "Action") for any act(s) or omission(s) of such Party under this MoU, or any incidental matter, or in any way arising therefrom.

ARTICLE 13: ASSIGNMENT AND TRANSFER

Any and all rights, duties and obligations of the Parties under this MoU shall not be transferred or assigned by either Party to any third party without the prior written consent of the other Party.

ARTICLE 14: SEVERABILITY

The invalidity or unenforceability of any provision of this MoU shall not affect the validity or enforceability of any other provision of this MoU, which shall continue in full force and effect except for any such invalid and unenforceable provision.

ARTICLE 15: LIMITATION OF LIABILITY

In no event shall either Party be liable to the other for any incidental, consequential, special, exemplary, direct or indirect damages, or for lost profits, lost revenues or loss of business arising out of the subject matter of this MoU, regardless of the cause of action, even if the Party has been advised of the likelihood of damages.

ARTICLE 17: NO PARTNERSHIP

Nothing in this MoU shall be deemed to constitute or create an association, trust, partnership or joint venture between the Parties, nor to constitute either Party the agent of the other Party for any purpose.

ARTICLE 18: ENTIRE MOU

This MoU constitutes the entire understanding between the Parties. Any and all written or oral agreements, representations or understandings of any kind that may have been made prior to the date hereof shall be deemed to have been superseded by the terms of this MoU.

ARTICLE 19: HEADINGS

The headings shall not limit, alter or affect the meaning of the clauses headed by them and are solely for the purpose of easy reference.


Annex – Detailed Contact List

Signing Authority
  EGI.eu: Managing Director, Yannick Legré ([email protected])
  C-DAC: Associate Director, Dr. Subrata Chattopadhyay ([email protected])

MoU Contact Point
  EGI.eu: Strategy and Policy Manager, Sergio Andreozzi ([email protected])
  C-DAC: Associate Director, Dr. Subrata Chattopadhyay ([email protected])

User Support
  EGI.eu: Chief Operations Officer, Peter Solagna ([email protected])
  C-DAC: Principal Technical Officer, Ms. Divya MG ([email protected])

Infrastructure Operations
  EGI.eu: Chief Operations Officer, Peter Solagna ([email protected])
  C-DAC: Principal Technical Officer, Ms. Divya MG ([email protected])

Technical Coordination
  EGI.eu: Technical Manager, Michel Drescher ([email protected])
  C-DAC: Joint Director, Mr. Sridharan R ([email protected])

Dissemination
  EGI.eu: Communications Manager, Neasan O'Neill ([email protected])
  C-DAC: Joint Director, Mr. Sridharan R ([email protected])

These contact points may be the same person. The EGI.eu Strategy and Policy Team ([email protected]) is to be notified regarding any changes to the contact list.


Memorandum of Understanding between EGI.eu and C-DAC

IN WITNESS WHEREOF, the Parties have caused their duly authorised representatives to sign two originals of this Memorandum of Understanding, in the English language.

The following agree to the terms and conditions of this MoU:

Name: Yannick Legré                    Name: Subrata Chattopadhyay
Designation: Managing Director         Designation: Associate Director
Date:                                  Date: 28/05/2014

WITNESSES:
1.
2.


6.4 MoU EGI – ROC-LA


Memorandum of Understanding Between

EGI.eu and CLAF

Resource Infrastructure Provider MoU

Table of Contents

Background
Article 1: Purpose
Article 2: Definitions
Article 3: Infrastructure Composition
Article 4: Joint Work Plan
Article 5: Communication
Article 6: Participation in EGI.eu groups
Article 7: Intellectual Property Rights, Jointly Owned Results and License
Article 8: Funding
Article 9: Starting date, duration and termination
Article 10: Amendments
Article 11: Annexes
Article 12: Language
Annex 1 – EGI.eu Description
Annex 2 – Description of Latin American Centre for Physics
Annex 3 – Detailed Contact List


Background

European Grid Initiative Foundation – EGI.eu

The Stichting European Grid Initiative Foundation (hereafter referred to as “EGI.eu”) has been created under Dutch law with the mission to create and maintain a pan-European Grid Infrastructure in collaboration with its Participants, i.e. the National Grid Initiatives (NGIs) and Associated participants (e.g. European International Research Organisations - EIROs), in order to guarantee the long-term availability of a generic e-infrastructure for all European research communities and their international collaborators. In its role of coordinating grid activities between European NGIs, EGI.eu will: 1) operate a secure integrated production grid infrastructure that seamlessly federates resources from providers around Europe; 2) coordinate the support of the research communities using the European infrastructure coordinated by EGI.eu; 3) work with software providers within Europe and worldwide to provide high-quality innovative software solutions that deliver the capability required by our user communities; 4) ensure the development of EGI.eu through the coordination and participation in collaborative research projects that bring innovation to European Distributed Computing Infrastructures (DCIs). A summary of EGI.eu is attached as Annex 1.

The Latin American Centre for Physics (CLAF: Centro Latino Americano de Fisica)

CLAF is an international organization that aims to promote and coordinate efforts for the development of physics in Latin America. CLAF was created on March 26, 1962, during a meeting in Rio de Janeiro, encouraged by UNESCO and the Brazilian government, with the participation of twenty Latin American countries. The initiative instituting CLAF was signed by: Argentina, Bolivia, Brazil, Colombia, Costa Rica, Cuba, Chile, Ecuador, El Salvador, Guatemala, Haiti, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Dominican Republic, Uruguay and Venezuela. The creation agreement was ratified by: Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Ecuador, Mexico, Nicaragua, Peru, Paraguay, Uruguay and Venezuela, which are thus the current Member States of CLAF. CLAF has special relations with various international organizations: with UNESCO, through both the Paris office and the office in Uruguay; with the International Centre for Theoretical Physics (ICTP) in Trieste, a UNESCO body, with which it has a collaboration agreement for relatively less developed countries and a cooperative doctoral fellowship programme; with the Joint Institute for Nuclear Research (JINR) in Dubna, where the CLAF director is part of the Scientific Council and a scholarship program has been started; and with CERN, in Geneva, with which Latin American physics schools are organized. The CLAF headquarters are located in the Centro Brasileiro de Pesquisas Físicas (CBPF) in Rio de Janeiro.

CLAF hosts ROC-LA (the Regional Operations Centre for Latin America), which started operating on September 30th, 2009 to fill a gap in relation to the still fledgling and scattered GRID activities in Latin America. In 2013, CLAF signed an MoU with WLCG (the Worldwide LHC Computing Grid), becoming the link, through ROC-LA, between all Latin American sites and WLCG. The technicians and engineers of ROC-LA are committed to the development of the GRID in Latin America and, consequently, to providing the necessary support for new institutions and/or groups attempting to set up their GRID sites in the region.


Article 1: Purpose

The purpose of this Memorandum of Understanding (MoU) is to define a non-binding framework of collaboration between EGI.eu and the Latin American Centre for Physics (hereafter also referred to individually as “the Party” or jointly as the “Parties”). The Parties recognise, by this MoU, the opening of a wider and longer-term cooperation in activities that will bring visible benefits.

Article 2: Definitions

For the purpose of this MoU, the definitions in the EGI glossary are relevant (http://go.egi.eu/glossary).

Article 3: Infrastructure Composition

The Latin American Centre for Physics acts as the ROC-LA coordinator and represents the Grid infrastructure at the Brazilian Centre for Physics Research. ROC-LA, hosted by CLAF, will support other resource centres in the region that want to join the ROC-LA infrastructure and interoperate with EGI, as well as support the research communities related to ROC-LA and its collaborating centres. The currently certified sites are:

• Brazilian Centre for Physics Research (CBPF), Rio de Janeiro, Brazil
• University of São Paulo (SAMPA), São Paulo, Brazil
• National Autonomous University (ICN-UNAM), Mexico City, Mexico
• Los Andes University (UniAndes), Bogota, Colombia
• Universidad Tecnica Federico Santa Maria (EELA-UTFSM), Valparaiso, Chile
• PUC Atlas Andino Group (ATLAND), Santiago, Chile
• Grid site of the Universidad Nacional de La Plata (EELA-UNLP), Argentina

Five more sites are at the candidate/uncertified stage. The ROC-LA infrastructure and critical services are all currently hosted at the CBPF site, and their management is performed by CBPF's technical team.

Article 4: Joint Work Plan

The Parties contribute to enabling the vision of providing European and Latin American scientists with an international collaboration for sustainable distributed computing services to support their work.

In this broad context, the specific goals of the collaboration are:

1. To provide Local and Global operational services as needed to support the international collaborations in this context;
2. To subscribe to a mandatory set of policies, procedures and OLAs;
3. To comply with the operations interfaces required by the EGI Operations Architecture, which are needed to ensure seamless and interoperable access to resources;
4. To participate as an observer in the Operations Management Board and contribute to the EGI operations agenda;
5. To be represented in the Security Policy Group to comment on the development of the security policy fabric of the infrastructure.


As coordinating party to the MoU, ROC-LA reserves the right to delegate work described below to other Operations Centres.

The specific activities to be carried out in the framework of the collaboration are:

WP1 Participation in the EGI.eu operation policy groups

Parties Involved: EGI.eu (Contact: Senior Operations Manager); ROC-LA

Description of work: Operations experts from participating institutes to be regularly represented in the Operations Management Board, to provide requirements necessary to drive the evolution of the operations architecture and generally to provide feedback through attendance at meetings, questionnaires and e-mail. CLAF to regularly participate in the SPG meetings, with the status of voting member, contributing to the development of the security policies that ensure a secure distributed computing infrastructure.

Expected outcome: Participation in OMB work. The ROC-LA Operations Manager and the ROC-LA Security Manager/Officer are already appointed and contributing to the OMB. Performance of the Operations Centre is assessed on an annual basis by EGI.eu.

WP2 Global services

Parties Involved: EGI.eu (Contact: Senior Operations Manager); ROC-LA

Description of work: Identify a set of EGI.eu Global services that ROC-LA is interested in using, according to CHAIN-REDS deliverable D3.1, together with the respective guaranteed quality parameters that EGI.eu commits to provide.

Expected outcome:

• A2.1 - EGI.eu to define the SLA for the Global services offered to ROC-LA, to be released and approved. (Leader: Peter Solagna)
• ROC-LA will make use of GOCDB, GGUS, the Accounting Service and the Grid Monitoring Services.
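Conceptually, the Grid Monitoring Services referred to here (and set up under A4.2 of WP4 below) run periodic probes against each registered service endpoint and record the outcome; those records feed the monthly performance reports of WP5. The following is a minimal sketch, assuming bare TCP reachability checks and hypothetical host names; production EGI monitoring uses dedicated probe frameworks with many service-specific tests.

```python
"""Minimal sketch of a monitoring probe loop: check each service endpoint
and record OK/DOWN with a timestamp. Host names are hypothetical, and a
bare TCP check stands in for the service-specific probes a real grid
monitoring framework would run."""
import socket
import time

ENDPOINTS = [
    ("ce.example-site.br", 8443),   # hypothetical compute element
    ("se.example-site.mx", 8444),   # hypothetical storage element
]

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    """Return 'OK' if a TCP connection succeeds, 'DOWN' otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "OK"
    except OSError:
        return "DOWN"

def run_once() -> None:
    ts = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    for host, port in ENDPOINTS:
        # Each result would be stored and later aggregated into the
        # monthly availability/reliability reports (see WP5).
        print(f"{ts} {host}:{port} {probe(host, port)}")

if __name__ == "__main__":
    run_once()
```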

WP3 Local services

Parties involved: EGI.eu (Contact: Senior Operations Manager), ROC-LA

Leading partner: ROC-LA

Description of work: Identify a set of ROC-LA local services and the respective minimum quality of service that the Party commits to provide to EGI.eu in order to be part of EGI.

A3.1: ROC-LA to adopt and employ Operational Policies and Procedures

WP4 Integration

Parties involved: EGI.eu (Contact: Senior Operations Officer), ROC-LA

Leading partner: EGI COD

Description of work: ROC-LA infrastructure to be supported, validated and integrated by EGI.eu within EGI according to the established procedure.

Expected outcome: ROC-LA enters the EGI production infrastructure.

A4.1: ROC-LA, as the Resource Infrastructure Provider for the Latin America region, carries out integration at the operational level with EGI.eu
A4.2: ROC-LA to set up and operate a Grid Monitoring Service


WP5 Reporting

Parties Involved: EGI.eu (Contact: Senior Operations Manager); ROC-LA

Description of work: As part of the EGI quality assurance procedures, the performance of the services provided by the ROC-LA Resource Centres (site-level grid services) and the performance of the core services provided by ROC-LA (operations centre-level services) are reported on a monthly basis. Reports are produced by EGI.eu and are accessible on the EGI wiki: https://wiki.egi.eu/wiki/Performance

The ROC-LA Resource Centres agree to adhere to the minimum service level targets defined in the Resource Centre OLA and the Resource Infrastructure Provider OLA.

ROC-LA installed capacity and utilization are also assessed yearly as part of the annual assessment of EGI. EGI.eu performs this assessment, which is publicly available on the EGI document DB.

Expected outcome:

• A5.1 (every year) - Annual report on performance of ROC-LA Local services (Resource Centre services and NGI services) (Leader: Peter Solagna)
• A5.2 (every year) - Annual assessment of services and installed capacity and utilization (Leader: Peter Solagna)

The EGI.eu Strategy and Policy Team (SPT) will coordinate the periodic review of the progress of the activities defined above, follow up the milestones and distribute reports to both Parties. Special meetings between the points of contact designated under Article 5 (Communication) shall be held, as often as necessary, to examine the progress in the implementation of this Agreement.

Article 5: Communication

The Parties shall keep each other informed on all their respective activities and on their progress and shall consult regularly on areas offering potential for cooperation. Each Party shall designate a “point of contact” to be responsible for monitoring the implementation of this MoU and for taking measures to assist in the further development of cooperative activities. Such points of contact shall be the ordinary channel for the Parties' communication of proposals for cooperation. The primary point of contact for each Party is:

EGI.eu: Operations Centre. E-mail: operations (at) egi.eu

ROC-LA: ROC manager. E-mail: rsantana (at) cbpf.br

Questions of principle or problems that cannot be solved at the primary contact level are escalated to the EGI.eu Director (director (at) egi.eu) and the Director of CLAF, Carlos Trallero-Giner (tallero (at) cbpf.br).

Article 6: Participation in EGI.eu groups

CLAF agrees to name a technical representative (with a deputy) for the EGI OMB. CLAF may be asked to nominate representatives to serve on other policy groups as appropriate. Sites included in ROC-LA may nominate representatives to serve on, or participate in, user support or technical work groups, as well as EGI Virtual Teams, as appropriate.

Article 7: INTELLECTUAL PROPERTY RIGHTS, JOINTLY OWNED RESULTS AND LICENSE

A. INTELLECTUAL PROPERTY RIGHTS AND LICENSE


1. “Intellectual Property Rights” shall mean all intellectual creations including but not limited to inventions, know-how, layouts, drawings, designs, specifications, computer programs, reports, processes, protocols, calculations and any other matter protected by intellectual property rights, whether registered or not, including patents, registered designs, copyrights, design rights and all similar proprietary rights and applications for protection thereof.

2. Intellectual property rights generated by a Party under this MoU shall be the property of that Party who shall be free to protect, transfer and use such Intellectual Property Rights as it deems fit.

3. Notwithstanding the foregoing, each Party shall grant the other a non-exclusive, royalty-free, perpetual license to use the Intellectual Property Rights generated by it under this MoU for use within its project or for the exploitation of the results thereof. Such license shall include the right to grant sublicenses to the entities involved in the project.

B. JOINTLY OWNED RESULTS

1. Results that were jointly generated by both Parties will be jointly owned by the Parties (hereinafter referred to as “Jointly Owned Results”), and each of the Parties shall be free to use these Jointly Owned Results as it sees fit, without owing the other Party any compensation or requiring the consent of the other Party. Each Party, therefore, for example and without limitation, has the transferable right to grant non-exclusive, further transferable licenses under such Jointly Owned Results to third parties. Each Party shall be entitled to disclose such Jointly Owned Results without restrictions, unless such Jointly Owned Results contain a Joint Invention, in which case no disclosure must be made prior to the filing of a priority application.

2. With respect to any joint invention resulting from this MoU (i.e. any invention jointly made by employees of both Parties), the features of which cannot be separately applied for as Intellectual Property Rights and which are eligible for statutory protection requiring an application or registration (herein referred to as a “Joint Invention”), the Parties shall agree on which Party will carry out any filing, as well as any further details with regard to prosecuting and maintaining the relevant patent applications.

Article 8: Funding

Each Party shall bear the costs of discharging its respective responsibilities under this MoU, including travel and subsistence of its own personnel and transportation of goods and equipment and associated documentation, unless otherwise agreed in this MoU. Each Party shall make available free of charge to the other Party any office/meeting space needed for the joint activities.

The Parties' obligations hereunder are subject to their respective funding procedures and the availability of appropriated funds. Should either Party encounter budgetary problems in the course of its respective internal procedures that may affect the activities carried out under this MoU, that Party shall notify and consult with the other Party in a timely manner in order to minimise the negative impact of such problems on the cooperation. The Parties shall jointly look for mutually agreeable solutions.

In order to reduce the impact of travel costs, face-to-face meetings should be co-located with other events that participants are likely to attend. Meetings via teleconference should be considered when the nature of the discussion does not strictly require face-to-face presence.

Article 9: Starting date, duration and termination

This MoU will start when signed by the authorised representatives of the Parties and shall remain in force until completion of the activities identified in Article 4 (Joint Work Plan), or upon termination of the projects in which the Parties participate, or upon three (3) months prior written notice by one Party to the other. In the event of termination, the Parties shall endeavour to reach agreement on terms and conditions to minimise negative impacts on the other Party. In the event of the continuation of the present cooperation, the Agreement may be extended and/or amended by mutual agreement in writing.

Article 10: Amendments

The MoU is subject to updates and modifications that can be triggered by changes in either Party's organizational model. The MoU may be amended by written agreement of the Parties. Amendments shall be valid only if signed by the authorised representatives of the Parties.

Article 11: Annexes

Annexes 1, 2 and 3 attached hereto have the same validity as this MoU and together constitute the entire understanding and the rights and obligations covering the cooperation accepted by the Parties under this MoU. Annexes may be amended following the provisions of Article 10 (Amendments).

Article 12: Language

The language for this MoU, its interpretation and all cooperative activities foreseen for its implementation, is English.

Memorandum of Understanding between EGI.eu and CLAF

IN WITNESS WHEREOF, the Parties have caused their duly authorised representatives to sign two originals of this non-binding Memorandum of Understanding, in the English language.

The following agree to the terms and conditions of this MoU:


Annex 1 – EGI.eu Description

To support science and innovation, a lasting operational model for e-Infrastructure is needed – both for coordinating the infrastructure and for delivering integrated services that cross national borders. The objective of EGI.eu (a not-for-profit foundation established under Dutch law) is to coordinate and manage the European Grid Infrastructure (EGI) federation on behalf of its members: National Grid Initiatives (NGIs) and European International Research Organisations (EIROs), to help guarantee the long-term availability of a generic e-Infrastructure for all European research communities and their international collaborators. Services provided by EGI.eu to the wider EGI community:

• Oversee the operations of EGI to guarantee the integration of resources from providers around Europe into a seamless and secure e-Infrastructure.
• Coordinate the support provided to EGI’s user communities.
• Work with technology providers to source high-quality and innovative software solutions to answer users’ requirements.
• Represent the EGI federation in the wider Distributed Computing Infrastructures (DCI) community through coordination and participation in collaborative projects.
• Coordinate the external services provided by partners in the community.
• Steer the evolution of EGI’s policy and strategy development.
• Organise EGI’s flagship events and publicise the community’s news and achievements.

EGI.eu supports a federation of high-performance computing (HPC), high-throughput computing (HTC) and cloud resources. EGI.eu is also ideally placed to integrate new Distributed Computing Infrastructures (DCIs) such as clouds, supercomputing networks and desktop grids, to benefit the user communities within the European Research Area. EGI collects user requirements and provides support for the current and emerging user communities. Support is also given to the current heavy users of the infrastructure, such as high energy physics, computational chemistry and life sciences, as they move their critical services and tools from a centralised support model to one driven by their own individual communities. The EGI community is a federation of independent national and community resource providers, whose resources support specific research communities and international collaborators both within Europe and worldwide. EGI.eu, coordinator of EGI, brings together partner institutions established within the community to provide a set of essential human and technical services that enable secure integrated access to distributed resources on behalf of the community. The production infrastructure supports Virtual Research Communities – structured international user communities – that are grouped into specific research domains. VRCs are formally represented within EGI at both a technical and strategic level. Further information (e.g. governance; services) can be found at: www.egi.eu/about/EGI.eu


Annex 2 – Description of Latin American Centre for Physics

CLAF is an international organization that aims to promote and coordinate efforts for the development of physics in Latin America. CLAF was created on March 26, 1962, during a meeting in Rio de Janeiro, encouraged by UNESCO and the Brazilian government, with the participation of twenty Latin American countries. The countries that signed the initiative instituting CLAF were: Argentina, Bolivia, Brazil, Colombia, Costa Rica, Cuba, Chile, Ecuador, El Salvador, Guatemala, Haiti, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, the Dominican Republic, Uruguay and Venezuela. The creation agreement was ratified by Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Ecuador, Mexico, Nicaragua, Peru, Paraguay, Uruguay and Venezuela, which are thus the current Member States of CLAF. CLAF has special relations with various international organizations: with UNESCO, through both the Paris office and the office in Uruguay; with the International Centre for Theoretical Physics (ICTP) in Trieste, a UNESCO body, with which it has a collaboration agreement for relatively less developed countries and a cooperative doctoral fellowship programme; with the Joint Institute for Nuclear Research (JINR) in Dubna, with the CLAF director sitting on its Scientific Council and a scholarship programme having been started; and with CERN in Geneva, with which Latin American physics schools are organized. The CLAF headquarters are located at the Centro Brasileiro de Pesquisas Físicas (CBPF) in Rio de Janeiro. CLAF hosts ROC-LA (the Latin American ROC), which started operating on September 30th, 2009, to fill a gap in relation to the still fledgling and scattered GRID activities in Latin America. As such, the technicians and engineers of ROC-LA are committed to the development of GRID in Latin America, and consequently to providing the necessary support for new institutions and/or groups attempting to set up their GRID sites in the region. In 2013, CLAF signed an MoU with WLCG (Worldwide LHC Computing Grid), becoming the link, through ROC-LA, between all Latin American sites and WLCG.

• ROC-LA’s infrastructure and critical services are all hosted at the CBPF site, and their management is performed by CBPF’s technical team.


Annex 3 – Detailed Contact List

Role: Signing Authority
  EGI.eu: Director Yannick Legré, [email protected]
  CLAF / ROC-LA: Director Carlos Trallero-Giner, [email protected]

Role: MoU Contact Point
  EGI.eu: Strategy and Policy Manager Sergio Andreozzi, [email protected]
  CLAF / ROC-LA: Director Carlos Trallero-Giner, [email protected]

Role: User support
  EGI.eu: Senior Operations Manager Peter Solagna, [email protected]
  CLAF / ROC-LA: ROC-LA Operations Manager Renato Santana, [email protected]

Role: Infrastructure Operations
  EGI.eu: Senior Operations Manager Peter Solagna, [email protected]
  CLAF / ROC-LA: ROC-LA Operations Manager Renato Santana, [email protected]

Role: Technical Coordination
  EGI.eu: Technical Manager Michel Drescher, [email protected]
  CLAF / ROC-LA: ROC-LA Operations Manager Renato Santana, [email protected]

Role: Dissemination
  EGI.eu: Communications Manager Neasan O'Neill, [email protected]
  CLAF / ROC-LA: Director Carlos Trallero-Giner, [email protected]

These contact points may be the same person. The EGI.eu Strategy and Policy Team ([email protected]) is to be notified regarding any changes to the contact list.


6.5 MoU EGI – Asia-Pacific ROC

This MoU is provided for completeness.


Memorandum of Understanding between EGI.eu and ASGC

Resource Infrastructure Provider MoU

10/04/2013 FINAL



Table of Contents

BACKGROUND
ARTICLE 1: PURPOSE
ARTICLE 2: DEFINITIONS
ARTICLE 3: INFRASTRUCTURE COMPOSITION
ARTICLE 4: JOINT WORK PLAN
ARTICLE 5: COMMUNICATION
ARTICLE 6: PARTICIPATION IN EGI.EU GROUPS
ARTICLE 7: RIGHTS AND RESPONSIBILITIES
ARTICLE 8: FUNDING
ARTICLE 9: ENTRY INTO FORCE, DURATION AND TERMINATION
ARTICLE 10: AMENDMENTS
ARTICLE 11: ANNEXES
ARTICLE 12: LANGUAGE
ARTICLE 13: GOVERNING LAW – DISPUTE RESOLUTION
ANNEX 1 – EGI.EU DESCRIPTION
ANNEX 2 – APGI DESCRIPTION
ANNEX 3 – RIGHTS AND RESPONSIBILITIES
ANNEX 4 – SETTLEMENT OF DISPUTES
ANNEX 5 – DETAILED CONTACT LIST


BACKGROUND

The Stichting European Grid Initiative Foundation (hereafter referred to as “EGI.eu”) has been created under Dutch law with the mission to create and maintain a pan-European Grid Infrastructure, in collaboration with its Participants, i.e. the National Grid Initiatives (NGIs) and Associated Participants (e.g. European International Research Organisations – EIROs), in order to guarantee the long-term availability of a generic e-Infrastructure for all European research communities and their international collaborators. In its role of coordinating grid activities between European NGIs, EGI.eu will:
1) operate a secure, integrated production grid infrastructure that seamlessly federates resources from providers around Europe;
2) coordinate the support of the research communities using the European infrastructure coordinated by EGI.eu;
3) work with software providers within Europe and worldwide to provide high-quality, innovative software solutions that deliver the capabilities required by our user communities;
4) ensure the development of EGI.eu through coordination of and participation in collaborative research projects that bring innovation to European Distributed Computing Infrastructures (DCIs).
A summary of EGI.eu is attached as Annex 1.
The Asia Pacific Grid Initiative (hereafter referred to as "APGI") is coordinated by the Academia Sinica Grid Computing Centre (ASGC) and aims at ensuring the long-term sustainability of the Asia e-Infrastructure and the continuity and enhancement of the Asia Virtual Research Communities (VRCs) using it. APGI is devoted to providing international coordination and collaboration within the region. A summary of APGI is attached as Annex 2.


ARTICLE 1: PURPOSE

The purpose of this Memorandum of Understanding [1] (MoU) is to define a framework of collaboration between EGI.eu and ASGC (hereafter also referred to as “the Party” or the “Parties”). The Parties recognise, by this MoU, the opening of a wider and longer-term cooperation in activities that will bring visible benefits.

ARTICLE 2: DEFINITIONS

For the purpose of this MoU, the definitions in the EGI glossary are relevant (http://go.egi.eu/glossary).

ARTICLE 3: INFRASTRUCTURE COMPOSITION

ASGC, in its position as APROC (Asia Pacific Regional Operational Centre), coordinates and represents the following APGI Institutions, which wish to participate in the framework of collaboration defined in this document and which delegate it to represent them in EGI.eu policy groups.

• ASGC – Academia Sinica Grid Computing Centre (Taiwan)

• ASTI – Advanced Science and Technology Institute (Philippines)

• ITB – Institut Teknologi Bandung (Indonesia)

• KEK – Inter-University Research Institute Corporation High Energy Accelerator Research Organization (Japan)

• KISTI – Korea Institute of Science and Technology Information (Korea)

• NSTDA – National Science and Technology Development Agency (Thailand)

• UNIMELB – University of Melbourne (Australia)

• UPM – Universiti Putra Malaysia (Malaysia)

Annex 2 contains the list of the participating Resource Centres that the listed Institutions are responsible for. The list of Resource Centres will be reviewed and updated every 6 months as required.

ARTICLE 4: JOINT WORK PLAN

The Parties contribute to enabling the vision of providing European scientists and their international collaborators with sustainable distributed computing services to support their work. In this broad context, the specific goals of the collaboration are:
1. To enhance the capacities of both infrastructures;
2. To provide Local and Global operational services as needed to support the international user community and the EGI operational needs;
3. To subscribe to a mandatory set of policies, procedures and OLAs;
4. To comply with the operations interfaces required by the EGI Operations Architecture [2], which are needed to ensure seamless and interoperable access to resources;
5. To participate in the Operations Management Board to contribute to the EGI operations agenda;
6. To participate in the Security Policy Group to contribute to the development of the security policy fabric of the infrastructure.

[1] An MoU is a written agreement that clarifies relationships and responsibilities between two or more parties that share services, clients, and resources.
[2] EGI Operations Architecture: Infrastructure Platform and Collaboration Platform Integration, EGI-InSPIRE Deliverable D4.6, March 2012 (https://documents.egi.eu/document/1309)


The specific activities to be carried out in the framework of the collaboration are [3]:

WP1 – Participation in the EGI.eu policy groups
Parties involved: EGI.eu (Contact: EGI Chief Operations Officer); ASGC
Description of work: APGI is to be regularly represented in the Operations Management Board (OMB), to provide the requirements necessary to drive the evolution of the operations architecture and, more generally, to provide feedback through attendance at meetings, questionnaires and e-mail. APGI is to participate regularly in the SPG meetings, with the status of voting member, contributing to the development of the security policies that ensure a secure distributed computing infrastructure.
Expected outcome: participation in the OMB work. The APGI Operations Manager and the APGI Security Manager/Officer are already appointed and contributing to the OMB. The performance of all Operations Centres is assessed on an annual basis by EGI.eu.

WP2 – Global services
Parties involved: EGI.eu (Contact: EGI Chief Operations Officer); ASGC
Description of work: identify the set of EGI.eu Global services that APGI is interested in using, together with the respective guaranteed quality parameters that EGI.eu commits to provide.
Expected outcome:
• A2.1 (09/2013) – The EGI.eu OLA defining the Global services offered is released and approved. The OLA should be periodically reviewed, at least yearly. (Leader: Tiziana Ferrari)
• A2.2 (04/2014) – Annual report on the EGI.eu Global services used, including performance and utilization statistics. (Leader: Tiziana Ferrari)

WP3 – Reporting
Parties involved: EGI.eu (Contact: EGI Chief Operations Officer); ASGC
Description of work: as part of the EGI quality assurance procedures, the performance of the services provided by APGI Resource Centres (site-level grid services) and the performance of the core services provided by APGI (operations centre-level services) are reported on a monthly basis. Reports are produced by EGI.eu and are accessible on the EGI wiki: https://wiki.egi.eu/wiki/Performance
APGI agrees to adhere to the minimum service level targets defined in the Resource Centre OLA [4] and the Resource Infrastructure Provider OLA [5]. APGI installed capacity and utilization are also assessed yearly as part of the annual assessment of EGI. EGI.eu performs this assessment, which is publicly available in the EGI document DB.
Expected outcome:
• A3.1 (every year) – Annual report on the performance of APGI Local services (Resource Centre services and NGI services). (Leader: Tiziana Ferrari)
• A3.2 (04/2014) – Annual assessment of APGI installed capacity and utilization. (Leader: Tiziana Ferrari)

[3] The Party leading the activity is underlined.
[4] Resource Centre Operational Level Agreement: http://documents.egi.eu/document/31
[5] Resource Infrastructure Provider Operational Level Agreement: http://documents.egi.eu/document/463


The EGI.eu Strategy and Policy Team (SPT) will coordinate the periodic review of the progress of the activities defined in the table above, follow up the milestones and distribute reports to both Parties. Special meetings between the points of contact designated under Article 5 (Communication) shall be held, as often as necessary, to examine progress in the implementation of this Agreement.

ARTICLE 5: COMMUNICATION

The Parties shall keep each other informed of all their respective activities and their progress, and shall consult regularly on areas offering potential for cooperation. Each Party shall designate a “point of contact” responsible for monitoring the implementation of this MoU and for taking measures to assist in the further development of cooperative activities. Such points of contact shall be the ordinary channel for the Parties' communication of proposals for cooperation. The primary point of contact for each Party is:
EGI.eu: Operations Centre. E-mail: operations (at) egi.eu
ASGC: [email protected]
Questions of principle or problems that cannot be solved at the primary contact level are escalated to the EGI.eu Director (director (at) egi.eu) and to ASGC ([email protected]).

ARTICLE 6: PARTICIPATION IN EGI.EU GROUPS

ASGC agrees to name a technical representative (with deputy) for the EGI OMB. ASGC may be asked to nominate representatives to serve on other policy groups as appropriate.

ARTICLE 7: RIGHTS AND RESPONSIBILITIES

The rights and responsibilities of the Parties are set out in Annex 3.

ARTICLE 8: FUNDING

Each Party shall bear the costs of discharging its respective responsibilities under this MoU, including travel and subsistence of its own personnel and transportation of goods and equipment and associated documentation, unless otherwise agreed in this MoU. Each Party shall make available free of charge to the other Party any office/meeting space needed for the joint activities. The Parties' obligations hereunder are subject to their respective funding procedures and the availability of appropriated funds. Should either Party encounter budgetary problems in the course of its respective internal procedures that may affect the activities carried out under this MoU, that Party shall notify and consult with the other Party in a timely manner in order to minimise the negative impact of such problems on the cooperation. The Parties shall jointly look for mutually agreeable solutions. In order to reduce travel costs, face-to-face meetings should be co-located with other events that participants are likely to attend. Meetings via teleconference should be considered when the nature of the discussion does not strictly require a face-to-face presence.


ARTICLE 9: ENTRY INTO FORCE, DURATION AND TERMINATION

This MoU will enter into force when signed by the authorised representatives of the Parties and shall remain in force until completion of the activities identified in Article 4 (Joint Work Plan), or upon termination of the projects in which the Parties participate, or upon three (3) months' prior written notice by one Party to the other. In the event of termination, the Parties shall endeavour to reach agreement on terms and conditions that minimise negative impacts on the other Party. In the event of the continuation of the present cooperation, the Agreement may be extended and/or amended by mutual agreement in writing.

ARTICLE 10: AMENDMENTS

The MoU may be amended by written agreement of the Parties. Amendments shall be valid only if signed by the authorised representatives of the Parties.

ARTICLE 11: ANNEXES

Annexes 1 to 5 attached hereto have the same validity as this MoU and together constitute the entire understanding and the rights and obligations covering the cooperation accepted by the Parties under this MoU. Annexes may be amended following the provisions of Article 10 (Amendments).

ARTICLE 12: LANGUAGE

The language for this MoU, its interpretation and all cooperative activities foreseen for its implementation, is English.

ARTICLE 13: GOVERNING LAW – DISPUTE RESOLUTION

The terms of this MoU shall be interpreted in accordance with their true meaning and effect, independently of national and local law; provided that, if and insofar as this MoU does not stipulate, or any of its terms are ambiguous or unclear, reference shall be made to the substantive laws of Belgium. Disputes shall be resolved by amicable settlement or, failing that, by arbitration in accordance with the procedure set out in Annex 4.


Memorandum of Understanding between EGI.eu and APGI

IN WITNESS WHEREOF, the Parties have caused their duly authorised representatives to sign two originals of this Memorandum of Understanding, in the English language. The following agree to the terms and conditions of this MoU:


Annex 1 – EGI.eu Description

To support science and innovation, a lasting operational model for e-Infrastructure is needed – both for coordinating the infrastructure and for delivering integrated services that cross national borders. The objective of EGI.eu (a not-for-profit foundation established under Dutch law) is to coordinate and manage the European Grid Infrastructure (EGI) federation on behalf of its members: National Grid Initiatives (NGIs) and European International Research Organisations (EIROs), to help guarantee the long-term availability of a generic e-Infrastructure for all European research communities and their international collaborators. Services provided by EGI.eu to the wider EGI community:

• Oversee the operations of EGI to guarantee the integration of resources from providers around Europe into a seamless and secure e-Infrastructure.
• Coordinate the support provided to EGI’s user communities.
• Work with technology providers to source high-quality and innovative software solutions to answer users’ requirements.
• Represent the EGI federation in the wider Distributed Computing Infrastructures (DCI) community through coordination and participation in collaborative projects.
• Coordinate the external services provided by partners in the community.
• Steer the evolution of EGI’s policy and strategy development.
• Organise EGI’s flagship events and publicise the community’s news and achievements.

EGI.eu supports ‘grids’ of high-performance computing (HPC) and high-throughput computing (HTC) resources. EGI.eu is also ideally placed to integrate new Distributed Computing Infrastructures (DCIs), such as clouds, supercomputing networks and desktop grids, to benefit the user communities within the European Research Area. EGI collects user requirements and provides support for current and emerging user communities. Support is also given to the current heavy users of the infrastructure, such as high energy physics, computational chemistry and life sciences, as they move their critical services and tools from a centralised support model to one driven by their own individual communities. The EGI community is a federation of independent national and community resource providers whose resources support specific research communities and international collaborators, both within Europe and worldwide. EGI.eu, as coordinator of EGI, brings together partner institutions established within the community to provide a set of essential human and technical services that enable secure, integrated access to distributed resources on behalf of the community. The production infrastructure supports Virtual Research Communities (VRCs) – structured international user communities – that are grouped into specific research domains. VRCs are formally represented within EGI at both a technical and a strategic level. Further information (e.g. governance; services) can be found at: www.egi.eu/about/EGI.eu


Annex 2 – APGI Description

The Asia Pacific Grid Initiative (APGI) represents an Asian consortium that aims at ensuring the long-term sustainability of the Asia e-Infrastructure and the continuity and enhancement of the Asia Virtual Research Communities (VRCs) using it. APGI is coordinated by the Academia Sinica Grid Computing Centre (ASGC) and is devoted to providing international coordination and collaboration within the region. The partners in APGI cooperate within the region as well as engaging in bilateral interactions with partners in Europe (within EGI-InSPIRE) and elsewhere in the world. As the lead partner of APGI, ASGC serves as the liaison between the EGI-InSPIRE Project Office and APGI. In addition, ASGC participates on behalf of APGI in the EGI-InSPIRE Project Management Board (PMB), representing the unfunded Asia-Pacific partners. ASGC coordinates the e-Science infrastructure support and user community engagement with all the APGI partners. ASGC operates the Asia Pacific Regional Operation Centre (APROC) to extend the EGI infrastructure in the Asia-Pacific region and maximise e-Infrastructure reliability in support of various e-Science user communities. APROC is in charge of site certification as well as application environment verification, in order to maintain a consistent APGI regional collaboration framework as a whole. ASGC provides both first-line and second-line support to APGI member sites through the APROC framework. The reliability of the APGI infrastructure is enforced by the Operational Level Agreement of each site and by the APROC services.

Resource Centres

• Australia
  o UNIMELB – University of Melbourne
    § Australia-ATLAS
• Indonesia
  o ITB – Institut Teknologi Bandung
• Japan
  o KEK – Inter-University Research Institute Corporation High Energy Accelerator Research Organization
    § JP-KEK-CRC-02
• Korea
  o KISTI – Korea Institute of Science and Technology Information
    § KR-KISTI-GCRT-01
    § KR-KISTI-GSDC-01
• Malaysia
  o UPM – Universiti Putra Malaysia
    § MY-UPM-BIRUNI-01
• Philippines
  o ASTI – Advanced Science and Technology Institute
    § PH-ASTI-LIKNAYAN
• Taiwan
  o ASGC – Academia Sinica Grid Computing Centre


    § TW-EMI-PPS
    § TW-FTT
    § TW-eScience
    § Taiwan-LCG2
• Thailand
  o NSTDA – National Science and Technology Development Agency
    § No sites to date


Annex 3 – Rights and Responsibilities

A. GENERAL
1. ASGC agrees to adhere to applicable policies and procedures relating to the use of the production infrastructure.
2. A Party which makes material, equipment or components available to the other Party for the purposes of activities under this MoU shall remain the proprietor of such material, equipment or components.
3. Each Party shall remain fully responsible for its own activities, including the fulfilment of its obligations under any grant agreement with the European Commission or under any consortium agreement related thereto.

B. PERSONNEL
1. Each Party shall be solely responsible for any personnel hired to carry out work under this MoU.
2. In case personnel employed by one Party temporarily carry out work under this MoU on the premises of another (hereafter referred to as “secondment”), the following provisions shall apply:
(a) The persons seconded shall be subject to all regulations, including, in particular, safety regulations, applicable on the site of the Party they are seconded to.
(b) The personnel seconded by a Party to another shall remain employees of the Party having seconded them, and such Party, as employer, shall bear exclusive responsibility for the payment of salary and for the procurement of adequate social security and insurance, including third party liability insurance and health insurance.
(c) Unless otherwise agreed by the Parties concerned, Intellectual Property Rights generated by personnel seconded by a Party to another shall be owned by the Party having seconded such personnel.

C. INTELLECTUAL PROPERTY RIGHTS AND LICENSE
1. “Intellectual Property Rights” shall mean all intellectual creations, including but not limited to inventions, know-how, layouts, drawings, designs, specifications, computer programs, reports, processes, protocols, calculations and any other matter protected by intellectual property rights, whether registered or not, including patents, registered designs, copyrights, design rights and all similar proprietary rights and applications for protection thereof.
2. Intellectual Property Rights generated by a Party under this MoU shall be the property of that Party, who shall be free to protect, transfer and use such Intellectual Property Rights as it deems fit.
3. Notwithstanding the foregoing, each Party shall grant the other a non-exclusive, royalty-free, perpetual license to use the Intellectual Property Rights generated by it under this MoU for use within its project or for the exploitation of the results thereof. Such license shall include the right to sublicense the entities involved in the project.

D. JOINTLY OWNED RESULTS
1. Results that were jointly generated by both Parties will be jointly owned by the Parties (hereinafter referred to as “Jointly Owned Results”), and each of the Parties shall be free to use these Jointly Owned Results as it sees fit without owing the other Party any compensation or requiring the consent of the other Party. Each Party, therefore, for example and without limitation, has the transferable right to grant non-exclusive, further transferable licenses under such Jointly Owned Results to third parties. Each Party shall be entitled to disclose such Jointly Owned Results without restrictions, unless such Jointly Owned Results contain a Joint Invention, in which case no disclosure must be made prior to the filing of a priority application.
2. With respect to any joint invention resulting from this MoU (i.e. any invention jointly made by employees of both Parties), the features of which cannot be separately applied for as Intellectual Property Rights and which are eligible for statutory protection requiring an application or registration (herein referred to as “Joint Invention”), the Parties shall agree on which Party will carry out any filing, as well as any further details with regard to prosecuting and maintaining the relevant patent applications.

E. PUBLIC RELATIONS
1. Any publication by a Party resulting from the activities carried out under this MoU shall be subject to prior agreement of the other Party, such agreement not to be unreasonably withheld.
2. The Parties may each release information to the public, provided it is related only to its own part of the activities under this MoU. In cases where the activities of the other Party are concerned, prior consultation shall be sought. In all relevant public relations activities, the contribution of each Party related to activities covered by this MoU shall be duly acknowledged.

F. CONFIDENTIALITY OF INFORMATION
1. The Parties may disclose to each other information that the disclosing Party deems confidential and which is (i) in writing and marked “confidential”, or (ii) disclosed orally, identified as confidential when disclosed, and reduced to writing and marked “confidential” within fifteen (15) days of the oral disclosure (hereafter referred to as “Confidential Information”). Confidential Information shall be held in confidence and shall not be disclosed by the receiving Party to any third party without the prior written consent of the disclosing Party.
2. Notwithstanding the foregoing, a Party is entitled to disclose Confidential Information which it is required by law to disclose, or which, in a lawful manner, it has obtained from a third party without any obligation of confidentiality, or which it has developed independently from any Confidential Information received under this MoU, or which has become public knowledge other than as a result of a breach on its part of these confidentiality provisions.

G. LIABILITY
1. Each Party shall use reasonable endeavours to ensure the accuracy of any information or materials it supplies to the other Party and of any other contribution it makes hereunder, and promptly to correct any error therein of which it is notified. The supplying Party shall be under no obligation or liability other than as stated above, and no warranty or representation of any kind is made, given or to be implied as to the sufficiency, accuracy or fitness for a particular purpose of such information, materials or other contribution, or as to the absence of any infringement of any proprietary rights of third parties through the possession or use of such information, materials or other contribution. The recipient Party shall be entirely responsible for its use of such information, materials or other contribution and shall hold the other Party free and harmless and indemnify it for any loss or damage with regard thereto.
2. Except in case of gross negligence or wilful misconduct, neither Party shall be liable for any indirect or consequential damages of the other Party, including loss of profit or interest, under any legal cause whatsoever and on account of whatsoever reason.

H. PARTICIPATION IN SIMILAR ACTIVITIES
1. The Parties are not prevented by this MoU from participating in activities similar to those described in this document with third parties. There is no obligation to disclose any similar activity to the other Party. However, when considered of mutual benefit, both Parties are encouraged to involve the other Party in similar activities, with the goal of disseminating knowledge about EGI.eu.

Page 119: D3.4–Interoperation report - Documents: Documentsdocuments.ct.infn.it/record/581/files/CHAIN-REDS-D3 4.pdf · Figure 13: Status of the ABINIT installation in LA, Arab and EU infrastructures

Memorandum of Understanding between

EGI.eu and ASGC

10/04/2013 FINAL 14 / 15

Annex 4 – Settlement of Disputes

1. All disputes or differences arising in connection with this MoU which cannot be settled amicably shall be finally settled by arbitration, in accordance with the procedure specified below, which shall be adapted in the light of the number of Parties involved.
2. Within thirty (30) calendar days of written notification by a Party to the other Party of its intention to resort to arbitration, the first Party shall appoint an arbitrator. The second Party shall appoint an arbitrator within three (3) months of the appointment of the first arbitrator. The two arbitrators shall, by joint agreement and within ninety (90) calendar days of the appointment of the second arbitrator, appoint a third arbitrator, who shall be the Chairman of the Arbitration Committee.
3. If the second Party fails to appoint an arbitrator, or the two arbitrators fail to agree on the selection of a third arbitrator, the second or, as the case may be, the third arbitrator shall be appointed by the President of the Court of Justice of the European Communities.
4. Unless otherwise agreed by the Parties concerned within thirty (30) calendar days of the provision of the notice referred to in Article 12 above, the arbitration proceedings shall take place in Brussels and shall be conducted in English. The Parties shall, within one month of the appointment of the third arbitrator, agree on the terms of reference of the Arbitration Committee, including the procedure to be followed.
5. The Arbitration Committee shall faithfully apply the terms of this MoU. The Arbitration Committee shall set out in the award the detailed grounds for its decision.
6. The award shall be final and binding upon the Parties, who hereby expressly agree to renounce any form of appeal or revision.
7. The costs, including all reasonable fees expended by the Parties in any arbitration hereunder, shall be apportioned by the Arbitration Committee between these Parties.

Page 120: D3.4–Interoperation report - Documents: Documentsdocuments.ct.infn.it/record/581/files/CHAIN-REDS-D3 4.pdf · Figure 13: Status of the ABINIT installation in LA, Arab and EU infrastructures

Memorandum of Understanding between

EGI.eu and ASGC

10/04/2013 FINAL 15 / 15

Annex 5 – Detailed Contact List

Role: Signing Authority
  EGI.eu: Director Steven Newhouse, [email protected]
  ASGC: Project Director Simon C. Lin, [email protected]

Role: MoU Contact Point
  EGI.eu: Strategy and Policy Manager Sergio Andreozzi, [email protected]
  ASGC: Operations Manager Eric Yen, [email protected]

Role: User support
  EGI.eu: Chief Operations Officer Tiziana Ferrari, [email protected]
  ASGC: Application Manager Hsin-Yen Chen, [email protected]

Role: Infrastructure Operations
  EGI.eu: Chief Operations Officer Tiziana Ferrari, [email protected]
  ASGC: Operations Manager Eric Yen, [email protected]

Role: Technical Coordination
  EGI.eu: Technical Manager Michel Drescher, [email protected]
  ASGC: Operations Manager Eric Yen, [email protected]

Role: Dissemination
  EGI.eu: Deputy Director Catherine Gater, [email protected]
  ASGC: Dissemination Deputy Vicky Huang, [email protected]

These contact points may be the same person. The EGI.eu Strategy and Policy Team ([email protected]) is to be notified regarding any changes to the contact list.


7 Annex II: Review of HPC installations in the regions

7.1 Arabia

IMAN1

Every success story starts with a great vision. The IMAN1 project was born from His Majesty King Abdullah II's vision and his strong belief that his people could turn this vision into reality. The goal of the project was to build a High Performance Computing facility within the most economical parameters, and to do it all in Jordan utilizing Jordanian innovation and ingenuity. That is why IMAN1 was completely designed, developed and built in Jordan by Jordanian resources.

IMAN1 was built using 2260 PlayStation 3 devices with IBM Cell processors connected together on a very fast fiber-based network, essentially turning a video gaming console into a supercomputing powerhouse capable of performing up to 25 trillion mathematical operations per second. In addition, it has one of the world's best price-per-performance ratios in the High Performance Computing market.

Currently, IMAN1 is operated and utilized by JAEC (Jordan Atomic Energy Commission) and SESAME (Synchrotron-light for Experimental Science & Applications in the Middle East), supporting their critical operations and fulfilling their high-end computing needs. In addition, it is available to all universities and scientific institutions in Jordan as an open platform to advance and support their research activities and development efforts in the areas of science, medicine and engineering.

System technical description and available clusters

KING HPC cluster

• Hardware: 2260 PS3 devices. Each PS3 has an IBM Cell processor, which has a main Power Processor CPU (called the PPE) and 8 special compute engines (called SPEs) available for raw computation. The SPEs can only be accessed through the PPE. Moreover, each SPE performs vector operations, meaning that it can compute on multiple data elements in a single step (SIMD). Total processors (PPE + SPE) = 18,080; total RAM = 565 GB; total disk space = 132 TB.

• Network: distribution layer: Juniper EX 3200 48-port 10/100/1000 switches; core layer: Juniper EX8216-BASE-AC fiber-based switch.

• Software: custom-designed, lightweight Linux kernel (version 2.6.29), Open MPI (Message Passing Interface), SSH (Secure Shell), NFS (Network File System).
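As an illustration of how the Open MPI stack listed above is typically exercised on a cluster of this kind, the following minimal C program (a generic sketch, not taken from IMAN1 documentation; the file name and the rank count in the launch command are assumptions) has each process report its rank and the node it runs on:

    /* hello_mpi.c - minimal Open MPI check: each rank reports itself */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char node[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks */
        MPI_Get_processor_name(node, &len);   /* name of the hosting node */

        printf("Rank %d of %d running on %s\n", rank, size, node);

        MPI_Finalize();
        return 0;
    }

Such a program would be compiled with the Open MPI wrapper compiler and launched over the SSH-reachable nodes, e.g. "mpicc hello_mpi.c -o hello_mpi" followed by "mpirun -np 64 ./hello_mpi".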

GAMA GPU HPC Node

• Hardware: GPU: NVIDIA Tesla K20; CPU: two quad-core Intel Xeon E5-2609 processors; RAM: 32 GB.

• Software: OpenCL or CUDA
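Since the GAMA node is programmed with OpenCL or CUDA, a common first step is to enumerate the device from host code. The following minimal OpenCL host program in C is a generic sketch (not taken from IMAN1 documentation) that assumes a single GPU platform is installed and, for brevity, omits error checking; it queries the name and compute-unit count of the first GPU found:

    /* query_gpu.c - list the first OpenCL GPU device (link with -lOpenCL) */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        char name[256];
        cl_uint units;

        /* take the first available platform and its first GPU device */
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        /* read back the device name and the number of compute units */
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                        sizeof(units), &units, NULL);

        printf("GPU: %s, compute units: %u\n", name, units);
        return 0;
    }

On a Tesla K20, such a query would report the device name and its streaming multiprocessors exposed as OpenCL compute units.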

Zaina (Intel CPU-based) HPC cluster

• Hardware: 5 nodes with dual quad-core CPUs (8 Intel Xeon cores) and 2 GB of memory each (HP ProLiant DL140 servers); 2 nodes with dual quad-core CPUs with SMP (16 Intel Xeon cores) and 16 GB of RAM (Dell PowerEdge R710).

• Network: dual 1000 Mbps TCP/IP.

• Software: Pelegant, Open MPI, WIEN2k, SIESTA, ATLAS, BLAS, LAPACK, HPL.

Usage and sharing

IMAN1 acts as the Jordan national supercomputing center for users among institutions and collaborators.

• Approach to HPC resource sharing with a wider community: open scientific calls are to be considered and launched soon.
• Percentage of resources for sharing within a worldwide scientific community: for the KING HPC cluster, up to 60% can be shared.

Current Research @ IMAN1

• Jordan Research and Training Reactor Core Modeling and Simulation.
• Water Nano-Particles and their Effects on the Thermal Hydraulic Behavior of a SMART Reactor.
• SMART Reactor Design Enhancements (Circular Fuel & Nano-Fluids).
• Simulating the Beam Dynamics for the Synchrotron Light Storage Ring and Booster of SESAME (Synchrotron-light for Experimental Science & Applications in the Middle East).
• High Performance Computing for Synchrotron Radiation (SR) Applications.
• Computer-Based Drug Discovery and Refinement of Protein X-ray Crystallography Data: several ongoing projects being carried out in IMAN1 aim at the discovery of novel promising drug molecules to treat diseases and epidemics of national and international interest, in particular those diseases affecting third-world countries.
• Medical Image Segmentation for early diagnosis and treatment of diseases.
• Oxygen Reduction on Transition Metal Chalcogenides: searching for new catalysts that could lower the cost of electro-catalysts, thus lowering the cost of fuel cells, which are considered to be among the most promising products of the 21st century.
• Investigating the Structural, Electronic, Electrical & Mechanical Properties of Carbon Nano-Ribbons & Nano-Tubes for Designing Better Nano-Electronic Transistors.
• Advanced Parallel Sorting Algorithms for Massive Data Sets.
• Simulating the Path and Effects of an Asteroid Hitting Planet Earth.

For more information: http://www.iman1.jo/iman1/index.php/news/40-examples-of-iman1-use

Helpdesk: advanced user support, on-site and off-site.
User registration, authentication and authorization: this is done online via a web submission form, followed up by a technical and scientific evaluation; access is then granted via an SSH public/private key mechanism.

Bibliotheca Alexandrina Supercomputer (BA Supercomputer)

The supercomputer is the outcome of a protocol of collaboration signed between the Ministry of Communication and Information Technology and the Library of Alexandria back in 2006. The New Library of Alexandria, Bibliotheca Alexandrina (BA) (www.bibalex.org), is dedicated to recapturing the spirit of the ancient Library of Alexandria, a center of world learning from 300 BC to 400 AD. The BA aspires to be the world's window on Egypt and Egypt's window on the world; a learning institution of the digital age; and, above all, a center for learning, tolerance, dialogue and understanding. ISIS, the International School of Information Science (http://bibalex.org/isis), is a research institute affiliated with the BA that acts as an incubator for digital and technological projects. Guided by the BA's goals, ISIS strives to preserve heritage for future generations in digital form and to provide universal access to human knowledge, in addition to promoting research and development activities and projects related to building a universal digital library. ISIS has adopted a number of major projects in cultural heritage as well as two R&D projects: the Virtually Immersive Scientific and Technology Applications (VISTA) and the Supercomputer. Researchers can run complex simulations using the BA supercomputer.

System information:
The BA supercomputer is a large computer cluster composed of 130 computational nodes, 6 management nodes, an InfiniBand inter-process communication network, 36 TB of storage, a management network and a backup system. Nodes were selected to give maximum throughput with the highest availability: each node has two quad-core Intel Xeon processors (64-bit technology) running at 2.83 GHz, 8 GB of RAM, an 80 GB hard disk, a dual-port InfiniBand adapter, an Ethernet network port and a dual power supply. With such specifications, the supercomputer cluster is meant to be deployed for specialized applications that require immense mathematical calculations, rendering it a valuable tool for researchers seeking optimum and accurate results at a rate of trillions of calculations per second.

Joint operations and resource sharing
Collaboration with the Cy-Tera project, which includes some aspects of technical support: http://cytera.cyi.ac.cy/
Computation results are visually simulated through the visualization infrastructure of VISTA (vista.bibalex.org). This allows Egyptian researchers to cooperate and work closely with researchers from all over the world, enhancing the R&D environment as a strategic means to securing sustainable development and growth in Egypt and the region. The project is currently based on an advanced CAVE (Computer Aided Virtual Environment) system, FLEX. Using stereoscopic projection together with high-resolution 3D computer graphics, an illusion is created which makes users feel immersed in a virtual environment, helping them to better perceive, and hence analyse, the visualized data. The VISTA software developers have developed useful virtual reality toolkits as well as visualization and virtual reality applications in many fields, including bioinformatics, medical visualization, information visualization and architecture. In the field of cultural heritage, VISTA has two main products: the first is a highly detailed immersive virtual tour of the Bibliotheca Alexandrina; the second is a collaboration with the IBM Centre for Advanced Studies in Cairo on research that aims at studying the environmental threats facing the Sphinx, one of the most important pieces of Egyptian and world heritage.

7.2 China

The HPC infrastructure in China has mainly been supported by the national 863 program, the most important high-tech R&D program in China since 1987. The emphasis of the 863 program shifted from early "intelligent computers" to "high performance computers", then to "HPC environment" and "high productivity"; currently it emphasizes integrated efforts on HPC systems, the HPC environment and HPC applications. There have been three key projects in the last 12 years: High Performance Computer and Core Software (2002-2005), High Productivity Computer and Grid Service Environment (2006-2010), and High Productivity Computer and Application Environment (2011-2016). These projects produced supercomputers including Dawning 4000A, 5000A and 6000, Lenovo DeepComp 6800 and 7000, Tianhe-1A, Tianhe-2, Sunway/Bluelight and so on. CNGrid (China National Grid) was one part of these projects; it connects 14 HPC sites with, in total, more than 3 PFlops of computing power and 15 PB of storage capacity. The most powerful supercomputers are installed in the national supercomputing centers listed in the following table.

Table 13: Brief information on the national supercomputing centers in China

No. | Name | Established | Supercomputer | Peak performance
1 | Supercomputing Center of the Chinese Academy of Sciences | 2002 | Lenovo DeepComp 7000 | 158 TFlops
2 | Shanghai Supercomputer Center | 2002 | Dawning 5000A | 220 TFlops
3 | National Supercomputer Center in Tianjin | 2010 | Tianhe-1A | 4700 TFlops
4 | National Supercomputing Center in Jinan | 2011 | Sunway/Bluelight | 1100 TFlops
5 | National Supercomputing Center in Shenzhen | 2011 | Dawning 6000A | 3000 TFlops
6 | National Supercomputing Center in Changsha | 2013 | Tianhe-1 | 1372 TFlops
7 | National Supercomputing Center in Guangzhou | 2013 | Tianhe-2 | 54900 TFlops
Total peak performance: 65450 TFlops

The following are brief descriptions of some of the HPC centers in the country.

Sunway/BlueLight at NSC-JN

The National Supercomputing Center in Jinan (NSC-JN) was founded in 2011, with a design capacity of more than 10 Pflops; in its first phase it built a 1 Pflops computing platform. NSC-JN was the first national supercomputing center to install a purely home-made supercomputer, named Sunway Bluelight. The machine is equipped with 8704 home-made Shenwei SW1600 processors, each with 16 cores. The compute nodes are interconnected with a QDR InfiniBand network with a bandwidth of 40 Gbps. The total memory of the machine is 170 TB. The system runs the Sunway Ruisi Parallel Operating System, a Linux variant. The peak performance is up to 1.1 Pflops, and the LINPACK efficiency is 74.4%. NSC-JN has deployed a storage system with 2 PB capacity.


Figure 21: Sunway/Bluelight supercomputer in NSC-JN

The following software tools are installed at NSC-JN.
Commercial software: LS-DYNA, ANSYS packages, Mechanical, MSC packages, ABAQUS, etc.
Bio-drugs: AMBER, CHARMM, GROMACS, NAMD, LAMMPS, Desmond, DOCK, AutoDock, Gaussian, etc.
Material Sciences: WIEN2k, CP2K, XMD, CPMD, MedeA, DL_POLY, SIESTA, DAPACO, SMEAGOL, VASP, Materials Studio, etc.
Climate: WRF, MM5, CESM, GRAPES.
Ocean Science: FOAM, HYCOM, POP, MOM, FVCOM, MITgcm, ROMS, etc.
Computational Chemistry: NWChem, Q-Chem, PSI, GAMESS, CPMD.

NSC-JN provides services mostly to users from Shandong Province, where the center is located, but it also serves users from the rest of the country. The users come from universities and research institutes as well as industry. So far, the computing resources are not directly open to international users.

NSC-JN is one of the CNGrid sites coordinated by SCGrid, the supercomputing center of the Chinese Academy of Sciences. The operation of CNGrid has been described in Section 2.3 of this deliverable.

Tianhe-2 at NSC-Guangzhou

Tianhe-2 is a 33.86-petaflops supercomputer located at Sun Yat-sen University, Guangzhou. It has 16,000 compute nodes, each comprising two Intel Ivy Bridge Xeon processors and three Xeon Phi coprocessor chips, for a total of 3,120,000 cores. Each of the 16,000 nodes possesses 88 GB of memory (64 GB used by the Ivy Bridge processors and 8 GB for each of the Xeon Phi coprocessors). The total CPU-plus-coprocessor memory is 1.34 PB. The interconnect, called TH Express-2 and designed by NUDT, utilizes a fat-tree topology with 13 switches, each of 576 ports.

Tianhe-2 runs Kylin Linux, a version of the operating system developed by NUDT. Resource management is based on the Simple Linux Utility for Resource Management (SLURM).

A series of application tools has been deployed on Tianhe-2, including the following.
Geography: CESM, paraFEM, MOOSE Framework, OpenFOAM, etc.
Astronomy: CESM, WRF, CMAQ, FDS.
Ocean Science: CESM, FVCOM, ELCIRC.
Cosmology: CUBEP3M.

The computing services are provided to users across the whole country. The process and policy for access to the resources of Tianhe-2 are described on the web page (in Chinese) http://www.nscc-gz.cn/a/shangjifuwu/. To apply for access to Tianhe-2, the user should first enquire about the system in order to obtain a trial account to log in to the machine. With the trial account, the user can evaluate the system by installing and running their software on Tianhe-2, get acquainted with the computing platform and estimate the cost of using the machine. After that, the user can apply for a formal account from the supercomputing center. Applicants should provide user information, the purpose of using the computer and the estimated resource requirements. The supercomputing center will validate the user and sign the agreement.

Tianhe-2 does not provide Grid-like authentication. Remote users can only use a VPN to access the machine and log in to the system with a username and password.

7.3 India

This subsection covers some of the major HPC systems used in Indian education and research institutions, including the PARAM series, giving their main features and other relevant details.

Most institutes in India (except C-DAC, with its GARUDA infrastructure) build their resources mainly to cater to their internal requirements. The resources at C-DAC are used by both internal and external organizations within India, depending on the project. However, in some cases resources are shared only after an MoU has been signed, as per the agreement and the available free resources.

The HPC capability of India is tracked by a recent project called Top Supercomputers-India (http://topsupercomputers-india.iisc.ernet.in/). This project lists the most powerful supercomputers in India twice a year and is the Indian equivalent of the Top500 project, which lists the 500 most powerful supercomputers in the world.

PARAM Yuva II

System data:
• Location: C-DAC, Pune.
• The PARAM Yuva II cluster has 225 Intel Xeon E5-2670 (Sandy Bridge) nodes, which also contain Intel Xeon Phi 5110P (Knights Corner) co-processors, constituting a total of 3600 CPU cores and 27000 co-processor cores, with a 64-bit Linux (CentOS 6.2) operating system and an FDR InfiniBand interconnect. Each node has 64 GB of memory.
• Storage: HPC scratch area with 10 GB/s write bandwidth over a parallel file system. The reliable user home area is 100 TB; backup is 400 TB (native capacity).

• Software: the operating system is CentOS 6.2, kernel 2.6.32-220. Intel Cluster Studio XE 2013 and the PGI Cluster Development Kit are available.
• Rpeak is 529.74 TFlops; Rmax is 386.71 TFlops.

Usage and sharing:
Access policies: users can contact the PARAM Yuva administration to get an account on PARAM Yuva. A user can then log in directly and access the resource if it is available, or reserve the resource through a process called the Dedicated Slot Booking Facility (DSBF). The DSBF steps are as follows:

1) Log in to the https://npsf.cdac.in website using PARAM Yuva-II user credentials.
2) Click on the Slot Booking menu and then click on the Book Slot sub-menu.
3) Fill in all the details, as asked.
4) Click on the Book button to book the slot.
5) Upon successful booking, the reservation ID will be shown in a pop-up and received in a notification e-mail.

The Dedicated Slot Booking Facility is governed by the following rules, regulations and guidelines:

* The dedicated slot booking facility (DSBF) is opened for a fixed duration prior to the actual period of usage of the slots.

* DSBF is announced for a pre-defined fixed period, e.g. 25 February 2015 to 11 March 2015.

* DSBF closes on the specified date and time, irrespective of whether bookings have filled all the available slots.

* DSBF is a first-come-first-served service until all available slots are booked or the DSBF closing date is reached, whichever is earlier.

* A fixed set of 64 nodes (1024 CPU cores) is available under the DSB facility.

* A single slot can last up to 14 days and use up to 64 nodes (1024 CPU cores).

* The minimum slot duration is one day and the minimum resource requirement is 8 nodes (128 CPU cores).

* While booking, a user can share the slot with another user; however, the other user is then not able to book another slot.

* Slot booking is available per group rather than per user: if one user from a group has booked a slot, no other user from the same group is permitted to book one.

* Users who have been allotted slots have to submit a detailed report on the activities carried out using the slot. Further slot booking is on hold until such a report is submitted.

* If resource utilization is observed to be below 50% for a continuous period of 12 hours during the slot, the slot is cancelled.

* There is no restriction on job wall time, except as imposed by the slot duration, and no restriction on the number of jobs running under the allocated slot.

* For any additional storage requirements needed to make effective use of the allocated slot, users must obtain approval well in advance, so that the slot facility can be used to the maximum.

* It is the responsibility of the user to ensure that sufficient CPU time is left under their respective project for consumption during the booked dedicated slot.

* No requests for dedicated resource booking other than through the Online Dedicated Slot Booking will be entertained.

* NPSF guarantees the resources on a best-effort basis.

* Under DSBF, node allocation is in multiples of entire nodes (16 cores per node).
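The quantitative constraints above can be summarized in a short validation sketch. The function name and its interface are hypothetical, not part of any NPSF tool; the bounds are transcribed directly from the rules listed above:

# Hypothetical validator for a DSBF booking request, transcribing the
# quantitative rules listed above (not an actual NPSF interface).
CORES_PER_NODE = 16            # allocation is in multiples of entire nodes
MIN_NODES, MAX_NODES = 8, 64   # 128 to 1024 CPU cores
MIN_DAYS, MAX_DAYS = 1, 14

def validate_slot_request(nodes, days):
    """Return a list of rule violations; an empty list means the request is valid."""
    errors = []
    if not MIN_NODES <= nodes <= MAX_NODES:
        errors.append(f"nodes must be between {MIN_NODES} and {MAX_NODES}")
    if not MIN_DAYS <= days <= MAX_DAYS:
        errors.append(f"duration must be between {MIN_DAYS} and {MAX_DAYS} days")
    return errors

# Example: a 10-day slot on 32 nodes (512 cores) passes all checks.
print(validate_slot_request(nodes=32, days=10))  # -> []
print(validate_slot_request(nodes=4, days=20))   # -> two violations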

Top 5 disciplines: see Figure 22.

Figure 22 – Utilization in PARAM Yuva II

Operations: In-house-developed tools are used; for details please refer to http://www.cdac.in/index.aspx?id=hpc_i_npsfaff. Joint operations in the context of federated resources and sharing are not possible; the entire operation and administration is carried out by the C-DAC Pune team.

PARAM Padma

System data:
• Name: PARAM Padma
• Location: C-DAC, Bangalore
• System description: C-DAC's Tera-Scale Supercomputing Facility (CTSF) houses PARAM Padma, C-DAC's next-generation high-performance scalable computing cluster with a peak computing power of 4 Teraflops.


Located in the plush environs of the C-DAC Knowledge Park in Bangalore, the CTSF is equipped to operate as a highly available computing facility with dedicated power generation and 2 x 200 KVA of uninterrupted power supply. The 1800 sq. ft. machine area, concealed as an aquarium, is precision air-conditioned to maintain a temperature of 18 ± 2 degrees Celsius and a relative humidity in the range of 15-80%.

• Hardware details: 40 compute nodes with Intel Xeon EM64T processors and an InfiniBand interconnect. This cluster is used exclusively for the GARUDA Grid Computing initiative and has external storage of 10 TB SAS and 24 TB SATA boxes.
  Processor: 2 x quad-core Xeon @ 3.16 GHz
  RAM: 24 GB
  Operating System: Rocks 5.0 on RHEL 5.1 x86_64

• Software details: Intel Compiler Suite 11.0, Intel MKL, MVAPICH2, MPICH2 / OpenMPI
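As an illustration of the kind of program this MPI stack supports, the following is a minimal sketch; it assumes the mpi4py Python bindings are installed on top of one of the listed MPI libraries (mpi4py itself is not listed in this report):

# Minimal MPI sketch, assuming mpi4py is available on top of
# MVAPICH2, MPICH2 or OpenMPI.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the communicator
size = comm.Get_size()   # total number of MPI processes

print(f"Hello from rank {rank} of {size}")
# Typical launch: mpirun -np 8 python hello_mpi.py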

Usage and sharing

• Access policies: Users are from Government research labs and academic institutes.

• Approach to HPC resource sharing with a wider community: First-come-first-served

Application codes and resources (run on PARAM Padma):

* Weather Forecasting: WRF, MM5 (regional forecasting), CSM (Climate System Model)

* Computational Fluid Dynamics: Navier-Stokes solvers for large simulations, external/internal flows

* Structural Mechanics: FEMCOMP (stress analysis), INTEGRA (modelling & visualization), ONLIN (non-linear stability analysis), FRACT 3D (fracture analysis)

* Bioinformatics: BLAST, FASTA, Smith-Waterman (genomic sequencing), AMBER, CHARMM, GROMACS (molecular modelling), problem-solving environments, genetic algorithms

* Seismic Data Processing: WAVES (modelling & migration), 1D/3D migration algorithms using wavelets

* Computational Science: INDMOL (electronic structure model), GAMESS (large-scale model), ab initio molecular dynamics

Operations: System administration and job management tools include:

* Cluster monitoring tools
* System accounting tool
* Dedicated slot booking tool
* LoadLeveler 3.1
* Veritas backup tool (NetBackup 4.5)

Joint operations with other HPC installations, in the context of federated resources and sharing, are not possible; the system is fully operated and administered by the C-DAC Bangalore team.

ANANTA

System data
• System name: ANANTA
• Short description: ANANTA is the flagship supercomputer of the Council of Scientific and Industrial Research (CSIR), located at the CSIR Fourth Paradigm Institute (CSIR-4PI), Bangalore. The system was listed as the fastest supercomputer in India in the July 2012 edition of India's top supercomputers list, and as the 58th fastest in the world in the June 2012 Top500 list. It is currently the 155th fastest in the world according to the Top500 list published in November 2014.

• Location: CSIR Fourth Paradigm Institute, Bangalore
• Peak performance: 362 TFLOPS
• Achieved HPL performance: 334.7 TFLOPS
• No. of nodes: 1088 (2 x 8-core Intel Xeon E5-2670 each)
• No. of CPU cores: 17408
• RAM: 68 TB
• No. of accelerators/GPUs: nil
• Interconnect: fully non-blocking FDR InfiniBand at 56 Gbps
• Operating system: Linux (RHEL 6.2)
• File system: 2.15 PB Lustre parallel file system

Usage and sharing
• Access policies: This is a CSIR supercomputing facility for the computational scientists of CSIR, India. CSIR currently has 38 national laboratories/institutes spread across India, and computational scientists from these institutions remotely access the facility over India's high-speed National Knowledge Network (NKN).

• Approach to HPC resource sharing with a wider community: this is a dedicated facility for the CSIR research community in India.

• Currently, the facility is being used extensively by the CSIR community, and hence resource sharing with the open community is not feasible.

Top 5 disciplines:

• Computational Chemistry
• Computational Biology
• Engineering Sciences including CFD
• Earth, Ocean, Atmosphere, Climate and Environment
• Cyber Security and Cryptography


Operations: The system is operational on a round-the-clock basis with state-of-the-art monitoring tools and a dedicated on-site operational team. Resources are allocated through PBSPro and their utilization is monitored using the PBS Analytics tools. User registration and authentication are performed using a centralized database located at CSIR-4PI. For day-to-day operation and user support, there is a dedicated help desk operating on a 24x7 basis. The system is currently not federated with any other similar resource in India or abroad; it is a stand-alone system remotely accessible to the CSIR community over the National Knowledge Network.
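Since resources are allocated through PBSPro, work reaches the machine as a batch script submitted with qsub. The following minimal sketch composes and submits such a script; the queue name, resource request and solver binary are illustrative placeholders, not ANANTA's actual configuration:

# Minimal sketch: compose and submit a PBSPro job via qsub.
# Queue name, resource request and solver are illustrative only.
import subprocess
import tempfile

job_script = """#!/bin/bash
#PBS -N demo_job
#PBS -l select=2:ncpus=16:mpiprocs=16
#PBS -l walltime=02:00:00
#PBS -q workq
cd $PBS_O_WORKDIR
mpirun ./my_solver
"""

with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(job_script)
    script_path = f.name

# qsub prints the job ID on success, e.g. "12345.pbsserver".
job_id = subprocess.run(["qsub", script_path],
                        capture_output=True, text=True, check=True).stdout.strip()
print("Submitted:", job_id)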

IIT Kanpur FDR Cluster

System data:
• Name: IIT Kanpur FDR Cluster
• Location: Indian Institute of Technology, Kanpur
• Description: HPC cluster based on HP Cluster Platform SL230s Gen8 nodes in SL6500 chassis; 768 nodes and 15360 cores, with a total memory of 98 TB, RHEL 6.5 operating system, 500 TB of total storage with a DDN-based Lustre file system, and nodes connected by an FDR InfiniBand interconnect.
• Network: FDR InfiniBand interconnect
• Node details: dual ten-core Intel Xeon E5-2670v2 processors at 2.5 GHz, Red Hat Enterprise Linux 6.5 OS, 128 GB RAM
• MPI: Intel MPI
• Compiler: Intel Cluster Studio
• Rmax: 295.25 TFlops
• Rpeak: 307.2 TFlops

Usage and sharing: open to IIT Kanpur members only.

SAGA

System data:
• Location: Vikram Sarabhai Space Centre (VSSC), Thiruvananthapuram, a lead centre of the Indian Space Research Organisation for the development of satellite launch vehicles and associated technologies.
• Description: The SAGA cluster comprises HP SL390 G7 servers and NVIDIA GPUs. It is a heterogeneous system of 320 nodes with Intel Xeon E5530 and Intel Xeon E5645 CPUs, and C2070 and M2090 GPUs. The system has 153600 CUDA cores on NVIDIA Tesla M2090 GPGPUs and 1416 Intel Xeon Westmere-EP cores, along with 202 WIPRO servers with Intel Xeon Westmere and Nehalem cores, each server carrying two NVIDIA C2070 GPGPUs.

• Network: 40 Gbps InfiniBand QDR interconnect
• Nodes: the system has three kinds of nodes:
   i. 185 nodes (WIPRO Z24XX(ii) model servers) with dual quad-core Intel Xeon E5530 CPUs @ 2.4 GHz and dual C2070 GPUs;
   ii. 17 nodes (WIPRO Z24XX(ii) model servers) with dual hexa-core Intel Xeon E5645 CPUs @ 2.4 GHz and dual C2070 GPUs;
   iii. 118 nodes (HP SL390 G7 servers) with dual hexa-core Intel Xeon E5645 CPUs @ 2.4 GHz and dual M2090 GPUs.
• All nodes have 24 GB of memory each.
• Benchmark:
   Name: Run-320, CUDA HPL benchmark
   Nmax: 660000
   Nhalf: 210000
   MPI: OpenMPI 1.4.5
   Compiler: GCC 4.4.1 and NVIDIA CUDA 4.1 with the -fomit-frame-pointer -O3 -funroll-loops -W -Wall options
• Rmax: 188.7 TFlops
• Rpeak: 394.76 TFlops
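For context, the ratio Rmax/Rpeak gives the HPL efficiency of each system. The short sketch below computes it from the figures quoted in this section:

# HPL efficiency (Rmax / Rpeak) for the systems quoted in this section.
systems = {
    "PARAM Yuva II":  (386.71, 529.74),
    "ANANTA":         (334.70, 362.00),
    "IIT Kanpur FDR": (295.25, 307.20),
    "SAGA":           (188.70, 394.76),
}
for name, (rmax, rpeak) in systems.items():
    print(f"{name}: {100 * rmax / rpeak:.1f}% efficiency")
# SAGA's lower ratio (~47.8%) is typical of early GPU-accelerated HPL runs.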

Usage and sharing: This HPC resource is used only by the Vikram Sarabhai Space Centre (VSSC) of the Indian Space Research Organisation (ISRO), Trivandrum, for Computational Fluid Dynamics (CFD) applications.

7.4 Latin America

The institutions that have a common agreement for e-Infrastructures are: CEDIA [26] (Ecuador), CUDI [27] (Mexico), RENATA [28] (Colombia), UNAM [29] (Mexico) and UNIANDES [30] (Colombia), later joined by RedCONARE/CeNAT [31] (Costa Rica) and InnovaRed [32] (Argentina). This agreement led to the launch of the SCALAC [33] (Servicio de Cómputo Avanzado para América Latina y el Caribe) initiative, aiming at providing computing resources and support to research groups in Latin America. Since the start of SCALAC on 1 March 2013, a task force coordinated by UNAM has been working on the interoperation of the e-Infrastructure. Several centres in Europe and LA are supporting SCALAC, among them:

1. CEDIA: Consorcio Ecuatoriano para el Desarrollo de Internet Avanzado, Ecuador
2. CESUP [34]: Centro Nacional de Supercomputação, Brazil
3. CUDI: Corporación Universitaria para el Desarrollo de Internet CUDI, México
4. INNOVA|RED [35]: Red Nacional de Investigación y Educación de Argentina, Argentina

[26] http://www.cedia.org.ec
[27] http://www.cudi.mx
[28] http://www.renata.edu.co
[29] http://www.unam.mx
[30] http://www.uniandes.edu.co/
[31] http://www.cenat.ac.cr/
[32] http://www.innova-red.net/
[33] https://comunidades.redclara.net/wiki/scalac
[34] http://www.cesup.ufrgs.br
[35] http://www.innova-red.net


5. LNCC [36]: Laboratório Nacional de Computação Científica, Brazil
6. RedCLARA [37]: Cooperación Latino Americana de Redes Avanzadas, Internacional
7. CeNAT: Centro Nacional de Alta Tecnología, Costa Rica
8. RENATA: Red Nacional Académica de Tecnología Avanzada, Colombia
9. SC3UIS [38]: Supercomputación y Cálculo Científico UIS, Colombia
10. SINAPAD [39]: Sistema Nacional de Procesamiento de Alto Desempeño a través del Laboratorio Nacional de Computación Científica, Brazil
11. UNAM: Universidad Nacional Autónoma de México, México
12. UNRC [40]: Universidad Nacional de Río Cuarto, Argentina
13. NLHPC [41]: National Laboratory of High Performance Computing, Chile

Another project closely related to computing resources is RISC [42]. The RISC web site defines the project as follows: "The RISC project aims at deepening strategic R&D cooperation between Europe and Latin America in the field of High Performance Computing (HPC) by building a multinational and multi-stakeholder community that will involve a significant representation of the relevant HPC R&D European and Latin American actors (researchers, policy makers, users)." Besides aiming at setting up HPC resources, the targeted research areas of this project may also offer a good source of data for analysis.

7.5 Summary

Overall, the landscaping exercise has demonstrated that no major HPC centres in the regions surveyed are open for cross-border resource sharing. Nevertheless, some basic guidelines for the future possibility of cross-border sharing can be inferred from the peer-review-based approaches of the PRACE, HP-SEE and LinkSCEEM projects.

One exception is the Arab region, where the installations in Jordan and Egypt will be shared internationally via the upcoming VI-SEEM project, following the best practices for common HPC operations established in the HP-SEE and LinkSCEEM projects.

[36] http://www.lncc.br
[37] http://www.redclara.net
[38] http://sc3.uis.edu.co
[39] https://www.lncc.br/sinapad
[40] http://www.unrc.edu.ar
[41] http://www.nlhpc.cl
[42] http://www.risc-project.eu