
Grid computing for CBM at JINR/Dubna


Page 1: Grid computing for CBM                 at JINR/Dubna

Ivanov V.V.

Laboratory of Information Technologies,
Joint Institute for Nuclear Research, Dubna, Russia

CBM Collaboration meeting, GSI, Darmstadt
9-12 March, 2005

Grid computing for CBM at JINR/Dubna

Page 2: Grid computing for CBM                 at JINR/Dubna

The main directions of this activity include:

• Integration and shared use of informational and computational resources, distributed databases and electronic libraries. Realisation of the Dubna-Grid project.

• Use of modern systems for storing and processing large-scale arrays of model and experimental data.

• Possibility of remote participation of experts from the JINR Member States in work at the basic facilities of JINR.

• Joint work on managing corporate networks, including the problems of control, analysis and protection of networks, servers and information.

• Joint mastering and application of Grid technologies for physics experiments and participation in the creation of national Grid segments.

• Joint work on the creation of distributed supercomputer applications.

Page 3: Grid computing for CBM                 at JINR/Dubna

JINR telecommunication links

Page 4: Grid computing for CBM                 at JINR/Dubna

JINR Gigabit Ethernet infrastructure (2003-2004)

Page 5: Grid computing for CBM                 at JINR/Dubna
Page 6: Grid computing for CBM                 at JINR/Dubna

Star-like logical topology of the JINR Gigabit Ethernet backbone: Cisco Catalyst 6509 and Cisco Catalyst 3550 switches at the centre of the core, Cisco Catalyst 3550 switches in 7 JINR divisions (6 Laboratories and the JINR Administration), and a Cisco Catalyst 3750 switch in LIT.


Page 7: Grid computing for CBM                 at JINR/Dubna

In the year 2004:

The network of the Laboratory of Information Technologies remained part of the JINR backbone, while the other 7 JINR divisions were separated from the backbone behind their Catalyst 3550 switches.

Controlled access (a Cisco PIX-525 firewall) was introduced at the entrance to the network.

Page 8: Grid computing for CBM                 at JINR/Dubna

Characteristics of the network:
• High-speed transport structure (1000 Mbit/s);
• Security: controlled access (Cisco PIX-525 firewall) at the entrance to the network;
• Partially isolated local traffic (6 divisions have their own subnetworks with a Cisco Catalyst 3550 as a gateway).

Page 9: Grid computing for CBM                 at JINR/Dubna

Network monitoring: incoming and outgoing traffic distribution

Incoming, total for 2004: 36.1 Tb

Outgoing, total for 2004: 43.64 Tb

Page 10: Grid computing for CBM                 at JINR/Dubna

CCIC JINR: MYRINET cluster, common PC-farm, interactive PC-farm

130 CPU, 17 TB RAID-5:

10 – Interactive & UI

32 – Common PC-farm

30 – LHC

14 – MYRINET (parallel computing)

20 – LCG

24 – servers

Page 11: Grid computing for CBM                 at JINR/Dubna

JINR Central Information and Computing Complex

TOTAL               2005   2006   2007
CPU (kSI2000)        100    660   1000
Disk Space (TB)       50    200    400
Mass Storage (TB)    1.5     50    450

Page 12: Grid computing for CBM                 at JINR/Dubna

Russian regional centre: the DataGrid cloud

[Diagram: RRC-LHC, the LCG Tier1/Tier2 cloud comprising PNPI, IHEP, RRC KI, ITEP, JINR and SINP MSU, connected by Gbit/s links to CERN and FZK; collaborative centres with Tier2 clusters and Grid access.]

Regional connectivity: cloud backbone – Gbit/s; to labs – 100–1000 Mbit/s

Page 13: Grid computing for CBM                 at JINR/Dubna

LCG Grid Operations Centre: LCG-2 Job Submission Monitoring Map

Page 14: Grid computing for CBM                 at JINR/Dubna

LHC Computing Grid Project (LCG)

• LCG Deployment and Operation
• LCG Testsuite
• Castor
• LCG AA – GENSER & MCDB
• ARDA

Page 15: Grid computing for CBM                 at JINR/Dubna

Main results of the LCG project

• Development of the G2G (GoToGrid) system to support the installation and debugging of LCG sites.

• Participation in the development of the CASTOR system: elaboration of an auxiliary module that serves as a garbage collector.

• Development of the database structure, creation of a set of base modules, and development of a Web interface for creating/adding articles to the database (descriptions of event files and related objects): http://mcdb.cern.ch

• Testing the reliability of data transfer over the GridFTP protocol implemented in the Globus Toolkit 3.0 package (see the sketch after this list).

• Testing the EGEE middleware (gLite) components: the Metadata and Fireman catalogues.

• Development of code for continuous WMS (Workload Management System) monitoring of the INFN site gundam.cnaf.infn.it in the testbed of the new EGEE middleware gLite.
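A minimal sketch of how such a GridFTP transfer-reliability test might look is given below, in Python. It assumes the Globus Toolkit command-line client globus-url-copy is installed and a valid grid proxy has been created; the storage-element URL and the local path are placeholders, not those used in the original JINR tests.

#!/usr/bin/env python
"""Repeatedly copy a test file over GridFTP with globus-url-copy and check
that every copy has the same checksum as the first successful one.
The source URL and local destination are hypothetical placeholders."""
import hashlib
import subprocess

SRC_URL = "gsiftp://se.example.jinr.ru/storage/test/sample.dat"  # hypothetical source
DST_FILE = "/tmp/sample.dat"                                     # local destination
N_TRIALS = 100

def md5(path):
    """MD5 hex digest of a local file, read in 1 MB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

reference = None   # checksum of the first successful transfer
failures = 0
for trial in range(N_TRIALS):
    # globus-url-copy <source URL> <destination URL> is the standard GridFTP client
    rc = subprocess.call(["globus-url-copy", SRC_URL, "file://" + DST_FILE])
    if rc != 0:
        failures += 1
        continue
    checksum = md5(DST_FILE)
    if reference is None:
        reference = checksum          # first good copy defines the reference
    elif checksum != reference:
        failures += 1                 # transferred file differs from the reference copy

print("GridFTP reliability: %d of %d transfers failed" % (failures, N_TRIALS))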

Page 16: Grid computing for CBM                 at JINR/Dubna

LCG AA – GENSER & MCDB

• Correct Monte Carlo simulation of complicated processes requires rather sophisticated expertise.
• Different physics groups often need the same MC samples.
• Public availability of the event files speeds up their validation.
• A central and public location where well-documented event files can be found.

The goal of MCDB is to improve the communication between Monte Carlo experts and end-users.

Page 17: Grid computing for CBM                 at JINR/Dubna

Main Features of LCG MCDB

The most important reason to develop LCG MCDB is to remove the restrictions of the CMS MCDB.

• An SQL-based database
• Wide search abilities
• Possibility to keep the events at particle level as well as at partonic level
• Large event file support – storage: CASTOR at CERN
• Direct programming interface from LCG collaboration software
• Inheritance of all the advantages of the predecessor – CMS MCDB
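As an illustration of the kind of metadata such an SQL-based catalogue has to carry, a hedged Python sketch of a record describing one stored event file is given below; the field names and the sample values are purely illustrative and do not reproduce the actual LCG MCDB schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class EventFileRecord:
    """Hypothetical metadata record for one event file in an MCDB-like catalogue."""
    article_id: int                  # key of the documentation article the file belongs to
    title: str                       # short human-readable description
    generator: str                   # e.g. "PYTHIA" or "HIJING"
    level: str                       # "particle" or "parton" level events
    n_events: int                    # number of events in the file
    castor_path: str                 # location of the (possibly large) file on CASTOR
    authors: List[str] = field(default_factory=list)
    keywords: List[str] = field(default_factory=list)   # feeds the search interface

# Example of what one catalogue entry might look like (values are made up):
sample = EventFileRecord(
    article_id=42,
    title="Minimum-bias Au+Au events at 25 AGeV (illustrative)",
    generator="HIJING",
    level="particle",
    n_events=100000,
    castor_path="/castor/cern.ch/mcdb/illustrative/events.dat",
)
print(sample.generator, sample.n_events, sample.castor_path)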

Page 18: Grid computing for CBM                 at JINR/Dubna

MCDB Web Interface: http://mcdb.cern.ch

Only Mozilla Browser Supported (for the time being)

Page 19: Grid computing for CBM                 at JINR/Dubna

High Energy Physics WEB at LIT

Idea: create a server with WEB access to the computing resources of LIT for Monte Carlo simulations, mathematical support, etc.

Goals:

• Provide physicists with informational and mathematical support;
• Monte Carlo simulations at the server;
• Provide physicists with new calculation/simulation tools;
• Create a copy of GENSER of the LHC Computing GRID project;
• Introduce young physicists into the HEP world.

HepWeb.jinr.ru will include FRITIOF, HIJING, the Glauber approximation, the Reggeon approximation, …

HIJING Web Interface
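To make the idea of WEB access to the generators concrete, below is a hedged Python sketch of the kind of server-side handler such an interface needs: it reads generator parameters from an HTTP request and launches a simulation job. The parameter names and the ./run_hijing wrapper script are placeholders, not the actual HepWeb.jinr.ru implementation.

#!/usr/bin/env python
"""Sketch of a web front end that launches a Monte Carlo job on the server.
The request layout and the wrapper script name are hypothetical."""
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class SimulationHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical request: /run?projectile=Au&target=Au&energy=25&nevents=1000
        query = parse_qs(urlparse(self.path).query)
        projectile = query.get("projectile", ["Au"])[0]
        target = query.get("target", ["Au"])[0]
        energy = query.get("energy", ["25"])[0]        # GeV per nucleon
        nevents = query.get("nevents", ["1000"])[0]

        try:
            # Launch the generator as a background job via a placeholder wrapper script.
            job = subprocess.Popen(["./run_hijing", projectile, target, energy, nevents])
            message = "Submitted HIJING job (pid %d): %s+%s at %s GeV, %s events\n" % (
                job.pid, projectile, target, energy, nevents)
            status = 200
        except OSError as err:
            message = "Could not start the generator: %s\n" % err
            status = 500

        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(message.encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), SimulationHandler).serve_forever()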

Page 20: Grid computing for CBM                 at JINR/Dubna

Fixed bug in the HIJING Monte Carlo model: secures energy conservation.

V.V. Uzhinsky (LIT)

Page 21: Grid computing for CBM                 at JINR/Dubna

• G2G is a web-based tool to support the generic installation and configuration of (LCG) grid middleware
  – The server runs at CERN
  – Relevant site-dependent configuration information is stored in a database
  – It provides added-value tools, configuration files and documentation to install a site manually (or by a third-party fabric management tool)
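The workflow this enables can be pictured with the following hedged Python sketch: fetch the site-dependent configuration kept in the central database and render a local configuration file from it. The URL, the JSON layout and the output file name are hypothetical illustrations, not the actual G2G interface.

#!/usr/bin/env python
"""Sketch of fetching a centrally stored site profile and turning it into a
local configuration file; every name below is a placeholder."""
import json
from urllib.request import urlopen

SITE = "JINR-LCG2"                                               # site name (illustrative)
PROFILE_URL = "https://g2g.example.cern.ch/api/site/%s" % SITE   # placeholder URL

def fetch_profile(url):
    """Download the site profile (assumed here to be a JSON document)."""
    with urlopen(url) as resp:
        return json.load(resp)

def render_node_config(profile, node_type):
    """Merge site-wide settings with the parameters of one node type."""
    common = profile["common"]                 # e.g. BDII host, mail contact
    params = profile["nodes"][node_type]       # e.g. "CE", "SE", "WN"
    merged = {**common, **params}
    return "\n".join("%s=%s" % (k, v) for k, v in sorted(merged.items())) + "\n"

if __name__ == "__main__":
    profile = fetch_profile(PROFILE_URL)
    with open("site-config-CE.txt", "w") as out:   # hypothetical output file
        out.write(render_node_config(profile, "CE"))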

Page 22: Grid computing for CBM                 at JINR/Dubna

• G2G features are thought to be useful for ALL sites …
  – First level assistance and hints (Grid Assistant)
  – Site profile editing tool
• … for small sites …
  – Customized tools to make manual installation easier
• … for large sites …
  – Documentation to configure fabric management tools
• … and for us (support sites)
  – Centralized repository to query for site configuration

Page 23: Grid computing for CBM                 at JINR/Dubna

Deployment Strategy

[Diagram: G2G and MIG deployment across the node types – Worker Node, User Interface, Computing Element, Classical Storage Element, Resource Broker, LCG-BDII, Proxy, Mon Box – for the current LCG release (LCG-2_2_0) and the next LCG release.]

Page 24: Grid computing for CBM                 at JINR/Dubna

EGEE (Enabling Grids for E-sciencE)

Participation in the EGEE (Enabling Grids for E-sciencE) project together with 7 Russian scientific centres: creation of infrastructure for application of Grid technologies on a petabyte scale. The JINR group activity includes the following main directions:

SA1 - European Grid Operations, Support and Management

NA2 – Dissemination and Outreach

NA3 – User Training and Induction

NA4 - Application Identification and Support

Page 25: Grid computing for CBM                 at JINR/Dubna

Russian Data Intensive GRID (RDIG) Consortium – EGEE Federation

Eight Institutes made up the consortium RDIG (Russian Data Intensive GRID) as a national federation in the EGEE project. They are: IHEP - Institute of High Energy Physics (Protvino), IMPB RAS - Institute of Mathematical Problems in Biology (Pushchino), ITEP - Institute of Theoretical and Experimental Physics (Moscow), JINR - Joint Institute for Nuclear Research (Dubna), KIAM RAS - Keldysh Institute of Applied Mathematics (Moscow), PNPI - Petersburg Nuclear Physics Institute (Gatchina), RRC KI - Russian Research Center “Kurchatov Institute” (Moscow), SINP-MSU - Skobeltsyn Institute of Nuclear Physics (MSU, Moscow).

Page 26: Grid computing for CBM                 at JINR/Dubna

LCG/EGEE Infrastructure

• The LCG/EGEE infrastructure has been created; it comprises managing servers and 10 two-processor computing nodes.

• Software for experiments CMS, ATLAS, ALICE and LHCb has been installed and tested.

• Participation in mass simulation sessions for these experiments.

• A server has been installed for monitoring Russian LCG sites based on the MonALISA system.

• Research on the possibilities of other systems (GridICE, MapCenter).

Page 27: Grid computing for CBM                 at JINR/Dubna

Participation in DC04

Production in the framework of the Data Challenges (DCs) was accomplished at the local JINR LHC and LCG-2 farms:

CMS: 150 000 events (350 GB); 0.5 TB of data on B-physics was downloaded to the CCIC for analysis; the JINR contribution to CMS DC04 was at a level of 0.3%.

ALICE: the JINR contribution to ALICE DC04 was at a level of 1.4% of the total number of successfully completed AliEn jobs.

LHCb: the JINR contribution to LHCb DC04 – 0.5%.

Page 28: Grid computing for CBM                 at JINR/Dubna
Page 29: Grid computing for CBM                 at JINR/Dubna

Dubna educational and scientific network Dubna-Grid Project (2004)

More than 1000 CPU

Participants: Laboratory of Information Technologies, JINR; University "Dubna"; Directorate of the programme for the development of the science city Dubna; University of Chicago, USA; University of Lund, Sweden.

Creation of a Grid testbed on the basis of the resources of Dubna scientific and educational establishments, in particular JINR Laboratories, the International University "Dubna", secondary schools and other organizations.

Page 30: Grid computing for CBM                 at JINR/Dubna

City high-speed network

The 1 Gbps city high-speed network was built on the basis of a single-mode fiber optic cable with a total length of almost 50 km. The number of network computers in the educational organizations exceeds 500 easily administered units.

Page 31: Grid computing for CBM                 at JINR/Dubna

Network of the University "Dubna"

The computer network of the University "Dubna" links, by means of a backbone fiber optic highway, the computer networks of the buildings housing the university complex. Three server centres maintain the applications and services of computer classes, departments and university subdivisions, as well as the computer classes of secondary schools. The total number of PCs exceeds 500.

Page 32: Grid computing for CBM                 at JINR/Dubna

Concluding remarks

• The JINR/Dubna Grid segment and personnel are well prepared to be effectively involved in the CBM experiment MC simulation and data analysis activity.

• Working group: prepare a proposal on a common JINR-GSI-Bergen Grid activity for the CBM experiment.

• Proposal: to be presented at the CBM Collaboration meeting in September.