Creating a Sustainable Cycle of Innovation
Global Virtual Organizations for Data Intensive Science
Harvey B Newman, Caltech
WSIS Pan European Regional Ministerial Conference, Bucharest, November 7-9 2002
Challenges of Data Intensive Science and Global VOs
- Geographical dispersion: of people and resources (5000+ physicists, 250+ institutes, 60+ countries)
- Scale: tens of Petabytes per year of data
- Complexity: scientific instruments and information
Major challenges associated with:
- Communication and collaboration at a distance
- Managing globally distributed computing & data resources
- Cooperative software development and physics analysis
- New forms of distributed systems: Data Grids
Emerging Data Grid User Communities
- Grid physics projects (GriPhyN/iVDGL/EDG): ATLAS, CMS, LIGO, SDSS; BaBar/D0/CDF
- NSF Network for Earthquake Engineering Simulation (NEES): integrated instrumentation, collaboration, simulation
- Access Grid; VRVS: supporting new modes of group-based collaboration
And:
- Genomics, Proteomics, ...
- The Earth System Grid and EOSDIS
- Federating Brain Data
- Computed MicroTomography
- ... Virtual Observatories
Grids are having a global impact on research in science & engineering
Global Networks for HENP and Data Intensive Science
- National and international networks, with sufficient capacity and capability, are essential today for:
  - The daily conduct of collaborative work in both experiment and theory
  - Data analysis by physicists from all world regions
  - The conception, design and implementation of next generation facilities, as "global (Grid) networks"
- "Collaborations on this scale would never have been attempted, if they could not rely on excellent networks" - L. Price, ANL
- Grids require seamless network systems with known, high performance
[Chart: data volume growth vs. Moore's law]
High Speed Bulk Throughput: BaBar Example [and LHC]
Driven by:
- HENP data rates, e.g. BaBar ~500 TB/year
- Data rate from experiment >20 MBytes/s [5-75 times more at LHC]
- Grid of multiple regional computer centers (e.g. Lyon-FR, RAL-UK, INFN-IT; in CA: LBNL, LLNL, Caltech) need copies of data
- Need high-speed networks, and the ability to utilize them fully
- High speed today = 1 TB/day (~100 Mbps full time)
- Develop 10-100 TB/day capability (several Gbps full time) within the next 1-2 years
Data volumes more than doubling each year, driving Grid and network needs
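The "1 TB/day ≈ 100 Mbps full time" and "10-100 TB/day ≈ several Gbps" figures above are plain unit conversions; a quick sketch (decimal units assumed):

```python
def sustained_rate_mbps(tb_per_day: float) -> float:
    """Average rate in Mbps needed to move tb_per_day terabytes in 24 h."""
    bits = tb_per_day * 1e12 * 8      # terabytes -> bits (decimal units)
    seconds = 24 * 3600               # one day
    return bits / seconds / 1e6       # -> megabits per second

print(round(sustained_rate_mbps(1)))              # ~93 Mbps: "1 TB/day ~ 100 Mbps full time"
print(round(sustained_rate_mbps(10) / 1000, 2))   # 10 TB/day  -> ~0.93 Gbps
print(round(sustained_rate_mbps(100) / 1000, 2))  # 100 TB/day -> ~9.26 Gbps
```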
HENP Major Links: Bandwidth Roadmap (Scenario), in Gbps

Year | Production           | Experimental           | Remarks
2001 | 0.155                | 0.622-2.5              | SONET/SDH
2002 | 0.622                | 2.5                    | SONET/SDH; DWDM; GigE integration
2003 | 2.5                  | 10                     | DWDM; 1 + 10 GigE integration
2005 | 10                   | 2-4 X 10               | λ switch; λ provisioning
2007 | 2-4 X 10             | ~10 X 10; 40 Gbps      | 1st gen. λ Grids
2009 | ~10 X 10 or 1-2 X 40 | ~5 X 40 or ~20-50 X 10 | 40 Gbps λ switching
2011 | ~5 X 40 or ~20 X 10  | ~25 X 40 or ~100 X 10  | 2nd gen. λ Grids; Terabit networks
2013 | ~Terabit             | ~MultiTbps             | ~Fill one fiber

Continuing the trend: ~1000 times bandwidth growth per decade; we are rapidly learning to use and share multi-Gbps networks
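The "~1000 times bandwidth growth per decade" trend amounts to roughly a doubling per year, which is consistent with the production column of the roadmap above; a quick check:

```python
# Compound annual growth implied by the roadmap's production column:
# 0.155 Gbps in 2001 to ~Terabit (~1000 Gbps) scale in 2013, i.e. 12 years.
growth_per_year = (1000 / 0.155) ** (1 / 12)
print(round(growth_per_year, 2))   # ~2.08x per year

# ~2x per year compounds to ~1000x per decade:
print(2 ** 10)                     # 1024
```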
AMS-IX Internet Exchange Throughput: Accelerating Growth in Europe (NL)
- Monthly traffic: 4X growth in 14 months (8/01 - 10/02)
[Chart: hourly traffic on 11/02/02, on a 0-10 Gbps scale]
HENP & world BW growth: 3-4 times per year; 2 to 3 times Moore's law
National Light Rail Footprint
[Map: fiber routes and 15808 terminal, regen or OADM sites across ~24 US cities, including SEA, POR, SAC, SVL, LAX, SDG, OGD, DEN, PHO, KAN, DAL, CHI, CLE, PIT, ATL, NYC, BOS and WDC]
- NLR buildout starts November 2002; initially 4 10 Gb wavelengths; to 40 10 Gb waves in future
NREN backbones reached 2.5-10 Gbps in 2002 in Europe, Japan and US; US: transition now to optical, dark fiber, multi-wavelength R&E networks
Distributed System Services Architecture (DSSA): CIT/Romania/Pakistan
- Agents: autonomous, auto-discovering, self-organizing, collaborative
- "Station Servers" (static) host mobile "Dynamic Services"
- Servers interconnect dynamically; form a robust fabric in which mobile agents travel, with a payload of (analysis) tasks
- Adaptable to Web services: OGSA; and many platforms
- Adaptable to ubiquitous, mobile working environments
[Diagram: Station Servers interconnected with each other and with Lookup Services, via registration, proxy exchange, service listeners, lookup discovery, and remote notification]
Managing Global Systems of Increasing Scope and Complexity, In the Service of Science and Society, Requires A New Generation of Scalable, Autonomous, Artificially Intelligent Software Systems
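The station-server/lookup pattern described above (Jini-style registration and discovery) can be illustrated with a toy registry. This is a hypothetical sketch: the class and method names (LookupService, StationServer, deploy) are illustrative, not the project's actual API:

```python
# Toy sketch of Jini-style registration and discovery, as in the DSSA
# description above. All names are illustrative, not the real DSSA API.
class LookupService:
    def __init__(self):
        self._registry = {}               # service name -> provider

    def register(self, name, provider):   # "Registration"
        self._registry[name] = provider

    def discover(self, name):             # "Lookup Discovery"
        return self._registry.get(name)


class StationServer:
    """Static host for mobile 'Dynamic Services' (agent payloads)."""
    def __init__(self, name, lookup):
        self.name = name
        self.services = {}
        lookup.register(name, self)       # announce itself to the fabric

    def deploy(self, service_name, task):
        self.services[service_name] = task

    def run(self, service_name):
        return self.services[service_name]()


lookup = LookupService()
a = StationServer("station-A", lookup)
a.deploy("analysis", lambda: "histograms")   # mobile agent's analysis task
peer = lookup.discover("station-A")          # another server finds A dynamically
print(peer.run("analysis"))                  # -> histograms
```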
MonALISA: A Globally Scalable Grid Monitoring System
- By I. Legrand (Caltech); deployed on the US CMS Grid
- Agent-based dynamic information / resource discovery mechanism
- Implemented in Java/Jini; SNMP; WSDL / SOAP with UDDI
- Part of a global "Grid Control Room" service
- http://cil.cern.ch:8080/MONALISA/
History: Throughput Quality Improvements from US to World
- Bandwidth of TCP < MSS / (RTT * sqrt(Loss))
- 80% annual improvement: a factor of ~100 over 8 years
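The bound quoted above is the well-known Mathis et al. approximation for TCP throughput. A sketch evaluating it; the MSS, RTT and loss values are illustrative assumptions, not figures from the slide:

```python
from math import sqrt

def tcp_throughput_mbps(mss_bytes: float, rtt_s: float, loss: float) -> float:
    """Mathis et al. bound: BW < MSS / (RTT * sqrt(loss)), returned in Mbps."""
    return mss_bytes * 8 / (rtt_s * sqrt(loss)) / 1e6

# Transatlantic-style example (assumed values): 1460-byte MSS, 120 ms RTT.
print(round(tcp_throughput_mbps(1460, 0.120, 1e-4), 1))  # ~9.7 Mbps at 0.01% loss
print(round(tcp_throughput_mbps(1460, 0.120, 1e-6), 1))  # ~97.3 Mbps at 0.0001% loss
```

The sqrt(loss) dependence is why the slide stresses network quality: cutting packet loss by a factor of 100 raises achievable single-stream throughput by a factor of 10.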
Progress, but the Digital Divide is Maintained: Action is Required
NREN Core Network Size (Mbps-km): http://www.terena.nl/compendium/2002
[Chart, logarithmic scale from 1k to 100M Mbps-km, grouping European NRENs as Lagging, In Transition, Leading, or Advanced; countries shown include Ro, It, Pl, Gr, Ir, Ukr, Hu, Cz, Es, Nl, Fi, Ch]
Perspectives on the Digital Divide: Int'l, Local, Regional, Political
Building Petascale Global Grids: Implications for Society
- Meeting the challenges of Petabyte-to-Exabyte Grids, and Gigabit-to-Terabit networks, will transform research in science and engineering
- These developments could create the first truly global virtual organizations (GVO)
- If these developments are successful, and deployed widely as standards, this could lead to profound advances in industry, commerce and society at large, by changing the relationship between people and "persistent" information in their daily lives, within the next five to ten years
- Realizing the benefits of these developments for society, and creating a sustainable cycle of innovation, compels us TO CLOSE the DIGITAL DIVIDE
Recommendations
To realize the vision of Global Grids, governments, international institutions and funding agencies should:
- Define international IT policies (for instance AAA)
- Support establishment of international standards
- Provide adequate funding to continue R&D in Grid and network technologies
- Deploy international production Grid and advanced network testbeds on a global scale
- Support education and training in Grid & network technologies for new communities of users
- Create open policies, and encourage joint development programs, to help close the Digital Divide
The WSIS RO meeting, starting today, is an important step in the right direction
Some Extra Slides Follow
IEEAF: Internet Educational Equal Access Foundation; Bandwidth Donations for Research and Education
Next Generation Requirements for Physics Experiments
- Rapid access to event samples and analyzed results drawn from massive data stores: from Petabytes in 2002, ~100 Petabytes by 2007, to ~1 Exabyte by ~2012
- Coordinating and managing the large but LIMITED computing, data and network resources effectively
- Persistent access for physicists throughout the world, for collaborative work
Grid Reliance on Networks
- Advanced applications such as Data Grids rely on seamless operation of Local and Wide Area Networks, with reliable, quantifiable high performance
Networks, Grids and HENP
- Grids are changing the way we do science and engineering
- Next generation 10 Gbps network backbones are here: in the US, Europe and Japan; across oceans
- Optical nets with many 10 Gbps wavelengths will follow
- Removing regional and last mile bottlenecks, and compromises in network quality, are now all on the critical path
- Network improvements are especially needed in SE Europe, So. America, and many other regions: Romania; India, Pakistan, China; Brazil, Chile; Africa
- Realizing the promise of Network & Grid technologies means:
  - Building a new generation of high performance network tools, and artificially intelligent scalable software systems
  - Strong regional and inter-regional funding initiatives to support these ground-breaking developments
Closing the Digital Divide: What HENP and the World Community Can Do
- Spread the message: ICFA SCIC, IEEAF et al. can help
- Help identify and highlight specific needs (to work on): policy problems; last mile problems; etc.
- Encourage joint programs [Virtual Silk Road project; Japanese links to SE Asia and China; AMPATH to So. America]
- NSF & LIS proposals: US and EU to South America
- Make direct contacts, arrange discussions with gov't officials; ICFA SCIC is prepared to participate where appropriate
- Help start, and get support for, workshops on Networks & Grids
- Encourage, and help form, funded programs
- Help form regional support & training groups [requires funding]
LHC Data Grid Hierarchy
- Experiment / Online System: ~PByte/sec of event data
- Tier 0 (+1) at CERN: 700k SI95; ~1 PB disk; tape robot; ~100-400 MBytes/sec from the online system
- Tier 1 centers (FNAL: 200k SI95, 600 TB; IN2P3, INFN and RAL centers): 2.5-10 Gbps links to CERN
- Tier 2 centers: ~2.5 Gbps links to Tier 1
- Tier 3: institutes (~0.25 TIPS each), with physics data caches; 0.1-10 Gbps links
- Tier 4: workstations
- Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels
- CERN/outside resource ratio ~1:2; Tier0 : (sum of Tier1) : (sum of Tier2) ~ 1:1:1
[email protected] ARGONNE CHICAGO
Why Grids?
- 1,000 physicists worldwide pool resources for petaop analyses of petabytes of data
- A biochemist exploits 10,000 computers to screen 100,000 compounds in an hour
- Civil engineers collaborate to design, execute, & analyze shake table experiments
- Climate scientists visualize, annotate, & analyze terabyte simulation datasets
- An emergency response team couples real time data, weather model, population data
Why Grids? (contd)
- Scientists at a multinational company collaborate on the design of a new product
- A multidisciplinary analysis in aerospace couples code and data in four companies
- An HMO mines data from its member hospitals for fraud detection
- An application service provider offloads excess load to a compute cycle provider
- An enterprise configures internal & external resources to support e-business workload
Grids: Why Now?
- Moore's law improvements in computing produce highly functional end systems
- The Internet and burgeoning wired and wireless infrastructure provide universal connectivity
- Changing modes of working and problem solving emphasize teamwork and computation
- Network exponentials produce dramatic changes in geometry and geography: 9-month doubling, double Moore's law! 1986-2001: x340,000; 2001-2010: x4000?
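The "9-month doubling: double Moore's law" claim above is easy to check with compound-growth arithmetic:

```python
def growth_factor(years: float, doubling_months: float) -> float:
    """Total growth after `years`, given a fixed doubling time in months."""
    return 2 ** (years * 12 / doubling_months)

# Network capability, 9-month doubling time:
print(f"{growth_factor(15, 9):,.0f}")   # 1986-2001 (15 yr): 1,048,576 (slide cites x340,000)
print(f"{growth_factor(9, 9):,.0f}")    # 2001-2010 (9 yr):  4,096 (slide cites x4000)

# Moore's law, ~18-month doubling, over the same 15 years for comparison:
print(f"{growth_factor(15, 18):,.0f}")  # 1,024
```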
A Short List: Revolutions in Information Technology (2002-7)
- Scalable data-intensive metro and long haul network technologies:
  - DWDM: 10 Gbps then 40 Gbps per λ; 1 to 10 Terabits/sec per fiber
  - 10 Gigabit Ethernet (see www.10gea.org); 10GbE / 10 Gbps LAN/WAN integration
  - Metro buildout and optical cross connects
  - Dynamic provisioning; dynamic path building: "Lambda Grids"
- Defeating the "Last Mile" problem (wireless; or Ethernet in the First Mile):
  - 3G and 4G wireless broadband (from ca. 2003); and/or fixed wireless "hotspots"
  - Fiber to the home
  - Community-owned networks
Grid Architecture
- Fabric: "Controlling things locally": access to, & control of, resources
- Connectivity: "Talking to things": communication (Internet protocols) & security
- Resource: "Sharing single resources": negotiating access, controlling use
- Collective: "Coordinating multiple resources": ubiquitous infrastructure services, app-specific distributed services
- Application
[Diagram: the Grid layers (Fabric, Connectivity, Resource, Collective, Application) shown alongside the Internet Protocol Architecture (Link, Internet, Transport, Application)]
More info: www.globus.org/research/papers/anatomy.pdf
LHC Distributed CM: HENP Data Grids Versus Classical Grids
- Grid projects have been a step forward for HEP and LHC: a path to meet the "LHC Computing" challenges
- But: the differences between HENP Grids and classical Grids are not yet fully appreciated
- The original Computational and Data Grid concepts are largely stateless, open systems: known to be scalable; analogous to the Web
- The classical Grid architecture has a number of implicit assumptions:
  - The ability to locate and schedule suitable resources within a tolerably short time (i.e. resource richness)
  - Short transactions; relatively simple failure modes
- HEP Grids are data-intensive and resource constrained:
  - Long transactions; some long queues
  - Schedule conflicts; [policy decisions]; task redirection
  - A lot of global system state to be monitored and tracked
Upcoming Grid Challenges: Building a Globally Managed Distributed System
- Maintaining a Global View of resources and system state:
  - End-to-end system monitoring
  - Adaptive learning: new paradigms for execution optimization (eventually automated)
- Workflow management, balancing policy versus moment-to-moment capability to complete tasks:
  - Balance high levels of usage of limited resources against better turnaround times for priority jobs
  - Goal-oriented; steering requests according to (yet to be developed) metrics
- Robust Grid transactions in a multi-user environment:
  - Realtime error detection, recovery
- Handling user-Grid interactions: guidelines; agents
- Building higher level services, and an integrated user environment for the above
Interfacing to the Grid: Above the Collective Layer
- (Physicists') Application Codes
- Experiments' Software Framework Layer: needs to be modular and Grid-aware, with an architecture able to interact effectively with the Grid layers
- Grid Applications Layer (parameters and algorithms that govern system operations):
  - Policy and priority metrics
  - Workflow evaluation metrics
  - Task-site coupling proximity metrics
- Global End-to-End System Services Layer:
  - Monitoring and tracking component performance
  - Workflow monitoring and evaluation mechanisms
  - Error recovery and redirection mechanisms
  - System self-monitoring, evaluation and optimization mechanisms
DataTAG Project
[Map: 2.5 Gbps wavelength triangle linking Geneva, Amsterdam (NL SURFnet) and New York (STAR-TAP / STARLIGHT), with connections to UK SuperJANET4, Fr Renater, It GARR-B, GEANT, Abilene, ESnet and CALREN]
- EU-solicited project: CERN, PPARC (UK), Amsterdam (NL), and INFN (IT); and US (DOE/NSF: UIC, NWU and Caltech) partners
- Main aims: ensure maximum interoperability between US and EU Grid projects; transatlantic testbed for advanced network research
- 2.5 Gbps wavelength triangle 7/02 (10 Gbps triangle in 2003)
TeraGrid (www.teragrid.org): NCSA, ANL, SDSC, Caltech
[Map: TeraGrid sites (NCSA/UIUC, ANL, SDSC/San Diego, Caltech) linked via Starlight/NW Univ, UIC, Ill Inst of Tech, Univ of Chicago, Indianapolis (Abilene NOC), multiple carrier hubs and I-WIRE (Chicago, Indianapolis, Urbana); links: OC-48 (2.5 Gb/s, Abilene), multiple 10 GbE (Qwest), multiple 10 GbE (I-WIRE dark fiber)]
- DTF backplane: 4 X 10 Gbps
- Source: Charlie Catlett, Argonne
A preview of the Grid hierarchy and networks of the LHC era
Baseline BW for the US-CERN Link: HENP Transatlantic WG (DOE+NSF)
- DataTAG 2.5 Gbps research link in Summer 2002
- 10 Gbps research link by approx. mid-2003
- Transoceanic networking integrated with the Abilene, TeraGrid, regional nets and continental network infrastructures in US, Europe, Asia, South America
- Baseline evolution typical of major HENP links 2001-2006
HENP As a Driver of Networks: Petascale Grids with TB Transactions
- Problem: extract "small" data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores
- Survivability of the HENP global Grid system, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time
- Example: take 800 secs to complete the transaction. Then:

  Transaction Size (TB) | Net Throughput (Gbps)
  1                     | 10
  10                    | 100
  100                   | 1000 (capacity of fiber today)

- Summary: providing switching of 10 Gbps wavelengths within ~3 years, and Terabit switching within 5-8 years, would enable "Petascale Grids with Terabyte transactions", as required to fully realize the discovery potential of major HENP programs, as well as other data-intensive fields
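The throughput column above follows directly from transaction size divided by the 800-second completion target; a quick sketch:

```python
def required_gbps(terabytes: float, seconds: float = 800.0) -> float:
    """Throughput (Gbps) needed to move `terabytes` (decimal TB) in `seconds`."""
    return terabytes * 1e12 * 8 / seconds / 1e9

for tb in (1, 10, 100):
    print(tb, "TB in 800 s ->", required_gbps(tb), "Gbps")
# 1 TB -> 10.0, 10 TB -> 100.0, 100 TB -> 1000.0 Gbps
```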
National Research Networks in Japan
- SuperSINET: started operation January 4, 2002; supports 5 important areas: HEP, Genetics, Nano-Technology, Space/Astronomy, GRIDs
- Provides 10 λ's: 10 Gbps IP connection; 7 direct intersite GbE links; some connections to 10 GbE in JFY2002
- HEPnet-J: will be re-constructed with MPLS-VPN in SuperSINET
- Proposal: two TransPacific 2.5 Gbps wavelengths, and a Japan-CERN Grid testbed by ~2003
[Map: SuperSINET sites (Tokyo, Osaka, Nagoya; Osaka U, Kyoto U, ICR Kyoto-U, Nagoya U, NIFS, NIG, KEK, Tohoku U, IMS, U-Tokyo, NAO, NII Hitot., NII Chiba, ISAS) showing IP routers, WDM paths and OXCs, plus Internet connectivity]
National R&E Network Example. Germany: DFN Transatlantic Connectivity, Q1 2002
- 2 X 2.5G now: NY-Hamburg and NY-Frankfurt (STM 4 and STM 16 links)
- ESnet peering at 34 Mbps
- Direct peering to Abilene and CANARIE expected
- UCAID will add another 2 OC48's; proposing a Global Terabit Research Network (GTRN)
- FSU connections via satellite: Yerevan, Minsk, Almaty, Baikal; speeds of 32-512 kbps
- SILK Project (2002): NATO funding; links to Caucasus and Central Asia (8 countries); currently 64-512 kbps; propose VSAT for 10-50 X BW: NATO + state funding
Modeling and Simulation: MONARC System
- The simulation program developed within MONARC (Models Of Networked Analysis At Regional Centers) uses a process-oriented approach for discrete event simulation, and provides a realistic modelling tool for large scale distributed systems
- Simulation of complex distributed systems for LHC
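The process-oriented discrete event approach MONARC describes can be sketched with a minimal event queue. This is a generic DES skeleton under assumed inputs (jobs as arrival/duration pairs, a fixed-size CPU farm), not MONARC's actual code:

```python
import heapq

def simulate(jobs, n_cpus):
    """Minimal discrete event simulation of a CPU farm.
    jobs = [(arrival_time, duration), ...]; returns each job's finish time,
    in order of arrival, on a farm of n_cpus CPUs."""
    free_at = [0] * n_cpus            # next free time of each CPU (a min-heap)
    heapq.heapify(free_at)
    finish_times = []
    for arrival, duration in sorted(jobs):
        start = max(arrival, heapq.heappop(free_at))  # earliest available CPU
        finish = start + duration
        heapq.heappush(free_at, finish)               # CPU busy until `finish`
        finish_times.append(finish)
    return finish_times

# Two CPUs, three jobs: the third job must wait for a CPU to free up.
print(simulate([(0, 5), (0, 3), (1, 4)], n_cpus=2))   # -> [3, 5, 7]
```

Real MONARC models add far more state (networks, data stores, queue policies), but the event-queue core is the same idea.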
Globally Scalable Monitoring Service (I. Legrand)
[Diagram: Farm Monitors register with Lookup Services; a client (or other service) finds them via discovery and a proxy; components include a component factory, GUI marshaling, code transport, RMI data access, push & pull via rsh & ssh scripts and snmp, and an RC Monitor Service]
MONARC SONN: 3 Regional Centres Learning to Export Jobs (by I. Legrand)
[Simulation snapshot at day 9: NUST (20 CPUs), CERN (30 CPUs) and Caltech (25 CPUs), linked at 0.8-1.2 MB/s with 150-200 ms RTT; mean efficiencies <E> = 0.73, 0.66 and 0.83]
COJAC: CMS ORCA Java Analysis Component (Java3D, Objectivity, JNI, Web Services)
- Demonstrated Caltech-Rio de Janeiro (Feb.) and Chile
Internet2 HENP WG [*]
- Mission: to help ensure that the required
  - National and international network infrastructures (end-to-end),
  - Standardized tools and facilities for high performance and end-to-end monitoring and tracking [GridFTP; bbcp...], and
  - Collaborative systems
  are developed and deployed in a timely manner, and used effectively to meet the needs of the US LHC and other major HENP programs, as well as the at-large scientific community
- To carry out these developments in a way that is broadly applicable across many fields
- Formed an Internet2 WG as a suitable framework: October 2001
[*] Co-Chairs: S. McKee (Michigan), H. Newman (Caltech); Sec'y J. Williams (Indiana)
Website: http://www.internet2.edu/henp; also see the Internet2 End-to-end Initiative: http://www.internet2.edu/e2e
Bucharest MAN for Ro-Grid
[Diagram: 1G metro ring with a 1G backup link, through Palat Telefoane, Romana, Victoriei, Gara de Nord, Eroilor, Izvor, Universitate, Unirii and the NOC; equipment includes Cat3550-24-L3 switches, C7206 and C7513 routers with Gigabit, and a Cat4000 L3 switch; ICI and IFIN connect at 100 Mbps and 10/100/1000 Mbps]
RoEdu Network (December 1, 2002)
[Map: national backbone linking Bucureşti with Timişoara, Cluj, Iaşi, Craiova, Tg-Mureş and Galaţi over 2 Mbps (backup), 8 Mbps and 34 Mbps links; 2 Mbps POPs in the remaining county seats (Satu Mare, Baia Mare, Oradea, Zalău, Arad, Reşiţa, Tr. Severin, Tîrgu Jiu, Rm. Vîlcea, Piteşti, Alexandria, Slatina, Alba Iulia, Sibiu, Miercurea Ciuc, Sf. Gheorghe, Braşov, Bistriţa, Suceava, Botoşani, Piatra Neamţ, Bacău, Vaslui, Focşani, Buzău, Ploieşti, Slobozia, Constanţa, Călăraşi, Giurgiu, Tîrgovişte, Brăila, Tulcea, Hunedoara); 155 Mbps GEANT connection]