1
The Advanced Networks and Services Underpinning the Large-Scale Science of DOE's Office of Science
The Evolution of Production Networks Over the Next 10 Years to Support Large-Scale International Science
An ESnet View
William E. Johnston, [email protected], ESnet Manager and Senior Scientist
Lawrence Berkeley National Laboratory
www.es.net
2
DOE Office of Science Drivers for Networking
• The DOE Office of Science supports more than 40% of all US R&D in high-energy physics, nuclear physics, and fusion energy sciences (http://www.science.doe.gov)
• This large-scale science that is the mission of the Office of Science depends on high-speed networks for
o Sharing of massive amounts of data
o Supporting thousands of collaborators world-wide
o Distributed data processing
o Distributed simulation, visualization, and computational steering
o Distributed data management
• The role of ESnet is to provide networking that supports and anticipates these uses for the Office of Science Labs and their collaborators
• The issues were explored in two Office of Science workshops that formulated networking requirements to meet the needs of the science programs (see refs.)
3
Increasing Large-Scale Science Collaboration is Reflected in Network Usage
• As of May 2005, ESnet is transporting about 530 Terabytes/mo.
• ESnet traffic has increased by 10X every 46 months, on average, since 1990
(Chart: ESnet Monthly Accepted Traffic, Feb. 1990 – May 2005, in TBytes/month)
4
Large-Scale Science Has Changed How the Network is Used
(Chart: ESnet Top 100 Host-to-Host Flows, Feb. 2005, in TBytes/month, grouped as DOE Lab-International R&E, Lab-U.S. R&E (domestic), Lab-Lab (domestic), and Lab-Comm. (domestic). Top 100 flows = 84 TBy; total ESnet traffic for Feb. 2005 = 323 TBy in approx. 6,000,000,000 flows; all other flows < 0.28 TBy/month each.)
A small number of large-scale science users now account for a significant fraction of all ESnet traffic.
Over the next few years this will grow to be the dominant use of the network.
5
Large-Scale Science Has Changed How the Network is Used
• These flows are primarily bulk data transfer at this point and are candidates for circuit-based services for several reasons
o Traffic engineering – to manage the traffic on the backbone
o Guaranteed bandwidth is needed to satisfy deadline scheduling requirements
o Traffic isolation will permit the use of efficient, but TCP-unfriendly, data transfer protocols
6
Virtual Circuit Network Services
• A top priority of the science community
• Today
o Primarily to support bulk data transfer with deadlines
• In the near future
o Support for widely distributed Grid workflow engines
o Real-time instrument operation
o Coupled, distributed applications
• To get an idea of how circuit services might be used to support the current trends, look at the one-year history of the flows that are currently the top 20
o Estimate from the flow history what the characteristics of a circuit set up to manage the flow would be (a sketch of this estimation follows)
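To make that estimation concrete, here is a minimal Python sketch; the activity threshold, the unit conversion, and the example history are illustrative assumptions, not ESnet analysis code:

# Derive candidate circuit parameters from a daily flow-volume history.
# Sustained rate: 1 GByte/day = 8/86400 Gb/s.
def circuit_characteristics(daily_gbytes, active_threshold=50.0):
    runs, run = [], 0
    for v in daily_gbytes:
        if v >= active_threshold:
            run += 1          # flow is "on" today
        elif run:
            runs.append(run)  # an active period just ended
            run = 0
    if run:
        runs.append(run)
    to_gbps = lambda gb_per_day: gb_per_day * 8 / 86400
    active = [v for v in daily_gbytes if v >= active_threshold]
    peak = to_gbps(max(daily_gbytes)) if daily_gbytes else 0.0
    mean = to_gbps(sum(active) / len(active)) if active else 0.0
    return max(runs, default=0), peak, mean

# A flow that runs hot for ~90 straight days suggests a quasi-static
# circuit of roughly that duration at roughly the mean active rate.
history = [5.0] * 30 + [400.0] * 90 + [5.0] * 245
days, peak, mean = circuit_characteristics(history)
print(f"longest run ~{days} days, peak {peak:.3f} Gb/s, mean {mean:.3f} Gb/s")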
7
Source and Destination of the Top 20 Flows, Sept. 2005
(Chart, flow volumes in TeraBytes/mo., in rank order: LIGO - CalTech; SLAC - IN2P3 (FR); SLAC - INFN (IT) (two flows); FNAL - IN2P3 (FR); INFN (IT) - SLAC; SLAC - Rutherford Lab (UK) (two flows); SLAC - U. Victoria (CA); INFN (IT) - SLAC; SLAC - Rutherford Lab (UK) (two flows); SLAC - U. Victoria (CA); LANL - U. Md.; Rutherford Lab (UK) - SLAC; IN2P3 (IT) - SLAC; FNAL - São Paulo Analysis Center (BR); NetNews; SLAC - CERN)
8
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
(Chart: daily volume in Gigabytes/day, 9/23/2004 – 9/23/2005; a gap marks a period of no data)
LIGO – CalTech: over 1 year the “circuit” duration is about 3 months
9
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
(Chart: daily volume in Gigabytes/day, 9/23/2004 – 9/23/2005; a gap marks a period of no data)
SLAC - IN2P3 (FR): over 1 year the “circuit” duration is about 1 day to 1 week
10
Between ESnet, Abilene, GÉANT, and the connected regional R&E networks, there will be dozens of lambdas in production networks that are shared between thousands of users who want to use virtual circuits – very complex inter-domain issues
(Diagram of the US R&E environment: the ESnet and Abilene cores joined by ESnet-Abilene x-connects, with a similar situation in the US regionals, and in GÉANT and the European NRENs)
11
OSCARS: Virtual Circuit Service
• Despite the long circuit duration, these circuits cannot be managed by hand – there are too many circuits
o There must be automated scheduling, authorization, path analysis and selection, and path setup – that is, a management plane and a control plane
• Virtual circuits must operate across domains
o End points will be on campuses or research institutes that are served by ESnet, Abilene's regional networks, and GÉANT's regional networks – typically five domains to cross for an end-to-end system connection
o There are many issues here that are poorly understood
• A collaboration between Internet2/HOPI, DANTE/GÉANT, and ESnet is building a prototype-production, interoperable service
• ESnet virtual circuit project: On-demand Secure Circuits and Advance Reservation System (OSCARS) (Contact Chin Guok ([email protected]) for information.)
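The management-plane steps named above (scheduling, authorization, path analysis and selection, path setup) compose into a simple pipeline. A hedged Python sketch follows; the topology, scheduler, and node interfaces are hypothetical stand-ins, not the actual OSCARS API:

from dataclasses import dataclass

@dataclass
class CircuitRequest:
    user: str
    src: str
    dst: str
    bandwidth_gbps: float
    start: str  # reservation window, ISO timestamps
    end: str

def reserve(request, topology, authorized_users, scheduler):
    # 1. Authorization: only approved users may claim bandwidth
    if request.user not in authorized_users:
        raise PermissionError("user not authorized for virtual circuits")
    # 2. Path analysis and selection: find a route with enough headroom
    path = topology.find_path(request.src, request.dst, request.bandwidth_gbps)
    if path is None:
        raise RuntimeError("no path with sufficient capacity")
    # 3. Advance scheduling: book the window on every hop of the path
    if not scheduler.book(path, request.start, request.end, request.bandwidth_gbps):
        raise RuntimeError("requested time window unavailable")
    # 4. Path setup: signal the circuit (e.g. an MPLS LSP) hop by hop
    for node in path:
        node.signal_circuit(request)
    return path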
12
What about lambda switching?
• Two factors argue that this is a long way out for production networks
1) There will not be enough lambdas available to satisfy the need
- Just provisioning a single lambda ring around the US (7,000 miles / 11,000 km) still costs about $2,000,000, even on R&E networks
– This should drop by a factor of 5-10 over the next decade
2) Even if there were a “lot” of lambdas (hundreds?) there are thousands of large-scale science users
- Just considering sites (and not scientific groups), there are probably 300 major science research sites in the US and a comparable number in Europe
- So, lambdas will have to be shared for the foreseeable future (a back-of-envelope sketch follows)
– Multiple QoS paths per lambda
– Guaranteed minimum level of service for best-effort traffic when utilizing the production IP networks
– Allocation management
» There will be hundreds to thousands of contenders with different science priorities
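The sharing arithmetic behind point 2 can be made explicit with the slide's own figures; the groups-per-site count below is an assumption, chosen only to land in the stated "thousands of users" range:

lambda_ring_cost = 2_000_000              # $ today, per US-scale lambda ring
cost_next_decade = lambda_ring_cost / 10  # optimistic factor-of-10 drop
sites = 300 + 300                 # ~300 US sites + a comparable number in Europe
groups_per_site = 5               # assumed -> "thousands" of users
users = sites * groups_per_site   # 3000
lambdas = 200                     # a generous reading of "hundreds?"
print(f"~{users} users over {lambdas} lambdas "
      f"=> ~{users // lambdas} users per lambda")
# ~15 users per lambda even in this generous case, hence multiple QoS
# paths per lambda and allocation management are unavoidable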
13
References – DOE Network Related Planning Workshops
1) High Performance Network Planning Workshop, August 2002. http://www.doecollaboratory.org/meetings/hpnpw
2) DOE Science Networking Roadmap Meeting, June 2003. http://www.es.net/hypertext/welcome/pr/Roadmap/index.html
3) DOE Workshop on Ultra High-Speed Transport Protocols and Network Provisioning for Large-Scale Science Applications, April 2003. http://www.csm.ornl.gov/ghpn/wk2003
4) Science Case for Large Scale Simulation, June 2003. http://www.pnl.gov/scales/
5) Workshop on the Road Map for the Revitalization of High End Computing, June 2003. http://www.cra.org/Activities/workshops/nitrd ; public report: http://www.sc.doe.gov/ascr/20040510_hecrtf.pdf
6) ASCR Strategic Planning Workshop, July 2003. http://www.fp-mcs.anl.gov/ascr-july03spw
7) Planning Workshops – Office of Science Data-Management Strategy, March & May 2004. http://www-conf.slac.stanford.edu/dmw2004
14
The Full Talk
15
ESnet Today Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators
(Map: the ESnet IP core – packet over SONET optical ring with hubs at SEA, SNV, ELP, ALB, CHI, NYC, DC, and ATL – plus the ESnet Science Data Network (SDN) core, serving 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6). Sites include TWC, SNLL, YUCCA MT, BECHTEL-NV, PNNL, LIGO, INEEL, LANL, SNLA, AlliedSignal, PANTEX, ARM, KCP, NOAA, OSTI, ORAU, SRS, JLAB, PPPL, Lab DC Offices, MIT, ANL, BNL, FNAL, AMES, NREL, LLNL, GA, DOE-ALB, OSC, GTN NNSA, JGI, LBNL, SLAC, NERSC, SDSC, and ORNL. Link legend: international (high speed); 10 Gb/s SDN core; 10 Gb/s IP core; 2.5 Gb/s IP core; MAN rings (≥ 10 Gb/s); OC12 ATM (622 Mb/s); OC12 / GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less. Peerings: high-speed peering points with Internet2/Abilene; commercial and R&E peering points (MAE-E, PAIX-PA, Equinix, Starlight, Chi NAP, CHI-SL, MAX GPoP, SoX GPoP, PNWG PoP/PacificWave, MREN); international connections to CERN (USLHCnet, CERN+DOE funded), GÉANT (France, Germany, Italy, UK, etc.), SINet (Japan), Russia (BINP), CA*net4 (Canada), GLORIAD (Russia, China), Korea (Kreonet2), Australia (AARNet), Taiwan (TANet2), Singaren, Netherlands, and StarTap.)
16
DOE Office of Science Drivers for Networking
• The DOE Office of Science supports more than 40% of all US R&D in high-energy physics, nuclear physics, and fusion energy sciences (http://www.science.doe.gov)
• This large-scale science that is the mission of the Office of Science depends on networks for
o Sharing of massive amounts of data
o Supporting thousands of collaborators world-wide
o Distributed data processing
o Distributed simulation, visualization, and computational steering
o Distributed data management
• The role of ESnet is to provide networking that supports these uses for the Office of Science Labs and their collaborators
• The issues were explored in two Office of Science workshops that formulated networking requirements to meet the needs of the science programs (see refs.)
17
CERN / LHC High Energy Physics Data Provides One of Science's Most Challenging Data Management Problems
(CMS is one of several experiments at LHC)
• 2000 physicists in 31 countries are involved in this 20-year experiment in which DOE is a major player.
• Grid infrastructure spread over the US and Europe coordinates the data analysis
(Diagram, courtesy Harvey Newman, CalTech: the tiered data-distribution model. The CERN LHC CMS detector (15m x 15m x 22m, 12,500 tons, $700M) feeds the online system at ~PByte/sec; event reconstruction at the Tier 0+1 center emits ~100 MBytes/sec; 2.5-40 Gbits/sec links carry data to Tier 1 regional centers (FermiLab USA, French, German, and Italian regional centers); ~0.6-2.5 Gbps links feed Tier 2 centers; 100-1000 Mbits/sec links reach Tier 3 institutes (~0.25 TIPS each); and Tier 4 workstations and physics data caches support human analysis and event simulation.)
18
LHC Networking
• This picture represents the MONARC model – a hierarchical, bulk data transfer model
• Still accurate for Tier 0 (CERN) to Tier 1 (experiment data centers) data movement
• Not accurate for the Tier 2 (analysis) sites which are implementing Grid based data analysis
19
Example: Complicated Workflow – Many Sites
20
Distributed Workflow
• Distributed / Grid based workflow systems involve many interacting computing and storage elements that rely on “smooth” inter-element communication for effective operation
• The new LHC Grid based data analysis model will involve networks connecting dozens of sites and thousands of systems for each analysis “center”
21
Example: Multidisciplinary Simulation
(Figure, courtesy Gordon Bonan, NCAR: Ecological Climatology: Concepts and Applications. Cambridge University Press, Cambridge, 2002. A coupled terrestrial model spanning time scales from minutes-to-hours through days-to-weeks to years-to-centuries: climate (temperature, precipitation, radiation, humidity, wind) and chemistry (CO2, CH4, N2O, VOCs, dust, ozone, aerosols) exchange heat, moisture, and momentum with biogeophysics (energy, water, aerodynamics); the hydrologic cycle covers evaporation, transpiration, snow melt, infiltration, and runoff, with hydrology tracking soil water, snow, and intercepted water across watersheds, surface water, subsurface water, and geomorphology; biogeochemistry covers carbon assimilation, gross primary production, plant and microbial respiration, mineralization, decomposition, and nutrient availability; vegetation dynamics and phenology (bud break, leaf senescence), microclimate and canopy physiology, ecosystems (species composition, ecosystem structure), and disturbance (fires, hurricanes, ice storms, windthrows) complete the picture.)
A "complete" approach to climate modeling involves many interacting models and data that are provided by different groups at different locations (Tim Killeen, NCAR)
22
Distributed Multidisciplinary Simulation
• Distributed multidisciplinary simulation involves integrating computing elements at several remote locations
o Requires co-scheduling of computing, data storage, and network elements (see the sketch after this list)
o Also Quality of Service (e.g. bandwidth guarantees)
o There is not a lot of experience with this scenario yet, but it is coming (e.g. the new Office of Science supercomputing facility at Oak Ridge National Lab has a distributed computing elements model)
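Co-scheduling here means all three reservations must hold for the same time window, or none should. A two-phase hold/commit pattern is one natural shape for this; the resource-manager interface below is a hypothetical sketch, not an existing scheduler API:

def co_schedule(window, compute_mgr, storage_mgr, network_mgr):
    held = []
    try:
        # Phase 1: tentatively hold each resource for the window
        for mgr in (compute_mgr, storage_mgr, network_mgr):
            held.append(mgr.hold(window))
        # Phase 2: every hold succeeded, so confirm them all
        for ticket in held:
            ticket.commit()
        return True
    except Exception:
        # Any failure: release whatever was held so nothing dangles
        for ticket in held:
            ticket.release()
        return False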
23
Projected Science Requirements for Networking
(Science areas considered in the Workshop [1]; not including Nuclear Physics and Supercomputing)

Science Area | Today, End-to-End Throughput | 5 Years, End-to-End Documented Throughput Requirements | 5-10 Years, End-to-End Estimated Throughput Requirements | Remarks
High Energy Physics | 0.5 Gb/s | 100 Gb/s | 1000 Gb/s | high bulk throughput with deadlines (Grid-based analysis systems require QoS)
Climate (Data & Computation) | 0.5 Gb/s | 160-200 Gb/s | N x 1000 Gb/s | high bulk throughput
SNS NanoScience | not yet started | 1 Gb/s | 1000 Gb/s | remote control and time-critical throughput (QoS)
Fusion Energy | 0.066 Gb/s (500 MB/s burst) | 0.198 Gb/s (500 MB / 20 sec. burst) | N x 1000 Gb/s | time-critical throughput (QoS)
Astrophysics | 0.013 Gb/s (1 TBy/week) | N*N multicast | 1000 Gb/s | computational steering and collaborations
Genomics Data & Computation | 0.091 Gb/s (1 TBy/day) | 100s of users | 1000 Gb/s | high throughput and steering
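The "today" column can be sanity-checked by converting the quoted data volumes to sustained rates (using decimal TBytes; small differences from the table are rounding):

def gbps(tbytes, seconds):
    # TBytes moved over a period -> average rate in Gb/s
    return tbytes * 1e12 * 8 / seconds / 1e9

print(round(gbps(1, 7 * 86400), 3))  # 1 TBy/week -> ~0.013 Gb/s (Astrophysics)
print(round(gbps(1, 86400), 3))      # 1 TBy/day  -> ~0.093 Gb/s (Genomics; table: 0.091)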
24
ESnet Goal – 2009/2010
• 10 Gbps enterprise IP traffic
• 40-60 Gbps circuit-based transport
(Map: the production IP ESnet core (≥ 10 Gbps) plus the ESnet Science Data Network core (2nd core, 30-50 Gbps, National Lambda Rail), with existing and new ESnet hubs at SEA, SNV, SDG, DEN, ALB, CHI, NYC, DC, and ATL; metropolitan area rings (10 Gb/s, 30 Gb/s, 40 Gb/s) connecting the major DOE Office of Science sites; high-speed cross connects with Internet2/Abilene; lab-supplied links; and major international connections to CERN, Europe, Japan, AsiaPac, and Australia.)
25
Observed Drivers for the Evolution of ESnet
ESnet is currently transporting about 530 Terabytes/mo., and this volume is increasing exponentially – ESnet traffic has increased by 10X every 46 months, on average, since 1990
(Chart: ESnet Monthly Accepted Traffic, Feb. 1990 – May 2005, in TBytes/month)
26
Observed Drivers: The Rise of Large-Scale Science
A small number of large-scale science users now account for a significant fraction of all ESnet traffic.
(Chart: ESnet Top 100 Host-to-Host Flows, Feb. 2005, in TBytes/month, grouped as DOE Lab-International R&E, Lab-U.S. R&E (domestic), Lab-Lab (domestic), and Lab-Comm. (domestic). Top 100 flows = 84 TBy; total ESnet traffic for Feb. 2005 = 323 TBy in approx. 6,000,000,000 flows; all other flows < 0.28 TBy/month each.)
27
Traffic Evolution over the Next 5-10 Years
• The current traffic pattern – large-scale science projects giving rise to the top 100 data flows, which represent about 1/3 of all network traffic – will continue to evolve
• This evolution in traffic patterns and volume is driven by large-scale science collaborations and will result in large-scale science data flows overwhelming everything else on the network in 3-5 yrs. (WEJ predicts; a projection sketch follows)
o The top 100 flows will become the top 1000 or 5000 flows
o These large flows will account for 75-95% of a much larger total ESnet traffic volume
- The remaining 6 billion flows will continue to account for the remainder of the traffic, which will also grow even as its fraction of the total becomes smaller
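The prediction can be quantified directly from the observed trend. A minimal sketch, assuming the 10X-per-46-months growth simply continues from the May 2005 volume:

monthly_factor = 10 ** (1 / 46)   # ~1.051, i.e. ~5% growth per month
traffic_tb = 530                  # TBytes/month as of May 2005
for years in (3, 5):
    projected = traffic_tb * monthly_factor ** (12 * years)
    print(f"+{years} yr: ~{projected:,.0f} TBytes/month")
# roughly 3,200 TBytes/month at +3 years and 10,700 at +5 years,
# if the historical trend holds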
28
Virtual Circuit Network Services
• Every requirements workshop involving the science community has put bandwidth-on-demand as the highest priority – e.g. for
o Massive data transfers for collaborative analysis of experiment data
o Real-time data analysis for remote instruments
o Control channels for remote instruments
o Deadline scheduling for data transfers
o "Smooth" interconnection for complex Grid workflows
29
What is the Nature of the Required Circuits?
• Today
o Primarily to support bulk data transfer with deadlines
• In the near future
o Support for widely distributed Grid workflow engines
o Real-time instrument operation
o Coupled, distributed applications
• To get an idea of how circuit services might be used, look at the one-year history of the flows that are currently the top 20
o Estimate from the flow history what the characteristics of a circuit set up to manage the flow would be
30
Source and Destination of the Top 20 Flows, Sept. 2005
(Chart, flow volumes in TeraBytes/mo., in rank order: LIGO - CalTech; SLAC - IN2P3 (FR); SLAC - INFN (IT) (two flows); FNAL - IN2P3 (FR); INFN (IT) - SLAC; SLAC - Rutherford Lab (UK) (two flows); SLAC - U. Victoria (CA); INFN (IT) - SLAC; SLAC - Rutherford Lab (UK) (two flows); SLAC - U. Victoria (CA); LANL - U. Md.; Rutherford Lab (UK) - SLAC; IN2P3 (IT) - SLAC; FNAL - São Paulo Analysis Center (BR); NetNews; SLAC - CERN)
31
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
(Chart: daily volume in Gigabytes/day, 9/23/2004 – 9/23/2005; a gap marks a period of no data)
LIGO – CalTech: over 1 year the “circuit” duration is about 3 months
32
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
(Chart: daily volume in Gigabytes/day, 9/23/2004 – 9/23/2005; a gap marks a period of no data)
SLAC - IN2P3 (FR): over 1 year the “circuit” duration is about 1 day to 1 week
33
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
(Chart: daily volume in Gigabytes/day, 9/23/2004 – 9/23/2005; a gap marks a period of no data)
SLAC - INFN (IT): over 1 year the “circuit” duration is about 1 to 3 months
34
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
(Chart: daily volume in Gigabytes/day, 9/23/2004 – 9/23/2005; a gap marks a period of no data)
FNAL - IN2P3 (FR): over 1 year the “circuit” duration is about 2 to 3 months
35
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
(Chart: daily volume in Gigabytes/day, 9/23/2004 – 9/23/2005; a gap marks a period of no data)
INFN (IT) - SLAC: over 1 year the “circuit” duration is about 3 weeks to 3 months
36
Characteristics of Today’s Circuits – How “Dynamic”?
• These flows are candidates for circuit-based services for several reasons
o Traffic engineering – to manage the traffic on the IP production backbone
o To satisfy deadline scheduling requirements
o Traffic isolation to permit the use of efficient, but TCP-unfriendly, data transfer protocols
• Despite the long circuit duration, this cannot be managed by hand – there are too many circuits
o There must be automated scheduling, authorization, path analysis and selection, and path setup
37
Virtual Circuit Services – What about lambda switching?
• Two factors argue that this is a long way out for production networks
1) There will not be enough lambdas available to satisfy the need
- Just provisioning a single lambda ring around the US (7,000 miles / 11,000 km) still costs about $2,000,000, even on R&E networks
– This should drop by a factor of 5-10 over the next 5-10 years
2) Even if there were a “lot” of lambdas (hundreds?) there are thousands of large-scale science users
- Just considering sites (and not scientific groups), there are probably 300 major science research sites in the US and a comparable number in Europe
- So, lambdas will have to be shared for the foreseeable future
– Multiple QoS paths per lambda
– Guaranteed minimum level of service for best-effort traffic when utilizing the production IP networks
– Allocation management
» There will be hundreds to thousands of contenders with different science priorities
38
OSCARS: Guaranteed Bandwidth Service
• Virtual circuits must operate across domains
o End points will be on campuses or research institutes that are served by ESnet, Abilene's regional networks, and GÉANT's regional networks – typically five domains to cross for an end-to-end system connection
o There are many issues here that are poorly understood
o An ESnet – Internet2/HOPI – DANTE/GÉANT collaboration
• ESnet virtual circuit project: On-demand Secure Circuits and Advance Reservation System (OSCARS) (Contact Chin Guok ([email protected]) for information.)
39
OSCARS: Guaranteed Bandwidth Service
• To address all of the issues is complex
- There are many potential restriction points
- There are many users that would like priority service, which must be rationed
(Diagram: user system 1 at site A and user system 2 at site B, connected across the network through authorization points, policers, and shapers; resource managers at each site and in the network coordinate with an allocation manager, a bandwidth broker, and a path manager holding a dynamic, global view of the network. A sketch of the broker's admission decision follows.)
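The bandwidth broker's core decision is admission control: grant a request only if every link on the chosen path can carry the additional bandwidth for the whole reserved window. A simplified Python sketch; the interval bookkeeping and data structures are illustrative, not a real broker implementation:

def admit(path_links, reservations, start, end, gbps):
    # path_links: [(link_name, capacity_gbps)]
    # reservations: [(set_of_links, start, end, gbps)] already granted
    for link, capacity in path_links:
        committed = sum(
            r_gbps
            for r_links, r_start, r_end, r_gbps in reservations
            if link in r_links and r_start < end and start < r_end)
        if committed + gbps > capacity:
            return False  # would oversubscribe this link in the window
    return True

path = [("snv-chi", 10.0), ("chi-nyc", 10.0)]
granted = [({"snv-chi"}, 0, 100, 6.0)]      # times in hours, say
print(admit(path, granted, 50, 80, 5.0))    # False: 6 + 5 > 10 on snv-chi
print(admit(path, granted, 120, 150, 5.0))  # True: windows do not overlap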
40
ESnet 2010 Lambda Infrastructure and LHC T0-T1 Networking
(Map: the ESnet production IP core (10-20 Gbps) and Science Data Network core (10G/link, incremental upgrades 2007-2010) over NLR, with IP core hubs, SDN/NLR hubs, NLR PoPs, and new hubs at Seattle, Boise, Sunnyvale, LA, San Diego, Denver, Phoenix, Albuquerque, El Paso - Las Cruces, Dallas, San Antonio, Houston, Tulsa, KC, Chicago, Baton Rouge, Pensacola, Jacksonville, Atlanta, Raleigh, Pittsburgh, Cleveland, New York, and Wash DC; cross connects with Internet2/Abilene; other NLR links; CERN/DOE-supplied links (10G/link: CERN-1, CERN-2, CERN-3) and international IP connections (10G/link: GÉANT-1, GÉANT-2); Tier 1 centers at FNAL, BNL, and TRIUMF, the latter reached via CANARIE through Vancouver and Toronto.)
41
Abilene* and LHC Tier 2, Near-Term Networking
Denver
Seattle
Su
nn
yv
ale
LA
San Diego
Chicago
Pitts
Raleigh
Jacksonville
Atlanta
KC
Baton Rouge
El Paso - Las Cruces
Phoenix
Pensacola
Dallas
San Ant.Houston
Albuq. Tulsa
New YorkClev
Boise
CE
RN
-1G
ÉA
NT
-1G
ÉA
NT
-2
Wash DC
CE
RN
-2
Tier 1 Centers
ESnet IP core hubs
New hubs
ESnet SDN/NLR hubsESnet Production IP core (10-20 Gbps)ESnet Science Data Network core (10G/link)(incremental upgrades, 2007-2010)Other NLR linksCERN/DOE supplied (10G/link)International IP connections (10G/link)
Cross connects with Internet2/Abilene
NLR PoPs
Vancouver
Toronto
CANARIE
Atlas Tier 2 Centers• University of Texas at Arlington • University of Oklahoma Norman • University of New Mexico Albuquerque • Langston University • University of Chicago • Indiana University Bloomington • Boston University• Harvard University • University of Michigan
FNAL
TRIUMF
CMS Tier 2 Centers• MIT • University of Florida at Gainesville • University of Nebraska at Lincoln • University of Wisconsin at Madison• Caltech• Purdue University • University of California San Diego
CE
RN
-3
BNL
< 10G connections to Abilene10G connections to USLHC or
ESnetAbilene/GigaPoP nodes
* WEJ projection of future AbileneUSLHC nodes
42
Between ESnet, Abilene, GÉANT, and the connected regional R&E networks, there will be dozens of lambdas in production networks that are shared between thousands of users who want to use virtual circuits – very complex inter-domain issues
(Diagram of the US R&E environment: the ESnet and Abilene cores joined by ESnet-Abilene x-connects, with a similar situation in the US regionals and in Europe)
43
ESnet Optical Networking Roadmap, 2005-2010
(Timeline spanning 2005-2010, progressing from dynamic provisioning of MPLS circuits (Layer 3) and dedicated virtual circuits, through interoperability between VLANs and MPLS circuits (Layer 2 & 3) and dynamic virtual circuit allocation, to GMPLS and interoperability between GMPLS circuits, VLANs, and MPLS circuits (Layer 1-3).)
44
Tying Domains Together (1/2)
• Motivation:
o For a virtual circuit service to be successful, it must
- Be end-to-end, potentially crossing several administrative domains
- Have consistent network service guarantees throughout the circuit
• Observation:
o Setting up an intra-domain circuit is easy compared with coordinating an inter-domain circuit
• Issues (a sketch of the per-domain checks follows this list):
o Cross-domain authentication and authorization
- A mechanism to authenticate and authorize a bandwidth-on-demand (BoD) circuit request must be agreed upon in order to automate the process
o Multi-domain Acceptable Use Policies (AUPs)
- Domains may have very specific AUPs dictating what the BoD circuits can be used for and where they can transit/terminate
o Domain-specific service offerings
- Domains must have a way to guarantee a certain level of service for BoD circuits
o Security concerns
- Are there mechanisms for a domain to protect itself? (e.g. RSVP filtering)
45
Tying Domains Together (2/2)
• Approach:
o Utilize existing standards and protocols (e.g. GMPLS, RSVP)
o Adopt widely accepted schemas/services (e.g. X.509 certificates)
o Collaborate with like-minded projects (e.g. JRA3 (DANTE/GÉANT), BRUW (Internet2/HOPI)) to:
1. Create a common service definition for BoD circuits (a sketch follows)
2. Develop an appropriate User-Network Interface (UNI) and Network-Network Interface (NNI)
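A common service definition amounts to agreeing on the fields every domain must interpret the same way at the UNI. A sketch of what such a definition might capture; the field names are assumptions, not a published standard:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class BoDCircuitRequest:
    src_endpoint: str      # e.g. a host or edge-port identifier
    dst_endpoint: str
    bandwidth_gbps: float  # guaranteed rate for the reservation
    start: datetime        # advance-reservation window
    end: datetime
    cert_subject: str      # X.509 identity used for authorization
    layer: int = 3         # 1 = GMPLS lightpath, 2 = VLAN, 3 = MPLS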