Failover from Rout-D to Rout-A (SURFnet Amsterdam, Internet-2 NY, CANARIE Toronto, Starlight Chicago)
OMNI-View Lightpath map
DARPA DANCE Demo (May 31st, '02)
[Figure: notional EvaQ8 network. Ethernet switches at the disaster area and the safe end, a MEMS switch, and L2-L7 switches with EvaQ8 switches at optical groups OG-1, OG-2, and OG-3; an ASTN control plane; a disaster-event/environment sensor issuing control messages; a crisis center. Links are 10 GE trunks and 100 Mbps access lines.]

> A notional view of an EvaQ8 end-to-end network
> Automatic optical path setup on disaster trigger
> Sample measurements
Sample measurements
Timeline in ms (not to scale), from EvaQ8 start through L1 and L2 setup: 0, 2, 14, 1000, 1200, 13,000.

Disaster trigger and processing: < 1 ms
Signal and response: 1.4 ms
Inter-process communication: 3 ms
Photonic MEMS control: 12 ms
Ethernet switch QoS control: 1150 ms
VLAN/Spanning Tree convergence: 12 seconds

> Measurements taken with clocks synchronized using NTP
> Layer 1 link setup and IP QoS reconfiguration took around 1.2 seconds
> The VLANs/Spanning Tree took an additional 12 seconds to converge
> Further work with larger networks is needed
OMNInet

[Figure: OMNInet testbed map. Four photonic nodes (Lake Shore, S. Federal, W. Taylor, Sheridan) linked by 10 GE DWDM trunks built from Optera 5200 10 Gb/s TSPRs and Passport 8600 switches, with 10/100/GE access links to grid clusters and grid storage (2 x GigE). Optera Metro 5200 OFAs sit on fiber spans #2 (10.3 km), #4 (7.2 km), #5 (24 km), #6 (24 km), #8 (6.7 km), and #9 (5.3 km). Interfaces: 1310 nm 10 GbE WAN PHY; 10GE LAN PHY (Aug 04). StarLight interconnect to other research networks via EVL/UIC, LAC/UIC, and TECH/NU OM5200s. Key features: 8x8x8 scalable photonic switch; trunk side 10G DWDM; OFA on all trunks; ASTN control plane.]
Data Management Service
Uses standard FTP (Jakarta Commons FTP client)
Implemented in Java
Uses OGSI calls to request network resources
Currently uses Java RMI for other remote interfaces
Uses NRM to allocate lambdas
Designed for future scheduling
[Figure: DMS data path. A client app calls the DMS, which uses the NRM to allocate a lambda (λ); an FTP client at the data receiver then pulls the file from an FTP server at the data source over that lambda.]
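The allocate/transfer/release flow described above can be sketched with in-memory stubs. The class and method names below (`NRM.allocate_lambda`, `DMS.transfer`) are hypothetical stand-ins for illustration; the real DMS is written in Java and uses OGSI calls and Java RMI rather than these stubs.

```python
class NRM:
    """Stub network resource manager that hands out lambda IDs."""
    def __init__(self):
        self.next_id = 0
        self.allocated = set()

    def allocate_lambda(self, src, dst):
        self.next_id += 1
        self.allocated.add(self.next_id)
        return self.next_id

    def release_lambda(self, lambda_id):
        self.allocated.discard(lambda_id)


class DMS:
    """Stub data management service: allocate a path, transfer, release."""
    def __init__(self, nrm):
        self.nrm = nrm

    def transfer(self, src, dst, ftp_get):
        lam = self.nrm.allocate_lambda(src, dst)   # request network resources
        try:
            return ftp_get(src, dst)               # FTP transfer over the lambda
        finally:
            self.nrm.release_lambda(lam)           # always free the path


nrm = NRM()
dms = DMS(nrm)
result = dms.transfer("data-source", "data-receiver",
                      lambda s, d: f"copied {s} -> {d}")
```

The `try/finally` mirrors the requirement that the lambda be released even if the transfer fails.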
Network Resource Manager

[Figure: NRM interfaces. A using application (the DMS) calls an end-to-end-oriented allocation interface, and a scheduling/optimizing application uses a segment-oriented topology and allocation interface. Underneath sit network-specific network managers and network-specific data interpreters, here the OMNInet network manager (Odin) and the OMNInet data interpreter. Items in blue are planned.]
20GB File Transfer
Initial Performance Measure: End-to-End Transfer Time
[Figure: end-to-end timeline for the 20 GB transfer. Stages: file transfer request arrives; path allocation request; ODIN server processing; network reconfiguration (25 s); path ID returned; FTP setup time (0.14 s); 20 GB data transfer; file transfer done, path released; path deallocation request; ODIN server processing. Stage durations along the top: 0.5 s, 3.6 s, 0.5 s, 174 s, 0.3 s, 11 s.]
Transaction Demonstration Time Line (6-minute cycle time)

[Figure: timeline in seconds from -30 to 660. Customer #1 accumulates transactions, a path is allocated, transfer #1 runs, and the path is de-allocated; customer #2 then repeats the same allocate/transfer/de-allocate cycle.]
From 100 Days to 100 Seconds
[Figure: overall system view. Applications (SDSS, the Mouse applications), apps middleware, and network(s) on one axis; the Lambda-Grid on the other, with a meta-scheduler and resource managers spanning the data grid (SRB), compute grid (GT3), and net grid (NRS, DTS); an IVDSC control plane; components OGSI-fied; NMI integration. The net-grid components are marked "our contribution".]
DTS - NRS

[Figure: internal structure of the two services. DTS: data service, scheduling logic, replica service, proposal evaluation, and data calculation, with interfaces to apps middleware, NMI, GT3, and the NRS. NRS: topology map, scheduling algorithm, proposal constructor, proposal evaluator, scheduling service, network allocation, and network calculation, with interfaces to the DTS, NMI, GT3, and optical control.]
Layered Architecture

[Figure: the Grid layered architecture (Fabric, Connectivity, Resource, Application) mapped onto the Lambda Data Grid. Fabric: optical hardware, lambdas, storage, computation, and DB resources. Connectivity: UDP, IP, TCP/HTTP, optical protocols, and optical control (ODIN). Resource: resource managers, NRS, NMI, WSRF, OGSA (GT3). Application: BIRN Mouse, the BIRN toolkit, collaborative BIRN workflow, apps middleware, and GridFTP.]

[Figure: plane view. A data grid service plane (DTS, scientific workflow, apps middleware) above a network service plane (NRS, NMI, resource managers), an optical control plane (optical control network, OMNInet), and a data transmission plane connecting storage, compute, and DB resources (1..n), with control interactions between the planes.]
NRS Interface and Functionality
    // Bind to an NRS service:
    NRS = lookupNRS(address);

    // Request cost function evaluation:
    request = {pathEndpointOneAddress, pathEndpointTwoAddress,
               duration, startAfterDate, endBeforeDate};
    ticket = NRS.requestReservation(request);

    // Inspect the ticket to determine success, and to find
    // the currently scheduled time:
    ticket.display();

    // The ticket may now be persisted and used from another location:
    NRS.updateTicket(ticket);

    // Inspect the ticket to see if the reservation's scheduled time has
    // changed, or verify that the job completed, with any relevant
    // status information:
    ticket.display();
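A toy, in-memory version of the NRS calls above lets the flow be exercised end to end. The names mirror the slide's pseudocode; the scheduling rule here (first fit at `startAfterDate`) and the `Ticket` fields are illustrative assumptions, not the real NRS behavior.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    request: dict
    scheduled_start: float
    status: str = "scheduled"

    def display(self):
        return f"{self.status} at t={self.scheduled_start}"


class NRS:
    def __init__(self):
        self.tickets = []

    def requestReservation(self, request):
        # First fit: schedule at the earliest time the request allows.
        t = Ticket(request, scheduled_start=request["startAfterDate"])
        self.tickets.append(t)
        return t

    def updateTicket(self, ticket):
        # Refresh status; a real NRS might reschedule or report completion.
        if ticket in self.tickets:
            ticket.status = "scheduled"
        return ticket


nrs = NRS()
request = {"pathEndpointOneAddress": "A", "pathEndpointTwoAddress": "B",
           "duration": 600, "startAfterDate": 100.0, "endBeforeDate": 2000.0}
ticket = nrs.requestReservation(request)
```

The ticket object is what makes the interface persistable: it can be stored, handed to another client, and re-checked with `updateTicket`.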
Overheads - Amortization
Setup time = 48 s, bandwidth = 920 Mbps

[Figure: setup time as a fraction of total transfer time (0-100%) vs. file size (100 MB to 10 TB), annotated at 500 GB, where the fraction has fallen to near zero.]

When dealing with data-intensive applications, overhead is insignificant!
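The amortization claim can be checked directly: the setup overhead fraction is setup / (setup + size/bandwidth). With the slide's figures (48 s setup, 920 Mbps), a 500 GB file takes about 4350 s to move, so setup is roughly 1% of the total; a 100 MB file on the same path would be dominated by setup.

```python
def overhead_fraction(file_size_bytes, setup_s, bandwidth_bps):
    """Setup time as a fraction of total (setup + transfer) time."""
    transfer_s = file_size_bytes * 8 / bandwidth_bps
    return setup_s / (setup_s + transfer_s)

frac = overhead_fraction(500e9, 48, 920e6)   # 500 GB over the 920 Mbps path
```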
Network Scheduling - Simulation Study

[Figure: blocking probability (0-0.8) vs. experiment number (1-6), comparing simulation against the Erlang B model.]

[Figure: blocking probability for under-constrained requests vs. experiment number (1-6), on a 0-100% scale, with a lower bound shown.]
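The Erlang B baseline used in the comparison can be computed with the standard recurrence B(0) = 1, B(m) = E·B(m-1) / (m + E·B(m-1)), where E is the offered load in Erlangs and m the number of channels (lambdas). A minimal sketch:

```python
def erlang_b(offered_load, channels):
    """Erlang B blocking probability via the standard recurrence."""
    b = 1.0
    for m in range(1, channels + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

p_block = erlang_b(5.0, 5)   # e.g. 5 Erlangs offered to 5 lambdas
```

The recurrence avoids the factorials in the closed form, so it stays numerically stable for large channel counts.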
DWDM-RAM Service Control Architecture

[Figure: a grid service request enters the data grid service plane (service control, data centers); the network service plane (service control) issues network service requests to the OMNInet control plane (ODIN, UNI-N, connection control), which configures the data transmission plane: L3 routers, L2 switches, and data storage switches carrying the data path (1..n) between data centers, with data-path control linking the planes.]
Path Allocation Overhead as a % of the Total Transfer Time
> Knee point shows the file size for which overhead is insignificant
[Figure: three plots of setup time / total transfer time (0-100%) vs. file size (MBytes): setup 2 s at 100 Mbps (annotated at 1 GB), setup 2 s at 300 Mbps (annotated at 5 GB), and setup 48 s at 920 Mbps (annotated at 500 GB).]
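The knee point can also be located analytically: the overhead fraction setup / (setup + size/bandwidth) drops below a target α once size exceeds setup × bandwidth × (1/α − 1). The 10% target in this sketch is an illustrative choice, not a threshold from the slides.

```python
def knee_size_bytes(setup_s, bandwidth_bps, alpha):
    """Smallest file size (bytes) whose setup overhead is below alpha."""
    return setup_s * (bandwidth_bps / 8) * (1 / alpha - 1)

# 2 s setup on a 100 Mbps path, 10% overhead target:
knee = knee_size_bytes(2, 100e6, 0.10)
```

Note how the knee scales linearly with both setup time and bandwidth, which is why the 48 s / 920 Mbps case pushes the knee out by orders of magnitude.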
Packet Switched vs Lambda Network: Setup Time Tradeoffs

[Figure: data transferred (MB) vs. time (s) for a 2 s optical path setup: packet switched at 300 Mbps against lambda switched at 500 Mbps, 750 Mbps, 1 Gbps, and 10 Gbps, over 0-7 s.]

[Figure: the same comparison for a 48 s optical path setup, over 0-120 s and up to 5000 MB transferred.]
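The crossover in these tradeoff plots has a closed form: a lambda path at rate b_l with setup time s has moved more data than a packet-switched path at rate b_p once b_p·t = b_l·(t − s), i.e. t* = s·b_l / (b_l − b_p). With the 2 s setup, a 500 Mbps lambda overtakes the 300 Mbps packet path at t = 5 s, consistent with the 0-7 s window of the first plot.

```python
def crossover_time(setup_s, lambda_bps, packet_bps):
    """Time at which the lambda path overtakes packet switching."""
    return setup_s * lambda_bps / (lambda_bps - packet_bps)

t_star = crossover_time(2.0, 500e6, 300e6)   # 500 Mbps lambda vs 300 Mbps packet
```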
File Transfer Times

[Figure: file size (1, 2, 5, and 10 Gb) vs. transfer time (0-1000 s), comparing DWDM-RAM (over OMNInet) with FTP (over the Internet).]
Fixed Bandwidth List Scheduling
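A minimal sketch of the idea named above, under assumptions of my own: each request needs the full lambda for a known duration inside a time window, and requests are placed greedily, in list order, at the earliest feasible start. The request format and the greedy rule are illustrative, not the scheduler actually used here.

```python
def list_schedule(requests):
    """requests: list of (duration, window_start, window_end) tuples.
    Returns a start time per request, or None if it cannot be placed."""
    busy = []     # scheduled (start, end) intervals on the single lambda
    starts = []
    for dur, w0, w1 in requests:
        t = w0
        for s, e in sorted(busy):
            if t + dur <= s:          # fits in the gap before this interval
                break
            t = max(t, e)             # otherwise try after it
        if t + dur <= w1:
            busy.append((t, t + dur))
            starts.append(t)
        else:
            starts.append(None)       # window too tight: request is blocked
    return starts

starts = list_schedule([(100, 0, 500), (100, 0, 500), (100, 50, 150)])
```

Under-constrained requests (wide windows) slide later instead of being blocked, which is the behavior the blocking-probability study above measures.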