1
A Role of Network Architecture in e-VLBI

Masaki Hirabaru, Network Architecture Group
New Generation Network Research Center
PL Meeting, July 26, 2006
2
Network Architecture
[Figure: (1) a full mesh of nodes A–D ("Needs") vs. (2) nodes A–D interconnected through switches, where blocking can occur ("Seeds"); Architecture = Optimization between the two]
3
Contents
Past: Packet Switching Approach (Stored)
On-going: Circuit Switching Approach (Real-Time)
Single Large Data Flow
4
Motivations

• MIT Haystack – NICT Kashima e-VLBI Experiment on August 27, 2003, to measure UT1–UTC within 24 hours
  – 41.54 GB NICT → MIT at 107 Mbps (~50 min)
  – 41.54 GB MIT → NICT at 44.6 Mbps (~120 min)
  – RTT ~220 ms; UDP throughput 300–400 Mbps, yet TCP only ~6–8 Mbps (per session, tuned)
  – BBFTP with 5 × 10 TCP sessions to gain performance
• HUT – NICT Kashima Gigabit VLBI Experiment
  – RTT ~325 ms; UDP throughput ~70 Mbps, yet TCP ~2 Mbps (as is), ~10 Mbps (tuned)
  – NetAnts (5 TCP sessions with an FTP stream-restart extension)

There was bandwidth available, but we could not utilize it.
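The gap between UDP and single-session TCP throughput above is what the classic loss-based model predicts for a long-RTT path. A minimal sketch using the Mathis steady-state formula; the loss rate is an assumed, illustrative value, since the slides give none:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Mathis et al. steady-state TCP throughput: (MSS/RTT) * C / sqrt(p)."""
    return (mss_bytes * 8 / rtt_s) * c / math.sqrt(loss_rate)

# Haystack-Kashima-like path: 1460-byte MSS, RTT ~220 ms.
# The loss rate 1e-4 is an illustrative assumption, not a measured value.
single = mathis_throughput_bps(1460, 0.220, 1e-4)
print(f"single session: {single / 1e6:.1f} Mbps")   # ~6.5 Mbps, in the observed 6-8 Mbps range
print(f"50 sessions:    {50 * single / 1e6:.0f} Mbps")  # why BBFTP opened 5 x 10 sessions
```

Since each session's throughput scales as 1/RTT and 1/√p, opening N parallel sessions multiplies the aggregate by roughly N, which is exactly the BBFTP/NetAnts workaround described above.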
5
e-VLBI (Very Long Baseline Interferometry)

• e-VLBI: geographically distributed observation, interconnecting radio antennas around the world
[Figure: stations A and B receive the radio signal from a star with a relative delay d; each A/D-samples it against a local clock and streams ~Gbps of data over the Internet to a correlator; applications in astronomy and geodesy]
Large Bandwidth-Delay Product Network issue
NICT Kashima Space Center 34 m antenna
Partners: MIT Haystack, NASA, Onsala (Sweden), Shanghai Observatory, JIVE, HUT, CSIRO, etc.
• Gigabit / real-time VLBI: multi-gigabit-rate sampling
[Image: radio galaxy NGC4261]
6
VLBI – Characteristics

• Observing bandwidth / data rate ∝ (precision of time delay)⁻¹, with sensitivity (SNR) growing as (bandwidth)^1/2
• Wavelength / baseline length → angular resolution
• Baseline length ∝ (EOP precision)⁻¹

Faster Data Rate = Higher Sensitivity
Longer Distance = Better Resolution
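A quick numeric reading of these scaling rules; the square-root sensitivity law (SNR ∝ √bandwidth) is standard radiometry, and the example data rates below are illustrative:

```python
import math

# Quadrupling the data rate (e.g. 256 Mbps -> 1024 Mbps) doubles the sensitivity:
snr_gain = math.sqrt(1024 / 256)
print(f"sensitivity gain: {snr_gain:.1f}x")  # 2.0x

# Resolution ~ wavelength / baseline: doubling the baseline (or halving the
# wavelength) halves the resolvable angle.
resolution_ratio = 1.0 / 2.0  # baseline doubled
print(f"resolvable angle shrinks to {resolution_ratio:.0%}")  # 50%
```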
7
Long-Distance Rover Control

(at least) 7 minutes one-way delay
[Figure: commands travel from Earth to Mars; images travel back]
By the time the operator sees a collision, it is already too late.
8
Example: How much speed can we get?

[Figure: sender (1G) → L2/L3 switch (queue Q = 50) → 100M bottleneck → high-speed backbone → receiver (100M); one-way delay at light speed: 100 ms]

Average TCP throughput: less than 20 Mbps.
A higher-speed device spoils the performance???
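The sub-20 Mbps figure follows from the bandwidth-delay product: filling the pipe needs far more packets in flight than the Q = 50 switch buffer can absorb during a burst. A sketch with the slide's numbers (the 1500-byte packet size is an assumption):

```python
# 100 Mbps bottleneck, 100 ms one-way delay (RTT ~200 ms), 1500-byte packets.
PKT_BYTES = 1500          # assumed MTU-sized packets
RTT_S = 0.200
BOTTLENECK_BPS = 100e6

bdp_packets = BOTTLENECK_BPS * RTT_S / (PKT_BYTES * 8)
print(f"BDP: {bdp_packets:.0f} packets")          # ~1667 packets to fill the pipe

QUEUE = 50
print(f"queue / BDP: {QUEUE / bdp_packets:.1%}")  # ~3%: bursts from the 1G sender overflow it long before the pipe fills
```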
9
Example: From Tokyo to Boston
TCP on a fast, long path with a bottleneck

[Figure: Tokyo sender (rate control) –50 ms– Los Angeles –50 ms– Boston receiver (loss detection, feedback); one-way delay 100 ms; link bandwidth 1 G stepping down to 0.8 G at the bottleneck, whose 25 MB buffer overflows, causing loss]

It takes 150 ms for the sender to learn of a loss (buffer overflow), and the queue keeps overflowing during that period. 150 ms is very long for a high-speed network: at 1 Gbps, 150 ms generates ~19 MByte on the wire.
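The "~19 MByte on the wire" figure is simply the feedback delay times the sending rate:

```python
# Data emitted at 1 Gbps during the 150 ms it takes loss feedback
# (Boston -> Tokyo, plus queueing) to reach the sender.
rate_bps = 1e9
feedback_delay_s = 0.150
in_flight_mbytes = rate_bps * feedback_delay_s / 8 / 1e6
print(f"{in_flight_mbytes:.2f} MB")  # 18.75 MB, i.e. the ~19 MB on the slide
```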
10
TCP's Way of Rate Control (slow-start)

[Figure: during slow-start the sender bursts at line rate (1 Gbps), doubling the burst duration every RTT (200 ms): 20 ms, 40 ms, 80 ms, 160 ms; the average rate thus doubles each round, starting from 100 Mbps. A 150 Mbps average rate already overflows a 1000-packet queue.]
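The overflow mechanism can be sketched with a toy model: each RTT the window doubles and is sent as one back-to-back burst at line rate, while the bottleneck drains during the burst. Parameters follow the slide except the 1500-byte packet size and initial window, which are assumptions; because this crude model ignores ACK pacing it overflows at a lower average rate than the slide's ~150 Mbps figure, but the qualitative behavior (exponential growth overrunning a 1000-packet queue well before the 1 Gbps pipe is full) is the same.

```python
LINE_BPS = 1e9        # sender line rate
BOTTLENECK_BPS = 1e8  # 100 Mbps drain rate
RTT_S = 0.200
PKT_BITS = 1500 * 8   # assumed packet size
QUEUE_PKTS = 1000

cwnd = 10  # initial window in packets (assumption)
for rtt_round in range(1, 12):
    burst_time = cwnd * PKT_BITS / LINE_BPS            # whole window sent back-to-back
    drained = BOTTLENECK_BPS * burst_time / PKT_BITS   # packets drained during the burst
    peak_queue = cwnd - drained                        # net queue build-up
    avg_rate_mbps = cwnd * PKT_BITS / RTT_S / 1e6
    if peak_queue > QUEUE_PKTS:
        print(f"round {rtt_round}: avg rate {avg_rate_mbps:.0f} Mbps overflows the queue")
        break
    cwnd *= 2  # slow-start doubles the window every RTT
```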
11
TCP Burstiness

[Figure: burst behavior of (a) HighSpeed, (b) Scalable, (c) BIC, and (d) FAST TCP for a given bottleneck bandwidth and queue size]
12
TCP Performance with Different Queue Sizes
13
Measuring Bottleneck Queue Sizes
[Figure: the sender emits a packet train through the bottleneck of capacity C toward the receiver; lost and measured packets reveal the queue]

Queue Size = C × (Delay_max − Delay_min)

Typical bottleneck cases:
a) Router (queue ~1000): 1 Gbps (10 G) stepping down to 100 Mbps (1 G)
b) Switch (queue ~100), with VLANs
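The formula above turns a delay measurement into a buffer size. A sketch; the example numbers are assumptions for illustration:

```python
def queue_size_bytes(capacity_bps, delay_max_s, delay_min_s):
    """Queue Size = C x (Delay_max - Delay_min), in bytes."""
    return capacity_bps / 8 * (delay_max_s - delay_min_s)

# A 100 Mbps bottleneck whose packet-train delays spread over 12 ms:
q = queue_size_bytes(100e6, 0.022, 0.010)
print(f"{q / 1500:.0f} packets")  # 100 packets: the shallow "switch" case above
```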
14
Bottleneck Detection
• Black Box – Traditional by Hirabaru
• Gray Box – ECN (similarity: Optical Grid by Harai)
• White Box – SIRENS by Kobayashi and Nakauchi
15
Difference in Approaches
- Stored (Packet Switching): variable, available bandwidth
- Real-Time (Circuit Switching): constant, guaranteed bandwidth
16
NICT Koganei and DRAGON e-VLBI Experiment 2007- Interoperability -
[Diagram: NICT Kashima (PC .30, 9816T) –10G– NICT Koganei (PC .29, FES12G, U-Nodes, 9816GB, GS4000) –10G– Tokyo (SL F10) –10G– Chicago –HOPI/DRAGON– Washington DC – MIT Haystack (PC, software correlator); the DRAGON control plane meets the NICT (Koganei) GMPLS control plane via RSVP-TE over JGN II; NICT Optical Grid over JGN II L1 (optical fiber) with tunable laser diodes (TLD + CTL), an OXC, and E-NNI switches; circuit speeds 1G/10G]

Notes: JGN II VLAN ID 1196; 98XX = Allied Telesis CenterCOM L2 switch; U-node = NEC ADM (MTU 1500 B limitation); GS4000 = Hitachi L2 switch; TLD = Tunable Laser Diode (tunable-wavelength transmitter/receiver).
Issues: Path/Packet API; bottleneck shift.
17
e-VLBI Contributions to Network Architecture

Future requirements from e-VLBI feed into the combination and control of network devices and elements.
Additional requirements: loss-tolerant, scientific data streaming.
Field trials → problem finding.
18
A Case Study: VLBI System Transitions and Architecture
1st Generation (1983~): K3 — open-reel tape, hardware correlator; 64 Mbps
  [Photos: K3 Correlator (center), K3 Recorder (right)]
2nd Generation (1990~): K4 — cassette tape, hardware correlator, e-VLBI over ATM; 256 Mbps
  [Photos: K4 Terminal, K4 Correlator]
3rd Generation (2002~): K5 — PC-based system, hard-disk storage, software correlator, e-VLBI over the Internet; 1–2 Gbps
  [Photo: K5 Data Acquisition Terminal]

Optimized Platform = Architecture
19
Experiment for High-Performance Scientific Data Transfer

[Map: NICT Kashima –1G (10G)– Tokyo XP / JGN II I-NOC, with Koganei and U of Tokyo attached; KOREN in Korea (Seoul XP, Daejon, Taegu, Kwangju, Busan; 2.5G/10G) linked via a Busan–Fukuoka 2.5G SONET link (APII/JGN II, ~250 km) to Genkai XP and Kitakyushu; TransPAC / JGN II 10G across the Pacific (~9,000 km) to Los Angeles and Chicago; Abilene (10G/2.4G) to Indianapolis, Pittsburgh, Washington DC, and MIT Haystack (~4,000 km); 10G to GEANT / SWITCH (~7,000 km); bwctl, perf, and e-vlbi servers deployed as measurement points; distances from ~100 km to ~9,000 km]

*Performance Measurement Point Directory: http://e2epi.internet2.edu/pipes/pmp/pmp-dir.html
A BWCTL account is available for CMM, including Korean researchers.
International collaboration to support science applications.
20
e-VLBI Show-case for Network Architecture
JGN II Network Diagram and Result for the iGrid and SC 2005 e-VLBI Experiments

[Diagram: NICT Kashima (K-5 .30, GS4000, 9816T) –10G– NICT Koganei (PC .29, FES12G, U-Nodes, 9816GB, GS4000, SL F10) –10G– Tokyo –10G– Chicago –HOPI/DRAGON– Washington DC – MIT Haystack (Mark V, software correlator); circuit speeds 1G/10G; ports 3/7 and #9 untagged]

Notes: JGN II VLAN ID 1196; 98XX = Allied Telesis CenterCOM L2 switch; U-node = NEC ADM (MTU 1500 B limitation); GS4000 = Hitachi L2 switch.
[Photo at the venue: Mr. Jerry Sobieski (Mid-Atlantic Crossroads, left) and Dr. Alan Whitney (MIT Haystack)]
http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2005/photos/i2-sc05/
19
Analyzing Advanced TCP Dynamic Behavior in a Real Network
(Example: From Tokyo to Indianapolis at 1 Gbps with HighSpeed TCP)

The data was obtained during the e-VLBI demonstration at the Internet2 Member Meeting in October 2003.
[Graphs, generated with Web100: throughput, RTT, window sizes, packet losses]

June 30, 2004: Using JGN II to transfer VLBI observation data at high speed between Japan and the US, we succeeded in estimating UT1, which expresses variations in the Earth's rotation rate, in the remarkably short time of about 4.5 hours. (Press release)

IDEA Award Winner 2006 by Internet2: Very High Speed Electronic Very Long Baseline Interferometry (e-VLBI)
• Alan Whitney, MIT Haystack Observatory
• Yasuhiro Koyama, NICT
• Arpad Szomoru, Joint Institute for VLBI in Europe (JIVE)
• Hisao Uose, NTT Laboratories, GEMnet2/GALAXY Project
15
e-VLBI Demonstration in JGN II Osaka (Jan. 2005)

e-VLBI data transfer achieved:
~700 Mbps from Haystack to Osaka
~900 Mbps from Kashima to Osaka
Software cross-correlation: ~240 Mbps per station

[Diagram: MIT Haystack –1G/10G– WAS/CHI – Abilene (10G) – JGN II Int'l (10G) – Tokyo –10G– Osaka; NICT Kashima –1G– Tokyo; at Osaka, RAID disks feed 4 Apple G5 server machines (8 CPUs in total) over 1G ×4 links, in pairs #1,#2 / #3,#4 / #5,#6 / #7,#8]
[Photo: Dr. Koyama]

*TCP parameters were tuned for the path.