Updates on Backward Congestion Notification
Davide Bergamasco ([email protected])
Cisco Systems, Inc.
IEEE 802 Plenary Meeting
San Francisco, USA
July 20, 2005
Agenda
• Previous presentation
• May 2005 IEEE 802.1 Interim Meeting in Berlin, Germany
• http://www.ieee802.org/1/files/public/docs2005/new-bergamasco-backward-congestion-notification-0505.pdf
• Updates
• Algorithm
• Derivative to improve stability
• Solicit Bit to accelerate recovery
• AQM in rate limiter queues to reduce blocking
• Simulations
[Figure: queue occupancy Q over time t oscillating around the equilibrium threshold Qeq; generation of BCN messages stops in the regions where Q is already heading back toward Qeq]
Queue Stability
• ISSUE: Overshoots and undershoots accumulate over time
• SOLUTION: Signal only when
  • Q > Qeq && dQ/dt > 0
  • Q < Qeq && dQ/dt < 0
• Easy to implement in hardware: just an up/down counter
  • Increment @ every enqueue
  • Decrement @ every dequeue
• Reduces signaling rate by 50%!!
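The derivative check can be sketched in software. This is an illustrative model, not code from the proposal: it assumes the up/down counter is read and reset at each sampling event, so its sign approximates dQ/dt over the last interval.

```python
class QueueMonitor:
    """Illustrative model of the derivative-based BCN signaling condition."""

    def __init__(self, q_eq):
        self.q_eq = q_eq    # equilibrium threshold Qeq (packets)
        self.delta = 0      # up/down counter: enqueues minus dequeues

    def on_enqueue(self):
        self.delta += 1     # increment @ every enqueue

    def on_dequeue(self):
        self.delta -= 1     # decrement @ every dequeue

    def should_signal(self, q_len):
        # Read the counter as the sign of dQ/dt, then reset it
        # (the reset-per-sample behavior is an assumption).
        dq = self.delta
        self.delta = 0
        # Signal only when the queue is moving away from Qeq:
        if q_len > self.q_eq and dq > 0:
            return True
        if q_len < self.q_eq and dq < 0:
            return True
        return False
```

The last branch is what suppresses the accumulated overshoot/undershoot signaling: a queue below Qeq that is already refilling generates no message.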
Solicit Bit
[Figure: rate limiter rate R over time t; after a BCN0 the rate falls to Rmin and recovery begins after a random time; while R < Rsolicit the Solicit bit is forced on, successive positive messages (BCN+1, BCN+2, BCN+4) raise R, and the bit is forced off once R >= Rsolicit]
• ISSUE: When the rate is very low, recovery may take too long because positive feedback depends on frame sampling.
• SOLUTION: Solicit Bit in the RL tag
  • if R < Rsolicit, the Solicit bit is set
  • if R >= Rsolicit, the Solicit bit is cleared
• If possible, the CP will generate a BCN+ for every frame with the Solicit bit on, regardless of sampling
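A minimal sketch of the rule at the rate limiter, assuming an illustrative value for Rsolicit (the slide gives no number):

```python
R_SOLICIT = 100e6  # bits/s; illustrative threshold, not from the proposal


def solicit_bit(current_rate):
    # Set the Solicit bit in the RL tag while the rate is very low,
    # so the congestion point answers every tagged frame with a BCN+
    # instead of waiting for a sampling hit; clear it otherwise.
    return current_rate < R_SOLICIT
```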
Changes to Detection & Signaling
[Figure: queue regions from EMPTY QUEUE through EQUILIBRIUM to FULL QUEUE, divided by thresholds T-4 … T-1 below Qeq and T+1 … T+4 above; occupancy in each region maps to the message to generate: BCN-4 … BCN-1, BCN0 or No Message around equilibrium, and BCN+1 … BCN+4]

[Figure: message-generation flowchart at the congestion point]
• Incoming frames are sampled with probability P.
• A frame that is not sampled is still evaluated if it is an RL-tagged frame with the Solicit bit set; otherwise no message is generated (NOP).
• The BCN type (+, 0, or -) is determined from the queue region.
• Derivative filter: a BCN+ is sent only if dQ/dt > 0 and a BCN- only if dQ/dt < 0 (otherwise NOP); a BCN0 is sent without the derivative check.
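One possible reading of the detection and signaling rules as code. The message names and the derivative filter follow the slides; the threshold spacing in `classify` is an assumption, since the slide labels the thresholds T-4 … T+4 without giving values.

```python
def classify(q_len, q_eq):
    # Map queue occupancy to a severity in -4 .. +4 relative to Qeq.
    # Evenly spaced bands of Qeq/4 are an illustrative assumption.
    band = max(q_eq // 4, 1)
    return max(-4, min(4, (q_len - q_eq) // band))


def bcn_message(q_len, q_eq, dq, sampled, rl_tagged, solicit):
    # A frame triggers evaluation if it was sampled (probability P)
    # or carries an RL tag with the Solicit bit set.
    if not (sampled or (rl_tagged and solicit)):
        return None  # NOP
    severity = classify(q_len, q_eq)
    # Derivative filter: "+" messages need a growing queue,
    # "-" messages need a draining queue; BCN0 is unconditional.
    if severity > 0 and dq <= 0:
        return None
    if severity < 0 and dq >= 0:
        return None
    return f"BCN{severity:+d}" if severity else "BCN0"
```

For example, a sampled frame seen while the queue sits two bands above Qeq and growing would yield a BCN+2; the same occupancy with a draining queue yields no message.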
Rate Limiter Queue AQM

[Figure: edge node between the data input and the network core; incoming packets are matched against rate limiter queues R1/F1 … Rn/Fn and marked with RATE_LIMITED_TAG, while unmatched packets bypass the limiters; BCN messages from the congested point arrive on the control input; the RL queues use tail drop or flow control]
• ISSUE: Blocking @ RL queues due to buffer exhaustion
• SOLUTION: add an AQM mechanism to control buffer usage
Rate Limiter Queue AQM
• Traditional AQM schemes such as RED (mark/drop) don't work well for RL queues:
  • Buffer too small
  • Very few flows
  • Traffic statistics very different from Internet traffic
• A novel and very simple solution based on:
  • Threshold QAQM on the RL queue (e.g., 10 pkts)
  • Fixed drop or mark probability P (e.g., 1%)
  • Two counters:
    • CTCP: number of TCP packets in the RL queue
    • CUDP: number of UDP packets in the RL queue
• Drop or mark TCP packets with probability P when CTCP > QAQM
• Drop UDP packets when CUDP > QAQM

[Figure: RL queue holding interleaved TCP and UDP packets, with the QAQM threshold marked]
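The rules above can be sketched directly. QAQM = 10 and P = 1% are the example values from the slide; the function and argument names are illustrative.

```python
import random

Q_AQM = 10   # threshold on RL queue occupancy (packets), slide example
P = 0.01     # fixed drop/mark probability, slide example


def admit(pkt_is_tcp, c_tcp, c_udp, mark_capable):
    """Return 'accept', 'mark', or 'drop' for a packet entering the RL queue.

    c_tcp / c_udp are the current CTCP / CUDP occupancy counters.
    """
    if pkt_is_tcp:
        # TCP reacts to marks/drops: apply probability P past the threshold.
        if c_tcp > Q_AQM and random.random() < P:
            return "mark" if mark_capable else "drop"
        return "accept"
    # UDP: the slide drops unconditionally once CUDP exceeds the threshold.
    if c_udp > Q_AQM:
        return "drop"
    return "accept"
```

The per-protocol counters keep an unresponsive UDP burst from consuming the small RL buffer and starving the TCP flows sharing the queue.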
Simulation Environment (1)

[Figure: simulation topology — bulk and on/off TCP source pairs STb1/STo1 … STb4/STo4 on edge switches ES2–ES5, reference sources SR1 (on ES1) and SR2, and jam source SJ (on ES6), all feeding a core switch toward destinations DTb/DTo, DR1, and DR2; congestion occurs at the core switch; flows labeled TCP Bulk, TCP Ref1, TCP Ref2, and TCP On/Off]
Simulation Environment (2)
• Short Range, High Speed DC Network
  • Link Capacity = 10 Gbps
  • Switch latency = 1 µs
  • Link Length = 100 m (0.5 µs propagation delay)
  • Control loop delay ~ 3 µs
• Workload
  1) TCP only
    • STb1-STb4: 3 parallel connections transferring 1 MB each continuously
    • STi1-STi4: 3 parallel connections transferring 1 MB then waiting 10 ms
    • SR1: 1 connection transferring 10 KB (avg 16 µs wait)
    • SR2: 1 connection transferring 10 KB (1 µs wait)
  2) 80% TCP + 20% UDP
    • STb1-STb4: same as above
    • STi1-STi4: same as above
    • SR1-SR2: same as above
    • SU1-SU4: variable length bursts with average offered load of 2 Gbps
Simulation Goals
• Study the performance of BCN with various congestion management techniques at the RL
• No Link-level Flow Control
• Link-level Flow Control
• Link-level Flow Control + RL simple AQM (drop/mark)
• Metrics:
  • Throughput and latency of TCP bulk and on/off connections
  • Throughput and latency of reference flows
  • Bottleneck link utilization
  • Buffer utilization
Bulk & On/Off Application Throughput & Latency (Workload 1: TCP Only)
RL Congestion Management        Bulk TCP     Bulk TCP      On/Off TCP   On/Off TCP    Bottleneck Link
Mechanism                       Tput (Tps)   Latency (µs)  Tput (Tps)   Latency (µs)  Throughput (Gbps)
No Flow Control                 67.17        15,220        25.92        27,880        9.85
Flow Control                    63.00        15,970        32.92        20,337        10.00
Flow Control + RL AQM (drop)    61.83        16,249        33.42        19,570        10.00
Flow Control + RL AQM (mark)    59.17        17,043        36.67        16,873        10.00
Reference Applications Throughput & Latency (Workload 1: TCP Only)
RL Congestion Management       Ref1 TCP     Ref1 TCP      Ref2 TCP     Ref2 TCP
Mechanism                      Tput (Tps)   Latency (µs)  Tput (Tps)   Latency (µs)
No Flow Control                6702         132.8         30334        31.97
Flow Control                   7108         124.30        31038        31.22
Flow Control + AQM (drop)      7210         122.33        31307        30.94
Flow Control + AQM (mark)      7419         119.21        31362        30.89
Buffer Utilization: No FC
Buffer Utilization: FC
Buffer Utilization: FC + RL AQM (drop)
Buffer Utilization: FC + RL AQM (mark)
Summary & Next Steps
• A number of improvements have been made to BCN
• Derivative to improve stability
• Solicit Bit to speed up recovery
• AQM in RL queues to reduce blocking
• Future Steps
• Build a Prototype???
• …