
1

End to End Protocols

2

End to End Protocols

We already saw:
  basic protocols
  Stop & wait (correct but low performance)

Now:
  Window based protocols
    • Go Back N
    • Selective Repeat
  TCP protocol
  UDP protocol

3

Pipelined protocols

Pipelining: sender allows multiple "in-flight", yet-to-be-acknowledged pkts
  range of sequence numbers must be increased
  buffering at sender and/or receiver

Two generic forms of pipelined protocols: Go-Back-N, Selective Repeat

4

Go Back N (GBN)

5

Go-Back-N Sender:
  k-bit seq # in pkt header
  "window" of up to N consecutive unACKed pkts allowed

ACK(n): ACKs all pkts up to, including seq # n - "cumulative ACK"
  may receive duplicate ACKs (see receiver)

timer for each in-flight pkt
timeout(n): retransmit pkt n and all higher seq # pkts in window

6

GBN: sender extended FSM

7

GBN: receiver extended FSM

receiver simple:
  ACK-only: always send ACK for correctly-received pkt with highest in-order seq #
    may generate duplicate ACKs
    need only remember expectedseqnum

out-of-order pkt:
  discard (don't buffer) -> no receiver buffering!
  ACK pkt with highest in-order seq #
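The sender and receiver rules above fit in a few dozen lines. Below is a minimal, illustrative Python sketch (not from the slides): the packet format, channel and timer handling are simplified assumptions, with the unreliable channel modeled as a plain list.

  class GBNSender:
      def __init__(self, N, channel):
          self.N = N                # window size
          self.base = 0             # oldest unACKed seq #
          self.nextseqnum = 0       # next seq # to use
          self.sent = {}            # seq # -> data, kept for possible retransmission
          self.channel = channel    # unreliable channel, modeled as a list of (seq, data)

      def send(self, data):
          if self.nextseqnum < self.base + self.N:       # room in the window?
              self.sent[self.nextseqnum] = data
              self.channel.append((self.nextseqnum, data))
              self.nextseqnum += 1
              return True
          return False                                   # window full: refuse (or buffer)

      def ack(self, n):
          # cumulative ACK: everything up to and including n has been received
          self.base = max(self.base, n + 1)

      def timeout(self):
          # Go-Back-N: retransmit every packet from base up to nextseqnum-1
          for seq in range(self.base, self.nextseqnum):
              self.channel.append((seq, self.sent[seq]))

  class GBNReceiver:
      def __init__(self):
          self.expectedseqnum = 0   # the only state the receiver needs
          self.delivered = []

      def receive(self, seq, data):
          if seq == self.expectedseqnum:                 # in-order: deliver
              self.delivered.append(data)
              self.expectedseqnum += 1
          # always ACK the highest in-order seq # received so far (-1 if none yet)
          return self.expectedseqnum - 1

With N = 4, dropping packet 2 leaves the receiver ACKing 1 for packets 3 and 4 (duplicate ACKs), and the sender's timeout resends 2, 3 and 4 - the go-back-N behaviour described above.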

8

GBN in action

9

GBN - correctness

Safety: The sequence numbers guarantee that packets are received in order: no gaps, no duplicates.
  Safety follows from expectedseqnum: the next segment is received exactly once.

Liveness: Eventually a timeout occurs and the sender re-sends the window, so eventually base is received correctly.
  Receiver: from that time on it ACKs at least base.
  Eventually an ACK will get through, and the sender will update to base (or more).

10

GBN - correctness

Clearing a FIFO channel:

[Diagram: once Data k is in the FIFO channel, no Data i<k can still be behind it (impossible); likewise, once ACK k is in the channel, no ACK i<k can still be behind it (impossible).]

Claim: After receiving Data/ACK k, no Data/ACK i<k is received.

Sufficient to use N+1 seq. num.: intuitively, at any moment the sender's N outstanding packets plus the receiver's expected packet span at most N+1 consecutive numbers, so with N+1 sequence numbers an old packet can never be mistaken for a new one.

11

Selective Repeat

12

Selective Repeat

receiver individually acknowledges all correctly received pkts
  buffers pkts, as needed, for eventual in-order delivery to upper layer

sender only resends pkts for which ACK not received
  sender timer for each unACKed pkt

sender window
  N consecutive seq #'s
  again limits seq #s of sent, unACKed pkts

13

Selective repeat: sender, receiver windows

14

Selective repeat

sender

data from above:
  if next available seq # in window, send pkt

timeout(n):
  resend pkt n, restart timer

ACK(n) in [sendbase, sendbase+N]:
  mark pkt n as received
  if n smallest unACKed pkt, advance window base to next unACKed seq #

receiver

pkt n in [rcvbase, rcvbase+N-1]:
  send ACK(n)
  out-of-order: buffer
  in-order: deliver (also deliver buffered, in-order pkts), advance window to next not-yet-received pkt

pkt n in [rcvbase-N, rcvbase-1]:
  ACK(n)

otherwise: ignore
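A minimal Python sketch of the receiver-side rules above (illustrative, not from the slides; sequence numbers are treated as unbounded integers, so the window comparisons are plain range checks):

  class SRReceiver:
      def __init__(self, N):
          self.N = N                # window size
          self.rcvbase = 0          # lowest not-yet-delivered seq #
          self.buffer = {}          # out-of-order packets awaiting delivery
          self.delivered = []

      def receive(self, seq, data):
          if self.rcvbase <= seq < self.rcvbase + self.N:
              self.buffer[seq] = data                    # buffer (possibly out of order)
              # deliver any in-order run starting at rcvbase, advancing the window
              while self.rcvbase in self.buffer:
                  self.delivered.append(self.buffer.pop(self.rcvbase))
                  self.rcvbase += 1
              return seq                                 # ACK(n)
          elif self.rcvbase - self.N <= seq < self.rcvbase:
              return seq                                 # re-ACK an already delivered pkt
          return None                                    # otherwise: ignore

The sender side differs from GBN mainly in keeping one timer per unACKed packet and resending only the timed-out packet.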

15

Selective repeat in action

16

Selective Repeat - Correctness

Infinite seq. num.:
  Safety: immediate from the seq. num.
  Liveness: eventually data and ACKs get through.

Finite seq. num.:
  Idea: re-use seq. num.; use fewer bits to encode them.
  Number of seq. num.: at least N. Needs more!

17

Selective repeat: dilemma

Example: seq #’s: 0, 1, 2, 3 window size=3

receiver sees no difference in two scenarios!

incorrectly passes duplicate data as new in (a)

Q: what relationship between seq # size and window size?
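The standard answer (stated here for completeness, not on the slide): the window must be no larger than half the sequence-number space. With k-bit sequence numbers there are 2^k values, so the window size must satisfy N <= 2^(k-1). In the example there are 4 sequence numbers, so the window may be at most 2; a window of 3 is exactly what creates the ambiguity between retransmitted old data and new data.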

18

Choosing the window size

Small window size: idle link (under-utilization).

Large window size:
  buffer space
  delay after loss

Ideal window size (assuming very low loss):
  RTT = round trip time
  C = link capacity
  window size = RTT * C

What happens with no loss?
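As an illustrative calculation for the ideal window size above (numbers assumed, not from the slide): with RTT = 100 ms and C = 10 Mb/s, the window should be 0.1 s * 10 Mb/s = 1 Mbit = 125 kilobytes of data in flight to keep the link fully utilized.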

19

End to End Protocols:

Multiplexing & Demultiplexing

20

Multiplexing/demultiplexing

Recall: segment - unit of data exchanged between transport layer entities, aka TPDU: transport protocol data unit

Demultiplexing: delivering received segments (TPDUs) to the correct app-layer processes

[Diagram: hosts with application/transport/network layers; processes P1-P4 exchange messages; each segment consists of a segment header plus application-layer data and is demultiplexed to the correct process at the receiver.]

21

Multiplexing/demultiplexing

multiplexing/demultiplexing: based on sender, receiver port numbers, IP addresses
  source, dest port #s in each segment
  recall: well-known port numbers for specific applications

Multiplexing: gathering data from multiple app processes, enveloping data with header (later used for demultiplexing)

[TCP/UDP segment format, 32 bits wide: source port #, dest port #, other header fields, application data (message).]
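A small illustrative example of demultiplexing by destination port, using Python's standard socket API (addresses and port numbers are arbitrary assumptions):

  import socket

  # Two "server" processes on the same host, each bound to its own UDP port.
  # The OS demultiplexes incoming segments to the right socket by destination port.
  sock_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock_a.bind(("127.0.0.1", 9001))      # app process A
  sock_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock_b.bind(("127.0.0.1", 9002))      # app process B

  # The client multiplexes: each segment carries (source port, dest port),
  # and the dest port decides which of the sockets above receives the data.
  client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  client.sendto(b"to A", ("127.0.0.1", 9001))
  client.sendto(b"to B", ("127.0.0.1", 9002))

  print(sock_a.recvfrom(1024))          # (b'to A', ('127.0.0.1', <client port>))
  print(sock_b.recvfrom(1024))          # (b'to B', ('127.0.0.1', <client port>))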

22

Multiplexing/demultiplexing: examples

port use: simple telnet app
  host A -> server B: source port: x, dest. port: 23
  server B -> host A: source port: 23, dest. port: x

port use: WWW server
  WWW client host A -> server B: source IP: A, dest IP: B, source port: x, dest. port: 80
  WWW client host C -> server B: source IP: C, dest IP: B, source port: x, dest. port: 80
  WWW client host C -> server B: source IP: C, dest IP: B, source port: y, dest. port: 80

23

TCP Protocol

24

TCP: Overview   RFCs: 793, 1122, 1323, 2018, 2581

point-to-point: one sender, one receiver

reliable, in-order byte stream: no "message boundaries"

pipelined: TCP congestion and flow control set window size

full duplex data:
  bi-directional data flow in same connection
  MSS: maximum segment size

connection-oriented:
  handshaking (exchange of control msgs) init's sender, receiver state before data exchange

flow controlled: sender will not overwhelm receiver

25

TCP segment structure

[TCP segment format, 32 bits wide:
  source port # | dest port #
  sequence number
  acknowledgement number
  head len | not used | U A P R S F | rcvr window size
  checksum | ptr to urgent data
  Options (variable length)
  application data (variable length)]

sequence/acknowledgement numbers: counting by bytes of data (not segments!)
rcvr window size: # bytes rcvr willing to accept
checksum: Internet checksum
URG: urgent data (generally not used)
ACK: ACK # valid
PSH: push data now (generally not used)
RST, SYN, FIN: connection estab (setup, teardown commands)
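The 20-byte fixed part of the header above can be laid out with Python's struct module. This is an illustrative sketch only: the field values are made up, options are omitted, and the checksum is left at zero rather than computed over the pseudo-header.

  import struct

  # !HHIIBBHHH = source port, dest port, sequence number, acknowledgement number,
  #              data offset/reserved, flags, window size, checksum, urgent pointer
  #              (20 bytes, network byte order)
  TCP_HDR = struct.Struct("!HHIIBBHHH")

  FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20   # flag bits

  def pack_tcp_header(src, dst, seq, ack, flags, window):
      offset_words = 5                  # 5 * 32-bit words = 20 bytes, no options
      return TCP_HDR.pack(src, dst, seq, ack, offset_words << 4, flags, window, 0, 0)

  hdr = pack_tcp_header(src=12345, dst=80, seq=42, ack=0, flags=SYN, window=65535)
  print(len(hdr), TCP_HDR.unpack(hdr))  # 20 (12345, 80, 42, 0, 80, 2, 65535, 0, 0)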

26

TCP seq. #'s and ACKs

Seq. #'s:
  byte stream "number" of first byte in segment's data

ACKs:
  seq # of next byte expected from other side
  cumulative ACK

Q: how receiver handles out-of-order segments
A: TCP spec doesn't say - up to implementor

[Simple telnet scenario: user at Host A types 'C'; A sends Seq=42, ACK=79, data = 'C'; Host B ACKs receipt of 'C' and echoes it back with Seq=79, ACK=43, data = 'C'; Host A ACKs receipt of the echoed 'C' with Seq=43, ACK=80.]

27

TCP: reliable data transfer

simplified sender, assuming:
  • one way data transfer
  • no flow, congestion control

event: data received from application above -> create, send segment
event: timer timeout for segment with seq # y -> retransmit segment
event: ACK received, with ACK # y -> ACK processing

(between actions, the sender waits for the next event)

28

TCP: reliable data transfer

Simplified TCP sender:

  sendbase = initial_sequence_number
  nextseqnum = initial_sequence_number

  loop (forever) {
    switch(event)

    event: data received from application above
      create TCP segment with sequence number nextseqnum
      start timer for segment nextseqnum
      pass segment to IP
      nextseqnum = nextseqnum + length(data)

    event: timer timeout for segment with sequence number y
      retransmit segment with sequence number y
      compute new timeout interval for segment y
      restart timer for sequence number y

    event: ACK received, with ACK field value of y
      if (y > sendbase) {           /* cumulative ACK of all data up to y */
        cancel all timers for segments with sequence numbers < y
        sendbase = y
      }
      else {                        /* a duplicate ACK for already ACKed segment */
        increment number of duplicate ACKs received for y
        if (number of duplicate ACKs received for y == 3) {
          /* TCP fast retransmit */
          resend segment with sequence number y
          restart timer for segment y
        }
      }
  } /* end of loop forever */

29

TCP ACK generation [RFC 1122, RFC 2581]

Event: in-order segment arrival, no gaps, everything else already ACKed
TCP receiver action: delayed ACK. Wait up to 500 ms for next segment. If no next segment, send ACK.

Event: in-order segment arrival, no gaps, one delayed ACK pending
TCP receiver action: immediately send single cumulative ACK.

Event: out-of-order segment arrival, higher-than-expected seq. #, gap detected
TCP receiver action: send duplicate ACK, indicating seq. # of next expected byte.

Event: arrival of segment that partially or completely fills gap
TCP receiver action: immediate ACK if segment starts at lower end of gap.

30

TCP: retransmission scenarios

[Lost ACK scenario: Host A sends Seq=92, 8 bytes data; Host B's ACK=100 is lost (X); A's timer expires and it retransmits Seq=92, 8 bytes data; B replies ACK=100 again.]

[Premature timeout, cumulative ACKs: Host A sends Seq=92, 8 bytes data and Seq=100, 20 bytes data; the ACKs (ACK=100, ACK=120) are delayed, the Seq=92 timer expires prematurely and A retransmits Seq=92, 8 bytes data; B's cumulative ACK=120 covers all the data.]

31

TCP Flow Control

receiver: explicitly informs sender of (dynamically changing) amount of free buffer space
  RcvWindow field in TCP segment

sender: keeps the amount of transmitted, unACKed data less than most recently received RcvWindow

flow control: sender won't overrun receiver's buffers by transmitting too much, too fast

receiver buffering:
  RcvBuffer = size of TCP Receive Buffer
  RcvWindow = amount of spare room in Buffer

32

TCP Round Trip Time and Timeout

Q: how to set TCP timeout value?
  longer than RTT
    note: RTT will vary
  too short: premature timeout, unnecessary retransmissions
  too long: slow reaction to segment loss

Q: how to estimate RTT?
  SampleRTT: measured time from segment transmission until ACK receipt
    ignore retransmissions, cumulatively ACKed segments
  SampleRTT will vary, want estimated RTT "smoother"
    use several recent measurements, not just current SampleRTT

33

TCP Round Trip Time and Timeout

EstimatedRTT = (1-x)*EstimatedRTT + x*SampleRTT
  exponential weighted moving average
  influence of a given sample decreases exponentially fast
  typical value of x: 0.1

Setting the timeout: EstimatedRTT plus "safety margin"
  large variation in EstimatedRTT -> larger safety margin

Timeout = EstimatedRTT + 4*Deviation
Deviation = (1-x)*Deviation + x*|SampleRTT - EstimatedRTT|
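The two EWMA formulas translate directly into code; a minimal sketch (x = 0.1 as on the slide; the initial EstimatedRTT and the sample values are assumptions):

  class RTTEstimator:
      def __init__(self, x=0.1, initial_rtt=1.0):
          self.x = x
          self.estimated_rtt = initial_rtt      # seconds
          self.deviation = 0.0

      def update(self, sample_rtt):
          err = sample_rtt - self.estimated_rtt
          # EstimatedRTT = (1-x)*EstimatedRTT + x*SampleRTT
          self.estimated_rtt += self.x * err
          # EWMA of the deviation of the sample from the previous estimate
          self.deviation = (1 - self.x) * self.deviation + self.x * abs(err)
          return self.timeout()

      def timeout(self):
          # Timeout = EstimatedRTT + 4*Deviation
          return self.estimated_rtt + 4 * self.deviation

  est = RTTEstimator()
  for sample in (0.50, 0.48, 0.90, 0.52):       # sample RTTs in seconds
      print(round(est.update(sample), 3))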

34

TCP Connection Management

Recall: TCP sender, receiver establish "connection" before exchanging data segments
  initialize TCP variables: seq. #s, buffers, flow control info (e.g. RcvWindow)
  client: connection initiator
    Socket clientSocket = new Socket("hostname", "port number");
  server: contacted by client
    Socket connectionSocket = welcomeSocket.accept();

Three way handshake:

Step 1: client sends TCP SYN control segment to server
  specifies initial seq #

Step 2: server receives SYN, replies with SYNACK control segment
  ACKs received SYN
  allocates buffers
  specifies server-to-receiver initial seq. #

Step 3: client sends ACK and data.

35

TCP Connection Management (cont.)

Closing a connection:

client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server.

Step 2: server receives FIN, replies with ACK. Closes connection, sends FIN.

[Timeline: client sends FIN (close); server ACKs the FIN and closes, sending its own FIN; client ACKs, enters timed wait, then the connection is closed.]

36

TCP Connection Management (cont.)

Step 3: client receives FIN, replies with ACK.
  Enters "timed wait" - will respond with ACK to received FINs

Step 4: server receives ACK. Connection closed.

Note: with small modification, can handle simultaneous FINs.

[Timeline: client (closing) receives the server's FIN, replies with ACK, enters timed wait, then closed; server (closing) receives the ACK and is closed.]

37

TCP Connection Management (cont)

TCP client lifecycle

TCP server lifecycle

38

Principles of Congestion Control

Congestion:
  informally: "too many sources sending too much data too fast for network to handle"
  different from flow control!
  manifestations:
    lost packets (buffer overflow at routers)
    long delays (queueing in router buffers)
  a top-10 problem!

39

Causes/costs of congestion: scenario 1

two senders, two receivers

one router, infinite buffers

no retransmission

large delays when congested

maximum achievable throughput

40

Causes/costs of congestion: scenario 2

one router, finite buffers

sender retransmission of lost packet

41

Causes/costs of congestion: scenario 2

always: λ_in = λ_out (goodput)

"perfect" retransmission only when loss: λ'_in > λ_out

retransmission of delayed (not lost) packet makes λ'_in larger (than perfect case) for the same λ_out

"costs" of congestion:
  more work (retrans) for given "goodput"
  unneeded retransmissions: link carries multiple copies of pkt

42

Causes/costs of congestion: scenario 3
  four senders
  multihop paths
  timeout/retransmit

Q: what happens as λ_in and λ'_in increase?

43

Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

44

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
  no explicit feedback from network
  congestion inferred from end-system observed loss, delay
  approach taken by TCP

Network-assisted congestion control:
  routers provide feedback to end systems
    single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
    explicit rate sender should send at

45

Case study: ATM ABR congestion control

ABR: available bit rate:
  "elastic service"
  if sender's path "underloaded": sender should use available bandwidth
  if sender's path congested: sender throttled to minimum guaranteed rate

RM (resource management) cells:
  sent by sender, interspersed with data cells
  bits in RM cell set by switches ("network-assisted")
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
  RM cells returned to sender by receiver, with bits intact

46

Case study: ATM ABR congestion control

two-byte ER (explicit rate) field in RM cell
  congested switch may lower ER value in cell
  sender's send rate is thus the minimum supportable rate on the path

EFCI bit in data cells: set to 1 in congested switch
  if data cell preceding RM cell has EFCI set, sender sets CI bit in returned RM cell

47

TCP Congestion Control
  end-end control (no network assistance)
  transmission rate limited by congestion window size, Congwin, over segments

w segments, each with MSS bytes, sent in one RTT:

  throughput = (w * MSS) / RTT   Bytes/sec

[Figure: Congwin as a window of w segments.]
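Illustrative numbers (assumed, not from the slide): with w = 10 segments, MSS = 1460 bytes and RTT = 100 ms, throughput = 10 * 1460 bytes / 0.1 s = 146,000 bytes/sec, i.e. roughly 1.2 Mb/s.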

48

TCP congestion control:

"probing" for usable bandwidth:
  ideally: transmit as fast as possible (Congwin as large as possible) without loss
  increase Congwin until loss (congestion)
  loss: decrease Congwin, then begin probing (increasing) again

two "phases":
  slow start
  congestion avoidance

important variables:
  Congwin
  threshold: defines the boundary between the slow start phase and the congestion avoidance phase

49

TCP Slowstart

exponential increase (per RTT) in window size (not so slow!)

loss event: timeout (Tahoe TCP) and/or three duplicate ACKs (Reno TCP)

Slowstart algorithm:
  initialize: Congwin = 1
  for (each segment ACKed)
    Congwin++
  until (loss event OR CongWin > threshold)

[Timeline, Host A to Host B: one segment in the first RTT, two segments in the second, four segments in the third.]

50

TCP Congestion Avoidance

Congestion avoidance:

  /* slowstart is over   */
  /* Congwin > threshold */
  Until (loss event) {
    every w segments ACKed:
      Congwin++
  }
  threshold = Congwin/2
  Congwin = 1
  perform slowstart (1)

(1) TCP Reno skips slowstart (fast recovery) after three duplicate ACKs

[Figure: Congwin over time for Tahoe and Reno.]
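A toy per-RTT simulation of the two phases (Tahoe-style; the initial threshold and the loss pattern are assumptions, not from the slides):

  def congwin_trace(rtts, threshold=16, losses=frozenset({9})):
      """Congwin (in segments) at each RTT, with loss events at the given RTT indices."""
      congwin, trace = 1, []
      for t in range(rtts):
          trace.append(congwin)
          if t in losses:                               # loss: timeout / 3 duplicate ACKs
              threshold = max(congwin // 2, 1)
              congwin = 1                               # Tahoe: back to 1, slow-start again
          elif congwin < threshold:
              congwin = min(congwin * 2, threshold)     # slow start: double per RTT
          else:
              congwin += 1                              # congestion avoidance: +1 per RTT
      return trace

  print(congwin_trace(16))
  # [1, 2, 4, 8, 16, 17, 18, 19, 20, 21, 1, 2, 4, 8, 10, 11]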

51

TCP Fairness

Fairness goal: if N TCP sessions share same bottleneck link, each should get 1/N of link capacity

TCP congestion avoidance:
  AIMD: additive increase, multiplicative decrease
    increase window by 1 per RTT
    decrease window by factor of 2 on loss event

[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R.]

52

Why is TCP fair?

Two competing sessions:
  additive increase gives slope of 1, as throughput increases
  multiplicative decrease decreases throughput proportionally

[Figure: Connection 1 throughput vs. Connection 2 throughput, both bounded by R; the trajectory alternates between congestion avoidance (additive increase) and loss (decrease window by factor of 2), converging toward the equal bandwidth share line.]
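A toy AIMD simulation of this argument (illustrative; the capacity and starting windows are assumptions): whenever the two windows together exceed the bottleneck capacity R both connections see a loss and halve their windows, otherwise each adds 1 per RTT.

  def aimd(w1, w2, R=100, rounds=200):
      """Two AIMD flows sharing a bottleneck of capacity R (in segments per RTT)."""
      for _ in range(rounds):
          if w1 + w2 > R:                  # combined load exceeds capacity: both lose
              w1, w2 = w1 / 2, w2 / 2      # multiplicative decrease
          else:
              w1, w2 = w1 + 1, w2 + 1      # additive increase (1 per RTT)
      return w1, w2

  print(aimd(5, 80))   # starts far from fair; the two windows end up nearly equal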

53

TCP strains

Tahoe

Reno

Vegas

54

Vegas

55

From Reno to Vegas

                       Reno                        Vegas
RTT measurement        coarse                      fine
Fast Retransmit        3 dup ACKs                  dup ACK + timer
CongWin decrease       for each Fast Retrans.      for the 1st Fast Retrans. in an RTT
Congestion detection   loss detection              also based on measured throughput

56

Some more examples

57

TCP latency modeling

Q: How long does it take to receive an object from a Web server after sending a request?
  TCP connection establishment
  data transfer delay

Notation, assumptions:
  assume one link between client and server of rate R
  assume: fixed congestion window, W segments
  S: MSS (bits)
  O: object size (bits)
  no retransmissions (no loss, no corruption)

Two cases to consider:
  WS/R > RTT + S/R: ACK for first segment in window returns before window's worth of data sent
  WS/R < RTT + S/R: wait for ACK after sending window's worth of data

58

TCP latency Modeling

Case 1: latency = 2 RTT + O/R

Case 2: latency = 2 RTT + O/R + (K-1) [S/R + RTT - WS/R]

where K := O/(WS) is the number of windows that cover the object.
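The two cases translate into a small calculator; an illustrative sketch of the slide's static-window model (the example numbers at the bottom are arbitrary assumptions):

  from math import ceil

  def static_window_latency(O, S, R, RTT, W):
      """Latency to fetch an O-bit object: MSS S bits, link rate R bit/s, window W segments."""
      K = ceil(O / (W * S))                    # number of windows that cover the object
      if W * S / R > RTT + S / R:
          return 2 * RTT + O / R               # case 1: ACKs return before the window drains
      # case 2: the sender stalls after each of the first K-1 windows
      return 2 * RTT + O / R + (K - 1) * (S / R + RTT - W * S / R)

  # e.g. 500 kbit object, 4 kbit segments, 1 Mb/s link, 100 ms RTT, window of 4 segments
  print(static_window_latency(O=500e3, S=4e3, R=1e6, RTT=0.1, W=4))   # ~3.43 s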

59

TCP Latency Modeling: Slow Start

Now suppose window grows according to slow start. Will show that the latency of one object of size O is:

Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP stalls at the server:

P = min{Q, K-1}

- where Q is the number of times the server would stall if the object were of infinite size.

- and K is the number of windows that cover the object.

60

TCP Latency Modeling: Slow Start (cont.)

[Timing diagram, time at client vs. time at server: initiate TCP connection (RTT), request object, first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R, complete transmission, object delivered.]

Example:

O/S = 15 segments

K = 4 windows

Q = 2

P = min{K-1,Q} = 2

Server stalls P=2 times.

61

TCP Latency Modeling: Slow Start (cont.)

time to transmit the kth window = 2^(k-1) * S/R

time from when server starts to send a segment until server receives an acknowledgement = S/R + RTT

stall time after the kth window = [S/R + RTT - 2^(k-1) * S/R]^+

latency = 2 RTT + O/R + sum over p=1..P of stallTime_p
        = 2 RTT + O/R + sum over k=1..P of [S/R + RTT - 2^(k-1) * S/R]
        = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

[Timing diagram as on the previous slide: initiate TCP connection, request object, windows of S/R, 2S/R, 4S/R, 8S/R, complete transmission, object delivered.]

62

UDP Protocol

63

UDP: User Datagram Protocol [RFC 768]

"no frills," "bare bones" Internet transport protocol

"best effort" service, UDP segments may be:
  lost
  delivered out of order to app

connectionless:
  no handshaking between UDP sender, receiver
  each UDP segment handled independently of others

Why is there a UDP?
  no connection establishment (which can add delay)
  simple: no connection state at sender, receiver
  small segment header
  no congestion control: UDP can blast away as fast as desired

64

UDP: more

often used for streaming multimedia apps
  loss tolerant
  rate sensitive

other UDP uses (why?): DNS, SNMP

reliable transfer over UDP: add reliability at application layer
  application-specific error recovery!

[UDP segment format, 32 bits wide: source port #, dest port #, length, checksum, followed by application data (message). Length is in bytes of the UDP segment, including the header.]

65

UDP checksum

Goal: detect "errors" (e.g., flipped bits) in transmitted segment

Sender:
  treat segment contents as sequence of 16-bit integers
  checksum: addition (1's complement sum) of segment contents
  sender puts checksum value into UDP checksum field

Receiver:
  compute checksum of received segment
  check if computed checksum equals checksum field value:
    NO - error detected
    YES - no error detected. But maybe errors nonetheless?
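A sketch of the 16-bit one's-complement sum described above (illustrative; real UDP also covers a pseudo-header of IP addresses, protocol and length, which is omitted here):

  def ones_complement_sum16(data: bytes) -> int:
      """Add the data as big-endian 16-bit integers with end-around carry."""
      if len(data) % 2:
          data += b"\x00"                           # pad odd-length data with a zero byte
      total = 0
      for i in range(0, len(data), 2):
          total += (data[i] << 8) | data[i + 1]
          total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
      return total

  def udp_checksum(segment: bytes) -> int:
      # sender: one's complement of the one's-complement sum;
      # a result of 0 is transmitted as 0xFFFF
      csum = ~ones_complement_sum16(segment) & 0xFFFF
      return csum or 0xFFFF

  # 8-byte UDP header (src 1234, dst 80, length 12, checksum 0) plus 4 bytes of data
  seg = b"\x04\xd2\x00\x50\x00\x0c\x00\x00" + b"data"
  print(hex(udp_checksum(seg)))
  # receiver: the one's-complement sum of the segment with the checksum filled in is 0xFFFF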

66

Summary

principles behind transport layer services:
  multiplexing/demultiplexing
  reliable data transfer
  flow control
  congestion control

instantiation and implementation in the Internet:
  UDP
  TCP