Lecture Notes 2002

Lecture 1: Introduction to Communication Systems and Computer Networks

• Why Computer Communications?

• Some Example Communication Networks

• Data Communication Standards

• Layered Architectures for Communication Systems

• Introduction to the ISO-OSI Reference Model

Why Computer Communications?

• Remote access to shared resources such as:

  • Mass storage
  • Programs/Packages
  • Data
  • Processing power
  • Printers, plotters etc.

• Improved reliability: distributed systems degrade gracefully, centralised systems tend to crash abruptly.

• Price/Performance ratio: mainframe performance is around 20x that of a PC, cost is at least 100x.

• Human-human communication such as:
  • electronic mail (email)
  • world-wide web, newsgroups
  • teleconferencing
  • bulletin boards

Some Example Communication Networks

1. Simple Computer-Computer Connections(Halsall p6, fig 1.2)

Some Example Communication Networks

2. Local Area Networks(Halsall p7, fig 1.3)

Some Example Communication Networks

3. Enterprise-wide Private Network(Halsall p8, fig 1.4)


Some Example Communication Networks

4. Public Carrier Data Networks(Halsall p9, fig 1.5)

Some Example Communication Networks

5. Worldwide Internetwork(Halsall p10, fig 1.6)

Data Communication Standards

• Until relatively recently, different manufacturers' computer systems could not exchange information (they were "closed systems").

• International standards define interfaces, information format and control of the exchange of information.

• Any equipment adhering to such international standards can be used interchangeably with equipment from other manufacturers adhering to the same standards.

• Standards may be:
  • "de jure" ("by law"), e.g. ISO-OSI
  • "de facto" ("from the fact"), e.g. DOS, Windows, Unix, TCP/IP.

Layered Architectures for Communication Systems

• Any communication system must provide:

  • Error free, timely delivery of information to the correct destination (network services).
  • Presentation of received information to the end user ('application process') in a suitable format that the end user can recognise and manipulate (end user services).

• Clearly a complete communication system will be a complex mix of hardware and software to provide these functions.

• To deal with this complexity, it is vital that such systems are designed and implemented in a highly structured fashion

Layered Architectures for Communication Systems

• With layered architectures, the system is conceptually broken down into layers, where each layer is defined by:

  • The services it provides to the next layer above.
  • The services it uses from the layer below in order to carry out its functions.
  • The protocols it uses to communicate with the corresponding layer on a remote machine (peer-to-peer protocols).


• Physical communication is only achieved at the bottom layer (the "physical layer") which usually provides for simple bit transmission.

Layered Architectures for Communication Systems

• e.g. a simple 3-layer architecture offering a simple file transfer service:

   Application Process                Application Process
            |                                   |
   ------------------------           ------------------------
           Layer 3                            Layer 3
   ------------------------           ------------------------
            |                                   |
   ------------------------           ------------------------
           Layer 2                            Layer 2
   ------------------------           ------------------------
            |                                   |
   ------------------------           ------------------------
           Layer 1                            Layer 1
   ------------------------           ------------------------
            |                                   |
            +--------- Physical Communication --------+

Layered Architectures for Communication Systems

• In this simple 3-layer architecture, the services offered by the layers to the layer above might be:

• Layer 1: simple bit transmission

• Layer 2: error free point to point transmission of data blocks

• Layer 3: a file transfer service

• Possible peer-to-peer protocols might define:

• Layer 1: voltage levels, bit rate...

• Layer 2: error detection codes, error control protocol...

• Layer 3: a file transfer protocol

Introduction to the ISO-OSI Reference Model


• The International Standards Organisation (ISO) have defined a reference model for Open Systems Interconnection (OSI) based on a 7 layer architecture.

• Each layer provides a specific set of functions using the services provided by the layer below.

Introduction to the ISO-OSI Reference Model

• Overall Structure of the ISO Reference Model(Halsall p14, fig 1.10)

Introduction to the ISO-OSI Reference Model

• Protocol Layer Summary(Halsall p15, fig 1.11)


Lecture 2: Error Control

• Forward and Feedback Error Control

• Error Detecting Codes

• Simple parity checking

• Block parity checking

• Cyclic redundancy checking

Forward and Feedback Error Control

• Both methods transmit additional (redundant) error checking information along with the data.

• Forward Error Control:
  • Transmission errors are detected and corrected at the receiver.
  • Employs error correcting codes.

• Feedback Error Control:
  • Transmission errors are detected at the receiver.
  • Employs error detecting codes together with a retransmission control scheme.

• Error correction requires substantially more redundant checking information than error detection. Therefore, feedback error control is more widely used, although forward error control is usually preferred for:
  • Simplex systems
  • Broadcast systems
  • Transmission links with large loop delay (e.g. satellite systems), where the amount of data in transit is large.

Error Detecting Codes

• Simple Parity Checking

• Appends an extra bit to data units, set to '0' or '1' depending on the number of 1's in the data unit.

• e.g. if 11001101 is the data unit, a parity bit set to 1 is appended (even parity) since the number of 1's in the data is odd.

• The receiver counts the number of 1's in the received block (data+parity bit) and if this is even, it is assumed no errors have occurred.

• Simple parity checking can only detect odd numbers of errors, and if a long sequence of data is corrupted (a burst error), the probability of error detection is only 1/2.
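The even-parity rule above is easy to illustrate with a short Python sketch; the bit pattern 11001101 is the one from the example, and everything else is purely illustrative:

```python
def even_parity_bit(data_bits):
    """Return the bit that makes the total number of 1s (data + parity) even."""
    return sum(data_bits) % 2

data = [1, 1, 0, 0, 1, 1, 0, 1]     # 11001101: odd number of 1s
parity = even_parity_bit(data)      # -> 1, as in the example above
block = data + [parity]             # transmitted block = data + parity bit
print(sum(block) % 2 == 0)          # receiver's check: True means "assume no errors"

block[0] ^= 1
block[1] ^= 1                       # two bit errors cancel out...
print(sum(block) % 2 == 0)          # ...still True: even numbers of errors go undetected
```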


Error Detecting Codes

• Block Parity Checking

• Data is formed into a block, N bits wide by M bits high, and both longitudinal and transverse parity bits are added.(Halsall p129, fig 3.15)

• This allows detection of all burst errors of length up to and including N, although some error patterns are undetected.

• Block parity checking also allows single bit error correction.

Error Detecting Codes

• Cyclic Redundancy Checking

• Extra check bits are calculated and appended to data so that the extended frame (data + CRC check bits) is exactly divisible by some predetermined binary number (called the division sequence or generator polynomial)

• The receiver divides the received frame by the division sequence and if the remainder is 0, assumes no errors.

• Mathematical operations (division etc.) are carried out using modulo 2 arithmetic (no carries or borrows)

Error Detecting Codes

• Basic algorithm for determining the cyclic redundancy checksum:

1. Append (n-1) 0's to the right hand side of the data (n is the number of bits in the division sequence).

2. Divide the resulting bit sequence by the division sequence to give the remainder. This remainder is the check sum.

3. Append the check sum to the original data and transmit the resulting data block
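A minimal Python sketch of these three steps, using modulo-2 long division (XOR, no carries or borrows). The data word and the short generator x^4 + x + 1 are arbitrary illustrative choices, not the values of Halsall's worked example or the CRC-CCITT polynomial:

```python
def mod2_divide(frame_bits, divisor_bits):
    """Modulo-2 long division; returns the (n-1)-bit remainder of frame/divisor."""
    frame, n = list(frame_bits), len(divisor_bits)
    for i in range(len(frame) - n + 1):
        if frame[i] == 1:
            for j in range(n):
                frame[i + j] ^= divisor_bits[j]     # subtraction = XOR in modulo-2
    return frame[-(n - 1):]

def crc_checksum(data_bits, divisor_bits):
    """Steps 1 and 2: append (n-1) zeros, divide, keep the remainder."""
    padded = list(data_bits) + [0] * (len(divisor_bits) - 1)
    return mod2_divide(padded, divisor_bits)

data    = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # illustrative data word
divisor = [1, 0, 0, 1, 1]                  # illustrative generator: x^4 + x + 1
transmitted = data + crc_checksum(data, divisor)   # step 3
print(mod2_divide(transmitted, divisor))   # [0, 0, 0, 0] -> receiver assumes no errors
```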


Error Detecting Codes

• Example CRC check sum calculation(Halsall p 132, fig 3.17)

Error Detecting Codes

• The CRC division sequence is often represented as a "generator polynomial"

• e.g. the sequence 1011 is represented as 1·x^3 + 0·x^2 + 1·x + 1, or simply x^3 + x + 1

• CRC checking is very powerful (i.e. it detects almost all errors) and is easily implemented in very fast hardware.

• An example CRC generator polynomial is CRC-CCITT:

x^16 + x^12 + x^5 + 1

CRC using this will detect

  • All burst errors of length 16 or less
  • All errors with an odd number of bits
  • 99.997% of 17-bit burst errors
  • 99.998% of longer burst errors

CRC - Mathematical Analysis

• A binary number containing m bits can be represented as a polynomial of degree (m-1):

D(x) = d_(m-1)·x^(m-1) + d_(m-2)·x^(m-2) + ... + d_1·x + d_0

where each d_i is 0 or 1.

• If n is the number of bits in the division sequence, then adding (n-1) 0's to the RHS is equivalent to multiplying by x^(n-1), giving x^(n-1)·D(x).

• Let the generator polynomial be G(x). After performing the division of x^(n-1)·D(x) by G(x), we get a remainder R(x) and a quotient Q(x) satisfying

x^(n-1)·D(x) = Q(x)·G(x) + R(x)        (Equation 1)

• Appending this remainder to the original data D(x) gives us the transmitted block, T(x), which is equal to

T(x) = x^(n-1)·D(x) + R(x)

which, from Equation 1, is exactly divisible by G(x) (recall that addition and subtraction are identical in modulo-2 arithmetic, so adding R(x) cancels the remainder). If no errors occur, the receiver therefore calculates a zero remainder.

CRC - Mathematical Analysis


• Suppose errors do occur, and are represented by the error polynomial E(x). Then the received frame, T'(x), is given by

T'(x) = T(x) + E(x) = Q(x)·G(x) + E(x)

• Clearly, for T'(x) to give a zero remainder when divided by G(x), E(x) must be exactly divisible by G(x)

• Put another way: Error patterns which are multiples of the generating polynomial will not be detected; all other error patterns will.
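Continuing the Python sketch above, the following lines demonstrate that last point numerically: an error pattern that is itself a multiple of G(x) (here, the generator bit pattern shifted by a couple of places) still leaves a zero remainder and so goes undetected:

```python
# Reuses mod2_divide, transmitted and divisor from the earlier CRC sketch.
error = [0] * len(transmitted)
for k, bit in enumerate(divisor):
    error[2 + k] ^= bit                    # the generator pattern, shifted: a multiple of G(x)
corrupted = [t ^ e for t, e in zip(transmitted, error)]
print(mod2_divide(corrupted, divisor))     # [0, 0, 0, 0] -- this error is NOT detected
```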

CRC - Hardware Implementation

• As mentioned earlier, the calculations needed to perform CRC are easily implemented in fast hardware.

• This hardware takes the form of feedback shift registers e.g.(Halsall p 135, fig 3.18)


Lecture 3: Error Control Protocols

• Generally employ error detecting codes in conjunction with some form of retransmission mechanism.

• The most commonly used mechanisms are called automatic repeat request (ARQ) protocols

• ARQ protocols come in various forms, including:

• Idle ARQ

• Continuous ARQ:
  • Selective repeat
  • Go-back-N

Idle ARQ (Stop-and-Wait)

General mechanism:

• Tx transmits a single information frame (I-frame), starts a timer and waits.

• If Rx receives an I-frame without errors, it accepts the frame and sends back a short acknowledgement (ACK-frame).

• If Rx receives an I-frame containing errors it discards the I-frame.

• On receipt of an error-free ACK frame, Tx sends the next I-frame, restarts the timer and waits.

• If the Tx timer expires before an error-free ACK frame is received, Tx resends the I-frame, restarts the timer and waits.

Idle ARQ (Stop-and-Wait)

Additional comments:

• The Tx timeout interval must be greater than the I-frame transmission time + (2 x end-end propagation delay) + processing time at Rx.

• I-frames and ACK-frames must include a sequence number, to allow Rx to discriminate between duplicate copies of I-frames.

• Optionally, Rx may send back a negative acknowledgement frame (NACK) when an erroneous I-frame is received.

• If the link propagation delay is large compared to the I-frame transmission time, Idle ARQ has poor link utilisation.
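The mechanism and comments above can be collapsed into a toy sender-side sketch in Python. The payloads, timeout value and loss probability are invented, and the channel and receiver are reduced to a single random "did an error-free ACK come back?" step; the point is only to show the alternating sequence number and the resend-on-timeout loop:

```python
import random

def idle_arq_send(payloads, timeout_s=1.0, loss_prob=0.3):
    """Toy stop-and-wait sender: 1-bit sequence numbers, retransmit on timeout."""
    seq = 0
    for data in payloads:
        while True:
            print(f"Tx: send I-frame seq={seq} data={data!r}, start timer")
            # Channel + receiver collapsed into one step for illustration
            if random.random() > loss_prob:
                print(f"Tx: error-free ACK seq={seq} received")
                break
            print(f"Tx: timer expired after {timeout_s}s, retransmitting")
        seq ^= 1            # alternate the sequence number for the next I-frame

idle_arq_send(["frame A", "frame B"])
```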


Idle ARQ (Stop-and-Wait)

(Halsall p171, fig. 4.1)

Continuous ARQ

• Tx sends I-frames continuously without waiting for ACK-frames to be returned (although the number of unacknowledged frames allowed to be outstanding is limited to a certain maximum, called the "window size").

• All I-frames and ACK-frames contain sequence numbers.

• Copies of all transmitted I-frames are kept at Tx in a retransmission list.

• Rx returns an ACK-frame for each correctly received I-frame.

• When Tx receives an error free ACK-frame, it removes the corresponding I-frame from its retransmission list.

Continuous ARQ

• If an I-frame or its corresponding ACK-frame is lost or damaged, Tx detects this either via a timeout, or because ACKs arrive out of order.

• In this event, two main retransmission schemes are available:

• Selective Repeat - Tx retransmits only the unacknowledged I-frame, then carries on

• Go-back-N - Tx retransmits the unacknowledged I-frame and all succeeding frames that were transmitted in the interim.

Continuous ARQ

• Selective Repeat(Halsall p191, fig. 4.12)


Continuous ARQ

• Go-back-N(Halsall p196, fig. 4.14)

Link Utilisation of ARQ Protocols

• Clearly the use of ARQ protocols entails some overhead which reduces the link utilisation.

• Utilisation, U, may be defined as

U = Tf / Tt

where

Tf = Time for the transmitter to emit a single frame

Tt = Total time that the line is engaged in the transmission of a single frame.

• Parameters which affect link utilisation for ARQ protocols include: frame transmission time, link propagation delay, error rate, window size (continuous ARQ).

• In the following treatment, processing time at Tx and Rx are assumed to be negligible, and ACKs are assumed to be very short.

Link Utilisation of ARQ Protocols

Case 1: No Errors

Idle ARQ

The total time to successfully exchange a frame is equal to the sum of:

• frame transmission time, Tf

  • frame propagation time
  • ACK propagation time

The last two are each equal to the link propagation delay Tp.

So the link utilisation, U, is given by

U = Tf / (Tf + 2·Tp) = 1 / (1 + 2a)

where a = Tp/Tf


Link Utilisation of ARQ Protocols

Case 1: No Errors

Continuous ARQ

With no errors, utilisation for selective repeat and go-back-N are the same (there are no retransmissions).

Let the window size be K.

If K ≥ 1+2a, the transmitter can send continuously without pause, since ACK frames come back before the window size is reached; in this case U = 1.

If K < 1+2a, the transmitter sends K frames and then has to wait until a time Tf+2Tp from the start of transmission until ACKs start returning. The utilisation is therefore given by

U = K·Tf / (Tf + 2·Tp) = K / (1 + 2a)

Link Utilisation of ARQ Protocols

Case 2: With Errors

Idle ARQ

To calculate utilisation with a frame error rate of P, we need to estimate how many times (on average) a frame must be transmitted to be received without error.

The probability that a frame must be transmitted i times before being successfully received is equal to

P^(i-1)·(1-P)

That is, we have (i-1) unsuccessful attempts followed by one successful attempt.

The average number of times a frame must be transmitted is equal to

Σ i·P^(i-1)·(1-P)   (summed over i = 1 to ∞)

It can be shown that this sum is simply equal to

1/(1-P)

The utilisation of idle ARQ with errors is therefore equal to:

U = (1-P) / (1+2a)

Link Utilisation of ARQ Protocols

Case 2: With Errors

Continuous ARQ (Selective Repeat)


Since only erroneous frames are retransmitted, the utilisation is simply reduced by the average number of times a frame needs to be sent for it to be received without errors.

From the earlier case of Idle ARQ with errors, this average number is equal to 1/(1-P).

The utilisation of selective repeat with errors is therefore given by

U = 1 - P                 when K ≥ 1 + 2a

U = (1-P)K/(1+2a) when K < 1 + 2a

Link Utilisation of ARQ Protocols

Case 2: With Errors

Continuous ARQ (Go-back-N)

The situation is more complicated here since an erroneous frame entails the transmitter "going-back-N" and retransmitting several frames.

Let f(i) be the total number of frames which must be retransmitted if the original frame must be transmitted i times. If, for each erroneous transmission of the original frame, the transmitter has to "Go-back-N" then

f(i) = 1 + (i - 1)N

The average total number of frames, Nr, which must be transmitted for the successful exchange of a single frame is then given by

Nr = Σ f(i)·P^(i-1)·(1-P) = Σ (1 + (i-1)·N)·P^(i-1)·(1-P)   (summed over i = 1 to ∞)

This can be simplified to

Nr = 1 + N·P/(1-P) = (1 - P + N·P) / (1 - P)

Link Utilisation of ARQ Protocols

Case 2: With Errors

Continuous ARQ (Go-back-N) (continued)

To complete the estimation of utilisation, we need to determine the value of N - i.e. exactly how far must the transmitter go back when a frame error occurs?

The value of N depends on the relative values of the window size, K, and the normalised link propagation delay, 1 + 2a.

If K ≥ 1 + 2a, then N = 1 + 2a

If K < 1 + 2a, then N = K


After some manipulation, this gives us the utilisation of the Go-back-N protocol as

U = (1-P) / (1 + 2aP)                          when K ≥ 1 + 2a

U = K(1-P) / ((1+2a)(1 - P + KP))              when K < 1 + 2a

(Note: This is a different, and better, derivation than given in Halsall's text - for more details see W. Stallings, "Data and Computer Communications", 5th edition, 1997, pages 190-196).
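The closed-form results above are easy to compare numerically. The sketch below evaluates them side by side in Python; the link parameters (a = 10, K = 7, P = 0.001) are invented purely for illustration:

```python
def util_idle_arq(a, P=0.0):
    """Idle (stop-and-wait) ARQ utilisation."""
    return (1 - P) / (1 + 2 * a)

def util_selective_repeat(a, K, P=0.0):
    """Continuous ARQ, selective repeat."""
    return (1 - P) if K >= 1 + 2 * a else K * (1 - P) / (1 + 2 * a)

def util_go_back_n(a, K, P=0.0):
    """Continuous ARQ, go-back-N (the Stallings-style result quoted above)."""
    if K >= 1 + 2 * a:
        return (1 - P) / (1 + 2 * a * P)
    return K * (1 - P) / ((1 + 2 * a) * (1 - P + K * P))

a, K, P = 10, 7, 0.001      # illustrative link: long propagation delay, small window
print("idle ARQ         ", util_idle_arq(a, P))
print("selective repeat ", util_selective_repeat(a, K, P))
print("go-back-N        ", util_go_back_n(a, K, P))
```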

Link Utilisation of ARQ Protocols

The following equalities are used in these derivations (valid for |x| < 1):

Σ x^i = 1/(1-x)   (summed over i = 0 to ∞)        Σ i·x^(i-1) = 1/(1-x)^2   (summed over i = 1 to ∞)


Lecture 4: Protocol Specification and Verification

• The need for formal methods:

• To allow unambiguous and complete protocol specification using mathematical formalism

• To allow formal analytic proofs of correctness, and automated verification

• To provide a basis for the generation of test cases to verify protocol implementations.

Protocol Specification

• Most methods for specifying a communication protocol are based on modelling the protocol as a finite state machine or automaton - the protocol entity can only be in one of a finite number of defined states at any instant.

• Commonly used methods include:

• State transition diagrams and extended event-state tables

• High level structured programs, or specification languages (e.g. Estelle, SDL, LOTOS).

• Petri Nets

Protocol Specification

• Model of communication subsystem architecture and protocol entity interfaces.(Halsall p178, fig. 4.4)

State Based Specification Methods

• State based models can be defined in terms of:

• Subsystem states (e.g. Idle)

• Incoming events, which cause state transitions (e.g. ACK-frame received)

• Outgoing events, usually generated as a result of an incoming event (e.g. send next I-frame)

• Predicates or boolean variables (e.g. IF SequenceNo in outstanding I-Frame = SequenceNo in ACK-frame)

State Based Specification Methods

• Idle-ARQ events, states and predicates - primary(Halsall p179, fig. 4.5a)


State Based Specification Methods

• Idle-ARQ events, states and predicates - secondary(Halsall p180, fig. 4.5b)

State Based Specification Methods

• Idle ARQ state transition diagram and extended event-state table - primary(Halsall p181, fig. 4.6)

State Based Specification Methods

• Idle ARQ state transition diagram and extended event-state table - secondary(Halsall p181, fig. 4.7)

High-Level Language Specification Methods

• Idle ARQ pseudocode - primary(Halsall p183, fig. 4.8a)

High-Level Language Specification Methods

• Idle ARQ Pseudocode - secondary(Halsall p183, fig. 4.8b)

High-Level Language Specification Methods

• Specialised specification languages have been developed for defining state-driven systems such as protocols.

• One such language is Estelle - Extended State Transition Language - which is an extended version of Pascal which allows explicit representation of transition rules and actions.

• Example portion of Estelle - idle ARQ primary.(Halsall p184, fig. 4.8c)

Petri Net Models

• Widely used in studying all types of concurrent systems, a Petri net is made up of the following four elements:

• Places which represent the state of part of the system (e.g. primary, secondary, channel)

• Transitions (state transitions)

• Arcs which connect input and output places to transitions


• Tokens which indicate the current state of the net.

Petri Net Models

• The rules for "executing" the Petri net are:

• The net is initially marked - tokens are deposited in certain places to indicate the initial state of the system.

• A transition is enabled if all of its input places are in possession of a token.

• An enabled transition may fire at any time, removing a token from each of its input places and depositing a token in each output place.

• If two or more transitions are enabled, any one of them may fire (selected at random) - this enables the modelling of nondeterminism in the system.
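These rules are compact enough to execute directly. The Python sketch below implements the marking/enabling/firing cycle for a tiny invented two-transition net (it is not the Idle ARQ net of the figure referenced below):

```python
import random

def enabled(transitions, marking):
    """A transition is enabled when every one of its input places holds a token."""
    return [t for t, (inputs, _) in transitions.items()
            if all(marking[p] > 0 for p in inputs)]

def fire(t, transitions, marking):
    """Fire t: remove a token from each input place, deposit one in each output place."""
    inputs, outputs = transitions[t]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

transitions = {"send":    (["ready"],      ["in_transit"]),
               "deliver": (["in_transit"], ["acked"])}
marking = {"ready": 1, "in_transit": 0, "acked": 0}     # initial marking

while (e := enabled(transitions, marking)):
    fire(random.choice(e), transitions, marking)        # nondeterministic choice of firing
print(marking)                                          # {'ready': 0, 'in_transit': 0, 'acked': 1}
```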

Petri Net Models

• A Petri net model for Idle ARQ (Tanenbaum, 'Computer Networks', 3rd Ed., 1996, p225, fig. 3.23)

Protocol Verification

• A formal specification of a protocol may be subjected to automated or semi-automated verification.

• The most commonly used method is that of state exploration (reachability analysis):

• An initial state of the system is defined

• All system states reachable from the initial state are determined by systematically exploring all transitions.

• Reachable states are analysed to determine whether they manifest errors.

• Among the many possible protocol design errors are:

• Incompleteness (e.g. the specification may not say what is to happen when a particular event occurs in a particular state.)

• Deadlock (e.g. a subset of states exists for which it is impossible to exit and continue).

• Redundant states (states which are never entered).


Lecture 5: Local Area Networks 1

• Definition of a LAN

• LAN Topologies

• LAN Medium Access Control Methods

• LAN Protocol Standards

Definition of a LAN

• A computer network used to connect machines in a single building or localised group of buildings

• LANs are usually owned, installed and maintained by the organisation.

LAN Topologies

• A LAN topology describes the physical layout of the network cabling and the way in which connected nodes access the network.

• Choice of topology is affected by a number of factors including: economy; type of cable used; ease of maintenance; reliability.

LAN Topologies

• Star Topology

Nodes communicate via a central switch (e.g. a private digital exchange). (Halsall p274, fig. 6.2a)

LAN Topologies

• Ring Topology

Nodes are connected together in a closed loop or ring. Data flow round the ring is usually one-way, and nodes contain active repeaters.(Halsall p274, fig. 6.2b)

LAN Topologies

• Bus Topology

A single network cable is routed to all nodes. Nodes "tap" onto the shared cable.(Halsall p274, fig. 6.2c)


LAN Topologies

• Hub/Tree Topology

A combination of star/bus or star/ring. The hub is simply the bus or ring wiring collapsed into a central unit, and does not perform switching.(Halsall p274, fig. 6.2d)

LAN Medium Access Control Methods

• With bus and ring topologies (the most common), nodes are connected by a single transmission channel

• Nodes must obey a discipline which determines the way in which access to the shared transmission medium is controlled. This is the medium access control method.

• Two main MAC methods in widespread use:

  • Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
  • Token passing

LAN Medium Access Control Methods

• Carrier Sense Multiple Access with Collision Detection (CSMA/CD)

• CSMA/CD is a contention-based protocol used on broadcast (e.g. bus) networks.

• Basic set of rules for operation of a node wishing to transmit data:

1. If the shared channel is quiescent, then transmit a packet of data.

2. If the channel is busy, monitor the channel until it becomes free, then transmit a packet.

3. While transmitting a packet, monitor the channel for collisions. If a collision is detected, abort packet transmission, wait a random amount of time and GOTO step 1.

• All nodes read all packets from the channel. Packets contain destination addresses and error check bits. When a node reads a packet containing its own address and with no errors, the packet is accepted.

LAN Medium Access Control Methods

• CSMA/CD Operation in Flowchart Form(Halsall p291, fig. 6.11)

LAN Medium Access Control Methods

• Token Passing

• Can be employed on ring and bus topology networks


• A control token is passed from node to node; a node may only transmit when in possession of the token, and must pass on the token when finished transmitting data.

LAN Medium Access Control Methods

• Token Ring

• Nodes are connected using a ring topology network.

• The token is passed around the ring from node to node. A node wishing to transmit data waits until it reads the token.

• When the token arrives, the node removes it and then begins transmitting one or more data frames.

• Data frames circulate round the ring. All nodes inspect the frame destination address. The node for which the frame is intended makes a copy of the frame.

• Finally when the frame arrives back at the sending node, it is removed by that node. The node then regenerates the token and passes it to the next station in the ring.

LAN Medium Access Control Methods

• Token Ring(Halsall p282, fig. 6.6a)

LAN Medium Access Control Methods

• Token Bus

• Nodes are connected over a bus topology network.

• Token passing similar in principle to token ring, except that the token must now contain an address field (since the bus is a broadcast medium)

• Every node must keep a record of which node is next in the logical ring

• An Aside: In reality, token passing is more complex than described. It commonly implements priorities and must have some method built into the protocol to deal with loss of token and the adding/removal of stations to/from the network.

LAN Medium Access Control Methods

• Token Bus(Halsall p282, fig. 6.6b)

LAN Medium Access Control Methods


• Comparison of CSMA/CD and token passing

                          CSMA/CD                          Token passing

Access determination      Contention                       Token

Packet length             > 2 x propagation delay          No restriction

Principal advantage       Simplicity                       Regulated/fair access

Principal disadvantage    Performance degrades             Complexity
                          under heavy load

LAN Medium Access Control Methods

• Comparison of CSMA/CD and token passing(Halsall p316, fig. 6.23)

LAN Protocol Standards

• Halsall p342, fig. 6.38


Lecture 6:Local Area Networks 2

• IEEE Standard 802.3 LANs ("Ethernet")

• Introduction

• Cabling Options

• Frame format and operational parameters

• Switched IEEE 802.3 LANs

IEEE Standard 802.3 LANs

• Original Ethernet developed by Xerox (Metcalfe and Boggs, 1976), then adopted and further developed by Xerox, DEC and Intel. Finally extended and standardised by the IEEE as standard 802.3.

• Main features of IEEE 802.3 networks:

• bus based topology, using CSMA/CD medium access control

• main data rate is 10 Mbps (although other data rates are included in the standard)

• a variety of transmission media and cabling options are specified

IEEE Standard 802.3 LANs

• Cabling options:

10BASE5    Thick coax       500 m max. length
10BASE2    Thin coax        200 m max. length
10BASET    Twisted pair     100 m max. length
10BASEF    Optical fibre    1000 m max. length

IEEE Standard 802.3 LANs

• 10BASE5

• Earliest version of 802.3, also known as "Thick Ethernet"

• Station connects to thick coaxial cable via a transceiver cable and transceiver unit:(Halsall p286, figs. 6.8a, 6.8b)

IEEE Standard 802.3 LANs

• 10BASE5

• Functions of transceiver unit:

  • Send and receive data to/from the cable
  • Collision detection
  • Provide electrical isolation between cable and LAN interface electronics
  • Protect shared cable from transceiver/DTE malfunctions (e.g. "jabber")


• Transceiver schematic:(Halsall p286, fig. 6.8c)

IEEE 802.3 Standard LANs

• 10BASE2

• Broadly similar to 10BASE5 but uses cheaper cable. Sometimes called "Thin Ethernet" or "Cheapernet"

• Transceiver electronics is incorporated into the LAN interface board, so a transceiver cable is not needed and the coax runs straight to the DTE.

IEEE Standard 802.3 LANs

• 10BASET

• Uses dual twisted pair as the transmission medium

• Twisted pairs run to a network hub (hence a physical star topology). One pair is used for transmit, the other for receive

• A collision is detected when a node senses incoming data on the receive pair while it is transmitting.

• Hub contains repeater electronics(Halsall p288, fig. 6.9)

IEEE 802.3 Standard LANs

• 10BASEF

• Similar physical star topology as 10BASET but using dual optical fibre cable for longer transmission distances

• A general point about commercial 802.3 network cards: most provide multiple connectors to support the different types of transmission medium

IEEE 802.3 Standard LANs

• Frame format:(Halsall p289 fig. 6.10a)

• Preamble contains seven octets, each equal to 10101010, to provide synchronisation.


• Frame sizes are limited to 64 bytes minimum (see tutorials for the reason for this), and may contain up to 1500 bytes of data.

• Frames are error checked using CRC

IEEE 802.3 Standard LANs

• Operational Parameters(Halsall p289, fig. 6.10b)

• IEEE 802.3 networks use the truncated binary exponential backoff algorithm when there are repeated collisions
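A small Python sketch of that backoff rule. The 51.2 µs slot time, the cap of 10 on the exponent and the limit of 16 attempts are the usual 10 Mbps 802.3 figures, quoted here from memory rather than from the operational-parameter table referenced above:

```python
import random

def backoff_delay(attempt, slot_time=51.2e-6, max_exponent=10, max_attempts=16):
    """Truncated binary exponential backoff: wait a random number of slot times."""
    if attempt > max_attempts:
        raise RuntimeError("excessive collisions: frame transmission aborted")
    k = min(attempt, max_exponent)              # exponent is truncated at max_exponent
    return random.randint(0, 2 ** k - 1) * slot_time

print(backoff_delay(3))   # after the 3rd collision: between 0 and 7 slot times
```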

IEEE 802.3 Standard LANs

• Switched 802.3 LANs

• A basic 802.3 hub repeats an incoming transmission to all outgoing links and clearly only one transmission can be in progress at any one time

• By increasing the complexity of the hub electronics, the hub can operate in non-broadcast mode:

• hub reads source addresses from packets and builds up a table of MAC addresses and corresponding ports

• using this table, the hub can repeat packets only to the ports to which they are addressed, and so several 10 Mbps paths can effectively be in use simultaneously.

IEEE 802.3 Standard LANs

• Switched IEEE 802.3 LANs

• To operate in this mode, the hub must be able to repeat several frames in parallel. This can be done with the following arrangement:(Halsall p356, fig. 7.2a)

IEEE 802.3 Standard LANs

• Switched IEEE 802.3 LANs

• Heavily used paths (e.g. connections to file servers, connections between hubs) can use higher data rates (in multiples of 10 Mbps):(Halsall p356 fig. 7.2b)


Lecture 7: High Speed LANs and Bridged LANs

• Fast Ethernet Networks

• Fibre Distributed Data Interface (FDDI) Networks

• Distributed Queue Dual Bus (DQDB) Networks

• Bridged Local Area Networks

Fast Ethernet Networks

• Aim: obtain an order of magnitude increase in speed (100 Mbps compared to 10 Mbps) while retaining the same wiring systems, MAC method and frame formats.

• Two versions available:

• 100BASE-T4 (uses voice grade 4-pair cable)

• 100BASE-X (uses shielded twisted pair, or fibre optic cable)

Fast Ethernet Networks

• Architecture

• A convergence sublayer provides the interface between the standard IEEE 802.3 MAC sublayer and the underlying physical medium dependent sublayer:(Halsall p358, fig. 7.3)

Fast Ethernet Networks

• 100BASE-T4

• Each node is connected to the hub by four twisted pairs, used as follows:

Pair 1: Transmit only (and collision detection)
Pair 2: Receive only (and collision detection)
Pair 3: Bidirectional
Pair 4: Bidirectional

i.e. transmission uses pairs 1, 3 and 4; reception uses pairs 2, 3, 4. (Halsall p360, fig. 7.4a)

Fast Ethernet Networks

• Each twisted pair carries data at 33.3 Mbps (giving the composite data rate of 3 x 33.3 = 100 Mbps)

• The limited bandwidth of the twisted pair cable means that Manchester encoding (used in 10BASET) cannot be employed. Instead an encoding method called 8B6T is used:

• 8B6T takes 8 binary symbols (bits) and converts them into 6 ternary (3-level) symbols. This reduces the baud rate on each cable to 25 Mbaud


• 6 ternary symbols gives 729 (3^6) possible codewords of which only 256 (2^8) are needed to encode 8 bits of data. Ternary codewords are chosen to achieve DC balance, and to ensure all codewords contain at least two signal transitions (for synchronisation).

Fast Ethernet Networks

• 8B6T Codeword Set(Halsall p361 table 7.1)

Fast Ethernet Networks

• 100BASE-X

• Uses high quality shielded twisted pair or optical fibre

• Collision detection method the same as 10BASET.

• Employs a coding technique called 4B5B (same as FDDI) to ensure guaranteed signal transitions at least every two bits for synchronisation

FDDI Networks

• FDDI - Fibre Distributed Data Interface.

• Dual ring topology (for reliability), normally implemented as a hub/tree

• Two types of station: dual attached stations (connected to both rings) and single attached stations (connected only to one ring - the primary). (Halsall p377, fig. 7.12)

FDDI Networks

• Transmission medium: multi-mode optical fibre, giving a maximum network length of 100km and a maximum internode spacing of 2km (a copper version, CDDI, is available for shorter distance working.)

• Each ring operates at 100Mbps using 4B5B encoding

• Employs a modified release after transmission token passing protocol called a timed token rotation protocol

FDDI Networks

• Timed token rotation protocol:

• A preset parameter - the target token rotation time (TTRT) - is defined (4ms - 165ms)

• For each rotation of the token, each station measures the time elapsed since it last acquired the token. This is the token rotation time (TRT).

• On receiving the token, a station computes TTRT - TRT, called the token hold time (THT). If the station has data to send, this is the maximum amount of time it is allowed to transmit for before passing on the token.


FDDI Networks

• FDDI provides an option for synchronous data - data that is delay sensitive and must be transferred within a guaranteed maximum time interval.

• To deal with synchronous as well as asynchronous data, the timed token rotation protocol is modified as follows:

• When a station wishes to send synchronous data, it sends a request to a network management station, stating how much of the network capacity it requires.

• If capacity is available, the network management station will allocate the requested amount of capacity.

• Every time a station receives the token, it may send its allocated amount of synchronous traffic.

• Any remaining time available up to the token hold time (THT) may be used for asynchronous transmission.

DQDB Networks

• DQDB - Distributed Queue Dual Bus

• Designed for both LANs and Metropolitan Area Networks (MANs) and standardised in IEEE 802.6

• Topology: Two parallel unidirectional buses are used to connect stations.

• Each bus has a head-end which generates a steady stream of 53 byte cells. Each cell travels downstream from the head-end.(Halsall p603, fig. 10.19a)

DQDB Networks

• Each cell contains a 44-byte payload field, together with control and address bits.

• DQDB, as its name implies, implements a distributed first-come-first-served queue medium access protocol:

• Each cell contains two control bits - a busy bit and a request bit. Inspection of these bits allows a station to determine:
  • if a cell is in use
  • if an "upstream" station is requesting to transmit

• Each station maintains two counters for each bus:
  • a request counter (RC), which is used for counting the number of requests
  • a countdown counter (CD), used when a station is ready to transmit

DQDB Networks

• DQDB MAC (continued)

• A request for transmission on one bus is made by setting the request bit in a cell on the other bus.


• For each bus, when a cell passes through:
  • If the busy bit = 0, decrement RC
  • If the request bit = 1, increment RC

• Therefore, at any time, a station "knows" how many requests there are outstanding by other stations for each bus

• When a station has data to send, it copies RC into CD and every time a cell goes by with the busy bit = 0, it decrements CD.

• When CD becomes 0, the station uses the next unused cell
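A toy Python sketch of the counter bookkeeping just described, for one station and one bus. It is a simplification of the notes' rules (busy bits are seen on the data bus and request bits on the opposite bus, but here both are simply passed in as flags), and is nowhere near the full IEEE 802.6 state machine:

```python
class DQDBCounters:
    """Request counter (RC) and countdown counter (CD) for one station, one bus."""
    def __init__(self):
        self.rc = 0          # outstanding requests from other stations
        self.cd = None       # None while the station has nothing queued to send

    def frame_ready(self):
        """Station has data: copy RC into CD and start counting new requests afresh."""
        self.cd, self.rc = self.rc, 0

    def cell_passes(self, busy, request):
        """Apply the rules for one passing cell; True means 'seize this empty cell'."""
        if request:
            self.rc += 1                       # another station asks to transmit
        if not busy:                           # an empty cell goes by
            if self.cd is None:
                self.rc = max(self.rc - 1, 0)  # it serves one outstanding request
            elif self.cd > 0:
                self.cd -= 1                   # earlier requests are served first
            else:
                self.cd = None
                return True                    # CD reached 0: this cell is ours
        return False
```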

Bridged Local Area Networks

• A bridge connects network segments locally or remotely so they appear to the user as a single network.

• A bridge reads, error checks and buffers incoming data frames on the network segments it connects. The frame destination is inspected and the frame is forwarded to the correct network segment.

• A bridge therefore operates at the MAC sublayer(Halsall p391, fig. 7.19)

Bridged Local Area Networks

• Reasons for using bridges:

• To segment a network for localisation of traffic

• To connect different LANs (e.g. Ethernet to token ring)

• To extend the size of a LAN (length and number of nodes)

• To increase reliability (using redundant bridges) and security.

Bridged Local Area Networks

• There are two main types of bridges:

• Transparent bridges:
  • primarily used on Ethernet-style networks
  • route calculation performed by the bridges

• Source routing bridges:
  • primarily used in token ring networks
  • route calculation performed by end stations

• We will look only at the operation of transparent bridges (details on source routing bridges can be found in Halsall p409-417)

Bridged Local Area Networks

• Operation of transparent local area network bridges


• Frame forwarding (filtering)

• A bridge maintains a routing table which stores, for each station, the outgoing port to be used for frames addressed to that station

• When a frame arrives at the bridge (which operates in promiscuous mode), the destination address is inspected and used to index into the routing table

• The frame is forwarded to the correct outgoing port (unless this is the same port on which the frame arrived, in which case the frame is discarded)

Bridged Local Area Networks

• Operation of transparent LAN bridges (continued)

• Bridge Learning

• When a bridge is first powered up, its routing table is empty.

• The bridge "learns" where stations are by inspecting the source address in each frame. Routing table entries are constructed using this information.

• When a frame is read by a bridge with no routing table entry for its destination address, the frame is forwarded to all other outgoing ports of the bridge (flooding)

• Removal and moving of stations is catered for by using an inactivity timer for each entry in the routing table.
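A minimal Python sketch of the forwarding, learning, flooding and ageing behaviour described above. The port numbers, MAC address strings and the 300-second ageing time are invented for illustration (the ageing value is a common default, not something stated in these notes):

```python
import time

class LearningBridge:
    """Toy transparent bridge: learn source addresses, forward, flood, age out."""
    def __init__(self, ports, ageing_time=300.0):
        self.ports = ports
        self.table = {}                    # MAC address -> (port, time last seen)
        self.ageing_time = ageing_time

    def receive(self, src, dst, in_port):
        now = time.time()
        self.table[src] = (in_port, now)   # learning: remember where src was heard
        self.table = {mac: (p, t) for mac, (p, t) in self.table.items()
                      if now - t < self.ageing_time}        # ageing (inactivity timer)
        entry = self.table.get(dst)
        if entry is None:
            return [p for p in self.ports if p != in_port]  # unknown destination: flood
        out_port, _ = entry
        return [] if out_port == in_port else [out_port]    # filter or forward

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive("aa:aa", "bb:bb", in_port=1))   # [2, 3]  (flooded)
print(bridge.receive("bb:bb", "aa:aa", in_port=2))   # [1]     (learned earlier)
```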


Lecture 8: Wireless Local Area Networks

• Applications, and Advantages over Wired LANs

• Wireless Media:
  • Radio
  • Infrared

• WLAN Transmission Schemes

• WLAN Medium Access Control Methods

• WLAN Standards

Wireless Local Area Networks

(Halsall p317, fig.6.24)

Applications and Advantages over Wired LANs

• Wired LANs incur costs of cabling, and of changing the wiring plan if the installation changes

• Wired LANs do not naturally support the increasing proliferation of hand-held terminals and portable computers

• Some locations may be difficult to wire

• DTEs may be mobile

• Some applications of WLANs: factories, hospitals, historic buildings, emergency LAN backup

Wireless Media - Radio

• Transmission impairments:

• Path loss (signal strength at the receiver is proportional to 1/d^2, and less with obstacles)

• Adjacent Channel Interference:
  • external interference
  • other PAUs; this interference can be reduced using a 3-cell repeat pattern

(Halsall p321, fig. 6.25a)

Wireless Media - Radio

• Transmission impairments (continued):


• Multipath Effects - multipath dispersion causes intersymbol interference and Rayleigh fading. (Can be reduced using adaptive equalisers.)(Halsall p321, fig. 6.25b)

Wireless Media - Infrared

• Uses LEDs or laser diodes

• Three main operation topologies/modes:
  • diffused mode
  • passive satellite
  • active satellite
(Halsall p324, fig. 6.26)

Transmission Schemes (Radio)

• Direct Sequence Spread Spectrum

• The source data to be transmitted is exclusive-ORed with a faster rate pseudorandom binary sequence (Halsall p327, fig. 6.28b)

• This has the effect of widening the spectrum of the information-carrying signal (hence "spread spectrum")

• The pseudorandom sequence is also known as the spreading sequence; each bit in the sequence is known as a chip, the resulting transmission bit rate as the chipping rate, and the number of bits in the sequence as the spreading factor.

Transmission Schemes (Radio)

• Direct Sequence Spread Spectrum

• The pseudorandom binary sequence is usually generated using a feedback shift register:(Halsall p327, fig.6.28a)


Transmission Schemes (Radio)

• Direct Sequence Spread Spectrum

• To allow synchronisation at the data rate (as opposed to the chipping rate), data frames are transmitted with a preamble (e.g. a sequence of 1's) and a start of frame delimiter:(Halsall p326, fig.6.27)

Transmission Schemes (Radio)

• Direct Sequence Spread Spectrum

• As the signal arrives at the receiver, the demodulated binary stream is fed into an autocorrelation detector:(Halsall p327, fig.6.28b,c)

Transmission Schemes (Radio)

• Direct Sequence Spread Spectrum

• An autocorrelation example (using the 11-bit spreading code 10110111000): (Halsall p329, fig. 6.29)
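A small Python sketch of the spreading and despreading idea, using the same 11-chip code 10110111000 as the example above. The despreader here is a crude majority-vote correlator, not the autocorrelation detector of the figure:

```python
def spread(data_bits, chips):
    """Direct-sequence spreading: XOR each data bit with every chip of the sequence."""
    return [bit ^ chip for bit in data_bits for chip in chips]

def despread(chip_stream, chips):
    """Compare each chip-length block against the spreading sequence (majority vote)."""
    n, out = len(chips), []
    for i in range(0, len(chip_stream), n):
        agreement = sum(b == c for b, c in zip(chip_stream[i:i + n], chips))
        out.append(0 if agreement > n // 2 else 1)
    return out

chips = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]     # the 11-chip code 10110111000
data  = [1, 0, 1]                             # illustrative data bits
assert despread(spread(data, chips), chips) == data
```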

Transmission Schemes (Radio)

• Frequency Hopping Spread Spectrum

• The allocated frequency band is divided into a number of lower frequency sub-bands called channels.

• Transmitter uses each channel for a short period of time before "hopping" to a different channel

• The hopping sequence is pseudorandom

Transmission Schemes (Radio)

• Frequency Hopping Spread Spectrum

• Fast and slow frequency hopping(Halsall p331, fig. 6.30)

Transmission Schemes (Infrared)

• Direct modulation (e.g. on-off keying with Manchester encoding)

• Pulse position modulation (reduces the power requirements of transmitter)

• Carrier modulation - binary data is modulated onto a suitable frequency carrier using FSK or PSK

• Multi-subcarrier modulation - the available bandwidth is divided into sub-bands, and each sub-band is used to transmit a portion of the bit stream. e.g. if 4 sub-bands are used, the bit rate on each sub-band is one quarter of that using direct carrier modulation (so each bit cell period is four times longer) - this reduces intersymbol interference caused by multipath effects.

(Note: the latter is also used on some radio WLANs)

WLAN Medium Access Control Methods

• Code Division Multiple Access (CDMA): different pairs of stations use different frequency hopping sequences

• CSMA/CD (modified):
  • the same collision detection methods as used on wired LANs cannot be used since, with radio and infrared, transmission and reception at the same time is not possible
  • CSMA/CD on wireless LANs uses a "comb" - a pseudorandom sequence appended to the start of each frame (different stations use different, random combs):
(Halsall p336, fig. 6.33)

WLAN Medium Access Control Methods

• CSMA/CA (Collision Avoidance) - when the medium becomes quiet, a station with data to send waits a random amount of time before transmitting(Halsall p337, fig.6.34)

WLAN Medium Access Control Methods

• TDMA - the portable access unit (PAU) establishes the slot/timing structure. Stations are offered timeslots on demand.

• FDMA - the PAU determines different frequency channels and assigns these on demand.

Wireless LAN Standards

• IEEE 802.11

1 and 2 Mbps using frequency hopping, direct sequence spread spectrum radio, and direct-modulated infrared.

4 Mbps using carrier modulated infrared

10 Mbps using multi-subcarrier-modulated infrared

• ETSI HiperLAN

User bit rate: 10-20 Mbps

Operating range: 50 m

Radio

Single-carrier modulation (with equalisation) using CSMA/CD or CSMA/CA


Lecture 9: Wide Area Networks I

• Introduction

• Packet switching and circuit switching

• The X.25 Interface for Packet Switched Networks

Wide Area Networks

• Introduction

• Wide Area Networks (WANs) span national, continental, global areas

• WANs include:

• Public Data Networks (PDNs), which are operated and administered by national telecomms authorities; international standards define the interfaces to these networks

• Enterprise networks, operated by large organisations (who can justify the cost by the amounts of traffic required to be conveyed); network links are leased from telecomms authorities

• Most WANs are either circuit-switched or packet-switched.

Circuit Switching

• A dedicated connection is established exclusively for the use of two subscribers for the duration of the connection

• Connection data rate is usually fixed, and end-end delays are small and fixed

• Usually involves connection setup and connection cleardown phases (although leased point-point permanent circuit switched connections are available)

• The current PSTN is circuit switched

• Circuit switched networks do not usually offer any kind of error or flow control

• Network congestion results in refusal of new connections

Packet Switching

• With packet switching, DTEs break down the data to be conveyed into packets, which are individually offered to the network

• Unlike circuit switching, no dedicated physical connections are established

• Packets contain some form of destination address, and are individually routed through a network of packet switching exchanges (PSEs) on a store-and-forward basis


Packet Switching

(Halsall p427 fig. 8.2)

Packet Switching

• Each network link carries interleaved packets from different sources to different destinations

• When packets arrive simultaneously at a PSE for routing on the same outgoing link, the packets are placed in a first-come-first-served queue or buffer

• Network congestion results in unpredictably long delays (PSE buffers become full)

• Packet switched networks will commonly provide error and flow control

• Two common implementations of packet switching:

  • Datagram
  • Virtual Circuit

Packet Switching

• Datagram Packet Switching

• No setup or cleardown phase for the connection

• Each packet contains the full destination address, used by PSEs to route packets individually

• Since packets are routed independently, they can arrive at the destination out of order - sequence numbering, buffering and reordering is required

Packet Switching

• Virtual Circuit Packet Switching

• A virtual circuit is established before data packets are sent; packets contain a virtual circuit identifier and all follow the same route

• To establish a virtual circuit to a specific destination DTE, a source DTE sends a special call request packet to its local PSE. Each call request packet contains

  • the full destination address
  • a virtual circuit identifier

• The PSE forwards the call request packet, and records the following in a routing table:
  • the incoming link number
  • the incoming virtual circuit identifier
  • the outgoing link number
  • the outgoing virtual circuit identifier

• As the call request packet is routed through the network to the destination, each PSE in the path taken creates a similar routing table entry
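A toy Python sketch of the per-PSE table entry just described; the link numbers, VCI values and field layout are invented for illustration:

```python
# One PSE's routing table, keyed by (link, virtual circuit identifier).
routing_table = {}

def on_call_request(in_link, in_vci, out_link, out_vci):
    """Record both directions so packets on the virtual circuit follow a fixed route."""
    routing_table[(in_link, in_vci)] = (out_link, out_vci)
    routing_table[(out_link, out_vci)] = (in_link, in_vci)

def forward_data_packet(link, vci):
    """Data packets carry only a VCI; the table says where they go next."""
    return routing_table[(link, vci)]

on_call_request(in_link=1, in_vci=5, out_link=3, out_vci=9)
print(forward_data_packet(1, 5))     # -> (3, 9)
```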

Packet Switching

• Virtual Circuit Packet Switching

• When the call request packet arrives at the destination DTE, the latter responds with a call accept packet, which is returned to the source DTE

• At this point the virtual circuit has been set up and a fixed route through the network has been established (through the PSE routing table entries)

• Subsequent data packets contain the virtual circuit identifier (as opposed to the full destination address) which is used by the PSEs to route packets

• If there are no errors, a virtual circuit PS network delivers packets in the correct sequence

Packet Switching

• Virtual Circuit Packet Switching(Halsall p428, fig. 8.3)

The X.25 Interface for Packet Switched Networks

• X.25 defines an interface between a DTE and a packet switching network

• Originally approved in 1976, subsequently revised in 1980, 1984, 1988, 1992, 1993; it is considered old for many purposes (most X.25 networks run at 64 kbps) but you should be aware of its existence.

• X.25 defines three layers (which correspond to the bottom 3 layers of the OSI model):

  • physical layer
  • link layer
  • packet layer

The X.25 Interface for Packet Switched Networks

(Halsall p430, fig. 8.5)

The X.25 Interface for Packet Switched Networks

• Physical layer is defined by another standard - X.21 (and X.21 bis)

• The frame layer provides the packet layer with a reliable (error free and no duplicates) packet transport facility between DTE and local PSE. It is based on another protocol called HDLC


• The packet layer provides a virtual circuit packet transfer facility, and deals with such issues as virtual circuit setup/cleardown, addressing, flow control and delivery confirmation.

• DTEs which do not "speak" X.25 (e.g. simple character mode terminals which do not have the facility to generate packets) connect to X.25 networks via a packet assembler/disassembler (PAD)


Lecture 10: Wide Area Networks II

• The Integrated Services Digital Network (ISDN)

• Frame Relay

• Broadband ISDN and ATM

The Integrated Services Digital Network (ISDN)

• Aims and Intended Services

Integrates video, audio and data in addition to telephony over the same digital network with a common interface.

• Telephony (Voice) services:

  • Digitised voice right up to the subscriber's premises (no analogue local loop).
  • Very fast call setup times internationally (since purely digital).
  • PABX-style services such as call transfer, calling party ID, conferencing, 'camp-on' etc., but available internationally.
  • Closed user groups, allowing an organisation to use the public network as its own local PABX.

• Data services:
  • High-speed (multiples of 64 kbps) switched data services, either circuit or packet switched.
  • Videotex (remote database access; e.g. on-line directory assistance), Teletex (E-mail), high-speed facsimile.
  • Telemetry and alarm services.

ISDN System Architecture and User-Network Interface

• Customer connects to ISDN network using a bidirectional digital 'bit-pipe'.

• Network terminating equipment (NTE) connects the customer's premises to the local ISDN exchange.

• Various types of access points are defined:

  • 'R' access point: used to connect devices using existing interface standards (such as X.21, V.24) to an ISDN terminal adaptor;
  • 'S' access point: connects ISDN devices locally on the customer's premises;
  • 'T' access point: connects the customer's premises to the local ISDN exchange.

ISDN System Architecture and User-Network Interface

• ISDN Customer Access Points(Halsall p 463, fig. 8.27)


ISDN System Architecture and User-Network Interface

• ISDN 'bit pipes' provide multiple channels interleaved using time division multiplexing

• Two ISDN access rates are common:

• basic rate access: (2B + D). B channels are 64 kbps, D channels are 16 kbps, giving composite bit-pipe user data rate of 144 kbps (actual bit rate is 192 kbps, including synchronisation and framing)

• primary rate access (30B+D, Europe; 23B+D, USA and Japan) giving composite bit rates of 2.048Mbps in Europe (which fits in nicely with CCITT PCM hierarchy) and 1.544Mbps in USA/Japan which fits in nicely with AT&T's T1 system.

Frame Relay

• Existing X.25 networks perform switching and multiplexing at the packet layer, even though information arrives in frames. i.e. frames need to be reassembled to form packets, which are then routed, and split up into frames again for retransmission over the correct outgoing link.

• X.25 employs flow control and error correction (using retransmission protocols) at both the frame level and the packet level.

• Clearly this is appropriate for a low quality network (such as an analogue PSTN for which X.25 was originally designed) but is extremely inefficient for a high-speed low error rate network.

Frame Relay

• Frame relay alleviates these problems by switching and multiplexing at the frame level (hence its name).

• In addition, frame relay does not provide any error correction within the network (although it will discard erroneous frames). Higher layer, end-to-end protocols are responsible for error correction.

• Frame relay operates over current networks giving users end-to-end data rates of typically 2Mbps.

• A typical use of frame relay is to connect geographically dispersed LANs in an enterprise WAN.

Broadband ISDN and ATM

• Overall aim of B-ISDN

• to provide a single new network to replace the entire telephone system and all the specialised data networks with a single integrated network for all kinds of information transfer

• In addition to telephony, B-ISDN will support services such as

  • video on demand
  • live TV from many sources
  • full motion multimedia electronic mail
  • CD quality music
  • LAN interconnection
  • very high speed data transport services

• The proposed access rates for B-ISDN are 155 Mbps and 622 Mbps

Broadband ISDN and ATM

• The proposed implementation for B-ISDN is Asynchronous Transfer Mode (ATM)

• Summary of ATM:

• uses a modification of the virtual circuit packet switching model; a virtual channel is set up between two end users through the network and a variable-rate full duplex flow of fixed-size cells is exchanged over the connection;

• use of small, fixed-size cells (53 bytes) allows faster switching and lower queueing delay for high priority cells;

Broadband ISDN and ATM

• To deal with traffic of very different characteristics and very different requirements, ATM offers a number of service categories:(Tanenbaum p459)

Class      Description                         Example

CBR        Constant bit rate                   T1 circuit

RT-VBR     Variable bit rate: real time        Videoconferencing

NRT-VBR    Variable bit rate: non-real time    Multimedia email

ABR        Available bit rate                  Browsing the web

UBR        Unspecified bit rate                Background file transfer

Broadband ISDN and ATM

• In addition to different service categories, ATM also supports quality of service negotiation when connections are established

• During connection establishment, the ATM network performs admission control:

• when a DTE requires a new virtual circuit, it must describe the traffic to be offered and the service expected

• the network then checks to see if it can offer this connection without adversely affecting existing connections

• if it can, the request is accepted (admitted) and the connection is set up; if it cannot, the connection is rejected


• The ATM network also carries out policing: usage of the network is monitored for each established connection. If this usage is greater than that negotiated during admission, excess cells can be discarded.

Broadband ISDN and ATM

• Some of the ATM quality of service parameters

Parameter                        Meaning

Peak cell rate                   Maximum rate at which cells can be sent

Sustained cell rate              The long-term average cell rate

Minimum cell rate                The minimum acceptable cell rate

Cell delay variation tolerance   The maximum acceptable cell jitter

Cell loss ratio                  Fraction of cells lost or delivered too late

Cell transfer delay              How long delivery takes (mean and maximum)

Broadband ISDN and ATM

• Some of the ATM quality of service parameters (continued)

Parameter                          Meaning

Cell delay variation               The variance in cell delivery times

Cell error ratio                   Fraction of cells delivered containing errors

Severely-errored cell block ratio  Fraction of blocks garbled

Cell misinsertion rate             Fraction of cells delivered to the wrong destination


Lecture 11: Introduction to Queuing Theory

• Example of Queuing Systems

• Queue Parameters

• M/M/1 Queues

• Calculation of Mean Number of Customers and Mean Waiting Time for M/M/1 Queues

• Example Calculation - a Statistical Multiplexer

Examples of Queuing Systems

• Supermarket checkouts

• Aircraft takeoffs/landings

• Printer spoolers

• Statistical Multiplexers

• Packet Switches

Queue Parameters

• Inter-arrival time probability density function (pdf)(Determines the pattern of arrivals)

• Service time probability density function (pdf)(Determines the pattern of service/departures)

• Number of servers

• Queuing discipline (e.g. first-come-first-served, shortest job first etc.)

• Amount of buffer space

(Diagram)

M/M/1 Queues

• Inter-arrival time pdf = EXPONENTIAL (or Markov)

• Mean arrival rate = λ

• Probability of an arrival between times t and t+Δt = λΔt

• Probability of exactly n customers arriving in a time t is equal to

(λt)^n · e^(-λt) / n!     (the Poisson distribution)


• Probability of the inter-arrival time lying between t and t+Δt is equal to λ·e^(-λt)·Δt

M/M/1 Queues

• Service time pdf = EXPONENTIAL, with mean service rate μ (same equations as for λ above apply)

• Number of servers = 1

• Queuing discipline = First-Come-First-Served

• Buffer size = Infinity

• (An aside for computer networks: the mean arrival rate, λ, is the mean rate at which packets arrive for transmission over a particular link. The service rate, μ, is equal to the data rate of the link divided by the mean packet size.)

Mean Number of Customers and Mean Waiting Time in M/M/1 Queues

• Define the state of the queue at a given time as the number of customers in the system at that time

• Queue state transition diagram:(diagram)

• State transition probabilities are determined by the probabilities of arrivals/departures. Only transitions between adjacent states are allowed (birth-death system)

Mean Number of Customers and Mean Waiting Time in M/M/1 Queues

• It can be shown that, for a queue in equilibrium, the probability of finding the system in a given state does not change with time

• From this follows the Principle of Detailed Balancing which states that:

λPk = μPk+1


• Hence Pk+1 = (λ/μ)Pk = ρPk where ρ = λ/μ and is called the "traffic intensity"

• Therefore:

P1 = ρP0

P2 = ρP1 = ρ^2 P0

..., Pn = ρ^n P0

Mean Number of Customers and Mean Waiting Time in M/M/1 Queues

• It can be shown that P0 = 1 - ρ which gives

Pn = ρ^n (1 - ρ)

• The mean length of the queue (mean number of customers in the system) is equal to

N = Σ (n = 0 to ∞) n Pn = Σ n ρ^n (1 - ρ)

or

N = ρ/(1 - ρ)

which is equal to

N = λ/(μ - λ)

Mean Number of Customers and Mean Waiting Time in M/M/1 Queues

• The mean waiting time is calculated using Little's result which states that

N = λT

where N is the average queue occupancy, and T is the mean waiting time

• From this, we end up with the simple result that the average waiting time is equal to T = N/λ = 1/(μ - λ)

• e.g. if μ = 1.0 customers/sec and λ = 0.5 customers/sec, the mean waiting time is 2 seconds


• Note that if λ ≥ μ the mean waiting time is infinite (in fact, the queue never reaches equilibrium and the analysis given above does not hold)
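• As a quick illustration of these results (not part of the original notes; the function name is arbitrary), a minimal Python sketch computing ρ, N = ρ/(1-ρ) and T = 1/(μ-λ):

```python
# Minimal M/M/1 sketch: traffic intensity, mean occupancy, mean waiting time.
def mm1_metrics(lam, mu):
    if lam >= mu:
        raise ValueError("unstable queue: requires lambda < mu")
    rho = lam / mu                 # traffic intensity
    n_mean = rho / (1 - rho)       # mean number of customers in the system
    t_mean = 1 / (mu - lam)        # mean waiting time (Little: N = lambda * T)
    return rho, n_mean, t_mean

print(mm1_metrics(0.5, 1.0))       # -> (0.5, 1.0, 2.0): 2 seconds, as above
```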

Example - a Statistical Multiplexer

• Two computers are connected by a 64 kbps line. There are eight parallel sessions using the line. Each session generates Poisson traffic with a mean of two packets/sec. The packet lengths are exponentially distributed with a mean of 2000 bits. The system designers must choose between giving each session a dedicated 8 kbps piece of bandwidth (via TDM or FDM) or having all packets compete for a single 64 kbps channel. Which alternative gives a better response time?

(From Tanenbaum, Computer Networks, 2nd Edition)

Example - a Statistical Multiplexer

• Using 8 x 8kbps channels

• Each 8 kbps channel operates as an independent queueing system with λ = 2 packets/sec and μ = 4 packets/sec (8 kbps data rate with mean frame size 2000 bits)

• Therefore the mean waiting time = 1/(μ-λ) which is equal to 500 ms

• Using a single 64kbps channel

• Here λ = 16 (8 sessions, 2 packets/sec per session) and μ = 32 (64 kbps data rate with mean frame size 2000 bits)

• Therefore the mean waiting time = 1/(μ-λ), which is equal to 62.5 ms

• This conclusion is very general - splitting a single channel into k fixed pieces makes the response time roughly k times worse. The reason is that it frequently happens that several of the smaller channels are idle while others are overloaded; the unused bandwidth can never be regained.
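• The comparison above can be reproduced with a short Python sketch (illustrative only; the variable names are assumptions):

```python
# Sketch of the example above: eight dedicated 8 kbps channels versus one
# shared 64 kbps channel, each modelled as an M/M/1 queue.
link_bps, sessions = 64_000, 8
pkts_per_sec, mean_pkt_bits = 2.0, 2000.0

def mm1_delay(lam, mu):
    return 1.0 / (mu - lam)                      # mean M/M/1 waiting time (sec)

mu_tdm = (link_bps / sessions) / mean_pkt_bits   # 4 packets/sec per 8 kbps slice
print("TDM/FDM    :", 1000 * mm1_delay(pkts_per_sec, mu_tdm), "ms")   # 500 ms

mu_stat = link_bps / mean_pkt_bits               # 32 packets/sec
lam_stat = sessions * pkts_per_sec               # 16 packets/sec
print("Statistical:", 1000 * mm1_delay(lam_stat, mu_stat), "ms")      # 62.5 ms
```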


Lectures 12/13: Principles of Network Routing

• Introduction

• Definition
• Desirable characteristics of routing algorithms
• Static versus adaptive routing

• Static routing methods

• Shortest path routing
• Flooding/Selective flooding
• Flow based routing

• Adaptive routing methods

• Distance vector routing
• Link state routing
• Example - history of the ARPANET routing algorithm

Introduction

• The network layer is concerned with getting packets from the source all the way to the destination. Getting packets to the destination typically requires making many hops at intermediate routers (switches) in a complex interconnected mesh of such routers.

• The routing algorithm is that part of the network layer software responsible for deciding which outgoing line an incoming packet should be transmitted on.

• Datagram networks apply routing on a packet-by-packet basis; virtual circuit networks apply routing at the virtual circuit set-up time (sometimes called session routing).

Introduction

• Some desirable characteristics of routing algorithms:

• Correctness

• Simplicity

• Robustness
• The routing algorithm should cope with host/router/line failures and with changes in traffic and topology

• Stability
• For example, a routing technique which reacts quickly to changing conditions (e.g. traffic) may exhibit unstable swings

• Fairness
• Different users (sessions) should be treated fairly (i.e. offered similar grades of service)

• Optimality
• e.g. optimise some criterion such as mean packet delay, network throughput...

NOTE: These requirements often conflict!

Introduction

• Static (non-adaptive) versus Adaptive routing

• Static routing methods

Do not base routing decisions on measurements or estimates of current traffic and topology. Routing choices are calculated offline and downloaded to routers when the network is booted

• Adaptive (dynamic) routing methods

Routing decisions are changed to reflect changes in topology and/or traffic. Adaptive algorithms differ in

• where they get their information (e.g. locally, adjacent routers, all routers),

• when they change the routes (e.g. every Δt, when the load changes, when the topology changes),

• and what metric is used for optimisation (e.g. distance, number of hops, estimated transit time)

Static Routing Methods

• Shortest Path Routing

• Considers the network as a graph, where each node represents a router and each arc represents a communication link. To choose a path between a given pair of routers, the algorithm just finds the shortest path between them on the graph

• Various metrics can be used in computing shortest paths e.g.

• physical distance (each link is assigned a "cost" equal to its distance)

• number of hops (each link is assigned a cost of 1)

• delay (each link is assigned a cost equal to measured or estimated queuing and transmission delay)


Static Routing Methods

• Shortest Path Routing

• Several algorithms for computing the shortest path between two nodes in a graph are known. Perhaps the most widely used is that due to Dijkstra

• Dijkstra's algorithm works in a step-by-step fashion, building up the shortest path tree from a source node until the furthermost node has been reached

Static Routing Methods

• Dijkstra's Shortest Path Algorithm

Definitions: D(v) is the distance (i.e. the sum of link weights or costs along a given path) from the source (node 1) to node v.

L(i,j) is the cost of the link from node i to node j

N is the set of nodes for which the shortest path has been calculated in a particular step of the algorithm

Static Routing Methods

• Dijkstra's Shortest Path Algorithm

There are two parts to the algorithm: an initialisation step and a step to be repeated until the algorithm terminates:

1. Initialisation. Set N = {1}. For each node v not in N, set D(v) = L(1,v). For nodes not connected to node 1, set D(v) = ∞

2. At each subsequent step. Find a node w not in N for which D(w) is a minimum, and add w to N. Then update D(v) for all remaining nodes that are not in N by computing

D(v) ← Min[D(v), D(w) + L(w,v)]

Step 2 is repeated until all nodes are in N
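• A compact Python sketch of the algorithm just described (a heap-based variant; the example graph and its costs are invented):

```python
import heapq

# Graph is a dict mapping each node to {neighbour: link cost};
# 'source' plays the role of node 1 in the description above.
def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}   # D(v), initially infinity
    dist[source] = 0
    prev = {}                                 # records the shortest-path tree
    heap = [(0, source)]
    done = set()                              # the set N of finished nodes
    while heap:
        d, w = heapq.heappop(heap)            # node w not in N with minimum D(w)
        if w in done:
            continue
        done.add(w)
        for v, cost in graph[w].items():      # D(v) <- min(D(v), D(w) + L(w,v))
            if d + cost < dist[v]:
                dist[v] = d + cost
                prev[v] = w
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

g = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
print(dijkstra(g, "A"))   # D(B) = 2, D(C) = 3 (via B)
```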

Static Routing Methods

• Dijkstra's Shortest Path Algorithm

Example (Schwartz, Telecommunication Networks, pages 270-271)

Static Routing Methods

• Flooding

• Every incoming packet is sent on every outgoing line except the one on which it arrived


• This guarantees that all packets will reach all destinations along the shortest path (but unfortunately many other paths too!)

• Clearly to avoid an infinite number of duplicate packets, some form of damping must be applied. Some methods are:

• Include a hop counter in each packet, which is decremented at each hop. The packet is discarded when the hop counter reaches zero

• Include a sequence number in each packet and have each node record each sequence number the first time the packet is routed. Duplicate copies of the packet at later times are discarded

• Selective flooding: only flood packets in approximately the right direction

Static Routing Methods

• Flooding

• Clearly, even taking measures to control duplicate packets, flooding is not practical in most applications. However, it is extremely robust (if any path exists between source and destination, then flooding will find it). It therefore has some specialised uses e.g.

• Military applications, where large numbers of routers may be obliterated at any instant

• Flooding of link state data packets for adaptive routing (see later)

• Flooding of packets in LANs where routers or bridges have incomplete routing tables (in conjunction with backward learning)

Static Routing Methods

• Flow Based Routing

• Is a static routing method which takes into account both network topology and the expected mean data flow between each pair of nodes

• If the data flow between all nodes is known in advance and is, to a reasonable approximation, constant in time, flow based routing can be used to analyse the flows mathematically to optimise the routing

• Whereas shortest path algorithms pick a particular route for all traffic between a particular source and destination pair, flow based routing allows this traffic to be shared over several paths (it is often called multi-path or bifurcated routing)

Static Routing Methods

• Flow Based Routing

• Flow based routing attempts to minimise the network-wide average packet delay, E(T), given by:

E(T) = (1/γ) Σ (i = 1 to M) λi/(μi - λi)        (1)


where M is the number of links, λi is the offered traffic on link i, μi is the service rate of link i and γ is the total external traffic offered to the network, given by

γ = Σ (over i) Σ (over j) γij        (2)

where the γij's can be visualised as entries in an N by N traffic matrix consisting of the average traffic arrival rates flowing between the different nodes (this must be known).

Static Routing Methods

• Flow Based Routing

• Equation (1) can be rewritten in the form

E(T) = (1/γ) Σ (i = 1 to M) fi/(Ci - fi)        (3)

where Ci is the capacity of the i'th link and fi is the flow over that link

• Flow based routing attempts to minimise the value of E(T) given in equation (1) subject to a number of flow constraints. These constraints state that, for each session (source i, destination j), the traffic flowing into a particular node l must equal the traffic leaving it (unless the node is i or j). Writing f^ij_lm for the session (i,j) flow carried on the link from node l to node m, this difference in traffic flow is

Σ (over m) f^ij_ml - Σ (over m) f^ij_lm        (4)

which must equal -rij if l = i, +rij if l = j, or 0 otherwise (rij being the offered traffic of the session). This gives us a set of N^2(N-1) equations for the flows in the network which must be satisfied.

Static Routing Methods

• Flow Based Routing

• The objective of the routing strategy is now to find the set of flows f^ij_ml in the network satisfying the conservation equations (4) and minimising the network-wide delay (3) (this is called a constrained optimisation problem)

• Various algorithms have been devised to solve this problem, most using some form of iteration or "gradient descent" (the function to be optimised is convex and unimodal)



• (e.g. see Bertsekas and Gallager, Data Networks).

Adaptive Routing Methods

• The routing methods studied so far do not adapt to changes in network topology or load (they are static)

• We will now look at two popular adaptive routing methods which vary routing decisions over time according to measured/estimated network state. These are:

• Distance vector routing

• Link state routing

Adaptive Routing Methods

• Distance Vector Routing

• Each router maintains a table (vector) giving the best known distances (using some metric e.g. delay) to each destination, and which outgoing link to use to get there

• Tables are updated by periodically exchanging information with neighbouring routers

• At any time, a router knows its "distance" to each neighbour (e.g. it can locally estimate the queuing delay or directly measure delays by time stamping packets)

• Once every T milliseconds, each router sends its current estimated delays for each destination to each of its neighbours.

• Imagine node A sends its estimated delays for all destination nodes {B, C, D....} to node J. Node J can then compute a new set of estimated delays to all nodes going via node A (these are equal to A's estimated delays plus the delay from J to A which J knows).

• Once J receives tables from all of its neighbours, it can update its table of estimated distances and route its packets over the outgoing links which it estimates provide the lowest overall distance to each destination
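• A minimal Python sketch of a single distance vector update at router J, along the lines described above (the neighbour names and delays are invented):

```python
# 'neighbour_delay' is J's measured delay to each neighbour; 'vectors' holds
# the delay tables most recently received from those neighbours.
def dv_update(neighbour_delay, vectors, destinations):
    table = {}
    for dest in destinations:
        best = (float("inf"), None)
        for nbr, d_to_nbr in neighbour_delay.items():
            est = d_to_nbr + vectors[nbr].get(dest, float("inf"))
            if est < best[0]:
                best = (est, nbr)            # lowest estimated delay, via 'nbr'
        table[dest] = best
    return table                             # dest -> (estimated delay, next hop)

neighbour_delay = {"A": 8, "I": 10}
vectors = {"A": {"B": 12, "C": 25}, "I": {"B": 31, "C": 6}}
print(dv_update(neighbour_delay, vectors, ["B", "C"]))
# {'B': (20, 'A'), 'C': (16, 'I')}
```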

Adaptive Routing Methods

• Distance Vector Routing(Tanenbaum, Computer Networks, p356)

Adaptive Routing Methods

• Distance Vector Routing

• Unfortunately, the simple distance vector routing algorithm described suffers from a potential drawback - although it reacts well to "good news" (e.g. a new router coming on-line), it can react slowly to "bad news" (e.g. a router or a link failing)(Tanenbaum p 357)

Adaptive Routing Methods

• Link State Routing


• Each router periodically carries out the following steps:

1. Discover its neighbours and their network addresses.

2. Measure the delay or cost to each of its neighbours (i.e. over each outgoing link).

3. Construct a packet containing the delay over each outgoing link.

4. Send this packet to all other destinations (i.e. not just neighbours).

5. Compute the shortest path to each router.

• In effect, the complete topology and all delays are measured and distributed to every router, which can then apply a shortest path algorithm (e.g. Dijkstra's) to find the best routes to every other node

Adaptive Routing Methods

• Link State Routing

• Learning about neighbours

• When a router is booted, it sends a special "HELLO" packet over each outgoing link

• Neighbours then send replies giving their (unique) network name or address

• Measuring line cost

• Node can send a special time-stamped "ECHO" packet to its neighbour, which the neighbour is required to return immediately

Adaptive Routing Methods

• Link State Routing

• Building link state packets

• Create a packet containing the identity of the sender; a sequence number; an age (see later) and a list of all neighbours with measured delays(Tanenbaum p362)

Adaptive Routing Methods

• Link State Routing

• Distributing the link state packets

• Clearly, distribution of link state packets must be done reliably. For this reason, a modified flooding method is often used:


• Routers keep track of all (Source Router, Sequence Number) pairs they see. When a new link state packet arrives, it is checked to see whether it is a duplicate; if not, it is flooded; if it is, it is discarded (note that sequence numbers must be large enough not to "wrap around").

• Link state packets contain an "age" field which is decremented; a packet with an age of 0 is discarded

Adaptive Routing Methods

• Link State Routing

• Calculating the new routes

• Once a router has accumulated a complete set of link state packets, it can construct the complete network graph

• With this information, it can run a shortest path algorithm to determine the best routes which are then used to update the routing table

Adaptive Routing Methods

• Example - History of the ARPANET Routing Algorithm

• The ARPANET (ARPA = Advanced Research Projects Agency) was the testing ground for a number of different adaptive routing algorithms.

Adaptive Routing Methods

• 1st Generation ARPANET

• Designed in 1969

• Used distance-vector adaptive routing (each node maintained a table of estimated delays to all other nodes, and exchanged these with neighbours every 2/3 seconds)

• The cost metric was the instantaneous (i.e. not averaged) queue length and did not take into account link bandwidth and latency

• With relatively frequent measurements of instantaneous queue length, the method produced pronounced instabilities and looping of packets

Adaptive Routing Methods

• 2nd Generation ARPANET

• Installed in 1979, and used link state routing

• Took both bandwidth and latency into consideration and actually measured packet delays using time stamps:

Delay = (Depart Time - Arrival Time) + Transmission Time + Latency

• Packet delays were averaged over 10 seconds


• This provided a great improvement over the 1st generation method, although problems still persisted at high loads. In particular:

• Despite averaging, routing stability remained a problem: a congested link advertising a high cost would be simultaneously de-selected by routers, leaving it idle, therefore attracting the traffic back and so on.

• The range of link values was too high (too large a dynamic range) - some routes could appear hundreds of times more attractive than others

Adaptive Routing Methods

• 3rd Generation ARPANET

• 1989, also using link state routing

• Delay metric was further improved:

• Variations in measured delay were "smoothed" by averaging the current measurement with the previous one (a form of digital low pass filtering)

• The cost metric was calculated using a function which included line type and utilisation(Diagram)

• As a result, upgrading from generation 2 to generation 3 reduced round-trip delays by 46% despite a 13% increase in throughput.(Source: Khanna and Zinky: The Revised ARPANET Routing Metric, Proceedings of SIGCOMM '89 Symposium)


Lecture 14: Congestion Control Algorithms

• Introduction

• Taxonomy of Congestion Control Methods

• Open Loop Congestion Control

• Closed Loop Congestion Control

• Congestion Control in TCP

Introduction

• When the traffic offered to (part of) a packet network exceeds network capacity, congestion sets in and performance degrades

• As queuing delays become large, transmitters repeatedly time-out and retransmit duplicate packets => even worse performance(Tanenbaum p374, fig. 5-22)

Taxonomy of Congestion Control Methods

• Router Centric

Addresses congestion from within the network

• Host Centric

Addresses congestion from the hosts on the edge of the network (e.g. transport layer)

Taxonomy of Congestion Control Methods

• Open Loop Congestion Control

Effectively attempts congestion avoidance by reserving an agreed amount of network capacity for each session, which is adhered to (e.g. admission control in ATM networks)

• Closed Loop (Feedback) Congestion Control

Reacts to measured onset of congestion. Feedback may be

• Explicit: packets are sent from the point of congestion to control the source

• Implicit: source deduces the existence of congestion by making local observations (e.g. time for ACKs to return)


Taxonomy of Congestion Control Methods

• Window Based

The transmitter may send packets without ACKs up to some maximum window size(e.g. TCP)

• Rate Based

The transmitter is limited in the rate (maximum and mean) at which traffic is offered to the network(e.g. ATM)

Open Loop Congestion Control

• Traffic Shaping

• One of the main causes of congestion is that traffic is often "bursty"

• Traffic shaping forces hosts to regulate the burstiness of traffic offered to the network, so packets are offered at a more predictable rate

• Often used in conjunction with service negotiation and policing (e.g. ATM networks)

Open Loop Congestion Control

• Traffic Shaping

• The Leaky Bucket Algorithm(Tanenbaum p380, fig. 5-24)

• Although the flow into the bucket may be bursty, the output from the bucket is regulated

• Host is allowed to put one packet per clock tick into the network (if the application is generating packets faster than this, they are buffered in the host)

• A "byte counting leaky bucket" can be used for variable length frames

Open Loop Congestion Control

• Traffic Shaping

• The Leaky Bucket Algorithm

e.g. consider a computer which can generate data at 25 Mbyte/sec (200 Mbps), connected to a network which can handle 2 Mbyte/sec on average without congestion. Data comes in 1 Mbyte bursts, one 40ms burst every second.(Tanenbaum p382, fig. 5-25)


Open Loop Congestion Control

• Traffic Shaping

• The Token Bucket Algorithm

• The leaky bucket algorithm is perhaps a little too rigid in its enforcement of a fixed output rate

• The token bucket algorithm allows the output to speed up temporarily when large bursts arrive

• The algorithm:

The leaky bucket contains tokens (NOT packets), generated by a clock at the rate of one token every Δt seconds

For a packet to be transmitted it must acquire and destroy one token

An idle transmitter can save up tokens (up to the size of the token bucket), allowing it to send large bursts later
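• A minimal Python sketch of the token bucket just described (the tick-driven interface and the packet-per-token units are assumptions, not part of the notes):

```python
class TokenBucket:
    def __init__(self, capacity, tokens_per_tick):
        self.capacity = capacity          # maximum tokens the bucket can hold
        self.rate = tokens_per_tick       # tokens generated per clock tick
        self.tokens = capacity            # start full (idle transmitter saves up)

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def send(self, packets):
        """Transmit up to 'packets'; each packet acquires and destroys one token."""
        sent = min(packets, int(self.tokens))
        self.tokens -= sent
        return sent

bucket = TokenBucket(capacity=4, tokens_per_tick=1)
print(bucket.send(6))   # 4 - a saved-up burst goes out immediately
bucket.tick()
print(bucket.send(6))   # 1 - thereafter limited to the token rate
```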

Open Loop Congestion Control

• Traffic Shaping

• The Token Bucket Algorithm(Tanenbaum p383, fig. 5-26)

Open Loop Congestion Control

• Traffic Shaping

• The Token Bucket Algorithm

As an example, consider a computer which can generate data at a maximum rate of 25 Mbyte/sec, in 1 Mbyte bursts. Suppose it has been idle and the token bucket has filled up. When a burst arrives, data may be sent at the full 25 Mbyte/sec rate until all of the tokens have been used up. How long can the transmitter send at 25 Mbyte/sec?

If the bucket size is C (bytes), the token arrival rate is ρ (bytes/sec), the maximum output rate is M (bytes/sec), and the time allowed for transmission at rate M is S (sec), then

C + ρS = MS

and S = C/(M-ρ)

Open Loop Congestion Control

• Traffic Shaping


• The Token Bucket Algorithm(Tanenbaum p382, fig. 5-25)

Output from token buckets with capacities C = 250 KB, 500 KB and 750 KB (M = 25 MB/s, ρ = 2 MB/s).
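• A quick numeric check of S = C/(M - ρ) for the bucket capacities above (treating KB and MB as powers of ten, which is an approximation):

```python
# Burst time S for each bucket capacity, using the figures quoted above.
M, rho = 25e6, 2e6                        # bytes/sec
for C in (250e3, 500e3, 750e3):           # bucket capacities in bytes
    print(f"C = {C/1e3:.0f} KB -> S = {C/(M - rho)*1000:.1f} ms")
# C = 250 KB -> S = 10.9 ms, 500 KB -> 21.7 ms, 750 KB -> 32.6 ms
```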

Closed Loop Congestion Control

• Choke packets

• Each router monitors the utilisation of its outgoing lines (this is usually averaged with the last utilisation to damp oscillations)

• Whenever the utilisation rises above some threshold, a warning flag is set for that link

• When a data packet arrives for routing over that link, the router extracts the packet's source address and sends a "choke packet" back to the source. This choke packet contains the destination address

• The original data packet is tagged so that it will not generate any more choke packets, then forwarded

• When the source host gets the choke packet, it is required to reduce the traffic sent to the particular destination by X%; it ignores other choke packets for the same destination for a fixed time interval

Closed Loop Congestion Control

• Hop-by-Hop Choke packets

• At high speeds and over large distances, sending a choke packet to the source host takes too long

• An alternative approach is to have the choke packet take effect at every node it passes through.

• This gives quicker response at the price of requiring more buffer space upstream

Closed Loop Congestion Control

• Hop-by-Hop Choke packets(Tanenbaum p391, fig. 5-30)

Congestion Control in TCP/IP

• TCP/IP uses host centric (source based), window-based congestion control with implicit feedback

• Each sender maintains two windows:

• a receiver window (for end-end flow control)
• a congestion window (for congestion control)

The number of bytes which can be transmitted in a burst (i.e. without ACKs) is equal to the minimum of these two


• The congestion window is modified by the source according to the level of congestion it perceives in the network

Congestion Control in TCP/IP

• When a connection is established, the sender sets the congestion window equal to the maximum TCP segment size.

• This amount of data is then transmitted, and timers are started (see later). If the data is ACK'd before the timers expire, the congestion window is doubled (this is called slow start).

• This continues until either the receiver's window is reached (when end-end flow control kicks in) or a timeout occurs. When a timeout occurs, the congestion window is set back to the maximum TCP segment size.

• Clearly the exponential growth in window size needs to be controlled and for this a third parameter, the threshold is used (initially set to 64K).

• When a timeout occurs, the threshold is set to half of the current value of the congestion window.

• Slow start is then applied again (with exponential growth in congestion window size) until the congestion window size reaches the threshold. From then on, the congestion window is grown linearly (i.e. one maximum segment size for each burst)
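• The window rules above can be sketched in a few lines of Python (a simplification, not the notes' own algorithm: the window is counted in maximum-segment units and the event sequence is invented):

```python
def tcp_cwnd(events, threshold=64, receiver_window=64):
    cwnd = 1                                   # start at one maximum segment
    trace = []
    for ev in events:                          # 'ack' = burst acknowledged in time
        if ev == "timeout":
            threshold = max(cwnd // 2, 1)      # threshold = half the current window
            cwnd = 1                           # restart slow start
        else:
            if cwnd < threshold:
                cwnd *= 2                      # slow start: exponential growth
            else:
                cwnd += 1                      # beyond threshold: linear growth
            cwnd = min(cwnd, receiver_window)  # never exceed the receiver window
        trace.append(cwnd)
    return trace

print(tcp_cwnd(["ack"] * 6 + ["timeout"] + ["ack"] * 6, threshold=32))
# [2, 4, 8, 16, 32, 33, 1, 2, 4, 8, 16, 17, 18]
```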

Congestion Control in TCP/IP

• Example(Tanenbaum p539, fig. 6-32)

Maximum segment size = 1024 bytes
Starting value of threshold = 32K
Congestion window = 1K at Tx 0

Congestion Control in TCP/IP

• TCP Timer Management

• We have assumed the existence of a timeout interval. How long should this interval be?

• The solution is to use a highly dynamic algorithm which constantly adjusts the timeout interval based on continuous measurements of network performance.

• For each connection, TCP keeps a variable, RTT, that is the current best estimate of the round-trip time to the destination in question. It computes this estimate using measured times for ACKs to arrive (using smoothing with previous estimates)

• In addition to computing RTT, TCP also measures the deviation (D) of RTT (RTT is a statistical quantity). Most TCP implementations then calculate the timeout interval as equal to RTT + 4D.
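• A minimal Python sketch of this estimation (the smoothing gains α and β are typical values, assumed here rather than taken from the notes):

```python
def update_rtt(rtt, dev, sample, alpha=0.875, beta=0.75):
    rtt = alpha * rtt + (1 - alpha) * sample          # smoothed round-trip time
    dev = beta * dev + (1 - beta) * abs(sample - rtt) # smoothed deviation
    timeout = rtt + 4 * dev                           # timeout interval = RTT + 4D
    return rtt, dev, timeout

rtt, dev = 100.0, 10.0                                # milliseconds
for sample in (110, 90, 300):                         # measured ACK times
    rtt, dev, rto = update_rtt(rtt, dev, sample)
    print(f"sample={sample}  RTT={rtt:.1f}  D={dev:.1f}  timeout={rto:.1f}")
```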


Lecture 15: Internetworking with IP (the Internet Protocol)

• Introduction

• IP Address Structure

• IP Datagram Structure

• IP Routing
• the Address Resolution Protocol
• Interior Gateway Protocols
• Exterior Gateway Protocol

• the Internet Control Message Protocol (ICMP)

• IPv6

Introduction

• Internetworking: the interconnection of several separate networks, which typically run different protocols

• Example internetwork architectures(Halsall p486, fig. 9.2)

Introduction

• Two major sources of problems for internetworking:

• The sheer scale of the global Internet (doubling every year in size). How are routing and addressing carried out efficiently in networks containing billions of hosts?

• Heterogeneity: how to provide a predictable host-host service over a hodgepodge of different networks (Ethernets, token rings, point-point links, switched networks of various kinds) all with their own addressing scheme, medium access control, service model and so on.

Introduction

• Some of the ways in which networks differ(Tanenbaum p400, fig. 5-35)


• the Internet Protocol (IP) provides the "glue" for connecting such disparate networks together

IP Address Structure

• Each host and router on the Internet has its own unique 32 bit IP address (IPv4)

• A node's IP address is distinct from its physical address (e.g. Ethernet address)

• An IP address comprises two parts:

• a network ID
• a host ID

• Given 32 bits of address, how many bits should be allocated to the network ID and how many to the host ID? There is a tradeoff between the number of networks that can be encoded and the maximum number of hosts that can be connected to each network.

• The IP designers divided the IP address space into five classes

IP Address Structure

• IP Address Classes(Tanenbaum p416, fig. 5-47)

• This allows 126 class A nets, each with up to 16 million hosts; 16,382 class B nets with up to 64K hosts; and 2 million class C nets with up to 254 hosts

• Some IP addresses are reserved for special use(Tanenbaum p417, fig. 5-48)

IP Address Structure

• IP addresses are usually written in dotted decimal notation : each of the 4 bytes is written in decimal 0 to 255 e.g.(Halsall p497)

• IP addresses are assigned by Internet Service Providers, coordinating with the central Internet Assigned Numbers Authority

• An IP address with a host ID of 0 indicates a network rather than a host; an IP address with a host ID of all 1's indicates broadcast to all hosts on the particular network

IP Address Structure

• For large sites (e.g. campus networks) with several subnetworks, a further level of addressing is often used called subnetting

• the IP net ID relates to the complete site rather than a single network


• the host ID is locally (i.e. within the site) viewed as two subfields: a subnet ID and a host ID (the outside Internet knows nothing of this)

• local site routers mask off the host ID field to find the subnet id, which is then used for routing packets to the correct subnet

• a router on subnet k knows how to get to all other local subnets, and also how to get to all hosts on its own subnet - it does not need to know all the details of hosts on other subnets (resulting in simpler routing tables).
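• As an illustration of the masking step (the site address and the 8-bit subnet split are invented examples), using Python's standard ipaddress module:

```python
import ipaddress

# Mask off the host field to find the subnet, as a local site router would.
addr = ipaddress.ip_address("146.87.33.20")                 # invented class B site address
subnet_mask = int(ipaddress.ip_address("255.255.255.0"))    # 16-bit net ID + 8-bit subnet ID
subnet = ipaddress.ip_address(int(addr) & subnet_mask)
print(subnet)                                               # 146.87.33.0 -> route to subnet 33
```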

IP Datagram Structure

• IP Packet Header (Tanenbaum p413, fig. 5-45)

IP Datagram Structure

• Version: which version of IP the datagram belongs to

• IHL: header length (multiples of 32 bits, minimum 5)

• Type of service: a quality of service parameter (e.g. fast delivery or accurate delivery). Many routers ignore this field.

• Total length: total datagram length (header plus data). Maximum 64 KBytes

• Identification: used when messages need to be fragmented

• Fragment offset: where in the current message the fragment belongs (used for message reassembly)

• Time to live: a counter used to limit packet lifetimes and prevent looping

• Protocol: which transport process the datagram belongs to (e.g. TCP, UDP)

• Header checksum: verifies the header only. Useful for detecting errors caused by bad memory in routers.

IP Datagram Structure

• Source address and destination address: source and destination IP addresses

• Options: 5 options currently defined (not all routers support them):(Tanenbaum p415, fig.5-46)
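• As an illustration of the fixed 20-byte header layout described above, a Python sketch that unpacks the main fields (the example datagram bytes are invented):

```python
import struct

def parse_ipv4_header(raw):
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl_words": ver_ihl & 0x0F,          # header length in 32-bit words
        "total_length": total_len,
        "identification": ident,
        "fragment_offset": flags_frag & 0x1FFF,
        "time_to_live": ttl,
        "protocol": proto,                    # e.g. 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

example = bytes([0x45, 0, 0, 20, 0, 1, 0, 0, 64, 6, 0, 0, 10, 0, 0, 1, 10, 0, 0, 2])
print(parse_ipv4_header(example))
```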

IP Routing

• To deal with the vast number of hosts on the Internet, routing is hierarchical

• To reflect the fact that the Internet is made up of a number of separately managed and run internets, each internet is treated as an autonomous system with its own routing algorithms and management authority

• the combined Internet is considered as a core backbone network to which a number of autonomous systems are attached


(Halsall p506, fig. 9.12)

IP Routing

• Routing and routing information is organised hierarchically as follows:

• Hosts maintain sufficient routing information to forward datagrams to other hosts, or an interior gateway(s) that is (are) attached to the same network (this information is acquired using the Address Resolution Protocol, ARP)

• Interior gateways maintain sufficient routing information to forward datagrams to hosts or other interior gateways within the same autonomous system

• Exterior gateways maintain sufficient routing information to forward datagrams either to an interior gateway (if the datagram is for the same autonomous system) or to another exterior gateway (if it is not)

• Interior gateways are used within an autonomous system (running an interior gateway protocol)

• Exterior gateways are used to connect autonomous systems to the core network (running an exterior gateway protocol)

IP Routing

• Routing protocols: architecture, scope and routing tables(Halsall p508, fig. 9.13)(NPA = Network Point of Attachment address)

IP Routing

• The Address Resolution Protocol (ARP)

• Hosts and routers in individual (sub) networks must know the physical addresses (e.g. Ethernet addresses) corresponding to all local IP host ID addresses

• ARP is the protocol by which this information is acquired

• ARP operates as follows:

• ARP software in each host/router maintains a local routing table containing IP host ID/physical address pairs

• If a packet arrives for transmission in a host/router with a destination IP address which is in the routing table, then the physical address is read and used

• If the destination address is not in the routing table, the local ARP software generates an ARP request packet containing its own IP and physical addresses together with the required (target) IP address. This is either broadcast or sent to a router for forwarding

IP Routing

• The Address Resolution Protocol (ARP)


• ARP operation (continued)...

• the ARP software in the required destination recognises its own IP address and sends an ARP reply message back to the requesting host. It will also update its own routing table

• IP/Physical addresses are often held on a host's local permanent storage and read by the operating system on startup

• With diskless hosts, this information is stored on the host's server and is acquired by the hosts using the Reverse Address Resolution Protocol (RARP):

• On startup, the diskless host sends an RARP request to the server, containing its own physical address

• the server then replies with the corresponding IP address
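• A toy Python sketch of the ARP table lookup described earlier (the addresses and the broadcast step are placeholders, not a real implementation):

```python
arp_table = {"192.168.1.10": "00:1a:2b:3c:4d:5e"}   # IP -> physical (MAC) address

def resolve(ip):
    if ip in arp_table:                 # already known: use the cached mapping
        return arp_table[ip]
    print(f"broadcast ARP request: who has {ip}?")   # stand-in for the broadcast
    return None                         # the reply will later update arp_table

def handle_arp_reply(ip, mac):
    arp_table[ip] = mac                 # requester (and target) update their tables

print(resolve("192.168.1.10"))          # cache hit
print(resolve("192.168.1.20"))          # cache miss -> ARP request broadcast
handle_arp_reply("192.168.1.20", "00:aa:bb:cc:dd:ee")
print(resolve("192.168.1.20"))          # now resolved
```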

IP Routing

• Interior Gateway Protocols

• Routing Information Protocol (RIP)

• the original Internet Interior Gateway Protocol distributed with Berkeley (BSD) Unix

• Uses distance-vector routing

• Works well for small autonomous systems, less well for large. Suffers from count to infinity problem and slow convergence

• Replaced in the Internet in 1979 with a link state routing protocol, but still widely used.

IP Routing

• Interior Gateway Protocols

• Open Shortest Path First (OSPF)

• "Open" - published in the open literature (RFC 1247)

• Uses link state routing

• Supports routing based on type of service by using multiple graphs with different metrics, e.g. delay, throughput, reliability

• Introduces a further hierarchical level by dividing autonomous systems into areas (in principle a generalisation of subnetting)

• Allows load balancing - splitting the load to a destination over multiple routes

• Provides authentication of routing messages (a similar form of authentication was added to RIP version 2).


IP Routing

• The Exterior Gateway Protocol (Border Gateway Protocol - BGP)

• Exterior gateway routing has different requirements to interior gateway routing. The latter moves packets as efficiently as possible within a given autonomous system. Exterior gateway routing involves political, security and economic considerations

• Example routing constraints for an exterior gateway protocol might include such things as (Tanenbaum p429):

1. No transit through certain ASes
2. Never put Iraq on a route starting at the Pentagon
3. Only transit Albania if there is no alternative to the destination
4. Traffic starting or ending at IBM should not transit Microsoft

IP Routing

• The Exterior Gateway Protocol (Border Gateway Protocol - BGP)

• BGP uses a modified version of distance-vector routing - not only are distances to destination ASes advertised, but also the actual routes(Tanenbaum p430, fig. 5-55)

• In this example, F examines the routes (and distances) to determine the route with the shortest distance which does not violate any policy constraints

• Note that since BGP routers exchange routes as well as distances, the count to infinity problem does not occur
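• A minimal Python sketch of this kind of policy-constrained choice (the advertised routes and the banned transit AS are invented):

```python
# Among routes advertised by neighbours, pick the shortest one whose AS path
# does not violate a local policy (here: a set of ASes we refuse to transit).
def choose_route(advertised, banned_transit):
    best = None
    for via, (dist, as_path) in advertised.items():
        if any(asn in banned_transit for asn in as_path):
            continue                                # policy violation: skip
        if best is None or dist < best[1]:
            best = (via, dist, as_path)
    return best

advertised = {
    "B": (3, ["B", "D"]),        # distance and the actual route, per neighbour
    "C": (2, ["C", "X", "D"]),   # shorter, but transits AS "X"
}
print(choose_route(advertised, banned_transit={"X"}))   # ('B', 3, ['B', 'D'])
```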

Internet Control Message Protocol (ICMP)

• Primarily used for

• Error reporting

• Reachability testing

• Congestion control

• Route-change notifications

• Performance measuring

Internet Control Message Protocol (ICMP)

• ICMP defines special message types; each message is encapsulated in an IP packet(Tanenbaum p420, fig. 5-50)


IPv6

• Has been developed to provide the next generation Internet Protocol. Improvements over IPv4 include:

• 16-byte IP addresses (around 7 × 10^23 IP addresses per square metre of the globe!)

• Removal of the checksum and simplified header for faster packet processing

• Improved support for different types of service

• Includes encryption and authentication

• Another layer introduced into the routing hierarchy - a cluster address

• Improved support for mobile IP


Lecture 16: Broadband ISDN and ATM

• Introduction and Background

• ATM Cell Size and Format

• ATM Logical Connections

• ATM Protocol Architecture and the ATM Adaptation Layer

• Quality of Service

• Traffic Shaping and Policing

• Congestion Control

Introduction and Background

• Overall aim of B-ISDN

To provide a single new integrated network, replacing the entire telephone system and all the specialised data networks, for all kinds of information transfer(Halsall p559, fig. 10.1)

• The proposed implementation for B-ISDN is Asynchronous Transfer Mode (ATM)

ATM Cell Size

• ATM uses fixed size packets called cells, containing a 5-byte header and a 48-byte payload (53 bytes in all)

• Use of small cells, while somewhat inefficient in terms of header overhead, has the advantage of reducing the queuing delay of high priority cells

• Use of fixed size cells means that cells can be switched more easily in fast hardware

• ATM is intended to convey all kinds of traffic, including telephony. A small cell size reduces the experienced speech delay

ATM Cell Format

• In the ATM layer, two interfaces are distinguished:

• the UNI (User-Network Interface) which defines the boundary between a host and an ATM network

• the NNI (Network-Network Interface) which applies to the line between ATM switches

• In both cases, cells consist of a 5-byte header and a 48-byte payload, although the headers are slightly different:


(Halsall p577, fig. 10.7)

ATM Cell Format

• Fields in the ATM cell:

• Generic Flow Control: Originally conceived as being used for flow control or prioritisation between hosts and networks, but not used ("Think of it as a bug in the standard" - Tanenbaum)

• VPI: Virtual Path Identifier selects a virtual path

• VCI: Virtual Channel Identifier selects a virtual channel (circuit) within the chosen virtual path

• PTI: Payload Type Identifier indicates the kind of information carried in the cell, in accordance with:(Tanenbaum p452, fig. 5-63)

Cell types are user supplied, congestion information is network supplied.

ATM Cell Format

• Fields in the ATM cell:

• CLP: Cell Loss Priority, can be set by host to differentiate between high priority traffic and low priority traffic. If congestion occurs and cells must be discarded, switches first attempt to discard cells with CLP = 1

• HEC: Header Error Checksum, an 8-bit CRC on the first 4 octets of the header

ATM Logical Connections

• Logical connections in ATM are referred to as virtual channel connections (VCCs) (analogous to X.25 virtual circuits)

• As well as virtual channels, ATM supports virtual paths. A virtual path connection (VPC) is a bundle of VCCs that have the same end points. All VCCs in a VPC are switched together(Tanenbaum p451, fig.5-61)


ATM Logical Connections

• The intention of the ATM designers was that routing between interior switches is done on the VPI field of cells - the VCI field is used at the last hop between a switch and a host. This has a number of advantages:

• Once a virtual path has been established between a source and a destination, a new virtual channel can be set up by the end users - no network routing decisions have to be made

• Routing is done on a 12-bit number (VPI) rather than a 12-bit number and a 16-bit number (VPI + VCI)

• Routing on virtual paths makes it easy to re-route a whole group of virtual circuits (e.g. in case of link or switch failure)

• Virtual paths make it easy for carriers to offer closed user groups (private networks)

• Whether real switches will actually use the VPI for routing as planned remains to be seen

ATM Protocol Architecture and the ATM Adaptation Layer (AAL)

• ATM defines its own reference model:(Tanenbaum p63, fig. 1-30)

• ATM Layer: deals with cells and cell transport; defines layout of cells and deals with establishment and release of virtual circuits. Common to all AAL services.

• ATM Adaptation Layer (AAL): Provides a range of service types/classes for the transport of the byte streams/message units generated by higher layers

ATM Protocol Architecture and the ATM Adaptation Layer (AAL)

• The ATM reference model defines 3 planes:

• User Plane: provides for user information transfer along with associated controls (e.g. flow control, error control)

• Control Plane: performs call control and connection control functions

• Management Plane: station management (e.g. error reporting, interlayer coordination)

ATM Protocol Architecture and the ATM Adaptation Layer (AAL)

• The AAL provides adaptation (convergence) functions between the class of service provided to the user layer (e.g. message transfer) and the cell based service provided by the ATM layer


• A number of different classes of services (A, B, C and D) are defined within the AAL layer together with a corresponding set of protocols (AAL type 1, AAL type 2, AAL type 3/4, AAL type 5)

• The service classes are classified according to three criteria:

• the existence of a time relationship between source and destination
• the bit rate associated with the transfer (constant or variable)
• the connection mode (connection oriented or connectionless)

ATM Protocol Architecture and the ATM Adaptation Layer (AAL)

• ATM Service Classes:

Class A CBR Constant bit rate, connection oriented, synchronous traffic, (e.g. uncompressed voice or video)

Class B VBR-RT Variable bit rate, real time, connection oriented (e.g. real time videoconferencing)

Class B VBR-NRT As above but not real time. (e.g. video playback, multi-media)

Class C ABR Available bit rate, connection oriented (e.g. asynchronous traffic such as X.25 or Frame Relay over ATM, browsing the web)

Class D UBR Unspecified bit rate, connectionless packet data (e.g. background file transfer)

ATM Protocol Architecture and the ATM Adaptation Layer (AAL)

• ATM Service Classes(Stallings, p343, fig. 11.12)

Quality of Service

• When an ATM virtual channel is established, the user transport layer ("the customer") and the ATM network layer ("the carrier") must agree on a contract defining the service

• To enable a concrete definition of traffic contracts, ATM defines a number of Quality of Service (QoS) parameters.

Quality of Service

• Some of the ATM quality of service parameters

Parameter Meaning


Peak cell rate Maximum rate at which cells can be sent

Sustained cell rate The long-term average cell rate

Minimum cell rate   The minimum acceptable cell rate

Cell delay variation tolerance   The maximum acceptable cell jitter

Cell loss ratio Fraction of cells lost or delivered too late

Cell transfer delay How long delivery takes (mean and maximum)

Quality of Service

• Some of the ATM quality of service parameters (continued)

Parameter Meaning

Cell delay variation The variance in cell delivery times

Cell error ratio Fraction of cells delivered containing errors

Severely-errored cell block ratio   Fraction of blocks garbled

Cell misinsertion rate   Fraction of cells delivered to the wrong destination

Traffic Shaping and Policing

• The mechanism for enforcing quality of service parameters is based on a specific algorithm - the Generic Cell Rate Algorithm (GCRA)

• GCRA has two parameters: the peak cell rate (PCR) and the cell delay variation tolerance (CDVT); the reciprocal of PCR, T = 1/PCR, is the minimum cell interarrival time

• A sender is always permitted to space consecutive cells by T or greater - any cell arriving T or more after the previous cell is said to be conforming

• Cells arriving more than L seconds early are said to be non-conforming; depending on the carrier, non-conforming cells are either discarded or have their priority set to low (L is usually set equal to the CDVT)
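• A minimal Python sketch of GCRA in its "virtual scheduling" form, consistent with the description above (the time units and the example cell arrival times are arbitrary):

```python
# A cell is conforming if it does not arrive more than L early relative to its
# theoretical arrival time (TAT); T = 1/PCR, L = CDVT.
def gcra(arrival_times, T, L):
    tat = float("-inf")                            # theoretical arrival time
    verdicts = []
    for t in arrival_times:
        if t >= tat - L:                           # within tolerance: conforming
            tat = max(t, tat) + T
            verdicts.append((t, "conforming"))
        else:                                      # more than L early: non-conforming
            verdicts.append((t, "non-conforming"))
    return verdicts

# T = 10 time units between cells, tolerance L = 2: the cell at t = 25 is early
print(gcra([0, 10, 18, 25, 50], T=10, L=2))
```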

Traffic Shaping and Policing

• The Generic Cell Rate Algorithm(Tanenbaum p464, fig.5-73)


Congestion Control

• ATM uses both congestion avoidance and congestion control methods:

• Admission Control and Resource Reservation(Congestion Avoidance)

• Attempts to avoid congestion in the first place by admission control at connection set-up time - hosts describe the expected traffic offered and the service expected. If this cannot be handled without adversely affecting existing connections, the new connection is rejected

• At connection set-up time, if a connection is accepted, the network can reserve network resources (bandwidth) for the duration of the connection

Congestion Control

• Rate-Based Congestion Control(Closed loop congestion control)

• After every k data cells, a host transmits a special RM (Resource Management) cell (indicated by 110 in the cell payload type field) - the cell travels the same path as data cells, but is treated specially by the switches along the way (see below)(Tanenbaum p470, fig. 5-75)

• The RM cell sent contains the rate at which the sender would currently like to transmit (possibly the Peak Cell Rate, possibly less) - this is called the ER (Explicit Rate)

• As the RM cell passes through the various switches on its way to the receiver, those that are congested reduce ER (no switch may increase it!)

• When the sender gets the RM cell back, it can adjust its actual cell rate to bring it into line with what the slowest switch can handle

For a web-based tutorial on ATM, see:

http://www.webproforum.com/nortel2


Lecture 17: The Transport Layer

"The transport layer is not just another layer. It is the heart of the whole protocol hierarchy. Its task is to provide reliable, cost-effective data transport from the source machine to the destination machine, independent of the physical network or networks currently in use."

Andrew Tanenbaum

__________________________________________________

• Introduction

• Elements of Transport Protocols

• Position of Transport Layer in ISO-OSI and TCP/IP

• The Transport Layer in ISO-OSI

• The Transport Layer in TCP/IP

• Implementation Issues

Introduction

• The purpose of the Transport Layer is to provide to applications a message transfer facility which is independent of the underlying network.

• Like other functional layers in a layered architecture model, the Transport Layer is defined in terms of (i) the services it offers to the layer above, (ii) peer-peer protocols and the (iii) services it uses from the layer below: (Halsall p657, Fig. 11.10)

Elements of Transport Protocols

• Addressing

• Establishing Connections

• Transfer of Data

• Releasing Connections

• Flow Control and Buffering

• Congestion Control

• Multiplexing

• Crash Recovery

Position of Transport Layer in OSI and TCP/IP

(Halsall p642, Fig. 11.1)


The Transport Layer in ISO-OSI

• User Service Primitives(Halsall p671, Fig. 11.19)

The Transport Layer in ISO-OSI

• Time Sequence Diagram for User Services(Halsall p672, Fig. 11.20)

The Transport Layer in ISO-OSI

• User Services State Transition Diagram and Sequence Table(Halsall p673, Fig. 11.21)

The Transport Layer in TCP/IP

• Provides TWO services: User Datagram Protocol (UDP) and Transmission Control Protocol (TCP)

• Both build on the layer 3 Internet Protocol (IP) which implements datagram packet switching.

• UDP is 'connectionless' and does not provide sequencing or flow control. It is used for fast 'one-shot' message exchanges.

• TCP is 'connection-oriented', provides reliable data transfer, and implements flow control, congestion control etc.

• Both use the same form of addressing - IP address + port address.

The Transport Layer in TCP/IP

• TCP User Service Primitives(Halsall p647, Fig. 11.1)

The Transport Layer in TCP/IP

• TCP User Services - Time Sequence Diagram(Halsall p649, Fig. 11.4)

Implementation Issues(or "What does the software look like?")

• Each layer is usually implemented as one or more 'tasks' or 'processes', which implement the layer protocol.

• Layers (Tasks) communicate with each other via first-in-first-out (FIFO) queues of 'Event Control Blocks': (Halsall p686, Fig. 11.31)

Implementation Issues(or "What does the software look like?")

• An implementation of Transport Event Control Blocks: (Halsall p688, Fig. 11.32a)


Implementation Issues(or "What does the software look like?")

• Outline program structure of a protocol layer:(Halsall p690, Fig. 11.33)


Lecture 18: Application and Application Support Protocols

• Application-oriented layers in OSI and TCP/IP models

• The Session layer

• The Presentation layer
- ASN.1
- Data Encryption

• OSI Common Application Service Elements

• The Application Layer in TCP/IP and ISO-OSI protocol stacks

Application-oriented Layers in OSI Model

(Halsall Fig.12.1, p695)

Application-oriented Layers in TCP/IP Model

(Halsall Fig.13.2, p758)

The Session Layer (OSI)

• Main functions:

- establish a logical communication path ("session connection") with another application entity, use it to exchange data ("dialog units") and release the connection in an orderly way.

- establish synchronisation points during a dialog and, in the event of errors, resume the dialog from an agreed synchronisation point.

- interrupt (suspend) a dialog and resume it later at a prearranged point.

The Session Layer (OSI)

• Synchronisation in the session layer:(Halsall Fig.12.3a, p700)

The Presentation Layer

• Different applications, operating systems, programming languages etc. typically use different representations for data.

• The aim of the presentation layer is to ensure that messages exchanged between two application processes have a common meaning or shared semantics.

• (Halsall Fig.12.9, p707):


The Presentation Layer

• ISO has defined the Abstract Syntax Notation 1 (ASN.1) - a data description language and a set of encoding rules - which defines a transfer syntax for unambiguously converting data structures to a sequence of bytes for transmission (and for unambiguously decoding at the receiver).

• ASN.1 compilers are available for a range of programming languages - these compilers generate encoding and decoding functions to convert between specific language data types and the ASN.1 representation.

The Presentation Layer

• Data Encryption Terminology(Halsall Fig.12.16, p719)

The Presentation Layer

• The Data Encryption Standard (DES), defined by the US National Bureau of Standards, is one commonly used encryption technique.

• DES is a block cipher - it operates on fixed size blocks of data (64 bits) using a 56 bit encryption key. The same key is used for decryption.

• DES uses a combination of substitution (replace a group of bits by another) and transposition (change the order of the bits) using "S" and "P" boxes (implemented in fast hardware):(Halsall Figs.12.17, 12.18, p721, 722)

The Presentation Layer

• DES uses 19 stages of transposition/substitution, with different "sub-keys" derived from the 56 bit encryption key used at each stage:(Halsall Fig.12.19, p723)

The Presentation Layer

• Using a 56 bit key gives around 10^17 possible keys. Many believe that this is insufficient. (See Tanenbaum's book for a good discussion of this).

• Using identical keys for encryption and decryption suffers from the key distribution problem - the selected key needs to be sent to the receiver and may be intercepted.

• Public Key Cryptography provides an elegant method for overcoming the key distribution problem.

The Presentation Layer

• Public Key Cryptography

- Uses different (but obviously complementary) encryption and decryption keys (algorithms) E and D such that


D(E(P)) = P

It is exceedingly difficult to deduce D from E

E cannot be broken by a chosen plaintext attack.

- Under these conditions, the encryption key can be made public - a recipient, A, generates encryption and decryption keys EA and DA. The encryption algorithm and EA are made public. Anyone can send encrypted data to A using EA but only A can decrypt it.

- One commonly used algorithm for calculating E and D is the RSA algorithm (Rivest, Shamir and Adleman) which uses number theory to generate the key pairs.(Halsall Fig.12.21, p727)
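- A toy Python sketch of the RSA key-pair idea, illustrating D(E(P)) = P with deliberately tiny numbers (far too small to be secure; purely to show the mechanics):

```python
p, q = 61, 53
n = p * q                      # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                         # public (encryption) exponent, coprime to phi
d = pow(e, -1, phi)            # private (decryption) exponent (Python 3.8+)

P = 65                         # plaintext represented as a number < n
C = pow(P, e, n)               # E(P): encrypt with the public key
print(C, pow(C, d, n))         # D(E(P)) recovers 65
```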

The Presentation Layer

• Public Key Cryptography can also be used as a method of message authentication. The idea here is that DA is made public and EA is kept secret. If someone receives a message which can be decrypted using DA, it must have come from A since only A knows EA.

A two stage process can provide both authentication and encryption:(Halsall Fig.12.22, p729)

OSI Common Application Service Elements

• Part of the OSI Application layer, CASEs provide general support functions to specific application services such as file transfer, e-mail etc.

• The main CASE's are:

- Association Control Service Element - initiates connections (associations) between application entities

- Remote Operations Service Element - provides a remote procedure call facility

- Commitment, Concurrency and Recovery - controls access to shared resources

The Application Layer in ISO-OSI and TCP/IP Protocol Stacks

• Application protocols in the application layer provide services to 'application processes'

• Application protocols exist in both ISO-OSI and TCP/IP stacks which provide services such as: remote terminal access; file transfer; electronic mail; network management; WWW; directory services...

• The way in which ISO-OSI and TCP/IP application protocols operate is somewhat different:

• the ISO-OSI model provides extensive application support protocols and Presentation and Session layers

• in the TCP/IP stack, the application protocol communicates directly with the transport layer


• Both ISO-OSI and TCP/IP use the idea of virtual devices (although at different layers)

The Application Layer in ISO-OSI and TCP/IP Protocol Stacks

• Application layer in OSI and TCP/IP suites(Halsall p756 fig. 13.1)

Example TCP/IP Application protocols

• telnet

• Enables a user at a terminal to log in to a remote machine and use it as if the terminal were directly connected to it.

• ftp (file transfer protocol)

• enables a user to send and receive files to/from a remote file system

• smtp (simple mail transfer protocol)

• Provides a networkwide mail transfer service between the mail systems associated with different machines

• snmp (simple network management protocol)

• Enables a user (e.g. the network manager) to gather performance data or to control the operation of network elements (e.g. bridges) via the network itself

Example ISO-OSI Application Protocols

• VT (Virtual Terminal)

• Provides similar services to telnet

• FTAM (File Transfer, Access and Management)

• Provides similar services to ftp

• MOTIS (Message Oriented Text Interchange Standard)

• Provides similar services to smtp

• CMIP (Common Management Information Protocol)

• Provides similar services to snmp


Example ISO-OSI Application Protocols

• JTM (Job Transfer and Manipulation)

• Provides a facility for a user application process (AP) to submit a job to a remote AP for processing

• MMS (Manufacturing Messaging Service)

• Provides a standard protocol for manufacturing related messages (e.g. for robot or numerical machine control)

• RDA (Remote Database Access)

• Provides a facility for an AP to access a remote database management system (DBMS)