
Advanced Networks 1 - Dr. Mohammed Omari / University of Adrar



Dr. Omari Mohammed

Maître de Conférences Classe A (Associate Professor)

Université d'Adrar

Email: [email protected]

COURSE HANDOUT

Subject: Advanced Networks 1

Level: 1st Year Master in Computer Science

Option: Networks and Intelligent Systems


Advanced Computer Networks

Textbook: Data Communications and Networking, 4th Edition, Behrouz Forouzan.

Program:

1- Reminder of basic notions

2- Virtual Networks

3- ATM Networks

4- Gigabit Ethernet Networks

5- Introduction to Quality of Service

6- Multicast Routing


Reminder of Basic Notions

I- Introduction:

1- Why Computer Networks?

- Exchanging ideas.

- Separating data from physical storage.

- Convenience: without computer networks, everybody would need a huge "supercomputer".

- Mobility: it is hard to move a supercomputer while traveling.

- Robustness: sudden damage to one or a few machines should not affect the entire system.

- Concurrency: different machines can be used to process distributed computing applications.

2- Network Effectiveness:

The effectiveness of any computer network system depends on 3 fundamental characteristics:

1. Delivery: The network must deliver data to the correct destination.

2. Accuracy: The network must deliver data without alteration in transmission.

3. Timeliness: The network must deliver data on schedule. Some data are asynchronous (email, file transfer) and thus do not require a high-quality transmission service. On the other hand, live TV or radio transmission needs nearly immediate delivery.

[Figure: hosts Host1, Host2, and Host3 interconnected through a subnet of IMPs (IMP1-IMP5). IMP: Interface Message Processor.]


3- Data Communication Components:

- Message: The information to be communicated such as text, sound, image, video, …

- Sender: The device that sends the message such as a computer, a server, a video camera, …

- Receiver: The device that receives the message.

- Medium: The physical path by which a message travels from a sender to a receiver, e.g.,

twisted-pair wire, coaxial cable, fiber optic cable, radio waves, …

- Protocol: The set of rules that governs data communications.

Example of a protocol:

At the sender side:

- Put the message in an envelope.

- Stick a stamp on the envelope.

- Go outside and post it (slide it into a mailbox).

At the receiver side:

- Check your PO box for new mail.

- Open the envelope and throw it away.

- Read the message.

4- Direction of Data Flow:

Data flow has three modes of direction:

- Simplex: In this mode, information is transmitted in one direction only: the sender can only send information while the receiver can only receive it.

[Figure: the data communication components: a Sender and a Receiver exchanging a Message over a Medium, each side following the steps (Step 1, Step 2, Step 3, …) of the same Protocol.]


- Half-Duplex: In this mode, both stations can send and receive information, but not at the same time: when one station sends, the other receives, and vice versa. Walkie-talkie conversations are an example of half-duplex communication.

- Full-Duplex: In this mode, both stations can send and receive information simultaneously. Phone communications are full-duplex since both persons can speak at the same time.

5- Types of Connections:

A network is two or more devices connected together through links. A link is a communication pathway that is able to transfer data from one device to another.

In order for two devices to communicate, they need to be connected to the same link at the same time. There are two types of connections:

- Point-to-Point connection: in this type, a dedicated link is provided between two devices.

When you change a TV channel, the infrared connection between the remote control and the

TV set is point-to-point.

[Figures: simplex (Mainframe → Monitor: data flows in one direction only), half-duplex (Workstation 1 ↔ Workstation 2: the direction of data alternates between time 1 and time 2), and full-duplex (Workstation 1 ↔ Workstation 2: data flows in both directions at any time).]


- Multipoint connection: in this type, a link is shared between many communicating devices,

either spatially or temporally:

When the link is shared spatially, two or more devices each use part of the link capacity. For example, if the link has a bandwidth of 10 Mbps (megabits per second), then 5 workstations can download simultaneously at 2 Mbps each.

On the other hand, devices can share the link temporally: each device uses the entire bandwidth for a certain moment.

6- Types of Topologies:

A network topology refers to the way in which the devices are connected physically. We can

distinguish 4 types of topologies:

- 6.1. Mesh topology: in this topology, each station has a dedicated point-to-point link to every other device. Therefore, for n stations we need n(n-1)/2 links to obtain a fully connected network (for example, 10 stations require 45 links).

[Figures: a point-to-point link between Workstation 1 and Workstation 2; a multipoint link shared by several workstations through a SERVER, which could be a hub or a switch.]


Advantages:

- Robust.

- Security and privacy.

- Fast response (1 hop).

- Fault detection.

Disadvantages:

- Very expensive: the number of links and ports grows quadratically with the number of devices.

- 6.2. Star topology: in this topology, each station has a dedicated point-to-point link only to a central controller, usually called a hub. Hence, the traffic between any two stations takes two hops:

Advantages:

- Only one link and one port per station.

- Robust (when a device is down).

- Fast response (2 hops).

- Fault detection.

[Figure: star topology: stations connected to a central HUB.]


Disadvantages:

- Not robust: when the hub is down, the entire network is disabled.

- The hub must have as many ports as there are connected stations.

- No privacy.

- 6.3. Bus topology: this topology is a multipoint connection where each station is connected through a tap (using a drop line) to a backbone cable:

Advantages:

- Drop lines are proportionally shorter than the links in a ring topology; only the backbone cable has to extend to reach all stations.

- Easy installation: the backbone cable can be built into office walls.

- Robust (when a device is down).

Disadvantages:

- Fault detection is difficult.

- Connections are limited: adding more stations weakens the signal transmission.

- Not robust when the backbone cable is damaged.

- No privacy.

- 6.4. Ring topology: in this topology, each station has two dedicated point-to-point links, to the stations on its right and on its left. Data are transmitted from one station to the next until they reach the destination:

[Figure: bus topology: a backbone cable terminated at both ends (Cable End), with stations attached through taps and drop lines.]


Advantages:

- Easy to install: two ports and two links per station.

Disadvantages:

- A considerable number of hops.

- Not Robust.

- No Privacy.

II- Network Categories:

Networks are classified into three categories: LAN, MAN, and WAN

1- Local Area Networks (LAN):

• Privately owned.

• Links devices in a single building, or multiple buildings.

• Example: connecting two PCs and a printer in a house.

• Example: connecting 100 workstations and a server in an institute.

• Goals: sharing local resources (printer, …), client/server applications.

• Topologies: Ring, Bus, Star.

• Size: up to 1 km.

• Speed: up to 100 Mbps.


2- Metropolitan Area Networks (MAN):

• Extends over a city.

• Owned by a public/private company.

• Size: up to 10 km (city size)

• Speed: up to 10 Gbps.

[Figure: a MAN: a public city network interconnecting a single-building LAN and a multiple-building LAN through a backbone cable.]


3- Wide Area Networks (WAN):

• Extends over a country or a continent.

• Provides long distance transmissions: large bandwidths, high quality and capacity of transmission

medium.

• Utilizes public resources.

III- Network Models: Internet Model

The Internet model is a layered protocol stack that dominates data communications and networking today.

It is basically composed of 5 layers:

- A layer represents some functions that are mostly related.

- Functions of different layers are of different abstraction level.

- Headers and trailers might be attached to data units to support the operation of the protocols.

[Figure: the five layers of the Internet model, top to bottom: Application (user level), Transport, Network, Data Link, Physical (subnet level).]


- When data is passed from one layer to another, it might be divided into segments that we call data units: transport data units (TDUs), network data units (packets), and data link data units (frames).
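As an illustration (not part of the original notes), the following Python sketch shows the idea of encapsulation, with each layer wrapping the data unit of the layer above; all field layouts and names here are made up for the example.

# Illustrative encapsulation: every layer prepends its own header,
# and the data link layer also appends a trailer.
def transport_encapsulate(data: bytes, src_port: int, dst_port: int) -> bytes:
    return f"TDU|{src_port}>{dst_port}|".encode() + data            # transport data unit

def network_encapsulate(tdu: bytes, src_ip: str, dst_ip: str) -> bytes:
    return f"PKT|{src_ip}>{dst_ip}|".encode() + tdu                 # packet

def datalink_encapsulate(packet: bytes, src_mac: str, dst_mac: str) -> bytes:
    return f"FRM|{src_mac}>{dst_mac}|".encode() + packet + b"|CRC"  # frame (header + trailer)

message = b"hello"
frame = datalink_encapsulate(
    network_encapsulate(transport_encapsulate(message, 5000, 80), "10.0.0.1", "10.0.0.2"),
    "AA:AA:AA:AA:AA:AA", "BB:BB:BB:BB:BB:BB")
print(frame)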

1- Physical Layer:

• Raw data transmission over communication channel.

• Data rate transmission.

• Voltage level, representation of bits.

• Simplex, half-duplex, duplex connections.

• Modulation, encoding, decoding.

• Transmission media.

• Transmission techniques: analog, digital.

2- Data Link Layer:

• Responsible for (local) node-to-node delivery of frames (information).

• Since noise might corrupt the transmitted signal at the physical layer, the data link layer detects and corrects these errors at the receiver side before passing data to the network layer.

[Figure: encapsulation and peer-to-peer communication: each layer adds its own header (H5, H4, H3, H2) to the data unit of the layer above, the data link layer also adds a trailer (T2), and the result is transmitted as a bit stream at the physical layer; the receiver removes the headers in reverse order.]

[Figure: segmentation: layer-5 data is divided into Layer 4 Data 1, 2, and 3, each given its own H4 header to form TDU 1, TDU 2, and TDU 3.]


• At the sender site, the data link layer divides the network data units into frames of manageable

sizes.

• Each frame has an attached header that contains the physical address of the sender and the

receiver.

• If the receiver is outside the sender’s network, the frame’s header should contain the address of the

bridge that connects the local network to the outside.

• Flow control: needed when fast computers are connected to slow ones.

• Access control: in a multipoint connection, for instance, data link protocols resolve collision problems.

3- Network Layer:

• Responsible for (global) source-to-destination delivery of packets (information).

• When two systems belong to the same network, there is no need for a network layer. However, in order to send a packet between two systems on different networks, routing algorithms are necessary to route packets through the grid of IMPs (routers, switches, …).

• The physical address is used in the same network to handle local problems (data link layer). The

logical address (e.g., IP address) is attached to any machine logically, and can be changed any

time. It serves as a universal address in internetworks.

[Figure: a frame (T2 | Level 2 Data | 10 | 65) carrying the physical addresses of the current hop travels from the Source through a chain of IMPs (physical addresses 10, 28, 53, 65, 87) toward the Destination.]


• IMPs have usually two or more interfaces, each directed to the connected network. Therefore an

IMP has more than one physical and logical address.

• When the frame is passed from a network to another, the physical addresses stored in the header

are changed based on the logical addresses that are kept in the network packet inside the frame.

4- Transport Layer:

• Responsible for process-to-process delivery of the entire sent message.

• There could be many processes communicating between two systems. The transport layer organizes the communication through port numbers, each corresponding to an application or a service (process).

• Port numbers are stored in the header of the transport data unit TDU.

• The transport layer is responsible for segmentation and reassembly of the data coming from the

application layer.

[Figure: source-to-destination delivery across Network 1 (bus), Network 2 (ring), and Network 3 (bus): stations and IMP interfaces carry physical addresses (10, 87, 71, 95, 77, 20, 99, 33, 66) and logical addresses (A, E, H, P, M, F, T, N, Z); the frame's physical addresses change at each hop (T2 | 10 20 | A P | Data, then T2 | 99 33 | A P | Data, then T2 | 66 95 | A P | Data) while the logical addresses A and P inside the packet remain the same.]

[Figure: segmentation at the transport layer: data from the application layer is split into data segments, each given a header, forming transport data units (TDUs).]


• TDUs can be sent along different paths (connectionless service) or along one dedicated path (connection-oriented service). In connectionless mode, sequence numbers must be added to the TDU headers in order to rearrange them in case they arrive out of order.

• The transport layer is also concerned with the flow control between end systems.

• Error control is also handled by the transport layer: if a TDU is lost (never arrived) the destination

usually asks the source to retransmit it.

5- Application Layer:

• Serves as the interface between network and the user.

• Provides support for services like email, web access, remote login, file transfer.


III- Network Models: OSI Model

The OSI model (Open Systems Interconnections), designed by ISO (International Organization for

standardization), and has two more layers compared to the internet model: the presentation and the session

layers:

1- Presentation layer:

• Designed to handle syntax and semantics of the exchanged information.

• Character sets: ASCII, extended ASCII, Unicode, ISO, …

• Compression and decompression.

• Encryption and decryption.

2- Session layer:

• Session establishment and session termination.

• Dialog controller.

[Figure: the seven layers of the OSI model, top to bottom: Application (user level), Presentation, Session, Transport, Network, Data Link, Physical (subnet level).]


Virtual Networks: VLANs

1- Introduction:

• A station is considered part of a LAN if it physically belongs to that LAN. So the criterion of

membership is geographic.

• In a switched LAN, 10 stations are grouped into three LANs that are connected by a switch. The

first four engineers work together as the first group, the next three engineers work together as the

second group, and the last three engineers work together as the third group.

• Now suppose the administrator needs to move two engineers from the first group to the third group, in order to speed up the project being done by the third group.

• The LAN configuration would need to be changed. The network technician must rewire.

• The problem is repeated if, in another week, the two engineers move back to their previous group.

• In a switched LAN, changes in the work group mean physical changes in the network

configuration, which can be expensive, if we assign a switch per each group.

• A VLAN is a technology that divides a physical LAN into logical segments, without the need to

rewire when a station is migrating.

• A VLAN is a work group in the organization. Therefore, a physical LAN can include multiple

VLANs.

• The group membership in VLANs is defined by software, not hardware. Any station can be

logically moved to another VLAN using appropriate software that is run by an administrator.



• All members belonging to a VLAN can receive broadcast messages sent to that particular VLAN.

• When a station moves from VLAN 1 to VLAN 2, it receives broadcast messages sent to VLAN 2,

but no longer receives broadcast messages sent to VLAN 1.

• Moving stations from one group to another through software is easier than changing the

configuration of the physical network.

• VLAN technology even allows the grouping of stations connected to different switches in a

VLAN.

[Figure: a single switch with stations 1-10 assigned by software to VLAN1, VLAN2, and VLAN3.]

[Figure: switch A and switch B, connected through a backbone switch, with stations 1-10 on each switch grouped into VLAN1, VLAN2, and VLAN3 spanning both switches.]


• This is a good configuration for a company with two separate buildings. Each building can have its

own switched LAN connected by a backbone.

• People in the first building and people in the second building can be in the same work group even

though they are connected to different physical LANs.

• A VLAN defines a broadcast domain. For instance, in the previous figure, when station 1 broadcasts some information to VLAN 1, the message is delivered to stations 2 and 5, which are connected to switch A. At the same time, the message is forwarded to switch B in order to be sent to station 7 only.

2- Membership

• Vendors use different characteristics such as port numbers, MAC addresses, IP unicast addresses,

IP multicast addresses, in order to handle the memberships.

Port Numbers

• Some VLAN vendors use the switch port numbers as a membership characteristic. For example, in

the previous figure, the administrator can define that stations connecting to ports 1, 2, 5, and 7

belong to VLAN 1; stations connecting to ports 3, 4, and 6 belong to VLAN 2; and so on.

MAC Addresses

• Some VLAN vendors use the 48-bit MAC address as a membership characteristic. For example, the administrator can stipulate that stations having MAC addresses E21342A12334 and F2A123BCD341 belong to VLAN 1.

IP Addresses

• Some VLAN vendors use the 32-bit IP address as a membership characteristic. For example, the

administrator can stipulate that stations having IP addresses 181.34.23.67, 181.34.23.72,

181.34.23.98, and 181.34.23.112 belong to VLAN 1.

Multicast IP Addresses

• A multicast IP address (of class D) is assigned to a group of stations instead of a single station. The group is entitled to receive a particular service from a particular server, and only stations that are members of the group should be allowed to receive that service.

• Some VLAN vendors use the multicast IP address as a membership characteristic. Multicasting at

the IP layer is now translated to multicasting at the data link layer, since VLANs are managed at

the frame level.

Combination

• Recently, the software available from some vendors allows all these characteristics to be

combined. The administrator can choose one or more characteristics when installing the software.

In addition, the software can be reconfigured to change the settings.
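As an illustration (not part of the original notes), a membership table combining several characteristics could be queried as in the following Python sketch; the entries mirror the port, MAC, and IP examples above.

# Hypothetical VLAN membership table: (characteristic kind, value) -> VLAN number.
membership = {
    ("port", 1): 1, ("port", 2): 1, ("port", 5): 1, ("port", 7): 1,   # VLAN 1 by port
    ("port", 3): 2, ("port", 4): 2, ("port", 6): 2,                   # VLAN 2 by port
    ("mac", "E21342A12334"): 1, ("mac", "F2A123BCD341"): 1,           # VLAN 1 by MAC address
    ("ip", "181.34.23.67"): 1, ("ip", "181.34.23.72"): 1,             # VLAN 1 by IP address
}

def vlan_of(kind, value):
    # Return the VLAN a station belongs to, or None if it is not assigned.
    return membership.get((kind, value))

print(vlan_of("port", 5))              # 1
print(vlan_of("ip", "181.34.23.67"))   # 1
print(vlan_of("port", 9))              # None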


3- Configuration

• Stations are configured in one of three ways: manual, semiautomatic, and automatic.

Manual Configuration

• In a manual configuration, the network administrator uses the VLAN software to manually assign

the stations into different VLANs at setup.

• The term manually here means that the administrator types the port numbers, the IP addresses, or

other characteristics, using the VLAN software.

Automatic Configuration

• In an automatic configuration, the stations are automatically connected or disconnected from a

VLAN using criteria defined by the administrator.

• For example, the administrator can define the project number as the criterion for being a member

of a group. When a user changes the project, he or she automatically migrates to a new VLAN,

without the involvement of the administrator.

Semiautomatic Configuration

• A semiautomatic configuration is somewhere between a manual configuration and an automatic

configuration. Usually, the initializing is done manually, with migrations done automatically.

4- Communication Between Switches

• In a multi-switched backbone, each switch must know not only which station belongs to which

VLAN, but also the membership of stations connected to other switches.

• For example, in the previous figure, switch A must know the membership status of stations

connected to switch B, and vice versa.

• There are three methods for switch communication: table maintenance, frame tagging, and time-

division multiplexing.

Table Maintenance

• In this method, when a station sends a broadcast frame to its group members, the switch creates an

entry in a table and records station membership.

• The switches send their tables to one another periodically for updating.

Frame Tagging

• In this method, when a frame is traveling between switches, an extra header (tag) is added to the

MAC frame to define the destination VLAN. The frame tag is used by the receiving switches to

determine the VLANs to be receiving the broadcast message.

Time-Division Multiplexing (TDM)

• In this method, the connection between switches is divided into timeshared channels as in TDM.

For example, if the total number of VLANs in a backbone is five, each incoming connection in a


switch is divided into five channels. The traffic destined for VLAN 1 travels in channel l, the

traffic destined for VLAN 2 travels in channel 2, and so on.

• The receiving switch determines the destination VLAN by checking the channel from which the

frame arrived.

5- IEEE Standard

• In 1996, the IEEE 802.1 subcommittee passed a standard called 802.1Q that defines the format for

frame tagging.

• VLANs are set up between switches by inserting a tag into each Ethernet frame. A tag field

contains VLAN membership information. It may contain also the 802.1p priority field that is used

in quality of service purposes.

• IEEE 802.1Q has opened the way for further standardization in other issues related to VLANs.

Most vendors have already accepted the standard.
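As a rough illustration of frame tagging (not part of the original notes): the 802.1Q tag is a 4-byte field, the TPID 0x8100 followed by a 16-bit TCI holding the 3-bit 802.1p priority, a 1-bit DEI, and the 12-bit VLAN ID, inserted after the source MAC address. The Python sketch below builds such a tag; the untagged frame contents are placeholders.

import struct

def dot1q_tag(vlan_id, priority=0, dei=0):
    # TPID 0x8100 followed by TCI = priority (3 bits) | DEI (1 bit) | VLAN ID (12 bits).
    tci = (priority << 13) | (dei << 12) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", 0x8100, tci)

def tag_frame(frame, vlan_id, priority=0):
    # Insert the 4-byte tag right after the destination and source MAC addresses (12 bytes).
    return frame[:12] + dot1q_tag(vlan_id, priority) + frame[12:]

untagged = bytes(12) + b"\x08\x00" + b"payload"   # placeholder MACs + EtherType + data
print(tag_frame(untagged, vlan_id=7, priority=5).hex())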

6- Advantages

• There are several advantages to using VLANs.

Cost and Time Reduction

• VLANs can reduce the migration cost of stations going from one group to another.

• Physical reconfiguration takes time and is costly. Instead of physically moving one station to

another segment or even to another switch, it is much easier and quicker to move it by using

software.

Creating Virtual Work Groups

• VLANs can be used to create virtual work groups. For example, in a campus environment,

professors working on the same project can send broadcast messages to one another without the

necessity of belonging to the same department.

• This can reduce traffic if the multicasting capability of IP was previously used.


Security

• VLANs provide an extra measure of security. People belonging to the same group can send

broadcast messages with the guaranteed assurance that users in other groups will not receive these

messages.


ATM Networks

I- Circuit Switching:

• Circuit switching creates a physical connection between two devices such as phones or computers.

• Circuit switching is divided into two types: space division switches and time division switches.

• In space division switching, each input of the switch has a direct physical connection to the output.

• We distinguish three types of space division switching: crossbar switching, multistage switching, and multiple-path switching.

• In crossbar switching, we use a grid that links every input to every output through crosspoints.

• In multistage switching, we use multiple crossbar switches at different levels in order to reduce the

number of crossbars.

[Figure: a crossbar switch with 3 inputs and 4 outputs connected through crosspoints.]

[Figure: taxonomy of switching: circuit switching and packet switching; packet switching is further divided into the datagram and virtual-circuit approaches.]


II- Packet Switching:

• Packet switching is performed at the network layer. Packet switching follows the store-and-

forward principle.

• The network layer divides the data coming from the transport layer into equal-size packets, each including the logical addresses of the sender and the receiver.

• Packet switching was invented to handle the problem of message switching where long messages

are entirely forwarded to the same router.

• In packet switching, the message is divided into multiple packets.

[Figure: timing comparison of circuit switching, message switching, and packet switching across nodes A, B, C, D: circuit switching pays a connection request/accept exchange before sending the data; message switching stores and forwards the whole message (Msg) hop by hop, adding queuing delay at each node; packet switching forwards packets Pac.1-Pac.3 in a pipeline, each hop adding propagation and queuing delay.]

[Figure: a multistage switch built from crossbars arranged in a 1st, 2nd, and 3rd stage.]


Item | Circuit Switch | Packet Switch (datagram)
Connection setup | Yes | No
Dedicated physical path | Yes | No
Each packet follows the same route | Yes | No
Packets arrive in order | Yes | No
Is a switch crash fatal? | Yes | No
Bandwidth available | Fixed | Dynamic
Time of possible congestion | At connection setup | On every packet
Potentially wasted bandwidth | Yes | No
Store-and-forward transmission | No | Yes

• Packet switching is divided into 2 types: datagram and virtual circuit.

1- Datagram Packet Switching:

• In datagram packet switching, packets are routed from one router (smarter switch) to another based

on the available capacity of links between routers, and the ability to store them by receiving

routers.

• Each packet must include a logical address of the sender and the receiver (e.g., IP address).

• In datagram packet switching, there is no guarantee that packets arrive in order. Therefore, the receiver has to wait until all packets are received, then reorder them according to their sequence numbers.

• In datagram packet switching, there is no need for connection establishment: packets are sent to the next available router and stored there. The router then routes them to the next router, and so on.

• The next router is selected based on the final destination of the packet: the router reads the packet's destination address and decides which neighboring router lies on a possible path toward that destination.

[Figure: datagram packet switching: the message to be sent is divided into packets, which may arrive at the receiver in a different order before the message is reassembled.]


III- Virtual-Circuit Switching:

• A virtual-circuit network is a cross between a circuit-switched network and a datagram network. It has some characteristics of both.

• As in a circuit-switched network, there are setup and teardown phases in addition to the data

transfer phase.

• Resources (bandwidth) can be allocated during the setup phase, as in a circuit-switched network,

or on demand, as in a datagram network.

• As in a datagram network, data are packetized and each packet carries an address in the header.

However, the address in the header has local use, not end-to-end use.

• As in a circuit-switched network, all packets follow the same path established during the

connection.

• A virtual-circuit network is normally implemented in the data link layer, while a circuit-switched

network is implemented in the physical layer and a datagram network in the network layer.

• In the previous figure, the virtual-circuit network has switches that allow traffic from sources to

destinations. A source or destination can be a computer, packet switch, bridge, or any other device

that connects other networks.

1- Addressing

• In a virtual-circuit network, two types of addressing are involved: global and local. These

addresses are often called virtual-circuit identifier (VCI).

• A source or a destination needs to have a global address such as IP address. However, a global

address in virtual-circuit networks is used only to create a virtual-circuit identifier, which is

considered as a local address.

2- Virtual-Circuit Identifier

• The identifier that is actually used for data transfer is called the virtual-circuit identifier (VCI). A

VCI, unlike a global address, is a small number that has only switch local scope.

[Figure: a virtual-circuit network: end systems attached to a network of interconnected switches.]


• The VCI is used by a frame between two switches. When a frame arrives at a switch, it has a VCI;

when it leaves, it has a different VCI.

3- Communication Phases:

• As in a circuit-switched network, a source and destination need to go through three phases in a

virtual-circuit network: setup, data transfer, and teardown.

• In the setup phase, the source and destination use their global addresses to help switches build a

virtual path by making table entries for the connection.

• In the teardown phase, the source and destination inform the switches to delete the corresponding

entry.

• Data Transfer Phase:

• To transfer a frame from a source to its destination, all switches need to have a table entry for this

virtual circuit. The table, in its simplest form, has four columns.

Incoming           Outgoing
Port   VCI         Port   VCI
1      14          2      22
1      77          3      41

• This means that the switch holds four pieces of information for each virtual circuit that is already

set up.

[Figure: VCI rewriting at a switch: a frame arriving on port 1 with VCI 14 leaves on port 2 with VCI 22; a frame arriving on port 1 with VCI 77 leaves on port 3 with VCI 41.]


• The previous figure shows how a frame from source A reaches destination B and how its VCI

changes during the trip. Each switch changes the VCI and routes the frame.

• The process creates a virtual circuit, not a real circuit, between the source and destination.
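As a small illustration (not part of the original notes), the data-transfer step can be sketched in Python with the four-column table above: the switch looks up the incoming port and VCI, rewrites the VCI, and forwards the frame on the outgoing port.

# Switching table from the example above:
# (incoming port, incoming VCI) -> (outgoing port, outgoing VCI)
table = {(1, 14): (2, 22), (1, 77): (3, 41)}

def forward(in_port, vci, data):
    out_port, out_vci = table[(in_port, vci)]
    return out_port, out_vci, data        # same data, relabeled VCI

print(forward(1, 14, b"..."))             # (2, 22, b'...')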

• Setup Phase:

• In the setup phase, a switch creates an entry for a virtual circuit. For example, suppose source A

needs to create a virtual circuit to B. Two steps are required: the setup request and the

acknowledgment.

• A setup request frame is sent from the source to the destination. The frame should contain the

global source and destination addresses.

• When the setup request passes through a switch, the switch knows that a frame going from A to B goes out through port 2.

• The switch creates an entry in its table for this virtual circuit, but it is only able to fill three of the four columns. It records the incoming port (1), chooses an available incoming VCI (77), and records the outgoing port (2). It does not yet know the outgoing VCI, which will be determined during the acknowledgment step. The process is repeated for each switch.

[Figure: data-transfer phase from source A to destination B through a chain of switches: each switch's table maps (incoming port, incoming VCI) to (outgoing port, outgoing VCI), namely (1,77)→(2,14), (1,14)→(2,66), (1,66)→(3,22), (2,22)→(3,55), (2,55)→(3,33), and the frame's VCI (77, 14, 66, 22, 55, 33) is rewritten accordingly at each hop.]


• Acknowledgment:

• A special frame, called the acknowledgment frame, completes the entries in the switching tables.

• The destination sends an acknowledgment back to the switch.

• The acknowledgment carries the global source and destination addresses so the switch knows

which entry in the table is to be completed. The frame also carries VCI 33, chosen by the

destination as the incoming VCI for frames from A.

• The switch uses this VCI to complete the outgoing-VCI column for this entry. Then it sends its own previously chosen incoming VCI back to the previous switch. The process is repeated until the acknowledgment reaches the source.

• The source uses the received VCI (14) as the outgoing VCI for the data frames to be sent to

destination B.

[Figure: setup-request phase from A to B: each switch creates a partial table entry containing the incoming port, the chosen incoming VCI, and the outgoing port ((1,77)→(2,·), (1,14)→(2,·), (1,66)→(3,·), (2,55)→(3,·), (2,22)→(3,·)); the outgoing-VCI column remains empty until the acknowledgment.]


• Teardown Phase:

• In this phase, source A, after sending all frames to B, sends a special frame called a teardown

request. Destination B responds with a teardown confirmation frame.

• All switches delete the corresponding virtual-circuit entry from their tables.

4- Delay in Virtual-Circuit Networks

• In a virtual-circuit network, there is a one-time delay for setup and a one-time delay for teardown.

• If resources are allocated during the setup phase, there is no wait time for individual packets.

• There are two other delays to be taken into consideration. The first is the transmission delay, the time needed for a packet to leave a station. For instance, if a station sends at 1 kbps, the transmission delay is 1 ms per bit.

• The second is the propagation delay of the transmission medium. If a cable is 100 m long and the signal propagates at 10^6 m/s, the propagation delay is 10^-4 s.

• So, the Total delay (setup) = setup delay + transmission delay + propagation delay.

• And, the Total delay (teardown) = teardown delay + transmission delay + propagation delay.

• And, Total delay (packets) = transmission delay + propagation delay. (good quality of service).
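As a worked illustration of these formulas (the numbers below are chosen for the example, not taken from the course):

packet_size_bits = 1_000      # a 1000-bit packet
data_rate_bps    = 1_000      # sending at 1 kbps -> transmission delay of 1 s for the packet
cable_length_m   = 100
signal_speed_mps = 1e6        # propagation speed from the example above
setup_delay_s    = 0.2        # assumed one-time setup delay

transmission_delay = packet_size_bits / data_rate_bps     # 1.0 s
propagation_delay  = cable_length_m / signal_speed_mps    # 1e-4 s

print("total (setup frame) =", setup_delay_s + transmission_delay + propagation_delay)
print("total (data packet) =", transmission_delay + propagation_delay)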

IV- ATM

• Asynchronous Transfer Mode (ATM) is the cell relay protocol designed by the ATM Forum and

adopted by the ITU-T.

• ATM allows for high-speed interconnection of all the world's networks. In fact, ATM can be thought of as the "highway" of the information superhighway.

1- Design Goals


• The need for a transmission system to optimize the use of high-data-rate transmission media, in

particular optical fiber. Newer transmission media and equipment are dramatically less susceptible

to noise degradation. Therefore, the extensive use of error and flow control should be minimized as

the technology advances.

• The new system must interface with existing systems (Ethernet, token ring, TCP/IP, …) and

provide wide-area interconnectivity between them without lowering their effectiveness or

requiring their replacement.

• If ATM is to become the backbone of international communications, as intended, it must be

available at low cost to every user who wants it.

• The new system must be able to work with and support the existing telecommunications

hierarchies (local loops, local providers, long-distance carriers, ….).

• The new system must be connection-oriented to ensure accurate and predictable delivery.

• In order to gain speed, the new system should move as many of the functions to hardware as

possible.

2- Problems

• Before ATM, data communications at the data link layer had been based on frame switching and

frame networks. Different protocols (Ethernet, token ring, wireless, …) use frames of varying size.

• As networks become more complex, the information that must be carried in the header becomes

more extensive. The result is larger and larger headers relative to the size of the data unit.

• To improve utilization, some protocols provide variable frame sizes to users.

• Mixed Network Traffic:

• Switches, multiplexers, and routers must incorporate elaborate software systems to manage the

various sizes of frames.

• Internetworking among the different frame networks is slow and expensive.

• Another problem is that of providing consistent data rate delivery when frame sizes are

unpredictable and can vary so dramatically.

• Imagine the results of multiplexing frames from two networks with different requirements (and

frame designs) onto one link. What happens when line 1 uses large frames (usually data frames)

while line 2 uses very small frames (the norm for audio and video information)?

[Figure: multiplexing mixed traffic: line A's large frame A1 and line B's small frames B1, B2, B3 arrive at a MUX; the multiplexed output carries A1 first, followed by B3, B2, and B1.]


• If line A's gigantic frame A1 arrives at the multiplexer even a moment earlier than line B's frames,

the multiplexer puts frame A1 onto the new path first. After all, even if line B's frames have

priority, the multiplexer has no way of knowing to wait for them and so processes the frame that

has arrived first.

• Frame B1 must therefore wait for the entire A1 bit stream to move into place before it can follow.

The huge size of A1 creates an unfair delay for frame B1, B2, and B3.

• Because audio and video frames ordinarily are small, mixing them with conventional data traffic

often creates unacceptable delays of this type and makes shared frame links unusable for audio and

video information. So, the traffic must travel over different paths.

3- Cell Networks

• Many of the problems associated with frame internetworking are solved by adopting a concept

called cell networking.

• A cell is a small data unit of fixed size.

• In a cell network, which uses the cell as the basic unit of data exchange, all data are loaded into

identical cells that can be transmitted with complete predictability and uniformity.

• As frames of different sizes and formats reach the cell network from a tributary network, they are

split into multiple small data units of equal length and are loaded into cells.

• The cells are then multiplexed with other cells and routed through the cell network.

• Because each cell is the same size and all are small, the problems associated with multiplexing

different-sized frames are avoided.

• In this way, cells arrive to the destination in a continuous stream (no interruptions or unpredictable

delay).

• So, a cell network can handle real-time transmissions, such as a phone call, without the parties

being aware of the segmentation or multiplexing at all.

4- Asynchronous TDM

• ATM uses asynchronous time-division multiplexing, and that is why it is called Asynchronous

Transfer Mode. The multiplexer combines cells from different channels.

[Figure: cell multiplexing: frames A1-A3 and B1-B3 are split into identical cells and interleaved by the MUX (output: B3 A3 B2 A2 B1 A1).]


• It uses fixed-size slots (size of a cell). ATM multiplexers fill a slot (TDM terminology) with a cell

(ATM terminology) from any input channel that has a cell ready; the slot is empty if none of the

channels has a cell to send.

5- Architecture

• ATM is a cell-switched network. The user access devices, called the endpoints, are connected

through a user-to-network interface (UNI) to the switches inside the network.

• The switches are connected through network-to-network interfaces (NNIs).

6- Virtual Connection:

• Connection between two endpoints is accomplished through transmission paths (TPs), virtual paths

(VPs), and virtual circuits (VCs).

• A transmission path (TP) is the physical connection (wire, cable, satellite, and so on) between an

endpoint and a switch or between two switches.

• A transmission path is divided into several virtual paths. A virtual path (VP) provides a connection or a set of connections between two switches.

• Cell networks are based on virtual circuits (VCs). All cells belonging to a single message follow

the same virtual circuit and remain in their original order until they reach their destination.

[Figure: ATM architecture: endpoints attach to the network's switches through UNIs (user-to-network interfaces); the switches are interconnected through NNIs (network-to-network interfaces).]

[Figure: asynchronous TDM: cells from channels A (A1-A3), B (B2, B3), and C (C1, C3) are placed into fixed-size slots as soon as they are ready; output: C3 B3 A3 B2 A2 C1 A1.]


• In short, the physical path is the medium that accepts many connections at a time; the virtual path is one possible connection that might be used by many pairs of endpoints, but not at the same time; and the virtual circuit is the real connection between two endpoints, which uses one virtual path over one physical path.

7- Identifiers

• In a virtual circuit network, to route data from one endpoint to another, the virtual connections

need to be identified. For this purpose, the designers of ATM created a hierarchical identifier with

two levels: a virtual path identifier (VPI) and a virtual-circuit identifier (VCI).

• The VPI defines the specific VP, and the VCI defines a particular VC inside the VP.

• The VPI is the same for all virtual connections that are combined (logically) into one VP.

• The lengths of the VPIs for UNIs and NNIs are different. In a UNI, the VPI is 8 bits, whereas in an

NNI, the VPI is 12 bits.

• The length of the VCI is the same in both interfaces (16 bits).

• So, a virtual connection is identified by 24 bits in a UNI and by 28 bits in an NNI.

• The idea of using two identifiers (VPI and VCI) is to allow hierarchical routing. Most of the switches in a typical ATM network route cells using only the VPIs. The switches at the boundaries of the network use both VPIs and VCIs in order to route cells to the endpoint devices.

8- Cells

• The basic data unit in an ATM network is called a cell. A cell is only 53 bytes long with 5 bytes

allocated to the header and 48 bytes carrying the payload. User data may be less than 48 bytes, so

padding is added to complete the 48 bytes.

• The header is occupied by the VPI and VCI that define the virtual connection.

[Figure: a transmission path (TP) divided into virtual paths (VPI=14 and VPI=18), each carrying several virtual circuits (VC=21, 32, 45 and VC=70, 74, 45); a virtual connection is uniquely defined by the pair (VPI, VCI), e.g. (14, 21).]


9- Connection Establishment and Release

• ATM uses two types of connections: PVC and SVC.

• A PVC (permanent virtual circuit) is a connection established between two endpoints by the network provider.

• An SVC (switched virtual circuit) is a connection created on demand: each time an endpoint wants to communicate with another endpoint, a new virtual circuit must be established.

• In both cases, the resources are allocated at the setup phase and released at the teardown phase.

• ATM needs the network layer logical addresses (e.g., IP) to establish the connection.

10- Switching (Routing)

• ATM uses switches to route the cell from a source endpoint to the destination endpoint.

• A switch routes the cell using both the VPIs and the VCIs.

• In the above figure, a cell with a VPI of 153 and VCI of 67 arrives at switch interface (port) 1. The

switch checks its switching table, which stores six pieces of information per row: arrival port

number, incoming VPI, incoming VCI, corresponding outgoing port number, the new VPI, and the

new VCI. The switch finds the entry with the interface 1, VPI 153, and VCI 67 and discovers that

the combination corresponds to output interface 4, VPI 140, and VCI 92. It changes the VPI and

VCI in the header to 140 and 92, respectively, and sends the cell out through interface 4.
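As a small illustration (not part of the original notes), the switching step just described can be sketched in Python using the six-column table from the example; only the entry quoted in the text is included.

# (arrival port, incoming VPI, incoming VCI) -> (outgoing port, new VPI, new VCI)
switching_table = {(1, 153, 67): (4, 140, 92)}

def switch_cell(port, vpi, vci, payload):
    out_port, new_vpi, new_vci = switching_table[(port, vpi, vci)]
    return out_port, new_vpi, new_vci, payload   # header fields rewritten, 48-byte payload untouched

print(switch_cell(1, 153, 67, bytes(48))[:3])    # (4, 140, 92)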

[Figure: ATM switching: the switching table holds the entry (port 1, VPI 153, VCI 67) → (port 4, VPI 140, VCI 92); a cell arriving on port 1 with VPI 153 and VCI 67 leaves port 4 with VPI 140 and VCI 92.]

[Figure: an ATM cell: a 5-byte header (carrying the VPI and VCI) followed by 48 bytes of payload data.]

V- ATM Layers


• The ATM standard defines three layers: the application adaptation layer (AAL), the ATM layer,

and the physical layer.

• The endpoints use all three layers while the switches use only the two bottom layers.

1- Physical Layer

• Like Ethernet and wireless LANs, ATM cells can be carried by any physical layer carrier.

• However, the original design of ATM was based on SONET as the physical layer carrier.

• SONET is preferred for two reasons. First, the high data rate of SONET's carrier reflects the design

and philosophy of ATM. Second, in using SONET, the boundaries of cells can be clearly defined.

• SONET specifies the use of a pointer to define the beginning of a payload. If the beginning of the first ATM cell is known, the next cell starts exactly 53 bytes later, and so on.

• ATM does not limit the physical layer to SONET. Other technologies, even wireless, may be used.

However, the problem of cell boundaries must be solved.

2- ATM Layer

• The ATM layer provides routing, traffic management, switching, and multiplexing services.

• It accepts the 48-byte segments from the AAL layer and transforms them into 53-byte cells by adding a 5-byte header.

[Figure: the ATM protocol stack: endpoints implement the AAL, ATM, and physical layers; switches implement only the ATM and physical layers.]

[Figure: the AAL layer exists in four versions: AAL1, AAL2, AAL3/4, and AAL5.]


Header Format

• ATM uses two formats for the header, one for user-to-network interface (UNI) cells and another

for network-to-network interface (NNI) cells.

• GFC: Generic flow control. The 4-bit GFC field provides flow control at the UNI level. The ITU-T

has determined that this level of flow control is not necessary at the NNI level, so more bits are

added to the VPI. The longer VPI allows more virtual paths to be defined at the NNI level.

• VPI: Virtual path identifier (8 bits in a UNI cell and 12 bits in an NNI cell).

• VCI: Virtual circuit identifier (16-bits).

• PT: Payload type (3 bits). The first bit defines the payload as user data or managerial information.

The interpretation of the last 2 bits depends on the first bit.

• CLP: Cell loss priority (1 bit). This field is provided for congestion control. A cell with its CLP bit set to 1 may be discarded during congestion, while cells with a CLP of 0 should be retained as long as possible.

• HEC: Header error correction (8 bits). The HEC is a code computed over the first 4 bytes of the header. It is a CRC-8 with the divisor x^8 + x^2 + x + 1 that is used to correct single-bit errors and detect a large class of multiple-bit errors.
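As an illustration (not part of the original notes), the HEC can be computed with a straightforward bitwise CRC-8 using the generator x^8 + x^2 + x + 1 over the first 4 header bytes, as in the Python sketch below; the ITU-T recommendation additionally combines the remainder with a fixed pattern, which is omitted here, and the header bytes used are placeholders.

def hec(header4):
    # CRC-8, generator x^8 + x^2 + x + 1 (0x07), computed MSB-first over the 4 bytes.
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

first_four = bytes([0x00, 0x00, 0x00, 0x28])   # placeholder GFC/VPI/VCI/PT/CLP bytes
print(hex(hec(first_four)))                     # value of the fifth header byte (before any offset)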

[Figure: the UNI cell header (5 bytes): GFC, VPI, VCI, PT, CLP, HEC, followed by the 48-byte payload; the NNI cell header (5 bytes): a longer VPI, VCI, PT, CLP, HEC, followed by the 48-byte payload.]

[Figure: the AAL layer hands 48-byte segments to the ATM layer, which prepends the 5-byte header.]

3- Application Adaptation Layer


• The application adaptation layer (AAL) was designed to enable two ATM concepts.

• First, ATM must accept any type of payload, both data frames and streams of bits.

• A data frame can come from an upper-layer protocol (e.g., Internet) that creates a clearly defined

frame to be sent to a carrier network such as ATM.

• ATM must also carry multimedia payload. It can accept continuous bit streams and break them

into chunks to be encapsulated into a cell at the ATM layer.

• AAL uses two sublayers to accomplish these tasks.

• Whether the data are a data frame or a stream of bits, the payload must be segmented into 48-byte

segments to be carried by a cell. At the destination, these segments need to be reassembled to

recreate the original payload.

• The AAL defines a sublayer, called a segmentation (at the source) and reassembly (at the

destination): the SAR sublayer.

• Before data are segmented by SAR, they must be prepared to guarantee the integrity of the data.

This is done by a sublayer called the convergence sublayer (CS).

• ATM defines four versions of the AAL: AALl, AAL2, AAL3/4, and AAL5, but the common

versions today are AAL1, AAL2 and AAL5.

• AAL1 and AAL2 are used in streaming audio and video communication.

• AAL5 is used in data communications.

a- AAL1

• AAL1 supports applications that transfer information at constant bit rates, such as video and voice.

It allows ATM to connect existing digital telephone networks such as voice channels and T-lines.

[Figure: where ATM fits in the protocol stack: applications (data, images, VoIP, …) run over TCP/UDP and IP, which in turn run over data link technologies such as Ethernet 802.3, Token Ring 802.5, Wireless 802.11, or AAL5/ATM; voice (telephones, cell phones, …) uses AAL1 and AAL2 over ATM; the physical layer can be SONET, fiber, UTP, wireless, T-1, or T-3.]


• The bit stream of data is chopped into 47-byte chunks and encapsulated in cells. The CS sublayer

divides the bit stream into 47-byte segments and passes them to the SAR sublayer below. Note that

the CS sublayer does not add a header. The SAR sublayer adds 1 byte of header and passes the 48-

byte segment to the ATM layer.

• The SAR header has two subfields:

• SN: sequence number (4 bits). This field defines a sequence number to order the bits. The first bit

is sometimes used for timing, which leaves 3 bits for sequencing (modulo 8).

• SNP: sequence number protection (4 bits). This field protects the first SN field. The first 3 bits

automatically correct the SN field. The last bit is a parity bit that detects error over all 8 bits.

b- AAL2

• Originally AAL2 was intended to support a variable-data-rate bit stream, where usually a

compressed format is involved.

• It is now used for low-bit-rate traffic and short-frame traffic such as audio. A good example of

AAL2 use is in mobile telephony. AAL2 allows the multiplexing of short frames into one cell.

• The CS layer overhead consists of five fields.

[Figure: AAL2 encapsulation: short packets (1-45 bytes) from the upper layer each receive a 3-byte CS header, are packed (and padded) into 47-byte units, receive a 1-byte SAR header to form 48-byte segments, and the ATM layer adds the 5-byte cell header.]

[Figure: AAL1 encapsulation: the digitized voice or video bit stream is divided by the CS sublayer into 47-byte segments; the SAR sublayer adds a 1-byte header, and the ATM layer adds the 5-byte cell header.]


• CID: Channel identifier (8 bits). This field defines the channel (user) of the short packet.

• LI: Length indicator (6 bits). This field indicates how much of the packet is data. The value is one less than the length of the packet payload, which has a maximum of 45 bytes (may be set to 64 bytes).

• PPT: Packet payload type (2 bits). This field defines the type of packet (dialed digits, alarm, …).

• UUI: User-to-user indicator (3 bits). This field can be used by end-to-end users.

• HEC: Header error control (5 bits). This field is used to correct errors in the header.

• The SAR layer overhead consists of three fields:

• OSF: Offset field (6 bits). Identifies the location of the start of the next CS packet within the SAR

packet.

• SN: Sequence number (1 bit). Protects data integrity.

• P: Parity (1 bit). Protects the start field from errors.


c- AAL3/4

• Initially, AAL3 was intended to support connection-oriented data services and AAL4 to support

connectionless services. But they have been combined into a single format called AAL3/4 to

handle both cases.

• The CS layer header consists of three fields:

• CPI: Common part identifier (8 bits). The CPI defines how the subsequent fields are to be

interpreted. The value at present is 0.

• Btag: Begin tag (8 bits). The value of this field is repeated in each cell to identify all the cells

belonging to the same packet. The value is the same as the Etag.

• BAsize: Buffer allocation size (16 bits). The BA field tells the receiver what size buffer is needed

for the coming data.

• The CS layer trailer consists of three fields:

• AL: Alignment (8 bits). The AL field is included to make the trailer 4 bytes long (why?).

• Etag: Ending tag (8 bits). The Etag field serves as an ending flag. Its value is the same as that of

the beginning tag.

• L: Length (16 bits). The L field indicates the length of the data unit.

• The SAR header consists of three fields:

• ST: Segment type (2 bits). The ST identifier specifies the position of the segment in the message:

beginning (00), middle (01), or end (10). A single-segment message has an ST of 11.

• SN: Sequence number (4 bits). This field defines the sequence order of the segments.

• MID: Multiplexing identifier (10 bits). The MID field identifies cells coming from different data

flows and multiplexed on the same virtual connection.

• The SAR trailer consists of two fields:

• LI: Length indicator (6 bits). This field defines how much of the packet is data, not padding.

• CRC: The last 10 bits of the trailer is a CRC for the entire data unit.

[Figure: AAL3/4 encapsulation: a data packet (up to 65535 bytes) receives a 4-byte CS header and a 4-byte CS trailer (plus padding), is segmented into 44-byte units, each given a 2-byte SAR header and a 2-byte SAR trailer to form 48-byte segments, and the ATM layer adds the 5-byte cell header.]


d- AAL5

• AAL3/4 provides sequencing and error control mechanisms that are not necessary for every

application. For these applications, the designers of ATM have provided a fifth AAL sublayer,

called the simple and efficient adaptation layer (SEAL).

• AAL5 assumes that all cells belonging to a single message travel sequentially and that control

functions (error control) are included in the upper layers of the sending application.

• The CS layer trailer consists of four fields:

• UU: User-to-user (8 bits). This field is used by end users.

• CPI: Common part identifier (8 bits). The CPI defines how the subsequent fields are to be

interpreted. The value at present is 0. Other values can be used in the future.

• L: Length (16 bits). The L field indicates the length of the original data, not padding.

• CRC: The last 4 bytes is for error control on the entire data unit.
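As an illustration (not part of the original notes), the AAL5 convergence-sublayer step can be sketched in Python: pad the data so that the data plus the 8-byte trailer fills a whole number of 48-byte segments, append the trailer (UU, CPI, L, CRC), and cut the result into 48-byte segments for the ATM layer. The CRC call below is only a stand-in (zlib.crc32), not necessarily the exact AAL5 CRC-32 convention.

import struct, zlib

def aal5_cs(data, uu=0, cpi=0):
    pad_len = (-(len(data) + 8)) % 48                  # pad so the total is a multiple of 48
    body = data + b"\x00" * pad_len
    trailer = struct.pack("!BBH", uu, cpi, len(data))  # UU, CPI, L (length of the original data)
    crc = zlib.crc32(body + trailer) & 0xFFFFFFFF      # stand-in for the AAL5 CRC-32
    pdu = body + trailer + struct.pack("!I", crc)
    return [pdu[i:i + 48] for i in range(0, len(pdu), 48)]

print([len(s) for s in aal5_cs(b"x" * 100)])           # three 48-byte segments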

[Figure: AAL5 encapsulation: a data packet (up to 65535 bytes) is padded and given an 8-byte CS trailer, segmented into plain 48-byte units by the SAR sublayer, and the ATM layer adds the 5-byte cell header.]


e- Comparison

Item | AAL1 | AAL2 | AAL3/4 | AAL5
Connection mode | Connection-oriented | Connection-oriented | Connection-oriented / Connectionless | Connectionless
Traffic | Constant bit rate (low) | Variable bit rate (high) | Variable bit rate | Variable bit rate
Timing dependency | High | High | Low | Low
Type of application | Interactive (real-time) audio and video transmission | Compressed audio and video transmission | Data | Data
Streaming (video and audio) | Yes | Yes | No | No
Packets (data units) | No | Yes (audio, video) | Yes | Yes
CS header | No | Yes | Yes | No
CS trailer | No | No | Yes | Yes
SAR header | Yes | Yes | Yes | No
SAR trailer | No | No | Yes | No
Padding (single) | No | No | Yes | Yes
Error control (header) | Yes (only SN) | Yes | No | No
Error control (entire data unit) | No | No | Yes | No
Sequencing | Yes (modulo 8) | No | Yes (global) | No
Cell efficiency | 87% (47/53) | 21-83% if no padding (11 to 44 / 53) | 83% if no padding (44/53) | 91% if no padding (48/53)


V- ATM LANs

• ATM is mainly a wide-area network (WAN ATM).

• However, the technology can be adapted to local-area networks (ATM LANs).

• The high data rate of ATM (155 and 622 Mbps) has attracted the attention of designers who are

looking for greater and greater speeds in LANs.

• The ATM technology supports different types of connections between two end users: permanent

and temporary connections, multimedia communication with a variety of high bandwidths, data

file transfer, etc.

1- ATM LAN Architecture

• The ATM technology is incorporated in LAN architecture in two different ways: pure ATM LAN

or legacy ATM LAN.

2- Pure ATM Architecture

• In a pure ATM LAN, an ATM switch is used to connect the stations in a LAN, in exactly the same

way stations are connected to an Ethernet switch.

• In this way, stations can exchange data at one of the two standard rates of ATM technology (155 and 622 Mbps).

• Since ATM operates mainly at the data link layer, a station uses the VPI and VCI instead of source and destination physical addresses.

[Figure: a pure ATM LAN: stations connected directly to an ATM switch.]

[Figure: ATM LAN architectures: pure ATM LAN, legacy ATM LAN, and mixed architecture.]


• The major drawback of this architecture is that the system needs to be built from the ground up;

existing LANs are totally replaced.

3- Legacy LAN Architecture

• In legacy LAN architecture, the ATM technology is used as a backbone to connect traditional

LANs.

• Stations on the same LAN (Ethernet, Token Ring, …) can exchange data using the frame format and data rate of their traditional LAN. When two stations on two different LANs need to exchange data, the frames pass through a converting device that changes the frame format.

• Several LANs can be multiplexed together to create a high-data-rate input to the ATM switch.

4- Mixed Architecture

• The mixed architecture is a hybrid between the previous two architectures.

• This means keeping the existing LANs and, at the same time, allowing new stations to be directly

connected to an ATM switch.

Figure: ATM used as a backbone: Ethernet and Token Ring LANs and their stations are attached to the ATM switch through converters.


• The mixed architecture LAN allows the gradual migration of legacy LANs onto ATM LANs.

• The stations in one specific LAN can exchange data using the same format and data rate of that

particular LAN. The stations directly connected to the ATM switch can use an ATM frame to

exchange data. But the problem remains when a station in a traditional LAN communicates with a

station directly connected to the ATM switch.

5- LAN Emulation (LANE)

• Many problems arose when the ATM technology was incorporated in LANs.

• Connectionless versus connection-oriented: Traditional LANs, such as Ethernet, are connectionless

protocols. A station sends data packets to another station whenever the packets are ready. There is

no connection establishment or connection termination phase. On the other hand, ATM is a

connection-oriented protocol; a station that wishes to send cells to another station must first

establish a connection and terminate the connection afterwards.

• Physical addresses versus virtual-circuit identifiers: Some protocols, such as Ethernet, define the

route of a packet through source and destination physical addresses. However, ATM protocol

defines the route of a cell through virtual connection identifiers (VPIs and VCIs).

• Multicasting and broadcasting delivery: Traditional LANs, such as Ethernet, can both multicast (to

a specific group of stations) and broadcast (to all stations) frames. ATM, however, is connection-oriented; multicasting and broadcasting are hard to implement because of the high cost of multiple connection setup and termination phases.

• Solution: An approach called local-area network emulation (LANE) solves the above-mentioned

problems and allows stations in a mixed architecture to communicate with one another.

Figure: Mixed architecture ATM LAN: ATM stations are connected directly to the ATM switch, while Ethernet and Token Ring stations reach the switch through converters.


• In network simulation, a machine simulates multiple stations and the network that connects them. In network emulation, the stations are real, but part of the network, or the interaction between these stations, is simulated or replaced by software.

• The LANE approach uses emulation. Stations are offered a connectionless service that is emulated on top of ATM's connection-oriented service: they use source and destination addresses to set up the initial connection and then use VPI and VCI addressing.

• The approach allows stations to use unicast, multicast, and broadcast addresses.

• Finally, the approach converts frames using a legacy format to ATM cells before they are sent

through the switch.

a- Client/Server Model

• LANE is designed as a client/server model to handle the previously discussed problems.

• The protocol uses one type of client and three types of servers.

b- LAN Emulation Client

• All ATM stations have LAN emulation client (LEC) software installed on top of the three ATM protocol layers (AAL, ATM, and physical).

• The upper-layer protocols (that use ATM) send their requests to LEC for a LAN service such as

connectionless delivery using MAC unicast, multicast, or broadcast addresses. The LEC interprets

the request and passes the result on to the servers.

c- LAN Emulation Configuration Server

• The LAN emulation configuration server (LECS) is used for the initial connection between the

client and the LANE.

• This server is always waiting to receive the initial contact. It has a well-known ATM address that

is known to every client in the system.

d- LAN Emulation Server (Unicast)

• LAN emulation server (LES) software is installed on a dedicated server, the LES.

• When a station receives a frame to be sent to another station using a physical address, LEC sends a

special frame to the LES. The server creates a virtual circuit between the source and the destination

station. The source station can now use this virtual circuit (and the corresponding identifier) to

send the frame or frames to the destination.

e- Broadcast/Unknown Server (Multicast, Broadcast)

• Multicasting and broadcasting require the use of another server called the broadcast unknown

server (BUS).

• If a station needs to send a frame to a group of stations or to every station, the frame first goes to

the BUS, which has permanent virtual connections to every station.


• The server creates copies of the received frame and sends a copy to a group of stations or to all

stations, simulating a multicasting or broadcasting process.

f- Mixed Architecture with Client/Server

• The next figure shows clients and servers in a mixed architecture ATM LAN.

• The LECS, LES, and BUS servers are connected to the ATM switch, and can actually be part of

the switch.

• ATM stations are designed to send and receive LANE communication.

• Traditional LAN stations are connected to the switch via converters. These converters act as LEC

clients and communicate on behalf of their connected stations.

Figure: Clients and servers in a mixed architecture ATM LAN. The LECS, LES, and BUS servers and the ATM stations each run their LANE software (LECS, LES, BUS, or LEC) on top of the AAL, ATM, and physical layers, below the upper layers. The converters run LEC software on top of the ATM stack on the switch side and a data link/physical stack on the legacy LAN side; Ethernet and Token Ring stations reach the ATM switch through these converters.


Ethernet, Fast Ethernet, and Gigabit Ethernet

I- Introduction:

• The LAN market has seen several technologies such as Ethernet, Token Ring, Token Bus, FDDI,

and ATM LAN.

• Some of these technologies survived for a while, but Ethernet is by far the dominant technology.

• Although Ethernet has gone through a four-generation evolution during the last few decades, the

main concept has remained.

1- The IEEE Standard

• In 1985, the Computer Society of the IEEE started a project, called Project 802, to set standards to

enable intercommunication among equipment from a variety of manufacturers.

• Project 802 is a way of specifying functions of the physical layer and the data link layer of major

LAN protocols.

• In 1987, the International Organization for Standardization (ISO) also approved it as an

international standard under the designation ISO 8802.

• The IEEE has subdivided the data link layer into two sublayers: logical link control (LLC) and

media access control (MAC).

• The LLC sublayer handles the problems of error and flow control. The MAC sublayer handles physical addressing, frame formatting, and access to the shared medium.

• IEEE has also created several physical layer standards for different LAN protocols.

2- Ethernet Generations:

• The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center (PARC).

• Four generations have evolved: Standard Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), and Ten-Gigabit Ethernet (10 Gbps).

• Ethernet is known as the IEEE 802.3 standard.

Figure: The IEEE standard compared with the ISO/Internet model: the data link layer is divided into a common LLC sublayer and protocol-specific MAC sublayers (Ethernet, Token Ring, Token Bus, …), each with its own physical layer.


II- STANDARD ETHERNET

1- Ethernet Topology

• The initial Ethernet topology was a bus, in which each station is connected to a shared medium. Collisions may occur since there is no access control mechanism (such as tokens).

2- MAC Sublayer

• The MAC sublayer is responsible for placing data received from the upper layer into frames and passing them to the physical layer.

• Ethernet does not provide any mechanism for acknowledging received frames, making it what is

known as an unreliable medium. Acknowledgments must be implemented at the higher layers.

• The Ethernet frame contains seven fields.

• Preamble (7 bytes): It contains 56 bits of alternating 0s and 1s that alerts the receiving system to

the coming frame and enables it to synchronize its input timing. The preamble is actually added at

the physical layer and is not (formally) part of the frame.

• Start frame delimiter (SFD) (1 byte): It is formed of the sequence 10101011, which signals the beginning of the frame. Like the preamble, the SFD warns the station or stations that this is the last chance for synchronization; the last two 1s announce that the next field is the destination address.

Figure: Ethernet frame format: Preamble (7 bytes), SFD (1 byte), Destination Address (6 bytes), Source Address (6 bytes), Length or Type (2 bytes), Data and Padding (min 46 bytes, max 1500 bytes), CRC (4 bytes).

Figure: In Standard Ethernet the shared medium forms a single collision domain.

Figure: Ethernet generations and data rates: Standard Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), 10 Gigabit Ethernet (10 Gbps).

• Destination address (DA) (6 bytes). The DA field is 6 bytes and contains the physical address of

the destination station or stations to receive the packet.

• Source address (SA) (6 bytes). The SA field is also 6 bytes and contains the physical address of the

sender of the packet.

• Length or type (2 bytes). This field is defined as a type field or length field. The original Ethernet

used this field as the type field to define the upper-layer protocol using the MAC frame (IP, ARP,

…). The IEEE standard used it as the length field to define the number of bytes in the data field.

• Data (46 to 1500 bytes). This field carries data encapsulated from the upper-layer protocols.

• CRC (4 bytes). The last field contains error detection information, in this case a CRC-32.

3- Frame Length

• Ethernet has imposed restrictions on both the minimum and maximum lengths of a frame.

• The minimum length restriction is required for the correct operation of CSMA/CD (Carrier Sense Multiple Access with Collision Detection).

• An Ethernet frame needs to have a minimum length of 512 bits or 64 bytes: 18 bytes for the header and the trailer, and 46 bytes for the data coming from the upper layer. If the data length is less than 46 bytes, padding is added.

• The minimum length of a frame depends on the maximum length of the network: the maximum

distance between two stations in the LAN.

Figure: CSMA/CD flowchart: apply the persistence strategy, set the backoff counter N to zero, and send the frame. If no collision is detected, the transmission succeeds. On a collision, send a jam signal and increment N; if N has reached the backoff limit, abort (failure); otherwise wait a random backoff time between 0 and 2^N x PD (propagation delay) and try again.
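• The flowchart above can be written out as a small Python sketch (illustrative only; the backoff limit of 15 attempts and the random wait drawn between 0 and 2^N slot times are assumptions taken from the flowchart, and transmit / collision_detected stand in for hardware operations):

    import random, time

    def csma_cd_send(transmit, collision_detected, slot_time_s, backoff_limit=15):
        # Exponential backoff loop of CSMA/CD (the persistence strategy is assumed
        # to have been applied before each transmission attempt).
        n = 0                                   # backoff counter N
        while True:
            transmit()                          # send the frame
            if not collision_detected():
                return True                     # success
            # a jam signal would be sent here
            n += 1
            if n > backoff_limit:
                return False                    # failure
            # wait a random backoff time between 0 and 2^N slot times
            time.sleep(random.randint(0, 2 ** n) * slot_time_s)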


• In an Ethernet network, the round-trip time required for a frame to travel from one end of a maximum-

length network (theoretically 5120 m, practically 2500 m) to the other plus the time needed to send the

jam sequence is called the slot time.

• Slot Time = round-trip time + time required to send the jam sequence.

• During the slot time, the station is allowed to send bits since the echo of any possible collision cannot

arrive before this time. So how many bits can a station send before it senses a collision?

Figure: Collision scenario: Station 1 senses an idle link and sends a frame; while the frame is still propagating, Station 2 also senses an idle link and sends a frame; the two frames collide, the collision signal propagates back to Station 1, and Station 1 sends a jam signal to all stations.


• Maximum Network Length (m) = Propagation Speed (m/s) x (Slot Time / 2), with a standard propagation speed of 2 x 10^8 m/s.

• So the Slot Time = 2 x Maximum Network Length / Propagation Speed = 2 x 5120 / (2 x 10^8) s.

• The Ethernet Slot Time = 51.2 µs.

• So, with the traditional Ethernet (10 Mbps), during a slot time of 51.2 µs, a station can send 512 bits

before it can sense a collision.

• Therefore, the minimum frame length is 512 bits (64 bytes).
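• The same arithmetic, written out as a small illustrative calculation with the values used above:

    max_length_m = 5120      # theoretical maximum network length in meters
    prop_speed   = 2e8       # propagation speed in m/s
    bit_rate     = 10e6      # Standard Ethernet data rate: 10 Mbps

    slot_time_s    = 2 * max_length_m / prop_speed    # round-trip time: 51.2 microseconds
    min_frame_bits = round(slot_time_s * bit_rate)    # bits sent during one slot time

    print(round(slot_time_s * 1e6, 1), "microseconds")                # 51.2
    print(min_frame_bits, "bits =", min_frame_bits // 8, "bytes")     # 512 bits = 64 bytes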

• The IEEE 802.3 standard defines the maximum length of a frame (without preamble and SFD

field) as 1518 bytes: 18 bytes of header and trailer, and a maximum length of 1500 bytes for the

payload. If the payload length is more than 1500 bytes, a segmentation procedure is done at the

upper layer.

• The maximum length was set to 1500 bytes because memory (or buffer to load in the frame) was

very expensive. The second reason is to prevent a station from monopolizing the shared medium,

blocking other stations that have data to send. The third reason was to reduce the risk of erroneous frames: the larger the frame, the greater the chance that it is damaged by noise.

• So, if a station sends the first 512 bits and senses no collision, it can send the rest of the frame with no fear of collision, because after the time needed to send 512 bits, it is guaranteed that all stations have sensed the carrier and have backed off to avoid a collision.

4- Addressing

• Each station on an Ethernet network (such as a PC, workstation, or printer) has its own network

interface card (NIC).

• The NIC fits inside the station and provides the station with a 6-byte physical address (e.g.,

64-31-50-57-2E-C3). A station might have more than one network interface card. For instance,

with new laptops, an Ethernet NIC and a Wireless NIC are provided, each with a different physical

address.

5- Unicast, Multicast, and Broadcast Addresses

• A source address is always a unicast address, i.e., the frame comes from only one station.

• The destination address, however, can be unicast, multicast, or broadcast.

• A unicast address has a 0 as the least significant bit of the first byte: (e.g., 64-31-50-57-2E-C3).

This address is assigned to one station only.

• A multicast address has a 1 as the least significant bit of the first byte (e.g., 65-31-50-57-2E-C3). This address is assigned to a specific group of stations inside the network.

• The only broadcast address in Ethernet is the address with 48 ones: FF-FF-FF-FF-FF-FF. A frame with this destination address must arrive at every station in the network.
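• A destination address can therefore be classified by looking at the least significant bit of its first byte; the small helper below is only an illustration, not part of any standard API:

    def classify(mac: str) -> str:
        # Classify an Ethernet destination address written as 'xx-xx-xx-xx-xx-xx'.
        octets = [int(b, 16) for b in mac.split("-")]
        if all(b == 0xFF for b in octets):
            return "broadcast"
        return "multicast" if octets[0] & 0x01 else "unicast"

    print(classify("64-31-50-57-2E-C3"))   # unicast   (first byte 0x64, LSB = 0)
    print(classify("65-31-50-57-2E-C3"))   # multicast (first byte 0x65, LSB = 1)
    print(classify("FF-FF-FF-FF-FF-FF"))   # broadcast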


Physical Layer

• The Standard Ethernet defines several physical layer implementations. The 4 common ones are:

10Base5, 10Base2, 10BaseT, and 10BaseF.

Encoding and Decoding

• All standard implementations use digital signaling (baseband) at 10 Mbps.

• At the sender, data are converted to a digital signal using the Manchester scheme; at the receiver,

the received signal is interpreted as Manchester and decoded into data.

• Manchester encoding is self-synchronous, providing a transition at each bit interval.

6- 10Base5: Thick Ethernet

• The first implementation is called 10Base5, thick Ethernet, or Thicknet. The coaxial cable was of

the size of a garden hose and too hard to bend with your hands.

• 10Base5 was the first Ethernet specification to use a bus topology with an external transceiver

(transmitter/receiver) connected via a tap to a thick coaxial cable.

Figure: Encoding in Standard Ethernet: the sender Manchester-encodes the data onto the medium (UTP or fiber) and the receiver decodes the Manchester signal back into data.

Figure: Standard Ethernet (10 Mbps) implementations: 10Base5 (bus topology, thick coaxial cable), 10Base2 (bus topology, thin coaxial cable), 10BaseT (star topology, UTP), and 10BaseF (star topology, fiber).


• The transceiver is responsible for transmitting, receiving, and detecting collisions.

• The transceiver is connected to the station via a transceiver cable that provides separate paths for

sending and receiving. This means that collision can only happen in the coaxial cable.

• The maximum length of the coaxial cable must not exceed 500 m, otherwise, there is excessive

degradation of the signal.

• If a length of more than 500 m is needed, repeaters can be used to connect up to five segments,

each of 500 m.

7- 10Base2: Thin Ethernet

• 10Base2, thin Ethernet, or Cheapernet, is the second implementation of Ethernet.

• The cable is much thinner and more flexible.

• The transceiver is normally part of the network interface card (NIC), which is installed inside the

station.

Figure: 10Base2 (10 Mbps, baseband/digital): NICs attach through T connectors to a thin coaxial cable segment of at most 185 meters, terminated by cable ends.

Figure: 10Base5 (10 Mbps, baseband/digital): stations attach through transceivers and transceiver cables (max length 50 meters) tapped onto a thick coaxial cable segment of at most 500 meters, terminated by cable ends.


• Thin coaxial cable is less expensive than thick coaxial and the T connections are much cheaper

than taps.

• Installation is simpler because the thin coaxial cable is very flexible.

• The length of each segment cannot exceed 185 m (close to 200 m) due to the high level of

attenuation in thin coaxial cable.

8- 10Base-T: Twisted-Pair Ethernet

• 10Base-T or twisted-pair Ethernet is the third implementation of Ethernet.

• 10Base-T uses a physical star topology. The stations are connected to a hub via two pairs of

twisted cable (one for sending and one for receiving).

• Collision might happen in the hub only.

• The maximum length of the twisted cable here is defined as 100 m, to minimize the effect of

attenuation in the twisted cable.

9- 10Base-F: Fiber Ethernet

• 10Base-F uses a star topology to connect stations to a hub.

• The stations are connected to the hub using two fiber-optic cables.

• The maximum length is set to 2000 m.

Figure: 10Base-T (10 Mbps, baseband/digital): stations are connected to a hub by UTP cable (2 pairs), with a maximum length of 100 meters.


Characteristics 10Base-5 10Base-2 10Base-T 10Base-F

Media Thick coaxial Thin coaxial 2 UTP 2 Fibers

Maximum Length 500 meters 185 meters 100 meters 2000 meters

Line Encoding Manchester Manchester Manchester Manchester

III- CHANGES IN THE STANDARD

• The 10-Mbps Standard Ethernet has gone through several changes which opened the road to the

evolution of the Ethernet to become compatible with other high-data-rate LANs.

1- Bridged Ethernet

• Bridges have two effects on an Ethernet LAN: They raise the bandwidth and they separate

collision domains.

Figure: A network without bridging and the same network with a bridge dividing it into two segments.

Figure: 10Base-F (10 Mbps, baseband/digital): stations are connected to a hub by fiber-optic cable, with a maximum length of 2000 meters.


Raising the Bandwidth

• In an unbridged Ethernet network, the total capacity (10 Mbps) is shared among all stations with a

frame to send.

• If, for example, two stations have many frames to send, they alternate in usage: when one station is sending, the other refrains from sending. We can say that, in this case, each station on average sends at a rate of 5 Mbps.

• In the case of n stations, one is sending and (n - 1) are refraining; so, on average, each station sends at a rate of 10/n Mbps.

• A bridge divides the network into two or more networks.

• In a bridged network, each sub-network is independent.

• The 10-Mbps capacity in each segment is shared between the stations of the segment only.

• So, if a network with n stations is segmented into two networks, each station is theoretically offered 10/(n/2) = 20/n Mbps instead of 10/n Mbps, assuming that the traffic does not go through the bridge.

• If we further divide the network, we can gain more bandwidth for each segment. For example, if we use a four-port bridge, each station is now offered 10/(n/4) = 40/n Mbps.

Separating Collision Domains

• Another advantage of a bridge is the separation of the collision domain.

• The collision domain becomes much smaller and the probability of collision is reduced

tremendously.

• In an unbridged network with n stations, the probability of a collision at time t is 1 - [(1 - P)^n + nP(1 - P)^(n-1)], where P is the probability that one station sends a frame at time t.

• In a bridged network of two balanced segments (n/2 stations each), the probability of a collision at time t in a segment is 1 - [(1 - P)^(n/2) + (n/2)P(1 - P)^(n/2-1)], which is smaller.
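• A quick numeric check of these expressions (illustrative only; the station count n = 20 and the probability P = 0.1 are made-up values):

    def p_collision(stations: int, p: float) -> float:
        # Probability that more than one of the stations sends at time t.
        no_tx  = (1 - p) ** stations
        one_tx = stations * p * (1 - p) ** (stations - 1)
        return 1 - (no_tx + one_tx)

    n, p = 20, 0.1
    print(p_collision(n, p))        # unbridged network with n stations (about 0.61)
    print(p_collision(n // 2, p))   # one segment of a bridged, balanced network (about 0.26)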

2- Switched Ethernet

• A switched Ethernet LAN is an extension of bridged Ethernet, with N networks, where N is the

number of stations.

• In this way, the bandwidth is shared only between the station and the switch (5 Mbps each).

• The collision domain is divided into N domains.


• A layer 2 (data link layer) switch is an N-port bridge with additional sophistication (reads the

frame’s header) that allows faster handling of the packets.

• One of the limitations of 10Base5 and 10Base2 is that communication is half-duplex (10Base-T is always full-duplex).

• The full-duplex mode increases the capacity of each domain from 10 to 20 Mbps.

• In full-duplex switched Ethernet, there is no need for the CSMA/CD method, because each station

is connected to the switch via two separate links.

MAC Control Layer

• In Standard Ethernet there is no explicit flow control or error control (correction) to inform the

sender that the frame has arrived at the destination without error. When the receiver receives the

frame, it does not send any positive or negative acknowledgment.

• In full-duplex switched Ethernet, a new sublayer, called the MAC control, is added between the

LLC sublayer and the MAC sublayer to handle flow and error control.

IV- FAST ETHERNET

• Fast Ethernet was designed to compete with high data rate LAN protocols such as FDDI.

• Fast Ethernet was designed with the following goals:

• 1. Upgrade the data rate to 100 Mbps.

• 2. Make it compatible with Standard Ethernet.

• 3. Keep the same 48-bit address.

Figure: Full-duplex switched Ethernet compared with the ISO/Internet model: the data link layer consists of the LLC sublayer, the MAC control sublayer, and the MAC sublayer above the Ethernet physical layer; with a switch, each station-to-switch link forms its own collision domain.


• 4. Keep the same frame format.

• 5. Keep the same minimum and maximum frame lengths.

1- MAC Sublayer

• A decision was made to drop the bus topologies and keep only the star topology.

• Two choices were left for the star topology: half duplex and full duplex.

• In the half-duplex approach, the stations are connected via a hub (no buffering, flooding all ports).

• In the full-duplex approach, the connection is made via a switch with buffers at each port.

• The access method is the same (CSMA/CD) for the half-duplex approach.

• For full-duplex mode, there is no need for CSMA/CD. However, the implementations keep

CSMA/CD for backward compatibility with Standard Ethernet.

2- Physical Layer

• Fast Ethernet is designed to connect two or more stations together. If there are only two stations,

they can be connected point-to-point. Three or more stations need to be connected in a star

topology with a hub or a switch at the center.

3- Implementation

• Fast Ethernet implementation at the physical layer can be categorized as either two-wire or four-

wire.

• The two-wire implementation (100Base-X) can be either category 5 UTP (100Base-TX), or fiber-

optic cable (100Base-FX).

• The four-wire implementation is designed only for category 3 UTP (100Base-T4).

Figure: Fast Ethernet topologies: point-to-point (two stations) or star (stations connected to a central hub or switch).


4- 100Base-TX

• 100Base-TX uses two pairs of twisted-pair cable (either category 5 UTP or STP).

• For this implementation, the MLT-3 (multi-line transmission with 3 levels) scheme was selected

since it has good bandwidth performance.

• NRZ-I and differential Manchester are classified as differential encoding but use two transition

rules to encode binary data (no inversion, inversion).

• If we have a signal with more than two levels, we can design a differential encoding scheme with

more than two transition rules.

• MLT-3 uses three levels (+v, 0, and -v) and three transition rules to move between the levels.

• 1. If the next bit is 0, there is no transition.

• 2. If the next bit is 1 and the current level is not 0 (-v or +v), the next level is 0.

• 3. If the next bit is 1 and the current level is 0, the next level is a transition to the opposite of the

last nonzero level.
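• The three rules translate directly into a small encoder (illustrative sketch; +1, 0, and -1 stand for the +V, 0, and -V levels, and the initial line state is this sketch's assumption):

    def mlt3_encode(bits):
        # Encode a bit string of '0'/'1' into MLT-3 levels (+1 = +V, 0, -1 = -V).
        level = 0            # sketch assumption: the line starts at 0 V
        last_nonzero = -1    # sketch assumption: treat -V as the last nonzero level
        out = []
        for b in bits:
            if b == "1":
                if level != 0:               # rule 2: from +V or -V, the next level is 0
                    last_nonzero, level = level, 0
                else:                        # rule 3: from 0, go to the opposite of
                    level = -last_nonzero    #         the last nonzero level
            # rule 1: if the bit is 0, there is no transition (level unchanged)
            out.append(level)
        return out

    print(mlt3_encode("01011011100110"))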

Figure: MLT-3 signal (amplitude +V, 0, -V over time) for the bit sequence 0 1 0 1 1 0 1 1 1 0 0 1 1 0.

Figure: Fast Ethernet implementations: 100Base-X, comprising 100Base-TX (2 wires, Cat 5 UTP) and 100Base-FX (2 wires, fiber), and 100Base-T4 (4 wires, Cat 3 UTP).


• However, since MLT-3 is not a self-synchronous line coding scheme, 4B/5B block coding is used

to provide bit synchronization by preventing the occurrence of a long sequence of 0s and 1s.

Data Sequence (4 bits)   Encoded Sequence (5 bits)      Control                Encoded Sequence (5 bits)
0000                     11110                          Q (Quiet)              00000
0001                     01001                          I (Idle)               11111
0010                     10100                          H (Halt)               00100
0011                     10101                          J (start delimiter)    11000
0100                     01010                          K (start delimiter)    10001
0101                     01011                          T (end delimiter)      01101
0110                     01110                          S (Set)                11001
0111                     01111                          R (Reset)              00111
1000                     10010
1001                     10011
1010                     10110
1011                     10111
1100                     11010
1101                     11011
1110                     11100
1111                     11101
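• Using the data half of the table, 4B/5B encoding is a simple lookup, as in this illustrative sketch:

    FOUR_B_FIVE_B = {
        "0000": "11110", "0001": "01001", "0010": "10100", "0011": "10101",
        "0100": "01010", "0101": "01011", "0110": "01110", "0111": "01111",
        "1000": "10010", "1001": "10011", "1010": "10110", "1011": "10111",
        "1100": "11010", "1101": "11011", "1110": "11100", "1111": "11101",
    }

    def encode_4b5b(bits: str) -> str:
        # Encode a bit string (length a multiple of 4) into its 5-bit code groups.
        return "".join(FOUR_B_FIVE_B[bits[i:i + 4]] for i in range(0, len(bits), 4))

    print(encode_4b5b("00001111"))   # '1111011101': 8 data bits become 10 line bits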

Figure: 100Base-TX encoding scheme: the 100-Mbps stream (4 x 25 Mbps) passes through a 4B/5B encoder (125 Mbps) and then an MLT-3 encoder onto the UTP wire; the receiver applies an MLT-3 decoder followed by a 4B/5B decoder.


5- 100Base-FX

• 100Base-FX uses two pairs of fiber-optic cables.

• Optical fiber can easily handle high bandwidth requirements by using simple encoding schemes.

• 100Base-FX selected the NRZ-I encoding scheme for this implementation.

• NRZ-I has a synchronization problem for long sequences of 0s (or 1s, based on the encoding).

• To overcome this problem, the designers used 4B/5B block encoding. The block encoding

increases the bit rate from 100 to 125 Mbps, which can easily be handled by fiber-optic cable.

6- 100Base-T4

• A 100Base-TX network can provide a data rate of 100 Mbps, but it requires the use of category 5

UTP or STP cable.

• A new efficient standard, called 100Base-T4, was designed to use category 3 or higher UTP. The

implementation uses four pairs of UTP for transmitting 100 Mbps.

• Category 3 UTP cannot handle more than 25 Mbaud.

• Two pairs of UTP are unidirectional (one for sending, the other for receiving); the remaining two pairs are bidirectional.

• Three pairs are therefore available in each direction, which together can handle 75 Mbaud (25 Mbaud each). We need an encoding scheme that converts 100 Mbps into a 75-Mbaud signal.

Figure: 8B/6T example: the 8 binary bits 00011111 are encoded as the 6 ternary symbols -1 +1 0 +1 -1 0.

Figure: 100Base-FX encoding scheme: the 100-Mbps stream (4 x 25 Mbps) passes through a 4B/5B encoder (125 Mbps) and an NRZ-I encoder onto the fiber; the receiver applies an NRZ-I decoder followed by a 4B/5B decoder.


Data Sequence (8 bits)   Encoded Sequence (6 ternary signals)
00000000                 -+00-+
00000001                 0-+-+0
...                      ...
11111111                 00+-0+

• The 8B/6T encoding satisfies this requirement. In 8B/6T, eight bits are encoded as six signal elements (bauds), so a data rate of 100 Mbps requires only (6/8) x 100 = 75 Mbaud.

Characteristics 100Base-TX 100Base-FX 100Base-T4

Media CAT 5 UTP Fiber CAT 3 UTP

Number of Wires 2 2 4

Maximum Length 100 meters 100 meters 100 meters

Block Encoding 4B/5B 4B/5B

Line Encoding MLT-3 NRZ-I 8B/6T

V- GIGABIT ETHERNET

• The IEEE committee defined a new standard for Gigabit Ethernet, called IEEE 802.3z. The goals of the Gigabit Ethernet design can be summarized as follows:

• 1. Upgrade the data rate to 1 Gbps.

• 2. Make it compatible with Standard and Fast Ethernet.

• 3. Use the same 48-bit address.

Figure: 100Base-T4 encoding scheme: the 100-Mbps stream passes through an 8B/6T encoder and is carried over four pairs of Cat 3 UTP (one sending pair, one receiving pair, and two bidirectional pairs), with an 8B/6T decoder at the receiver.


• 4. Use the same frame format.

• 5. Keep the same minimum and maximum frame lengths.

• 6. Support auto-negotiation (compatibility checking) as defined in Fast Ethernet.

1- MAC Sublayer

• Gigabit Ethernet has two distinctive approaches for medium access: half-duplex and full-duplex.

• Almost all implementations of Gigabit Ethernet follow the full-duplex approach; the half-duplex mode is kept mainly for compatibility with the previous generations.

Full-Duplex Mode:

• In full-duplex mode, there is a central switch connected to all computers or other switches.

• In this mode, each switch has buffers for each input port in which data are stored until they are

transmitted.

• There is no collision in this mode, which means that CSMA/CD is not used.

Half-Duplex Mode:

• For compatibility with previous generations, Gigabit Ethernet can also be used in half-duplex

mode, although it is rare.

• In this case, a switch can be replaced by a hub, which acts as the common cable in which a

collision might occur. The half-duplex approach uses CSMA/CD.

2- Physical Layer

• Gigabit Ethernet is designed to connect two or more stations. If there are only two stations, they

can be connected point-to-point. Three or more stations need to be connected in a star topology

with a hub or a switch at the center.

• Another possible configuration is to connect several star topologies.


3- Implementation

• Gigabit Ethernet can be categorized as either a two-wire or a four-wire implementation.

• The two-wire implementations use fiber-optic cable (1000Base-SX, short-wave, or 1000Base-LX, long-wave) or STP (1000Base-CX).

• The four-wire version uses category 5 twisted-pair cable (1000Base-T).

• 1000Base-T was designed to be compatible with users who had already installed this wiring for

other purposes such as Fast Ethernet or telephone services.

Encoding

• Gigabit Ethernet cannot use the Manchester encoding scheme because it involves a very high

bandwidth (2 GBaud).

Figure: Gigabit Ethernet implementations: 1000Base-SX (2 wires, short-wave fiber), 1000Base-LX (2 wires, long-wave fiber), 1000Base-CX (2 wires, copper STP), and 1000Base-T (4 wires, UTP).

Figure: Gigabit Ethernet topologies: point-to-point, star (a single switch), and several star topologies connected by switches.


• The two-wire implementations use an NRZ scheme. To synchronize bits, particularly at this high

data rate, 8B/10B block encoding is used. The resulting stream is 1.25 Gbps.

• In this implementation, one wire (fiber or STP) is used for sending and one for receiving.

• In the four-wire implementation it is not possible to have 2 wires for input and 2 for output,

because each wire would need to carry 500 Mbps, which exceeds the capacity for category 5 UTP.

As a solution, 4D-PAM5 encoding is used to reduce the bandwidth.

• All four wires are bidirectional; each wire carries 250 Mbps, which is in the range for category 5

UTP cable.

Figure: 1000Base-T encoding scheme: the 1000-Mbps stream (8 x 125 Mbps) passes through a 4D-PAM5 encoder and is carried over four pairs of Cat 5 UTP, with a 4D-PAM5 decoder at the receiver.

Figure: 1000Base-SX, 1000Base-LX, and 1000Base-CX encoding scheme: the 1000-Mbps stream (8 x 125 Mbps) passes through an 8B/10B encoder and an NRZ encoder (1.25 Gbps) onto the fiber or STP wire; the receiver applies an NRZ decoder followed by an 8B/10B decoder.


Characteristics 1000Base-SX 1000Base-LX 1000Base-CX 1000Base-T

Media Fiber (short wave) Fiber (long wave) Copper (STP) Cat5 UTP

Number of Wires 2 2 2 4

Maximum Length 550 meters 5000 meters 25 meters 100 meters

Block Encoding 8B/10B 8B/10B 8B/10B

Line Encoding NRZ NRZ NRZ 4D-PAM5

VI- TEN-GIGABIT ETHERNET

• The IEEE committee created Ten-Gigabit Ethernet and called it Standard IEEE 802.3ae.

• The goals of the Ten-Gigabit Ethernet design can be summarized as follows:

• 1. Upgrade the data rate to 10 Gbps.

• 2. Make it compatible with Standard, Fast, and Gigabit Ethernet.

• 3. Use the same 48-bit address.

• 4. Use the same frame format.

• 5. Keep the same minimum and maximum frame lengths.

• 6. Allow the interconnection of existing LANs into MANs or WANs.

• 7. Make Ethernet compatible with technologies such as Frame Relay and ATM.

1- MAC Sublayer

• Ten-Gigabit Ethernet operates only in full duplex mode which means there is no need for

contention; CSMA/CD is not used in Ten-Gigabit Ethernet.

2- Physical Layer

• The physical layer in Ten-Gigabit Ethernet is designed for using fiber-optic cable over long

distances. Three implementations are the most common: 10GBase-S, 10GBase-L, and 10GBase-E.

Characteristics    10GBase-S                       10GBase-L                        10GBase-E
Media              Short wave, 850 nm, multimode   Long wave, 1310 nm, single mode  Extended, 1550 nm, single mode
Maximum Length     300 meters                      10 km                            40 km


Quality of Service and Multimedia

• Quality of service (QoS) is an internetworking issue that has been discussed more than defined.

We can informally define quality of service as something a flow seeks to attain.

I- Flow Characteristics:

• Traditionally, four types of characteristics are attributed to a flow: reliability, delay, jitter, and

bandwidth.

1- Reliability:

• Lack of reliability means losing a packet or acknowledgment.

• The sensitivity of application programs to reliability is not the same. For example, it is more

important that email, file transfer, and Internet access have reliable transmissions than telephony or

audio conferencing.

• Reliability is guaranteed by special protocols such as TCP.

2- Delay:

• Source-to-destination delay is another flow characteristic.

• Applications can tolerate delay in different degrees. In this case, telephony, audio conferencing,

video conferencing, and remote login need minimum delay, while delay in file transfer or e-mail is

less important.

3- Jitter:

• Jitter is the variation in delay for packets belonging to the same flow. For example, if four packets

depart at times 0, 1, 2, 3 and arrive at 20, 21, 22, 23, all have the same delay, 20 units of time. On

the other hand, if the above four packets arrive at 21, 23, 21, and 28, they will have different delays: 21, 22, 19, and 25.

• For applications such as audio and video, it does not matter if the packets arrive with a short or long delay as long as the delay is the same for all packets. For such applications, the second case is not acceptable.

• High jitter means the difference between delays is large; low jitter means the variation is small. If

the jitter is high in multimedia applications, some action is needed in order to use the received

data.

4- Bandwidth:

• Different applications need different bandwidths. In video conferencing we need to send millions

of bits per second to refresh a color screen while the total number of bits in an e-mail may not

reach even a million.


II - TECHNIQUES TO IMPROVE QoS:

• We briefly discuss four common methods: scheduling, traffic shaping, admission control, and

resource reservation.

1- Scheduling:

• Packets from different flows arrive at a switch or router for processing. A good scheduling

technique treats the different flows in a fair and appropriate manner. Several scheduling techniques

are designed to improve the quality of service. We discuss three of them here: FIFO queuing,

priority queuing, and weighted fair queuing.

• Scheduling is used to guarantee services that require less delay.

1-1 FIFO Queuing:

• In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or

switch) is ready to process them.

• If the average arrival rate is higher than the average processing rate, the queue will fill up and new

packets will be discarded.

• A FIFO queue is familiar to those who have had to wait for a bus at a bus stop.

1-2- Priority Queuing:

• In priority queuing, packets are first assigned to a priority class. Each priority class has its own

queue. The packets in the highest-priority queue are processed first. Packets in the lowest-priority

queue are processed last. Note that the system does not stop serving a queue until it is empty.

Figure: FIFO queuing: arriving packets wait in a single queue (a packet arriving at a full queue is discarded); the processor serves packets in arrival order and they depart.

Figure: Four techniques to improve QoS: scheduling, traffic shaping, admission control, and resource reservation.


• A priority queue can provide better QoS than the FIFO queue because higher priority traffic, such

as multimedia, can reach the destination with less delay. However, if there is a continuous flow in

a high-priority queue, the packets in the lower-priority queues will never have a chance to be

processed. This is a condition called starvation.

1-3 Weighted Fair Queuing:

• A better scheduling method is weighted fair queuing.

• In this technique, the packets are still assigned to different classes and admitted to different queues.

The queues, however, are weighted based on the priority of the queues; higher priority means a

higher weight.

• The system processes packets in each queue in a round-robin fashion with the number of packets

selected from each queue based on the corresponding weight.

• Example: If the weights are 3, 2, and 1, three packets are processed from the first queue, two from

the second queue, and one from the third queue.
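• A minimal sketch of this weighted round-robin service (the queue contents and weights below are made up for the example):

    from collections import deque

    def wfq_serve(queues, weights, rounds=1):
        # Serve packets round-robin, taking `weight` packets from each queue per round.
        served = []
        for _ in range(rounds):
            for q, w in zip(queues, weights):
                for _ in range(w):
                    if q:
                        served.append(q.popleft())
        return served

    q3 = deque(["c1", "c2", "c3", "c4"])   # class 3, weight 3
    q2 = deque(["b1", "b2", "b3"])         # class 2, weight 2
    q1 = deque(["a1", "a2"])               # class 1, weight 1
    print(wfq_serve([q3, q2, q1], [3, 2, 1], rounds=2))
    # ['c1', 'c2', 'c3', 'b1', 'b2', 'a1', 'c4', 'b3', 'a2']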

Figure: Weighted fair queuing: a classifier assigns arriving packets to three queues with weights 3, 2, and 1 (a packet arriving at a full queue is discarded); the processor serves the queues in round-robin fashion, taking 3 packets from queue 3, 2 packets from queue 2, and 1 packet from queue 1 per round.

Figure: Priority queuing: a classifier places arriving packets into a high-priority or a low-priority queue (a packet arriving at a full queue is discarded); the processor switches to the other queue only when the current queue is empty.


2- Traffic Shaping:

• Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network.

• Traffic shaping is used to provide services for applications that require low jitter.

• Two techniques can shape traffic: leaky bucket and token bucket.

2-1 Leaky Bucket:

• If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as long

as there is water in the bucket.

• The rate at which the water leaks does not depend on the rate at which the water is input to the

bucket unless the bucket is empty.

• The input rate can vary, but the output rate remains constant.

• The leaky bucket technique can smooth out bursty traffic. Bursty chunks are stored in the bucket

and sent out at an average rate.

• Assume that the network has committed a bandwidth of 3 Mbps to a host. The leaky bucket shapes the input traffic to make it conform to this commitment.

• For example, the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data, is then silent for 5 s, and finally sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket smooths the traffic by sending out data at a rate of 3 Mbps during the same 10 s.

• Without the leaky bucket, the beginning burst may have hurt the network by consuming more

bandwidth than is set aside for this host.

• We can also see that the leaky bucket may prevent congestion.

• As an analogy, consider the freeway during rush hour (bursty traffic). If, instead, commuters could

stagger their working hours, congestion on our freeways could be avoided.

Figure: A leaky bucket turns a bursty flow (12 Mbps for 2 s, then 2 Mbps for 3 s) into a fixed-rate flow of 3 Mbps over the same interval.


Implementation

• A leaky bucket could be implemented using a FIFO queue that holds the packets.

• If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the process removes a

fixed number of packets from the queue at each tick of the clock.

• If the traffic consists of variable-length packets (e.g., IP datagrams), the fixed output rate must be

based on the number of bytes or bits.

• The following is an algorithm for variable-length packets:

1. Initialize a counter to n at the tick of the clock.

2. If n is greater than the size of the next packet, send the packet and decrement the counter by the packet size.

3. Repeat step 2 until n is smaller than the size of the next packet.

4. When the next tick arrives, reset the counter and go to step 1.

• Example: Suppose that we have 5 packets of different sizes: 1000 bytes, 2000 bytes, 3000 bytes,

2000 bytes, 2000 bytes. Let's say that we initialize n at 3000 bytes. Then, the traffic is shaped as

follows:

At tick 1: Packets 1 and 2 are sent (3000 bytes).

At tick 2: Packet 3 is sent (3000 bytes).

At tick 3: Packet 4 is sent (2000 bytes).

At tick 4: Packet 5 is sent (2000 bytes).
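• The algorithm, written out as a sketch; the packet sizes and the per-tick quota of n = 3000 bytes follow the example above:

    from collections import deque

    def leaky_bucket(queue, n_per_tick, ticks):
        # Send whole packets, at most n_per_tick bytes per clock tick.
        sent_per_tick = []
        for _ in range(ticks):
            n, sent = n_per_tick, []              # step 1: initialize the counter
            while queue and queue[0] <= n:        # steps 2-3: send while n is large enough
                size = queue.popleft()
                n -= size
                sent.append(size)
            sent_per_tick.append(sent)            # step 4: wait for the next tick
        return sent_per_tick

    packets = deque([1000, 2000, 3000, 2000, 2000])   # packet sizes in bytes
    print(leaky_bucket(packets, 3000, 4))
    # [[1000, 2000], [3000], [2000], [2000]]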

2-2 Token Bucket:

• The leaky bucket is very restrictive. It does not credit an idle host. The time when the host was idle

is not taken into account.

• The token bucket algorithm allows idle hosts to accumulate credit for the future in the form of

tokens. For each tick of the clock, the system sends n tokens to the bucket. The system removes

one token for every cell (or byte) of data sent. For example, if n is 100 and the host is idle for 100

ticks, the bucket collects 10,000 tokens. Now the host can consume all these tokens in one tick

with 10,000 cells, or the host takes 1000 ticks with 10 cells per tick.

Figure: Leaky bucket implementation: packets arrive at a bursty rate and wait in a queue (a packet arriving at a full queue is discarded); the processor removes packets from the queue at a fixed rate.


• In other words, the host can send bursty data as long as the bucket is not empty.

Implementation:

• The token bucket can easily be implemented with a counter. The counter is initialized to zero. Each time a token is added, the counter is incremented by 1. Each time a unit of data is sent, the counter is decremented by 1. When the counter is zero, the host cannot send data.
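• The counter-based implementation as a sketch (n tokens per tick, one token per cell; the class and method names are this sketch's own):

    class TokenBucket:
        def __init__(self, tokens_per_tick):
            self.tokens = 0                     # the counter is initialized to zero
            self.rate = tokens_per_tick

        def tick(self):
            self.tokens += self.rate            # n tokens are added per clock tick

        def send(self, cells):
            allowed = min(cells, self.tokens)   # no tokens left -> the host must stop
            self.tokens -= allowed
            return allowed                      # number of cells actually sent

    tb = TokenBucket(100)
    for _ in range(100):        # the host stays idle for 100 ticks
        tb.tick()
    print(tb.send(10_000))      # 10000: the accumulated tokens allow one large burst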

2-3 Combining Token Bucket and Leaky Bucket:

• The two techniques can be combined to credit an idle host and at the same time regulate the traffic.

The leaky bucket is applied after the token bucket.

• The rate of the leaky bucket needs to be higher than the rate of tokens dropped in the bucket.

3- Resource Reservation:

• A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The quality of

service is improved if these resources are reserved previously. We will discuss later one QoS

model called Integrated Services, which depends heavily on resource reservation to improve the

quality of service.

4- Admission Control:

• Admission control refers to the mechanism used by a router, or a switch, to accept or reject a flow

based on predefined parameters called flow specifications.

• Before a router accepts a flow for processing, it checks the flow specifications to see if its capacity

(in terms of bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other flows

can handle the new flow.

Figure: Token bucket implementation: packets arrive at a bursty rate and wait in a queue (a packet arriving at a full queue is discarded); the processor removes a packet only if there is a token in the bucket.


5- Integrated Services:

• Two models have been designed to provide quality of service in the Internet: Integrated Services

and Differentiated Services.

• Both models emphasize the use of quality of service at the network layer (IP), although the model

can also be used in other layers such as the data link.

• IP was originally designed for best-effort delivery. This means that every user receives the same

level of services. This type of delivery does not guarantee the minimum of a service, such as

bandwidth, to applications such as real-time audio and video. If such an application accidentally

gets extra bandwidth, it may be detrimental to other applications, resulting in congestion.

• Integrated Services (IntServ) is a flow-based QoS model, which means that a user needs to create a

flow, a kind of virtual circuit, from the source to the destination and inform all routers of the

resource requirement.

5-1 Signaling:

• Since the IP is a connectionless, datagram, packet-switching protocol, how can we implement a

flow-based model over a connectionless protocol? The solution is a signaling protocol to run over

IP that provides the signaling mechanism for making a reservation. This protocol is called

Resource Reservation Protocol (RSVP).

5-2 Flow Specification:

• When a source makes a reservation, it needs to define a flow specification.

• A flow specification has two parts: Rspec (resource specification) and Tspec (traffic specification).

• Rspec defines the resource that the flow needs to reserve (buffer, bandwidth, etc.).

• Tspec defines the traffic characterization of the flow.

5-3 Admission:

• After a router receives the flow specification from an application, it decides to admit or deny the

service. The decision is based on the previous commitments of the router and the current

availability of the resource.

5-4 Service Classes:

• Two classes of services have been defined for Integrated Services: guaranteed service and

controlled-load service.

• Guaranteed Service Class: This type of service is designed for real-time traffic that needs a

guaranteed minimum end-to-end delay. The end-to-end delay is the sum of the delays in the

routers, the propagation delay in the media, and the setup mechanism.

• Only the delays in the routers can be guaranteed by the router. This type of service guarantees that

the packets will arrive within a certain delivery time and are not discarded if flow traffic stays


within the boundary of Tspec. We can say that guaranteed services are quantitative services, in

which the amount of end-to-end delay and the data rate must be defined by the application.

• Controlled-Load Service Class: This type of service is designed for applications that can accept

some delays, but are sensitive to an overloaded network and to the danger of losing packets. Good

examples of these types of applications are file transfer, e-mail, and Internet access. The

controlled-load service is a qualitative type of service in that the application requests the

possibility of low-loss or no-loss packets.

6- RSVP:

• In the Integrated Services model, an application program needs resource reservation for a flow.

• This means that if we want to use IntServ at the IP level, we need to create a flow, a kind of

virtual-circuit network, out of the IP, which was originally designed as a datagram packet-switched

network.

• A virtual-circuit network needs a signaling system to set up the virtual circuit before data traffic

can start. The Resource Reservation Protocol (RSVP) is a signaling protocol to help IP create a

flow and consequently make a resource reservation.

6-1 Multicast Trees:

• RSVP is a signaling system designed for multicasting. However, RSVP can be also used for

unicasting because unicasting is just a special case of multicasting with only one member in the

multicast group. The reason for this design is to enable RSVP to provide resource reservations for

all kinds of traffic including multimedia which often uses multicasting.

6-2 Receiver-Based Reservation:

• In RSVP, the receivers, not the sender, make the reservation. This strategy matches the other

multicasting protocols. For example, in multicast routing protocols, the receivers, not the sender,

make a decision to join or leave a multicast group.

6-3 RSVP Messages:

• RSVP has several types of messages. We discuss only two of them: Path and Resv.

• Path Messages: Recall that the receivers in a flow make the reservation in RSVP. However, the

receivers do not know the path traveled by packets before the reservation is made. The path is

needed for the reservation.

• To solve the problem, RSVP uses path messages. A Path message travels from the sender and

reaches all receivers in the multicast path. On the way, a path message stores the necessary

information for the receivers.

• A Path message is sent in a multicast environment; a new message is created when the path

diverges.


• Resv Messages: After a receiver has received a path message, it sends a Resv message. The Resv

message travels toward the sender (upstream) and makes a resource reservation on the routers that

support RSVP. If a router does not support RSVP on the path, it routes the packet based on the

best-effort delivery methods.

6-4 Reservation Merging:

• In RSVP, the resources are not reserved for each receiver in a flow; the reservation is merged.

• For example, Rc3 requests a 2-Mbps bandwidth while Rc2 requests a 1-Mbps bandwidth. Router

R3, which needs to make a bandwidth reservation, merges the two requests. The reservation is

made for 2 Mbps, the larger of the two, because a 2-Mbps input reservation can handle both

requests. The same situation is true for R2.

• You may ask why Rc2 and Rc3, both belonging to one single flow, request different amounts of

bandwidth. The answer is that, in a multimedia environment, different receivers may handle

different grades of quality. For example, Rc2 may be able to receive video only at 1 Mbps (lower

quality), while Rc3 may be able to receive video at 2 Mbps (higher quality).

Figure: Reservation merging: Receiver 1 requests 3 Mbps, Receiver 2 requests 1 Mbps, and Receiver 3 requests 2 Mbps; each router reserves only the largest of the requests arriving from downstream (2 Mbps on the branch shared by Receivers 2 and 3, 3 Mbps toward the sender).

Figure: RSVP messages: Path messages travel downstream from the sender toward Receivers 1, 2, and 3; Resv messages travel upstream from the receivers toward the sender.


6-5 Reservation Styles:

• When there is more than one flow, the router needs to make a reservation to accommodate all of

them. RSVP defines three types of reservation styles:

• Wild Card Filter Style: In this style, the router creates a single reservation for all senders. The

reservation is based on the largest request. This type of style is used when the flows from different

senders do not occur at the same time.

• Fixed Filter Style: In this style, the router creates a distinct reservation for each flow. This means

that if there are n flows, n different reservations are made. This type of style is used when there is a

high probability that flows from different senders will occur at the same time.

• Shared Explicit Style: In this style, which is a hybrid of the first two styles, the router creates a

single reservation which can be shared by a set of flows. This reservation could be some portion

(e.g., 40%) of the sum of all requests.

6-6 Problems with Integrated Services:

• There are at least two problems with Integrated Services that may prevent its full implementation

in the Internet: scalability and service-type limitation.

• Scalability: The Integrated Services model requires that each router keep information for each

flow. As the Internet is growing every day, this is a serious problem.

• Service-Type Limitation: The Integrated Services model provides only two types of services,

guaranteed and controlled-load. Those opposing this model argue that applications may need more

than these two types of services, such as low loss and low latency.

7- DIFFERENTIATED SERVICES:

• Differentiated Services (DS or Diffserv) was introduced to handle the shortcomings of Integrated

Services. Two fundamental changes were made:

• First: The main processing was moved from the core of the network to the edge of the network.

This solves the scalability problem. The routers do not have to store information about flows. The

applications, or hosts, define the type of service they need each time they send a packet.

Figure: RSVP reservation styles: wild card filter, fixed filter, and shared explicit.


• Second: The per-flow service is changed to per-class service. The router routes the packet based on

the class of service defined in the packet, not the flow. This solves the service-type limitation

problem.

• We can define different types of classes based on the needs of applications.

• In short, Differentiated Services is a class-based QoS model designed for IP.

7-1 DS Field:

• In Diffserv, each packet contains a field called the DS field.

• The value of this field is set at the boundary of the network by the host or the first router

designated as the boundary router.

• The DS field contains two subfields: DSCP and CU. The DSCP (Differentiated Services Code

Point) is a 6-bit subfield that defines the per-hop behavior (PHB). The 2-bit CU subfield is not

currently used.

• The Diffserv router uses the DSCP 6 bits as an index to a table defining the packet-handling

mechanism for the current packet being processed.
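• Extracting the DSCP from the DS byte is a simple bit operation, as in this illustrative sketch (the per-hop-behavior table shown is made up; only codepoint 46 for expedited forwarding is a commonly used value):

    def dscp(ds_byte: int) -> int:
        # Return the 6-bit DSCP; the 2 low-order CU bits are ignored.
        return (ds_byte >> 2) & 0x3F

    # Hypothetical mapping from codepoint to packet-handling mechanism.
    phb_table = {0b000000: "default (best effort)", 0b101110: "expedited forwarding"}

    ds = 0b10111000            # DS byte carried in the IP header
    print(dscp(ds), phb_table.get(dscp(ds), "unknown PHB"))   # 46 expedited forwarding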

Figure: The DS field is made of a 6-bit codepoint (DSCP) followed by 2 unused (CU) bits. It occupies the 8-bit DS field of the IPv4 header (header of 20-60 bytes, total datagram of 20-65,536 bytes), alongside VER, HLEN, total length, identification, flags, fragmentation offset, time to live, protocol, header checksum, source and destination IP addresses, and options.


7-2 Per-Hop Behavior:

• The Diffserv model defines per-hop behaviors (PHBs) for each node that receives a packet. So far

three PHBs are defined: DE PHB (low service), EF PHB (medium service), and AF PHB (high

service).

• DE PHB: The DE PHB (default PHB) is the same as best-effort delivery, which is compatible with

TOS (type of service used in IPV4).

• EF PHB: The EF PHB (expedited forwarding PHB) provides the following services:

- Low loss

- Low latency

- Ensured bandwidth

• This is the same as having a virtual connection between the source and destination.

• AF PHB: The AF PHB (assured forwarding PHB) delivers the packet with a high assurance as

long as the class traffic does not exceed the traffic profile of the node. The users of the network

need to be aware that some packets may be discarded.

III- QoS IN SWITCHED NETWORKS

• In this section we discuss QoS in two switched networks: Frame Relay and ATM.

• These two networks are virtual-circuit networks that need a signaling protocol such as RSVP.

1- QoS in Frame Relay

• In Frame Relay networks, 4 different attributes have been defined in order to control traffic:

a. access rate,

b. committed burst size Bc,

c. committed information rate CIR,

d. and excess burst size Be.

• These parameters are set during the negotiation between the user and the network.

• For permanent virtual circuit (PVC) connections, they are negotiated once; for switched virtual circuit (SVC) connections, they are negotiated for each connection during connection setup.


Access Rate

• For every connection, an access rate (in bits per second) is defined. The access rate actually

depends on the bandwidth of the channel connecting the user to the network.

• The user can never exceed this rate. For example, if the user is connected to a Frame Relay network by a T-1 line, the access rate is 1.544 Mbps and can never be exceeded.

Committed Burst Size

• For every connection, Frame Relay defines a committed burst size Bc, which is the maximum

number of bits in a predefined time that the network is committed to transfer without discarding

any frame.

• For example, if a Bc of 400 kbits for a period of 4 s is granted, the user can send up to 400 kbits

during a 4-s interval without worrying about any frame loss.

• The Bc is a cumulative measurement which means that it is not defined for each second. For

instance, in the previous example, the user can send 300 kbits during the first second, no data

during the second and the third seconds, and finally 100 kbits during the fourth second.

Committed Information Rate

• The committed information rate (CIR) is similar in concept to Bc except that it defines an average

rate in bits per second, not in a predefined time.

CIR = Bc / T

• For example, if the Bc is 400 kbits in a period of 4 s, the CIR is 400000/4, or 100 kbps.

• If the user follows this average rate continuously, the network is committed to deliver the frames.

• On the other hand, a user may send data at a rate higher than the CIR at times, or at a lower rate at other times. As long as the average over the predefined period is respected, the frames will be delivered.

• The cumulative number of bits sent during the predefined period cannot exceed Bc.
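• To make the relation CIR = Bc / T concrete, the following small sketch (illustrative only) computes the CIR and checks that the cumulative transmission of the example above stays within Bc.

# Sketch: committed information rate and cumulative check against Bc.
# Values mirror the example in the text: Bc = 400 kbits over T = 4 s.
Bc = 400_000      # committed burst size in bits
T = 4             # predefined period in seconds
CIR = Bc / T      # committed information rate in bits per second
print(f"CIR = {CIR:.0f} bps")            # 100000 bps = 100 kbps

# Per-second transmissions from the text: 300 kbits, nothing, nothing, 100 kbits.
sent_per_second = [300_000, 0, 0, 100_000]
total = sum(sent_per_second)
print("within Bc" if total <= Bc else "exceeds Bc")   # within Bc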

Excess Burst Size

[Figure: data rate (bits/s) versus time (seconds), showing the access rate, the CIR, and the areas corresponding to Bc and Be.]


• For every connection, Frame Relay defines an excess burst size Be, which is the maximum number

of bits in excess of Bc that a user can send during a predefined time.

• The network is committed to transfer these bits only if there is no congestion.

User Rate

• A user can send bursty data with its own data rate.

• If the user never exceeds Bc the network is committed to transmit the frames without discarding

any.

• If the user exceeds Bc by less than Be (i.e., stays below Bc + Be), the network is committed to transfer all the frames if there is no congestion. If there is congestion, some frames will be discarded.

• The first switch that receives the frames from the user has a counter and sets the Discard Eligibility (DE) bit for the frames that exceed Bc, which authorizes the rest of the switches to discard these frames in case of congestion.

• A user who needs to send data faster may exceed the Bc level. As long as the level is not above

(Bc + Be) there is a chance that the frames will reach the destination without being discarded.

• When the user exceeds the (Bc + Be) level, all the frames sent after that level are automatically

discarded by the first switch.
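• The first-switch decision described above can be sketched as follows (illustrative only; the Bc and Be values are invented):

# Sketch of the first switch's decision for each frame, based on the
# cumulative number of bits the user has sent during the current period.
def handle_frame(cumulative_bits, Bc, Be):
    if cumulative_bits <= Bc:
        return "forward, DE = 0"   # network committed to deliver
    elif cumulative_bits <= Bc + Be:
        return "forward, DE = 1"   # may be discarded under congestion
    else:
        return "discard"           # above Bc + Be: dropped immediately

Bc, Be = 400_000, 200_000
for bits in (350_000, 550_000, 700_000):
    print(bits, "->", handle_frame(bits, Bc, Be))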

2- QoS in ATM

• The QoS in ATM is based on the class, user-related attributes, and network-related attributes.

Classes

• Four service classes are defined in ATM: CBR, VBR, ABR, and UBR.

• The constant-bit-rate (CBR) class is designed for customers who need real time audio or video

services. The service is similar to that provided by a dedicated line such as a T line.

• The variable-bit-rate (VBR) class is divided into two subclasses: real-time (VBR-RT) and non-

real-time (VBR-NRT).

• VBR-RT is designed for those users who need real-time services (such as voice and video

transmission) and use compression techniques (MPEG, AVI, MP3, …) to create a variable bit rate.

[Figure (Frame Relay user rate): actual data rate (bits/s) versus time; the area under the actual-rate curve over T seconds is the total number of bits sent.]
- If Area < Bc: no discarding (DE = 0).
- If Bc < Area < Bc + Be: possible discarding if congestion (DE = 1).
- If Area > Bc + Be: discarding occurs.


• VBR-NRT is designed for those users who do not need real-time services but use compression

techniques (ZIP, RAR, …) to create a variable bit rate.

• The available-bit-rate (ABR) class delivers cells at a minimum rate. If more network capacity is

available, this minimum rate can be exceeded. ABR is particularly suitable for applications that are

bursty.

• The unspecified-bit-rate (UBR) class is a best-effort delivery service that does not guarantee anything.

User-Related Attributes

• User-related attributes define how fast the user wants to send data.

• These are negotiated at the time of contract between a user and a network. Among these attributes

we have:

• The sustained cell rate (SCR) which is the average cell rate over a long time interval.

• The actual cell rate may be lower or higher than this value, but the average should be equal to or

less than the SCR.

• The peak cell rate (PCR) defines the sender's maximum cell rate. The user's cell rate can

sometimes reach this peak, as long as the SCR is maintained.

• The minimum cell rate (MCR) defines the minimum cell rate acceptable to the sender. For

example, if the MCR is 50,000, the network must guarantee that the sender can send at least

50,000 cells per second.

• The cell variation delay tolerance (CVDT) is a measure of the variation in cell transmission times.

For example, if the CVDT is 5 ns, this means that the difference between the minimum and the

maximum delays in delivering the cells is at worst 5 ns.
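• As a simplified illustration (this is not the actual ATM policing mechanism, which uses the Generic Cell Rate Algorithm), the following sketch checks a sequence of per-second cell counts against a PCR and an SCR; all values are invented.

# Simplified sketch: a stream conforms if no second exceeds the peak cell
# rate (PCR) and the long-term average stays at or below the SCR.
def conforms(cells_per_second, scr, pcr):
    if any(rate > pcr for rate in cells_per_second):
        return False                                   # peak exceeded
    average = sum(cells_per_second) / len(cells_per_second)
    return average <= scr                              # average bound respected

stream = [40_000, 90_000, 20_000, 50_000]              # cells per second
print(conforms(stream, scr=60_000, pcr=100_000))       # True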

Network-Related Attributes

• The network-related attributes define characteristics of the network. Among these attributes we

have:

• The cell loss ratio (CLR) defines the fraction of cells lost or delivered too late.

• For example, if the sender sends 100 cells and one of them is lost, then

[Figure: sharing of network capacity over time; CBR traffic receives a constant share, VBR a variable share, and ABR/UBR the remaining capacity.]


CLR = 1/100 = 10^-2

• The cell transfer delay (CTD) is the average time needed for a cell to travel from source to

destination. The maximum CTD and the minimum CTD are also considered attributes.

• The cell delay variation (CDV) is the difference between the CTD maximum and the CTD minimum.

• The cell error ratio (CER) defines the fraction of the cells delivered in error.
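• The following sketch (with invented sample values) shows how these network-related attributes would be computed from delivery statistics:

# Sketch: CLR, CTD, CDV, and CER from delivery statistics (sample values invented).
sent = 100
lost = 1
delivered_delays_ms = [9.8, 10.1, 10.4, 9.9, 10.0]   # per-cell transfer delays
cells_in_error = 2

CLR = lost / sent                                     # cell loss ratio: 1/100
CTD = sum(delivered_delays_ms) / len(delivered_delays_ms)
CDV = max(delivered_delays_ms) - min(delivered_delays_ms)
CER = cells_in_error / sent                           # cell error ratio

print(f"CLR={CLR}, CTD={CTD:.2f} ms, CDV={CDV:.2f} ms, CER={CER}")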


IV- QoS and Multimedia

• Many parameters were defined to enhance the QoS in multimedia. Here are some of the most

important features:

Bandwidth

• The bandwidth is a network QoS parameter that refers to the data rate supported by a network.

• Multimedia applications usually require high bandwidth as compared to other general applications.

For example, MPEG2 video requires around 4Mbps, while MPEG1 video requires about 1-2

Mbps.

• Network technologies that do not support such high bandwidth cannot play multimedia contents.

For instance, Bluetooth technology version 1 only supports a maximum bandwidth of 746 kbps

and thus, devices relying on Bluetooth for connectivity cannot play MPEG1 videos. Better

throughput means better QoS received by the end-user.

Delay

• Delay is the time that elapses between the departure of data from the source and its arrival at the destination.

• This can range from a few nanoseconds or microseconds in LANs to about 0.25 s in satellite

communications systems.

Jitter

• It refers to the variation in time between packets arriving at the destination. It is caused mainly by

network congestion.

• Depending on the multimedia application type, jitter may or may not be significant. For example, audio or video conferencing applications cannot tolerate jitter because of the very limited buffering in live presentations, whereas prerecorded multimedia playback is usually tolerant of jitter, since modern players buffer around 5 seconds to alleviate its effect.

Loss

• Loss mainly denotes the amount of data that did not make it to the destination in a specified time

period.

• Different methods can be used to reduce the chances of loss: either by providing individual channels or guaranteed bandwidth for specific data transmissions, or by retransmitting data for loss recovery.

Reliability

• Most multimedia applications are error tolerant to a certain extent. However, a few multimedia applications, like distance-learning examinations or tele-surgery, are sensitive to packet loss.

Frame size


• It defines the size of the video image on the display screen. A bigger frame size requires a higher

bandwidth.

• In QoS adaptation, frame size can be modified according to available QoS support.

• For example, if a video is transmitted at 800x600 pixel size frames, and network congestion is

experienced, then the frame size can be reduced to 640x480 pixels to lower the bandwidth

requirement for the video transmission.

Frame rate

• It defines the number of frames sent to the network per second.

• A higher frame rate requires higher bandwidth. If QoS adaptation is required, the frame rate can be modified to lessen the bandwidth requirements at the cost of video quality.
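• As a rough illustration (the compression ratio below is an invented assumption, not a property of any particular codec), the following sketch estimates the bandwidth needed for a video stream from its frame size and frame rate, showing how reducing either lowers the requirement:

# Rough estimate of video bandwidth from frame size and frame rate,
# assuming 24 bits per pixel and a fixed (invented) compression ratio.
def required_bandwidth(width, height, fps, bits_per_pixel=24, compression_ratio=100.0):
    raw = width * height * bits_per_pixel * fps    # raw bits per second
    return raw / compression_ratio                 # compressed estimate

normal = required_bandwidth(800, 600, 30)
reduced = required_bandwidth(640, 480, 30)
print(f"{normal/1e6:.2f} Mbps -> {reduced/1e6:.2f} Mbps after reducing the frame size")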

Image clarity

• It refers to the perceptual quality of the image as perceived by the user for a given delivery of multimedia content. The human eye is less sensitive to the high-frequency components of an image, so these components can be eliminated without any noticeable loss in image quality.

Audio quality

• It is defined in terms of the sampling rate and the number of bits per sample. A higher sampling rate renders better audio quality.

• For example, audio encoded at 128 kbps is of higher quality than audio encoded at 16 kbps. Usually, 64 kbps is adequate for voice conversation (8000 samples/s x 8 bits/sample), whereas 128 kbps or higher is required for stereo music content.

Audio/Video Confidentiality and Integrity

• It dictates the security and access right restrictions for audio and video streams.

• For example, security is vital for an audio conference between two agents of a company discussing

the new product line, whereas security is not important for a news broadcast over the Internet.


Multicast Routing

I- Introduction

• A message can be unicast, multicast, or broadcast.

1- Unicasting

• In unicast communication, there is one source and one destination.

• The relationship is one-to-one.

• In this type of communication, both the source and destination addresses, in the IP datagram, are

the unicast addresses assigned to the hosts.

• When a router receives a packet, it forwards the packet through only one of its interfaces (the one

belonging to the optimum path) as defined in the routing table.

• The router may discard the packet if it cannot find the destination address in its routing table.

2- Multicasting

• In multicast communication, there is one source and a group of destinations.

• The relationship is one-to-many.

• In this type of communication, the source address is a unicast address, but the destination address

is a group address, which defines one or more destinations.

• When a router receives a packet, it may forward it through several of its interfaces.

3- Broadcasting

• In broadcast communication, the relationship between the source and the destination is one-to-all.

There is only one source, but all the other hosts are the destinations.

• The Internet does not explicitly support broadcasting because of the huge amount of traffic it

would create and because of the bandwidth it would need.

• Broadcasting is usually performed inside a LAN only. Some protocols such as Ethernet specify an address for broadcasting (FF:FF:FF:FF:FF:FF).

4- Multicasting vs Multiple Unicasting

• Multicasting is more efficient than multiple unicasting.

• Multicasting starts with one single packet from the source that is duplicated by the routers.

• The destination address in each packet is the same for all duplicates.

• Only one single copy of the packet travels between any two routers.

• In multiple unicasting, several packets start from the source.

• If there are five destinations, for example, the source sends five packets, each with a different

unicast destination address.


• When a person sends an e-mail message to a group of people, this is multiple unicasting.

• In multiple unicasting, the packets are created by the source with a relative delay between packets.

If there are 1000 destinations, the delay between the first and the last packet may be unacceptable.

5- Applications

• Multicasting has many applications today such as access to distributed databases, information

dissemination, teleconferencing, and distance learning.

Access to Distributed Databases

• Most of the large databases today are distributed.

• The information is stored in many locations.

• The user who needs to access the database does not know the location of the information.

• A user's request is multicast to all the database locations, and the location that has the information

responds.

Information and News Dissemination

• Businesses often need to send advertisement information to their customers.

[Figure: multicasting (one packet duplicated by the routers) versus multiple unicasting (one copy per destination sent by the source); networks that are not members of the group receive nothing.]


• If the nature of the information is the same for each customer, it can be multicast. In this way a

business can send one message that can reach many customers.

Teleconferencing

• Teleconferencing involves multicasting.

• The individuals attending a teleconference all need to receive the same information at the same

time.

Distance Learning

• Lessons taught by one single professor can be received by a specific group of students.

II- Multicast Routing Basics

1- Optimal Routing: Shortest Path Trees

• The process of optimal interdomain routing eventually results in the finding of the shortest path

tree.

• The root of the tree is the source, and the leaves are the potential destinations.

• The path from the root to each destination is the shortest path.

• The Dijkstra algorithm creates a shortest path tree from a graph (the topology). The algorithm divides

the nodes into two sets: tentative and permanent. It finds the neighbors of a current node, makes them

tentative, examines them, and if they pass the criteria, makes them permanent.

• The algorithm can be summarized as follows:

1. Set the root to the local node and move it to the tentative list.
2. If the tentative list is empty, stop.
3. Among the nodes in the tentative list, move the one with the shortest cumulative path to the permanent list.
4. Add each unprocessed neighbor of the last moved node to the tentative list if it is not already there; if a neighbor is already in the tentative list with a larger cumulative cost, replace it with the new one. Go back to step 2.

• Example:


[Figure: example topology with nodes A to G and link costs.]

Initialization (root = A):
- Permanent List { }
- Tentative List {A(0)}

Choose A and move it:
- Permanent List {A(0)}
Add A's neighbors:
- Tentative List {B(2), D(7)}

Choose B and move it:
- Permanent List {A(0), B(2)}
Add B's neighbors:
- Tentative List {D(7), C(5), E(4)}

Choose E and move it:
- Permanent List {A(0), B(2), E(4)}
Add E's neighbors and update existing costs:
- Tentative List {D(6), C(5), F(6), G(7)}

Choose C and move it:
- Permanent List {A(0), B(2), E(4), C(5)}
Add C's neighbors:
- Tentative List {D(6), F(6), G(7)}

Choose D and move it:
- Permanent List {A(0), B(2), E(4), C(5), D(6)}
Add D's neighbors:
- Tentative List {F(6), G(7)}

Choose F and move it:
- Permanent List {A(0), B(2), E(4), C(5), D(6), F(6)}
Add F's neighbors:
- Tentative List {G(7)}

Choose G and move it:
- Permanent List {A(0), B(2), E(4), C(5), D(6), F(6), G(7)}
Add G's neighbors:
- Tentative List { } (empty: the algorithm ends)
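• To make the procedure concrete, here is a small Python sketch of the algorithm (not part of the course text). It uses only the link costs that can be read from the worked example above (A-B=2, A-D=7, B-C=3, B-E=2, E-D=2, E-F=2, E-G=3); any other links of the original figure are omitted, so the graph is only an approximation of the example topology.

# Minimal sketch of Dijkstra's algorithm with tentative/permanent lists.
graph = {
    "A": {"B": 2, "D": 7},
    "B": {"A": 2, "C": 3, "E": 2},
    "C": {"B": 3},
    "D": {"A": 7, "E": 2},
    "E": {"B": 2, "D": 2, "F": 2, "G": 3},
    "F": {"E": 2},
    "G": {"E": 3},
}

def shortest_path_tree(root):
    permanent = {}                       # node -> (cost, previous hop)
    tentative = {root: (0, None)}
    while tentative:
        # move the tentative node with the smallest cumulative cost
        node, (cost, prev) = min(tentative.items(), key=lambda kv: kv[1][0])
        del tentative[node]
        permanent[node] = (cost, prev)
        for neighbor, weight in graph[node].items():
            if neighbor in permanent:
                continue
            new_cost = cost + weight
            if neighbor not in tentative or new_cost < tentative[neighbor][0]:
                tentative[neighbor] = (new_cost, node)
    return permanent

for node, (cost, prev) in shortest_path_tree("A").items():
    print(node, cost, prev)   # A 0, B 2, E 4, C 5, D 6, F 6, G 7 as in the example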


• Each node uses the shortest path tree to construct its routing table. The routing table shows the cost of reaching each node from the root. The following table shows the routing table for node A:

Node    Cost    Next Router
A       0       -
B       2       -
C       5       B
D       6       B
E       4       B
F       6       B
G       7       B

• However, the number of trees and the formation of the trees in unicast and multicast routing are

different.

• In unicast routing, when a router receives a packet to forward, it needs to find the shortest path to

the destination of the packet. The router consults its routing table for that particular destination.

• In unicast routing, each router needs only one shortest path tree to forward a packet. The routing

table is extracted from one shortest path tree, and each line is the shortest path to the destination:

Node    Cost    Next Router
A       0       -
B       2       -
C       5       B
D       6       B
E       4       B
F       6       B
G       7       B

• When a router receives a multicast packet, this packet may have destinations in more than one

network.

• The routing table is formed of many shortest path trees, since each line corresponds to a shortest path tree to a group. Notice that all we have is the multicast address of a group, not the unicast addresses of its members.

Group    Path tree
G1       Shortest path tree to reach all members of G1
G2       ...

• Forwarding of a single packet to members of a group requires a shortest path tree. If we have n

groups, we may need n shortest path trees, which makes routing more complex.

• Many approaches have been proposed to solve the problem, such as the source-based trees and the

group-shared trees.

2- Source-Based Tree

• In the source-based tree approach, each router needs to have one shortest path tree for each group.

• The shortest path tree for a group defines the next hop for each network that has loyal member(s)

for that group.

• In the figure below, we assume that we have only five groups in the domain: G1, G2, G3, G4, and G5.

• G1 has loyal members in four networks, G2 in three, G3 in two, G4 in two, and G5 in two.

• In the routing table of R1, there is one shortest path tree for each group; therefore there are five

shortest path trees for five groups.

• If router R1 receives a packet with destination address G1, it needs to send a copy of the packet to

the attached network, a copy to router R2, and a copy to router R4 so that all members of G1 can

receive a copy.

• So, if the number of groups is m, each router needs to have m shortest path trees, one for each

group.

• The routing table becomes complex if we have hundreds or thousands of groups.

[Figure: a domain with routers R1, R2, R3, and R4 and six networks whose members belong to the groups {G1, G2}, {G1, G2}, {G3}, {G1, G4, G5}, {G3, G5}, and {G1, G2, G4}.]

Routing table for R1
Destination    Next Hop
G1             R2, R4
G2             R2
G3             R2
G4             R2, R4
G5             R2, R4

(The routing tables for R2, R3, and R4 are constructed in the same way.)
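• As a simple illustration (not from the textbook), R1's forwarding decision can be sketched as a lookup in the table above followed by duplication of the packet toward each listed next hop; router names are placeholders, and a copy also goes to R1's attached network when it contains members, as described in the text.

# Sketch: per-group duplication at R1 using the routing table above.
r1_table = {
    "G1": ["R2", "R4"],
    "G2": ["R2"],
    "G3": ["R2"],
    "G4": ["R2", "R4"],
    "G5": ["R2", "R4"],
}

def forward(group, incoming):
    # duplicate the packet on every next hop except the one it arrived from
    return [hop for hop in r1_table.get(group, []) if hop != incoming]

print(forward("G1", incoming="R3"))    # ['R2', 'R4']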


3- Group-Shared Tree.

• In this approach, instead of each router having m shortest path trees, only one designated router, called the core (or rendezvous) router, takes the responsibility of distributing multicast traffic.

• The core has m shortest path trees in its routing table. The rest of the routers in the domain have

none.

• If a router receives a multicast packet, it encapsulates the packet in a unicast packet and sends it to

the core router.

• The core router removes the multicast packet from its capsule, and consults its routing table to

route the packet.
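• The following sketch (illustrative only, with invented router names and packet fields) shows the idea: an ordinary router encapsulates the multicast packet in a unicast packet addressed to the core; the core decapsulates it and duplicates it according to its per-group table.

# Sketch of the group-shared (core-based) approach: only the core router
# keeps per-group trees; other routers tunnel multicast packets to it.
CORE = "RC"
core_table = {"G1": ["R2", "R3", "R4"], "G2": ["R2", "R3"], "G4": ["R3", "R4"], "G5": ["R4"]}

def at_ordinary_router(multicast_packet):
    # encapsulate the multicast packet in a unicast packet addressed to the core
    return {"dst": CORE, "payload": multicast_packet}

def at_core_router(unicast_packet):
    inner = unicast_packet["payload"]             # decapsulate
    return core_table.get(inner["group"], [])     # duplicate along the shared tree

tunneled = at_ordinary_router({"group": "G4", "data": "..."})
print(at_core_router(tunneled))                   # ['R3', 'R4']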

III- New Multicast Routing Protocols

• During the last few decades, several multicast routing protocols have emerged.

[Figure: the same domain, with one router designated as the core (rendezvous) router.]

Routing table for the core router
Destination    Next Hop
G1             R2, R3, R4
G2             R2, R3
G3
G4             R3, R4
G5             R4


1- Multicast Link State Routing: MOSPF

• Each router creates a shortest path tree by using Dijkstra's algorithm.

• The routing table is a translation of the shortest path tree.

• Multicast link state routing is an extension of unicast routing and uses a source-based tree approach.

• In unicast routing, each node needs to advertise the state of its links, or the new topology.

• In multicasting, a node advertises every group which has any loyal member on the link.

• When a router receives all these LSPs (n groups info), it creates n topologies, from which n shortest

path trees are made by using Dijkstra's algorithm.

• So each router has a routing table that represents as many shortest path trees as there are groups.

• Of course, the problem with this protocol is the time and space needed to create and save the many

shortest path trees.

• To alleviate the problem, routers can create the trees only when needed. When a router receives a

packet with a multicast destination address, it runs the Dijkstra algorithm to calculate the shortest path

tree for that group.

2- Multicast Distance Vector: DVMRP

• Multicast distance vector routing is an extension to the unicast distance vector routing.

• Yet, multicast routing does not allow a router to send its routing table to its neighbors.

• The idea is to create a table from scratch by using the information from the unicast distance vector

tables.

• Multicast distance vector routing uses source-based trees, but the router never actually makes a routing

table.

• When a router receives a multicast packet, it forwards the packet (using a strategy) as though it is

consulting a routing table.

[Figure: taxonomy of multicasting protocols. Source-based tree protocols: DVMRP, MOSPF, PIM-DM. Group-based tree protocols: CBT, PIM-SM.]


• We can say that the shortest path tree is evanescent. After its use (after a packet is forwarded) the table

is destroyed.

• To accomplish this, the multicast distance vector algorithm uses a process based on four decision-

making strategies.

• A. Flooding:

• In flooding, a router receives a packet and sends it out from every interface except the one from which

it was received.

• In this way, we guarantee that the packet arrives at all active members.

• But other networks without active members also receive this packet, which wastes a great deal of bandwidth.

• A packet that has left the router may come back (loop) again through another interface. This can be solved by keeping a copy of the packet for a while and discarding any duplicates.

• B. Reverse Path Forwarding (RPF):

• To prevent loops, only one copy is forwarded; the other copies are dropped.

• In RPF, a router forwards only the copy that has traveled the shortest path from the source to the router.

• The router consults its unicast routing table as if it wanted to send a packet to the source address. The routing table gives the next hop toward the source (the reverse path), so the router knows which copy to keep: the one that arrived from that next hop. If the path from A to B is the shortest, then it is also the shortest from B to A.

• This strategy prevents loops because there is always one shortest path from the source to the router.

• If a packet leaves the router and comes back again, it has not traveled the shortest path.
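• A minimal sketch of the RPF test (not from the textbook), assuming a unicast routing table that maps each destination to a next-hop interface with placeholder names: a copy is kept only if it arrived on the interface the router itself would use to reach the source.

# Sketch of the reverse path forwarding (RPF) check; table contents are illustrative.
unicast_next_hop = {"S": "if0", "NetA": "if1", "NetB": "if2"}

def rpf_accept(source, arrival_interface):
    # keep the copy only if it came in on the interface toward the source
    return unicast_next_hop.get(source) == arrival_interface

print(rpf_accept("S", "if0"))   # True: copy arrived along the shortest path from S
print(rpf_accept("S", "if2"))   # False: duplicate or looped copy, dropped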

• C. Reverse Path Broadcasting (RPB):

• RPF does not guarantee that each network receives only one copy; a network may receive two or more

copies.

• To eliminate duplication, we must define only one parent router for each network.

[Figure: RPF rejects copies that arrive on the wrong interface, both in the case of a duplicate copy and in the case of a loop back.]


• The reverse path broadcasting (RPB) guarantees that the packet reaches every network and that every

network receives only one copy.

• A network can receive a multicast packet from a particular source only through a designated parent

router.

• So, for each source, the router sends the packet only out of those interfaces for which it is the

designated parent.

• The designated parent router can be the router with the shortest path to the source.

• D. Reverse Path Multicasting (RPM):

• RPB does not multicast the packet, it broadcasts it, which is not efficient.

• To increase efficiency, the multicast packet must reach only those networks that have active members

for that particular group.

• Reverse path multicasting (RPM) tries to convert broadcasting to multicasting. It uses two procedures,

pruning and grafting.

• The designated parent router of each network is responsible for holding the membership information.

This is done through the IGMP protocol.

• When a router connected to a network finds that there is no interest in a multicast packet, i.e., there is

no member in the network, the router sends a prune message to the upstream router (where the packet

came from) so that it can exclude the corresponding interface.

• So, the upstream router can stop sending multicast messages for this group through that interface.

• Now if this router receives prune messages from all downstream routers, it, in turn, sends a prune

message to its upstream router.

• If a station suddenly wants to become a member of a group, the corresponding router sends a graft message in order to force the upstream router to resume sending the multicast messages.
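• A minimal sketch (not from the textbook) of the prune/graft bookkeeping an upstream router might keep for one group, with placeholder interface names: when every downstream interface has been pruned, the router prunes itself toward its own upstream; when a member appears again, it grafts.

# Sketch: prune/graft state for one multicast group on an upstream router.
class GroupState:
    def __init__(self, downstream_interfaces):
        self.forwarding = set(downstream_interfaces)   # interfaces still served

    def prune(self, interface):
        self.forwarding.discard(interface)
        if not self.forwarding:
            print("all downstream pruned -> send prune upstream")

    def graft(self, interface):
        if not self.forwarding:
            print("membership restored -> send graft upstream")
        self.forwarding.add(interface)

g1 = GroupState(["if1", "if2"])
g1.prune("if1")
g1.prune("if2")     # all downstream pruned -> send prune upstream
g1.graft("if1")     # membership restored -> send graft upstream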

[Figure: RPF versus RPB for networks Net1, Net2, and Net3. With RPB, R1 is the designated parent of Net1 and Net2, and R2 is the designated parent of Net3, so each network receives exactly one copy.]