Connection Set-up and QoS Monitoring in ATM Networks Sin-Lam Tan1, Chen-Khong Tham2, Lek-Heng Ngoh3 Correspondence to Sin-Lam Tan Laboratories for Information Technology 21 Heng Mui Keng Terrace Singapore 119613 Tel: (65) 6874-7865 Fax: (65) 6775-5014
Abstract
This paper describes a system for creating virtual connections based on QoS requirements and providing basic QoS routing functionality. It effectively bridges the gap between complex protocols like PNNI and tedious manual set-up of virtual connections.
1 Laboratories for Information Technology, Singapore. (email: [email protected] ) 2 National University of Singapore, Singapore. (email: [email protected] ) 3 Laboratories for Information Technology, Singapore. (email: [email protected] )
Connection Set-up and QoS Monitoring in ATM Networks
Sin-Lam Tan, Chen-Khong Tham, Lek-Heng Ngoh
1 Introduction
The current network infrastructure of the Internet consists of heterogeneous switches, bridges and routers. These devices require network management support for monitoring and configuration. A good network management tool usually provides basic configuration utilities, fault isolation, and performance monitoring capabilities.
There are many existing network management products such as OpenView from
Hewlett-Packard, CiscoWorks from Cisco, ForeView from Marconi, and SunNet
Manager from Sunsoft. All these tools have a Graphical User Interface (GUI) to
configure and monitor network devices using Simple Network Management Protocol
(SNMP) [1]. SNMP is a UDP-based network management protocol used predominantly in TCP/IP networks. It is widely used to monitor, poll and control network devices through network management tools.
Any SNMP-compliant device can be monitored with these tools, regardless of
vendor origin. However, these tools are usually restricted to certain types of devices. In
ATM networks, these tools are designed to manage certain ATM switches and they are
not interoperable with network management packages from other vendors. The PNNI
protocol [2] solves this interoperability problem among different types of ATM
switches, by allowing fast connection set-up and dynamic QoS routing. However, the connection is torn down if it is not used for a certain period of time.
PNNI signaling is a fairly complex signaling protocol, and implementing it requires a great amount of effort. The PNNI specification describes the PNNI framework and signaling protocol in detail, but it leaves the implementation and the QoS routing algorithm to switch vendors. Even though most ATM switches already support PNNI, these switches may not fully implement the PNNI signaling specification. The complexity comes from two aspects: the protocol is scalable to a very large network, and it supports QoS routing. PNNI implements a hierarchical network organization to support scalability, with summarized reachability information between levels in the hierarchy. Nodes within a given level are grouped into sets known as peer groups, and such hierarchical information can have drawbacks. The aggregated information is much less accurate than information about individual switches, because it consists of merely summarized values. Advertising metrics about such nodes implies an assumption about the symmetry and compactness of the topology of the child peer group and its traffic flows, which is unlikely to be accurate in practice.
There are a few goals in this research project, listed below:
• It provides a simple and flexible framework that allows ATM connections to be set up in a semi-automatic manner. This reduces the time required to set up a connection compared to setting it up manually.
• It allows a user to choose a specific path in favor of the others, and thus provides more flexibility to the user. The user can choose from a list of possible paths returned by the system.
• It allows automatic path selection based on the user's criteria.
• It permits cost assignment to individual links to investigate its effect on route selection.
• It creates a new connection based on a user's QoS requirements. If the requirements cannot be met, the system returns an error status.
• It enables a network administrator to set up connections from any machine in the TCP/IP network.
• It enables a network administrator to monitor network status and get a visual representation of the network topology of ATM switches.
2 Approach
The approach of this project is driven by several criteria. The first criterion is that the overall system is designed for a small to medium-sized ATM network; this project proposes a distributed system that consists of eight subsystems. The system should not exhaust network resources by constantly polling the network status. Another criterion is flexibility: all subsystems can execute on one machine, or each subsystem may run on an independent machine. The system should run on a normal IP network, which is the basis for most existing networks. The design also emphasizes simplicity, providing a simple solution to the virtual connection set-up process. Lastly, the subsystems should not be constrained to one specific platform; they should run on as many platforms as possible without redesigning the whole architecture.
This project implements a distributed system that consists of several subsystems to monitor and configure the network, and to measure its QoS parameters. To measure the end-to-end delay of an ATM link, the system requires two hosts with ATM interface cards to send and receive ATM cells. Finally, we add the capability to assign QoS for a network path. The system takes a user's QoS requirement and tags it onto the new PVC created during connection set-up. This ensures that the new connection does not violate the traffic requirement during data transfer.
3 Main Design
The basic framework of the overall system is shown in Figure 1. The system
consists of Local Agent (LA), Global Agent (GA), Name Server (NS), Routing
Manager (RM), Database Server (DB), Connection Manager (CM), QoS Manager (QM)
and QoS Measurement Agent (QMA). Each subsystem will be described in detail in the
following section. The system uses both SNMP and Remote Procedure Call (RPC) protocols for its implementation: a LA uses SNMP to query and manage an ATM switch, and all subsystems communicate with one another using the RPC mechanism.
These subsystems are required to start in a specific order. The NS is the first
component to begin its service, followed by the DB and the GA. Each LA should be
assigned to a particular switch before it is activated. The CM and RM should be started
before users are allowed to make any new connection. The RM is started first, and then
followed by the CM. A LA can be started or terminated at any time.
The QM and QMAs are started before the CM. They are in charge of collecting QoS parameters on a per-path basis during connection set-up. All other subsystems remain active all the time except the CM, which starts when a user makes a new connection and terminates when the connection set-up is completed. The next few sections describe the interactions between each subsystem and the others.
3.1 Name Server
The role of the NS is to map services and subsystems to their associated IP addresses. Without a NS, all subsystems would be required to use hard-coded IP addresses to communicate with each other. All subsystems (except the LAs) should register
themselves with the NS when they are started. The NS allows a subsystem to be
executed in any host machine. This improves flexibility because each subsystem is only
required to remember a known address, i.e. the address of the NS.
The system currently does not have a de-registration functionality. Once a
subsystem has registered with the NS, it is assumed to remain active throughout the
whole session. Based on the service registrations, the NS is able to respond to service queries from any subsystem. When there is a query about a particular service, the server checks its database and returns the appropriate IP address to the requester, or an error if no entry is found.
The name service database is stored in a file and in memory. Memory access
speeds up service query, whereas file storage allows permanent data storage and enables
a system administrator to check the addresses of running subsystems. The database of
the NS consists of the service type, the IP address of the host machine and the port number the service is running on. There are no records for LAs or QMAs: each LA registers directly with the GA, whereas each QMA registers with the QM.
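The registration and lookup behaviour described above can be sketched as follows. This is a minimal illustration, not the actual implementation (which uses RPC and file-backed storage); the class and method names are ours.

```python
# Minimal sketch of the NS register/lookup logic. Names (NameServer,
# register, lookup) are illustrative assumptions, not the real API.

class NameServer:
    def __init__(self):
        # service type -> (host IP, port); kept in memory for fast queries
        self.services = {}

    def register(self, service, ip, port):
        # no de-registration: an entry persists for the whole session
        self.services[service] = (ip, port)

    def lookup(self, service):
        # return the registered address, or None if no entry is found
        return self.services.get(service)

ns = NameServer()
ns.register("GA", "192.168.1.10", 5000)
assert ns.lookup("GA") == ("192.168.1.10", 5000)
assert ns.lookup("CM") is None  # error case: service not registered
```

A real deployment would also persist each entry to a file, so that an administrator can inspect the addresses of running subsystems.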
3.2 Database Server
The DB maintains a centralized database, both as files and in memory. Any node in the IP network can host the DB. Database queries and updates are executed using the
RPC mechanism. The DB is registered with the NS on start-up. The only components that communicate with the DB are the GA and the RM. The GA updates the DB whenever there is any change in the link-state information of the switches. LAs do not talk directly to the DB; they go through the GA. The RM queries the DB for link-state information before deciding on the best path.
The DB has to keep its database up-to-date at all times. However, if the update interval is large, the database may be inaccurate between updates; with a small update interval, many SNMP messages will flood the network. To solve this problem, LAs use an update interval that compromises between accuracy and the amount of network traffic, and the GA informs the DB only if there is a change in link-state information. The DB also distinguishes between critical and non-critical data. Only critical data is maintained in memory for faster access. All critical and non-critical data are kept as files in the DB, and these files are kept up to date.
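The change-suppression policy just described can be sketched as below. This is an illustration under our own naming assumptions: the GA keeps the last link state it saw per switch and forwards an update to the DB only when the state differs.

```python
# Illustrative sketch of the change-based update policy: the GA
# informs the DB only when a switch's link state has changed, which
# limits traffic between update intervals. All names are hypothetical.

class GlobalAgentCache:
    def __init__(self):
        self.last_state = {}   # switch id -> last reported link state
        self.db_updates = []   # stands in for RPC calls to the DB

    def report(self, switch_id, link_state):
        if self.last_state.get(switch_id) != link_state:
            self.last_state[switch_id] = link_state
            self.db_updates.append((switch_id, link_state))
            return True    # DB was informed
        return False       # unchanged: no DB traffic generated

ga = GlobalAgentCache()
assert ga.report("sw1", {"bw": 100}) is True   # first report -> update
assert ga.report("sw1", {"bw": 100}) is False  # unchanged -> suppressed
assert ga.report("sw1", {"bw": 80}) is True    # changed -> update
```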
The system also supports a minimum-cost criterion for path selection. A network administrator uses a TCL/TK script to graphically configure link cost assignment. This tool allows cost assignment for each link in both directions. As a result, in a connection set-up using minimum cost as the criterion, a network administrator can favor a particular link over other links so that the network traffic can be evenly distributed.
The last role of the DB is to analyze the data and build the topological information. Each time it receives an update from the GA, it recomputes the interconnection of the switches and constructs a network topology map. At any time, a network administrator can view the topology map of the ATM network by executing one of the TCL/TK scripts.
3.3 Global Agent
The GA acts as a coordinator for all LAs. If any subsystem wants to broadcast
messages to all LAs, the GA is responsible for handling the request. For example, the system may inform the GA to shut down all LAs, or the CM or QM may request the GA to set up virtual connections across multiple switches.
There are several reasons for separating the LA from all other subsystems. One is that the details of a LA are hidden from the other subsystems, which do not need to know the locations of LAs. If they need information about a particular switch, they send a query to the GA, and the GA forwards the request to the specific LA that is in charge of that switch. This separation also provides a clearer interface and promotes scalability, since more LAs can easily be added in the future.
There are four types of interactions between the GA and the other subsystems.
1. The interaction between the GA and the LAs, as discussed above.
2. The interaction between the GA and the NS, for registration and querying of services.
3. The interaction between the GA and the CM, for PVC creation. When a virtual connection is being set up, the CM passes the QoS requirements and switch addresses to the GA. The GA subsequently requests each LA to create a PVC path with the required QoS.
4. The interaction between the GA and the QM. This happens in connection set-up requests with minimum delay as the criterion. The QM makes a connection request to the GA to set up paths for QoS measurement.
The GA also continuously tracks the availability of each LA. Each LA sends keep-alive messages every few seconds. If the GA does not receive a message from a particular LA within a specific period, it sends a query to check whether the LA is still alive. If no response is obtained from the LA, it further pings the host to check whether the node exists on the network. If the LA is found to be out of service, the GA informs the DB to clean up the database entries related to that switch.
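The liveness check above can be sketched as a simple timeout scan. The timeout value and all names here are illustrative assumptions; the real system would follow a stale detection with an RPC query and then a ping before declaring the LA dead.

```python
# Hedged sketch of the GA's liveness check: an LA whose last
# keep-alive is older than a timeout is flagged for further probing.
# KEEPALIVE_TIMEOUT is an assumed value, not taken from the paper.

KEEPALIVE_TIMEOUT = 15.0  # seconds; illustrative

def find_stale_agents(last_seen, now):
    """Return LAs whose keep-alive is overdue and must be probed."""
    return [la for la, t in last_seen.items()
            if now - t > KEEPALIVE_TIMEOUT]

last_seen = {"la-sw1": 100.0, "la-sw2": 92.0}
# At t=110, la-sw1 is 10 s old (fresh); la-sw2 is 18 s old (stale).
assert find_stale_agents(last_seen, now=110.0) == ["la-sw2"]
```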
3.4 Local Agent
This system extends the SNMP functionality of ATM switches by introducing a LA. The LA is responsible for gathering the link-state information of an ATM switch and passing this information to the GA. The interaction of the LA with other subsystems is fairly simple, given that it only communicates with the GA and its ATM switch. Initially, it obtains the GA address from the NS. It then communicates with the GA using the RPC protocol, and queries ATM switches using the SNMP protocol. During the LA start-up phase, it registers with the GA its address and information about the switch it is in charge of.
The LA is implemented differently for each type of ATM switch; the GA, however, does not require any custom design. Before a LA can participate in the network, a network administrator must manually assign it to a switch. If a LA terminates abruptly, the system assumes that its switch has been removed from the network. A network with multiple switches requires one LA per switch. This design allows the possibility of LA code being incorporated into a switch in the future. Any node can be the agent as long as it is able to exchange information with the ATM switch using SNMP. The LA is also responsible for constantly sending keep-alive messages to the GA, using a separate thread, to keep the database up-to-date.
The LA is responsible for periodically querying the link-state information of the ATM switch and updating this information to the DB. The update period for the LA should take into account the slow response of an SNMP query, because this period affects the accuracy of the database. If paths are determined based on outdated information, it is possible to end up using inefficient network paths, wasting network resources. Simulation results of QoS routing with an update trigger period (or clamp-down timer) [3] conclude that, for a large update trigger period, the routing performance is better than with a change-based trigger policy or static routing. The update trigger period used in the simulation is between 200 seconds and 600 seconds.
The current system uses an update trigger period of 300 seconds. Based on our observation of the implementation in our small ATM network, we found that this value is a good tradeoff between flooding the network with SNMP messages and the accuracy of the database. This value is acceptable since, most of the time, the topology database does not undergo major changes in this short period; it also reduces the SNMP messages sent to the switches and avoids network congestion. The exact value of the update trigger period can easily be fine-tuned by modifying the LA.
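The LA's polling behaviour with the 300-second trigger period can be sketched as a simple loop. The SNMP query and the RPC push are abstracted behind callables here; this is an illustration under assumed names, not the actual agent code.

```python
# Sketch of the LA's periodic polling loop with a 300-second update
# trigger period. The SNMP query is represented by a callable; real
# code would use an SNMP library against the switch.

import time

UPDATE_TRIGGER_PERIOD = 300  # seconds, as chosen for this system

def poll_loop(query_link_state, push_to_ga, iterations, sleep=time.sleep):
    for _ in range(iterations):
        state = query_link_state()  # SNMP query; may respond slowly
        push_to_ga(state)           # forwarded to the GA over RPC
        sleep(UPDATE_TRIGGER_PERIOD)

# Dry run with stubs and no real sleeping:
pushed = []
poll_loop(lambda: {"bw": 155}, pushed.append,
          iterations=2, sleep=lambda s: None)
assert pushed == [{"bw": 155}, {"bw": 155}]
```

Injecting the sleep function makes the interval easy to fine-tune and the loop testable without waiting.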
The LA is also required to set up a new virtual connection upon request. The agent can create a virtual connection with specific QoS requirements entered through the CM's GUI. These QoS parameters are defined in a Usage Parameter Control (UPC) contract in the ATM switch. A new virtual connection is tagged with this UPC so that the QoS is guaranteed for the connection.
3.5 Connection Manager
The CM has a GUI front end for entering connection set-up information. The CM is responsible for:
• Gathering the user's input parameters.
• Querying the RM to select the best path.
• Requesting the GA to set up the link accordingly.
The graphical interface for gathering connection set-up information contains both the source and destination IP addresses and the required QoS parameters. These parameters are system-defined values such as minimum bandwidth used, minimum VCI or VPI used, minimum delay, etc. It also contains a user-defined minimum delay value for constraint-based routing. All paths returned by the RM must fulfill this minimum delay requirement.
The CM initially connects to the NS to register itself and obtain the addresses of the other subsystems. The CM passes a user's connection set-up information to the RM, and it either expects the RM to return all possible paths that meet the criteria, or asks the RM to choose the best path. If the user decides to select a path from the list, the CM displays a list box for the user to select his preferred path. After the user selects his preferred path, or after the system returns the best path, the user fills in his QoS requirements for the new virtual connection. He can choose one of the possible traffic types: UBR, CBR, ABR and VBR. For each traffic type, the corresponding QoS values are filled in using the GUI.
The CM uses this information to request the GA to create virtual connections accordingly. When the request reaches the GA, it splits the path and distributes the request to each LA involved in the path. The LA creates a UPC contract for the requested QoS and associates it with a new virtual connection. If the switch does not have enough resources, it informs the respective LA and the connection set-up fails.
3.6 Routing Manager
The RM is the main engine that determines the best route among all possible routes. In particular, there are two routing table computation approaches to discuss: path-optimization routing and constraint-based routing. This subsystem uses path-optimization routing instead of constraint-based routing, as it is a simpler approach. Path-optimization routing chooses a path such that it meets the minimum QoS requirement. Constraint-based routing selects the optimal routes for flows such that the QoS requirements are most likely to be met. The pros and cons of constraint-based routing are presented in Xipeng's paper [4].
There are two approaches to computing a path: on-demand computation, or pre-computation before a path is requested. An on-demand approach has the benefit of always using the most recent information. However, if requests arrive too frequently, this approach may prove costly even if the algorithm is relatively simple. The pre-computation approach is similar to how a best-effort routing table is pre-computed. Nevertheless, since the amount of bandwidth requested is not known in advance, such a routing table needs to pre-compute and store multiple alternate paths to each destination, potentially for all possible values of bandwidth requests [5]. Due to the complexity of this approach, this subsystem uses on-demand path computation for simplicity. This approach is acceptable since a user initiates a connection set-up manually.
The RM contacts the NS for service registration and queries it for the addresses of the other subsystems. It receives queries from the CM for given source and destination addresses and their QoS values. The RM then obtains all possible routes from the DB, and analyzes these routes by further querying the DB for more detailed information on link bandwidth and cell errors. Finally, it returns the best route, or the list of routes, to the CM.
If the path selection has a minimum delay requirement, instead of querying the DB, the RM contacts the QM to get delay information. The QM creates virtual connections dynamically with the help of the GA, and then obtains delay information from the QMAs. If one or more paths satisfy the delay value specified by the user, the RM chooses the best path; otherwise, no new connection is created.
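The RM's final selection step for the minimum-delay criterion can be sketched as follows. The function name and data layout are our own illustration: among the candidate paths whose measured delay satisfies the user's bound, the smallest is chosen; if none qualifies, no connection is created.

```python
# Sketch of minimum-delay path selection, assuming measured delays
# per candidate path and a user-specified delay bound. Names are
# illustrative, not from the actual implementation.

def choose_min_delay_path(path_delays, max_delay):
    """path_delays: {path: measured delay}. Returns best path or None."""
    feasible = {p: d for p, d in path_delays.items() if d <= max_delay}
    if not feasible:
        return None    # requirement cannot be met: no new connection
    return min(feasible, key=feasible.get)

delays = {"A-B-C": 18.2, "A-D-C": 16.9, "A-E-C": 25.0}
assert choose_min_delay_path(delays, max_delay=20.0) == "A-D-C"
assert choose_min_delay_path(delays, max_delay=10.0) is None
```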
3.7 QoS Manager
The QM acts as a coordinator for all QMAs. It is activated when the RM needs to find a path with minimum delay. Apart from collecting the end-to-end delay value, it also collects delay variation, cell errors and throughput. It registers with the NS upon start-up and obtains the addresses of the GA and RM. All QMAs register with the QM, providing their IP addresses and the switches they are in charge of.
The QM receives requests from the RM to get the end-to-end delay value for a particular path. Based on the path information provided by the RM, the QM contacts the GA to set up virtual connections with a fixed VCI for all the switches in the path. The QM then informs the QMAs at both the source and destination switches to start the delay measurement. Upon completion of the measurement, the QM obtains the delay, delay variation and throughput for the end-to-end path, and it stores the data in a file for statistical analysis. Subsequently, it returns the delay value for that path to the RM. The RM uses these values to choose the least-delay path among all possible paths, in order to satisfy the user-specified delay requirement. Finally, the QM tears down all the virtual connections used for the delay measurement.
3.8 QoS Measurement Agent
The QMA should attach directly to one of the ATM switches. This subsystem is responsible for sending probe cells through a delay measurement path and obtaining end-to-end QoS measurements such as delay, delay variation and throughput. The QMA obtains the QM address from the NS during the initialization stage. A pair of QMAs is required to perform delay measurements. The implementation is platform-specific: on Windows NT, the Winsock2 API is used, while in the UNIX environment, the API provided by the switch vendor is used to send and receive native ATM cells. This native ATM portion is the core component of the QMA. There is also a TCL component that acts as an RPC client and communicates with the QM to perform functions such as registering with the QM upon start-up, accepting the VCI to be used in the delay measurement, and sending the QoS results back to the QM. TCL itself cannot be used for the measurement, since it is a scripting language and does not support hardware access to the ATM network interface card. Moreover, TCL is too slow for real-time measurement.
4 Results and Discussions
This section discusses some issues and presents the results of the system. Figure 2 shows a sample of the ATM network topology used in the experiment. When a user clicks on one of the squares that represent the ATM switches, a dialog box shows all the port information for that switch.
The CM GUI in Figure 3 is the graphical interface that collects the user's connection set-up information. The advanced option is only used when the respective QoS criterion is chosen. There is an option to allow the user to choose a route from a list of possible routes determined by the system. In this case, the QoS weight is calculated based on the relative scale of the paths. The network administrator selects a path in the list box. Figure 4 shows the ATM traffic parameters to be used for the new connection set-up after a route is chosen. Note that the ABR field is unused in the experiment, since the switches do not support ABR traffic for new PVC creation.
4.1 Routing Analysis
For routing analysis, the RM uses an average index value to compare each
possible route based on QoS criteria. To illustrate the algorithm, suppose the system
wants to compute the average index value for minimum bandwidth. There are two
possible routes (Figure 5): the first route consists of three switches, while the second
route consists of two switches.
To analyze the first route, the RM obtains the total available bandwidth at the input port of Switch 1 (iAvailBw1), the total available bandwidth at the output port of Switch 1 (oAvailBw1), the input port bandwidth used (iBw1) and the output port bandwidth used (oBw1). The system assigns the bandwidth index for switch 1 of route 1 (BwIndex1) as follows:

BwIndex1 = ((iBw1 / iAvailBw1) x 100% + (oBw1 / oAvailBw1) x 100%) / 2
Similarly, the system obtains BwIndex2 and BwIndex3 for switches 2 and 3 respectively. The total bandwidth index for route 1 (TBwIndex) is defined as follows:

TBwIndex = (BwIndex1 + BwIndex2 + BwIndex3) / 3
The RM performs a similar calculation for the second route. The route with the least TBwIndex is chosen as the best route with minimum bandwidth usage. Note that TBwIndex is only used as a relative measure among all possible routes; it has no meaningful interpretation of the bandwidth if used alone. This approach is also used in route analysis for the minimum VCI, minimum VPI and least link error QoS criteria.
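The index computation above can be transcribed directly. This is a sketch: the function names and the sample port figures are ours, but the arithmetic follows the formulas for BwIndex and TBwIndex.

```python
# Direct transcription of the bandwidth-index formulas. The index is
# a percentage-based relative measure; sample inputs are illustrative.

def bw_index(i_bw, i_avail, o_bw, o_avail):
    """Average utilization (%) of a switch's input and output ports."""
    return (i_bw / i_avail * 100 + o_bw / o_avail * 100) / 2

def total_bw_index(per_switch_indices):
    """TBwIndex: mean of the per-switch indices along a route."""
    return sum(per_switch_indices) / len(per_switch_indices)

# Route 1: three switches with assumed bandwidth figures
idx = [bw_index(20, 100, 40, 100),   # Switch 1 -> 30.0
       bw_index(10, 100, 10, 100),   # Switch 2 -> 10.0
       bw_index(50, 100, 30, 100)]   # Switch 3 -> 40.0
assert total_bw_index(idx) == (30.0 + 10.0 + 40.0) / 3
```

The route with the smallest total index would then be selected, matching the relative comparison described above.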
4.2 Issues on Software-based QoS Measurement
Using a software-based approach to measure end-to-end delay, delay variation and throughput raises some challenging issues. First, there is a clock synchronization problem between the source and destination nodes. In view of this, the system only performs time stamping on the source host, in both the forward and reverse directions on the same packet; hence, the clock synchronization problem is avoided. Since end-to-end round-trip delay is measured, this value remains valid even if the two directions are not symmetrical.
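The round-trip stamping idea can be sketched as follows: both timestamps are taken on the same host, once at send time and once when the probe returns, so the two clocks are never compared. The names and the stubbed clock are our illustration.

```python
# Sketch of round-trip time stamping on the source only. A monotonic
# clock is used so clock adjustments do not corrupt the interval.

import time

def measure_rtt(send_probe, clock=time.monotonic):
    t0 = clock()          # stamp on the source at send time
    send_probe()          # probe traverses the path and returns
    t1 = clock()          # stamp on the same host at receive time
    return t1 - t0        # valid even if the two directions differ

# Stub "network" and a fake clock that advances by 20 ms:
ticks = iter([0.0, 0.020])
rtt = measure_rtt(lambda: None, clock=lambda: next(ticks))
assert abs(rtt - 0.020) < 1e-9
```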
The next issue relates to the accuracy of software-based measurements. Several factors affect the accuracy of the delay measurement: the operating system's scheduling mechanism and process switching latency, drift of the local workstation's hardware clock, and software-induced errors. The system developed here does not take these issues into account, since the aim is to make approximate measurements to enable path selection based on QoS.
Note that the QMAs are only able to obtain the end-to-end delay for each path, i.e. they cannot obtain the delay for each individual link in the path. The reason is that some ATM switches do not have nodes (on which the QMA can run) directly connected to them; this is the case for backbone ATM switches.
4.3 End-to-End QoS Measurement
The network topology used to measure end-to-end delay is shown in Figure 6. The RM uses this average delay to select the shortest-delay path among all the possible paths. The initial set-up requires four PVCs to be created using the CM, two per ATM switch, one for each of the forward and backward directions. The CM is able to create four types of traffic: UBR, CBR, VBR and ABR. However, ABR traffic is not supported by these ATM switches and hence is not used.
ATM probe cells are sent with a packet size of 8192 bytes from Workstation 1 to Workstation 2 continuously, without waiting for a packet to return before sending the next one. Only Workstation 1 performs time stamping for each sent and received packet, and these time stamps are stored not in the packet but in the memory of Workstation 1. However, each packet carries an index into the time stamp array so that Workstation 1 can assign returned time stamps to the appropriate packets.
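The index-into-timestamp-array scheme can be sketched as below. Packets carry only a small index; the send and receive timestamps live in Workstation 1's memory keyed by that index, so out-of-order returns are still matched to the right packet. The data structures are our illustration.

```python
# Sketch of per-packet timestamp bookkeeping on Workstation 1.
# Packets carry only an index; timestamps stay in local memory.

send_ts = {}     # packet index -> send timestamp
recv_ts = {}     # packet index -> receive timestamp

def on_send(index, now):
    send_ts[index] = now

def on_return(index, now):
    recv_ts[index] = now
    return now - send_ts[index]   # per-packet round-trip delay

on_send(0, 1.000)
on_send(1, 1.001)          # packets sent back-to-back, no waiting
# Returns may arrive out of order; the index matches them correctly:
assert abs(on_return(1, 1.021) - 0.020) < 1e-9
assert abs(on_return(0, 1.025) - 0.025) < 1e-9
```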
To allow continuous sending of packets without blocking, Workstation 1 executes two threads: one for sending packets and the other for receiving packets. There should likewise be no blocking in Workstation 2, so it also contains two threads for sending and receiving. When Workstation 2 receives a packet, it puts the packet in a FIFO queue; the sending thread retrieves the next packet from the queue.
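The Workstation 2 echo path described above can be sketched with a thread-safe FIFO between the two threads. This is an illustration under assumed names; `queue.Queue` stands in for the real receive buffer, and the sentinel shutdown is our addition.

```python
# Sketch of the Workstation 2 echo path: the receiving thread
# enqueues incoming packets, the sending thread drains the FIFO,
# and neither side blocks the other.

import queue
import threading

fifo = queue.Queue()
echoed = []

def receiver(packets):
    for p in packets:
        fifo.put(p)               # receive thread: enqueue only
    fifo.put(None)                # sentinel to stop the sender

def sender():
    while True:
        p = fifo.get()            # send thread: drain the queue
        if p is None:
            break
        echoed.append(p)          # stands in for sending the cell back

t = threading.Thread(target=sender)
t.start()
receiver([b"cell-0", b"cell-1", b"cell-2"])
t.join()
assert echoed == [b"cell-0", b"cell-1", b"cell-2"]
```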
One limitation of the measurement is that each workstation has an OC-3 interface card, which is capable of a 155 Mbps transfer rate. The respective ATM ports also support the OC-3 interface. However, the sending thread in Workstation 1 can send continuous packets very quickly, and it can easily exceed the capability of the respective ATM port. After a few trials, we chose a suitable number of packets to avoid packets being dropped in the switches; this system uses 15 packets for the measurement. The continuous send throughputs measured in software for the sending threads of Workstation 1 and Workstation 2 are shown in Table 1.
Sending thread for    Send throughput (Mbps)
Workstation 1         86.367
Workstation 2         159.248

Table 1 Continuous Send Throughput for 15 Packets
From the table, we notice that the throughput is very close to the OC-3 transfer limit. If we use more than 15 packets in the measurement, the send throughput for Workstation 2 will far exceed the OC-3 transfer limit and will therefore result in packet loss. The PVC for the return measurement path will drop packets in ATM switch 2 (Figure 6) when its queue is full.
The throughput for Workstation 2 is higher than for Workstation 1. This is because packets on Workstation 2 are readily available, since they are more likely to be found in the queue. Workstation 1 is required to store a packet index in the buffer before sending a packet; hence it has a lower throughput. Table 2 lists the QoS reservations for the PVCs associated with each traffic type used in the delay measurement.
Traffic Type   UPC Index   CDVT (usec)   PCR (kbps)   SCR (kbps)   MBS (kb)
UBR            1           N/A           N/A          N/A          N/A
CBR            2           5000          100,000      N/A          N/A
VBR            3           5000          100,000      50,000       50,000

Table 2 QoS Parameters Used for Delay Measurement
We allocate enough bandwidth for the PVC so that packets are not dropped due to insufficient bandwidth. This experiment demonstrates the system's QoS monitoring feature, and it differs from the normal delay measurement performed during connection set-up. It is intended to measure the pattern of QoS performance with 15 packets under different types of traffic. For the normal delay measurement, the system should be non-intrusive, i.e. user services should not be interrupted and active connections should not be invaded with test traffic. Therefore, the QoS reservation for the PVC can be much smaller than in this experiment, as the system will only send a minimum number of packets to obtain the QoS performance data.
The next few figures compare the different types of traffic in terms of end-to-end delay, delay variation and throughput. Each type of traffic is sent separately at a different time; if they were all sent at once, many packets would be dropped by the ATM switches. Figure 7 shows that the delay for UBR traffic is the worst compared to CBR and VBR traffic. Comparing CBR and VBR traffic, we notice that the delay for VBR is slightly higher than for CBR. This is an expected result, since CBR has a constant data rate with a fixed timing relationship between data samples. Note that the initial delay values for UBR traffic are much higher than for CBR and VBR traffic. This may be due to the queuing delay introduced at the ATM ports. As more packets are sent, the queue becomes shorter and hence the delay decreases. The switches do not allow the UBR traffic to consume all the bandwidth, since it does not reserve any bandwidth.
Figure 8 gives another comparison, of packet delay variation. It measures the deviation of the delay values from their average as the packets propagate through the network. CBR has a fairly constant delay variation, and UBR has the largest delay variation of the three. The VBR traffic approaches its average delay value near the end of the transfer.
Figure 9 compares the throughput of the three traffic types. CBR traffic has the highest throughput since it has a constant transfer rate. VBR traffic has higher throughput than UBR until the queues in the ATM ports become less congested during the last few UBR packets, at which point UBR overtakes VBR. The average delay, delay variation and throughput are listed in Table 3.
Traffic Type   Average Delay (msec)   Average Delay Deviation (msec)   Average Throughput (Mbps)
UBR            20.012                 2.063                            3.312
CBR            16.883                 0.920                            3.893
VBR            18.214                 1.082                            3.610
Table 3 Average QoS Measurement for 15 Packets with Packet Size 8192
From the table, CBR traffic has the lowest average delay and delay variation, and the highest average throughput of the three. UBR traffic has the worst average delay and delay variation and the lowest average throughput. Although not shown in the measurements, UBR traffic can probably carry more packets than the other two without dropping any, since it shares bandwidth with all virtual connections on the ATM ports, whereas CBR and VBR traffic will drop packets when the QoS reservation is violated.
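The statistics reported in Table 3 can be reproduced from the raw per-packet samples. The helper below is an illustrative sketch (the sample values in the usage note are hypothetical, not the measured data); delay deviation is taken here as the mean absolute deviation from the average delay, matching the "distortion from the average" notion used for Figure 8.

```python
def qos_summary(delays_ms, bytes_per_packet, elapsed_s):
    """Summarize a transfer: return (average delay in msec,
    average delay deviation in msec, throughput in Mbps)."""
    avg = sum(delays_ms) / len(delays_ms)
    # Mean absolute deviation of each packet's delay from the average.
    dev = sum(abs(d - avg) for d in delays_ms) / len(delays_ms)
    # Throughput: total bits transferred over the elapsed wall-clock time.
    total_bits = len(delays_ms) * bytes_per_packet * 8
    mbps = total_bits / elapsed_s / 1e6
    return avg, dev, mbps
```

For example, three hypothetical 8192-byte packets with delays of 20, 18 and 22 msec delivered in 0.1 s give an average delay of 20.0 msec and a throughput just under 2 Mbps.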
4.4 Connection Setup Time Measurement
Based on the simple network configuration shown in Figure 10, we compare the time taken to set up a virtual connection from Host 1 to Host 2. PNNI uses an SVC to create the connection automatically based on the requested QoS, whereas our system uses SNMP to create PVCs in the switches and RPC for communication among the subsystems. There are four test cases for measuring the connection setup time from the source host (Host 1) to the destination host (Host 2).
• Test case 1: Measure the SVC setup time by time-stamping the execution of the native ATM API. The PNNI protocol is used when an SVC is created across two ATM switches. There are two paths from Host 1 to Host 2: one from ATM1 to ATM3 directly, and the other from ATM1 through ATM2 to ATM3. Since PNNI does not offer an option for path selection, we temporarily disable the link from ATM1 to ATM2, so that only the first path is available to PNNI.
• Test case 2: Measure the PVC setup time using the shortest-path criterion in the CM, i.e. from ATM1 to ATM3 directly. The system obtains the PVC setup information from the CM and RM; the subsystems run on Hosts 3 and 4 in the Ethernet network. The PVC creation for each switch is done synchronously: the system sends SNMP messages to ATM1 to create a PVC and blocks until the reply returns before sending SNMP messages to ATM3 to create the final PVC.
• Test case 3: Similar to test case 2, except that the system sends the SNMP messages to the switches in parallel by creating a separate thread for each message. It then waits for all the SNMP replies to return and verifies that the connection is successful.
• Test case 4: Similar to test case 3, except that the system does not wait for the SNMP replies. In this case, the CM does not know whether the PVC creation succeeded. This test case measures the processing time of the system and the RPC communication among the subsystems.
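The three PVC-creation strategies of test cases 2 to 4 can be sketched as follows. Here `snmp_create_pvc` is a placeholder for the real blocking SNMP SET round trip, and the switch names are assumptions taken from the test topology; this is not the system's actual implementation.

```python
import threading

def snmp_create_pvc(switch):
    # Placeholder for the blocking SNMP SET exchange that installs
    # one PVC cross-connect in the given switch.
    return True

def create_sequential(switches):
    # Test case 2: configure each switch in turn, blocking on every reply.
    return [snmp_create_pvc(s) for s in switches]

def create_parallel(switches):
    # Test case 3: one thread per switch, then wait for all replies
    # before declaring the connection successful.
    results = {}
    def worker(s):
        results[s] = snmp_create_pvc(s)
    threads = [threading.Thread(target=worker, args=(s,)) for s in switches]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [results[s] for s in switches]

def create_fire_and_forget(switches):
    # Test case 4: send the requests and return immediately, so the
    # success of the PVC creation is never confirmed.
    for s in switches:
        threading.Thread(target=snmp_create_pvc, args=(s,), daemon=True).start()
```

The parallel variant gains over the sequential one only as the number of switches on the path grows, which matches the observation below that the improvement is more apparent when PVCs are created in more switches.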
Table 4 Average Connection Setup Time from Host 1 to Host 2

Test Case   Connection Time
1           0.079 sec
2           5.076 sec
3           4.577 sec
4           0.613 sec
From the results in Table 4, we conclude that PNNI has the shortest connection setup time. This is because PNNI signaling is supported natively inside the ATM switches, where the virtual connection is created through PNNI signaling over a dedicated PVC.
The experiment also examines how the connection time can be reduced across test cases 2 to 4. Sending the SNMP messages in parallel (test case 3 versus test case 2) yields only a minor improvement, although the gain would be more apparent if PVCs were created in more switches. Comparing test cases 3 and 4 makes it clear that most of the setup time is spent waiting for the SNMP replies; the RPC communication among the subsystems, the database queries and the route analysis consume comparatively little time.
5 Conclusions and Recommendations
This paper presents a system that provides a basic framework to support QoS routing and connection set-up in an ATM network. Currently, most ATM switches may not fully implement the PNNI signaling specification (perhaps only up to the first level of aggregation). Compared with PNNI, this system is simple: it does not require the user to have in-depth knowledge of the internal ATM network topology or of ATM signaling. In addition, it supports QoS-based connection set-up and provides basic QoS routing. For security reasons, the majority of public ATM services do not support Switched Virtual Connections (SVCs) across the public UNI, so setting up ATM connections using PVCs is still common practice. This system is useful for creating new PVCs in these situations: it sets up PVCs faster than doing so manually, and it bridges the gap between using a complex signaling protocol and tedious manual PVC creation. The virtual connections created by the system can be used by normal IP applications running on top of ATM, with the added advantage of QoS reservations on the link.
Provided the existing ATM network has IP connectivity, this tool can be run on any node to manage the network. The node does not even require an ATM network adapter card, with the exception of the software-based delay measurement component: the QMA has to run on a node that is connected to the rest of the ATM network through an ATM adapter card.
The current system does not scale to a large ATM network without modifying the existing framework, although it could be extended to hold hierarchical information as found in PNNI. Since most ATM networks today are small to medium-sized, the system is well suited to them as it stands. PNNI scales to large ATM networks by aggregating information to summarize reachability between levels in its hierarchy. Our system, in contrast, keeps more accurate information about individual switches and does not assume, as PNNI does, that the topology of a child peer group and its traffic flows are symmetric and compact.
Lastly, the system is designed to be platform-independent, since Tcl/Tk is used as its base implementation. It currently runs on UNIX (Solaris), Linux and Windows. This software-based approach enables a personal computer to function as a QoS monitoring station in an ATM network, providing a low-cost, off-the-shelf alternative to expensive broadband testing equipment.
In the future, the system can be extended with better QoS routing algorithms. The algorithm used in this project is very simple: it compares paths using an average index computed over several QoS criteria. It could be redesigned to use hierarchical QoS routing, as in PNNI, to make it more scalable. Another enhancement would be to construct a virtual path visually by clicking on the network topology map, a feature already available in some commercial network management packages. This would make the network administrator's task easier, since a virtual connection could be set up through simple drag-and-drop actions.
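The average-index comparison mentioned above can be sketched as follows. The metric names and the equal weighting are illustrative assumptions, not the project's exact criteria; a lower index is taken to mean a more attractive path.

```python
def average_index(path_links, metrics=("delay", "jitter", "loss")):
    """Score a path by averaging its QoS metrics over all links;
    the metric names here are illustrative assumptions."""
    total = 0.0
    for link in path_links:
        total += sum(link[m] for m in metrics) / len(metrics)
    return total / len(path_links)

def best_path(paths):
    # Pick the candidate path with the lowest average index.
    return min(paths, key=lambda p: average_index(p["links"]))
```

For instance, a direct one-hop path with good metrics wins over a two-hop detour whose extra link degrades the average.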
Whenever an update is detected in a switch, the update is currently sent to the DB immediately; there is no provision to hold an update back until it exceeds a quantified threshold. Doing so would reduce the number of database updates, at the cost of some inaccuracy in the database and an increase in the number of equal-cost paths. When equal-cost paths arise, the system should be allowed to choose randomly among them, so as to reduce the chance of overloading a particular link.
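A threshold-gated update policy of this kind, together with random selection among equal-cost paths, could be sketched as below. The 10% relative threshold and the link-state representation are hypothetical choices for illustration.

```python
import random

class LinkStateCache:
    """Forward a link-state update to the DB only when the value has
    drifted beyond a relative threshold since the last stored value."""
    def __init__(self, threshold=0.10, push=print):
        self.threshold = threshold
        self.push = push           # callable that writes the update to the DB
        self.last = {}

    def update(self, link, value):
        prev = self.last.get(link)
        if prev is None or abs(value - prev) > self.threshold * abs(prev):
            self.last[link] = value
            self.push(link, value)
            return True
        return False               # suppressed: change is below the threshold

def pick_equal_cost(paths, cost):
    # Choose randomly among the minimum-cost paths to spread load.
    best = min(cost(p) for p in paths)
    return random.choice([p for p in paths if cost(p) == best])
```

With a 10% threshold, a change from 100 to 105 is suppressed while a later change to 120 (20% away from the last stored value of 100) is pushed through.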
The LA can be customized for use with any commercially available ATM switch; SNMP, GSMP or serial communication can be used to retrieve information from the switches. As mentioned earlier, the LA could also be built into an ATM switch, so that no extra node is required to host it.
Another improvement is to make the system more scalable. Currently the system uses one subsystem each for the DB, GA, CM, RM and QM. This arrangement is acceptable in a small to medium-sized ATM network; a large network would require multiple instances of each subsystem, with corresponding subsystems in separate ATM networks communicating through a predefined protocol. The system could follow the PNNI approach of layering and aggregating information to scale to a large network. This will be the subject of further research.
Finally, the start-up phase of the system can be redesigned to support automatic deployment. At present, network operators execute the subsystems manually in a specific order. The system could instead use a CORBA object to provide a name-service lookup and automatically start any subsystems that are not yet running on the network. This CORBA object would know the order in which to execute the subsystems, and would be able to start or stop them on demand so that the system is initialized in the correct order. This would be a very useful feature for automatically deploying the system in an ATM network.
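The ordered start-up itself can be sketched independently of CORBA: a supervisor starts each subsystem only after the subsystems it depends on are running. The dependency graph below is an illustrative assumption about how the subsystems relate, not the system's documented start order.

```python
# Hypothetical start-up dependencies: each subsystem may start only
# after the subsystems it relies on are already running.
DEPS = {
    "DB": [],
    "GA": ["DB"],
    "LA": ["DB"],
    "RM": ["DB", "GA"],
    "CM": ["RM"],
    "QM": ["CM"],
}

def start_order(deps):
    """Return a start order that respects the dependency graph
    (a simple depth-first topological sort)."""
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for d in deps[name]:
            visit(d)       # start dependencies first
        order.append(name)
    for name in deps:
        visit(name)
    return order
```

A CORBA-based supervisor would walk such an order, consulting the name service to skip subsystems that are already registered and starting the rest.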
6 Figures
Figure 1 Overall System Design
Figure 2 Network Topology Map
Figure 3 Connection Management GUI
Figure 4 ATM Traffic Types
Figure 5 Route Analysis
Figure 6 Network Diagram for Measuring End-to-end Delay
Figure 7 Comparison of End-to-end Delay for UBR, CBR and VBR
Figure 8 Comparison of End-to-end Delay Variation for UBR, CBR and VBR
Figure 9 Comparison of End-to-end Throughput for UBR, CBR and VBR
Figure 10 Network Diagram for Measuring Setup Time of a Virtual Connection
7 References
[1] William Stallings. SNMP, SNMPv2, and RMON: Practical Network Management;
Addison-Wesley, July 1996.
[2] The ATM Forum Technical Committee. Private Network-Network Interface
Specification Version 1.0. af-pnni-0055.000, March 1996.
[3] G. Apostolopoulos, R. Guerin, S. Kamat and S. K. Tripathi. QoS Routing: A
Performance Perspective. Proceedings of SIGCOMM, Vancouver, Canada,
September 1998.
[4] Xipeng Xiao, Lionel M. Ni. Internet QoS: A Big Picture. IEEE Network,
March/April 1999.
[5] G. Apostolopoulos, R. Guerin, S. Kamat, A. Orda, S. K. Tripathi. Intradomain QoS Routing in IP Networks: A Feasibility and Cost/Benefit Analysis. IEEE Network, September/October 1999, pp. 42-54.