
Department of Telecommunications
Telecom Engineering Center
Khurshid Lal Bhavan, Janpath, New Delhi - 110011

STUDY PAPER ON IMPLEMENTING QUALITY OF SERVICE IN IP NETWORKS

R. Saji Kumar, Director, and J.M. Suri, DDG, I Division, Telecom Engineering Center, Department of Telecommunications, New Delhi.

1. INTRODUCTION

Service providers have implemented nationwide MPLS transport networks with many Core Routers and Edge Routers covering most of the cities. In many such networks, traffic of all services is treated on a Best Effort basis. With the introduction of VoIP and multi-play services, IP traffic is increasing, necessitating Quality of Service (QoS) measures to ensure optimal usage of the existing network resources and better customer satisfaction. This document describes how QoS is to be implemented in the service provider's network and the various QoS techniques that can be applied at different nodes in the network to optimize and manage network bandwidth, delay, jitter and packet loss for various classes of service.

It also describes the current IP transport architecture deployed by different service providers, various network topologies and the proposed QoS implementation in the network.

2. HIERARCHICAL ARCHITECTURE OF IP NETWORKS

The IP/MPLS network is a multi-layer, centrally managed IP backbone network designed to provide reliable routes to all possible destinations. It primarily consists of MPLS-enabled Provider [P] and Provider Edge [PE] routers interconnected in such a way as to ensure no single point of failure. It facilitates the convergence of voice, data and video networks into a single unified packet-based multi-service network capable of providing all current and future services. The network is envisaged to support QoS with four different classes of traffic, along with MPLS Traffic Engineering, Fast Reroute and multicasting, and it supports multiple access technologies. The network architecture is a collection of logical and physical functions distributed over four levels of hierarchy: the Application layer, the IP/MPLS routing layer, the Aggregation layer and the Access layer.


2.1 Application Layer

The application layer contains application servers which provide the service logic for delivery of various services such as data, video, voice and multimedia content to end users. Typical applications are VoIP, IPTV/VoD, audio/video content, gaming, e-commerce, tele-education, tele-medicine, etc.

2.2 IP/MPLS Routing Layer

This layer consists of high capacity, carrier class Core and Edge routers providing a unified IP/MPLS backbone with high data forwarding/routing capability to support multiple services with multiple QoS levels, interoperating with existing technologies and protocols. It supports scalability, resilience, ease of operation and reduced operational cost. The Edge Router network provides information exchange between the core and aggregation routers.

2.3 Aggregation Layer

The aggregation layer, also called the metropolitan network, provides traffic aggregation from the access network and connection to the core IP/MPLS network. Ethernet technology, which for decades was primarily used in enterprise networks in a LAN environment, has made significant deployment inroads in carrier grade networks in the WAN environment, primarily due to its cost effectiveness and simplicity. The layer is further divided into three levels, i.e., Tier-I (Metro aggregation), Tier-II (Edge aggregation) and Tier-III (Cell site aggregation). Tier-I aggregates IP traffic from multiple Tier-II nodes over the Tier-II ring configuration. Tier-II aggregates the IP traffic from multiple access nodes which are connected directly, or from Tier-III nodes over Tier-III rings. Tier-III nodes aggregate the IP traffic from multiple access nodes which are connected directly. However, the number of tiers of aggregation varies from place to place and among service providers.

2.4 Access Layer

The access network provides the broadband connection in the last mile. Broadband access technologies provide high speed, always-on Internet connections for homes and businesses and enable data, voice, video and other multimedia applications for home and business use. The choice of which access technology to deploy depends mainly on its commercial viability and on which access technology can best serve current and future consumer demands. The next generation network is expected to use various access technologies: xDSL over copper using IP DSLAM/LMG, GPON/FTTH (Fiber to the Home) for fiber access, and wireless access over Wi-Fi, Wi-MAX and 3G/4G networks.


Figure 1: Layered Architecture with various Services

3. IP TRANSPORT INFRASTRUCTURE OF VARIOUS SERVICE PROVIDERS

Various service providers have deployed a nationwide MPLS transport network with many Core Routers and Edge Routers catering to multiple cities. It is a multi-gigabit, Multi-Protocol Label Switching (MPLS) based IP network in the form of a 2-layered, centrally managed IP backbone designed to provide a convergent network supporting data, voice and video applications.

3.1 Typical IP/MPLS Core Network Architecture

The Core Network is an integrated IP and MPLS network. It forms the high speed backbone, comprising Core routers that run a modular operating system with built-in redundancies, support both TCP/IP and MPLS protocols, and whose function is primarily limited to high-speed packet forwarding. These nodes are connected in a mesh configuration over multiple STM-16/10G/40G/100G interfaces over the DWDM network.


Figure 2: NIB-II IP MPLS Core Network

Where large telecom service providers deploy pan-India IP-MPLS networks, these routers can be part of multiple OSPF areas / IS-IS systems, with one area/system forming part of the National Core network and another forming part of the Area Core network. The Area Core network aggregates the traffic originating from the deployed edge routers.

3.2 Typical Edge Router Architecture

The Edge routers are connected to the Core network either locally through 1G/10G links or remotely through dual-homed 1G/STM-1/STM-16/10G links. The Edge network architecture provides dual-homing links from each Edge router to the nearest Core routers. The Edge routers so deployed act as a multi-service edge and aggregate traffic coming from PSTN (through Media Gateways), GSM (through Media Gateways and GGSN), CDMA (through Media Gateways and PDSN), Broadband (through BRAS/BNG), Wi-MAX, etc. The logical relation between the various network components such as the Core Network and Edge Routers is depicted in the figure below:

Figure 3: Edge Router Connectivity


A sample connectivity of one LSA, say Haryana, is given below.

[The figure shows the Core Router and Edge Router connectivity in the Haryana circle covering various LDCAs (Ambala, Jind, Gurgaon, Hissar, Karnal, Faridabad, Rewari, Sonipat, Rohtak, Noida), with Core-to-Core and Core-to-Edge STM-16 links towards the Chandigarh and Jalandhar core routers.]

Figure 4: Typical Edge Router Connectivity of Haryana LSA

3.3 MPLS Based Aggregation Network Architecture

A typical aggregation layer architecture, which aggregates the traffic from various access nodes, is given in the figure below. A three-tiered aggregation architecture is shown here; however, the actual number of layers of aggregation required varies across service providers and from place to place depending on the traffic requirement.

Figure 5: Three Tier Aggregation Network Architecture with access and MPLS layer connectivity


3.3.1 RPR Aggregation Network deployed by Service Providers

Before MPLS-TP and similar carrier Ethernet technologies came into existence, many service providers had deployed RPR based aggregation networks. A typical such network diagram is given below.

[The figure shows DSLAMs connected to RPR switches on 10GE RPR rings, handing off through BNG and L3 PE routers to the MPLS core and the NOC / DR-NOC.]

Figure 6: RPR Aggregation Network

3.4 LAN Switch Based Aggregation Architecture

Service Providers have also deployed a LAN Switch Based Aggregation Network for aggregating the traffic from small cities or connecting to their data centers. A Typical LAN Switch based aggregation network with protection through traditional SDH Ring Network is given below.

[The figure shows DSLAMs connected through Layer-2 switches and FE-STM converters over an SDH ring to the RPR Tier-I network, and onward via BNG / L3 PE routers to the MPLS core.]

Figure 7: LAN Switch based Aggregation Network


4. QOS IMPLEMENTATION IN THE NETWORK

Quality of service is a complex and contentious issue that needs to be discussed in the context of interconnection. QoS is measured using the Mean Opinion Score (MOS), which is a measure of user perception of speech quality. The MOS figures used in the legacy PSTN are as follows:

5 - Perfect. Like face-to-face conversation.
4 - Fair. Imperfections can be perceived, but sound is clear.
3 - Annoying.
2 - Very annoying. Nearly impossible to communicate.
1 - Impossible to communicate.

As regards the transport networks, QoS is based on the following parameters:

1. End-to-end bandwidth or throughput
2. Delay or latency
3. Error rate criteria
4. Protection switching time criteria for a hypothetical reference digital path of 2500 km

The legacy PSTN, using TDM transport techniques, provides an end-to-end fixed bandwidth channel, say 64 kbps for voice, or Nx64 kbps or Nx2 Mbps, etc. The delays are due to transmission and equipment delays. Moreover, such networks have been designed to provide protection switch-over in less than 50 ms. Thus the legacy PSTN is designed to provide a guaranteed level of QoS, in contrast with the Internet which provides "best effort" QoS. IP based networks can achieve less than 50 ms protection switch-over time using techniques such as Ethernet Ring Protection Switching [ERPS] or MPLS Fast Reroute [FRR]. Moreover, IP networks avoid delay due to multipath transmission using traffic engineering techniques such as MPLS-TE. In addition, IP based networks (e.g. MPLS) have a strong and clear focus on end-to-end QoS models, including the use of prioritisation, resource reservation and admission control techniques to ensure deterministic quality for a multitude of services.

4.1 Introduction to QoS

QoS is a traffic-management strategy that allows the operator to allocate network resources to both mission-critical and normal data, based on the type of network traffic and the priority assigned to that traffic. In short, QoS ensures unimpeded priority traffic and provides the capability of rate-limiting (policing) default traffic. For voice and video to traverse IP networks in a secure, reliable and toll-quality manner, QoS must be enabled at all points of the network. When QoS is implemented, it allows the operator to:

- Simplify network operations by collapsing all data, voice and video network traffic onto a single backbone using similar technologies.

- Enable new network applications, such as integrated call center applications and video-based training, that can help differentiate enterprises in their respective market spaces and increase productivity.

- Control resource use by controlling which traffic receives which resources. For example, you can ensure that the most important, time-critical traffic receives the network resources (available bandwidth and minimum delay) it needs, and that other applications using the link get their fair share of service without interfering with mission-critical traffic.

Packet loss, latency and jitter are parameters describing the network performance and hence quality characteristics of IP-traffic. They are particularly important for bidirectional real time services such as voice or videoconference. Special attention should be drawn to guaranteeing QoS requirements across interconnected network borders.

4.2 Requirement of QoS in the Network

QoS represents the set of techniques necessary to manage network bandwidth, delay, jitter, and packet loss. From a business perspective, it is essential to assure that the critical applications are guaranteed the network resources they need, despite varying network traffic load. To provide QoS, it is required to identify traffic sources and types. There is a need for appropriate handling of real time and non-real time traffic such as,

- Voice (delay sensitive)
- Video (bandwidth intensive)
- Data (loss sensitive): HTTP, FTP, SMTP
- Bursty and constant-rate traffic
- Multi-service traffic: IP, MPLS
- Single or multiple flows of the same type

The parameters which influence traffic quality are latency, jitter and packet loss.

Latency: Latency is a measure of the time delay experienced in a system. In reliable two-way communication systems, latency limits the maximum rate at which information can be transmitted, as there is often a limit on the amount of information that is "in flight" at any one moment. In the field of human-machine interaction, perceptible latency has a strong effect on user satisfaction and usability. Network latency in a packet-switched network is measured either one-way (the time from the source sending a packet to the destination receiving it) or round-trip (the one-way latency from source to destination plus the one-way latency from the destination back to the source). Round-trip latency is more often quoted, because it can be measured from a single point. The key causes of latency are propagation delay, serialization, data protocols, routing and switching, and queuing and buffering.


Jitter: Jitter is defined as the variation in the delay of received packets. At the sending side, packets are sent in a continuous stream with the packets spaced evenly apart. Due to network congestion, improper queuing or configuration errors, this steady stream can become uneven, and the delay between packets can vary instead of remaining constant. Jitter is generally caused by congestion in the IP network; the congestion can occur either at the router interfaces or in a provider or carrier network. When a router receives a Real-time Transport Protocol (RTP) audio stream for Voice over IP (VoIP), it must compensate for the jitter that is encountered using a jitter buffer. If the jitter is so large that packets are received out of the range of this buffer, the out-of-range packets are discarded and dropouts are heard in the audio.

Packet Loss: Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Buffer overflow is the most common cause of packet loss, as shown by both experience and anecdotal evidence. Buffer overflow can occur in network switches and routers, and in host operating systems.

4.3 Managing the Resources

Managing these finite resources requires the following mechanisms.

Rate Control

The primary goal of QoS in a security appliance is to provide rate limiting on selected network traffic, for both individual flows and VPN tunnel flows, to ensure that all traffic gets its fair share of limited bandwidth. Every user's traffic stream can be subjected to maximum bandwidth limiting, that is, strict policing, which rate-limits the individual user's default traffic to some maximum rate. This prevents any one user's traffic streams from overwhelming the others. Policing is a way to ensure that no traffic exceeds the maximum rate (in bits per second) that you configure, so that no single traffic flow can take over the entire resource.
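As an illustration of the rate limiting described above, the following is a minimal sketch of a single-rate token bucket policer in Python. It is not taken from any particular vendor implementation; the class name, rate and burst values are assumptions chosen only for illustration.

import time

class TokenBucketPolicer:
    """Minimal single-rate policer: packets exceeding the configured
    rate/burst are dropped rather than delayed."""
    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate_bytes = rate_bps / 8.0        # refill rate in bytes/second
        self.burst = burst_bytes                # bucket depth
        self.tokens = float(burst_bytes)        # bucket starts full
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at the bucket depth.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate_bytes)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                         # conforming: forward
        return False                            # exceeding: police (drop)

# Example: police one user's default traffic to 2 Mbps with an 8 KB burst.
policer = TokenBucketPolicer(rate_bps=2_000_000, burst_bytes=8192)
for size in (1500, 1500, 9000):
    print(size, "forwarded" if policer.allow(size) else "dropped")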

Queuing

A queue is used to store traffic until it can be processed or serialized. Both switch and router interfaces have ingress (inbound) queues and egress (outbound) queues. An ingress queue stores packets until the switch or router CPU can forward the data to the appropriate interface. An egress queue stores packets until the switch or router can serialize the data onto the physical wire. Switch and router queues are susceptible to congestion. Congestion occurs when the rate of ingress traffic is greater than can be successfully processed and serialized on an egress interface. By default, if an interface's queue buffer fills to capacity, new packets are dropped. This condition is referred to as tail drop, and operates on a first-come, first-served basis. If a standard queue fills to capacity, any new packets are indiscriminately dropped, regardless of the packet's classification or marking. QoS provides switches and routers with a mechanism to queue and service higher priority traffic before lower priority traffic.
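The tail-drop behaviour described above can be sketched in a few lines; this is an illustrative model only, and the 64-packet queue depth is an arbitrary assumption.

from collections import deque

class TailDropQueue:
    """FIFO egress queue: once the buffer is full, new arrivals are dropped
    regardless of their classification or marking (tail drop)."""
    def __init__(self, max_packets: int = 64):
        self.buf = deque()
        self.max_packets = max_packets
        self.dropped = 0

    def enqueue(self, packet) -> bool:
        if len(self.buf) >= self.max_packets:
            self.dropped += 1                   # indiscriminate drop at the tail
            return False
        self.buf.append(packet)
        return True

    def dequeue(self):
        # Called when the interface can serialize the next packet on the wire.
        return self.buf.popleft() if self.buf else None

QoS-aware queuing replaces this single FIFO with multiple queues, a scheduler and selective dropping, as described below and in section 5.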


Scheduling

Scheduling is the operation of selecting which of the packets stored in a buffer is transmitted next over a specific link. The choice must be made within a very short period of time, comparable to the packet transmission time, so the algorithms must be simple enough to allow a hardware implementation.

Admission Control

To ensure that voice and video traffic does not use all the bandwidth on a link and cause other important data, such as business applications, to experience dropped packets, organizations can use call admission control. Call admission control limits the number of calls allowed through a particular link between sites. Preserving call quality is important: when calls traverse WAN links, oversubscribing the link can cause call quality to degrade, and call admission control prevents calls from filling up the link. When routing a call, the admission control mechanism determines whether the call should be allowed or whether the link cannot handle it. This provides a consistent call experience to users.
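A highly simplified sketch of link-based call admission control is given below; the per-call bandwidth of 80 kbps and the link budget are assumed values used only for illustration.

class CallAdmissionControl:
    """Admit a new call only if the voice bandwidth budget of the link
    can accommodate it; otherwise reject (or reroute) the call."""
    def __init__(self, link_voice_budget_kbps: int, per_call_kbps: int = 80):
        self.budget = link_voice_budget_kbps
        self.per_call = per_call_kbps
        self.active_calls = 0

    def request_call(self) -> bool:
        if (self.active_calls + 1) * self.per_call <= self.budget:
            self.active_calls += 1
            return True      # admitted: quality of existing calls is preserved
        return False         # link would be oversubscribed: call rejected

    def release_call(self):
        self.active_calls = max(0, self.active_calls - 1)

# Example: a WAN link with 400 kbps reserved for voice admits five calls;
# the sixth request is rejected rather than degrading all calls.
cac = CallAdmissionControl(link_voice_budget_kbps=400)
print([cac.request_call() for _ in range(6)])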

5. DIFFERENTIATED SERVICE MODEL OF QOS

There are two principal approaches to QoS in modern packet-switched IP networks: a parameterized system based on an exchange of application requirements with the network, and a prioritized system where each packet identifies a desired service level to the network. Integrated Services ("IntServ") implements the parameterized approach. In this model, applications use the Resource Reservation Protocol (RSVP) to request and reserve resources through a network.

Differentiated Services ("DiffServ") implements the prioritized model. DiffServ marks packets according to the type of service they desire. In response to these markings, routers and switches use various queueing strategies to tailor performance to expectations. Differentiated Services Code Point (DSCP) markings use the first 6 bits of the ToS field of the IPv4 packet header.
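The placement of the DSCP within the ToS byte can be shown with a few lines of bit manipulation; this is an illustrative sketch, and the EF value used in the example anticipates the classification given later in Table 2.

def dscp_from_tos(tos_byte: int) -> int:
    """The DSCP occupies the upper 6 bits of the IPv4 ToS byte."""
    return (tos_byte >> 2) & 0x3F

def tos_from_dscp(dscp: int, ecn: int = 0) -> int:
    """Rebuild the ToS byte from a 6-bit DSCP and the 2-bit ECN field."""
    return ((dscp & 0x3F) << 2) | (ecn & 0x03)

assert dscp_from_tos(tos_from_dscp(46)) == 46   # EF (DSCP 46) round-trips
print(hex(tos_from_dscp(46)))                   # 0xb8, the usual EF ToS byte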

The Differentiated Services model is based on the DiffServ architecture of RFC 2475. It scales well to large numbers of flows through aggregation, and introduces the notions of a Traffic Conditioning Agreement (TCA) and Per-Hop Behavior (PHB). The model is shown in the figure below.

Figure 8: Differentiated Service Model of QoS


The Ingress and Egress Nodes perform both TCA & PHB and the interior Nodes perform the PHB functions. In case of Ingress and Egress Nodes, TCA functions are performed for the Inbound Traffic/Interface and PHB functions are performed for the Outbound Traffic/Interface. DiffServ QoS Techniques at various interfaces are given below.

a. TCA functions performed for the Inbound Traffic/Interfaces of Ingress/Egress Nodes:
   i. Traffic Classification & Prioritization
   ii. Traffic Marking
   iii. Policing
   iv. Shaping

b. PHB functions performed for the Outbound Traffic/Interfaces of Ingress/Egress Nodes and Interior Nodes:
   v. Queuing
   vi. Scheduling
   vii. Dropping / Congestion Management

The following sections describe each of these functions in detail.

5.1 Class of Service Tags used in various Headers

In the Differentiated Services model, the class of service is defined in various headers, either in the Ethernet frame, the IP header or the MPLS label. The class of service structure is given below.

5.1.1 Ethernet Frame

A 3-bit priority field is used to define the priority with which the packet is treated in the Layer-2 domain.


5.1.2 IPv4 Header

The IPv4 header defines an 8-bit TOS [Type of Service] field for specifying the Quality of Service.

There are three methods of interpreting the Type of Service Bits of the IPv4 Header. They are described below.

a) Type of Service Format using IP Precedence

PRECEDENCE = NETCONTROL, INTERNETCONTROL, CRITICAL, FLASHOVERRIDE, FLASH, IMMEDIATE, PRIORITY, or ROUTINE

LOWDELAY = Y or N
THROUGHPUT = Y or N
RELIABILITY = Y or N

b) DSCP format

c) PHB Format for the TOS Byte

Per-Hop-Behavior (PHB) is the externally observable forwarding behavior applied at a DS-compliant node to a DS behavior aggregate.

3-bit PHB Class | 3-bit PHB Class Identifier | 2 bits unused

Expedited Forwarding (EF): low delay/jitter/loss
Assured Forwarding (AF): low loss
Default (CS): no guarantees (best effort)

CS1, CS2, CS3, CS4, CS5, CS6, CS7, AF11, AF12, AF13, AF21, AF22, AF23, AF31, AF32, AF33, EF, NONE, DEFAULT
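Per RFC 2597, an Assured Forwarding codepoint AFxy carries the class x in the upper three bits and the drop precedence y in the next three bits, so its decimal DSCP value is 8x + 2y; Class Selector codepoints use the upper three bits only. A small sketch of this arithmetic, with illustrative helper names:

def af_dscp(af_class: int, drop_precedence: int) -> int:
    """Decimal DSCP value of AF<class><drop precedence> per RFC 2597."""
    assert 1 <= af_class <= 4 and 1 <= drop_precedence <= 3
    return 8 * af_class + 2 * drop_precedence

def cs_dscp(class_selector: int) -> int:
    """Class Selector codepoints (CS0..CS7) use only the top three bits."""
    return class_selector << 3

print(af_dscp(1, 1), format(af_dscp(1, 1), '06b'))   # AF11 -> 10, 001010
print(cs_dscp(6))                                    # CS6  -> 48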


5.1.3 MPLS Label

In the MPLS domain, the 3 EXP bits of the label are used for defining the Quality of Service.

5.2 Traffic Classification

A typical classification and categorization of traffic possible in the MPLS network of a service provider is given below.

Table 1: Typical Traffic Types and Categorization

S.No.  Traffic Type                          Category
1.     Internet Data VPN                     Broadband
2.                                           Dial-up Internet
3.     Internet Leased Lines                 ISP
4.                                           Non-ISP
5.     IP TV VPN                             Franchisee-1
6.                                           Franchisee-2
7.     MPLS L-3 VPN Circuits of Customers    Gold
8.                                           Silver
9.                                           Bronze
10.                                          Best Effort
11.    MPLS L-2 VPN Circuits of Customers    Gold
12.                                          Silver
13.                                          Bronze
14.                                          Best Effort
15.    Voice VPN                             IP based TAX
16.                                          3G Mobile
17.                                          LTE
18.                                          Wi-MAX
19.                                          FTTH
20.                                          LMG
21.    Voice Call Signaling                  -
22.    Gaming                                Best Effort
23.    Video Conferencing                    -
24.    Routing                               -
25.    Network Management                    -

These categories and types of traffic are classified in the following way. Routing information packets get the highest priority in the network. The network will have different domains, such as a Layer-2 switching domain, an MPLS domain and an IP routing domain. The routing layer may follow the IP ToS classification based on IPP, PHB or DSCP. The table below indicates all the traffic classifications involved. It may be noted that when IP traffic crosses from one domain to another, its QoS classification is mapped to the corresponding classification of the next domain. This process is called mapping.


Table 2: Traffic Classification

S.No.  Application                        IPP  PHB   DSCP  CoS (L2)  EXP (MPLS)
1.     Routing                            6    CS6   48    6         6
2.     Voice VPN                          5    EF    46    5         5
3.     Video Conferencing                 4    AF43  35    4         4
4.     L2 VPN (Gold Class) (Video)        4    AF43  35    4         4
5.     MPLS VPN (Gold Class) (Video)      4    AF43  35    4         4
6.     L2 VPN (Gold Class) (Voice)        4    AF42  34    4         4
7.     MPLS VPN (Gold Class) (Voice)      4    AF42  34    4         4
8.     L2 VPN (Gold Class) (Data)         4    CS4   32    4         4
9.     MPLS VPN (Gold Class) (Data)       4    CS4   32    4         4
10.    Internet Leased Lines (ISP)        4    CS4   32    4         4
11.    L2 VPN (Silver Class) (Video)      3    AF33  27    3         3
12.    MPLS VPN (Silver Class) (Video)    3    AF33  27    3         3
13.    L2 VPN (Silver Class) (Voice)      3    AF32  26    3         3
14.    MPLS VPN (Silver Class) (Voice)    3    AF32  26    3         3
15.    L2 VPN (Silver Class) (Data)       3    CS3   24    3         3
16.    MPLS VPN (Silver Class) (Data)     3    CS3   24    3         3
17.    Voice Call Signaling               3    CS3   24    3         3
18.    Internet Leased Lines (Non-ISP)    3    CS3   24    3         3
19.    L2 VPN (Bronze Class) (Video)      2    AF23  19    2         2
20.    MPLS VPN (Bronze Class) (Video)    2    AF23  19    2         2
21.    L2 VPN (Bronze Class) (Voice)      2    AF22  18    2         2
22.    MPLS VPN (Bronze Class) (Voice)    2    AF22  18    2         2
23.    L2 VPN (Bronze Class) (Data)       2    CS2   16    2         2
24.    MPLS VPN (Bronze Class) (Data)     2    CS2   16    2         2
25.    Network Management                 2    CS2   16    2         2
26.    L2 VPN (Best Effort) (Video)       1    AF13  11    1         1
27.    MPLS VPN (Best Effort) (Video)     1    AF13  11    1         1
28.    L2 VPN (Best Effort) (Voice)       1    AF12  10    1         1
29.    MPLS VPN (Best Effort) (Voice)     1    AF12  10    1         1
30.    L2 VPN (Best Effort) (Data)        1    CS1   8     1         1
31.    MPLS VPN (Best Effort) (Data)      1    CS1   8     1         1
32.    IP TV VPN                          0    0     0     0         0
33.    Internet Data VPN                  0    0     0     0         0
34.    Gaming                             0    0     0     0         0
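In Table 2 the Layer-2 CoS and MPLS EXP values are simply the IP Precedence, i.e., the top three bits of the DSCP. A minimal sketch of this mapping step, applied when traffic crosses from the routing domain into the Layer-2 or MPLS domain, is given below; the function names are illustrative and not taken from any particular platform.

def ip_precedence(dscp: int) -> int:
    """IPP is the upper 3 bits of the 6-bit DSCP."""
    return (dscp & 0x3F) >> 3

def map_to_l2_cos(dscp: int) -> int:
    """802.1p CoS used in the Layer-2 switching domain (per Table 2)."""
    return ip_precedence(dscp)

def map_to_mpls_exp(dscp: int) -> int:
    """3-bit EXP value carried in the MPLS label (per Table 2)."""
    return ip_precedence(dscp)

# A few rows of Table 2: (application, DSCP, expected IPP/CoS/EXP)
for app, dscp, expected in [("Routing", 48, 6), ("Voice VPN", 46, 5),
                            ("Silver Data", 24, 3), ("Best Effort Data", 8, 1)]:
    assert map_to_l2_cos(dscp) == expected and map_to_mpls_exp(dscp) == expected
    print(app, dscp, "->", expected)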

5.3 Bandwidth Profile for various services:

Some of the terms used while defining a bandwidth profile are as follows.

Committed Information Rate [CIR]: The committed information rate, a term originating in Frame Relay networks, is the average bandwidth for a virtual circuit guaranteed by the provider under normal conditions. At any given time, the bandwidth should not fall below this committed figure.


Excess Information Rate (EIR): Above the CIR, an allowance of burst bandwidth is often given, whose value can be expressed as an additional rate (the Excess Information Rate, EIR) or as an absolute value (the Peak Information Rate, PIR). The provider guarantees that the connection will always support the CIR, and sometimes the EIR, provided that there is adequate bandwidth. The PIR, i.e. the CIR plus the excess burst rate (EIR), is equal to or less than the speed of the access port into the network.

Committed Burst Size (CBS): The maximum number of bits in a specific time period that the network must transfer without discarding any frames.

Excess Burst Size (EBS): The maximum number of bits in excess of the CBS that the user can send during a predefined period of time.

Service Level Agreement (SLA): A service contract between a customer and a service provider that specifies the forwarding service a customer should receive. A customer may be a user organization (source domain) or another DS domain (upstream domain). An SLA may include traffic conditioning rules which constitute a TCA in whole or in part.

Each service is assigned traffic profile parameters such as Committed Information Rate (CIR), Excess Information Rate (EIR), Committed Burst Size (CBS) and Excess Burst Size (EBS). User traffic complying with CIR/CBS is provided guaranteed quality of service as per the SLA, whereas user traffic within EIR/EBS is admitted to the network on a "Best Effort" basis. The Peak Information Rate (PIR) = CIR + EIR. A sample CIR/EIR assignment for different traffic priorities is given below.

Table 3: Sample CIR/EIR for COS

Priority (IPP/COS)  CIR%  EIR%
6                   100   0
5                   100   0
4                   80    20
3                   60    40
2                   40    60
1                   20    80
0                   20    80

A typical case of a customer taking 10 Mbps of Bronze bandwidth, with CIR/EIR assigned to the various types of services within the 10 Mbps pipe, is given below.

Table 4: Sample CIR/EIR Recommendations for a Customer MPLS VPN

Service             CIR (Mbps)  EIR (Mbps)  Total / PIR (Mbps)
MPLS Bronze Video   3           0           3
MPLS Bronze Voice   1           0           1
MPLS Bronze Data    0           6           6
Total %             40          60          100
Total Bandwidth     4           6           10
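The arithmetic behind Table 4 (PIR = CIR + EIR per service, and the CIR/EIR totals expressed against the 10 Mbps pipe) can be checked with a short sketch; the class names simply mirror the table.

# (service, CIR Mbps, EIR Mbps) for the 10 Mbps Bronze pipe of Table 4
profile = [("MPLS Bronze Video", 3, 0),
           ("MPLS Bronze Voice", 1, 0),
           ("MPLS Bronze Data",  0, 6)]

pipe_mbps = 10
total_cir = sum(cir for _, cir, _ in profile)
total_eir = sum(eir for _, _, eir in profile)

for name, cir, eir in profile:
    print(f"{name}: PIR = {cir + eir} Mbps")
print(f"CIR {total_cir} Mbps ({100 * total_cir // pipe_mbps}%), "
      f"EIR {total_eir} Mbps ({100 * total_eir // pipe_mbps}%), "
      f"PIR {total_cir + total_eir} Mbps")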


5.4 Marking process through Token Bucket Scheme

Marking is the process of setting the DS code point in a packet based on defined rules (pre-marking or re-marking). Metering is the process of measuring the temporal properties (e.g., rate) of a traffic stream selected by a classifier. The instantaneous state of this process may be used to affect the operation of a marker, shaper or dropper, and/or may be used for accounting and measurement purposes. Marking is done to characterize traffic and indicate its class of service, i.e., Real Time, Priority Data, Best Effort, etc. A two rate, three color marking scheme, based on the CIR/PIR values defined in the SLA, is used. The two rate three color marker algorithm polices the data stream and marks it green, yellow or red based on the CIR and PIR rates: the input traffic is first checked against the PIR and then against the CIR. Excess bursts are permitted if they are above the CIR but within the PIR.

Figure 9: Two Rate Three Color Marking/Policing Scheme

If the traffic is within the CIR, it is marked Green. If it exceeds the CIR but is within the PIR, it is marked Yellow. If it exceeds the PIR, it is marked Red.
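A minimal sketch of a color-blind two rate three color marker in the spirit of RFC 2698 is given below; the bucket depths play the role of CBS/PBS, and all parameter values in the example are assumptions.

import time

class TwoRateThreeColorMarker:
    """Color-blind trTCM: marks packets green, yellow or red using
    CIR/CBS and PIR/PBS token buckets (cf. RFC 2698)."""
    def __init__(self, cir_bps, cbs_bytes, pir_bps, pbs_bytes):
        self.cir, self.cbs = cir_bps / 8.0, cbs_bytes
        self.pir, self.pbs = pir_bps / 8.0, pbs_bytes
        self.tc, self.tp = float(cbs_bytes), float(pbs_bytes)
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        elapsed = now - self.last
        self.tc = min(self.cbs, self.tc + elapsed * self.cir)
        self.tp = min(self.pbs, self.tp + elapsed * self.pir)
        self.last = now

    def mark(self, packet_bytes: int) -> str:
        self._refill()
        if packet_bytes > self.tp:
            return "red"                    # exceeds PIR: drop or remark down
        if packet_bytes > self.tc:
            self.tp -= packet_bytes
            return "yellow"                 # within PIR but above CIR
        self.tp -= packet_bytes
        self.tc -= packet_bytes
        return "green"                      # conforming to CIR

# Example with an assumed SLA: CIR 4 Mbps / PIR 10 Mbps, 16 KB / 32 KB bursts.
marker = TwoRateThreeColorMarker(4_000_000, 16_384, 10_000_000, 32_768)
print([marker.mark(1500) for _ in range(5)])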

5.5 Shaping and Policing through Token Bucket Scheme

Shaping is the process of delaying packets within a traffic stream to cause it to conform to some defined traffic profile. Policing is the process of discarding packets (by a dropper) within a traffic stream in accordance with the state of a corresponding meter enforcing a traffic profile. The shaping process uses rules to decide which packets are allowed to enter the egress queues instead of simply dropping all the red packets. The rules are different for each packet colour:

Green packets are subject to quite lenient rules, and most of these packets will be accepted into the egress queue.


Yellow packets are subject to more stringent rules, so less of them will get into the egress queues.

Red packets are subject to very stringent rules, especially when the interface becomes congested.

In this way, if there are multiple traffic classes passing through the device, each with different bandwidth limits, it is possible for an over-limit traffic class to make use of bandwidth made available by another traffic flow that is well below its bandwidth limit. But, if all traffic flows are at or above their limit, then the shaping process will make sure the flows do not encroach on each other’s allocated bandwidth. The difference between the Shaping and Policing process is given in the figure below.

Figure 10: Traffic Policing and Shaping comparison
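The difference can also be expressed in code: a policer drops non-conforming packets (as sketched in section 4.3), whereas a shaper buffers them and computes how long to delay them so that the output conforms to the configured rate. The sketch below shows the shaping side only, with illustrative parameter names and values.

import time

class TokenBucketShaper:
    """Delay, rather than drop, packets so that the output stream conforms
    to the configured rate; the sleep models the shaping buffer delay."""
    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate_bytes = rate_bps / 8.0
        self.burst = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def send(self, packet_bytes: int):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate_bytes)
        self.last = now
        if packet_bytes > self.tokens:
            # Not enough credit: hold the packet until the bucket refills.
            wait = (packet_bytes - self.tokens) / self.rate_bytes
            time.sleep(wait)                       # the shaping delay
            self.tokens += wait * self.rate_bytes
            self.last = time.monotonic()
        self.tokens -= packet_bytes
        # the packet would be transmitted at this point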

5.6 Queuing/Scheduling

Traffic belonging to different service classes needs to be appropriately queued and scheduled onto the network links in order to meet the required service guarantees. Queuing is done to buffer the different type/class of traffic into respective queues. The scheduling process decides which packet is to be transmitted next based on scheduling algorithm.


Figure 11: Queuing Scheduling in MPLS network

Weighted Fair Queuing (WFQ) works by dividing the bandwidth across queues of traffic based on weights. All services are given appropriate weights in line with the traffic contract. For example, the weights can be set proportional to the sum of the equivalent bandwidths of all the services being transported by that particular traffic class, where the equivalent bandwidth is computed based on the ingress bandwidth profiles.

Figure 12: Weighted Fair Queuing mechanism

Whenever a packet arrives, the classifier inspects its header and, using this information (source address and port, destination address, IP precedence, protocol, etc.), calculates a number between 1 and the number of queues. It then places the packet in the queue identified by this number. WFQ skews the bandwidth allocation by favouring higher-priority flows, priority being a function of the IPP marking. Because certain flows are preferred over others, WFQ loses Custom Queuing's capability to provide hard bandwidth guarantees, since the bandwidth allocations change continuously as flows are added or ended.
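The weighted sharing idea behind WFQ can be approximated with a deficit weighted round robin scheduler, sketched below under the assumption of one queue per class with a configured weight; this illustrates weighted bandwidth division rather than the flow-based WFQ classifier described above.

from collections import deque

class DeficitWeightedRoundRobin:
    """Each queue earns weight * quantum bytes of credit per round and may
    transmit packets while its credit lasts, approximating weighted sharing."""
    def __init__(self, weights: dict, quantum: int = 1500):
        self.queues = {name: deque() for name in weights}
        self.weights = weights
        self.quantum = quantum
        self.deficit = {name: 0 for name in weights}

    def enqueue(self, name: str, packet_bytes: int):
        self.queues[name].append(packet_bytes)

    def next_round(self):
        """Return the (queue, bytes) transmissions made in one round."""
        sent = []
        for name, q in self.queues.items():
            if not q:
                self.deficit[name] = 0
                continue
            self.deficit[name] += self.weights[name] * self.quantum
            while q and q[0] <= self.deficit[name]:
                pkt = q.popleft()
                self.deficit[name] -= pkt
                sent.append((name, pkt))
        return sent

# Example: voice is given four times the weight of best-effort traffic.
sched = DeficitWeightedRoundRobin({"voice": 4, "best_effort": 1})
for _ in range(8):
    sched.enqueue("voice", 200)
    sched.enqueue("best_effort", 1500)
print(sched.next_round())   # all voice packets, but only one best-effort packet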


5.7 Dropping (Congestion Management) through WRED

Dropping is the process of discarding packets based on specified rules (policing). Weighted Random Early Discard (WRED) prevents tail drop behavior (dropping all subsequent packets when a queue overflows) by dropping lower priority traffic early, based on thresholds set against the queue size. WRED can selectively discard lower priority traffic when the interface begins to get congested and thereby provide differentiated performance characteristics for different classes of service. The drop decision is based on the current average queue size. WRED drops more packets from users violating the traffic contract than from users conforming to it, so that sources generating more traffic are slowed down by TCP in times of congestion. WRED uses the AF drop precedence values to decide which packets to drop first.
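A minimal sketch of the WRED drop decision is given below. The per-drop-precedence thresholds and maximum drop probabilities are assumed values, chosen only to show that higher drop precedence traffic (e.g. AFx3) is discarded earlier and more aggressively than lower drop precedence traffic.

import random

# Assumed WRED profiles keyed by AF drop precedence:
# (min_threshold_pkts, max_threshold_pkts, max_drop_probability)
WRED_PROFILES = {1: (40, 64, 0.05),   # AFx1: protected the longest
                 2: (30, 64, 0.10),   # AFx2
                 3: (20, 64, 0.20)}   # AFx3: dropped earliest

def wred_drop(avg_queue_depth: float, drop_precedence: int) -> bool:
    """Return True if the arriving packet should be randomly discarded."""
    min_th, max_th, max_p = WRED_PROFILES[drop_precedence]
    if avg_queue_depth < min_th:
        return False                   # no early drop for this class yet
    if avg_queue_depth >= max_th:
        return True                    # beyond max threshold: acts like tail drop
    # Drop probability grows linearly between the two thresholds.
    p = max_p * (avg_queue_depth - min_th) / (max_th - min_th)
    return random.random() < p

print(wred_drop(35, 1), wred_drop(35, 3))   # AFx3 is more likely to be dropped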

6. MPLS TRAFFIC ENGINEERING

6.1 MPLS DiffServ

MPLS does not define new QoS architectures. MPLS QoS uses the Differentiated Services (DiffServ) architecture defined for IP QoS (RFC 2475); MPLS support of DiffServ is defined in RFC 3270. The overall mapping of the DSCP bits of the IP packet to the EXP bits of the MPLS label has already been given in Table 2 for the different traffic classes. A snapshot of the same is given below.

PHB Type                       PHB Class  DSCP Value (binary)         MPLS EXP Bits
Expedited Forwarding           EF         101 110                     101
Assured Forwarding - Class-1   AF1        001 010, 001 100, 001 110   001
Assured Forwarding - Class-2   AF2        010 010, 010 100, 010 110   010
Assured Forwarding - Class-3   AF3        011 010, 011 100, 011 110   011
Assured Forwarding - Class-4   AF4        100 010, 100 100, 100 110   100
Best Effort                    BE         000 000                     000

It may be seen that the DSCP has 6 bits whereas the MPLS label EXP field has only 3 bits. Hence, if only 8 PHBs are used, the EXP field alone can carry the class information; such LSPs are called E-LSPs. If more than 8 PHBs are required, the label value itself is also used to infer the PHB for certain LSPs, and such LSPs are called L-LSPs.

6.2 MPLS TE

Generally, in an MPLS domain there will be multiple paths for sending a packet from a source edge router to a destination edge router. These edge routers are called Label Edge Routers [LER]. The intermediate routers, called core routers or Label Switch Routers [LSR], switch the labels. The path followed is called a Label Switched Path [LSP]. LSPs are created in the MPLS routers by the Label Distribution Protocol [LDP] based on the routing protocol, e.g. OSPF [Open Shortest Path First]. However, if explicit routing through a specific predefined path is required, such a path is defined as a Traffic Engineered path, and the RSVP-TE protocol is used for generating the LSPs. In such cases, protection paths are also defined, and a specific bandwidth constraint based on the CIR/EIR can be allocated to the customer.

6.3 DS-TE [DiffServ TE]

Regular Traffic Engineering [TE] allows for one reservable bandwidth amount per link. However, in real situations we may require more than one reservable bandwidth per link. DS-TE allows more than one reservable bandwidth amount per link and thus brings a per-class dimension to TE, i.e. DS-TE is QoS-enabled TE. The basic idea is to tie each PHB class's bandwidth to a DS-TE bandwidth sub-pool. DS-TE supports per-class constraint-based routing and per-class admission control. The basic requirement for DS-TE arises in the following scenario: a customer has been allocated a traffic engineered tunnel of a specific bandwidth, and within that tunnel the customer wants to carry different types of services, say data, video and voice, which are to be treated differently. DS-TE is implemented using one of two models, the Maximum Allocation Model [MAM] or the Russian Dolls Model [RDM]. RDM is the most commonly deployed model and is described in detail below.

Russian Dolls Model (RDM) for Bandwidth Pools/Constraints (BC) in DS-TE: In RDM the link bandwidth is allocated in nested pools. Say there are three services, Voice, Premium and Best Effort traffic, to be carried within the TE tunnel and to be treated differently within the network in terms of QoS; the bandwidth is then allocated per traffic class in the following way.

Figure 13: RDM Bandwidth Allocation Example


The bandwidth allocation for the different classes and the configuration of the various bandwidth constraints are given in the tables below.

Table 5: COS and Allocation Methods

COS/CT  Allocation (based on the actual requirement across the various classes)
CT6     CT6%
CT5     CT5%
CT4     CT4%
CT3     CT3%
CT2     CT2%
CT1     CT1%
CT0     CT0%

BC   Bandwidth as % of MRB (Maximum Reservable Bandwidth)
BC0  100%
BC1  CT0% + CT1% + CT2% + CT3% + CT4% + CT5%
BC2  CT0% + CT1% + CT2% + CT3% + CT4%
BC3  CT0% + CT1% + CT2% + CT3%
BC4  CT0% + CT1% + CT2%
BC5  CT0% + CT1%
BC6  CT0%
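The bandwidth constraint formulas of Table 5 can be computed directly from the per class type percentages. The sketch below follows the table's convention (BC0 is 100% of the Maximum Reservable Bandwidth and each higher BC nests the lower class types); the CT percentages in the example are assumed values for illustration only.

def rdm_bandwidth_constraints(ct_percent: list) -> dict:
    """ct_percent[i] is the percentage of MRB required by class type CTi,
    i = 0..6. Returns BC0..BC6 per the Russian Dolls Model of Table 5."""
    assert len(ct_percent) == 7 and sum(ct_percent) <= 100
    bc = {"BC0": 100}                             # the full reservable bandwidth
    for k in range(1, 7):
        bc[f"BC{k}"] = sum(ct_percent[0:7 - k])   # CT0% + ... + CT(6-k)%
    return bc

# Assumed example: CT0 = 40%, CT1..CT6 = 10% each.
print(rdm_bandwidth_constraints([40, 10, 10, 10, 10, 10, 10]))
# -> BC0 100, BC1 90, BC2 80, BC3 70, BC4 60, BC5 50, BC6 40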

7. IMPLEMENTATION OF QOS IN DIFFERENT IP DOMAINS

This section describes the actual method of implementing QoS in an IP network. As described earlier, an operator's network may contain multiple IP core and aggregation technologies; each technology or vendor, depending on the QoS methodology adopted, has to be treated as a separate QoS domain.

7.1 QoS Domains

Here we take the case of a network with four domains, i.e.,

1. IP-MPLS Domain

2. MPLS Aggregation Domain

3. RPR Aggregation Domain

4. LAN Switch Aggregation (L2) Domain

The QoS reference model indicating the various domains is given in 'Figure 14: QoS Reference Model' below.


Figure 14: QoS Reference Model

7.2 QoS Implementation across Network Domains

The QoS actions to be taken at the various nodes and interfaces in the network domains are given in 'Figure 15: QoS Implementation Flow' and 'Table 6: QoS Implementation at every reference point' below.

Figure 15: QoS Implementation Flow

[Figure 15 shows the QoS actions applied at reference points A to M across the LAN Aggregation, RPR, MPLS-TP Aggregation and IP-MPLS domains, using the abbreviations C: Classification & Prioritization, M: Marking, Map: Mapping, P: Policing, S: Shaping, Q: Queuing, Sch: Scheduling, D: Dropping.]


Table 6: QoS Implementation at every reference point

S.N.  Ref. Point   QoS Implementation                         Remarks
1.    A            Classification & Prioritization            Based on DSCP. Refer Table 2: Traffic Classification
2.                 Shaping                                    As per L2 SLA using token bucket scheme
3.                 Marking                                    Using DSCP. Refer Table 2: Traffic Classification
4.    B            Policing & Shaping                         As per L2 SLA (CIR/EIR) using token bucket scheme. Refer Table 3: Sample CIR/EIR for COS
5.                 Queuing                                    L2 queuing using WFQ
6.                 Dropping                                   WRED
7.    C            Queuing & Scheduling                       L2 queuing using WFQ
8.                 Dropping                                   WRED or tail dropping
9.    D            Classification, Prioritization & Mapping   DSCP to RPR COS
10.                Policing & Shaping                         As per L2 SLA (CIR/EIR) using token bucket scheme
11.   E            Queuing & Scheduling                       L2 queuing using WFQ
12.                Shaping                                    As per L2 SLA (CIR/EIR) using token bucket scheme
13.                Dropping                                   As per L2 SLA (CIR/EIR) using WRED
14.   F            Classification, Prioritization & Mapping   DSCP to EXP. Refer Table 2: Traffic Classification
15.                Policing & Shaping                         As per L3 SLA (CIR/EIR) using token bucket scheme
16.                Traffic Tunneling & Bandwidth Reservation  DiffServ using MPLS Short Pipe and DS-TE using RDM
17.   G, H, I, J   Queuing & Scheduling                       L3 queuing using WFQ
18.                Shaping                                    As per L3 SLA (CIR/EIR) using token bucket scheme
19.                Dropping                                   As per L3 SLA (CIR/EIR) using WRED
20.   K            Classification, Prioritization & Mapping   Based on DSCP. Refer Table 2: Traffic Classification
21.                Policing                                   As per L3 SLA (CIR/EIR) using token bucket scheme
22.                Shaping                                    As per L2 SLA (CIR/EIR) using token bucket scheme
23.                Traffic Tunneling                          MPLS Short Pipe to preserve customer DSCP
24.                Queuing & Scheduling                       L3 queuing using WFQ
25.                Dropping                                   As per L2 SLA (CIR/EIR) using WRED
26.   L            Classification & Prioritization            Based on DSCP
27.                Policing                                   As per L2 SLA (CIR/EIR) using token bucket scheme
28.   M            Queuing & Scheduling                       L2 queuing using WFQ
29.                Shaping                                    As per L2 SLA (CIR/EIR) using token bucket scheme
30.                Dropping                                   As per L2 SLA (CIR/EIR) using WRED

7.3 QoS Implementation across Interconnect Domains

The interconnection between the IP networks of different service providers is at the IP-MPLS layer. In order to achieve end-to-end QoS for customer traffic, the following issues have to be addressed at the interconnect points.

1. Bandwidth Profile for the traffic across the interfaces

2. QoS actions at the interfaces

There shall be an agreement on the bandwidth profile for the various services flowing across the interfaces, and such a bandwidth profile shall be created for each service. Some of the bandwidth profile parameters which shall be agreed upon at the interfaces are given below.

a. Bandwidth assigned for the Service

b. CIR % for the Service

c. EIR % for the Service

d. Defining the main and alternate paths and the exchange of protocols for Traffic Engineered paths

e. Interpretation method followed for the IP ToS Bits

A typical interconnection of IP-MPLS networks of two Service Providers and the QoS actions required is given in Figure-16 below.


Figure 16: QoS Actions at the Interconnect Networks

The QoS actions at the reference point are given in Table-7 below.

Table 7: QoS actions at Interconnect Reference Points

S.N.  Ref. Point          QoS Implementation                         Remarks
1.    A, E, F, J          Classification, Prioritization & Mapping   DSCP to EXP. Refer Table 2: Traffic Classification
2.                        Policing & Shaping                         As per L3 SLA (CIR/EIR)
3.                        Traffic Tunneling & Bandwidth Reservation  DiffServ using MPLS and DS-TE using RDM
4.    B, C, D, G, H, I    Queuing & Scheduling                       L3 queuing
5.                        Shaping                                    As per L3 SLA (CIR/EIR)
6.                        Dropping                                   As per L3 SLA (CIR/EIR)

8. CONCLUSION

QoS implementation across all domains of IP networks, as well as over interconnect interfaces, is lacking today. This paper is an attempt to define, with examples, a methodology which can be used for such a QoS mechanism. Moreover, multiple mechanisms and standards are available for the different domains; this paper also tries to map these mechanisms so as to help telecom service providers implement comprehensive IP QoS across all domains of their IP networks.

9. REFERENCES

[1] IEEE 802.1q: IEEE standards for Local and metropolitan area networks: Media Access Control (MAC) Bridges and Virtual Bridge Local Area Networks

[2] IEEE 802.17: IEEE Standard for Information technology - Telecommunications and information exchange between systems - Local and metropolitan area networks: Resilient packet ring (RPR) access method and physical layer specifications


[3] RFC 2309: Recommendations on Queue Management and Congestion Avoidance in the Internet

[4] RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers

[5] RFC 2475: An Architecture for Differentiated Services
[6] RFC 2698: A Two Rate Three Color Marker
[7] RFC 2702: Requirements for Traffic Engineering Over MPLS
[8] RFC 3140: Per Hop Behavior Identification Codes
[9] RFC 3209: RSVP-TE: Extensions to RSVP for LSP Tunnels

[10] RFC 3270: Multi-Protocol Label Switching (MPLS) Support of Differentiated Services

[11] RFC 5462: Multiprotocol Label Switching (MPLS) Label Stack Entry: "EXP" Field Renamed to "Traffic Class" Field