
Subject: MPLS Design and Optimization Methods & Procedures     Date: 2023-4-8

From: Ahmet Akyamac 1

Benjamin Tang 1

Gary Atkinson 1

Rafal Szmidt 2

Ramesh Nagarajan 1

1 Bell Laboratories   2 LWS Europe
+1 (732) 949-5413, +1 (732) 949-6477, +1 (732) 949-2920, +48 22 692 38 86, +1 (732) 949-2761

1. Introduction

This document presents methods and procedures for performing DiffServ traffic engineering (DS-TE) based MPLS network design and optimization using the Opnet SP Guru tool. The discussion here assumes that a network topology (nodes and link locations) is in place and does not consider greenfield design. As of the writing of this document, the latest version of SP Guru is 11.5, and the methods and procedures presented here assume this version of the tool. As later versions become available, the user should consult the product documentation for any possible changes. The following figure shows a flow diagram of the general MPLS network design and optimization methodology used in this document.


[Figure 1 blocks: Input network topology and link class partitioning info; Input multi-class traffic and/or LSP information (VoIP, VPN, 3G, IA, etc.) and protection options (Path, FRR, etc.); TE constraints (subscription, hops, delay, etc.); MPLS Multi-Class Network Design; Routing and Performance Analysis; Reports]

Figure 1: General MPLS Network Design Methodology

Typical inputs to the MPLS network design procedure include topology information (such as node locations and capacity/configurations; link types, connections, class type partitioning and subscription constraints) and traffic information (multi-class traffic and/or LSP information, and protection options such as path-based protection or FRR). The traffic engineering constraints (hop, delay, etc.) and design objectives (such as minimizing bandwidth, minimizing maximum subscription, etc.) are further inputs to the design process. Subsequent to network design, routing and performance analysis can be performed to estimate network performance in terms of capacity utilization and traffic quality of service measures such as delay, loss, etc. As a final step, the design output information is collected through a series of reports, which can be analyzed to compare design results to design objectives and specifications. This could result in recommended changes to the network, which would then be used to perform further network studies as part of a closed-loop process (shown by the dashed lines in Figure 1).

In the remainder of this document, we will analyze each of the above blocks in detail and discuss the specific implementation methods and procedures in SP Guru. Each of the steps will be presented through specific examples, which may involve the use of certain files. Some of these files will be made available at a later time as a package to be downloaded.

2. Launching SP Guru

As of the writing of this document, the license server for SP Guru is located on usil100014svr23.ih.lucent.com. When SP Guru is first launched, it will attempt to obtain an available license from this server. If successful, a splash screen as in Figure 2 is displayed.


Figure 2: Opnet SP Guru splash screen

To create a new project, use File->New and select type Project. In SP Guru, projects can contain multiple scenarios. The project name is the same as the file name. Each scenario could represent a different phase in the network design task, or could represent different stages of the network etc. Many actions in SP Guru can be time consuming (depending on the size of the network), and most actions are irreversible. Thus, it is recommended practice to split different phases of the design project into scenarios and to save frequently. The next window will ask for a project and scenario name. At this point, enter a meaningful scenario name (e.g., manual_topology_input), but do not change the project name. Also, unselect the use of the startup wizard for this project. This action generates the scenario manual_topology_input and shows the network screen, which currently contains the world map as in Figure 3. Next, save the project in a preferred directory using File->Save As. Note that to access this project at a later time, the model directory the project was saved into has to be added using File->Model Files->Add Model Directory from the splash screen.

Lucent Technologies Inc. - ProprietaryUse pursuant to Company instructions.

3

Page 4: MPLS Design

Figure 3: Network screen after project and scenario creation

3. Network Topology Input

MPLS network design and optimization is performed on an existing network topology consisting of nodes and links. The network topology can be input to SP Guru in a number of ways, including manually through the GUI, using Cisco and Juniper configuration/configlet files, and through text file import. These actions will add the node and link objects to the network model. In the following, we consider example scenarios for each of these actions.

3.1 Manual Topology Input

Objects can be added to the network model using the object palette, accessed using Topology->Open Object Palette or by using the icon with the paint brushes on the top left hand side of the screen, as highlighted in Figure 3. The object palette tree can be used to access node and link objects (as well as other objects) in numerous ways, such as by device type, name, etc. Figure 4 shows the selection of a Cisco 12410 router from Cisco-Node Models under Shared Object Palettes. These palettes are available to all projects. Custom palettes can also be defined and saved.

Figure 4: Cisco nodes on the shared object palette

Objects can be selected by using the icon and placing the object on a selected position on the map. Links can be placed by selecting a link object from the palette and clicking on two end nodes in the network view. Figure 5 shows two nodes placed at Philadelphia and Washington connected by a SONET OC-48 link (Note that multiple zooms in an area will eventually enable viewing of major city names around that area). Also shown are the link attributes.


Figure 5: GUI view after manually adding two nodes and a link

For all objects, the associated attributes can be accessed by pointing the cursor at the object, right clicking and selecting Edit Attributes. This will open a window where the attributes for the object can be accessed and changed. Clicking on Advanced will bring up an advanced set of attributes, mainly related to the visualization and placement attributes for the objects (such as size, color, etc.). The manual topology input method is practical mainly for small to medium size network models. For bigger models, configuration file or text file import is the preferred method to enter network models. However, the manual entry method can be used to make incremental changes to an existing network model, such as adding new nodes, links, etc.

3.2 Topology Import Using Configuration Files

Network models can be imported to SP Guru using Cisco or Juniper configuration/configlet files. The config file import tab can be accessed using Topology->Import Topology->From Device Configurations. This will open up the tab shown in Figure 6, which contains options such as replacing the model, merging with the model, updating device configurations, etc.


Figure 6: Device configuration import tab

The directories are specified for each vendor (Cisco or Juniper), or for a combination of the two. Additionally, config files arranged in directories can also be imported. In the following example, we import a set of Cisco config files from the directory MH_Lab_Cisco_Configlet_Files in the file package. These config files were collected from a Lucent test lab. Note that during the import process, there are two pairs of interfaces with unresolved data rates on their incident links. We can set these data rates to T1. The resulting network model is shown in Figure 7. Note the different links (serial links in black and Ethernet links in brown), Cisco devices (shown in blue), Ethernet hubs (shown in gray) and edge LAN models. This particular import also included some MPLS LSPs (shown in green).

Configuration file import is a very useful and powerful method for importing entire networks into SP Guru as many of the detailed network parameters are immediately populated (as opposed to being modified manually, which is very time consuming). However, from time to time, configlet files may have some errors or information conflicts. Under these types of circumstances, SP Guru usually prompts the user to manually enter any additional information that may be required to complete a successful network import.


Figure 7: Network model view in SP Guru after configlet file import

3.3 Topology Import From Text Files

Network models can also be entered using text file import (text file export can be used to output the network model). The text file import/export functionality was built into SP Guru 11.5 as a result of a customization requested by Lucent as part of the multi-class MPLS DS-TE capabilities. This functionality is specifically designed for MPLS networks; thus the imported objects are nodes and links that are part of the MPLS model, and the imported options also pertain to MPLS. This functionality does not represent a general text network import/export feature. Furthermore, this text file import/export functionality also allows SP Guru to be used in conjunction with other internal tools while performing studies, since the format defined for the import/export file is a simple field-based text format.


There are three types of text import/export files; these files define the nodes, links and LSPs. Text file import can be accessed from Topology->Import Topology->From DS-TE Text Files. The import tab allows the user to specify which of the files are to be imported and the location of the files. An example import tab is shown in Figure 8. Note that we have unselected the LSP tab; more information on LSP import is included later in this document. Note that the node, link and LSP information can be exported to text files using Topology->Export Topology->To DS-TE Text Files.

Figure 8: Text file import options tab

The node import file needs to specify the Router Name, Router Model (this has to match a node model name in the SP Guru router library), longitude and latitude, and bandwidth constraint model. The bandwidth constraint model is the MPLS subscription model used on this node. For Cisco routers, enter the text Russian Dolls Model (RDM), and for Juniper routers, use the text Maximum Allocation Model or the Extended MAM model (these models will be discussed later in this document). Figure 9 shows a part of a node import file that will be used later in this document. Comments can be entered by preceding them with a “#”, and it is generally good practice to include meaningful field names.

The link import file should specify the link name (optional, the names are automatically assigned if they are not specified here), link model (this has to match a link model name in the SP Guru link library), source router, destination router, data rate (normally blank, this is used for link models which allow user-specified data rates), direction (a good practice is to always use “BOTH”), RSVP bandwidth percentage (this is the percentage of bandwidth that can be used for MPLS subscription), number of class types the bandwidth is partitioned into (for single class, this is 1), and a list of class types and corresponding bandwidth partitions. The partitioning is based on the RSVP reservable bandwidth. Thus, if the RSVP bandwidth percentage of a link is 80%, then the class bandwidth partitioning will also be based on this 80% bandwidth (this bandwidth model, and the application of RDM or MAM, will be discussed later in this document). Figure 10 shows a part of a link import file that will be used later in this document.

Figure 9: Part of a node import file showing the required fields

Figure 10: Part of a link import file showing the required fields
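Because these import files use a simple field-based text format, they can also be generated programmatically from inventory data. The following Python sketch illustrates the idea under stated assumptions: the field order follows the descriptions above, but the delimiter, exact column layout and the model names used here ("Cisco 12410", "SONET_OC48") and partition percentages are illustrative placeholders; the authoritative layout is the one shown in Figures 9 and 10.

    # Sketch: generate DS-TE node and link import files for SP Guru.
    # The field order follows the descriptions in Section 3.3; the exact
    # delimiter and column layout should be taken from Figures 9 and 10.

    nodes = [
        # name, router model (must match an SP Guru model), lon, lat, BC model
        ("Philadelphia", "Cisco 12410", -75.16, 39.95, "Russian Dolls Model (RDM)"),
        ("Washington",   "Cisco 12410", -77.04, 38.90, "Russian Dolls Model (RDM)"),
    ]

    links = [
        # name, link model, src, dst, data rate, direction, RSVP %, {class: partition %}
        ("", "SONET_OC48", "Philadelphia", "Washington", "", "BOTH", 80, {"ct0": 80, "ct1": 20}),
    ]

    with open("nodes.txt", "w") as f:
        f.write("# RouterName RouterModel Longitude Latitude BandwidthConstraintModel\n")
        for name, model, lon, lat, bc in nodes:
            f.write(f"{name}\t{model}\t{lon}\t{lat}\t{bc}\n")

    with open("links.txt", "w") as f:
        f.write("# LinkName LinkModel Src Dst DataRate Direction RSVP% NumCTs [CT Partition%]...\n")
        for name, model, src, dst, rate, direction, rsvp_pct, parts in links:
            ct_fields = "\t".join(f"{ct}\t{pct}" for ct, pct in parts.items())
            f.write(f"{name}\t{model}\t{src}\t{dst}\t{rate}\t{direction}\t"
                    f"{rsvp_pct}\t{len(parts)}\t{ct_fields}\n")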

4. Traffic Demands and LSP Information Input

In this section, we discuss the alternative methods of entering traffic demands and LSP information. For MPLS optimization and design, LSP information is sufficient and traffic demand information is not necessary (as the LSP bandwidths would normally be based on the expected traffic level). The MPLS design and optimization would result in the MPLS subscription on each link, which refers to the portion of the link bandwidth that is subscribed by the LSPs. If traffic demand information is also included, then the traffic can be routed across the MPLS network on the LSPs. This would result in traffic utilization across the links, which is the ratio of the bandwidth of the traffic traversing a link to the link's bandwidth capacity. Note that subscription and utilization are not necessarily equivalent, as the traffic being sent on an LSP can be controlled at the network ingress/entry point through policing mechanisms. Furthermore, there may be other traffic that takes normal IP routing paths and does not traverse the LSPs. The utilization on a link may therefore be higher or lower than the LSP subscription.
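To make the distinction between subscription and utilization concrete, the following minimal Python sketch computes both quantities for a single link; the link capacity, reservation percentage and traffic figures are illustrative assumptions, not values from any particular study.

    # Sketch: distinguish MPLS subscription from traffic utilization on a link.
    # All numbers are illustrative only.

    link_capacity_mbps = 2400.0          # usable link bandwidth after overhead
    rsvp_reservable_pct = 0.80           # portion of the link open to LSP reservation

    lsp_reservations_mbps = [600.0, 450.0, 300.0]   # TE bandwidth of LSPs routed on the link
    carried_traffic_mbps  = [520.0, 480.0, 150.0]   # traffic actually flowing on those LSPs
    other_ip_traffic_mbps = 200.0                   # traffic following plain IGP routes

    reservable = link_capacity_mbps * rsvp_reservable_pct
    subscription = sum(lsp_reservations_mbps) / reservable          # vs reservable bandwidth
    utilization  = (sum(carried_traffic_mbps) + other_ip_traffic_mbps) / link_capacity_mbps

    print(f"LSP subscription: {subscription:.0%} of reservable bandwidth")
    print(f"Link utilization: {utilization:.0%} of link capacity")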

4.1 Traffic Demand Information Input

Traffic information can be entered either manually through the GUI or by using spreadsheets. If traffic information has been collected using Cisco Netflow or Cflowd, then it can also be entered directly from the Netflow or Cflowd data. In SP Guru, most simulations and analyses are based on defined time periods. A very common time period is an hour, typically corresponding to a daily busy hour for the network. Thus, a traffic demand profile would show the change in traffic demand bit rates over the busy hour. The start and stop times for these analyses can be manually modified; however, it is generally good practice to allow automatic setting of the analyzed hour throughout the tool.

To import traffic using Netflow or Cflowd data, use Traffic->Import Traffic Flows->From Cisco Netflow or Traffic->Import Traffic Flows->From Cflowd. SP Guru has the ability to import traffic from other traffic collectors, as can be seen from the additional menu items here. Figure 11 shows an example network for which traffic was imported using Cflowd data. In this figure, the traffic flows are shown in blue. One of the traffic flows is highlighted; the bit rate profile is retrieved directly from the Cflowd files.

Figure 11: Network view for an example network after flow import from Cflowd files


In the GUI, it is possible to select a group of nodes and create a full mesh or single directional demands between them. Apart from IP uni-cast and multi-cast flows, it is also possible to generate traffic profiles for VoIP and MPLS VPN traffic. To access all these options, use Traffic->Create Traffic Flows.

For data flows, the user can set the bit and packet rates, as well as the class of service fields (DSCP code points) and protocols. If specific ingress/egress interfaces are not specified, then the default loopback interfaces will be used. The user can also specify the flow duration, but this can be defaulted to the analysis timeframe of the network. The protocol field is used to determine the overhead bytes, which are added to the data rate.

For VoIP flows, the user can specify the call volume (in Erlangs), average call duration, flow duration, codec (G.711, G.729, etc.), class of service, and the protocols used to determine header overhead. The flow creator will then create voice packet profiles based on the selected options. Figure 12 shows some sample VoIP options and the network of Figure 11 with VoIP flows added (shown in orange).

Figure 12: VoIP flow options and network view after VoIP flows have been generated
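The arithmetic behind such a VoIP flow profile can be sketched as follows. This is only an approximation under assumed parameters (a G.711-style 64 kbps payload, 20 ms packetization, 40 bytes of IP/UDP/RTP overhead); SP Guru derives its own packet profiles from the options actually selected in the flow creator.

    # Sketch: approximate one-way bit rate of a VoIP flow from its Erlang load
    # and codec.  Assumed codec rate, packetization interval and header sizes
    # are illustrative; they are not SP Guru internals.

    def voip_flow_rate_bps(erlangs, payload_rate_bps=64000, pkt_interval_s=0.020,
                           header_bytes=40):          # assumed IP + UDP + RTP headers
        """Mean one-way bit rate for `erlangs` simultaneous calls."""
        payload_bytes = payload_rate_bps * pkt_interval_s / 8    # e.g. 160 B for G.711 / 20 ms
        pkts_per_s = 1.0 / pkt_interval_s
        per_call_bps = pkts_per_s * (payload_bytes + header_bytes) * 8
        return erlangs * per_call_bps

    # 300 Erlangs of G.711-style traffic with 20 ms packetization:
    print(f"{voip_flow_rate_bps(300) / 1e6:.1f} Mbps")   # about 24 Mbps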

Traffic demands can also be imported from text-based spreadsheets. These files specify much of the information available from the GUI, such as source and destination information, protocols, class of service, name, packet size and bit rate profiles in the defined time units for the time duration the flows are defined for. To import traffic flows from spreadsheets, use Traffic->Import Traffic Flows->From Spreadsheet. The traffic flows can also be exported to spreadsheets using Traffic->Export Traffic Flows->To Spreadsheet. Thus, a traffic flow profile can be saved as a text-based spreadsheet to be used in other projects. In Figure 13, we show a sample text-based spreadsheet file (this text file corresponds to the data traffic from Figure 11) that highlights the required fields.


Figure 13: Text-based spreadsheet for traffic flow import

4.2 LSP Information Input

As in the case of the node data input, LSP data can be input either using configuration files from Cisco and Juniper routers, manually using the SP Guru GUI, or through text file import. Note that MPLS-capable nodes are also referred to as label switched routers (LSRs). LSPs are path entities and are uni-directional.

In Section 3.2, we showed an example where the configuration file import included LSP information.

To manually enter LSPs, select the MPLS_E-LSP_DYNAMIC model from the MPLS palette, which is found under “Paths” in the main object palette. Once the LSP model is selected, create LSPs one by one by selecting a source LSR, intermediate LSRs (if necessary, by right clicking on each intermediate LSR to add it to the path), and a destination LSR (right click and cancel the add action when done). Note that this creates a single LSP from source to destination. To create a pair of LSPs, a second LSP must be created in the opposite direction. Once all LSPs are entered, it is necessary to “commit” the LSP information by selecting Protocols->MPLS->Update LSP Details. Additionally, it is necessary to modify the LSP parameters to configure traffic engineering parameters such as setup and holding priorities, class-based bandwidth requirements, etc. These attributes can be edited by right clicking on the LSP and selecting “Edit Attributes”. In Figure 14, we show how to access the attributes for a newly created LSP. However, manual editing of LSPs is mainly practical for smaller networks or for networks to which incremental additions of LSPs are being made. For moderate to large size networks, manual entry of LSP information can become quite tedious. For these situations, it is much more practical to use text file import as described in Section 3.3. We discuss this method next.

Figure 14: Manual entry of LSPs and editing of LSP attributes

The LSP import file should specify the LSP name (optional, the names are automatically assigned if they are not specified here), LSP model (this has to match an LSP model name in the SP Guru LSP library; a typical model to use in most studies is MPLS_E-LSP_DYNAMIC), source LSR, destination LSR, hop limit constraint (optional), propagation delay constraint (optional), setup priority, holding priority, class type count (1 for single-class LSPs, the number of classes for multi-class LSPs; more on this aspect in the next section), and a list of class types and corresponding bandwidth requirements. Figure 15 shows a part of an LSP import file. Prior to importing LSPs, the LSP import check box in Figure 8 should be checked.

Along with the node and link import files, the LSP import file enables the entire MPLS network topology to be imported into projects, exchanged between projects and input/output to different applications outside of SP Guru. These import files, in addition to the traffic flow import files, enable a complete network model to be prepared for optimization, design and analysis with little or no manual processing. In the next section, we will discuss the preparation of the MPLS network model through different architecture choices prior to optimization and design.

Figure 15: Part of an LSP import file showing the required fields
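As with the node and link files, an LSP import file can be generated programmatically. The sketch below follows the field order described above; the delimiter, column layout, LSR names and bandwidth values are illustrative assumptions, and the authoritative layout is the one shown in Figure 15.

    # Sketch: generate a DS-TE LSP import file.  Field order follows the
    # description above; the exact delimiter/column layout should be taken
    # from Figure 15.  LSR names and bandwidths (bps) are illustrative.

    lsps = [
        # name, model, src LSR, dst LSR, hop limit, delay limit, setup, holding, {ct: bw}
        ("", "MPLS_E-LSP_DYNAMIC", "LSR_A", "LSR_B", "", "", 7, 1, {"ct0": 0}),
        ("", "MPLS_E-LSP_DYNAMIC", "LSR_A", "LSR_B", "", "", 1, 1,
         {"ct1": 100_000_000, "ct2": 50_000_000, "ct3": 40_000_000, "ct4": 10_000_000}),
    ]

    with open("lsps.txt", "w") as f:
        f.write("# Name Model Src Dst HopLimit DelayLimit SetupPrio HoldPrio NumCTs [CT BW]...\n")
        for name, model, src, dst, hops, delay, setup, hold, ct_bw in lsps:
            ct_fields = "\t".join(f"{ct}\t{bw}" for ct, bw in ct_bw.items())
            f.write(f"{name}\t{model}\t{src}\t{dst}\t{hops}\t{delay}\t"
                    f"{setup}\t{hold}\t{len(ct_bw)}\t{ct_fields}\n")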

5. Preparing the MPLS Network For Optimization and Design

The remaining inputs to the MPLS optimization and design process shown in Figure 1 are related to the different architectural choices and optimization objectives. These choices need to be pre-configured prior to proceeding with the optimization. Many network studies involve comparison of the different architectural choices based on different performance figures such as total consumed bandwidth, max bandwidth subscribed, QoS results, etc.

The different architectural choices and optimization objectives available in SP Guru can be summarized as follows:

- Bandwidth model (set on nodes): Russian Dolls Model (RDM) or Maximum Allocation Model (MAM). This determines the class-based bandwidth partitioning on the links incident to the nodes. All nodes in a study need to have the same bandwidth model. If not specified, the default is a single-class network where the link bandwidth corresponds to that single class. Even if single-class networks are being considered, it is recommended to choose a bandwidth model and set it to be single class.

- Link reservable bandwidth (RB): The RSVP reservable bandwidth on all links. This can be specified as a percentage of the available link bandwidth. It refers to the amount of bandwidth available on a link for LSP subscription. The total LSP bandwidth on a link cannot exceed this subscription level (note that for LDP emulation, this can be set to a very large number such as 10000%).

- Link bandwidth partitioning: The partitioning of the link bandwidth among the different classes, which depends on the bandwidth model. The partitioning is performed on the link reservable bandwidth.

- LSP routing constraints: The hop and propagation delay constraints to be used for LSP routing.

- Single class LSPs (which specify bandwidth for a single class of traffic) or multi-class LSPs (which specify bandwidth for multiple classes of traffic simultaneously)

- LSP setup and holding priorities: These range from 0 to 7, with 0 being the highest priority. Typically, the holding priorities are set to 1 and the setup priorities are set to a number between 1 and 7. The highest priority level of 0 is usually reserved for emergency purposes.

- LSP protection options: End-to-end protection or fast re-route (FRR). The FRR implementation in SP Guru Release 11.5 requires much manual input and is not currently viewed as practical for many of our network studies; it is not included as part of the optimization, although it is still being evaluated. Bell Labs internal tools will be integrated into the MPLS M&P in FY06 for a more preferred and flexible FRR optimization. For the time being, end-to-end path protection is used as the available protection option for optimization.

- Optimization objectives: The optimization objectives include minimizing total subscribed bandwidth; minimizing maximum subscribed bandwidth; maximizing minimum residual bandwidth (bandwidth remaining for subscription after the LSPs are routed).

5.1 Bandwidth Models and Link Reservations

SP Guru supports the RDM model for Cisco routers and the MAM model for Juniper routers.

For Cisco routers, the RDM model supports up to two classes of LSPs: ct0 and ct1. Two bandwidth pools are defined: the global pool and the sub-pool. The ct1 class LSPs can subscribe bandwidth in the sub-pool, and the ct0 LSPs can subscribe bandwidth in both the global pool and the sub-pool, if bandwidth is available in the sub-pool. LSPs of class ct1 are assigned higher setup priorities than LSPs of the ct0 class. Thus, the sub-pool is always available to the ct1 class, and to the ct0 class as long as there is residual bandwidth left over from the ct1 class. Let the bandwidths allocated to ct0 and ct1 be referred to as B0 and B1, and let the class type subscriptions be referred to as R0 and R1. Then B0+B1 = RB; R1 ≤ B1; R0 ≤ B0+B1; and R0+R1 ≤ RB. In a mixed VoIP/data network, VoIP LSPs would be of the ct1 class and data LSPs would be of the ct0 class. In Figure 16, we illustrate the concepts of link reservable bandwidth and global pool and sub-pool bandwidth.



Figure 16: Link partitioning using the RDM bandwidth model
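A minimal Python sketch of the RDM admission logic implied by the constraints above is shown below; it is illustrative only and is not the SP Guru implementation. The pool sizes and requests in the example are arbitrary.

    # Sketch: RDM (Russian Dolls Model) admission check for a two-class link,
    # following the constraints above: R1 <= B1, R0 <= B0 + B1 = RB, R0 + R1 <= RB.

    def rdm_admit(ct, bw, r0, r1, b0, b1):
        """Return True if an LSP of class `ct` requesting `bw` fits on the link,
        given current subscriptions r0/r1 and pool sizes b0 / b1 (b0 + b1 = RB)."""
        rb = b0 + b1                       # link reservable bandwidth
        if ct == 1:                        # ct1 may only use the sub-pool
            return r1 + bw <= b1 and r0 + r1 + bw <= rb
        else:                              # ct0 may use the whole reservable bandwidth
            return r0 + r1 + bw <= rb

    # Example: RB = 1000, sub-pool 300; 250 of ct1 and 600 of ct0 already placed.
    print(rdm_admit(1, 100, r0=600, r1=250, b0=700, b1=300))  # False: sub-pool exceeded
    print(rdm_admit(0, 100, r0=600, r1=250, b0=700, b1=300))  # True: 950 <= 1000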

For Juniper routers, the MAM model supports up to eight classes of LSPs: ct0 through ct7. In this model, the total LSP subscription for each class (Ri) is limited to the maximum bandwidth Bi available to that class. The total subscribed bandwidth among all classes cannot exceed the link reservable bandwidth RB. Thus, for 0 ≤ i ≤ 7: Bi ≤ RB; Ri ≤ Bi; and R0+R1+…+R7 ≤ RB. Figure 17 shows an illustration of this bandwidth model.


Figure 17: Link partitioning using the MAM bandwidth model
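For comparison, a corresponding sketch of the MAM admission logic is shown below, again as an illustration of the constraints rather than the tool's implementation. The numeric per-class limits are arbitrary.

    # Sketch: MAM (Maximum Allocation Model) admission check, following the
    # constraints above: Ri <= Bi for each class i, and sum(Ri) <= RB.

    def mam_admit(ct, bw, subscribed, limits, rb):
        """subscribed and limits are dicts keyed by class type (0..7)."""
        if subscribed.get(ct, 0.0) + bw > limits.get(ct, 0.0):
            return False                              # per-class cap Bi exceeded
        return sum(subscribed.values()) + bw <= rb    # aggregate cap RB respected?

    rb = 1000.0
    limits = {0: 0.0, 1: 500.0, 2: 250.0, 3: 200.0, 4: 50.0}   # Bi per class (example)
    subscribed = {1: 450.0, 2: 200.0}
    print(mam_admit(1, 100.0, subscribed, limits, rb))   # False: B1 = 500 exceeded
    print(mam_admit(3, 150.0, subscribed, limits, rb))   # True: 150 <= B3 and 800 <= RB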

5.2 Other Network Settings

MPLS requires an underlying IGP protocol (such as OSPF) and traffic engineering is enabled through RSVP. Thus, prior to running the MPLS optimization and design procedures in SP Guru, it is necessary to make sure these technologies are enabled. The following steps are necessary to prepare the network model for MPLS optimization and design:

- All nodes need at least one loopback interface. In most network models, this interface is available by default. However, if loopback interfaces are not available, use Protocols->IP->Interfaces->Create Loopback Interface to create loopback interfaces on selected or all routers.

- Enable OSPF on all interfaces using Protocols->IP->Routing->Configure Routing Protocols and selecting the OSPF protocol. If loopback interfaces were manually created using the above step, enable OSPF on these interfaces using Protocols->IP->Routing->Configure Routing Protocols On Loopback Interfaces.

- If interfaces do not already have IP addresses assigned, use Protocols->IP->Addressing->Auto Assign IP Addresses.

- Enable MPLS on all interfaces using Protocols->MPLS->Configure Interface Status to configure MPLS on all router interfaces or interfaces on selected links.

- Enable RSVP on all interfaces using Protocols->RSVP->Configure Interface Status to configure RSVP on all router interfaces or interfaces on selected links.

- Commit all LSPs using Protocols->MPLS->Update LSP Details.

5.3 Link Class Partitioning and IP QoS Parameters

One of the input parameters for the mpls_ds_te design action is the transmission link traffic class partitioning (ct0 to ct7). This value describes how much of the available link bandwidth can be allocated to a particular traffic class type. The parameter is expressed as a percentage of the link bandwidth that can be allocated to LSPs carrying traffic of a particular class type. When importing the topology from DS-TE text files as described in Section 3.3, this information is stored in the link topology file, as depicted in Figure 10.

Link class partitioning is used by the mpls_ds_te design action as a design constraint. During the process of routing LSPs across the network, it influences the way LSPs are routed based on the per-class traffic limits existing on each transmission link. This parameter can be important when optimizing existing network LSP routing designs with particular link class partitioning already in place.

Link class partitioning parameters are stored inside SP Guru 11.5 indirectly. These values are not available as part of the editable link parameters. Rather, they are reflected in the IP QoS parameters of the transmitting node. The detailed mechanism depends on the particular vendor implementation (i.e., Cisco or Juniper) but follows an overall common approach. The link traffic class partitioning is reflected in the traffic scheduler queue configuration for a specific MPLS DiffServ traffic class (ct0 to ct7). Below is a short description of the scheduling algorithms available on different routing platforms.


5.3.1 Traffic Schedulers and Queue Configuration

In classical IP DiffServ models, as well as in MPLS DS-TE, the traffic stream is marked at the edge of the network. This allows the differentiation of traffic treatment across the core transmission network. This treatment is generally referred to as Per-Hop Behavior (PHB). One of the most important elements of PHB on a node is the assignment of traffic to a transmission queue on the physical interface and the queue scheduling discipline. While traffic marking and queue assignments are described in many Internet drafts, queue scheduling disciplines are vendor dependent and rely on some proprietary scheduling algorithms. In general, some generic queue disciplines can be described, such as round robin, modified deficit round robin, priority queuing, etc. For the purpose of this document, the Juniper and Cisco implementations will be briefly mentioned.

Juniper routers use a queue discipline known as Deficit Weighted Round Robin (DWRR), a version of the Modified Deficit Round Robin algorithm. While Juniper names its algorithm MDRR, the SP Guru configuration equivalent is in fact the DWRR configuration profile in the IP QoS node configuration. An MDRR configuration profile is also available, but it does not allow configuring multiple levels of queue priorities. It is also important to remember that even though the SP Guru node models for Juniper and Cisco routers are separate entities, their IP QoS configuration parameters for queue disciplines are the same and share a common template. This means that a Cisco proprietary algorithm (such as Weighted Fair Queuing, WFQ) can be configured on a Juniper router model and vice versa.

The generic IP QoS model template is displayed in Figure 18. It contains the node interface QoS information, traffic classes, traffic policies that are applied to these interfaces, and the queue discipline configuration, including:

- MDRR profiles
- WFQ/DWFQ profiles
- Priority Queue profiles
- DWRR profiles

The primary Cisco queuing discipline is WFQ, with some flavors of class-based and flow-based modifications. Its configuration parameters are also available inside the IP QoS Parameters template on the node configuration.


Figure 18: Node IP QoS Parameters Table

5.3.2 Queue Configuration

The queue configuration parameters will constitute the values of link class partitioning requested in a particular design. This will be described here based on a Juniper DWRR example. This approach is related to the fact that Juniper recommends link class partitioning in the MPLS DS-TE model that follows the requested scheduler configuration for the output interface.

Example:

If we have three traffic class definitions (ct0, ct1, ct2) and the required link partitioning is 70%, 20%, and 10% of the interface bandwidth respectively, our scheduler configuration should follow these values. In this case, the interface output queue that will service ct0 traffic should be assigned 70% of the interface capacity, and the two other queues servicing ct1 and ct2 traffic should be assigned 20% and 10% of the interface capacity, respectively. An example DWRR scheduler profile is pictured in Figure 19.

Figure 19: DWRR profiles table

We have three DWRR scheduler queues with 70%, 20%, and 10% of the output interface bandwidth assigned. Note that there are three queue priority levels: Low, High and Strict High. Strict High is a non-starving queue type.
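The following simplified Python sketch shows how DWRR-style per-queue weights of 70/20/10 translate into long-run shares of the served bytes. It ignores priority levels, policing and other platform details, and it is not a model of the Juniper or SP Guru scheduler; packet sizes and the quantum are arbitrary assumptions.

    # Sketch: minimal DWRR service loop showing how per-queue weights
    # (70/20/10 as in the example above) control the share of link bandwidth.

    from collections import deque

    queues = {                      # class -> (weight, backlog of packet sizes in bytes)
        "ct0": (70, deque([1500] * 1000)),
        "ct1": (20, deque([1500] * 1000)),
        "ct2": (10, deque([1500] * 1000)),
    }
    quantum_per_weight = 100        # bytes of credit per weight unit per round
    deficit = {name: 0 for name in queues}
    sent = {name: 0 for name in queues}

    for _ in range(200):            # run a fixed number of scheduling rounds
        for name, (weight, backlog) in queues.items():
            deficit[name] += weight * quantum_per_weight
            while backlog and deficit[name] >= backlog[0]:
                deficit[name] -= backlog[0]
                sent[name] += backlog.popleft()

    total = sum(sent.values())
    for name in queues:
        print(f"{name}: {sent[name] / total:.0%} of bytes served")   # roughly 70/20/10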

When importing the network topology from DS-TE files, the scheduler configuration is deployed without user intervention, so the DWRR profiles and queue bandwidth assignments are performed by the import script. This is sufficient for the mpls_ds_te design action to perform optimum LSP routing across the network. However, this configuration is not necessarily the one existing in the network that is going to be optimized. Special care should be taken when queue priorities are considered: the import script will assign all queues one default priority, which is Low. If this is not the case in the network under study, then these parameters should be manually corrected.

When the network is not imported from DS-TE text files, the queue profiles can be created manually for each node or by using the SP Guru design action ip_qos_configuration, available under the Protocol Configuration tree. This design action can apply a user-defined IP QoS template to all nodes and their interfaces inside the network under study and is usually preferred to the manual method for moderate to large sized networks.

5.3.3 DiffServ Traffic Classification

The traffic demands stored inside an SP Guru network project should have proper QoS markings to allow SP Guru to simulate a DiffServ environment properly and deliver the desired results in the SP Guru simulation and analysis tools. The process of traffic demand creation was described in Section 4.1.

One of the parameters that needs to be configured inside the IP QoS parameters template is the traffic class field. It allows traffic classification based on different markings such as DSCP or MPLS EXP header bits. An example traffic class definition is depicted in Figure 20.

Figure 20: Traffic classes table

The traffic class has a Match Info property that describes the traffic characteristics:

Figure 21: Match Info table

Figure 21 shows DSCP code point markings and Forwarding Class assignments. For the simulation and analysis modules, the traffic markings as configured here should refer to the types of traffic demands that are used in the simulation and analysis. This allows the simulation and analysis engine to match traffic demands to the proper network queues used inside the scheduler.

These traffic classes can be configured by the user or by the topology import script. In the latter case, user intervention is not required when using the mpls_ds_te design action, because the traffic class configuration is part of the overall IP QoS Parameters configuration. However, for simulation purposes (Flow Analysis), it is necessary to add a Match Info property inside the traffic class tables. This Match Info should describe the traffic actually used by the user inside the SP Guru project, because the topology import script does not recognize the user traffic characteristics and applies only the Forwarding Class property to the configured traffic classes.

6. Multi-Class MPLS Network Optimization & Design, Performance Analysis and Reports

In this section, we discuss the multi-class network design action and performance analysis action, as described in Figure 1. The multi-class network design is performed using the mpls_ds_te design action. Performance and flow analysis is performed using the flow analysis (FLAN) module. We also discuss some of the reporting capabilities of SP Guru.

6.1 The mpls_ds_te Design Action

This design action performs the MPLS network optimization/design procedure. To access this design action, use Design->Configure/Run Design Action and choose the mpls_ds_te design action under the Traffic Engineering tab. This will bring up the parameters related to this design action. To access the particular attributes, choose to edit the attributes. The attributes are shown in Figure 22.

Note that the attributes chosen for the LSPs in the mpls_ds_te design attributes override other parameters that may have been imported or set on the LSPs, and these attributes apply to all LSPs. The following is a description of some of these attributes:

- Bandwidth model: RDM or MAM as discussed above. Note that this bandwidth model must match the bandwidth model on all the nodes in the network. Nodes with a different bandwidth model will be excluded from the design.


Figure 22: Attributes for the mpls_ds_te design action

- Maximum considered paths: The algorithm uses a k-shortest paths approach to finding candidate routes for an LSP. This attribute specifies the parameter k. The default value of 8 should be sufficient for many design studies.

- Primary <-> Secondary ER Relationship: Specifies whether primary and backup paths are link, node or SRG disjoint. A value of none means that no backup paths will be computed.

- There are numerous fields under the Primary ER Computation tab:

  o Maximum Link Subscription: This is a multiplier factor for the maximum reservable bandwidth and is used in determining the maximum primary LSP bandwidth subscription on a link. If the maximum reservable bandwidth is set to 80% of the interface, and this multiplier is set to 80%, then the overall maximum reservable bandwidth will be 64%. This multiplier is included here as a convenience so that different levels of maximum reservable bandwidth can be analyzed by the design action without having to change all the interface settings.

  o Maximum Link Subscription Per Class Type: These are additional multipliers for the maximum link subscription for each class type. These are applied after the Maximum Link Subscription multiplier is applied.

  o Link Cost Metric: Used for determining the k-shortest path candidate routes for the LSPs. The k candidates are selected based on the cost metric defined here. The options include TE link cost, OSPF link cost, hops, distance, propagation delay, etc.

  o Max Hops Per LSP: The LSP hop constraint as discussed earlier.

  o Max Delay Per LSP: The LSP propagation delay constraint as discussed earlier.

  o Optimization Objective: The overall objective of the optimization. The choices are minimize subscribed bandwidth, minimize maximum link subscription, and maximize residual bandwidth, as discussed earlier.

  o Advanced options include the number of random cases, random seed, LSP randomization bucket size and number of iterations. These are related to the optimization algorithm, and are discussed below.

- Fields in the Backup ER Computation Tab: The fields are the same as in the Primary ER Computation tab. Most field definitions are identical, but the subscription fields refer to the total bandwidth subscribed including primary and backup bandwidth.

The TE LSP routing problem is NP-hard, and the solution space for primary path routing grows on the order of ~2^(|D||A|), where |D| is the number of LSPs and |A| is the number of unidirectional links (for example, in the design case study at the end of this document, there are |D| = 184 LSPs and 12 bi-directional links, so |A| = 24). The complexity for a protection design is higher and includes an additional factor relating to the number of failures to be considered. For networks of the size considered in typical design studies, the primary/backup path routing problem is computationally intractable and finding provably optimal solutions is not practical. The mpls_ds_te design action therefore uses a heuristic-based approach to arrive at a quality solution satisfying the design objectives.

The mpls_ds_te design action uses a heuristic approach based on LSP ordering to arrive at a network design solution with a minimum cost objective. The following is a high-level overview of the steps taken by the heuristic algorithm:

- The heuristic algorithm creates an initial LSP order first by holding priority, then by setup priority, then by bandwidth (high to low). For equivalent entries, LSP names are used as tiebreakers. Then, the following are performed for each run:

o The algorithm first perturbs or reorders the initial LSP order using the random seed and the bucket size. The LSPs are always reordered in groups that have the same holding and setup priorities. The bucket size determines the number of LSPs perturbed at the same time – a bucket size larger than the group will cause the whole group to be perturbed simultaneously. Also, if all LSPs have the same priorities and a bucket size greater than the number of LSPs is chosen, the random reordering will apply to the entire group of LSPs.


o The primary LSPs are routed in the created order and the LSP routes are determined based on the link cost metric, number of candidate paths in the k-shortest paths procedure and the optimization objective.

o For iterations after the first, all LSPs are sequentially un-routed/rerouted with the aim of achieving a better minimum cost.

o The algorithm then implements the above three steps for the secondary, or backup, LSPs to arrive at a complete solution for each run.

- The minimum cost solution among all runs is selected as the overall design solution.

An illustration of the above procedure is shown in Figure 23.


Figure 23: Illustration of mpls_ds_te design action algorithm heuristics
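A simplified Python sketch of this ordering-and-perturbation idea is given below. The internals of the mpls_ds_te action are not exposed, so the routing step is abstracted into a user-supplied cost function (route_all); the sorting and bucket-based perturbation follow the description above, while the per-iteration un-route/re-route refinement is omitted. Field names such as "hold", "setup" and "bw" are assumptions made for the sketch.

    # Sketch of the LSP-ordering heuristic described above (not the SP Guru code).
    import random

    def initial_order(lsps):
        # Sort by holding priority, then setup priority, then bandwidth high-to-low,
        # with the LSP name as a tiebreaker.
        return sorted(lsps, key=lambda l: (l["hold"], l["setup"], -l["bw"], l["name"]))

    def perturb(order, bucket_size, rng):
        """Shuffle LSPs within runs of equal (holding, setup) priority,
        bucket_size entries at a time."""
        out, i = [], 0
        while i < len(order):
            j, key = i, (order[i]["hold"], order[i]["setup"])
            while j < len(order) and (order[j]["hold"], order[j]["setup"]) == key:
                j += 1
            group = order[i:j]
            for k in range(0, len(group), bucket_size):
                chunk = group[k:k + bucket_size]
                rng.shuffle(chunk)
                out.extend(chunk)
            i = j
        return out

    def design(lsps, route_all, runs=5, bucket_size=4, seed=1):
        """route_all(order) -> cost stands in for routing every LSP in the given
        order over k-shortest candidate paths and evaluating the objective."""
        rng = random.Random(seed)
        base = initial_order(lsps)
        best_cost, best_order = float("inf"), None
        for _ in range(runs):
            order = perturb(base, bucket_size, rng)
            cost = route_all(order)
            if cost < best_cost:
                best_cost, best_order = cost, order
        return best_cost, best_order

    # Example with a dummy cost function (bandwidth weighted by routing position):
    lsps = [{"name": f"lsp{i}", "hold": 1, "setup": 1 + i % 3, "bw": 100 * (i + 1)}
            for i in range(8)]
    print(design(lsps, lambda order: sum(i * l["bw"] for i, l in enumerate(order)))[0])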

6.2 Flow Analysis Module

The Flow Analysis (FLAN) module generates the routing and forwarding tables based on the network configuration. It then routes all flows. As a result of the routing, interface level utilization information is generated. Numerous steady state performance analysis results are then inferred from this information and detailed reports are made available both in the GUI and through text-based spreadsheets.

Prior to running FLAN, select all LSPs by right clicking on an LSP and choosing Select Similar Paths. Then, edit the attributes and make sure the field “Announce IGP Shortcuts” is enabled. Click on Apply Changes to Selected Objects to update this attribute on all LSPs. Enabling this mode on all LSPs ensures that traffic flows are routed on the LSPs during FLAN. The FLAN configuration screen is shown in Figure 24.


Figure 24: Flow analysis configuration window

The interval size refers to the analysis interval and typically covers the intervals for all the flows. There are different options available for collecting performance statistics:

- When total traffic is highest: This would correspond to a busy interval in the network, such as peak total flow times etc.

- Using peak level for each demand: This ensures the network analysis covers the worst-case scenario when the peak rates for all flows have to be carried simultaneously.

- Using average level for each demand: This would correspond to a mean or average utilization analysis, but would not cover flow traffic peaks.

- At a specific interval: A snapshot at a certain network time.
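The difference between these collection options can be illustrated with two short flow profiles; the numbers below are made up and serve only to show how the statistics diverge.

    # Sketch: how the statistics-collection options above differ for two
    # illustrative flow profiles (bit rate per analysis interval, in Mbps).

    flows = {
        "flow_A": [10, 40, 25, 15],
        "flow_B": [30, 10, 20, 35],
    }

    totals = [sum(rates) for rates in zip(*flows.values())]
    busiest_interval = max(range(len(totals)), key=lambda i: totals[i])

    print("When total traffic is highest:",
          {name: rates[busiest_interval] for name, rates in flows.items()})
    print("Peak level for each demand:   ",
          {name: max(rates) for name, rates in flows.items()})
    print("Average level for each demand:",
          {name: sum(rates) / len(rates) for name, rates in flows.items()})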


Note that the FLAN module also has some MPLS options. The FLAN module is capable of implementing sequential routing for LSPs (it does not employ any special optimization algorithms). However, once LSPs are routed using the mpls_ds_te design action, their explicit routes (for both primary and backup LSPs) are locked on the head-end LSRs. Thus, MPLS processing through FLAN is not necessary. As part of the MPLS M&P, we use FLAN to route traffic flows and for traffic steady-state performance analysis.

6.3 Reports

The mpls_ds_te and FLAN modules generate two sets of reports: Logs, messages and overall design information; and detailed information about nodes, links, LSP routes, capacity and cost, flow routing and performance analysis. These reports can be accessed from Design->Results->View Reports and Flow Analysis->Results->View Reports. Most of the detailed reports are text-based spreadsheets and can be exported from the tool. Additionally, these reports contain drill-downs to further details and hyper-links to relevant node, link, LSP, flow etc. objects in the GUI. Figure 25 shows some sample log and spreadsheet reports.

Figure 25: Some sample log and spreadsheet reports

In Figure 28 and Figure 29, we show the list of reports available after FLAN and the mpls_ds_te design action, respectively, are implemented.

In addition to the reports, there are numerous views in the GUI that can be useful for network analysis. Some examples include the link and LSP route views illustrated in Figure 26, and the link utilization view illustrated in Figure 27. Also, subsequent to running the mpls_ds_te design action or FLAN, the node, link, LSP, flow etc. models are updated in the network. The new or updated attributes for any object(s) can be viewed from the GUI by selecting the object(s), right clicking and selecting Edit Attributes.


Figure 26: Illustration of link and LSP route views in the GUI

Figure 27: Link utilization view by color and color legend


Figure 28: Available flow analysis (FLAN) reports

Figure 29: Available mpls_ds_te reports


7. MPLS Network Optimization and Design Case Study

This section presents a case study of an MPLS network design that was performed for a customer using SP Guru. The case study addresses a specific traffic engineering (TE) challenge facing service providers in the operation and growth of an MPLS network. We performed a DiffServ class-based traffic engineering (DS-TE) design to find the routing of both primary and backup LSPs carrying the multi-service traffic. The LSP routing is driven by the objective of minimizing the total link bandwidth subscribed by the routing of LSPs.

The study was for a core network that carries Internet Access (IA) traffic and four different classes of VPN traffic. The core consists of seven nodes (containing Juniper T640 routers) located at Beijing, Xian, Shanghai, Shenyang, Chengdu, Wuhan and Guangzhou. Beijing, Shanghai and Guangzhou are considered to be the gateway nodes for IA traffic. IA traffic aggregated at the gateway nodes does not traverse the MPLS core network but goes directly to the Internet off those gateway nodes. On the other hand, IA traffic aggregated at any of the four non-gateway nodes must traverse the core network to reach one of the gateway nodes. Each non-gateway node sends IA traffic to the two geographically closest gateway nodes, split equally between the two gateways. The IA traffic consists of 16 uni-directional flows and is 5508 Mbps in total. The IA traffic matrix is shown in Table 1. IA traffic is assumed to be of class type ct0.

IA Traffic    Beijing  Shanghai  Guangzhou   Xian  Wuhan  Chengdu  Shenyang
Beijing          -        -         -         296    188     460      433
Shanghai         -        -         -         296     -       -       433
Guangzhou        -        -         -          -     188     460       -
Xian            296      296        -          -      -       -        -
Wuhan           188       -        188         -      -       -        -
Chengdu         460       -        460         -      -       -        -
Shenyang        433      433        -          -      -       -        -

Table 1: IA Traffic Demand Matrix (Mbps)

VPN traffic demands exist between each pair of core nodes. The 42 source-destination pairs are shown in Table 2, for a total VPN traffic of 15,655 Mbps. The total VPN traffic between each pair of nodes was split into the four VPN classes for a total of 168 VPN demands, using the following percentages: 50% for VPN0, 25% for VPN1, 20% for VPN2 and 5% for VPN3. The VPN traffic was assumed to be of class types ct1 (for VPN0) to ct4 (for VPN3).


Total VPN Traffic   Beijing  Shanghai  Guangzhou    Xian   Wuhan  Chengdu  Shenyang
Beijing                -     1502.5    2209.5      456.5   283     736      690
Shanghai            1502.5      -       451.5      108.5    68.5   170.5    160.5
Guangzhou           2209.5    451.5       -         149     94     234.5    221
Xian                 456.5    108.5      149         -      23.5    58       54.5
Wuhan                283       68.5       94        23.5     -      36.5     34.5
Chengdu              736      170.5     234.5       58      36.5     -       85
Shenyang             690      160.5      221        54.5    34.5    85        -

Table 2: Total VPN Traffic Demand Matrix (Mbps)
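For reference, the per-class split of these totals can be reproduced with a few lines of Python; only two of the 42 node pairs from Table 2 are shown as input, and the rounding convention is an assumption of the sketch.

    # Sketch: split the total VPN demand between a node pair into the four
    # VPN classes using the 50/25/20/5 percentages given above.

    total_vpn_mbps = {
        ("Beijing", "Shanghai"): 1502.5,
        ("Beijing", "Guangzhou"): 2209.5,
    }
    class_split = {"VPN0": 0.50, "VPN1": 0.25, "VPN2": 0.20, "VPN3": 0.05}

    demands = []
    for (src, dst), total in total_vpn_mbps.items():
        for cls, frac in class_split.items():
            demands.append((src, dst, cls, round(total * frac, 3)))

    for d in demands:
        print(d)
    # e.g. ('Beijing', 'Shanghai', 'VPN0', 751.25) ... ('Beijing', 'Guangzhou', 'VPN3', 110.475)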

The goal of the MPLS-TE design is to find the routing of LSPs carrying multi-class IA and IP VPN traffic such that the network cost, defined as the total consumed link bandwidth, is minimized. In the subsequent descriptions, we first define the three design scenarios addressed, followed by the procedure used in the MPLS network TE design using SP Guru. Lastly, we present the TE design results with a comparison and further analysis.

7.1 Design Scenarios

For the MPLS network, all traffic demands are carried on LSPs. IA demands are carried on LSPs with zero TE bandwidth and are as a result routed over shortest paths. IP VPN demands are carried on LSPs with non-zero TE bandwidth, where the LSPs can be single or multi-class depending on the architecture choice. In the case of a single-class LSP, the LSP carries traffic from a single grade of IP VPN service and contains a bandwidth request (with MPLS framing overhead included) for that grade of service, while in the case of a multi-class LSP, the LSP carries traffic from multiple grades of IP VPN service and contains multiple bandwidth requests, one for each grade of service. In either case, each LSP is assigned a setup and holding priority that, combined with the class type(s) associated with the LSP, is used to decide the routing priority of the LSP in the TE design. All LSPs are protected against single link failures by end-to-end path protection backup paths. The backup paths share capacity as long as the corresponding primary paths are link-disjoint.1

Both primary and backup LSPs are routed over the given MPLS network. On all network links, 100% of the available bandwidth (after deducting SONET framing overhead) is available to the routing of LSPs (the link reservable bandwidth is 100%). In addition, the available bandwidth on a network link may be pre-partitioned among various class types (again depending on the architecture choice), and the routing of LSPs must be subject to this constraint. When class-based partitioning of link bandwidth is applied, the MAM bandwidth model is used as the bandwidth constraint model on all network links. The routing of primary and backup LSPs is decided through traffic engineering with an objective of minimizing the total link bandwidth consumed (or subscribed) by the routing of LSPs.

1 In some cases sharing of backup bandwidth is subject to the Shared Risk Group (SRG) constraint. That is, backup paths for link-disjoint primary LSPs cannot share capacity if some of the disjoint links from the primary LSPs are in the same SRG – i.e., those disjoint links are actually provisioned through physical conduits that have shared risk such that one failure in the physical network may bring down all of the primary LSPs at the same time. Although we have the capability to address SRG in the protection design, we do not consider it in the case study.

Based on different architecture choices, three different design scenarios were considered in the MPLS TE design. These are summarized in Table 3 and Table 4, and discussed below.

                    Class Based Link Bandwidth Partitioning
LSP Type            No            Yes
Single Class        Scenario 1    Scenario 2
Multi Class         -             Scenario 3

Table 3: Summary of Design Scenarios

Setup Priorities   Scenario 1   Scenario 2   Scenario 3
VPN3                   1            1        1 (one multi-class VPN LSP;
VPN2                   2            2         per-class bandwidth shares of
VPN1                   3            3         50%, 25%, 20% and 5%)
VPN0                   4            4
IA                     7            7        7

Table 4: LSP Class Setup Priorities

Single-class LSPs allow for more granular bandwidth control, whereas multi-class LSPs present operational advantages.

Design Scenario 1

This design scenario corresponds to a traffic-engineered network with single-class LSPs and no class-based partitioning of link bandwidth. Each LSP carries one traffic class, thus there are 5 types of LSPs: IA and VPN0 through VPN3. Each traffic type is bound to its corresponding LSP type; for example, VPN3 traffic is carried on VPN3 LSPs. As discussed above, IA LSPs have zero TE bandwidth. For VPN LSPs, the TE bandwidth is set to the amount of traffic carried (plus the required MPLS overhead). The links are not partitioned into classes and 100% of the link bandwidth is available for TE subscription. All LSPs have a holding priority of 1. The setup priorities are as follows: 1 for VPN3, 2 for VPN2, 3 for VPN1, 4 for VPN0 and 7 for IA LSPs. Thus, the VPN3 LSPs have the highest setup priority.

Design Scenario 2
This design scenario corresponds to a traffic-engineered network with single-class LSPs and class-based partitioning of link bandwidth. Each LSP carries one traffic class, so there are 5 types of LSPs: IA and VPN0 through VPN3, the same as in Design Scenario 1. All of the link bandwidth is available for TE subscription, partitioned among the class types as follows: IA: 0%, VPN0: 50%, VPN1: 25%, VPN2: 20%, VPN3: 5%. Thus, the link partitioning uses a priori knowledge of the class type bandwidth requirements. LSPs have the same holding and setup priorities as assigned in Design Scenario 1.

Design Scenario 3
This design scenario corresponds to a traffic-engineered network with multi-class LSPs and class-based partitioning of link bandwidth. Single-class LSPs are still used to carry IA traffic, while the four classes of VPN traffic are carried together on multi-class VPN LSPs. Each multi-class LSP contains bandwidth requests for each of the four classes of VPN traffic. All of the link bandwidth is available for TE subscription, partitioned as follows: IA: 0%, VPN0: 50%, VPN1: 25%, VPN2: 20%, VPN3: 5%, the same as in Design Scenario 2. All LSPs have a holding priority of 1. Single-class IA LSPs have a setup priority of 7 and multi-class VPN LSPs have a setup priority of 1.
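
For reference, the three scenario configurations described above can be summarized compactly as configuration data. The following Python snippet is purely illustrative; the dictionary layout is an assumption made here, not an SP Guru input format.

# Illustrative summary of the three design scenarios described above
# (dictionary layout is an assumption, not an SP Guru input format).

SETUP_PRIORITIES = {"VPN3": 1, "VPN2": 2, "VPN1": 3, "VPN0": 4, "IA": 7}
LINK_PARTITION = {"IA": 0.00, "VPN0": 0.50, "VPN1": 0.25, "VPN2": 0.20, "VPN3": 0.05}

SCENARIOS = {
    1: {"lsp_type": "single-class", "link_partitioning": None,
        "setup_priorities": SETUP_PRIORITIES, "holding_priority": 1},
    2: {"lsp_type": "single-class", "link_partitioning": LINK_PARTITION,
        "setup_priorities": SETUP_PRIORITIES, "holding_priority": 1},
    3: {"lsp_type": "multi-class VPN + single-class IA", "link_partitioning": LINK_PARTITION,
        # one multi-class VPN LSP (setup priority 1) plus single-class IA LSPs (priority 7)
        "setup_priorities": {"VPN": 1, "IA": 7}, "holding_priority": 1},
}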

7.2 MPLS TE Design Procedure Using SP Guru

Using SP Guru, the following steps were followed in the MPLS TE design:

- Import network topology and traffic demands for multiple CTs
- Import LSP bandwidth requests for single or multiple CTs
- Run design action to find routing paths of primary and backup LSPs with the objective to minimize the total consumed link bandwidth
- Run flow analysis that places traffic onto the routed LSPs and collects network performance data (such as hop counts and link bandwidth subscription by both primary and backup LSPs)

The MPLS network topology (nodes and links) was imported into SP Guru via Juniper configlet files. Figure 30 shows a non-geographical display2 of the MPLS network, consisting of 7 Juniper T640 routers and 11 OC-48 links, as viewed via the graphical user interface (GUI) of SP Guru after the import of the configlet files. As shown in the figure, there is only one OC-48 link between Shenyang and Beijing. For redundancy purposes, a second OC-48 link between this pair was added in the MPLS TE design.

2 Since no coordinates are provided in the Juniper configlet files, the display of network nodes in Figure 30 does not reflect their geographical locations. The nodes were later moved to proper locations on the map (Figure 31).

Figure 30: China Unicom MPLS network after configlet file import to the tool

Label edge routers (LERs) were attached to each T640 node. For Design Scenarios 1 and 2, where single-class LSPs are used, four LERs (each corresponding to one grade of IP VPN) were attached to each T640, while in Design Scenario 3, where multi-class LSPs are used, one LER (corresponding to all four grades of IP VPN) was attached to each T640. Figure 31 shows LERs attached to T640s in Design Scenarios 1 and 2; for example, the T640 in Beijing has 4 LERs attached: Beijing_VPN0 through Beijing_VPN3. In these scenarios, an IP VPN traffic demand of a particular grade between a pair of T640s is modeled as a demand between the pair of LERs that correspond to that grade and are attached to the respective T640s. A single-class LSP to carry this demand between the LERs is then determined in the TE design. The same traffic-to-LSP binding is achieved in Design Scenario 3 by using multi-class LSPs between LERs. Note that in this modeling each LER is attached to a T640 via two OC-48 links to make it feasible to find backup LSPs. In this study we create LERs for IP VPN traffic only; IA traffic is carried by LSPs directly between the T640s.

Figure 31: Network model for Design Scenarios 1 and 2 after modifications

The traffic demands and LSP information (including bandwidth requests) were imported into SP Guru via text file import. Figure 32 shows the network model for Design Scenario 3 after the traffic demands and LSP information are imported; one LER is attached to each T640, multi-class LSPs for IP VPN traffic are drawn between the LERs, and IA LSPs are drawn between the T640s.

Figure 32: Network model for Design Scenario 3 showing IA and multi-class VPN LSPs

7.3 MPLS Network Design

After the network topology, traffic demands and LSPs were imported, the routing of primary and backup LSPs was determined by the mpls_ds_te design action. The mpls_ds_te design action uses a heuristic approach (as mentioned above) based on LSP ordering to arrive at a network design solution with a minimum cost objective. For this study, the cost is defined to be the total link bandwidth subscribed by the LSPs across the network. The outcome of the TE design is a set of explicitly routed LSPs, which can be evaluated based on the following metrics:

- Minimum, maximum and average hop count of LSP explicit routes
- Minimum, maximum and average link TE subscription3

Subsequent to the TE design, we ran the flow analysis (FLAN) module available in the tool, in which traffic demands were placed on the primary LSP explicit-route (ER) paths and link utilization4 was measured. We did not conduct failure analysis in this study.
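
To make the distinction between link TE subscription and link utilization (see footnotes 3 and 4) concrete, the following minimal Python sketch computes both quantities for a single link from an assumed set of routed LSPs. The data structures and values are illustrative only; because IA LSPs reserve zero TE bandwidth but still carry traffic, utilization can exceed subscription.

# Illustrative sketch (assumed data structures, not the SP Guru report format):
# link TE subscription counts the TE bandwidth *reserved* by LSPs routed over a
# link, while link utilization counts the *actual traffic* carried. IA LSPs with
# zero TE bandwidth contribute to utilization but not to subscription.

from collections import defaultdict

# each LSP: (explicit route as list of links, reserved TE bandwidth, carried traffic), in Mbps
LSPS = [
    (["Beijing-Guangzhou"], 1151.37, 1151.37),   # VPN LSP: reserves what it carries
    (["Beijing-Guangzhou"], 0.0,      359.56),   # IA LSP: zero TE bandwidth
]
LINK_CAPACITY = {"Beijing-Guangzhou": 2377.70}   # usable Mbps

def per_link_percentages(lsps, capacity):
    reserved, carried = defaultdict(float), defaultdict(float)
    for route, te_bw, traffic in lsps:
        for link in route:
            reserved[link] += te_bw
            carried[link] += traffic
    return {
        link: (100.0 * reserved[link] / cap,   # TE subscription (%)
               100.0 * carried[link] / cap)    # utilization (%)
        for link, cap in capacity.items()
    }

if __name__ == "__main__":
    for link, (sub, util) in per_link_percentages(LSPS, LINK_CAPACITY).items():
        print(f"{link}: subscription {sub:.1f}%, utilization {util:.1f}%")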

7.4 Study Results and Analysis

SP Guru is capable of generating a rich set of class-based and summary reports on the outcome of the TE design and Flow Analysis. For example, Figure 33 to Figure 35 show portions of the LSP explicit routes and link TE subscription reports generated after the TE design, and the link utilization report generated after the flow analysis, all for Design Scenario 2.

Figure 33: LSP Explicit Routes Report for Design Scenario 2 (Partial)

Figure 34: Link TE Subscription Report for Design Scenario 2 (Partial)

3 Link TE subscription refers to the portion of a link’s capacity that is reserved for all LSPs routed over the link during TE design. As discussed earlier, link subscription can be different from link utilization.

4 Link utilization, as opposed to link TE subscription, refers to the portion of a link’s capacity that is consumed by actual traffic. Its measurement is usually obtained through a flow analysis where actual traffic is placed onto the network. A link’s utilization may be higher or lower than its TE subscription.

Figure 35: Link Utilization Report for Design Scenario 2 (Partial)

Based on the information generated by the reports, we summarize the study results below, comparing the three alternative Design Scenarios. The study results are presented in terms of three metrics: LSP hop count, link TE subscription and link utilization.

LSP Hop Count
Minimum, average and maximum hop counts5 of primary and backup LSPs are summarized in Figure 36. Note that all primary LSPs were successfully routed in all design scenarios. However, in Design Scenario 1 backup paths failed for 5 VPN LSPs (corresponding to 5 unprotected VPN demands), while in Design Scenario 3 backup paths failed for 4 multi-class VPN LSPs (corresponding to 16 unprotected VPN demands). For Design Scenario 2, all backup LSPs were successfully routed.

5 The calculation of hop count excludes the one hop between LER and T640 that applies to IP VPN LSPs.

LSP Hops (in Core Links)     Scenario 1: 184 LSPs        Scenario 2: 184 LSPs        Scenario 3: 58 LSPs
                             IA      VPN      All        IA      VPN      All        IA      VPN       All
Primary   Min Hops           1       1        1          1       1        1          1       1         1
          Avg Hops           1.125   1.54     1.50       1.125   1.57     1.53       1.125   1.57      1.53 (1)
          Max Hops           2       2        2          2       3        3          2       3         3
Backup    Min Hops           1       1        1          1       1        1          1       1         1
          Avg Hops           2       2.63 (2) 2.57 (2)   2       2.77     2.71       2       2.95 (3)  2.86 (1,3)
          Max Hops           3       5        5          3       5        5          3       5         5

(1) Normalized to the number of demands per VPN LSP
(2) Backup paths failed for 5 LSPs (5 VPN demands unprotected)
(3) Backup paths failed for 4 LSPs (16 VPN demands unprotected)

Figure 36: LSP Hop Counts

For IP VPN, the hop count for primary and backup LSPs increases from Design Scenario 1 to Scenarios 2 and 3, since the class-based link bandwidth partitioning employed in Design Scenarios 2 and 3 reduces the chances of picking topologically shortest paths. The hop count for backup LSPs increases from Design Scenario 2 to Scenario 3 because it is more difficult in Design Scenario 3 to route multi-class backup LSPs with a much higher total bandwidth (equivalent to routing 4 single-class LSPs simultaneously) over the residual capacity left after routing the primary LSPs, forcing the backup LSPs onto longer paths. The same difficulty also results in the 4 un-routable backup LSPs in Design Scenario 3. For IA, the LSP hop counts are the same in all Design Scenarios since IA LSPs with zero TE bandwidth are always routed over shortest paths.

Link TE Subscription by Primary and Backup LSPs
Link TE subscription, by primary and backup LSPs respectively, is summarized in Figure 37. In Design Scenario 1, where link bandwidth is not partitioned, LSPs of higher priority with smaller bandwidth get routed first6, causing LSPs of lower priority with larger bandwidth to be routed over longer paths (this also accounts for the 5 un-routable backup IP VPN LSPs in Design Scenario 1 noted in Figure 36). In Design Scenarios 2 and 3, on the other hand, class-based link partitioning was employed, preventing higher priority LSPs from using up link capacity and leaving room for lower priority IP VPN LSPs to find shorter paths. As a result, the average link TE subscription by primary LSPs (referred to as primary link TE subscription in the subsequent discussion) in Design Scenario 1 is higher than in Design Scenarios 2 and 3. The average primary link TE subscriptions in Design Scenarios 2 and 3 are identical since the class-based link partitioning was chosen to be proportional to the distribution of multi-class traffic. Finally, the difficulty of routing multi-class LSPs, as mentioned above, led to a higher average backup link TE subscription for Design Scenario 3 as compared to the other two scenarios.

6 This behavior is a result of the internal optimization algorithm used. The algorithm is being enhanced to arrive at a better solution for the TE design.

Design       Link TE Subscription by         Link TE Subscription by         Link TE Subscription by
Scenario     Primary LSPs (%)                Backup LSPs (%)                 All LSPs (%)
             Min      Avg      Max           Min      Avg      Max           Min      Avg      Max
Scenario 1   0.00     41.32    99.90         0.00     35.78    98.86         52.73    77.02    64.63
Scenario 2   0.00     36.30    99.37         0.00     18.68    96.85         23.67    54.98    54.59
Scenario 3   0.00     36.30    97.81         0.00     38.07    96.85         52.29    74.37    85.87

Figure 37: Link TE Subscription by Primary and Backup LSPs

Link Utilization by Primary LSPs
Link utilization7, in both forward and return directions, by traffic on the primary LSPs is summarized in Figure 38. Note that some links have link utilization greater than 100%. This is because IA LSPs are routed over shortest paths with zero TE bandwidth in this study, leaving them with no bandwidth reservation on the links. On certain links where IA traffic is routed and there is a high level of IP VPN traffic, the link utilization will be greater than 100%. Counting both forward and return directions, the total link capacity consumed by traffic on primary LSPs is 28,257 Mbps for Design Scenario 1 and 25,632 Mbps for both Design Scenarios 2 and 3.

As a general technical summary: Scenario 1 is best in terms of minimum hop counts; Scenario 2 is best if all LSPs are to be successfully protected; Scenario 2 is also best in terms of overall TE subscription; and Scenarios 2 and 3 are best in terms of link utilization.

Design       Min Fwd    Avg Fwd    Max Fwd    Total Fwd Link       Min Rtn    Avg Rtn    Max Rtn    Total Rtn Link
Scenario     Util (%)   Util (%)   Util (%)   Capacity             Util (%)   Util (%)   Util (%)   Capacity
                                              Consumed (Mbps)                                       Consumed (Mbps)
Scenario 1   12.36      55.51      117.84     14,520               0.00       52.52      116.19     13,737
Scenario 2   12.36      49.00      112.03     12,816               0.00       49.00      102.35     12,816
Scenario 3   12.36      49.00      112.03     12,816               0.00       49.00      102.35     12,816

Figure 38: Link Utilization by Traffic on Primary LSPs

7 The calculation of link utilization excludes those links between LERs and T640s.

Scenario 2 generates a TE design with the lowest link utilization and overall link capacity consumed while maintaining only slightly higher hop counts for primary and backup LSPs. We discuss further considerations for service providers in the next section.

7.5 What Does It Mean To Service Providers?

From a service provider’s perspective, the design scenarios addressed in the study represent different options for designing its MPLS network. Each of the network performance metrics shown above corresponds to a key requirement for the operation of the MPLS network. For example, a maximum LSP hop count for a particular class of traffic may be needed in order to meet the end-to-end delay requirement for that class (such as the maximum end-to-end delay for VoIP). Link subscription by LSPs reflects how efficiently the capacity of the MPLS network is used by multi-class traffic, and could be used to derive the total network cost or the cost per unit of bandwidth carried. As the study showed, the various design scenarios led to different values of the network performance metrics. Depending on the particular requirements the service provider sets for the MPLS network, one design scenario will be the best option to adopt. For example, Design Scenario 2 (with single-class LSPs and class-based link bandwidth partitioning), which generates a TE design with the lowest link utilization and overall link capacity consumed while maintaining only slightly higher hop counts for primary and backup LSPs, would be a good choice for a service provider looking to enhance MPLS network efficiency and reduce unit bandwidth cost.

Single-class LSPs allow more granular routing and thus make better use of the available capacity. From a service provider’s point of view, however, multi-class LSPs can offer operational advantages. While partitioning link bandwidth may result in under-utilization of existing resources, it also provides fairness in that a certain amount of bandwidth is always available to each class type.

8. Interface to Other Tools

There are other tools and procedures within Bell Labs and Lucent that can be used for different studies involving network optimization, design and analysis. Some of these functionalities complement those found in SP Guru, while others overlap in features but involve different algorithms or mathematical methods, and some are tailored for specific conditions and applications. It may be advantageous to perform certain portions of a study using these other tools. Some examples include CPLEX, INDT, iOptimize, DBR, etc. The interface of SP Guru to other tools is typically provided through the text file import/export capabilities discussed earlier in this document. For purposes of MPLS analysis, most of the required information is found in the Node, Link and LSP text files that can be imported/exported. LSP paths can be imported/exported using the LSP explicit route import/export design action, accessed from Design->Configure/Run Design Action and selecting Import or Export. The typical procedure is illustrated in Figure 39. It may be necessary to parse the SP Guru output files into a format compatible with the other tools, and it may also be necessary to parse the output files from the other tools into a format compatible with Opnet. Furthermore, these import/export procedures may not be able to capture all required information for a given tool, and the missing information may need to be added manually or through the parser. However, both SP Guru and most internal tools employ simple text-based import/export formats, and converting between these formats is usually not overly complex.

[Figure 39 depicts the data exchange: SP Guru export files are parsed into text files of input data (Nodes, Links, LSPs) for the Lucent/Bell Labs tools/algorithms, and the text files of output data (Nodes, Links, LSPs) produced by those tools are in turn parsed into SP Guru import files.]

Figure 39: SP Guru interface to other Lucent/Bell Labs tools and algorithms

In the next section, we present methods and procedures for performing MPLS network design using CPLEX. The input files required by CPLEX are very similar to the Node, Link and LSP text files exported by SP Guru, and the parsing process can be accomplished by making minor modifications to these text files in Excel.

9. M & P for MPLS Network Design Using CPLEX

ILOG’s CPLEX is a commercially available, industrial-strength software package for solving linear and integer linear optimization problems. In this section we first provide a brief background on discrete optimization and its relevance to design problems for MPLS networks, and then discuss how to apply CPLEX to some of these network design problems. Compared to the heuristic methods used in SP Guru, CPLEX has the ability to solve the problem to optimality, as we discuss further in the next section. The alternate design method through CPLEX can also provide a lower bound that indicates the quality of the solution. Additionally, multiple objectives (such as minimizing total bandwidth and minimizing the maximum link subscription) can be used simultaneously, with specifiable relative weights; thus the CPLEX method can be used in certain types of multi-objective problems that cannot be solved using SP Guru.

In the current version of the document, we use CPLEX for unprotected MPLS TE design. This document will be updated with a method to use CPLEX for MPLS TE design with protection.

9.1 Discrete Optimization and Network Design - Brief overview

Many network planning and design problems can be formulated as optimization problems. These are mathematical problems in which a goal or objective, such as cost minimization, is sought while constraints exist on the possible ingredients in, or aspects of, candidate solutions, such as the capacity available on any link. Mixed integer linear optimization problems are a special case in which some of the solution ingredients are taken from a discrete rather than a continuous set of possibilities. For example, when finding a path or route for an LSP from its source to its destination, a particular link or edge of the graph representing the network topology is either utilized or not; this can be represented with a value of 1 for the edge if it is used in a particular LSP’s route and 0 otherwise. The MPLS design can be formulated as a mixed integer linear optimization problem, as shown in Figure 40, where all of the variables occur linearly and some variables can only take on integer values (1 or 0) while others can vary continuously. Typically there are many combinations of links that would produce an LSP’s route through the network from a source node to a destination node. The integer optimization problem is to find a combination of links that provides a route for each LSP such that the link capacities are obeyed and the objective is achieved. A set of routes, one for each LSP, that obeys the capacity constraints is called a feasible solution, and feasible solutions with the best objective value are called optimal solutions.

Definitions and Input Parameters
  N            node set
  A            set of directed arcs (an arc a = (i,j) ∈ A ⊆ N × N)
  C_a          capacity of arc a
  D            demand set
  s_d (t_d)    source (destination) node of demand d
  B_d          bandwidth of demand d

Decision Variables (outputs)
  u ∈ [0, 1]   the maximum utilization on any arc
  x_a^d        1 if demand d uses arc a; 0 otherwise

Formulation (auxiliary cut constraints not included)

Minimize a weighted mix of the maximum utilization and the total bandwidth across all arcs:

  min  W·u + B·Σ_{a∈A} Σ_{d∈D} B_d·x_a^d

subject to:

Connectivity (unit flow) conservation at each node (δ_{i,j} denotes the Kronecker delta):

  Σ_{(n,j)∈A} x_{(n,j)}^d − Σ_{(i,n)∈A} x_{(i,n)}^d = δ_{n,s_d} − δ_{n,t_d}    for all n ∈ N, d ∈ D

Total flow on an arc cannot exceed the utilizable arc capacity:

  Σ_{d∈D} B_d·x_a^d ≤ u·C_a    for all a ∈ A

Note: for Bandwidth Minimization, set u = 1 and W = 0 (B = 1); for Load Leveling, let u be arbitrary and set W = 1 (B = 0). The weights W and B correspond to the parameters of the same names in pathplan.parm (Section 9.2).

Figure 40: Mixed Integer Linear Program formulation of the MPLS TE design problem.
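
As an illustration of the Figure 40 formulation, the following self-contained Python sketch encodes a three-node toy instance, with node names, arc capacities and demand bandwidths taken from the sample files in Section 9.2, using the open-source PuLP modeling package and its bundled CBC solver in bandwidth-minimization mode (W = 0, B = 1). This is an illustrative sketch under those assumptions, not the AMPL pathplan model described in the next subsection, which is the documented method.

# Toy encoding of the Figure 40 MILP (bandwidth minimization: W = 0, B = 1) using
# the open-source PuLP package. Illustrative only; not the AMPL "pathplan" model.

import pulp

ARCS = {  # directed arc (src, dest) -> usable capacity in Mbps
    ("Beijing", "Shanghai"): 2377.70, ("Shanghai", "Beijing"): 2377.70,
    ("Beijing", "Guangzhou"): 2377.70, ("Guangzhou", "Beijing"): 2377.70,
    ("Shanghai", "Guangzhou"): 2377.70, ("Guangzhou", "Shanghai"): 2377.70,
}
NODES = {n for arc in ARCS for n in arc}
DEMANDS = {  # LSP name -> (source, destination, TE bandwidth in Mbps)
    "Beijing_Guangzhou_0": ("Beijing", "Guangzhou", 1151.37),
    "Guangzhou_Beijing_0": ("Guangzhou", "Beijing", 1151.37),
    "Beijing_Shanghai_0":  ("Beijing", "Shanghai", 782.95),
}
W, B = 0.0, 1.0  # objective weights, as in pathplan.parm (Figure 45)

prob = pulp.LpProblem("pathplan_toy", pulp.LpMinimize)
u = pulp.LpVariable("u", lowBound=0, upBound=1)  # maximum utilization on any arc
x = {(d, a): pulp.LpVariable(f"x_{d}_{a[0]}_{a[1]}", cat="Binary")
     for d in DEMANDS for a in ARCS}

# Objective: weighted mix of max utilization and total subscribed bandwidth
prob += W * u + B * pulp.lpSum(DEMANDS[d][2] * x[d, a] for d in DEMANDS for a in ARCS)

# Unit-flow conservation for every demand at every node
for d, (src, dst, _) in DEMANDS.items():
    for n in NODES:
        rhs = (1 if n == src else 0) - (1 if n == dst else 0)
        prob += (pulp.lpSum(x[d, a] for a in ARCS if a[0] == n)
                 - pulp.lpSum(x[d, a] for a in ARCS if a[1] == n)) == rhs, f"flow_{d}_{n}"

# Total reserved bandwidth on an arc cannot exceed the utilizable arc capacity
for a, cap in ARCS.items():
    prob += pulp.lpSum(DEMANDS[d][2] * x[d, a] for d in DEMANDS) <= u * cap, f"cap_{a[0]}_{a[1]}"

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[prob.status], " total subscribed BW:", pulp.value(prob.objective))
for (d, a), var in x.items():
    if var.value() and var.value() > 0.5:
        print(f"{d} uses arc {a[0]} -> {a[1]}")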

Finding optimal solutions to such problems can be difficult because of the many possible link combinations that need to be considered to construct routes that do not violate the link capacities. These problems are categorized as NP-hard. In practical terms this means that the amount of computational work required to find optimal solutions may increase exponentially with some measure of the problem size, e.g., the number of nodes, in which case finding optimal solutions could be impractical for even modest-sized problems. Because of the potential or actual difficulty of solving NP-hard network design problems to optimality, tools developed for solving them use approximate or heuristic methods to search among the possible solutions. Such methods generate feasible solutions and may even find optimal solutions, although they usually cannot guarantee provable optimality no matter how much computational effort is applied. SP Guru and INDT are examples of tools that employ heuristic methods; for example, we illustrated the heuristic method used by SP Guru in Figure 23. There are also tools available that can find provably optimal solutions. CPLEX is an example of such a tool. It employs a search method known as branch-and-bound (or branch-and-cut) that systematically searches the set of feasible possibilities for an optimal solution. In principle, branch-and-bound methods can find optima when they exist; in practice this can take an unacceptably long time, but the technique produces feasible solutions of increasing quality during the search process.

Typically there is a tradeoff between solution quality (relative to optimality) and solution time. Solution techniques such as branch-and-bound can produce higher quality solutions than heuristic methods, but can also take more time to obtain them. Depending on the task, one tool may provide better quality solutions more quickly than the others. Additionally, the customizable models used in integer programming methods can incorporate features and capabilities that are not currently available in the other tools. As such, we provide both heuristic and integer programming options for network design, to be used as appropriate. In the next section we provide an introductory application of integer programming methods to MPLS design using CPLEX.

9.2 M & P for MPLS Network Design Using CPLEX

In this subsection, it is assumed that the user has access to a licensed and installed copy of CPLEX and AMPL. AMPL (A Mathematical Programming Language) is a high-level modeling language, originally developed at Bell Labs, for formulating mathematical programs such as linear, mixed integer, or non-linear optimization problems and interfacing them with solver software such as CPLEX. Once the problem is formulated in AMPL syntax, AMPL combines the formulation with the input data to build a problem instance in an industry-standard mathematical programming format, which it then hands to the solver software. Once the solver has solved the problem according to the specified optimization criteria, it returns the solution to AMPL, which post-processes the solution into a user-specified output format. AMPL is now commercially available from ILOG as well. AMPL and CPLEX can be set up in either Windows or UNIX environments; in this discussion we assume they are set up in a UNIX operating environment.

Currently, the mixed integer programming models expressed in AMPL and available for solving under CPLEX are:

- Pathplan – when solved, its solutions provide unidirectional paths or routes for LSPs based on a user-specified weighted mix of two optimization criteria: (1) minimizing the total bandwidth subscription on all links, and (2) minimizing the maximum utilization on any link.

- Pathplan-sym – similar to Pathplan, this model allows the user to impose symmetric routing when forward and reverse LSPs are required between a source-destination node pair, i.e., the reverse route will be parallel to, but oppositely directed from, the forward LSP’s route.

To utilize these models, three text input files need to be provided:

- nodes.txt – a list of network nodes
- arcs.txt – a file of unidirectional network arcs and their capacities
- demands.txt – a file of LSPs to be routed, including LSP identifier, source node, destination node, and LSP bandwidth

Sample formats for these text files are shown in Figure 41 for the CU network discussed previously. These sample formats can be developed from the Node, Link and LSP files exported from SP Guru into Excel.

# File: nodes.txt
#
# Nodes for CU
#
# Name
Beijing
Shenyang
Shanghai
Wuhan
Chengdu
Guangzhou
Xian

# File: arcs.txt
#
# Unidirectional arcs for CU
#
# Src       Dest       CAP
# Forward
Shenyang    Beijing    2377.70
Beijing     Xian       2377.70
Beijing     Shanghai   2377.70
Beijing     Chengdu    2377.70
Beijing     Wuhan      2377.70

# File: demands.txt
#
# Unidirectional LSPs for CU
#
# LSP Name            Src        Dest        BW
Beijing_Guangzhou_0   Beijing    Guangzhou   1151.37
Guangzhou_Beijing_0   Guangzhou  Beijing     1151.37
Beijing_Shanghai_0    Beijing    Shanghai    782.95
Shanghai_Beijing_0    Shanghai   Beijing     782.95
Beijing_Guangzhou_1   Beijing    Guangzhou   575.69

Figure 41: Formats of input files for the AMPL mixed integer programming models for MPLS design. Shown from top to bottom are sample files for nodes.txt, arcs.txt, and demands.txt.
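
As an alternative to editing the exported files by hand in Excel, a small script can emit the three input files in the formats shown above. The following Python sketch is purely illustrative; the in-memory lists are placeholders that, in practice, would be populated from the Node, Link and LSP text files exported by SP Guru.

# Hypothetical helper (not part of the documented M&P) that writes the three
# CPLEX/AMPL input files in the formats shown in Figure 41. The lists below are
# placeholders to be filled from the SP Guru exports.

NODES = ["Beijing", "Shenyang", "Shanghai", "Wuhan", "Chengdu", "Guangzhou", "Xian"]
ARCS = [("Shenyang", "Beijing", 2377.70), ("Beijing", "Xian", 2377.70)]    # (src, dest, cap)
DEMANDS = [("Beijing_Guangzhou_0", "Beijing", "Guangzhou", 1151.37)]       # (lsp, src, dest, bw)

def write_inputs(nodes, arcs, demands):
    with open("nodes.txt", "w") as f:
        f.write("# File: nodes.txt\n# Name\n")
        f.write("\n".join(nodes) + "\n")
    with open("arcs.txt", "w") as f:
        f.write("# File: arcs.txt\n# Src Dest CAP\n")
        for src, dest, cap in arcs:
            f.write(f"{src} {dest} {cap:.2f}\n")
    with open("demands.txt", "w") as f:
        f.write("# File: demands.txt\n# LSP Name Src Dest BW\n")
        for name, src, dest, bw in demands:
            f.write(f"{name} {src} {dest} {bw:.2f}\n")

if __name__ == "__main__":
    write_inputs(NODES, ARCS, DEMANDS)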

Once these input files have been developed, the models can be run. To do this, at the system prompt simply type ampl pathplan to execute the (non-symmetric) MPLS design model. This command invokes the AMPL script to read the model and the input model data, compile a problem instance, and invoke CPLEX to solve the problem. The script is currently set up to solve the problem repeatedly, producing solutions of increasing quality as indicated by the improving objective values of each subsequent solution. The script will continue to solve the model until a termination criterion is reached. Currently, that termination criterion is set to a 0.05% separation, or gap, between the best integer solution objective value and the objective value of the (linear programming) lower bound. A sample output trace of CPLEX during the solving process is shown in Figure 42. The gap is indicated in the rightmost column of the trace.

Setting up to solve...
Presolve eliminates 258 constraints and 180 variables.
Adjusted problem:
3181 variables:
        3180 binary variables
        1 linear variable
2134 constraints, all linear; 18816 nonzeros
        1 linear objective; 3180 nonzeros.

CPLEX 7.5.0: integrality=1e-9
MIP emphasis: integer feasibility
        Nodes                                                     Cuts/
   Node  Left     Objective  IInf  Best Integer     Best Node    ItCnt    Gap
      0     0    18839.0000     6    18862.1700    18839.0000      762   0.12%
                 18839.0000    12    18862.1700       Cuts: 8      786
      1     1    18843.1581     9    18862.1700    18839.0000      796   0.12%
      2     2    18846.7900    11    18862.1700    18839.0000      808   0.12%
      3     3    18849.0081     3    18862.1700    18839.0000      814   0.12%
*     4     2    18850.7000     0    18850.7000    18839.0000      819   0.06%
Gomory fractional cuts applied: 2
Using devex.
Iteration log . . .
Iteration:     1   Objective = 18850.700000
Times (seconds): Input = 0.37  Solve = 0.36  Output = 0.1
CPLEX 7.5.0: mixed-integer solutions limit; objective 18850.7
819 MIP simplex iterations
5 branch-and-bound nodes
solve_result_num = 420
solve_result = limit

Figure 42: Sample output trace of CPLEX during a model solve.
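
For reference, the Gap column in the trace is the relative difference between the best integer objective found so far and the best lower bound. Using the values in Figure 42, (18862.17 − 18839.00) / 18862.17 ≈ 0.12% for the first incumbent, and (18850.70 − 18839.00) / 18850.70 ≈ 0.062% once the improved incumbent is found, which matches the 0.06% shown in the trace and the gap values recorded in pp.val (Figure 44).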

Alternatively, if the user wishes to terminate the process prematurely, simply enter <CTRL-Z>. Provided at least one integer solution has been found, the most recent solution is saved in the output file, pp.out. This file can be used to extract the solution for later import into SP Guru. Reformatting of the output file would be required. Samples of output files are shown in Figure 43.

Max Link Load =  99.7540%
Total BW      =  18850.7000
Objective     =  18850.7000  (W=0.0000, B=1.0000)
Lower Bnd     =  18839.0000

Link Usage:
   Src        Dest             BW     (Util %)
   Shenyang   Beijing     1298.0700  ( 54.594%)
   Beijing    Xian         617.4900  ( 25.970%)
   Beijing    Shanghai    2097.4300  ( 88.213%)
   Beijing    Chengdu     1175.9800  ( 49.459%)
   Beijing    Wuhan        461.2700  ( 19.400%)
   Xian       Shanghai     268.3600  ( 11.287%)
   Beijing    Guangzhou   2371.8500  ( 99.754%)
   Chengdu    Guangzhou    290.4700  ( 12.216%)
   Wuhan      Guangzhou     99.8600  (  4.200%)
   Shanghai   Guangzhou    744.5700  ( 31.315%)
   Beijing    Shenyang    1298.0700  ( 54.594%)
   Xian       Beijing      617.4900  ( 25.970%)
   Shanghai   Beijing     2139.9300  ( 90.000%)
   Chengdu    Beijing     1131.8100  ( 47.601%)
   Wuhan      Beijing      462.9400  ( 19.470%)
   Shanghai   Xian         268.3600  ( 11.287%)
   Guangzhou  Beijing     2371.8500  ( 99.754%)
   Guangzhou  Chengdu      246.3000  ( 10.359%)
   Guangzhou  Wuhan        101.5300  (  4.270%)
   Guangzhou  Shanghai     787.0700  ( 33.102%)

Primary Hops Max/Avg: 3 / 1.5476

Demands:
   Beijing_Guangzhou_0   1151.37  (Beijing,Guangzhou)
   Guangzhou_Beijing_0   1151.37  (Guangzhou,Beijing)
   Beijing_Shanghai_0     782.95  (Beijing,Shanghai)
   Shanghai_Beijing_0     782.95  (Shanghai,Beijing)
   Beijing_Guangzhou_1    575.69  (Beijing,Guangzhou)
   Guangzhou_Beijing_1    575.69  (Guangzhou,Beijing)
   Beijing_Shenyang_0     359.56  (Beijing,Shenyang)
   Shenyang_Beijing_0     359.56  (Shenyang,Beijing)

Figure 43: Sample of the solution output file pp.out.
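
Since reformatting of pp.out is needed before the results can be brought into SP Guru or Excel, a small parser can extract the per-link usage. The following Python sketch is hypothetical; its layout assumptions are based only on the pp.out sample shown in Figure 43, not on a documented interface.

# Hypothetical parser (format assumptions based on the pp.out sample in Figure 43):
# pull the per-link usage out of pp.out and print it as tab-separated values that
# can be reworked in Excel for import into SP Guru.

import re

LINK_RE = re.compile(r"^\s*(\S+)\s+(\S+)\s+([\d.]+)\s+\(\s*([\d.]+)%\)\s*$")

def parse_link_usage(path="pp.out"):
    """Return (src, dest, bw_mbps, util_pct) tuples from the Link Usage section."""
    links, in_section = [], False
    with open(path) as f:
        for line in f:
            if line.strip().startswith("Link Usage:"):
                in_section = True
                continue
            if in_section:
                m = LINK_RE.match(line)
                if m:
                    src, dest, bw, util = m.groups()
                    links.append((src, dest, float(bw), float(util)))
                elif links:          # first non-matching line after the table ends it
                    break
    return links

if __name__ == "__main__":
    for src, dest, bw, util in parse_link_usage():
        print(f"{src}\t{dest}\t{bw:.2f}\t{util:.3f}")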

Also output are the values of all solutions and the progression of the optimization process as the solver seeks solution improvement. This output is provided in the file pp.val, a sample of which is shown in Figure 44.

Start time: Sat Oct 22 05:40:45 EDT 2005
Max Util    Total BW      Obj Fcn       Lower Bnd    Gap      Soln Time Stamp
 99.755%   18862.1700   18862.1700    18839.0000   0.123%   10/22/05 05:40:50
 99.754%   18850.7000   18850.7000    18839.0000   0.062%   10/22/05 05:40:52
 99.755%   18850.6800   18850.6800    18839.0000   0.062%   10/22/05 06:03:48
 99.755%   18850.6600   18850.6600    18839.0000   0.062%   10/22/05 06:03:51
 99.755%   18850.6600   18850.6600

Figure 44: Sample of the output file pp.val showing progression of the optimization.

An additional feature of this model is that the user has the ability to change the objective function from total bandwidth minimization to minimizing the maximum link utilization (note that these are equivalent to the objective options available in SP Guru). This is done by modifying the B and W parameters in the file pathplan.parm, as shown in Figure 45. Note that setting W=0 results in an objective that is purely total bandwidth minimization, while setting B=0 produces an objective that is minimization of the maximum link utilization. Setting both of these parameters to non-zero values yields an objective that is a weighted mix of the two. This is useful if, e.g., one wants to simultaneously minimize the two objective terms, total bandwidth and maximum link utilization. This weighted implementation of simultaneous objectives is not available in SP Guru.

### File: pathplan.parm
### Parameters for model PATHPLAN (v2005.05.21)
### G. Atkinson, 2005.05.21

#param W := 100000;   # Weight for max link utilization objective term
param W := 0;         # Weight for max link utilization objective term
param B := 1;         # Weight for bandwidth objective term

Figure 45: Sample of the parameter file pathplan.parm showing the B and W parameters.
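
To see how the weights interact, note that u is a fraction in [0, 1] while the bandwidth term is measured in Mbps. With the commented-out value W := 100000 and B := 1, for example, a one-percentage-point reduction in the maximum link utilization (0.01 × 100000 = 1000) is worth as much in the objective as saving 1000 Mbps of total subscribed bandwidth, so the solver would effectively level load first and use total bandwidth to break near-ties. This interpretation assumes the Figure 40 objective W·u + B·Σ Σ B_d·x_a^d; the practical trade-off depends on the bandwidth scale of the network being designed.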

An identical process is used for the model pathplan-sym. Note that the import and export procedures, and the associated file formatting required to provide inputs to or extract results from the model solutions, are currently done manually. However, this process can and likely will be streamlined in the near future by automating it using Opnet’s ODK, which is available with SP Guru.

10. Some Other Features Available in SP Guru

SP Guru is a large tool that encompasses many Layer 2 and Layer 3 networking technologies and incorporates numerous analysis modules. This document focuses on MPLS optimization and design, and additional features of SP Guru are beyond its scope. However, in this section we briefly highlight some features that may be useful for network studies that involve MPLS design.

10.1 Failure Analysis

The FLAN module contains an extension that performs failure analysis. Failure analysis involves failing single or multiple links or nodes and analyzing the steady-state impact on the network. The failure analysis module first fails the selected object(s) and then re-runs FLAN to determine the new routing and forwarding tables. Flows are then routed using these new forwarding tables. The failure analysis module reports on the effects of the failure in terms of unroutable flows, overloaded links, etc. Note that failure analysis considers the new network view after steady state is reached, and does not involve transient analysis to determine how long the network is expected to take to reach this new state. To access the failure analysis module options, use Flow Analysis->Configure/Run Failure Impact Analysis. The options available are shown in Figure 46.

Figure 46: Failure analysis configuration options

The failed objects can be selected from the GUI by right-clicking on the objects and selecting to fail them, or the failure analysis module can iterate through all single and pair-wise link, node and/or shared risk group failures and report on the comparative impact of each failure. An arbitrary number of selected objects can also be failed simultaneously, but most practical failure analyses involve single or double failures. Figure 47 shows a sample report generated by the failure analysis module after iterating over all single link failures, summarizing the impact of these failures.

Figure 47: Failure analysis reports

10.2 NetDoctor

NetDoctor is a module available in SP Guru that can be used to analyze configuration errors, policy violations and inefficiencies in a network. The network’s configuration and setup are compared against a set of policies and rules. The configuration analysis is then reported in HTML format and can be referred to in order to make changes and corrections to the network configuration. To run NetDoctor, select NetDoctor->Configure/Run NetDoctor. This will load the pre-defined rules and open a window in which the rules to check can be selected. In Figure 48, we show an example NetDoctor configuration window with the IP and MPLS rules selected. It is also possible to define custom rules, but those methods are beyond the scope of this document.

Figure 48: NetDoctor pre-defined rules for IP and MPLS

In Figure 49, we show a sample NetDoctor report summary page that lists the potential issues found in a customer study. Each of the fields can be drilled into to access specific reports; Figure 50 shows a portion of the detailed information related to static routes. For this particular network, the warnings highlighted issues such as unadvertised interfaces, static route definitions with unknown next hops, overlapping subnets, and network statements referencing invalid interfaces.

Figure 49: Sample NetDoctor HTML report summary page

Figure 50: Portion of detailed static routes report

10.3 Identify Unreachable Interfaces

The Identify Unreachable Interfaces module is another extension of the FLAN module. The FLAN module has to be run prior to identifying unreachable interfaces, since this determination can be made only after the routing/forwarding tables have been generated. This module sequentially creates demands between all pairs of interfaces and checks whether or not each demand is routable. To access this module, first run FLAN and then select Flow Analysis->Identify Unreachable Interfaces. The options are to identify unreachable interfaces to/from selected nodes or between all nodes. An example report is shown in Figure 51. The network shown in this example had configuration errors and, as a result, 74% of the 918 possible source-destination based uni-directional demands were unroutable.

Figure 51: Example results from Identify Unreachable Interfaces module

Note that if there are unreachable interfaces, this may be due to configuration errors; a detailed analysis of the configuration can then be performed using NetDoctor to identify the problems.
