MPLS Introduction
© 2001, Cisco Systems, Inc. All rights reserved.


Label indicates service class and destination
Label Switch Router (LSR)
Label Distribution Protocol (LDP)
Edge Label Switch Router (ATM Switch or Router)
At Edge:
Classify packets
Label them
We’ve given you broad definitions of what MPLS does in the network. Now let’s discuss the essential concept of how MPLS actually works. Let’s start with terminology.
At the edge of the network, traffic is processed, classified, and given an MPLS label that determines how it will travel through the core to its final destination.
Packet forwarding is done based on labels.
Labels are assigned when the packet enters the network.
Labels are carried on top of the packet, in front of the Layer 3 header.
Labels are assigned when the packet enters the network.
In the core, packets are forwarded without having to re-classify them:
- No further packet analysis
1a. Routing protocols (e.g., OSPF, IS-IS) establish reachability to destination networks.
1b. Label Distribution Protocol (LDP) establishes label-to-destination mappings.
2. Ingress edge LSR receives the packet, performs Layer 3 value-added services, and labels (PUSH) the packet.
3. Core LSRs swap labels and forward the packet based on the label alone.
4. Edge LSR at egress removes (POP) the label and delivers the packet.
Here is how MPLS actually works.
Step 1A: a routing protocol such as OSPF, EIGRP, or IS-IS determines the layer 3 topology. A router builds a routing table as it “listens” to the network. A Cisco router or IP+ATM switch can have a routing function inside that does this. All devices in the network are building the layer 3 topology.
Step 1B: The Label Distribution Protocol establishes label values for each device according to the routing topology, to pre-configure maps to destination points. Unlike ATM PVCs where the VPI/VCIs are manually assigned, labels are assigned automatically by LDP.
Step 2: An ingress packet enters the edge LSR, which performs the Layer 3 value-added services, including QoS, bandwidth management, and so forth, and then applies a label based on the information in the forwarding tables. (This also reflects QoS, which we'll discuss in detail in the next section.)
Step 3: Each core LSR reads the label on the packet at the ingress interface and, based on what the label says, sends the packet out the appropriate egress interface with a new label.
LSPs are unidirectional
An LSP may diverge from the IGP shortest path
(Figure: IGP domain with a label distribution protocol)
LAN MAC Label Header
A tag can be as small as three bytes, or may be larger depending on the application and on the specific media type and encapsulation method used.
Tag information can be carried in a packet in a variety of ways:
between the PPP and Network Layer headers;
as part of the ATM header
as part of the Network Layer header (e.g., using the Flow Label field in IPv6).
It is therefore possible to implement tag switching over virtually any media type including
point-to-point links
multi-access links
Can be used over Ethernet, 802.3, or PPP links
Contains everything needed at forwarding time
Label = 20 bits; EXP (Class of Service) = 3 bits; S (Bottom of Stack) = 1 bit; TTL (Time to Live) = 8 bits
Header layout (one 32-bit entry): | Label (20) | EXP (3) | S (1) | TTL (8) |
The tag frame encapsulation uses what’s called a shim header. It’s a header that sits between the MAC layer header and the layer 3 header in the packet. It consists of one or more entries that look like this.
It's a 32-bit word per entry which contains:
the tag you're forwarding on
a 3-bit class of service field
an 8-bit time-to-live field, and
a 1-bit end-of-stack indicator.
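As a minimal illustration (not part of the original deck), one label stack entry can be packed and unpacked with plain bit operations; the field positions follow the layout above:

    import struct

    def pack_label_entry(label, exp, s, ttl):
        # One 32-bit MPLS label stack entry: Label(20) | EXP(3) | S(1) | TTL(8)
        assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
        word = (label << 12) | (exp << 9) | (s << 8) | ttl
        return struct.pack("!I", word)  # network byte order

    def unpack_label_entry(data):
        # Inverse of pack_label_entry
        (word,) = struct.unpack("!I", data[:4])
        return (word >> 12) & 0xFFFFF, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF

    # Example: label 29, class 0, bottom of stack, TTL 254
    entry = pack_label_entry(29, 0, 1, 254)
    assert unpack_label_entry(entry) == (29, 0, 1, 254)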
Loops and TTL
In IP networks the TTL is used to prevent packets from traveling indefinitely in the network
MPLS may use the same mechanism as IP, but not all encapsulations support it
TTL is present in the label header for PPP and LAN headers (shim headers)
ATM cell header does not have TTL
The TTL is decremented prior to entering the non-TTL-capable LSP
If the TTL reaches 0, the packet is discarded at the ingress point
The TTL is examined again at the LSP exit
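The rule above can be sketched as follows; a minimal illustration, assuming the decrement-by-hop-count behaviour that RFC 3032 later standardized (the slide itself only says the TTL is adjusted at the ingress):

    def enter_non_ttl_lsp(ip_ttl, lsp_hop_count):
        # Decrement at the ingress for the hops the LSP hides (assumption:
        # decrement by hop count, per RFC 3032; not stated on the slide).
        new_ttl = ip_ttl - lsp_hop_count
        if new_ttl <= 0:
            return None          # discarded at the ingress point
        return new_ttl           # carried across, examined again at LSP exit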
(Figure: IGP domain with a label distribution protocol)
Labels are assigned and exchanged between adjacent (neighboring) LSRs
Rtr-C is the downstream neighbor of Rtr-B for destination 171.68.10/24
Rtr-B is the downstream neighbor of Rtr-A for destination 171.68.10/24
LSRs know their downstream neighbors through the IP routing protocol
Next-hop address is the downstream neighbor
(Figure: Rtr-A -> Rtr-B -> Rtr-C toward 171.68.10/24; 171.68.40/24 lies beyond Rtr-C)
Downstream LSRs distribute labels upon request
(Figure: label advertisements for 171.68.10/24 flow from Rtr-C to Rtr-B to Rtr-A)
Label Retention Modes
Liberal retention mode
LSR retains labels from all neighbors
Improves convergence time when the next-hop becomes available again after IP convergence
Requires more memory and label space
Conservative retention mode
LSR discards all labels for FECs without a next-hop
Frees memory and label space
Label Distribution Modes
Independent LSP control
LSR binds a label to a FEC independently, whether or not it has received a label for the FEC from its next-hop
The LSR then advertises the label to its neighbors
Ordered LSP control
LSR only binds and advertises a label for a particular FEC if:
it is the egress LSR for that FEC, or
it has already received a label binding from its next-hop
IP forwarding table:
Address Prefix   I/F
128.89           1
171.69           1
(Figure: a packet with destination 128.89.25.4 is forwarded hop by hop based on this table)
Now when a packet comes into a router, a lookup is done based on the IP address in the packet, a match is obtained, and the packet is forwarded out the appropriate interface.
You Can Reach 171.69 Thru Me
You Can Reach 128.89 Thru Me
(Figure: each device builds a forwarding table with In Label, Address Prefix, and Out I/F columns for prefixes 128.89 and 171.69)
Tag edge routers and tag switches use standard IP routing protocols to identify routes through the network. These fully interoperate with non-tag-switching routers.
Use Label 5 for 171.69
Use Label 7 for 171.69
Use Label 9 for 128.89
(Figure: the forwarding tables are populated with these label bindings: In Label, Address Prefix 128.89 / 171.69, Out I/F, Out Label)
Tag routers and switches use the tables generated by the standard routing protocols to assign and distribute tag information via the tag distribution protocol (TDP). Tag routers receive the TDP information and build a forwarding database, which makes use of the tags.
(Figure: a packet to 128.89.25.4 is labeled at the edge after an IP lookup and then label-switched through the core; the forwarding tables map In Label / Address Prefix 128.89 to Out I/F / Out Label, using labels 4 and 9.)
When we get to the first router, the one performing the tag imposition, there's an IP lookup based on the IP prefix. It finds the forwarding table entry and discovers that to get to the destination it should use tag x. It sticks that tag on the front of the packet and forwards it along to the next-hop tag switch.
At this point the router can do pure tag forwarding: it receives the packet with tag x, figures out that the outgoing interface is y, and the outgoing tag replaces the incoming tag. Note that the packet is forwarded based solely on the tag, without re-analyzing the network layer header. This provides the essential separation of routing and forwarding referred to earlier.
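The two behaviours described here, one IP lookup plus tag imposition at the edge and pure tag swapping in the core, can be modelled with two small table lookups. A hypothetical Python sketch using the figure's labels 4 and 9 (prefix matching is simplified to exact-match; a real router does a longest-match lookup):

    # Hypothetical model of edge imposition and core label swapping.
    FIB = {"128.89.0.0/16": ("if1", 4)}          # IP prefix -> (out interface, out label)
    LFIB = {4: ("if2", 9), 9: ("if3", None)}     # in label -> (out interface, out label)

    def ingress_edge(dst_prefix, packet):
        out_if, label = FIB[dst_prefix]          # one IP lookup at the edge only
        return out_if, (label, packet)           # PUSH: label goes in front of the packet

    def core_lsr(labeled_packet):
        label, packet = labeled_packet
        out_if, out_label = LFIB[label]          # label lookup, no IP re-classification
        if out_label is None:                    # POP (e.g. last label before the egress)
            return out_if, packet
        return out_if, (out_label, packet)       # swap: replace in-label with out-label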
MPLS Unicast IP Routing
MPLS introduces a new field that is used for forwarding decisions.
Although labels are locally significant, they have to be advertised to directly reachable peers.
One option would be to include this parameter in existing IP routing protocols.
The other option is to create a new protocol to exchange labels.
Used to distribute labels in an MPLS network
Binds labels to Forwarding Equivalence Classes (FECs), building Label Switched Paths
Neighbor discovery
(Figure: adjacent LSRs exchange an LDP binding: 10.0.0.0/8, L=5)
Label allocation and distribution in a packet-mode MPLS environment follows these steps:
1. IP routing protocols build the IP routing table.
2. Each LSR assigns a label to every destination in the IP routing table independently.
3. LSRs announce their assigned labels to all other LSRs.
Building the IP Routing Table
Allocating Labels
Every LSR allocates a label for every destination in the IP routing table.
Labels have local significance.
Label allocations are asynchronous.
LIB and LFIB Set-up
LIB and LFIB structures have to be initialized on the LSR allocating the label.
Router B assigns label 25 to destination X.
Outgoing action is POP as B has received no label for X from C.
Local label is stored in LIB.
Label Distribution
The allocated label is advertised to all neighbor LSRs, regardless of whether the neighbors are upstream or downstream LSRs for the destination.
(Figure: the binding X = 25 is advertised to all neighbors)
Every LSR stores the received label in its LIB.
Edge LSRs that receive the label from their next-hop also store the label information in the FIB.
Interim Packet Propagation
Forwarded IP packets are labeled only on the path segments where the labels have already been assigned.
(Figure: the packet toward X travels as plain IP on segments without labels and carries label 25 where the binding exists)
Every LSR will eventually assign a label for every destination.
Router C assigns label 47 to destination X.
Every LSR stores received information in its LIB.
LSRs that receive their label from their next-hop LSR will also populate the IP forwarding table (FIB).
Populating LFIB
Router B has already assigned a label to X and created an entry in the LFIB.
The outgoing label is inserted in the LFIB after the label is received from the next-hop LSR.
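A sketch of the LIB/LFIB updates this sequence describes, using the deck's values (B allocates 25 for X, later learns 47 from next-hop C); the data structures are hypothetical simplifications:

    # Hypothetical sketch of router B's label structures for destination X.
    LIB = {}    # (prefix, source) -> label; B's own binding plus learned ones
    LFIB = {}   # local (incoming) label -> outgoing action

    # Step 1: B allocates its local label for X independently.
    LIB[("X", "local")] = 25
    LFIB[25] = ("pop", None)            # no label from next-hop C yet -> action POP

    # Step 2: the binding is advertised to ALL neighbors, upstream and
    # downstream alike, which store it in their LIBs (omitted here).

    # Step 3: C's own binding for X arrives and is kept in the LIB.
    LIB[("X", "C")] = 47

    # Step 4: since C is the IP next hop for X, the LFIB entry is completed.
    LFIB[25] = ("swap", 47)             # incoming 25 -> outgoing 47 toward C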
(Figure: in steady state, the ingress LSR imposes label 25 on IP packets toward X, the core swaps 25 for 47, and the egress LSR delivers the plain IP packet.)
Steady State Description
After the LSRs have exchanged the labels, LIB, LFIB and FIB data structures are completely populated.
Convergence in Packet-mode MPLS
Presentation_ID
Link Failure Actions
Routing protocol neighbors and LDP neighbors are lost after a link failure.
Entries are removed from various data structures.

Routing Protocol Convergence

MPLS Convergence

MPLS Convergence After a Link Failure
MPLS convergence in packet-mode MPLS does not impact the overall convergence time.
MPLS convergence occurs immediately after the routing protocol convergence, based on labels already stored in LIB.
IP routing protocols rebuild the IP routing table.
FIB and LFIB are also rebuilt, but the label information might be lacking.
Routing protocol convergence optimizes the forwarding path after a link recovery.
LIB might not contain the label from the new next-hop by the time the IP convergence is complete.
End-to-end MPLS connectivity might be intermittently broken after link recovery.
Use MPLS Traffic Engineering for make-before-break recovery.
LDP Session Establishment
LDP and TDP use a similar process to establish a session:
Hello messages are periodically sent on all interfaces enabled for MPLS.
If there is another router on that interface it will respond by trying to establish a session with the source of the hello messages.
UDP is used for hello messages. It is targeted at “all routers on this subnet” multicast address (224.0.0.2).
TCP is used to establish the session.
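A minimal sketch of the discovery step, assuming the standard LDP port 646 (the TDP outputs later in this deck show TCP port 711 instead); this is an illustration only, not a real LSR:

    import socket
    import struct

    # Listen for LDP hello PDUs: UDP port 646, sent to the all-routers
    # multicast group 224.0.0.2. A real LSR would parse the PDU and then
    # establish the session over TCP.
    GROUP, PORT = "224.0.0.2", 646

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, (src, _) = sock.recvfrom(4096)
    print(f"hello PDU from {src}: {data[:10].hex()}")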
LDP Neighbor Discovery
The LDP session is established by the router with the higher IP address.
(Figure: MPLS_A 1.0.0.1, MPLS_B 1.0.0.2, NO_MPLS_C 1.0.0.3, MPLS_D 1.0.0.4)
Peers first exchange initialization messages.
The session is ready to exchange label mappings after receiving the first keepalive.
Double Lookup Scenario
Double lookup is not an optimal way of forwarding labeled packets.
A label can be removed one hop earlier.
(Figure: MPLS domain, destination 10.0.0.0/8)
1. LFIB: remove the label.
2. FIB: forward the IP packet based on the IP next-hop address.
Penultimate Hop Popping
A label is removed on the router before the last hop within an MPLS domain.
Penultimate hop popping optimizes MPLS performance (one less LFIB lookup).
PHP does not work on ATM (VPI/VCI cannot be removed).
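PHP is signalled through the label distribution protocol: the egress advertises the reserved implicit-null label (value 3, per RFC 3032), telling the penultimate LSR to pop instead of swap. A small hypothetical sketch:

    IMPLICIT_NULL = 3   # reserved label value advertised by the egress LSR

    def penultimate_forward(in_label, lfib, packet):
        # Pop instead of swap when the downstream (egress) LSR advertised
        # implicit-null, so the egress needs only a single FIB lookup.
        out_if, out_label = lfib[in_label]
        if out_label == IMPLICIT_NULL:
            return out_if, packet              # POP: plain IP toward the egress
        return out_if, (out_label, packet)     # otherwise a normal swap

    lfib = {25: ("Serial1/0.1", IMPLICIT_NULL)}
    assert penultimate_forward(25, lfib, "ip-packet") == ("Serial1/0.1", "ip-packet")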
Used to discover and maintain the presence of new peers
Hello packets (UDP) sent to all-routers multicast address
Advertisement messages
Notification messages
Error signalling
What Is a VPN?
A VPN is a set of sites which are allowed to communicate with each other.
A VPN is defined by a set of administrative policies
Policies determine both connectivity and QoS among sites
Policies can be implemented completely by VPN service providers, using BGP/MPLS VPN mechanisms
Flexible inter-site connectivity
Ranging from complete to partial mesh
Sites may be either within the same or in different organizations
VPN can be either intranet or extranet
Site may be in more than one VPN
VPNs may overlap
Not all sites have to be connected to the same service provider
VPN can span multiple providers
There are two fundamental types of IP VPNs: dial and dedicated.
Dial IP VPNs offer remote Intranet access for mobile users and telecommuters. This is the most common form of IP VPN deployed today. These services are sometimes known as “corporate dial outsourcing” solutions. There are two types of Dial IP VPNs, which are classified by where the tunnel is created: on the subscriber’s PC or by the Service Provider on the NAS (Network Access Server).
While dedicated IP VPNs come in many forms, the common element is the delivery of IP services to the subscriber. This is typically done over an IP network using devices such as a security appliance or router on the customer site. IP VPN services may also be offered by providing an IP interface over a Frame Relay or ATM network. The application of these dedicated services is in connecting remote offices to corporate Intranets and Extranets over the WAN. These services are characterized by multiple users and higher speed connections. To provide complete VPN services, businesses and Service Providers frequently combine these dedicated VPNs with remote access solutions.
Customer Network (C-Network)
CE router: Customer Edge router; part of the C-network and interfaces to a PE router
Site: a set of (sub)networks, part of the C-network and co-located
A site is connected to the VPN backbone through one or more PE/CE links
PE router: Provider Edge router; part of the P-network and interfaces to CE routers
P router: Provider core router; part of the P-network, with no direct CE connections
Route Distinguisher (RD)
A 64-bit attribute of each route used to uniquely identify prefixes among VPNs
The RD is VRF-based (not VPN-based)
RD + IPv4 prefix = VPN-IPv4 address
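A tiny sketch (hypothetical helper, not from the deck) of how the RD keeps overlapping customer prefixes unique once they become VPN-IPv4 routes:

    def vpn_ipv4(rd, prefix):
        # A VPN-IPv4 address is the 64-bit RD prepended to the IPv4 prefix,
        # so identical customer prefixes stay distinct inside MP-BGP.
        return f"{rd}:{prefix}"

    # Two customers reusing 10.1.0.0/16 remain distinct routes:
    assert vpn_ipv4("65000:1", "10.1.0.0/16") != vpn_ipv4("65000:2", "10.1.0.0/16")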
VPN-Aware network
MPLS VPN Connection Model
A VPN is a collection of sites sharing common routing information (a routing table)
A site can be part of different VPNs
A VPN has to be seen as a community of interest (or Closed User Group)
Multiple Routing/Forwarding instances (VRF) on PE routers
MPLS VPN Connection Model
A site belonging to different VPNs may or MAY NOT be used as a transit point between VPNs
If two or more VPNs have a common site, address space must be unique among these VPNs
(Figure: Site-1 through Site-4 grouped into overlapping VPN-A, VPN-B, and VPN-C)
PE routers (edge LSRs)
P routers (core LSRs)
PE routers face the CE routers and distribute VPN information to other PE routers through MP-BGP
VPN-IPv4 addresses, Extended Community, Label
iBGP sessions
P routers (LSRs) are in the core of the MPLS cloud
PE routers use MPLS with the core and plain IP with CE routers
P and PE routers share a common IGP
PE routers are fully meshed with MP-iBGP sessions
(Figure: CE sites of VPN_A (10.1.0.0, 10.2.0.0) and VPN_B (10.2.0.0, 11.6.0.0) attach to PE routers across the MPLS core)
PE/CE routing: EBGP, OSPF, RIPv2, or static routing
CE routers run standard routing software
The global routing table
Populated by the VPN backbone IGP (ISIS or OSPF)
VRF (VPN Routing and Forwarding)
Routing and Forwarding table associated with one or more directly connected sites (CEs)
VRFs are associated with (sub/virtual/tunnel) interfaces
Interfaces may share the same VRF if the connected sites share the same routing information
MPLS VPN Connection Model
The routes the PE receives from CE routers are installed in the appropriate VRF
The routes the PE receives through the backbone IGP are installed in the global routing table
By using separate VRFs, addresses need NOT be unique among VPNs
The Global Routing Table is populated by IGP protocols.
In PE routers it may contain the BGP Internet routes (standard BGP-4 routes)
BGP-4 (IPv4) routes go into global routing table
MP-BGP (VPN-IPv4) routes go into VRFs
PE and P routers share a common IGP (ISIS or OSPF)
PEs establish MP-iBGP sessions between them
PEs use MP-BGP to exchange routing information related to the connected sites and VPNs
VPN-IPv4 addresses, Extended Community, Label
PE routers translate CE routes into VPN-IPv4 routes
Assign a SOO and RT based on configuration
Re-write Next-Hop attribute
Send MP-iBGP update to all PE neighbors
(Figure: CE-1 at Site-1 sends a BGP/RIPv2 update for Net1 with Next-Hop=CE-1; the ingress PE converts it into a VPN-IPv4 update. At the remote PE, the VPN-IPv4 update is translated back into an IPv4 route (Net1), installed into VRF green since RT=Green, and advertised to CE-2 at Site-2.)
Insert the route into the VRF identified by the RT attribute (based on PE configuration)
The label associated with the VPN-IPv4 address will be set on packets forwarded towards the destination
MPLS VPN Connection Model
Route distribution to sites is driven by the Site of Origin (SOO) and Route-target attributes
BGP Extended Community attribute
A route is installed in the site VRF corresponding to the Route-target attribute
Driven by PE configuration
RD is configured in the PE for each VRF
RD may or may not be related to a site or a VPN
The RD can embed a registered AS number or an IPv4 address (32 bits)
Site of Origin (SOO): identifies the originating site
Local Preference
The VRF where a lookup has to be done
General form: <registered AS number>:<value> or <registered IP address>:<value>
Site of Origin (SOO): identifies one or more routers where the route was originated (the site)
Route-Target (RT): identifies the set of VRFs into which the route is imported
MP-BGP Update
The label can be assigned only by the router whose address is the Next-Hop attribute
PE routers re-write the Next-Hop with their own address (loopback interface address)
“Next-Hop-Self” BGP command towards iBGP neighbors
Loopback addresses are advertised into the backbone IGP
PE addresses used as BGP Next-Hop must be uniquely known in the backbone IGP
No summarisation of loopback addresses in the core
MPLS Forwarding
Packet forwarding
PE and P routers have BGP next-hop reachability through the backbone IGP
Labels are distributed through LDP (hop-by-hop) corresponding to BGP Next-Hops
Label Stack is used for packet forwarding
Top label indicates BGP Next-Hop (interior label)
(Figure: MPLS VPN packet forwarding across the backbone)
The BGP route carries the Next-Hop and the VPN label; the Next-Hop itself is reached through an IGP route with its own label.
P routers switch the packets based on the IGP label (the label on top of the stack); the VPN label stays underneath.
P2 removes the top label; this has been requested through LDP by PE2 (penultimate hop popping).
PE2 receives the packets with the label corresponding to the outgoing interface (VRF): one single lookup.
<RD_B,10.2>, iBGP NH=PE2, labels T2 and T8
The ingress PE receives normal IP packets from the CE router
The PE router does an "IP longest match" lookup in the VPN_B FIB, finds iBGP next-hop PE2, and imposes a stack of labels:
exterior label T2 + interior label T8
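A sketch of the stack operations just described, using the deck's T2/T8 names as symbolic values (hypothetical model, not the deck's own code):

    # T8 = interior (IGP) label toward BGP next-hop PE2;
    # T2 = exterior (VPN) label assigned by PE2 for the customer route.
    def ingress_pe(ip_packet):
        # one longest-match lookup in the VPN_B FIB yields next-hop PE2
        # and both labels; the stack is [top, bottom, payload]
        return ["T8", "T2", ip_packet]

    def penultimate_p(stack):
        # P routers swap the top (interior) label hop by hop; the
        # penultimate one pops it so PE2 receives only the exterior label
        return stack[1:]

    def egress_pe(stack):
        vpn_label, ip_packet = stack[0], stack[1]
        # the exterior label identifies the outgoing VRF/CE: one single lookup
        return ("VRF selected by " + vpn_label, ip_packet)

    out_if, packet = egress_pe(penultimate_p(ingress_pe("IP packet to 10.2.x.x")))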
(Figure: the packet leaves ingress PE1 as T8 | T2 | Data toward PE2 across the core)
(Figure: P routers swap only the interior label (T7, T8, T9, Ta, Tb, ...) as the packet crosses the core)
P routers forward the packet based solely on the interior label
The egress PE uses the exterior label to select which VPN/CE to forward the packet to
The exterior label is removed and the packet is routed to the CE router
(Figure: the interior label is popped at the penultimate hop, so PE2 receives T2 | Data and delivers the packet into the right VRF/CE)
Packet Forwarding Example 2
1. In VPN 12, host 130.130.10.1 sends a packet with destination 130.130.11.3. Customer sites are attached to Provider Edge (PE) routers A and B.
2. PE router A selects the correct VPN forwarding table based on the VPN ID (12) of the incoming link.
Packet Forwarding Example 2 (cont.)
3. PE router A matches the incoming packet’s destination address with VPN 12’s forwarding table.
(Figure: the lookup for 130.130.11.3 in VPN 12's forwarding table on PE A yields labels 989 and 101.)
5. Packet is label-switched from PE router A to PE B based on the top label, using normal MPLS.
6. PE router B identifies the correct site in VPN 12 from the inner label.
VRF: VPN Routing and Forwarding Instance
VRF Routing Protocol Context
VRF and Multiple Routing Instances
VRF Routing table contains routes which should be available to a particular set of sites
Analogous to standard IOS routing table, supports the same set of mechanisms
Interfaces (sites) are assigned to VRFs
One VRF per interface (sub-interface, tunnel, or virtual template)
Possibly many interfaces per VRF
Routing processes (static, BGP, RIP) run within specific routing contexts
They populate the specific VPN routing table and FIB (VRF)
Interfaces are assigned to VRFs
(Figure: logical view vs. routing view of VRFs)
(Figure: PE routers exchange iBGP sessions across the P core)
VPN-IPv4 addresses are propagated, together with the associated label, in the BGP Multiprotocol Extensions
An Extended Community attribute (route-target) is associated with each VPN-IPv4 address, to populate the site VRFs
VPN sites with optimal intra-VPN routing
Each site has full routing knowledge of all other sites (of the same VPN)
Each CE announces its own address space
MP-BGP VPN-IPv4 updates are propagated between PEs
Routing is optimal in the backbone
Each route has the BGP Next-Hop closest to the destination
No site is used as a central point for connectivity
(Figure: full-mesh VPN-IPv4 updates: RD:N1, NH=PE1, Label=IntCE1, RT=Blue; RD:N2, NH=PE2, Label=IntCE2, RT=Blue; RD:N3, NH=PE3, Label=IntCE3, RT=Blue)
VPN sites with Hub & Spoke routing
One central site has full routing knowledge of all other sites (of same VPN)
Hub-Site
Other sites will send traffic to Hub-Site for any destination
Spoke-Sites
Use of central services at Hub-Site
(Figure: hub and spoke. Hub PE3 (Site-3, N3) advertises RD:N1, RD:N2, and RD:N3 with NH=PE3, Label=IntCE3-Spoke, RT=Spoke; spoke PEs advertise RD:N1, NH=PE1, Label=IntCE1 and RD:N2, NH=PE2, Label=IntCE2 with RT=Hub. PE/CE routing runs BGP/RIPv2.)
Routes are imported/exported into VRFs based on the RT value of the VPN-IPv4 updates
PE3 uses two (sub)interfaces with two different VRFs
(Figure: spoke-to-spoke traffic between Site-1 (N1) and Site-2 (N2) crosses the hub Site-3 via PE3)
Traffic from one spoke to another will travel across the hub site
The hub site may host central services: security, NAT, centralised Internet access
In a VPN, sites may need to have Internet connectivity
Connectivity to the Internet means:
Being able to reach Internet destinations
Being able to be reachable from any Internet source
The Internet routing table is treated separately
In the VPN backbone the Internet routes are in the Global routing table of PE routers
Labels are not assigned to external (BGP) routes
MPLS VPN Internet routing
VRF specific default route
A default route is installed into the site VRF, pointing to an Internet gateway
The default route is NOT part of any VPN
A single label is used for packets forwarded according to the default route
The label is the IGP label corresponding to the IP address of the Internet gateway, which is known in the IGP
Customer (site) routes are known in the site VRF
Not in the global table
The PE/CE interface is NOT known in the global table
MPLS VPN Internet routing
VRF specific default route
The Internet gateway specified in the default route (in the VRF) need NOT be directly connected
Different Internet gateways can be used for different VRFs
Using default route for Internet routing does NOT allow any other default route for intra-VPN routing
As in any other routing scheme
ip route vrf VPN-A 0.0.0.0 0.0.0.0 192.168.1.1 global
PE routers will use BGP-4 sessions to originate customer routes
Packet forwarding is done with a single label identifying the Internet Gateway IP address
More labels if Traffic Engineering is used
Separate (sub)interfaces
If the CE wishes to receive and announce routes from/to the Internet:
A dedicated BGP session is used over a separate (sub)interface
The PE imports CE routes into the global routing table and advertises them to the Internet
The interface is not part of any VPN and does not use any VRF
A default route or the Internet routes are exported to the CE
The PE needs to have the Internet routing table
One (sub)interface for VPN routing, associated with a VRF
One (sub)interface for Internet routing, associated with the global routing table
Scaling
Existing BGP techniques can be used to scale the route distribution: route reflectors
Each edge router needs only the information for the VPNs it supports
Directly connected VPNs
Each RR stores routes for a set of VPNs
Thus, no BGP router needs to store ALL VPN information
PEs peer with the RRs serving the VPNs they directly connect
(Figure: two route reflectors in the core; each PE peers with the RRs that serve its directly connected VPN_A and VPN_B sites)
BGP update filtering
An iBGP full mesh between PEs results in flooding all VPN routes to all PEs
This causes scaling problems with a large number of routes; in addition, PEs need only the routes for attached VRFs
Therefore each PE will discard any VPN-IPv4 route that does not carry a route-target it is configured to import into one of its attached VRFs
This significantly reduces the amount of information each PE has to store
Policies use route-target attribute (extended community)
PE receives MP-iBGP updates for VPN-IPv4 routes
If route-target is equal to any of the import values configured in the PE, the update is accepted
Otherwise it is silently discarded
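The accept/discard rule can be captured in a few lines; a hypothetical sketch:

    def accept_update(update_rts, import_rts):
        # Keep a VPN-IPv4 update only if it carries at least one
        # route-target the PE is configured to import; else drop silently.
        return bool(set(update_rts) & set(import_rts))

    assert accept_update({"65000:10"}, {"65000:10", "65000:20"})      # accepted
    assert not accept_update({"65000:99"}, {"65000:10", "65000:20"})  # silently discarded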
MPLS-VPN Scaling
Route Refresh
Policy may change in the PE if VRF modifications are done
New VRFs, removal of VRFs
However, the PE may not have stored routing information which becomes useful after such a change
The PE requests a re-transmission of updates from its neighbors
Route Refresh
(Figure: the PE now imports RT=yellow, RT=green, and RT=red)
1. The PE doesn't have red routes (previously filtered out).
2. The PE issues a Route Refresh to all neighbors in order to ask for re-transmission.
PE routers will discard updates with unused route-targets
Optimization requires these updates NOT to be sent in the first place
Outbound Route Filter (ORF) allows a router to tell its neighbors which filter to apply before propagating BGP updates
(Figure: the PE imports only RT=yellow and RT=green)
1. The PE doesn't need red routes.
2. The PE issues an ORF message to all neighbors in order not to receive red routes.
PE routers have to be configured for:
VRF and Route Distinguisher
The routing protocol used with CEs
MP-BGP between PE routers
BGP for Internet routes, with other PE routers
VRFs are associated with RDs in each PE
Common (good) practice is to use the same RD for the same VPN in all PEs
But not mandatory
VRF configuration command
ip vrf <vrf-symbolic-name>
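A hypothetical filled-in example (VRF name, RD, route-target, interface, and addresses are illustrative, not from the deck): the VRF is defined with its RD and route-targets, then attached to a PE/CE interface with ip vrf forwarding.

    ip vrf Customer_A
     rd 65000:1
     route-target export 65000:1
     route-target import 65000:1
    !
    interface Serial0/0.1
     ip vrf forwarding Customer_A
     ip address 10.1.0.1 255.255.255.252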
Routing contexts are defined within the routing protocol instance
Address-family router sub-command
router bgp <asn>

ip route vrf <vrf-symbolic-name> …
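Filled in with hypothetical values (AS number, neighbor address, and VRF name are illustrative), the fragments above combine roughly as:

    router bgp 65000
     neighbor 192.168.3.100 remote-as 65000
     neighbor 192.168.3.100 update-source Loopback0
     address-family vpnv4
      neighbor 192.168.3.100 activate
      neighbor 192.168.3.100 send-community extended
     exit-address-family
     address-family ipv4 vrf Customer_A
      redistribute static
     exit-address-family
    !
    ip route vrf Customer_A 10.1.1.0 255.255.255.0 10.1.0.2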
show ip route vrf <vrf-symbolic-name> ...
show ip protocols vrf <vrf-symbolic-name>
show ip cef vrf <vrf-symbolic-name> …
telnet /vrf <vrf-symbolic-name>
ping vrf <vrf-symbolic-name>
Increases the value added by the VPN Service Provider
Decreases the Service Provider's cost of providing VPN services
The mechanisms are general enough to enable a VPN Service Provider to support a wide range of VPN customers
See RFC 2547
BGP/MPLS VPNs: routing peering
Amount of routing peering maintained by CE is O(1) - CE peers only with directly attached PE
independent of the total number of sites within a VPN
scales to VPNs with large number of sites (100s - 1000s sites per VPN)
Mesh of point-to-point connections requires each (virtual) router to maintain O(n) peering (where n is the number of sites)
does not scale to VPNs with large number of sites (due to the properties of existing routing protocols)
I'd like to bring a few points to your attention.
First of all, observe that a CE router has routing peering just with its directly connected PE router, but not with CE routers in other sites of that VPN. That facilitates support of large VPNs with 100s and 1000s of sites per VPN. This is because the amount of routing peering that a CE router has to perform is independent of the total number of sites within a VPN.
In contrast, with any tunnel-based approach, like IPSec, in order to maintain a full mesh connectivity, a CE router has to maintain routing peering with all other CE routers in all other sites.
Point-to-point connections vs BGP/MPLS VPNs: provisioning
Amount of configuration changes needed to add a new site (new CE) is O(1):
need to configure only the directly attached PE
independent of the total number of sites within a VPN
Mesh of point-to-point connections requires O(n) configuration changes (where n is the number of sites) when adding a new site
show tag-switching interface
show tag-switching tdp discovery
Protocol version: 1
Session hold time: 180 sec; keep alive interval: 60 sec
Discovery hello: holdtime: 15 sec; interval: 5 sec
Discovery directed hello: holdtime: 180 sec; interval: 5 sec
Interface Serial1/0.1:
Tagging operational
MTU = 1500
Interface Serial1/0.2:
Tagging operational
MTU = 1500
Local TDP Identifier:
router#
show tag-switching tdp bindings
Peer TDP Ident: 192.168.3.100:0; Local TDP Ident 192.168.3.102:0
TCP connection: 192.168.3.100.711 - 192.168.3.102.11000
Up time: 00:43:26
TDP discovery sources:
192.168.3.10 192.168.3.14 192.168.3.100
Router#show tag-switching tdp neighbors detail
Peer TDP Ident: 192.168.3.100:0; Local TDP Ident 192.168.3.102:0
TCP connection: 192.168.3.100.711 - 192.168.3.102.11000
rev sent 26
TDP discovery sources:
Addresses bound to peer TDP Ident:
192.168.3.10 192.168.3.14 192.168.3.100
Peer holdtime: 180000 ms; KA interval: 60000 ms; Peer state: estab
local binding: tag: 28
tib entry: 192.168.3.2/32, rev 8
local binding: tag: 27
tib entry: 192.168.3.3/32, rev 7
local binding: tag: 26
tib entry: 192.168.3.10/32, rev 6
local binding: tag: imp-null(1)
router# show ip cef detail
tags Match tag values
tsp-tunnel TSP Tunnel id
Local Outgoing Prefix Bytes tag Outgoing Next Hop
tag tag or VC or Tunnel Id switched interface
26 Untagged 192.168.3.3/32 0 Se1/0.3 point2point
MAC/Encaps=0/0, MTU=1504, Tag Stack{}
27 Pop tag 192.168.3.4/32 0 Se0/0.4 point2point
MAC/Encaps=4/4, MTU=1504, Tag Stack{}
20618847
MAC/Encaps=4/8, MTU=1500, Tag Stack{29}
18718847 0001D000
192.168.20.0/24, version 23, cached adjacency to Serial1/0.2
0 packets, 0 bytes
via 192.168.3.10, Serial1/0.2, 0 dependencies
next hop 192.168.3.10, Serial1/0.2
debug tag-switching tdp ...
debug tag-switching tfib ...
debug tag-switching packets [ interface ]
router#
Disables fast or distributed tag switching.
Labels are not allocated or distributed.
Packets are not labeled although the labels have been distributed.
MPLS intermittently breaks after an interface failure.
Large packets are not propagated across the network.
Symptom
show tag tdp discovery does not display expected TDP neighbors.
Diagnosis
Verification
Symptom
Diagnosis
Label distribution protocol mismatch - TDP on one end, LDP on the other end.
Verification
Symptom
Diagnosis
Verification
Verify access-list contents with show access-list.
Symptom
TDP neighbors discovered, TDP session is not established.
show tdp neighbor does not display a neighbor in Oper state.
Diagnosis
Connectivity between loopback interfaces is broken - TDP session is usually established between loopback interfaces of adjacent LSRs.
Verification
show tag-switching forwarding-table does not display any labels
Diagnosis
Labels are allocated, but not distributed.
show tag-switching tdp bindings on adjacent LSR does not display labels from this LSR
Diagnosis
Verification
Debug label distribution with debug tag tdp advertisement.
Examine the neighbor TDP router ID with show tag tdp discovery.
show interface statistics does not show labeled packets being sent
Diagnosis
CEF is not enabled on input interface (potentially due to conflicting feature being configured).
Verification
Internet address is 192.168.3.5/30
IP unicast RPF check is disabled
Inbound access list is not set
Outbound access list is not set
IP policy routing is disabled
Interface is marked as point to point interface
Hardware idb is Serial1/0
IP CEF switching enabled
Input fast flags 0x1000, Output fast flags 0x0
ifindex 3(3)
Transmit limit accumulator 0x0 (0x0)
IP MTU 1500
Symptom
Overall MPLS connectivity in a router intermittently breaks after an interface failure.
Diagnosis
IP address of a physical interface is used for TDP/LDP identifier. Configure a loopback interface on the router.
Verification
Large packets are not propagated across the network.
Extended ping with varying packet sizes fails for packet sizes close to 1500
In some cases, MPLS might work, but MPLS/VPN will fail.
Diagnosis
Tag MTU issues or switches with no support for jumbo frames in the forwarding path.
Verification
Trace the forwarding path; identify all LAN segments in the path.
Verify Tag MTU setting on routers attached to LAN segments.
Check for low-end switches in the transit path.
Summary
After completing this lesson, you will be able to perform the following tasks:
Describe procedures for monitoring MPLS on IOS platforms.
List the debugging commands associated with label switching, LDP and TDP.
Identify common configuration or design errors.
Use the available debugging commands in real-life troubleshooting scenarios.
150+ Deployments Today