OSPF sequence numbers – why 80 million is smaller than 70 million

So, a bit of a specific topic today. Going through Doyle’s Routing TCP/IP Volume 1, I felt my brain melt as he explained sequence numbers in link-state advertisements (in a general sense, not specific to just OSPF). He describes two types of number “spaces” – the ranges of possible values – that protocols use to sequence their LSAs.

Ignoring the historic bits, such as Radia Perlman’s “lollipop space” – essentially a combination of cycling numbers with a fixed initialization value, which appeared in the earliest OSPF drafts but is not relevant to OSPFv2 or anything else – number spaces are either linear or circular.

In linear spaces, numbers start at x and end at y. The issue with a linear space is that you could potentially “run out” of numbers. This could leave a link-state protocol unable to distinguish the most recent LSA from the originating router from older LSAs still being flooded from one router to the next. When a link-state protocol receives an LSA with the highest possible sequence number, it must shut down and age out its link-state database (LSDB) to flush all the older LSAs. To mitigate this, the designers had to make the sequence number field large enough that the highest possible value (y) would never reasonably be hit. Both OSPFv2 and IS-IS use this number space scheme.

Circular number spaces never end – once the maximum value is reached, the sequence “wraps” back to the lower boundary of the space. Since IS-IS and OSPFv2 use linear spaces, this is included only for completeness. Perlman’s lollipop scheme combined linear and circular spaces, but it is not used in modern link-state protocols.

IS-IS uses a rather simple scheme for its number space. A router originates its own directly-connected link states with a sequence number of one (0x00000001), up to a maximum sequence number of roughly 4.3 billion (0xFFFFFFFF). This is because the sequence number field in IS-IS LSPs (link-state packets) is an unsigned 32-bit integer; the values range from 1 to 4294967295 in decimal.

OSPF, on the other hand, uses signed 32-bit integers. While it uses the same scheme for number spaces as IS-IS (linear), the way the values are represented (especially on a router’s database outputs) is…different.

Observe:

Net Link States (Area 1)

Link ID         ADV Router      Age         Seq#       Checksum
192.168.1.112   10.0.0.112      1862        0x80000237 0x00D860
192.168.7.113   10.0.0.113      12          0x80000001 0x00E8F5

So…it starts at 80 000 000?

Obviously, the seq. number is represented in hexadecimal format…but why 0x80000001? Doesn’t that translate to over 2 billion in decimal? The detail to note is that this field is a signed integer. That means the values actually range from -2147483648 to +2147483647. When processing this field in binary, the CPU needs a way of comparing sequence numbers to determine which one is “higher” – in this case, closer to +2147483647.

Programming languages such as C/C++ must pay particular attention to integers declared as signed vs. unsigned. Some google- and wiki-fu later: the reason we see sequence numbers starting at 0x80000001 (0x80000000 is reserved by the RFC) is that the left-most/most significant bit (MSB) determines whether the number is positive or negative. When the MSB is set, the integer is negative; when the MSB is not set, it is positive.

 

So…
0x80000001 is 1000 0000 …. 0000 0001 in binary.
Since the MSB is set, this is the “first” integer value in the 32-bit signed range. It doesn’t make sense to think of these values in decimal, even though the raw bits do translate “directly” to just over 2 billion. The sequence numbers increment from 0x80000002 all the way to 0xFFFFFFFF (-1 in decimal). Incrementing one more time lands the sequence at decimal 0, because the MSB must become “unset” for it to represent positive values. The range then continues from 0x00000001 up to 0x7FFFFFFE. Again, per the RFC, 0x7FFFFFFF (MaxSequenceNumber) is special: an LSA that reaches this maximum possible sequence number must be prematurely aged and flushed before the sequence can start over…more nuts and bolts to be expanded on later.
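To make the ordering concrete, here’s a quick C sketch (my own toy illustration, not code from any actual OSPF implementation) showing how those same 32-bit patterns order themselves once they’re treated as signed integers. IS-IS, by contrast, would compare the same bits as plain unsigned values:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* OSPF carries the LSA sequence number as a signed 32-bit integer. */
    int32_t initial = (int32_t)0x80000001u; /* InitialSequenceNumber */
    int32_t next    = (int32_t)0x80000002u;
    int32_t maxseq  = (int32_t)0x7FFFFFFF;  /* MaxSequenceNumber */

    printf("0x80000001 as signed: %d\n", initial); /* -2147483647 */
    printf("0x80000002 as signed: %d\n", next);    /* -2147483646 */
    printf("0x7FFFFFFF as signed: %d\n", maxseq);  /*  2147483647 */

    /* The "newer" LSA is simply the larger signed value: */
    if (next > initial)
        puts("0x80000002 is newer than 0x80000001");
    if ((int32_t)0x00000001 > (int32_t)0xFFFFFFFFu)
        puts("0x00000001 (+1) is newer than 0xFFFFFFFF (-1)");
    return 0;
}

Run it and the output confirms the walk-through above: the sequence climbs from -2147483647 up through -1, crosses zero, and tops out at +2147483647.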

 

The choice of signed vs. unsigned gets kind of blurred between hardware and software. Signed integers simplify ALU designs for CPUs, and most (if not all) programming languages implement signedness in their integer data types…Why the IETF chose signed integers for the OSPFv2 spec? Who knows…

 

Anyways, this really bothered me for a couple days. I feel better now that it’s on paper. Any gross errors or omissions, leave them in the comments!

 

PS: More math-savvy folks will scream at this regarding two’s complement’s role in signed integer binary representation…I just wanted to know and jot down why IOS shows the starting sequence numbers in show ip ospf database as 0x80000001. So there you have it. Further reading for the curious.

CCIE or bust! And other goings-on

The time has come…

I’ve finally made the committed decision to pursue my number for CCIE Routing and Switching. Like most folks in networking, I’ve gotten to the point where I’m feeling quite confident in my skills; solid foundations with just a few cobwebs here and there to knock out (mostly due to re-focusing). This decision came after moving on from my VAR support job, which covered the entire breadth of Cisco but prevented my skills from becoming specialized, to a network engineering role doing implementation for a financial org. As I settle into the new job, I’ve come to realize that all the nitty-gritty routing and switching bits are what interest me the most. Sure, I’ve done a bit of this and a bit of that in other areas (mostly wireless and data center) but I’m an R&S guy.

Which brings me to my next bit of personal news – I’ve now gone from support to implementation. For those in the NOCs or VARs, I would highly recommend it as a next step after you’ve gotten your feet wet in the trenches of support. It’s nice to learn what happens when things break and how to resolve issues; however, in my humble opinion, to have that deep understanding, you have to be there to know *why* something is configured or designed a certain way. Delving into the world of real-world business challenges and requirements, as well as ITIL and change management (ugh, how I loathe thee…a “necessary evil”, some may say), I now get to make decisions on how my network looks and how it functions to accomplish a given goal, whether that’s a new project or a business requirement. For those looking to move up in the world of networking, implementation is required experience.

So, while I haven’t been blogging much here (seriously, just so much to learn and write about…some may say too much!), I will be focusing on hitting the books and lab prep. I’m shooting for a Q2 2014 target. Wish me luck!

PS: There are so many good blogs out there with CCIE “notes” – however, I could start banging out tidbits here and there for things that stump me or just bother me…More to come.

Modular Chassis vs Fixed Configuration Switches – Part 2: When/Where

Part 2 of my chassis vs fixed config switch showdown. In this post, I’ll provide some examples of where you might find both form factors deployed and some of the use cases for both. Click here for part 1 of this series.

 

Fixed Configuration

  • Campus

One of the more obvious places to find fixed configuration switches is in the campus network. This includes campuses of all sizes, from SMB to large enterprise. Here, fixed-port switches are the best bang for your buck, since you’ll typically see “dumb switches” deployed in the access layer to provide port density and host connectivity to your network. Examples include lower-end Catalyst 2960 series and Catalyst 3560/3750 series switches.

You won’t only find these switches in the access layer of campus networks. Many smaller joints will use mid-range Catalyst 3750s (for example) and stack them for distribution and/or core layer functions. Some of the advantages of using a switch stack for your core are a distributed control plane across the stack, as well as port density for the rack space. If you’re connecting a bunch of dumb Layer 2 switches across your access layer, you can easily uplink them to slightly beefier Layer 3 switches such as 3560s and 3750s arranged in a stack configuration. Of course, you are subject to hardware failures, since these platforms do not have any redundancy built into each member switch. However, for smaller organizations that just need the port count in a (relatively) small package, these do just fine.

  • Data Center

Fixed-port switches also find themselves in many “top-of-rack” (ToR) deployments. One only has to look as far as the Nexus 5000 Series, Juniper’s EX switching line, Brocade’s VDX line and countless other vendors. These switches typically have the 10 Gigabit density required to connect large numbers of servers. Depending on your requirements, native Fibre Channel or FCoE connectivity down to the servers is also an option (in the case of Cisco Nexus 5500s and Brocade VDX 6700s). Rack space and port density are again factors when deploying these fixed configuration switches. In the data center, FCoE capabilities are also typically found on these platforms where they may not exist in chassis offerings.

  • ISP

One lesser-seen place to find fixed-port switches is in an ISP environment. Metro Ethernet comes to mind here. To provide Layer 2 VPN (L2VPN) services, ISPs need equipment that can interface with the CPE and provide connectivity up to the SP core. Cisco’s ME Series earns its place on this list, providing Q-in-Q/PBB as “dumb SP access”.

Modular Chassis

  • Campus

In the campus network, you’ll see chassis in all sorts of places. On the one hand, with the proper density and power requirements, you can easily deploy something like a Cisco Catalyst 4500E in a wiring closet for host connectivity. On the other hand, you’ll also see Catalyst 6500Es in the campus core, providing high-speed switching with chassis-style redundancy to protect against single-device failures. Redundant power supplies and supervisors help mitigate failures of one core device.

  • Data Center

In classical designs, the data center core is where you’d most likely see a chassis-based switch deployed. High-speed switching is the name of the game here, as well as resiliency against failures. Again, the redundant hardware found in chassis protects against failures in the data, control and management planes. Cisco Nexus 7000s and Juniper’s high-end EX8200 platforms are common examples here.

In some designs, you may see a chassis switch deployed in an “End of Row” (EoR) fashion. This follows the design principles of the fixed-config ToR deployments, except here you deploy a chassis switch to (again) improve redundancy. While definitely not required for all environments, if you can’t possibly allow a failure of the first switch that touches the end hosts, the extra redundancy (supervisors, power, switch fabrics, etc.) fits the bill.

  • ISP

Since I don’t work in the service provider field, I’ll present what I feel are appropriate places to find a chassis-based switch. More than likely you’ll be using chassis-based routers here, but I’ll include it for completeness. The Cisco 6500/7600, ASR 1000/9000 and Juniper MX are all chassis offerings you could see in an ISP’s network. This includes (but is not limited to) the ISP core, for high-speed MPLS or IP switching/routing, and PE deployments. The PE is where the rich services provided by these offerings live, enabling service providers to offer an array of MPLS-based services, from L3VPNs to L2VPNs (whether L2TPv3-based, EoMPLS-based or VPLS-based deployments). Being critical/core devices, I would imagine most service providers require the level of fault tolerance offered by a chassis-based system, especially with strict SLAs with customers.

 

And there you have it. Hopefully, I’ve given you a good overview of the what, where, when, and why of fixed configuration and modular chassis switches. Given the state of things with our vendors, you can typically find any and all form factors from each, and hopefully you’ll choose the right platform to fit the requirements of your environment. Cisco, Juniper, Brocade and HP are the first names that come to mind. There are other players as well – Arista, BNT, Dell, Huawei, Alcatel-Lucent, and a slew of others – that might fit specific markets and environments but may not cover the broad spectrum required for every network. As always, do your research and you’ll find what you need just fine.

 

Any corrections or feedback, you know what to do! Thanks for reading.

Modular Chassis vs Fixed Configuration Switches – Part 1: The Why

One thing that seems to come up a lot in conversation around the office, especially for newer folks entering the networking biz, is the choice between larger modular chassis-based switches and their smaller, simpler fixed-configuration cousins. In fact, most people (myself included when I got my start) don’t even know that “chassis” are an option for switch platforms. This is completely understandable for the typical college and/or Cisco NetAcad graduate, since the foundational education focuses almost exclusively on networking theory and basic IOS operations. So when, where and why do you use a chassis-based versus a fixed configuration switch?

PS: I’ve decided to make this a two-part series, due to the verbosity of the information. This article will focus on comparing the two different classes of switches and why you might use one over the other. In the second part, I’ll provide some real-world use cases for both and where they might typically be deployed.

In this article, I’ll be comparing the two options using a handful of Cisco platforms* (listed under Performance below).

*Note: I’ll be sticking to Cisco purely for simplicity’s sake. Other vendors such as Juniper, HP and Brocade carry their own lines of switching platforms with similar properties. With a bit of research, you can apply the same logic when evaluating the different platforms for your specific implementation.

WHY

Port Density

This is one of the more obvious variables. The Catalyst 6509, a 14RU 9-slot chassis, can have upwards of 384 (336 in dual-supervisor setups) Gigabit copper interfaces. These chassis can also utilize 10Gbps line cards, with densities of just over 100 10GbE ports per chassis. However, it will depend (especially on 6500s) on what supervisor modules you’re running, along with your PFC/CFC/DFC daughtercards, whether those densities end up a bit lower or higher than those numbers. This is where it’s critical to do your research before placing your orders or moving forward with implementations.

On the flip side, fixed-configuration switches are just that – fixed chassis with limited modular functionality. Some exceptions apply, such as the Nexus 5500 switches that have slots for additional modules. Generally speaking, however, WYSIWYG with these classes of switches. If we look at the same 14RU of rack space as in the previous example, that could potentially be fourteen 48-port Gigabit Ethernet switches. A slew of Catalyst 3750s gives you a whopping 672 ports in the same rack space. However, keep in mind that’s fourteen switches, as opposed to a single chassis-based switch. You’ll have to keep this in mind when putting this hardware into your topology (to be discussed below) – unless, of course, you plan to run your switches in a stack via technology such as Cisco StackWise, which helps reduce the burden of managing so many switches separately. Since Cisco stacks are restricted to a maximum of 9 switches per stack, you’d be looking at managing at least two different switch stacks to fill the 14RU and achieve the most port density in this comparison.

A side note for 10-Gig connectivity: the Nexus 5548UP can provide up to 32 10GbE ports, plus 16 more with an optional expansion module, in a 1RU form factor. Compared to the 6509, that’s over 600 10GbE ports in a 14RU space. While it may be unfair to compare a newer Nexus series switch directly to a 6500 chassis, the comparison is purely for the discussion of form factor differences. A quick look at the Nexus 7009 chassis shows similar 10GbE density to the 6509, with densities increasing in the larger chassis.

At the end of the day, fixed-configuration switches (generally speaking) pack more ports into the same amount of rack space than chassis switches.

Interface Selection

Typically, fixed-configuration switches are copper-based FastEthernet and/or GigE. Exceptions again exist, as with the Nexus 5500s (Unified Ports models) being able to run their interfaces in Ethernet or native Fibre Channel mode. On your fixed-config’ers, you’ll also typically have higher-speed uplink ports that support optics such as SFP/SFP+, GBIC, XENPAK and X2s. And of course, there are SFP-based flavours that give you all fiber ports, if that’s what tickles your fancy.

On your chassis-based switches, you’ll see a wider choice of interface types. You’ll have your 10GbE line cards, which can be a combination of SFP/SFP+, XENPAK (older), X2, RJ45 and GBIC transceivers. You can also make use of DWDM/CWDM modules and transceivers for your single-mode long-range runs. Also, with 40 Gigabit Ethernet becoming more relevant and deployed, QSFP+ and CFP connectivity is an option as well (if the chassis in question can properly support it). The only restriction on chassis-based switches is what line cards you have to work with.

Given the nature of fixed switches, it’s natural that a chassis composed of modular line cards offers more interface selection.

Performance

Here’s where things become a little more subtle. Again, for simplicity’s sake, I’ll restrict this section to the following models of switches:

  • Catalyst 6509 Chassis w/ Supervisor 720
  • Catalyst 3750-X 48-port Gigabit Switch
  • Nexus 5548UP Switch

Let’s start with the fixed switches. The fixed-configuration Catalyst 3750-X switches are equipped with 160 Gbps switch fabrics, which should be ample capacity for line-rate gigabit speeds. Let’s also not forget that fabric throughput is not the only measure of performance: these switches, and most similarly designed fixed-configuration “access” switches, have smaller shared port buffers, which can become problematic with very bursty traffic.

On the Nexus 5548UPs, we see a different story. Being a 10GbE switch with a 960Gbps fabric, the 5548 naturally has much higher performance than its LAN cousins. Port buffers on the Nexus 5500s are dedicated per port (640KB), allowing these switches to handle bursts very easily.

The Catalyst 6509, being a modular chassis-based switch, bases its performance on the Supervisor modules in use, as well as the specific line card + daughtercard combination. For simplicity’s sake, let’s assume a Supervisor 720 (since the switch fabric is located on the supervisor in these switches, that’s 720Gbps of switching capacity across the entire chassis) and WS-X6748-GE-TX 48-port Gigabit Ethernet modules. Due to the hardware architecture of these chassis, each slot is constrained to 40 Gbps of fabric capacity, so slight over-subscription will occur on 48x1Gbps line cards (48 Gbps of front-panel capacity into a 40 Gbps slot, a 1.2:1 ratio). Luckily, each port is given a 1.3MB buffer, so bursty traffic is handled just fine on these line cards. When using DFC daughtercards, line cards will even handle their own local switching, so port-to-port traffic on the same line card won’t be constrained by the 40Gbps-per-slot restriction, because those packets never need to traverse the backplane. May I reiterate: for these kinds of switches, do your homework. The performance of a chassis-based switch depends on more factors, due to the combination of supervisor, switch fabric and line cards in use.

Redundancy/HA

One of the most obvious benefits of a chassis-based switch is redundant, highly available hardware. The Catalyst 6500 is typically deployed with two Supervisor modules. By having two supervisors, you’re protected against failures in the data plane (due to redundant active/standby switch fabrics), the control plane (NSF/SSO) and the management plane (IOS is loaded on both supervisors and thus can continue to operate even when the active Sup fails). Chassis switches also utilize redundant power supplies to protect against electrical failures.

On the other side of the coin, you have fixed configuration switches. While some newer switches do utilize redundant power supplies, none of them use separate modules or components for the data, control or management plane. They utilize on-board switch fabrics for forwarding, as well as a single CPU for running your IOS image and control plane protocols.

The chassis is the clear winner here.

Services

Let me start this section by delving into what I mean by “services”: features that are considered additional to basic switching. This includes support for things such as partial and/or full Layer 3 routing, MPLS, service modules such as firewalls and wireless LAN controllers (WLCs), and other extras you may find on these platforms.

I think it’s safe to say that, generally speaking, fixed configuration switches are simple devices. As with the Catalyst line, with the right software you can utilize full Layer 3 routing and most common routing protocols such as OSPF, EIGRP and BGP. Keep in mind, however, that you will often be limited by TCAM capacity for IP routes, so forget about full BGP tables and the like. Still, they get the job done. One great benefit is support for Power over Ethernet (PoE), which is usually standard on copper-based fixed configuration switches.

The Nexus 5500s are a bit of an exception. On the one hand, out of the box they are Layer 2-only devices, supporting (albeit limited) Layer 3 only with an expansion module. On the other hand, with Unified Ports they also support native Fibre Channel, as well as modern NX-OS software. I would say that, for specific use cases, the compromise is quite reasonable. I’ll elaborate on that in my next post.

The Catalyst 6500 is the champion of services. Being a modular chassis, Cisco developed many modules specifically designed with services in mind, including the Firewall Services Module (FWSM), Wireless Services Module (WiSM) controllers, the SSL VPN module, and the ACE/CSM load balancers. While these modules have fallen out of favour due to performance constraints of the chassis itself, as well as a lack of development interest and feature parity with their standalone counterparts, the fact of the matter is that there are still many Cisco customers in the field with these in place. The WiSM, for example, is essentially a WLC4400 on a blade. Being able to conveniently integrate that directly into a chassis saves rack space as well as ports, by using the backplane directly to communicate with the wired LAN. Other services supported on the 6500 from a software standpoint include Virtual Switching System (VSS) (with the VS-SUP720 or SUP2T), full Layer 3 routing with large TCAM, MPLS and VPLS support (with the proper line cards) and PoE.

The chassis wins this category thanks to its modular “Swiss Army knife” design.

Cost

I’ll just briefly mention the cost comparison between the two configurations. You will typically see a chassis-based switch eclipse a fixed configuration switch in cost, due to the complexity of the hardware design as well as all the modularity that comes with a chassis. You’ll always have a long laundry list of parts to purchase in order to build a chassis-based switch: the chassis itself, power supplies, supervisors, line cards and daughtercards (if applicable). Fixed configuration switches typically have a much lower cost of entry, with only limited modularity on certain platforms.

And there you have it. Hopefully, this gives you some insight into why you might use one form factor over the other. In the next part, I’ll provide some use-case examples for each and where you may typically deploy one versus the other.

CCIP completed, on to a different brand of Kool-Aid

Earlier this month, I sat for my QoS 642-642 exam to complete my CCIP certification. Other than a few gripes with outdated information, the exam went pretty smoothly and I hammered out a pass. I’ve written previously about my motivations for obtaining the CCIP cert and am glad to have stuck with it. Even though the certification will officially retire in a week or so, a lot of the topics covered will also be on the CCIE R&S version 4.0 blueprint. I doubt I’m finished with BGP, MPLS and QoS, so I’m keeping that knowledge tucked away for the time being 😉

Just one last note on CCIP: I would highly recommend Wendell Odom’s Cisco QoS Exam Certification Guide for anyone looking to learn about QoS on Cisco IOS. It’s one of the best Cisco Press books I’ve read, and I continue to reference it for everything IOS QoS.

Now that I’ve covered a broad swath of Cisco R&S technologies with my CCNP and CCIP, I’ve decided to revisit my Juniper studies. While we don’t work all that much with Juniper at $DAYJOB, we have Juniper gear in the lab to play with. Recently, I’ve been using EX4200 and EX4500 switches, as well as working through Juniper’s free JNCIS-ENT study guide. Coming from a Cisco background, and particularly having gone through the CCNP, I’m finding there’s a good amount of overlap. It’s just a matter of learning all the JUNOS hierarchies and figuring out “where is that feature” in JUNOS.

Upcoming posts will cover some basic JUNOS switching on EX and interoperating with Cisco Catalyst 3560s/3750s. I’ll also be finishing a lot of my draft posts from earlier this year covering BGP, MPLS and some vendor ranting 😛

Stay tuned.

MPLS VPN Label Basics – The LIB, the LFIB and the RIB(s)

LDP, or Label Distribution Protocol, is used to advertise label bindings to peers in an MPLS network.

The Label Information Base, or LIB, contains all labels received from remote peers and is similar to the IP RIB. Not all labels received from LDP neighbors are used, since only one best path per prefix is selected and used for forwarding. Forwarding decisions are based on the Label Forwarding Information Base, or LFIB, once the best path towards the next-hop LSR is determined. How that path is determined comes down to the close relationship between the LIB, the LFIB and the IP routing table (RIB).

For clarity, we’ll be talking about non-ATM MPLS forwarding. ATM MPLS uses different LDP discovery, label retention and distribution methods because of ATM’s unique forwarding method and encapsulation(s).

Here’s our simple MPLS topology. We have two PE routers, connecting two customer sites. We also have a route reflector to reduce the number of IBGP connections required between PE routers. This is part of my MPLS lab so the irrelevant routers and configs will be omitted.

PE1 Router ID: 10.255.255.3/32
PE2 Router ID: 10.255.255.4/32
RR Router ID: 10.255.255.2/32

Routing within the MPLS network is provided by basic single-area IS-IS.

So how does MPLS build its Label FIB? First, let’s look at the VRFs defined for this customer. We’ll be using VRF “Red” on both PE routers:

PE1#show ip vrf
  Name                             Default RD          Interfaces
  Red                              65000:1             Fa1/0
----
PE2#show ip vrf
  Name                             Default RD          Interfaces
  Red                              65000:1             Fa1/0

For VPNv4 routing between customer sites, MP-BGP is used to distribute label bindings for VRF routes. LDP distributes label bindings for the Loopback0 BGP next-hops. OSPF is used between the CE and PE routers.

On PE1, here are all the customer routes reachable via Fa1/0:

PE1#show ip route vrf Red ospf | in FastEthernet1/0
O IA    10.10.1.0/24 [110/2] via 10.1.1.2, 00:23:55, FastEthernet1/0
O       10.30.100.0/30 [110/101] via 10.1.1.2, 00:23:55, FastEthernet1/0
O       10.30.1.101/32 [110/2] via 10.1.1.2, 00:23:55, FastEthernet1/0

OSPF routes running in VRF Red are redistributed into MP-BGP under “address-family ipv4 vrf Red”.

PE1#show ip bgp vpnv4 rd 65000:1 10.10.1.0/24
BGP routing table entry for 65000:1:10.10.1.0/24, version 14
Paths: (1 available, best #1, table Red)
  Advertised to update-groups:
        1
  Local
    10.1.1.2 from 0.0.0.0 (10.255.255.3)
      Origin IGP, metric 2, localpref 100, weight 32768, valid, sourced, best
      Extended Community: RT:65000:1 OSPF DOMAIN ID:0x0005:0x000000010200 
        OSPF RT:0.0.0.0:3:0 OSPF ROUTER ID:10.100.1.101:0
      mpls labels in/out 25/nolabel
PE1#

Here we can see the MPLS label binding that will be sent to other PE routers. PE routers with a VRF importing the same route target will install these routes into the VRFs of the other sites.

In the MPLS forwarding table (LFIB), an entry is created for these “local” VRF routes – that is, the routes reachable via the next-hop CE router:

PE1#show mpls forwarding-table vrf Red 10.10.1.0
Local  Outgoing      Prefix            Bytes Label   Outgoing   Next Hop    
Label  Label or VC   or Tunnel Id      Switched      interface              
25     No Label      10.10.1.0/24[V]   0             Fa1/0      10.1.1.2

This is the label that will be advertised to MP-BGP peers (in this case, reflected to PE2).

PE1 will also have a label binding for its own BGP next-hop IP address, which is the Loopback0 interface under the global routing table:

PE1#show mpls ldp bindings local 10.255.255.3 32
  lib entry: 10.255.255.3/32, rev 4
        local binding:  label: imp-null

This is advertised as an Implicit Null label, to avoid performing two lookups (one in the LFIB and another in the RIB for its connected prefix). Core P routers will have a label binding for this prefix:

CoreP#show mpls ldp bindings local
...
  lib entry: 10.255.255.3/32, rev 14
        local binding:  label: 17

For forwarding to use the correct labels, two labels are needed. The top label forwards packets through the core (P) MPLS network to the BGP next-hop (the loopback of either PE1 or PE2, depending on the packet’s destination from the CE sites). The bottom label identifies the VRF and outgoing interface used to route packets towards the customer router(s).

So, for the customer at Site B to reach network 10.10.1.0/24 at Site A, PE2 will use the following labels:

  • Label 17, the transport label to PE1, received from the MPLS core router(s); the BGP next-hop is identified via a RIB lookup in VRF “Red”, and label 17 is the LDP binding for that next-hop’s /32
  • Label 25, the VPN label, received from PE1 via MP-BGP; identified in the VPNv4 BGP RIB

To verify:

A packet received on Fa1/0 from the Site B router(s), destined for 10.10.1.1, triggers a lookup in the VRF Red RIB:

PE2#show ip route vrf Red ospf  
Routing Table: Red

     10.0.0.0/8 is variably subnetted, 9 subnets, 3 masks
O IA    10.10.1.0/24 [110/52] via 10.255.255.3, 00:46:32

PE2 identifies the next-hop IP address, which is the BGP next-hop of PE1. Since the packet will traverse the MPLS network via outgoing interface FastEthernet2/0 into the core, it needs to be labeled before transit:

PE2#show ip bgp vpnv4 rd 65000:1 10.10.1.0/24
BGP routing table entry for 65000:1:10.10.1.0/24, version 34
Paths: (1 available, best #1, table Red, RIB-failure(17))
  Not advertised to any peer
  Local
    10.255.255.3 (metric 20) from 10.255.255.2 (10.255.255.2)
      Origin IGP, metric 2, localpref 100, valid, internal, best
      Extended Community: RT:65000:1 OSPF DOMAIN ID:0x0005:0x000000010200 
        OSPF RT:0.0.0.0:3:0 OSPF ROUTER ID:10.100.1.101:0
      Originator: 10.255.255.3, Cluster list: 10.255.255.2
      mpls labels in/out nolabel/25
PE2#show mpls ldp bindings 
...
lib entry: 10.255.255.3/32, rev 8
        local binding:  label: 17
        remote binding: lsr: 10.255.255.1:0, label: 17

Therefore, packets destined for customer Site A will be sent with the labels 17 and 25.

PE2#traceroute vrf Red 10.10.1.1

Type escape sequence to abort.
Tracing the route to 10.10.1.1

  1 10.10.1.9 [MPLS: Labels 17/25 Exp 0] 76 msec 52 msec 72 msec
  2 10.1.1.1 [MPLS: Label 25 Exp 0] 84 msec 40 msec 40 msec
  3 10.1.1.2 132 msec *  60 msec
PE2#
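As a side illustration, here’s a small C sketch (mine, assuming the standard RFC 3032 label stack encoding – not anything IOS-specific) that packs the 17/25 stack from the traceroute above into the two 32-bit label stack entries that actually go on the wire:

#include <stdio.h>
#include <stdint.h>

/* Pack one 32-bit MPLS label stack entry per RFC 3032:
   Label (20 bits) | EXP (3 bits) | Bottom-of-Stack (1 bit) | TTL (8 bits). */
static uint32_t mpls_entry(uint32_t label, uint8_t exp, int bos, uint8_t ttl) {
    return ((label & 0xFFFFFu) << 12) |
           ((uint32_t)(exp & 0x7u) << 9) |
           ((uint32_t)(bos ? 1 : 0) << 8) |
           ttl;
}

int main(void) {
    /* Top: transport label 17 toward PE1's loopback (S bit clear). */
    uint32_t top    = mpls_entry(17, 0, 0, 255);
    /* Bottom: VPN label 25 for the VRF Red prefix (S bit set). */
    uint32_t bottom = mpls_entry(25, 0, 1, 255);

    printf("top entry:    0x%08X\n", top);    /* 0x000110FF */
    printf("bottom entry: 0x%08X\n", bottom); /* 0x000191FF */
    return 0;
}

The S (bottom-of-stack) bit is what tells the receiving router that label 25 is the last label and that an IP packet follows.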

Below, I’ll illustrate the decision process and the relationships between all the entries in an MPLS router (with a small code sketch after the list):

  1. An incoming packet from Site B, destined for 10.10.1.1, is received on PE2’s VRF interface Fa1/0.
  2. An IP lookup is performed in the VRF table “Red”, identifying a next-hop IP address known via the global routing table. This route was redistributed from BGP into OSPF (hence the RIB failure), with a PE1 next-hop of 10.255.255.3.
  3. A BGP RIB lookup is performed to identify the VPN label. Under the VPNv4 address family, the outgoing label is 25, as advertised by PE1.
  4. A global RIB lookup is performed for the BGP next-hop learned in the VRF. The actual IP next-hop in the MPLS core (10.10.1.9) is identified, via outgoing interface FastEthernet2/0.
  5. The outgoing interface is MPLS-enabled. A LIB lookup is performed to find which LDP neighbor has the MPLS core next-hop 10.10.1.9 among its bound addresses; the remote label received from that neighbor is used as the transport label to PE1’s loopback.
  6. An LFIB entry is created with label 17 and outgoing interface FastEthernet2/0, next-hop IP address 10.10.1.9; the packet is forwarded into the core MPLS network and routed on to PE1.
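And to tie the chain together, here’s a toy C model of those six steps. The flat variables are hypothetical stand-ins for the real RIB/LIB/LFIB structures – purely illustrative, with the values pulled from the show-command outputs above:

#include <stdio.h>

int main(void) {
    /* Step 2: VRF "Red" RIB lookup -> BGP next-hop (PE1's loopback). */
    const char *bgp_next_hop = "10.255.255.3";
    /* Step 3: VPNv4 BGP RIB lookup -> VPN (bottom) label from PE1. */
    int vpn_label = 25;
    /* Step 4: global RIB lookup for the BGP next-hop -> outgoing
       interface and the actual IP next-hop in the MPLS core. */
    const char *out_if = "FastEthernet2/0";
    const char *core_next_hop = "10.10.1.9";
    /* Step 5: LIB lookup -> the LDP neighbor that owns 10.10.1.9
       advertised label 17 for 10.255.255.3/32 (the transport label). */
    int transport_label = 17;

    /* Step 6: the resulting LFIB-style forwarding decision. */
    printf("10.10.1.0/24 via %s: push VPN label %d, then transport label %d, "
           "out %s toward %s\n",
           bgp_next_hop, vpn_label, transport_label, out_if, core_next_hop);
    return 0;
}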

The output of “show mpls ldp neighbor” below displays the bound addresses for the core P router(s). The LIB entry selected for forwarding in the LFIB is based on which LDP neighbor the global-RIB next-hop IP address is bound to. In this case, only one LDP neighbor exists:

PE2#show mpls ldp neighbor 
    Peer LDP Ident: 10.255.255.1:0; Local LDP Ident 10.255.255.4:0
        TCP connection: 10.255.255.1.646 - 10.255.255.4.34846
        State: Oper; Msgs sent/rcvd: 118/119; Downstream
        Up time: 01:33:30
        LDP discovery sources:
          FastEthernet2/0, Src IP addr: 10.10.1.9
        Addresses bound to peer LDP Ident:
          10.10.1.1       10.255.255.1    10.10.1.5       10.10.1.9      
          10.10.1.13      
PE2#

In an MPLS VPN network, the label bindings received from remote peers (LIB), the label forwarding table (LFIB) and the various IP routing tables (VRF RIB, global RIB, BGP RIB, etc.) all work in tandem to create the label stack used to forward packets from one VPN site to another. This is the basic forwarding paradigm of Multiprotocol Label Switching; it enables service providers to offer L3VPN services to customers, with proper separation of customer routing via the use of VRFs. References used in this post are Luc De Ghein’s MPLS Fundamentals from Cisco Press and Cisco documentation, found at http://www.cisco.com/go/mpls.

BGP+MPLS Exam Passed! QoS and other things

Hi All,
I’ve been staying away from the Twitters and blogging to focus on my BGP+MPLS composite exam. I wrote it this afternoon and passed, w00t! I want to give a HUGE thanks to Jarek Rek at his blog hackingcisco.blogspot.com. His labs are great practice for configuring Cisco IP routing, and I recommend anyone preparing for CCNP ROUTE, CCIE R&S or anything routing-related check it out. Thanks again Jarek!

So, other than beating my chest, I will be finishing up some outstanding blog posts around my BGP and MPLS studies before moving on to my QoS exam. I’ve also been involved more and more with Juniper at work, along with trying to get up to speed on L2VPN technologies like basic EoMPLS. Metro Ethernet is a whole other rabbit hole that I wish to descend into eventually, but at the moment it’s still a bit of a mystery. It all makes keeping up with blogging and goofing off at home challenging, since I’m in study mode for CCIP while getting pulled in twenty different directions for real-world job stuff.

I’m currently looking for my next book to go through in prep for my QoS exam. My coworker recommended Cisco Press’ “End-to-End QoS Network Design”, while most of Learning@Cisco seems to recommend the IP Telephony QOS exam study guide. That’s still up in the air until I review the exam topics. If anyone has a solid recommendation for 642-642, please let me know in the comments!

Last update: I picked up the newest edition of “TCP/IP Illustrated, Volume 1”. Stevens’ book is often recommended by the experts and is considered the bible of Layers 4 and up. It’s a comprehensive tome and a great reference.

More technical posts coming shortly.