Encapsulating, shim headers, tunnelling – does it matter?

I overheard my manager today giving one of the new junior guys the run-down on basic encapsulation methods and general IP. Basically, how to TCP/IP. In explaining 802.1Q and VLAN tagging, my boss uttered the following phrase:

“…a switch encapsulates the frame with the VLAN number…”

It was at that point I ran over to his desk and made my point (a bit of semantics, especially for a newbie) that, technically, an 802.1Q tag actually modifies the Ethernet header and is more like a shim header.


An 802.1Q frame

But then GRE and IP tunnelling were brought up. And MPLS. And IPSec. After much debate, I let the coaching continue, probably having caused quite a bit of confusion for my green colleague. I've come to realize that I tend to fly off the handle and go so far into a technical discussion that I lose those who haven't yet learnt or had exposure to the technology that I and others know intimately.

Anyways, onto the whole encap-vs-tunnel-vs-shim debate. Here’s how I like to explain it and understand it myself as it applies generally:

  • encapsulation distinctly divides bits on the wire. When viewing a packet in Wireshark, for example, an IP packet is encapsulated in an Ethernet header and an IP header, followed by whatever transport protocol is in use (TCP, UDP or ICMP), and finally the application data. Encapsulation forms the basic structure of data packets, with a clear division of labor: routers inspect IP headers for destination networks, and switches inspect Ethernet headers for destination MAC addresses.
  • tunnels use tunnel headers and multiple IP headers to create overlay networks riding over underlying infrastructure. GRE (with IPSec for encryption) is the most widely used tunnelling mechanism, and it works great for connecting remote sites over the Internet. Internet routers only inspect the outer IP header, which gets the packet routed to the tunnel endpoint; at that point the far end strips the outer IP and GRE headers and then makes its own routing decisions based on the "inside" IP header. MPLS also functions in a similar way for VPN applications.
  • shim headers are the muddiest of the bunch. MPLS is often called a shim header because it inserts a small 4-byte (32-bit) header into a data packet, which is then processed and/or inspected by MPLS-enabled routers. It also doesn't have just one location where it can appear: in the case of pure IP L3VPNs, it's shimmed between the Layer 2 header and the IP header, and it can also appear between disparate Layer 2 headers in the case of Any Transport over MPLS (AToM). 802.1Q is even harder to define, since it modifies the existing Ethernet header. A frame could even carry multiple .1Q tags in the case of dot1q tunnelling (confused yet?). The sketch after this list shows where each of these headers lands.
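
If it helps to see the header ordering concretely, here's a rough sketch using Scapy (a Python packet-crafting library). The addresses and payloads are made up purely for illustration – treat it as my sketch of the concept, not a reference implementation:

# Illustrative only -- made-up addresses/payloads; assumes Scapy is installed.
from scapy.all import Ether, Dot1Q, IP, GRE, TCP, Raw

# Plain encapsulation: Ethernet / IP / TCP / application data
plain = (Ether(dst="00:00:5e:00:53:01")
         / IP(src="192.0.2.10", dst="198.51.100.20")
         / TCP(dport=80)
         / Raw(b"GET / HTTP/1.1"))

# GRE tunnel: Internet routers only look at the outer IP header; the far-end
# tunnel endpoint strips the outer IP + GRE and routes on the inner IP header.
tunnelled = (Ether()
             / IP(src="203.0.113.1", dst="203.0.113.2")
             / GRE()
             / IP(src="10.1.1.1", dst="10.2.2.2")
             / TCP(dport=80)
             / Raw(b"GET / HTTP/1.1"))

# 802.1Q "shim": the tag is wedged in after the MAC addresses, ahead of the
# IP header -- the original Ethernet header is modified rather than wrapped.
tagged = Ether() / Dot1Q(vlan=100) / IP(src="10.1.1.1", dst="10.1.1.2") / TCP()

for pkt in (plain, tunnelled, tagged):
    pkt.show2()   # layer-by-layer breakdown, much like Wireshark's detail pane

Dumping each of those makes the difference obvious: the GRE packet carries two full IP headers, while the .1Q frame still has a single (modified) Ethernet header.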

And yet, after writing those short descriptions, it's apparent to me that all of these terms are muddy. You can't always define a data type as one or the other, since you'll find numerous exceptions and special use cases (that aren't actually so special and are very widely deployed) that break any rigidly-defined "layering rules". Smarter people than me have made the same point when it comes to explaining TCP/IP with dated reference models such as the OSI model. The IETF has also acknowledged that reference models which insist on strict onion layering hurt more than they help when applied to real data networks.

But I digress. Encapsulation/decapsulation and tunnelling are central concepts that all networks use. Never will you (or should you) see an end host spit out an IP packet without its data link header (mostly Ethernet these days) along with its IP header and any associated transport/application data. It's just the way TCP/IP evolved over the past several decades, and it's the best we've got. Sometimes it's less about terminology and semantics, and more about the overall goal a given method is trying to achieve.

If I’m grossly mistaken, be sure to let me know in the comments. I’ll try my best not to harass anyone less technical and nerdy than myself with (sometimes) unimportant details.

OSPF sequence numbers – why 80 million is smaller than 70 million

So, a bit of a specific topic today. Going through Doyle's Routing TCP/IP Volume 1, I felt my brain melt as he explained sequence numbers in link-state advertisements (in a general sense, not specific to just OSPF). He describes two types of number "spaces" – the range of possible values – that protocols use to sequence their LSAs.

Ignoring the historic bits, such as Radia Perlman's "lollipop space" – essentially a combination of a cycling number space with a fixed initialization value, which was part of the first OSPF drafts and isn't relevant to OSPFv2 or anything else – number spaces are either linear or circular.

In linear spaces, numbers start at x and end at y. The issue with a linear space is that you could potentially "run out" of numbers. That could leave a link-state protocol unable to distinguish the most recent LSA from the originating router from older LSAs simply being flooded from one router to the next. A link-state protocol, on receiving an LSA with the highest possible sequence number, has to age it out of its link-state database (LSDB) to flush all the older copies out. To mitigate this, the designers had to make the sequence number field large enough that you'd never reasonably hit that highest possible value (y). Both OSPFv2 and IS-IS use this number space scheme.

Circular number spaces never end – once the maximum value is reached, the counter "resets" back to the lower boundary of the space. Since IS-IS and OSPFv2 use linear spaces, this is included only for completeness (a toy example of a circular comparison follows below). Perlman's lollipop scheme combined linear and circular spaces, but it isn't used in modern link-state protocols.
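
For the curious, here's a small plain-Python sketch of how a circular space can still decide which of two values is "newer" even after the counter wraps – along the lines of DNS-style serial number arithmetic, and explicitly not how OSPFv2 or IS-IS behave (the numbers are arbitrary):

# Toy circular-space comparison -- illustrative only, not an OSPF/IS-IS mechanism.
SPACE = 2**16  # a small 16-bit space keeps the numbers readable

def is_newer(old: int, new: int) -> bool:
    """True if 'new' is ahead of 'old', assuming it leads by less than half the space."""
    return 0 < ((new - old) % SPACE) < SPACE // 2

print(is_newer(65530, 3))   # True  -- the counter wrapped past the maximum
print(is_newer(3, 65530))   # False -- that would be going backwards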

IS-IS uses a rather simple scheme for its number space. A router originates its own directly-connected link states with a sequence number of one (0x00000001), up to a maximum sequence number of roughly 4.2 billion (0xFFFFFFFF). This is because the sequence number field in an IS-IS LSP (link state packet) is an unsigned 32-bit integer, so the values range from 1 to 4294967295 in decimal.

OSPF, on the other hand, uses signed 32-bit integers. While it uses the same linear number space scheme as IS-IS, the way the values are represented (especially in a router's database output) is…different.

Observe:

Net Link States (Area 1)

Link ID         ADV Router      Age         Seq#       Checksum
192.168.1.112   10.0.0.112      1862        0x80000237 0x00D860
192.168.7.113   10.0.0.113      12          0x80000001 0x00E8F5

So…it starts at 80 000 001?

Obviously, the sequence number is represented in hexadecimal…but why 0x80000001? Doesn't that translate to roughly 2 billion in decimal? The detail to note is that this field is a signed integer. That means the values actually range from -2147483648 to +2147483647. When processing this field in binary, the CPU needs a way of comparing sequence numbers to determine which one is "higher" – in this case, closer to +2147483647.

Programming languages such as C/C++ must pay particular attention to whether integers are declared as signed or unsigned. Some google- and wiki-fu later: the reason we see sequence numbers starting at 0x80000001 (0x80000000 is reserved by the RFC) is that the left-most/most significant bit determines whether the value is interpreted as positive or negative. When the MSB is set, the integer is negative; when the MSB is not set, it is positive.

 

So…
0x80000001 is 1000 0000 …. 0000 0001 in binary
Since the MSB is set, this is the "first" usable value in a 32-bit signed integer range. It doesn't make much sense to think of these values in unsigned decimal, even though 0x80000001 does indeed translate "directly" to roughly 2 billion. These sequence numbers increment through 0x80000002…all the way to 0xFFFFFFFF (-1 in decimal). Incrementing one more time lands the sequence at decimal 0, because the MSB becomes "unset" and the values turn positive. The range then continues from 0x00000001 up to 0x7FFFFFFE. Again, from the RFC, 0x7FFFFFFF is the reserved maximum (actually, an LSA reaching this maximum possible sequence number has to be flushed from the LSDB before the sequence can restart…more nuts and bolts to be expanded on later).
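
To see why the router's hex output sorts the way it does, here's a small plain-Python sketch that reinterprets each 32-bit value as a two's-complement signed integer (the sample values are the ones from the database output above plus a couple of boundary cases):

# Illustrative only: read 32-bit values the way OSPF's signed sequence field does.
def as_signed32(value: int) -> int:
    """Reinterpret an unsigned 32-bit value as a two's-complement signed integer."""
    return value - 0x100000000 if value & 0x80000000 else value

samples = [0x80000001, 0x80000237, 0xFFFFFFFF, 0x00000001, 0x7FFFFFFE]
for seq in sorted(samples, key=as_signed32):
    print(f"0x{seq:08X} -> {as_signed32(seq):>11d}")

# 0x80000001 -> -2147483647   <- the "lowest" (initial) sequence number
# 0x80000237 -> -2147483081
# 0xFFFFFFFF ->          -1
# 0x00000001 ->           1
# 0x7FFFFFFE ->  2147483646   <- largest usable value before 0x7FFFFFFF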

 

The choice of signed vs unsigned gets kind of blurred between hardware and software. Signed integers simplify ALU designs for CPUs, and most (if not all) programming languages implement signedness in their integer data types…Why did the IETF choose signed integers for the OSPFv2 spec? Who knows…

 

Anyways, this really bothered me for a couple days. I feel better now that it’s on paper. Any gross errors or omissions, leave it in the comments!

 

PS: More math-savvy folks will scream at this in regards to two's complement's role in signed integer binary representation…I just wanted to know, and jot down, why IOS shows the starting sequence numbers in show ip ospf database as 0x80000001. So there you have it. Further reading for the curious.

CCIE or bust! And other going-on’s

The time has come…

I've finally made the commitment to pursue my number for CCIE Routing and Switching. Like most folks in networking, I've gotten to the point where I'm feeling quite confident in my skills; solid foundations with just a few cobwebs here and there to knock out (mostly due to re-focusing). This decision came after moving on from my VAR support job, which covered the entire breadth of Cisco but prevented my skills from becoming specialized, to a network engineer role doing implementation for a financial org. Now that I'm settling into the new job, I've come to realize that all the nitty-gritty routing and switching bits are what interest me the most. Sure, I've done a bit of this and a bit of that in other areas (mostly in wireless and data center) but I'm an R&S guy.

Which brings me to my next bit of personal news – I've now gone from support to implementation. For those in NOCs or VARs, I would highly recommend it as a next step after you've gotten your feet wet in the trenches of support. It's nice to learn what happens when things break and how to resolve issues; however, in my humble opinion, in order to have that deep understanding, you have to be there to know *why* something is configured or designed a certain way. Delving into the world of real-world business challenges and requirements, as well as ITIL and change management (ugh, how I loathe thee…a "necessary evil", some may say), I now get to make decisions about how my network looks and how it functions to accomplish a given goal, whatever that may be – a new project, a new business requirement. For those who are looking to move up in the world of networking, implementation is required experience.

So, while I haven’t been blogging much here (seriously, just so much to learn and write about…some may say too much!), I will be focusing on hitting the books and lab prep. I’m shooting for a Q2 2014 target. Wish me luck!

PS: There are so many good blogs out there with CCIE “notes” – however, I could start banging out tidbits here and there for things that stump me or just bother me…More to come.

Modular Chassis vs Fixed Configuration Switches – Part 2 When/Where

Part 2 of my chassis vs fixed config switch showdown. In this post, I’ll provide some examples of where you might find both form factors deployed and some of the use cases for both. Click here for part 1 of this series.

 

Fixed Configuration

  • Campus

One of the more obvious places to find fixed configuration switches is the campus network. This includes campuses of all sizes, from SMB to large enterprise. Here, the best bang for your buck lends itself to fixed-port switches, since you'll typically see "dumb switches" deployed in the access layer for port density and to provide host connectivity to your network. Examples include the lower-end Catalyst 2900 series and the Catalyst 3560/3750 series switches.

You won't find these switches only in the access layer of campus networks, either. Many smaller joints will use mid-range Catalyst 3750's (for example) and stack them for distribution and/or core layer functions. Some of the advantages of using a switch stack for your core are a distributed control plane across the stack, as well as port density for the rack space. If you're connecting a bunch of dumb Layer 2 switches across your access layer, you can easily uplink them to slightly-beefier Layer 3 switches such as 3560's and 3750's arranged in a stack configuration. Of course, you are exposed to hardware failures, since these platforms don't have any redundancy built into each member switch. However, for smaller organizations that just need a certain number of ports in a (relatively) small package, these do just fine.

  • Data Center

Fixed-port switches also find themselves in many "top-of-rack" (ToR) deployments. One only has to look as far as the Nexus 5000 Series, Juniper's EX switching line, Brocade's VDX line and countless other vendors. These switches typically have the 10 Gigabit density required to connect large numbers of servers. Depending on your requirements, native Fibre Channel is also available (in the case of the Cisco Nexus 5500's and Brocade VDX 6700's), along with FCoE connectivity down to the servers. Rack space and port density are again factors when deploying these fixed configuration switches. In the data center, FCoE capabilities are also typically found on these platforms where they may not exist in chassis offerings.

  • ISP

One lesser-seen place to find fixed-port switches is in an ISP environment; Metro Ethernet comes to mind here. To provide Layer 2 VPN (L2VPN) services, ISPs need equipment that can interface with the CPE and provide connectivity up to the SP core. Cisco's ME Series earns its place on this list, providing Q-in-Q/PBB as "dumb SP access".

Modular Chassis

  • Campus

In the campus network, you'll see chassis in all sorts of places. On the one hand, with the proper density and power requirements, you can easily deploy something like a Cisco Catalyst 4500E in a wiring closet for host connectivity. On the other hand, you'll also see Catalyst 6500E's in the campus core, providing high-speed switching with chassis-style redundancy to protect against single-device failures. Redundant power supplies and supervisors help mitigate the failure of a single core device.

  • Data Center

In classical designs, the data center core is where you'd most likely see a chassis-based switch deployed. High-speed switching is the name of the game here, as well as resiliency against failures. Again, the redundant hardware found in a chassis protects against failures in the data, control and management planes. Cisco Nexus 7000's and Juniper's high-end EX8200 platforms are prime examples here.

In some designs, you may see a chassis switch deployed in an "End of Row" (EoR) fashion. This follows the design principles of the fixed-config ToR deployments, except here you deploy a chassis switch to (again) improve redundancy. While definitely not required for all environments, if you can't possibly tolerate a failure of the first switch that touches the end hosts, the extra redundancy (supervisors, power, switch fabrics, etc.) fits the bill.

  • ISP

Since I don't work in the service provider field, I'll present what I feel are appropriate places to find a chassis-based switch. More than likely you'll be using chassis-based routers here, but I'll include it for completeness. Cisco 6500/7600, ASR 1000/9000's, Juniper MX – all chassis offerings that you could see in an ISP's network. This includes (but is not limited to) the SP core, for high-speed MPLS or IP switching/routing, and PE deployments. The PE is where the rich services provided by these platforms live, allowing service providers to offer an array of MPLS-based services, from L3VPNs to L2VPNs (whether L2TPv3-based, EoMPLS-based or VPLS-based deployments). Being critical/core devices, I would imagine most service providers require the level of fault tolerance offered by a chassis-based system, especially with strict SLAs with customers.

 

And there you have it. Hopefully, I've given you a good overview of the what, where, when, and why of fixed configuration and modular chassis switches. Given the state of things with our vendors, you can typically find any and all form factors from each of them, and hopefully you'll choose the platform that fits the requirements of your environment. Cisco, Juniper, Brocade and HP are the first names that come to mind. There are other players as well, including Arista, BNT, Dell, Huawei, Alcatel-Lucent, and a slew of others that might fit specific markets and environments but may not cover the broad spectrum required for every network. As always, do your research and you'll find what you need just fine.

 

Any corrections or feedback, you know what to do! Thanks for reading.

Modular Chassis vs Fixed Configuration Switches – Part 1 The Why

One thing that seems to come up a lot in conversation around the office, especially for newer folks entering the networking biz, is the choice between larger modular chassis-based switches and their smaller, simpler fixed-configuration cousins. In fact, most people (myself included when I got my start) don't even know that a "chassis" is an option for a switch platform. This is completely understandable for the typical college and/or Cisco NetAcad graduate, since the foundational education is focused almost exclusively on networking theory and basic IOS operations. So when, where and why do you use a chassis-based versus a fixed configuration switch?

PS: I've decided to make this a two-part series, due to the verbosity of the information. This article will focus on comparing the two different classes of switches and why you might use one over the other. In the second part, I'll provide some real world use cases for both and where they might typically be deployed.

In this article, I'll be using the following Cisco platforms* for comparison between the two options:

  • Catalyst 6509 Chassis w/ Supervisor 720
  • Catalyst 3750-X 48-port Gigabit Switch
  • Nexus 5548UP Switch

*Note: I’ll be sticking to Cisco purely for simplicity’s sake. Other vendors such as Juniper, HP and Brocade carry their own lines of switching platforms with similar properties. With a bit of research, you can apply the same logic when evaluating the different platforms for your specific implementation.

WHY

Port Density

This is one of the more obvious variables. The Catalyst 6509, a 14RU 9-slot chassis, can take upwards of 384 (336 in dual-supervisor setups) Gigabit copper interfaces. These chassis can also utilize 10Gbps line cards, with densities of just over one hundred 10GbE ports per chassis. However, it'll depend (especially on 6500's) on which supervisor modules you're running, along with your PFC/CFC/DFC daughtercards, whether those densities come in a bit lower or higher than those numbers. This is where it's critical to do your research before placing your orders or moving forward with implementations.

On the flip side, fixed-configuration switches are just that – fixed chassis with limited modular functionality. Some exceptions apply, such as the Nexus 5500 switches, which have slots for additional expansion modules. Generally speaking, though, WYSIWYG with this class of switch. If we look at the same 14RU of rack space as in the previous example, that could potentially be fourteen 48-port Gigabit Ethernet switches. A slew of Catalyst 3750's gives you a whopping 672 ports in the same rack space. However, keep in mind that's fourteen switches, as opposed to a single chassis-based switch, and you'll have to account for that when putting this hardware into your topology (to be discussed below) – unless, of course, you plan to run your switches in a stack via a technology such as Cisco StackWise, which helps reduce the burden of managing so many switches separately. Since Cisco stacks are restricted to a maximum of 9 switches per stack, you'll be looking at managing at least two switch stacks to fill the 14RU and achieve the most port density in this comparison (a quick calculation follows below).
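
Here's that back-of-the-napkin math as a quick sketch. The per-switch and per-chassis port counts are the rough figures quoted above, so treat them as assumptions rather than datasheet gospel:

import math

# Rough figures from the discussion above -- adjust for your actual hardware.
rack_units = 14
ports_per_fixed_switch = 48     # 1RU, 48-port GigE fixed switch
max_stack_members = 9           # Cisco StackWise limit
chassis_copper_ports = 384      # quoted maximum for a Catalyst 6509

fixed_ports = rack_units * ports_per_fixed_switch           # 672 ports
stacks_needed = math.ceil(rack_units / max_stack_members)   # 2 stacks for 14 switches

print(f"{fixed_ports} fixed-switch ports across {stacks_needed} stacks "
      f"vs. {chassis_copper_ports} copper ports in a single 14RU chassis")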

A side note on 10-Gig connectivity: the Nexus 5548UP offers 32 10GbE ports, plus 16 more with the optional expansion module, in a 1RU form factor. Compared to the 6509, that's over 600 10GbE ports in a 14RU space. While it may be unfair to compare a newer Nexus series switch directly to a 6500 chassis, the comparison is purely for the discussion of form factor differences. A quick look at the Nexus 7009 chassis shows similar 10GbE density to the 6509, with increasing densities on the larger chassis.

At the end of the day, fixed-configuration switches (generally speaking) pack more ports into the same amount of rack space than chassis switches do.

Interface Selection

Typically, fixed-configuration switches are copper-based FastEthernet and/or GigE. Exceptions again exist, such as the Nexus 5500's (Unified Ports models) being able to run their interfaces as Ethernet or native Fibre Channel. On your fixed-config'ers, you'll also typically have higher-speed uplink ports that support optics such as SFP/SFP+, GBIC, XENPAK and X2's. And of course, there are SFP-based flavours that give you all-fiber ports, if that's what tickles your fancy.

On your chassis-based switches, you'll see that you have a wider choice of interface types. You'll have your 10GbE line cards, which can take a combination of SFP/SFP+, XENPAK (older), X2, RJ45 and GBIC transceivers. You can also make use of DWDM/CWDM modules and transceivers for your single-mode long-range runs. Also, with 40 Gigabit Ethernet becoming more relevant and more widely deployed, QSFP+ and CFP connectivity is an option as well (if the chassis in question can properly support it). The only restriction on a chassis-based switch is which line cards you have to work with.

Due to the nature of fixed switches, it's natural that a chassis built from modular line cards offers more interface selection.

Performance

Here’s where things become a little more subtle. Again, for simplicity’s sake, I’ll restrict this section to the following models of switches:

  • Catalyst 6509 Chassis w/ Supervisor 720
  • Catalyst 3750-X 48-port Gigabit Switch
  • Nexus 5548UP Switch

Let's start with the fixed switches. The fixed-configuration Catalyst 3750-X switches are equipped with a 160 Gbps switch fabric, which should be ample capacity for line-rate gigabit speeds. Let's also not forget that fabric throughput is not the only measure of performance: these switches, and most similarly designed fixed-configuration "access" switches, have smaller shared port buffers, which can become problematic with very bursty traffic.

On the Nexus 5548UP's, we see a different story. Being a 10GbE switch with a 960 Gbps fabric, the 5548 naturally has much higher performance than its LAN cousins. Port buffers on the Nexus 5500's are dedicated per port (640KB), allowing these switches to handle bursts very easily.

The Catalyst 6509, being a modular chassis-based switch, bases its performance on the supervisor modules in use, as well as the specific line card + daughter card combination. For simplicity's sake, let's assume a Supervisor 720 (since the switch fabric is located on the supervisor in these switches, that's 720 Gbps of switching capacity across the entire chassis) and WS-X6748-GE-TX 48-port Gigabit Ethernet modules. Due to the hardware architecture of these chassis, each slot is constrained to 40 Gbps of capacity, so slight over-subscription will occur on 48x1Gbps line cards (see the quick calculation below). Luckily, each port is given a 1.3MB buffer, so bursty traffic is handled just fine on these line cards. When using DFC daughter cards, line cards will even handle their own local switching, so traffic switched port-to-port on the same line card isn't constrained by the 40Gbps-per-slot restriction, because those packets never need to traverse the backplane. May I reiterate: for these kinds of switches, do your homework. The performance of a chassis-based switch depends on more factors, due to the combination of supervisor, switch fabric and line cards in use.
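
As a quick sanity check on that per-slot over-subscription (assuming the 40 Gbps-per-slot figure quoted above and a 48 x 1GbE card without a DFC):

# Over-subscription of a 48 x 1GbE line card into a 40 Gbps fabric channel.
line_card_ports = 48
port_speed_gbps = 1
slot_capacity_gbps = 40

ratio = (line_card_ports * port_speed_gbps) / slot_capacity_gbps
print(f"Over-subscription: {ratio:.1f}:1")   # 1.2:1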

Redundancy/HA

One of the most obvious benefits of a chassis-based switch is redundant, highly-available hardware. The Catalyst 6500 is typically deployed with two supervisor modules. By having two supervisors, you're protected against failures in the data plane (thanks to redundant active/standby switch fabrics), the control plane (NSF/SSO), as well as the management plane (IOS is loaded on both supervisors, so the box can continue to operate even when the active Sup fails). Chassis switches also utilize redundant power supplies to protect against electrical failures.

On the other side of the coin, you have fixed configuration switches. While some newer models do have redundant power supplies, none of them use separate modules or components for the data, control or management planes. They rely on an on-board switch fabric for forwarding and a single CPU for running the IOS image and control plane protocols.

The chassis is the clear winner here.

Services

Let me start this section by explaining what I mean by "services". These are features considered additional to basic switching: support for things such as partial and/or full Layer 3 routing, MPLS, service modules such as firewalls and wireless LAN controllers (WLCs), and other extras you may find on these platforms.

I think it's safe to say that, generally speaking, fixed configuration switches are simple devices. As with the Catalyst line, with the right software you can utilize full Layer 3 routing and most of the usual routing protocols such as OSPF, EIGRP and BGP. Keep in mind, however, that you will often be limited by TCAM capacity for IP routes, so forget about full BGP tables and the like. Still, they get the job done. One great benefit is support for Power over Ethernet (PoE), which is usually standard on copper-based fixed configuration switches.

The Nexus 5500's are a bit of an exception. On the one hand, out of the box they are Layer 2 only devices and can only support (albeit limited) Layer 3 with an expansion module. On the other hand, with Unified Ports they also support native Fibre Channel, along with modern NX-OS software. I would say that, for specific use cases, the compromise is quite reasonable. I'll elaborate on that in my next post.

The Catalyst 6500 is the champion of services. Being a modular chassis, Cisco developed many modules for it that were specifically designed with services in mind. These include the Firewall Services Module (FWSM), Wireless Services Module (WiSM) controllers, the SSL VPN module, and the ACE/CSM load balancers. While these modules have fallen out of favour due to performance constraints of the chassis itself, as well as a lack of development interest and feature parity with their standalone counterparts, the fact of the matter is that there are still many Cisco customers in the field with these in place. The WiSM, for example, is essentially a WLC 4400 on a blade. Being able to conveniently integrate that directly into a chassis saves rack space as well as ports, since it uses the backplane directly to communicate with the wired LAN. Other services supported on the 6500 from a software standpoint include Virtual Switching System (VSS) (with a VS-SUP720 or SUP2T), full Layer 3 routing with large TCAM, MPLS and VPLS support (with the proper line cards), and PoE.

The chassis wins this category thanks to its modular "swiss army knife" design.

Cost

I'll just briefly mention the cost comparison between the two configurations. You will typically see a chassis-based switch eclipse a fixed configuration switch in cost, due to the complexity of the hardware design as well as all the modularity that comes with a chassis. You'll always have a long laundry list of parts to purchase in order to build out a chassis-based switch, including the chassis itself, power supplies, supervisors, line cards and daughter cards (if applicable). Fixed configuration switches typically have a much lower cost of entry, with only limited modularity on certain platforms.

And there you have it. Hopefully this gives you some insight into why you might use one form factor over the other. In the next part, I'll provide some use-case examples for each and where you may typically deploy one versus the other.

On preventing burn out and spreading yourself too thin

Networking is full of what I like to call "rabbit holes". You start looking into a technology or a solution and before you know it, you've lost hours poring over white papers, best-practice design guides, sample configurations, blog posts and labs. There are a lot of pieces that make up the networks we work with daily, from QoS to routing, switching, WAN, hardware architectures, protocols…the list goes on and on. Depending on your role in your organization, you could be working with a few technologies and platforms very intimately, or you could be spread across multiple parts of the overall infrastructure.

Working for a VAR that carries multiple vendors, I sometimes find it difficult to strike a middle ground between knowing enough about the equipment and environments we support to solve our customers' problems, keeping up with what's coming from the vendors, and getting the expert knowledge I crave. In the last year alone, I've touched probably everything under the sun from our lovely vendors, including data center gear, wireless, security and SP (only missing voice). While I certainly know a lot more than I did a year ago, I also find that I'm unable to really dive into any one part in particular.

Like I said, it's highly dependent on your role in your organization. I'm sure there are a lot of folks out there who would love to get away from the hundreds of ASA's they support, or the 6500's that are still chugging along in their campus core (*shudder*) and wiring closets, to get their hands on something new. A compromise between the two extremes is, in my opinion, the sweet spot.

I'm constantly challenged, engaged on new projects and new solutions, realizing customer goals and solving complex technical issues. I just warn my fellow networking colleagues that it's very easy to spread yourself too thin. I find myself having to stop myself from sticking my nose into every new thing that comes into the office, just so I can focus on what's on my plate.

Don't get me wrong, I'm loving the challenge and wouldn't want to work in any other part of our IT industry. I just want to avoid being that "jack of all trades, master of none". With all the new technologies coming out (especially in the data center), you've got to keep your head above water and not drown in all the stuff that ties everything together.

Maybe it's time for a change of pace, or at least a change in attitude. I'm currently back reviewing my R&S to possibly put myself on the coveted path of the CCIE lab (I actually had a dream/nightmare about getting thrown into the lab exam…it was exciting but terrifying at the same time). I just hope I don't spread myself so thin that I burn out. I'm sure we've all been there at some point or another.

I'd love to hear your thoughts on this, so please leave a comment or send a tweet my way. In the meantime, keep plumbing!

PS: I love Ivan’s post about knowledge and complexity. Given the nature of this post, I find it rings true to home/work a lot. Great advice from Mr. Pepelnjak as always.

Junos public/private key SSH authentication

Hi Everyone,
Just a quick one today. I was reconfiguring my lab SRX for direct SSH access and, in the interest of security, wanted to use RSA public/private keys for authentication. I did my usual key generation using puttygen (sorry guys, Windows user here), copied the OpenSSH authorized_keys-style public key string that Junos uses, applied it to the user of my choice and off I went…or so I thought. Here was my initial configuration:

[edit]
admin@LabSRX# show system login
user admin {
    uid 2002;
    class super-user;
    authentication {
        encrypted-password "<plaintext passwd hash>"; ## SECRET-DATA
        ssh-rsa "ssh-rsa <key data>"; ## SECRET-DATA
    }
}

Seems simple enough. However, when I went to login using the private key that I had just created for this public key pair, my SRX complained:

Using username "admin".
Authenticating with public key ""
Server refused public-key signature despite accepting key!

Huh? I could've sworn that pair was correct. I tried generating another pair, just to be sure, but the SRX still didn't want to accept it.

After fiddling with the SSH protocol version and other non-related parameters, I logged into one of my work’s lab SRX’s to see if anyone was using RSA there.

Lo and behold, I had forgotten the one part of the key string needed to authenticate with it: appending the user name to the public key string:

admin@LabSRX# show system login
user admin {
    uid 2002;
    class super-user;
    authentication {
        encrypted-password ...
        ssh-rsa "ssh-rsa <key data> admin"; ## SECRET-DATA
    }
}
[edit system login user admin]
admin@LabSRX# commit
commit complete

After my commit, I was able to use my private key to authenticate to the SRX.

You can have puttygen append the username for you via the "Key comment" field.

I did some digging around but couldn’t find any mention of this in the Junos documentation. My guess is that OpenSSH includes the username when using ssh-keygen in Linux/Unix. Regardless, just something I’ll have to remember when doing this again.
