“Glue addresses” in networking

Imagine this scenario.  You have bought an IP or DIA circuit from a provider that is going to supply your network with bandwidth.  Typically this company will make the connection, IP-wise, over a /30 or even a /29 of IP space.  I have called this the “glue address” for many years.  It is the IP address that binds you (hence the glue reference) to the other provider’s network. They can route IP blocks to you over that glue address, or you can establish BGP across it, but either way it is the static address that binds the two networks together.
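
To make the idea concrete, here is a small Python sketch using the standard ipaddress module. The 198.51.100.0/30 block and the side assignments are made-up examples of how a provider might hand out a glue address on a point-to-point link.

import ipaddress

# Hypothetical /30 a provider might assign for the point-to-point "glue" link.
glue_block = ipaddress.ip_network("198.51.100.0/30")

# A /30 has exactly two usable host addresses: one for each side of the link.
provider_side, customer_side = list(glue_block.hosts())

print(f"Glue block:      {glue_block}")        # 198.51.100.0/30
print(f"Provider router: {provider_side}")     # 198.51.100.1
print(f"Customer router: {customer_side}")     # 198.51.100.2

# Any routed blocks or BGP sessions are then pointed at these static addresses.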

Some network folks call this a peering address.  That isn’t wrong, but it can imply you are doing BGP peering across the address, and you aren’t always running BGP across the glue address.

#routinglight #packetsdownrange

Noction: BGP in Large Networks

Are you running a large-scale BGP network? Need some tips on what to optimize and what your next steps should be? Topics covered include:

Using iBGP with loopback addresses
Making sure all routers know next hop and loopback addresses
Whether to use route reflectors rather than an iBGP full mesh
Where to originate prefixes
Where and how to filter announcements

Using BGP in large-scale networks and how to get the most out of it. Paper by Noction.

How I learned to love BGP communities and so can you

This content is for Patreon subscribers of the j2 blog. Please consider becoming a Patreon subscriber for as little as $1 a month. This helps to provide higher quality content, more podcasts, and other goodies on this blog.

BGP Confederations

In network routing, a BGP confederation is a method of using Border Gateway Protocol (BGP) to subdivide a single autonomous system (AS) into multiple internal sub-ASes while still advertising as a single AS to external peers. This is done to reduce the number of iBGP sessions that a full mesh would otherwise require inside the AS.  If you are familiar with breaking OSPF domains up into areas, BGP confederations are not that much different, at least conceptually.

And, much like OSPF areas, confederations were born when routers had less CPU and less RAM than they do in today’s modern networks. MPLS has superseded the need for confederations in many cases. I have also seen organizations with different policies and different admin teams break their larger networks up into confederations.  This lets each group go its own direction with routing policies and the like.
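
As a rough illustration of the “looks like one AS from the outside” behavior, here is a Python sketch. The AS numbers are made up; the point is that confederation sub-AS segments only exist inside the confederation and are removed when a route is advertised to an external peer (per RFC 5065).

# Hypothetical confederation: public AS 64500 split into private sub-ASes.
CONFED_ID = 64500
SUB_ASES = {65001, 65002, 65003}

def advertise_to_external_peer(as_path):
    """Strip confederation (sub-AS) hops and prepend the confederation ID,
    roughly what a border router does when announcing to an eBGP peer."""
    external_path = [asn for asn in as_path if asn not in SUB_ASES]
    return [CONFED_ID] + external_path

# Path as seen inside the confederation: two sub-AS hops, then an outside AS.
internal_path = [65002, 65001, 64510]

print(advertise_to_external_peer(internal_path))  # [64500, 64510]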

If you want to read the RFC: https://tools.ietf.org/html/rfc5065

The problem with peering from a logistics standpoint

Many ISPs run into this problem as part of their growing pains.  This scenario usually starts happening around their third or fourth peer.

Scenario: an ISP grows beyond the single connection it has.  This can be 10 meg, 100 meg, a gig, or whatever.  The ISP starts out looking for redundancy and brings in a second provider, usually at around the same bandwidth level.  This way the network has two roughly equal paths out.

A unique problem usually develops as the network grows to the point of maxing out the capacity of both of these connections.  The ISP has to make a decision: do they increase the capacity to just one provider? Most don’t have the budget to increase capacity to both. But if you increase one, you are favoring one provider over the other until the budget allows you to grow both, so you are essentially stuck favoring one provider in order to keep up with capacity.  And if you fail over to the smaller pipe, things could be just as bad as being down.

This is where many ISPs learn the hard way that BGP is not load balancing. But what about padding, communities, local-pref, and all that jazz? We will get to that.  In the meantime, our ISP may have the opportunity to get to an Internet Exchange (IX) and offload things like streaming traffic.  Traffic returns to a little more balance because the IX connection essentially acts as a third provider. But the growing pains don’t stop there.

As ISPs, especially WISPs, have more and more resources to put toward cutting down latency, they start seeking out better-peered networks.  The next growing pain that becomes apparent is that networks with lots of high-end peers tend to charge more money.  For the ISP to buy bandwidth from these types of providers, they usually have to do it in smaller quantities. Buying this way introduces the problem of mismatched pipe sizes again, with a twist. The twist is that the more (and better) peers a network has, the more of your traffic is going to want to travel toward that network. So the more expensive peer, which you are probably buying less capacity from, now wants to handle more of your traffic.

So, the network geeks will bring up things like padding, communities, local-pref, and all the tricks BGP has.  But at the end of the day, BGP is not load balancing.  You can *influence* traffic, but BGP does not let you say “I want 100 megs of traffic here, and 500 megs here.”  Keep in mind BGP deals with routes to and from IP blocks, not with the volume of traffic itself.
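
A toy Python sketch of why this is so: best-path selection picks one exit per prefix based on attributes like local-pref and AS-path length, so prepending shifts which prefixes use which pipe, but nothing in that decision knows about megabits. All of the providers, attributes, and values below are invented for illustration, and only a small subset of the real decision process is modeled.

# Candidate paths for one prefix, one per upstream provider (values made up).
# Higher local-pref wins; ties broken by shorter AS path (a simplified subset
# of the real BGP decision process).
paths = [
    {"provider": "A", "local_pref": 100, "as_path_len": 3},
    {"provider": "B", "local_pref": 100, "as_path_len": 4},  # B prepended once
]

def best_path(candidates):
    return max(candidates, key=lambda p: (p["local_pref"], -p["as_path_len"]))

print(best_path(paths)["provider"])  # "A" -- all traffic for this prefix exits A

# There is no bandwidth term anywhere in the comparison: the prefix goes
# entirely out one provider or the other, which is influence, not load balancing.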

So, how does the ISP solve this? Knowing about your upstream peers is the first thing.  BGP looking glasses, peering reports such as those from Hurricane Electric, and general news help keep you on top of things.  Events such as new peering points, acquisitions, and new data centers can influence an ISP’s traffic.  If your equipment supports NetFlow, sFlow, or similar tools, you can begin to build a picture of your traffic and which ASNs it is going to. This is your first major step: get tooling that tells you which ASNs the traffic is going to.  You can then take this data and look at how your own peers are connected with those ASNs.  You will start to see things like “provider A is poorly peered with ASN 2906.”
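
Here is a hedged sketch of that first step: take flow records (NetFlow/sFlow exports, however you collect them) and total bytes per destination ASN. The flow records, prefixes, and ASN mappings below are invented; in practice you would feed in your collector’s export plus a prefix-to-ASN source such as a BGP table dump.

import ipaddress
from collections import defaultdict

# Invented prefix -> origin ASN mapping; in reality this comes from your BGP
# table or a public routing data set.
prefix_to_asn = {
    ipaddress.ip_network("203.0.113.0/24"): 2906,    # the example ASN from above
    ipaddress.ip_network("198.51.100.0/24"): 64496,  # made-up documentation ASN
}

# Invented flow records: (destination IP, bytes) as a collector might export.
flows = [
    ("203.0.113.45", 1_200_000),
    ("203.0.113.99", 800_000),
    ("198.51.100.7", 300_000),
]

bytes_per_asn = defaultdict(int)
for dst, nbytes in flows:
    dst_ip = ipaddress.ip_address(dst)
    for prefix, asn in prefix_to_asn.items():
        if dst_ip in prefix:
            bytes_per_asn[asn] += nbytes
            break

# Rank destination ASNs by volume; this is the picture you compare against
# how each of your upstreams is peered.
for asn, total in sorted(bytes_per_asn.items(), key=lambda kv: -kv[1]):
    print(f"AS{asn}: {total / 1e6:.1f} MB")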

Once you know who your peers are and have a good feel for their peering, you can influence your traffic.  If you know you don’t want to send traffic destined for ASN 2906 in or out via provider A, you can then start to implement AS padding and all the tricks we mentioned before.  But you need the bigger picture before you can do that.

One last note. Peering is dynamic.  You have to keep on top of the ecosystem as a whole.

Mikrotik releases some new certifications

Mikrotik has released some new certifications. https://mikrotik.com/training/about

  • MTCNA – MikroTik Certified Network Associate
  • MTCRE – MikroTik Certified Routing Engineer
  • MTCWE – MikroTik Certified Wireless Engineer
  • MTCTCE – MikroTik Certified Traffic Control Engineer
  • MTCUME – MikroTik Certified User Management Engineer
  • MTCIPv6E – MikroTik Certified IPv6 Engineer
  • MTCINE – MikroTik Certified Inter-networking Engineer
  • MTCSE – MikroTik Certified Security Engineer


Are you ready for 768k day?

As the global routing table grows, routers need more and more memory to hold it. Most routers use what is called TCAM to hold routing tables. TCAM is much faster than normal RAM, which makes it ideal for looking up routes in large tables.  However, TCAM is generally viewed as a more expensive type of memory.

According to the CIDR Report, the global routing table as of April 15, 2019 was 772,711 routes. “But Justin, you are warning me about 768,000 routes.  This is more than that already.”  The short answer is that many providers attempt to do some sort of aggregation on prefixes, which shrinks this number.
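
The standard library can illustrate the aggregation point: collapse_addresses() merges adjacent prefixes into covering routes, which is roughly what providers do before the counts above. The prefixes here are made-up examples, not real announcements.

import ipaddress

# Four adjacent /24s (made-up example) that an origin could announce separately.
routes = [ipaddress.ip_network(f"198.18.{i}.0/24") for i in range(4)]

aggregated = list(ipaddress.collapse_addresses(routes))

print(len(routes), "routes before aggregation")   # 4
print(len(aggregated), "after:", aggregated)      # 1 after: [198.18.0.0/22]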

So why is this number important? In 2014 a similar situation arose, called “512k day”.  Many vendors released patches and advisories recommending folks raise their limit to 768,000 routes.  Why not just raise it to a million, you say? Remember that TCAM is expensive, so it’s not like normal RAM.  As more and more folks run IPv6, it takes away TCAM and allocates it to the IPv6 routing table. In the “old days” we just had to worry about one IPv4 table taking up the TCAM. Now we have an ever-growing IPv6 table, which takes up memory as well. 768,000 was recommended by many vendors as a decent tradeoff in memory utilization.

Many experts do not expect this 768k day to be as service-impacting as 512k day was.  Firmware updates, newer hardware, and increased operator awareness all help.  However, there is a bunch of older hardware out there. One of the biggest concerns is the TCAM in the Cisco 6500 and 7600 routing platforms. These platforms simply do not have more memory to allocate.

If you own a 6500/7600 platform and are taking in full routes, there are a few things you can do to help mitigate this. Obviously upgrading hardware is a choice, but not everyone can do that.  One method of dealing with this is to receive a default route from your upstream providers in addition to the full routes.  If you do this, you can filter out /24 routes and shrink the routing table your router has to keep track of.  Anything destined for a /24, which won’t be in the routing table at that point, will follow the default route.  You won’t have as much control over your routes to those destinations, but at least your router won’t be puking on itself.
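
As a rough sketch of what that filter buys you, here is a Python estimate: drop everything with a /24 (or longer) prefix length and see how much of the table disappears, with anything filtered implicitly falling to the default route. The example table is tiny and invented; the same arithmetic against a real table dump would size it for your own router.

import ipaddress

# Tiny invented "full table"; a real one would come from a RIB dump.
full_table = [
    ipaddress.ip_network(p)
    for p in ("203.0.113.0/24", "198.51.100.0/24", "192.0.2.0/24",
              "198.18.0.0/15", "100.64.0.0/10")
]

# Keep only prefixes shorter than /24; filtered destinations follow 0.0.0.0/0.
kept = [p for p in full_table if p.prefixlen < 24]
dropped = len(full_table) - len(kept)

print(f"Kept {len(kept)} routes, dropped {dropped} /24s to the default route")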

 

As more and more smaller ISPs buy /24 allocations on the secondary market, we will see this problem increase.  IPv4 is not going away. Smaller ISPs are buying blocks to service their growing customer base and can’t afford to buy large allocations all at once.  So now we are seeing ISPs end up with four or five /24s that cannot be aggregated down the way larger blocks could be in the past.