FD-IX: Local-pref and default routes

I just finished up an article over on the FD-IX blog about local-prefs, default routes, and Internet exchanges.

https://www.fd-ix.com/uncategorized/local-pref-and-default-routes/

Not everyone on the Internet needs full feeds from their providers. In that case, how does learning routes from an Internet Exchange such as FD-IX benefit you if all you are taking is a default route?

So let’s take a scenario. You are a local hosting company. You don’t provide Internet to customers; you just host websites and data. You have a couple of providers you are buying Internet from, mainly for redundancy. One of these is primary and the other is a backup. You are doing BGP just because. All you are receiving from these providers is a default route, and that is it. Why would you want to receive all these routes from an IX?
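To make that concrete, here is a minimal Python sketch (not real BGP code) of why IX routes still matter when all you take from transit is a default: longest-prefix match means a specific prefix learned from an IX peer always beats 0.0.0.0/0, and local-pref then decides between the primary and backup defaults. The prefixes, local-pref values, and next hops below are made up purely for illustration.

```python
# Toy model of a routing table: longest-prefix match first,
# then higher local-pref among equal-length prefixes.
import ipaddress

# (prefix, local_pref, description of next hop)
rib = [
    ("0.0.0.0/0",      200, "transit A (primary default)"),
    ("0.0.0.0/0",      100, "transit B (backup default)"),
    ("203.0.113.0/24", 300, "peer learned at the IX (example prefix)"),
]

def best_path(dst):
    dst_ip = ipaddress.ip_address(dst)
    candidates = [
        (ipaddress.ip_network(p), lp, hop)
        for p, lp, hop in rib
        if dst_ip in ipaddress.ip_network(p)
    ]
    # Longest prefix wins; ties go to the higher local-pref
    # (a simplified slice of BGP best-path selection).
    return max(candidates, key=lambda c: (c[0].prefixlen, c[1]))

print(best_path("203.0.113.10"))  # takes the IX peer, not the default
print(best_path("198.51.100.5"))  # falls back to the primary default
```

In other words, the IX routes quietly peel traffic to your peers off the transit defaults, and your local-pref policy for the two defaults keeps working exactly as before.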

What is routing? (From MANRS)

The Internet has over 68,000 publicly visible networks, which means it’s impractical to know about the existence of every other network or how they’re connected. Networks can also appear and disappear, whilst connections are constantly coming and going due to various faults and reconfigurations. This makes it too complex to take manual decisions about how to route packets across the Internet.

Hurricane Electric Route Filtering Algorithm

The following is from http://routing.he.net/algorithm.html. It outlines the criteria HE.net uses for filtering routes from peers and customers; a simplified sketch of the step-3 rejection tests follows the list.

This is the route filtering algorithm for customers and peers that have explicit filtering:

1. Attempt to find an as-set to use for this network.
1.1 Inspect the aut-num for this ASN to see if we can extract from their IRR policy what they would announce to Hurricane by finding export or mp-export to AS6939, ANY, or AS-ANY.
1.2 Also see if they set what looks like a valid IRR as-set name in peeringdb.

2. Collect the received routes for all BGP sessions with this ASN. This includes both accepted and filtered routes.

3. For each route, perform the following rejection tests:
3.1 Reject default routes 0.0.0.0/0 and ::/0.
3.2 Reject paths using BGP AS_SET notation (i.e. {1} or {1 2}, etc). See draft-ietf-idr-deprecate-as-set-confed-set.
3.3 Reject prefix lengths less than minimum and greater than maximum. For IPv4 this is 8 and 24. For IPv6 this is 16 and 48.
3.4 Reject bogons (RFC1918, documentation prefix, etc).
3.5 Reject exchange prefixes for all exchanges Hurricane Electric is connected to.
3.6 Reject routes that have RPKI status INVALID_ASN or INVALID_LENGTH based on the origin AS and prefix.

4. For each route, perform the following acceptance tests:
4.1 If the origin is the neighbor AS, accept routes that have RPKI status VALID based on the origin AS and prefix.
4.2 If the prefix is an announced downstream route that is a subnet of an accepted originated prefix that was accepted due to either RPKI or an RIR handle match, accept the prefix.
4.3 If RIR handles match for the prefix and the peer AS, accept the prefix.
4.4 If this prefix exactly matches a prefix allowed by the IRR policy of this peer, accept the prefix.
4.5 If the first AS in the path matches the peer and path is two hops long and the origin AS is in the expanded as-set for the peer AS and either the RPKI status is VALID or there is an RIR handle match for the origin AS and the prefix, accept the prefix.

5. Reject all prefixes not explicitly accepted.
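For illustration, here is a rough Python sketch of what the step-3 rejection tests boil down to. The bogon and exchange-prefix lists are abbreviated stand-ins, the RPKI status is passed in as a plain string, and an AS_SET is modeled as a Python set inside the path; this is a simplified model of the logic above, not Hurricane Electric's actual code.

```python
import ipaddress

BOGONS = [ipaddress.ip_network(p) for p in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",  # RFC1918
    "192.0.2.0/24",                                    # documentation prefix
)]
# Documentation prefix standing in for an IX peering LAN.
IX_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]

def rejection_reason(prefix, as_path, rpki_status):
    """Return why a route is rejected in step 3, or None if it survives."""
    net = ipaddress.ip_network(prefix)

    if net.prefixlen == 0:
        return "3.1 default route"
    if any(isinstance(hop, (set, frozenset)) for hop in as_path):
        return "3.2 AS_SET in path"
    lo, hi = (8, 24) if net.version == 4 else (16, 48)
    if not lo <= net.prefixlen <= hi:
        return "3.3 prefix length out of range"
    if any(net.version == b.version and net.subnet_of(b) for b in BOGONS):
        return "3.4 bogon"
    if any(net.version == x.version and net.subnet_of(x) for x in IX_PREFIXES):
        return "3.5 exchange prefix"
    if rpki_status in ("INVALID_ASN", "INVALID_LENGTH"):
        return "3.6 RPKI invalid"
    return None  # survives step 3; steps 4 and 5 decide acceptance

print(rejection_reason("0.0.0.0/0", [64500], "VALID"))              # default route
print(rejection_reason("192.168.10.0/24", [64500], "VALID"))        # bogon
print(rejection_reason("203.0.113.0/28", [64500], "VALID"))         # too specific
print(rejection_reason("203.0.113.0/24", [64500, {64511}], "VALID"))  # AS_SET
print(rejection_reason("203.0.113.0/24", [64500], "VALID"))         # passes step 3
```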

Don’t try this at home, kids: Automated BGP Optimization

https://radar.qrator.net/blog/as10990-routing-optimization-tale
Conclusion? Do not try to optimize routes with automated software. BGP is a path-vector routing protocol that has proven, throughout the years, its ability to handle the traffic. Software that tries to “optimize” a system involving thousands of members will never be smart enough to compute all the possible outcomes of such manipulation.

Preseem and switches in a switch-centric design

Anyone who follows me knows I am a big fan of switch-centric designs. This usually involves a router-on-a-stick paired with a high-port-count switch. Recently I had a client that installed a Preseem appliance in their network.

Equipment used in this setup
-Dell R710 with a 4-port SFP+ card running Preseem
-Cisco 3064-X 48-port switch
-Maxxwave Vengeance router with a dual QSFP+ card and a 4-port SFP+ card

A Visio diagram of how this looks:

We have two transport links coming into the switch on the left. These are dumped into VLANs 506 and 507. We then come out of the switch into the Preseem box via two SFP+ ports, one for each VLAN. In this case we just used DAC cables. In the future, we can turn these into trunk ports to pass more VLANs through.

The data then leaves the Preseem box over dual SFP+ fibers directly into the router’s SFP+ ports. If the Preseem appliance hardware fails, we have a secondary OSPF/iBGP bypass path from the router’s 40 Gig QSFP+ port down to the switch.
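To make the bypass idea concrete, here is a toy Python sketch that assumes the direct 40 Gig link is given a higher OSPF cost than the two-hop path through the Preseem appliance, so it carries no traffic until the appliance path fails. The link names and costs are assumptions for illustration, not the client's actual configuration.

```python
# (path name, links the path depends on, assumed OSPF cost)
PATHS = [
    ("switch -> Preseem -> router (inline)", ["sw-preseem", "preseem-rtr"], 10),
    ("switch -> router (40G bypass)",        ["sw-rtr-qsfp"],               100),
]

def active_path(failed_links):
    # A path is usable only if none of its links have failed;
    # the lowest-cost usable path carries the traffic.
    usable = [p for p in PATHS if not set(p[1]) & failed_links]
    return min(usable, key=lambda p: p[2])[0] if usable else "no path"

print(active_path(set()))            # normal: traffic rides through Preseem
print(active_path({"preseem-rtr"}))  # appliance dies: the 40G bypass takes over
```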

If you start flowing more than 10 Gigs through a single link, you can add more SFP+ ports to your appliance or move up to a 40 Gig QSFP+ card, then link the appliance to the spare QSFP+ port on your router.

Siklu 80 GHz case study: Indianapolis, Indiana

Some photos from a Siklu 80 GHz deployment in downtown Indianapolis, Indiana. This was deployed by On-Ramp Indiana (https://www.ori.net). The problem being solved is moving video files around a network in order to get them to smart screens and projectors. This is a very urban area, and wireless was pretty much the only option to get from building to building.

Siklu 80 GHz was on the shortlist due to the distances involved. Another consideration was the footprint of the equipment, which had to be as low-profile as possible.

Another requirement for this network was the ability to move traffic around at Layer 2, since not all traffic in this type of network is IP-based.

Equipment used
EtherHaul-1200FX
https://www.siklu.com/product/etherhaul-kilo-series/

Right above the observation windows, you can see the Siklu just to the right of the center corner.

Some technical details

Average traffic over the past 2 months

As you can see, traffic is reasonably consistent in the 80-100 Mbps range. We needed a solution that did not slow down due to interference. With potentially tens of thousands of visitors to this attraction in a weekend, reliability and performance were critical. When this was installed we did not know about COVID, but this is an attraction people can enjoy from their cars while social distancing. That use added to the visibility of the attraction, making the reliability even more crucial.

Articles about the finished product
https://www.wthr.com/article/news/local/monument-circle-get-new-light-show-time-holidays/531-ef1819ca-5f27-4886-9283-17e481c33f39

https://www.wthr.com/article/news/local/new-light-show-sound-system-entertain-monument-circle-visitors/531-576ce095-501c-41c6-913a-518a0cc05779

On-Ramp Indiana contact: www.ori.net, 317.774.2100