Gonna try these out for mounting equipment in a rack with square holes. www.rackstuds.com
As some of you may have heard, Mikrotik has added VXLAN support in the latest RouterOS 7 beta. What is VXLAN, and how would service providers use it? Let's start with some broad information about VXLAN.
The always-interesting RFC reads:

This document describes Virtual eXtensible Local Area Network (VXLAN), which is used to address the need for overlay networks within virtualized data centers accommodating multiple tenants. The scheme and the related protocols can be used in networks for cloud service providers and enterprise data centers.
Boil it down for me. What is VXLAN?
In short, VXLAN allows you to create a layer2 network on top of a layer3 network. It binds separate layer2 domains together and makes them look like one. If you are thinking this sounds like a GRE tunnel, you are on the right track, except that with tunnels the layer2 domains remain separate. VXLAN is mainly touted as a way to interconnect data centers. If you are having to stretch spanning-tree between sites, VXLAN is an answer.
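To make the "layer2 on top of layer3" idea concrete, here is a minimal Python sketch of the VXLAN encapsulation described in RFC 7348: the original layer2 frame is prefixed with an 8-byte VXLAN header and carried inside an outer UDP/IP packet between endpoints (VTEPs). The 64-byte dummy frame and VNI of 10 are made-up values for illustration only.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags word with the I bit set,
    then the 24-bit VNI followed by a reserved byte."""
    flags = 0x08000000          # "I" flag: the VNI field is valid
    return struct.pack("!II", flags, vni << 8)


def decode_vni(header: bytes) -> int:
    """Recover the VNI from a received VXLAN header."""
    _, second_word = struct.unpack("!II", header)
    return second_word >> 8


inner_frame = b"\x00" * 64                      # stand-in for an original layer2 frame
udp_payload = vxlan_header(vni=10) + inner_frame  # what rides inside the outer UDP/IP packet
```

The receiving VTEP strips the outer headers, reads the VNI to pick the right layer2 segment, and delivers the inner frame as if both hosts shared a switch.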
Okay, but why not use tunnels or MPLS?
VXLAN allows you to accomplish what GRE does without having to change the network design, and it lets standalone layer2 domains talk to each other. With the tunnel approach, you have to do a lot of manual configuration.
Is this just a data center thing?
VXLAN was designed to solve many edge computing and hyper-scale computing issues. Imagine having compute nodes in different parts of a data center, or even in different data centers, that you want on the same VLAN. With GRE you could extend that VLAN, but with VXLAN you can have two standalone layer2 VLANs that are merged together. VXLAN also removes the 4096-VLAN limit, which is important in hyper-scale cloud computing.
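The scale difference comes straight from the header fields: a 802.1Q VLAN ID is 12 bits, while the VXLAN Network Identifier (VNI) is 24 bits. The arithmetic:

```python
# 802.1Q VLAN ID is 12 bits; the VXLAN VNI is 24 bits (RFC 7348).
vlan_ids = 2 ** 12   # 4096 possible VLANs
vnis = 2 ** 24       # 16,777,216 possible VXLAN segments ("16 million")
```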
VXLAN benefits in a nutshell
- Increases layer2 segments to 16 million
- Centralized control
VXLAN downsides in a nutshell
- Multicast must be available
- More overhead per layer2 packet
- No built-in encryption
- Slow adoption of IPv6 support by open-source implementations
What about the service provider? How can I use this?
In a service-provider network, you have things like broadcast issues. Basically, bridging is bad, and your layer2 networks need to be contained. Imagine you are a service provider offering LTE services. You may have an LTE VLAN on your network. Historically, you would have to extend that VLAN across the network in order to do management and reach your LTE core. Now you have a large broadcast domain across your entire network. Or worse yet, you have tunnels to other cities or locations that are not physically connected to your network, and those tunnels are now part of your LTE VLAN. MTU issues and other headaches become part of your life.
With VXLAN, each LTE node can have its own layer2 VLAN but still talk to the others. This contains the broadcast storms that can occur.
Another use for VXLAN is as a way for managed service providers to deploy large-scale networks beyond the 4096-VLAN limit. You could literally deploy thousands of layer2 segments to tenants.
Why should I, or should I not, care about VXLAN as a service provider?
If you just have a couple of layer2 networks to extend across your network, VXLAN is not for you. However, VXLAN does allow multipath routing and other protocols to be extended to remote networks.
VXLAN adds 50+ bytes of overhead to the layer2 frame. In many service provider networks this is not an issue, since the MTU has already been raised for MPLS and the like. IP multicast must be extended across the entire network, and MAC addresses are used to build a distribution network across all of the routed layer2 domains.
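Here is where the "50+ bytes" comes from, assuming an IPv4 underlay with no outer VLAN tag (an outer 802.1Q tag or an IPv6 underlay pushes the number higher — hence the "+"):

```python
# VXLAN encapsulation overhead on an untagged IPv4 underlay, per RFC 7348.
outer_ethernet = 14  # outer MAC header (18 if the underlay adds an 802.1Q tag)
outer_ipv4 = 20      # outer IPv4 header (40 for IPv6)
outer_udp = 8        # outer UDP header
vxlan = 8            # VXLAN header itself
overhead = outer_ethernet + outer_ipv4 + outer_udp + vxlan  # 50 bytes

# Underlay MTU needed to carry a full 1500-byte tenant frame without fragmentation:
required_mtu = 1500 + overhead
```

This is why a network already running jumbo or MPLS-sized MTUs absorbs VXLAN easily, while a strict 1500-byte underlay forces fragmentation or a reduced tenant MTU.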
Large service providers have started looking at segment routing to solve many of the issues I talk about, and this is causing them to gravitate toward EVPN. EVPN uses BGP for the control plane and MPLS for the data plane. More on this coming soon.
In closing, VXLAN is an ultra-cool technology and has use cases for service providers. Other methods also exist to solve these issues in the service provider world. For those of you looking to learn all you can, I will be posting a list of links for my Patreon folks.
I have a few nitpicky things, and the video seems a little contrived, but it's decent nonetheless. WISPs are not really mentioned, but neither are some other types of providers.
We like to refer to Indianapolis, Indiana as an “NFL City” when explaining the connectivity and peering landscape. It is not a large network presence like Chicago or Ashburn but has enough networks to make it a place for great interconnects.
At the heart of Indianapolis is the Indy Telcom complex. www.indytelcom.com (currently down as of this writing). This is also referred to as the “Henry Street” complex because West Henry Street runs past several of the buildings. This is a large complex with many buildings on it.
One of the things many of our clients ask about is getting connectivity from building to building on the Indy Telcom campus. Lifeline Data Centers ( www.lifelinedatacenters.com ) operates a carrier hotel at 733 Henry. With at least 30 on-net carriers and access to many more, 733 is the place to go for cross-connect connectivity in Indianapolis.

We have been told by Indy Telcom that the conduits between the buildings on the campus are 100% full. This makes connectivity challenging at best when going between buildings. The campus has lots of space, but the buildings are islands if you wish to establish dark fiber cross-connects between them. Many carriers have lit services, but due to the way many carriers provision things, getting a strand, or even a wave, is not possible. We do have some options from companies like Zayo or Lightedge for getting connectivity between buildings, but it is not like Chicago or other big data center markets.

However, there is a solution for those looking to establish interconnections. Lifeline also operates a facility at 401 North Shadeland, referred to as the EastGate facility. This facility is built on 41 acres, is FedRAMP certified, and has a bunch of features. A dark fiber ring runs between 733 and 401, which is ideal for folks looking for both co-location and connectivity: servers and other infrastructure can be housed at EastGate while connectivity is pulled from 733. This solves the 100% full conduit issue at Indy Telcom. MidWest Internet Exchange ( www.midwest-ix.com ) is also on-net at both 401 and 733.
MidWest-IX is also on-net at 365 Data Centers ( http://www.365datacenters.com ) at 701 West Henry. 365 has a national footprint, operating data centers in Tennessee, Michigan, New York, and other states, and thus draws somewhat different clients than some of the other facilities. MidWest has dark fiber over to 365 in order to bring them onto its Indy fabric.
Another large presence at Henry Street is Lightbound ( www.lightbound.com ), which has a couple of large facilities. According to PeeringDB, only three carriers are in their 731 facility; however, their web-site claims 18+ carriers across their facilities without naming them.
I am a big fan of PeeringDB for knowing who is at which facilities, where peering points are, and other geeky information. Many of the facilities in Indianapolis are not listed on PeeringDB. Some other data centers we know about:
On the north side of Indianapolis, you have Expedient ( www.expedient.com ) in Carmel. Expedient says they have "dozens of on net carriers among all markets". There are some other data centers in the Indianapolis metro area; Data Cave in Columbus is within decent driving distance.
A great article explaining what OTV is and how it compares to VXLAN.
"OTV (Overlay Transport Virtualization) is a technology that provides layer2 extension capabilities between different data centers. In its simplest form, OTV is a new DCI (Data Center Interconnect) technology that routes MAC-based information by encapsulating traffic in normal IP packets for transit"
Great example of in-floor cabling