VXLAN terminology and what it means

What do some of the common terms used in VXLAN mean?

VNI/VNID: Virtual Network Identifier or VXLAN Network Identifier

Layer 2 VNI: carried in VXLAN-bridged packets. One VNI is configured per VLAN.

Layer 3 VNI: carried in VXLAN-routed packets across VTEPs. There is one L3 VNI per tenant (VRF). Note: tenant, VRF, and L3 VNI are sometimes used interchangeably.

VTEP: VXLAN Tunnel End Point

Performs VXLAN encapsulation/decapsulation

NVE: Network Virtualization Edge

Logical representation of the VTEP

VXLAN Gateway

Device that forwards traffic between VXLANs

Forwarding can happen at both Layer 2 and Layer 3.

Anycast Gateway

All VTEPs are configured with the same IP and MAC on a host-facing SVI

Underlay Network: Provides the transport for VXLAN

The VXLAN underlay can use OSPF, IS-IS, EIGRP, BGP, or multicast routing.
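To make the VNI concrete: a VXLAN packet carries an 8-byte header in which the VNI occupies a 24-bit field (so roughly 16 million segments are possible, versus 4096 VLANs). Here is a minimal Python sketch of building and parsing that header per RFC 7348; the function names are my own, not from any library.

```python
import struct

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    The flags byte 0x08 sets the I bit, which marks the VNI field as
    valid; all other reserved bits are zero.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    word1 = 0x08 << 24   # flags in the top byte, reserved bits zero
    word2 = vni << 8     # 24-bit VNI, followed by one reserved byte
    return struct.pack("!II", word1, word2)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

hdr = build_vxlan_header(10100)   # e.g. an L2 VNI mapped to VLAN 100
print(len(hdr), parse_vni(hdr))   # 8 10100
```

On the wire, this header sits inside a UDP datagram (destination port 4789) sent between VTEPs, with the original Ethernet frame following it.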

Cisco and Verizon demonstrate multi-haul

As Internet traffic grows and becomes more dynamic, optical transport networks for sub-sea, terrestrial long haul and metro need more capacity. The ability to deploy capacity quickly is equally important to handle the increasingly dynamic nature of the traffic. The concept of a multi-haul transport platform, as introduced by Andrew Schmitt of Cignal AI, becomes very appealing for achieving this ability to scale with speed while maintaining operational simplicity – a single platform for all requirements. A critical element of the multi-haul optical platform is the flexibility of the coherent optics to be tuned to fine granularity in order to meet the reach-capacity target of any given network.


Ethernet MTU and overhead

This content is for Patreon subscribers of the j2 blog. Please consider becoming a Patreon subscriber for as little as $1 a month. This helps to provide higher quality content, more podcasts, and other goodies on this blog.

Cisco 2960X I/O usage

While double-checking some stats on a network, I came across this in LibreNMS. 84% is usually something that would alarm me, as LibreNMS is trying to tell us.

After some research, I found the following.

While it is not documented, it was noted that this is by design and that it does not affect the switch even as the switchports become more heavily loaded.

The switch allocates dedicated memory to certain processes / resources by default and then additional resources when the configuration is added. This ensures proper functionality and is again by design.

The I/O Memory pool buffers information transmitted to and from the CPU, and does not affect the actual forwarding of packets on the switch.

Translation: the switch claims these resources by default, even if they aren't all actively in use. Think of it as setting memory aside for future use rather than allocating it dynamically on demand.
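The idea of preallocation confusing a utilization monitor can be sketched in a few lines of Python. This is a toy illustration, not actual switch code: a pool that grabs all its buffers up front looks 100% "allocated" to a naive byte-counting monitor, even when almost none of the buffers are in use.

```python
class BufferPool:
    """Toy sketch: all buffers are claimed up front, so a monitor that
    counts allocated bytes reports full utilization even when idle."""

    def __init__(self, count: int, size: int):
        self._free = [bytearray(size) for _ in range(count)]
        self._total = count

    def acquire(self) -> bytearray:
        return self._free.pop()

    def release(self, buf: bytearray) -> None:
        self._free.append(buf)

    @property
    def reported_utilization(self) -> float:
        # A naive monitor counting allocated bytes sees 100% here,
        # regardless of how many buffers are actually handed out.
        return 1.0

    @property
    def in_use_fraction(self) -> float:
        # The fraction of buffers actually handed out right now.
        return 1 - len(self._free) / self._total

pool = BufferPool(count=8, size=1500)
buf = pool.acquire()
print(pool.reported_utilization, pool.in_use_fraction)  # 1.0 0.125
```

The gap between the two numbers is exactly the gap between what LibreNMS graphs and what the switch is really doing with its I/O memory pool.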