Understanding VxLAN Bridging Configuration

Welcome to the fifth part of our VxLAN series. In this article, we will delve into the configuration aspect, specifically focusing on bridging. By the end of this article, you will gain a solid understanding of how to configure VxLAN with data plane learning on Nexus 9000 series switches.


Simplifying Learning with Data Plane Configuration

In part 4, we discussed the two methods of address learning: control plane learning and data plane learning. While control plane learning is preferred, we will configure data plane learning in this article. Why? Because it simplifies the learning process, allowing us to easily grasp VxLAN concepts such as VTEP configuration, VNIs, overlay, and verification. In part 6, we will explore the more advanced control plane configuration.

Basic Topology for Easy Configuration

To keep things simple, we will use a basic topology consisting of two Nexus 9000 switches. Each switch will have a VTEP interface, and there will be a single routed link between the switches to simulate the underlay network. Additionally, we have attached a host to each switch, connected via access ports configured with VLAN 1000. Initially, these hosts won’t be able to communicate due to the routed link, but we will resolve this using VxLAN.

Configuring the Underlying Infrastructure

To begin with, we need to configure the underlying infrastructure. This includes enabling features, setting the MTU, mapping VLAN 1000 to VNI 5000, configuring the routed link, configuring host ports, and creating loopback interfaces.


First, we enable OSPF to manage the underlay, PIM for multicast (handling BUM traffic), and the nv overlay feature for VxLAN. Note that for Nexus 9000 series on versions 7.0(3)I5(1) and earlier, a system routing template is required. However, newer versions handle this transparently, eliminating the need for manual configuration.
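As a sketch, the feature set described above might be enabled like this on both switches (note: the vn-segment-vlan-based feature, which is also required for the VLAN-to-VNI mapping in the next step, is an addition not spelled out in the text):

```
feature ospf                  ! underlay routing
feature pim                   ! multicast for BUM traffic
feature nv overlay            ! VxLAN / NVE support
feature vn-segment-vlan-based ! allows mapping a VLAN to a VNI
```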

Next, we define VLAN 1000 for the hosts and map it to VNI 5000 using the vn-segment command. Keep in mind that the additional VxLAN headers increase packet size, so the MTU needs to be adjusted accordingly. It’s important to note that the maximum MTU may vary depending on the platform.
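A minimal sketch of the VLAN-to-VNI mapping and the MTU adjustment (the 9216-byte jumbo MTU is an assumed platform maximum; check your platform's limit):

```
vlan 1000
  vn-segment 5000      ! map VLAN 1000 to VNI 5000

system jumbomtu 9216   ! raise the global MTU to absorb the VxLAN overhead
```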

On releases that require the routing template, a reboot is needed for the change to take effect. Once the switch is back up, we configure the routed link by making port 49 a Layer 3 interface with the no switchport command. We assign an IP address to the interface and add it to OSPF to establish underlay connectivity. Since we will be running multicast, the interface is also set to PIM sparse mode.
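The routed-link configuration described above could look like the following sketch (the OSPF process tag and the point-to-point addressing are placeholder assumptions, not taken from the article; the second switch would mirror this with the other address of the /30):

```
router ospf 1

interface Ethernet1/49
  no switchport                  ! make this a routed (Layer 3) port
  mtu 9216                       ! match the adjusted underlay MTU
  ip address 10.1.1.1/30         ! placeholder underlay addressing
  ip router ospf 1 area 0.0.0.0  ! advertise the link into OSPF
  ip pim sparse-mode             ! multicast for BUM traffic
  no shutdown
```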

To verify that the configuration is working, we can perform a ping and check if the OSPF neighbors form successfully. For the host ports, we simply configure them as access ports in VLAN 1000 without any special VxLAN configuration.
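The host-port and verification steps above can be sketched as follows (the interface number and the peer address in the ping are illustrative assumptions):

```
interface Ethernet1/1
  switchport mode access
  switchport access vlan 1000   ! plain access port, no VxLAN config needed

! verify the routed underlay link
ping 10.1.1.2                   ! placeholder: the peer's routed-link address
show ip ospf neighbors          ! the adjacency should reach FULL state
```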

We also create loopback interfaces for multiple purposes. Firstly, we use the loopback IP as the rendezvous point in the multicast topology. Additionally, the VTEP gets its IP address from the loopback interface. Ensure that all loopback interfaces are added to OSPF and set to sparse mode.
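A hedged sketch of the loopback configuration (the /32 address is a placeholder; each switch gets its own unique loopback address):

```
interface loopback0
  ip address 10.0.0.1/32         ! VTEP source address, also usable for the RP
  ip router ospf 1 area 0.0.0.0  ! must be reachable through the underlay
  ip pim sparse-mode
```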

Configuring Multicast Infrastructure

Next, we configure the multicast infrastructure: setting the rendezvous point (RP) address and configuring anycast RP across the two switches. These steps establish a simple multicast topology; a deeper discussion of multicast design is outside the scope of this article.
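As an illustrative sketch of the RP and anycast-RP configuration on each switch (the shared anycast address 10.0.0.254 and the per-switch RP addresses are placeholders; the anycast address would also need to be configured on a loopback of both switches):

```
! point all multicast groups at the shared anycast RP address
ip pim rp-address 10.0.0.254 group-list 224.0.0.0/4

! both switches act as the anycast RP, listing each member's own address
ip pim anycast-rp 10.0.0.254 10.0.0.1
ip pim anycast-rp 10.0.0.254 10.0.0.2
```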


Setting Up the VTEP (NVE Interface)

Now, let’s focus on configuring the VTEP, or the NVE interface in Nexus terminology. The IP address for the NVE interface is not configured directly but is derived from the loopback interface. This is also where we bind VNIs to their multicast groups, enabling proper handling of BUM (Broadcast, Unknown unicast, Multicast) traffic. For a more in-depth understanding of multicast and BUM traffic, you can refer back to part 4. To verify the configuration, you can use the show nve interface command. It displays the active NVE interface, its VxLAN configuration, the learning mode (in this case, data plane learning), and the IP address derived from the loopback interface.
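The NVE interface configuration described above can be sketched as follows (the multicast group 239.1.1.1 is a placeholder choice for VNI 5000, not from the article):

```
interface nve1
  no shutdown
  source-interface loopback0            ! NVE derives its IP from loopback0
  member vni 5000 mcast-group 239.1.1.1 ! bind the VNI to its BUM group
```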

Verification and Troubleshooting

To ensure that everything is functioning as expected, we can use various verification commands. For example, show nve peers displays the discovered VTEPs, which are cached and time out over time due to the flood-and-learn behavior. We can also use show nve vni to check the learning mode and verify the VNI to multicast group mappings. Additionally, show nve interface provides information about the learning mode and the NVE interface’s IP address, derived from loopback0.
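Collected in one place, the verification commands mentioned above are:

```
show nve interface nve1   ! interface state, learning mode, source IP
show nve peers            ! VTEPs discovered via flood-and-learn (cached, age out)
show nve vni              ! VNI state and VNI-to-multicast-group mappings
```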

Conclusion

In this article, we explored the configuration of VxLAN with data plane learning on Nexus 9000 series switches. By configuring the underlying infrastructure, multicast topology, and VTEP, we were able to establish connectivity and successfully handle BUM traffic. Stay tuned for part 6, where we will tackle the control plane learning aspect and delve into EVPN configuration. If you found this article informative, please subscribe to our channel and let us know your thoughts in the comments section. See you in part 6!
