The Ethernet VPN (EVPN) control plane was created utilizing a shiny new address family in Multi-Protocol BGP (MP-BGP). This address family exchanges Network Layer Reachability Information (NLRI) via a series of route types.
The beauty of BGP as the control plane for VXLAN is that we can use a single routing protocol with familiar concepts to manage new capabilities such as MAC address learning and VRF multi-tenancy, while providing optimized equal-cost multi-pathing (ECMP) across data centers and within the enterprise.
In the context of a VXLAN control plane, we use EVPN as a Network Virtualization Overlay (NVO).
You can think of Type-2 routes as VLAN-based, advertising an end host’s MAC and IP address within the VLAN over an IP network.
A VXLAN Network Identifier (VNI) is mapped to a VLAN. Any Leaf in any pod configured with the VNI will be able to share end-host MAC addresses to provide Layer 2 reachability.
As a Leaf switch learns a locally attached MAC address, it will advertise it into EVPN as a Type-2 route. Other Leaf VTEPs with this VNI configured will install the MAC address in their CAM tables.
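As a sketch of the VLAN-to-VNI mapping described above, in Arista EOS-style syntax (the VLAN, VNI, RD, and route-target values here are hypothetical):

```
vlan 10
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan vlan 10 vni 10010
!
router bgp 65101
   vlan 10
      rd 10.0.0.1:10010
      route-target both 10010:10010
      ! advertise locally learned MACs as EVPN Type-2 routes
      redistribute learned
```

With this in place, any other Leaf mapping VNI 10010 to a local VLAN will import these Type-2 routes and populate its CAM table accordingly.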
Type-5 routes are IP Prefix-based, advertising a route prefix as opposed to a MAC Address.
A VXLAN Network Identifier (VNI) is mapped to a Virtual Routing & Forwarding (VRF) context that identifies the customer/tenant/segment uniquely within the fabric, allowing for multi-tenancy and route tables to coexist.
The advertisement of EVPN Type-5 routes provides the NLRI between subnets and routing contexts, allowing prefixes (not MACs) to be learned across the VRFs in the fabric.
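The VRF-to-VNI mapping for Type-5 routes might look like the following EOS-style sketch (tenant name, L3 VNI, and route-target values are hypothetical):

```
vrf instance TENANT-A
!
interface Vxlan1
   ! map the tenant VRF to a Layer 3 VNI
   vxlan vrf TENANT-A vni 50001
!
router bgp 65101
   vrf TENANT-A
      rd 10.0.0.1:50001
      route-target import evpn 50001:50001
      route-target export evpn 50001:50001
      ! export tenant subnets into EVPN as Type-5 routes
      redistribute connected
```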
A VTEP configured for a VNI will advertise itself as a member of that EVPN Instance (EVI) to all other VTEPs that are part of that EVI.
Each VTEP maintains a flood list of the other VTEPs in that EVI and performs headend replication of broadcast, unknown unicast, and multicast (BUM) traffic to them.
With Asymmetric IRB, if the host on the left wants to reach the host on the right, the local VTEP receives the packet, routes it on the local SVI, then VXLAN-bridges it to the remote VTEP, where the remote VTEP finally switches the packet to the host. Return traffic hits the remote host's local VTEP, gets routed on that VTEP's SVI, then VXLAN-bridged back, where it is finally switched to the host. The flow is asymmetric in that all routing is performed at the ingress VTEP while the egress VTEP only bridges, so each direction of traffic is carried over the destination VLAN's VNI. This type of asymmetric flow requires that every VTEP in the fabric is configured with an Anycast gateway for all VLANs in the environment and that every switch maintains ARP and CAM tables for all of those VLANs.
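The Anycast gateway requirement above means every VTEP carries every SVI, as in this EOS-style sketch (the virtual MAC, VLANs, and addresses are hypothetical; the same config is repeated on every VTEP):

```
! shared virtual MAC used by the Anycast gateway on all VTEPs
ip virtual-router mac-address 00:1c:73:00:00:99
!
interface Vlan10
   vrf TENANT-A
   ip address virtual 10.1.10.1/24
!
interface Vlan20
   vrf TENANT-A
   ip address virtual 10.1.20.1/24
```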
With Symmetric IRB, each VTEP performs both ingress and egress routing. Traffic between two segments is symmetric in that it is routed into the VRF at the ingress VTEP, carried over a dedicated Layer 3 VNI, routed again at the egress VTEP, and returns in the same manner.
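A minimal Symmetric IRB sketch in EOS-style syntax, assuming the same hypothetical tenant and VNI values as above. Only locally attached VLANs need SVIs on a given VTEP; inter-subnet traffic between VTEPs rides the shared L3 VNI:

```
interface Vxlan1
   vxlan vlan 10 vni 10010
   ! Layer 3 VNI shared by all VTEPs in the tenant VRF
   vxlan vrf TENANT-A vni 50001
!
interface Vlan10
   vrf TENANT-A
   ip address virtual 10.1.10.1/24
```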
#####
Configure Multi-Chassis Link Aggregation (MLAG)
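A baseline MLAG configuration for one member of a Leaf pair might look like this EOS-style sketch (VLAN, port-channel, and peer addressing are hypothetical; the peer mirrors this with the addresses swapped):

```
vlan 4094
   trunk group MLAG-PEER
!
no spanning-tree vlan-id 4094
!
interface Port-Channel999
   switchport mode trunk
   switchport trunk group MLAG-PEER
!
interface Vlan4094
   ip address 169.254.0.1/30
!
mlag configuration
   domain-id LEAF-PAIR-1
   local-interface Vlan4094
   peer-address 169.254.0.2
   peer-link Port-Channel999
```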
Configure Underlay Point-to-Point Interfaces
Every leaf connects to every spine. Each interface will be set up as a /31 point-to-point link.
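The /31 point-to-point interfaces could be configured as follows (interface numbers and addressing are hypothetical; the jumbo MTU accommodates VXLAN encapsulation overhead):

```
interface Ethernet1
   description P2P_SPINE1
   no switchport
   mtu 9214
   ip address 172.16.0.1/31
!
interface Ethernet2
   description P2P_SPINE2
   no switchport
   mtu 9214
   ip address 172.16.0.3/31
```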
Configure BGP Process
Each Leaf will establish an iBGP relationship with its peer Leaf
A /32 Loopback interface will be configured on each leaf and spine. These Loopback IP addresses will be used as the router-id in the BGP process on each switch.
Configure the BGP Process, assigning an AS number for each pair of devices
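The steps above can be sketched in EOS-style syntax (loopback address and AS number are hypothetical; both members of a Leaf pair would share the same AS):

```
interface Loopback0
   ip address 10.0.0.1/32
!
router bgp 65101
   router-id 10.0.0.1
   ! allow ECMP across the uplinks to the spines
   maximum-paths 4 ecmp 4
```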
Configure BGP EVPN Underlays
Each Spine will peer with each Leaf over each L3 point-to-point interface
The iBGP session allows traffic to flow over the peer link between the MLAG peers in the failure scenario where a Leaf switch loses its links to the Spines
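A sketch of the underlay peering on a Leaf, in EOS-style syntax (AS numbers and neighbor addresses are hypothetical and follow the /31 and MLAG addressing used earlier):

```
router bgp 65101
   neighbor UNDERLAY peer group
   neighbor UNDERLAY remote-as 65100
   ! eBGP to each spine over the L3 point-to-point links
   neighbor 172.16.0.0 peer group UNDERLAY
   neighbor 172.16.0.2 peer group UNDERLAY
   ! iBGP to the MLAG peer over the peer-link SVI
   neighbor 169.254.0.2 remote-as 65101
   ! advertise the loopback into the underlay
   network 10.0.0.1/32
```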
Configure BGP EVPN Overlays
Enable EVPN Capability
Configure VXLAN Tunnel Endpoints (VTEP)
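The overlay steps above might be sketched as follows in EOS-style syntax (spine loopback addresses, AS numbers, and the VTEP loopback are hypothetical; the EVPN sessions run between loopbacks, multiple hops away):

```
interface Loopback1
   description VTEP
   ip address 10.0.1.1/32
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan udp-port 4789
!
router bgp 65101
   neighbor EVPN-OVERLAY peer group
   neighbor EVPN-OVERLAY remote-as 65100
   neighbor EVPN-OVERLAY update-source Loopback0
   neighbor EVPN-OVERLAY ebgp-multihop 3
   ! EVPN routes carry extended communities (route targets)
   neighbor EVPN-OVERLAY send-community extended
   neighbor 10.0.0.101 peer group EVPN-OVERLAY
   neighbor 10.0.0.102 peer group EVPN-OVERLAY
   !
   address-family evpn
      neighbor EVPN-OVERLAY activate
```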
Extending Layer 2 across a data center or campus network via EVPN's network virtualization overlay.
Isolating traffic into a VRF and transporting that VRF over the EVPN network virtualization overlay using EVPN Type-5 routes.
Attaching a router to a VTEP. The router peers via BGP within a specific VRF and injects a default route, which is transported throughout the EVPN fabric in that specific VRF.
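The router-attachment use case might look like this EOS-style sketch on the Leaf where the router connects (the VRF, edge-router address, and AS numbers are hypothetical):

```
router bgp 65101
   vrf TENANT-A
      rd 10.0.0.1:50001
      route-target export evpn 50001:50001
      ! BGP session to the attached edge router inside the VRF;
      ! the default route it advertises is exported into EVPN
      ! as a Type-5 route via the route-target above
      neighbor 10.255.255.2 remote-as 65500
```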