Arista BGP EVPN+VXLAN for DCI

In this post I will go through BGP EVPN + VXLAN for Data Center Interconnect (DCI) with Arista switches. VXLAN decouples and abstracts the logical topology from the physical underlay network by using MAC-in-IP encapsulation. VXLAN is described in RFC 7348, where you can read more about this technology. The initial VXLAN standard describes a multicast-based flood-and-learn approach for broadcast, unknown unicast and multicast (BUM) traffic in the overlay. Such flooding introduces scalability concerns, and to overcome the limitations of flood-and-learn VXLAN, BGP EVPN can be used as the control plane for VXLAN. BGP EVPN is defined in RFC 7432 and provides a standards-based control plane for VXLAN overlays. The MP-BGP EVPN control plane provides VTEP peer discovery and end-host reachability information across the fabric. In addition, MP-BGP EVPN inherits multitenancy support through the VRF construct.
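For reference, a VXLAN-encapsulated frame looks roughly as follows on the wire (per RFC 7348): the outer IP addresses are the VTEP addresses, the outer UDP destination port is 4789 and the 24-bit VNI in the VXLAN header identifies the overlay segment. The extra headers add roughly 50 bytes, which is worth keeping in mind when sizing the underlay MTU.

Outer Ethernet | Outer IP (VTEP to VTEP) | Outer UDP (dst 4789) | VXLAN header (24-bit VNI) | Original Ethernet frame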

Below are some design options for a BGP EVPN + VXLAN fabric.

Option 1 – Single-AS model

The first option is called the single-AS model because all leaf and spine switches in the fabric are placed into a single BGP AS. Such a deployment requires an IGP for the underlay plus a full mesh of iBGP sessions between all switches in the fabric. A full mesh of iBGP peers does not scale and only suits small fabrics with a handful of switches. For bigger deployments a better approach is to rely on route reflectors (RRs) configured on the spine switches, which scales better and does not require a full mesh between all the peers; a minimal sketch of that variant is shown below.
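The sketch reuses the EOS commands that appear in the full configurations later in this post; the AS number, peer-group name and leaf loopback addresses (10.0.0.11, 10.0.0.12) are placeholders, and an IGP (OSPF in the DC2 example later) still provides loopback reachability.

router bgp 65000
   neighbor EVPN_RR_CLIENTS peer-group
   neighbor EVPN_RR_CLIENTS remote-as 65000
   neighbor EVPN_RR_CLIENTS update-source Loopback0
   neighbor EVPN_RR_CLIENTS route-reflector-client
   neighbor EVPN_RR_CLIENTS send-community extended
   neighbor 10.0.0.11 peer-group EVPN_RR_CLIENTS
   neighbor 10.0.0.12 peer-group EVPN_RR_CLIENTS
   !
   address-family evpn
      neighbor EVPN_RR_CLIENTS activate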

Option 2 – Two-AS model

The second option is called the two-AS model because all spine switches are placed into one BGP AS and all leaf switches are placed into another BGP AS. In this design there is no requirement to run an IGP for the underlay network, as eBGP runs between the spine and leaf switches directly over the physical links. For the overlay network, multi-hop eBGP needs to be configured between the spine and leaf loopbacks. In addition, next-hop-unchanged must be set on the spines to make sure EVPN routes keep pointing to the proper VTEP, and allowas-in must be configured on the leaf switches so that they accept BGP routes originated from their own BGP AS. The two knobs that distinguish this model are summarised below.
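These lines are taken from the full DC1 configuration later in this post, which uses the two-AS model:

! spine (AS 100), EVPN peering towards the leaf loopbacks
neighbor EVPN_LEAF next-hop-unchanged
neighbor EVPN_LEAF ebgp-multihop 2
! leaf (AS 200), accept routes that already carry AS 200 in the path
neighbor EVPN_SPINE allowas-in 1
neighbor EVPN_SPINE ebgp-multihop 2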

Option 3 – Multi-AS model

The third option is called the multi-AS model because all spine switches are placed into one BGP AS and every leaf switch is placed into its own BGP AS. As with the two-AS model, there is no requirement to run an IGP for the underlay network, as eBGP runs between the spine and leaf switches directly over the physical links. For the overlay network, multi-hop eBGP needs to be configured between the spine and leaf loopbacks, and next-hop-unchanged must be set on the spines to make sure EVPN routes keep pointing to the proper VTEP. Because every leaf sits in its own AS, allowas-in is not required on the leafs; a minimal sketch of the spine side is shown below.
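The sketch reuses the commands from the DC1 configuration later in this post; the per-leaf AS numbers (65001, 65002) are illustrative, while the loopback addresses match the ones used later.

router bgp 100
   neighbor EVPN_LEAF peer-group
   neighbor EVPN_LEAF next-hop-unchanged
   neighbor EVPN_LEAF update-source Loopback1
   neighbor EVPN_LEAF ebgp-multihop 2
   neighbor EVPN_LEAF send-community extended
   neighbor 11.11.11.111 peer-group EVPN_LEAF
   neighbor 11.11.11.111 remote-as 65001
   neighbor 12.12.12.112 peer-group EVPN_LEAF
   neighbor 12.12.12.112 remote-as 65002
   !
   address-family evpn
      neighbor EVPN_LEAF activate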

In this document I am going to focus on the BGP EVPN + VXLAN configuration between two different locations for L2 extension. I will use the network topology below to demonstrate how to provide L2 extension between two DCs using Arista switches. In DC1 I will use the two-AS model and in DC2 the single-AS model.

EVPN+VXLAN DCI

Before we dive into more detail, a short explanation of how the above network is set up. DC1 is set up so that eBGP is used for both the underlay and the overlay network between the spine and leaf switches; there are eBGP sessions for the ipv4 and evpn address families. The spine switch is configured with BGP AS 100 whereas the leaf switches are configured in BGP AS 200. DC2 is set up with OSPF for the underlay network and iBGP for the overlay network. Between both DCs there is eBGP for the ipv4 and evpn address families.

In addition, VLAN 100 is mapped to VNI 1100 and VRF A is mapped to VNI 1000. VNI 1000 is used for inter-VLAN routing.

Below is the configuration for the DC1 side, providing basic eBGP connectivity between the spine and leaf switches. The config includes the IPv4 and EVPN address families for underlay and overlay connectivity.

leaf1-dc1

service routing protocols model multi-agent
!
hostname leaf1-dc1
!
vlan 100
!
vrf definition A
!
interface Ethernet1
   no switchport
   ip address 10.1.11.11/24
!
interface Ethernet3
   switchport access vlan 100
!
interface Loopback0
   ip address 11.11.11.11/32
!
interface Loopback1
   ip address 11.11.11.111/32
!
interface Vlan100
   vrf forwarding A
   ip address 100.1.1.11/24
   ip virtual-router address 100.1.1.1
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan udp-port 4789
   vxlan vlan 100 vni 1100
   vxlan vrf A vni 1000
   vxlan flood vtep 12.12.12.112 151.1.1.3
!
ip virtual-router mac-address 00:00:00:00:01:00
!
ip routing
ip routing vrf A
!
router bgp 200
   router-id 11.11.11.11
   neighbor EVPN_SPINE peer-group
   neighbor EVPN_SPINE remote-as 100
   neighbor EVPN_SPINE update-source Loopback1
   neighbor EVPN_SPINE allowas-in 1
   neighbor EVPN_SPINE ebgp-multihop 2
   neighbor EVPN_SPINE send-community extended
   neighbor EVPN_SPINE maximum-routes 12000 
   neighbor SPINE peer-group
   neighbor SPINE remote-as 100
   neighbor SPINE allowas-in 1
   neighbor SPINE maximum-routes 12000 
   neighbor 1.1.1.11 peer-group EVPN_SPINE
   neighbor 10.1.11.1 peer-group SPINE
   redistribute attached-host
   !
   vlan 100
      rd 100:100
      route-target both 100:100
      redistribute learned
   !
   address-family evpn
      neighbor EVPN_SPINE activate
   !
   address-family ipv4
      neighbor SPINE activate
      no neighbor EVPN_SPINE activate
      network 11.11.11.11/32
      network 11.11.11.111/32
   !
   vrf A
      rd 100:100
      route-target import evpn 100:100
      route-target export evpn 100:100
      redistribute connected

leaf2-dc1

service routing protocols model multi-agent
!
hostname leaf2-dc1
!
vlan 100,200
!
vrf definition A
!
interface Ethernet1
   no switchport
   ip address 10.1.12.12/24
!
interface Ethernet3
   switchport access vlan 100
!
interface Ethernet4
   switchport access vlan 200
!
interface Loopback0
   ip address 12.12.12.12/32
!
interface Loopback1
   ip address 12.12.12.112/32
!
interface Vlan100
   vrf forwarding A
   ip address 100.1.1.12/24
   ip virtual-router address 100.1.1.1
!
interface Vlan200
   vrf forwarding A
   ip address 100.1.12.22/24
   ip virtual-router address 100.1.12.1
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan udp-port 4789
   vxlan vlan 100 vni 1100
   vxlan vlan 200 vni 1200
   vxlan vrf A vni 1000
   vxlan flood vtep 11.11.11.111 151.1.1.3
!
ip virtual-router mac-address 00:00:00:00:01:00
!
ip routing
ip routing vrf A
!
router bgp 200
   router-id 12.12.12.12
   neighbor EVPN_SPINE peer-group
   neighbor EVPN_SPINE remote-as 100
   neighbor EVPN_SPINE update-source Loopback1
   neighbor EVPN_SPINE allowas-in 1
   neighbor EVPN_SPINE ebgp-multihop 2
   neighbor EVPN_SPINE send-community extended
   neighbor EVPN_SPINE maximum-routes 12000 
   neighbor SPINE peer-group
   neighbor SPINE remote-as 100
   neighbor SPINE allowas-in 1
   neighbor SPINE maximum-routes 12000 
   neighbor 1.1.1.11 peer-group EVPN_SPINE
   neighbor 10.1.12.1 peer-group SPINE
   !
   vlan 100
      rd 100:100
      route-target both 100:100
      redistribute learned
   !
   vlan 200
      rd 100:100
      route-target both 100:100
      redistribute learned
   !
   address-family evpn
      neighbor EVPN_SPINE activate
   !
   address-family ipv4
      neighbor SPINE activate
      no neighbor EVPN_SPINE activate
      network 12.12.12.12/32
      network 12.12.12.112/32
   !
   vrf A
      rd 100:100
      route-target import evpn 100:100
      route-target export evpn 100:100
      redistribute connected

spine1-dc1

service routing protocols model multi-agent
!
hostname spine1-dc1
!
interface Ethernet1
   no switchport
   ip address 10.1.11.1/24
!
interface Ethernet2
   no switchport
   ip address 10.1.12.1/24
!
interface Ethernet3
   no switchport
   ip address 172.16.1.1/30
!
interface Loopback0
   ip address 1.1.1.1/32
!
interface Loopback1
   ip address 1.1.1.11/32
!
ip routing
!
router bgp 100
   router-id 1.1.1.1 
   neighbor EVPN_LEAF peer-group
   neighbor EVPN_LEAF remote-as 200
   neighbor EVPN_LEAF next-hop-unchanged
   neighbor EVPN_LEAF update-source Loopback1
   neighbor EVPN_LEAF ebgp-multihop 2
   neighbor EVPN_LEAF send-community extended
   neighbor EVPN_LEAF maximum-routes 12000 
   neighbor LEAF peer-group
   neighbor LEAF remote-as 200
   neighbor LEAF maximum-routes 12000 
   neighbor 10.1.11.11 peer-group LEAF
   neighbor 10.1.12.12 peer-group LEAF
   neighbor 11.11.11.111 peer-group EVPN_LEAF
   neighbor 12.12.12.112 peer-group EVPN_LEAF
   redistribute connected
   !
   address-family evpn
      neighbor EVPN_LEAF activate
   !
   address-family ipv4
      neighbor LEAF activate
      no neighbor EVPN_LEAF activate
      network 1.1.1.1/32
      network 1.1.1.11/32

To support MP-BGP for the ipv4 and evpn address families on Arista switches, the command service routing protocols model multi-agent must be applied on all switches. The rest of the config is pretty much self-explanatory. Both leaf switches use the allowas-in command under the EVPN peer group, which allows them to accept BGP updates for prefixes originated from their own BGP AS; in this example both leafs are in AS 200 and the spine switch is in AS 100. Without this command end-to-end reachability between the VTEPs won't be available.

With the above config, basic IP connectivity should be established between the leaf switches, and hosts in VLAN 100 should be able to ping each other.
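Before checking the MAC tables, it is worth confirming that the underlay and overlay sessions are established and that the remote VTEP has been learned. The following standard EOS show commands can be used for that; the outputs are omitted here as they are not part of the original lab capture.

leaf1-dc1#show ip bgp summary
leaf1-dc1#show bgp evpn summary
leaf1-dc1#show vxlan vtep
leaf1-dc1#show vxlan address-table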

The output below, taken on the leaf1-dc1 switch, shows that leaf1-dc1 learned the MAC address of the remote server 100.1.1.7 (0050.7966.6807) through the Vxlan1 interface, while the local server 100.1.1.6 (0050.7966.6806) was learned on the local port Et3.

leaf1-dc1#sh mac address-table vlan 100
          Mac Address Table
------------------------------------------------------------------

Vlan    Mac Address       Type        Ports      Moves   Last Move
----    -----------       ----        -----      -----   ---------
 100    0050.7966.6806    DYNAMIC     Et3        1       0:00:18 ago
 100    0050.7966.6807    DYNAMIC     Vx1        1       0:00:18 ago

Let's try to ping the server 100.1.1.7 from the server connected to the leaf1-dc1 switch.

server1-dc1> ping 100.1.1.7

84 bytes from 100.1.1.7 icmp_seq=1 ttl=64 time=611.308 ms
84 bytes from 100.1.1.7 icmp_seq=2 ttl=64 time=81.349 ms
84 bytes from 100.1.1.7 icmp_seq=3 ttl=64 time=70.616 ms
84 bytes from 100.1.1.7 icmp_seq=4 ttl=64 time=103.143 ms
84 bytes from 100.1.1.7 icmp_seq=5 ttl=64 time=72.063 ms

The ping was successful and, according to the packet capture, VNI 1100 was used for encapsulation. The outer IPs 11.11.11.111 and 12.12.12.112 are the VTEP addresses (Loopback1) of the leaf1-dc1 and leaf2-dc1 switches respectively.

To verify that BGP EVPN learned the expected MACs, run the command sh bgp evpn route-type mac-ip, which shows which MACs have been learned. As you can see, one MAC was learned locally and the other was learned from the remote VTEP with the IP address 12.12.12.112. In addition, the output shows which AS originated each MAC address.

leaf1-dc1#sh bgp evpn route-type mac-ip
BGP routing table information for VRF default
Router identifier 11.11.11.11, local AS number 200
Route status codes: s - suppressed, * - valid, > - active, # - not installed, E - ECMP head, e - ECMP
                    S - Stale, c - Contributing to ECMP, b - backup
                    % - Pending BGP convergence
Origin codes: i - IGP, e - EGP, ? - incomplete
AS Path Attributes: Or-ID - Originator ID, C-LST - Cluster List, LL Nexthop - Link Local Nexthop

         Network             Next Hop         Metric  LocPref Weight Path
 * >     RD: 100:100 mac-ip 0050.7966.6806
                             -                -       -       0       i
 * >     RD: 100:100 mac-ip 0050.7966.6807
                             12.12.12.112     -       100     0      100 200 i
leaf-1#

In this example there is also one server in VLAN 200 (100.1.12.8/24) connected to the leaf2-dc1 switch; VLAN 200 is mapped to VNI 1200.

leaf2-dc1#show vxlan vni
VNI to VLAN Mapping for Vxlan1
VNI        VLAN        Source       Interface       802.1Q Tag 
---------- ----------- ------------ --------------- ---------- 
1000       1007*       evpn         Vxlan1          1007       
1100       100         static       Ethernet3       untagged   
1200       200         static       Ethernet4       untagged 

Checking the VRF A routing table on the leaf1-dc1 switch, you can see that this prefix is learned via eBGP and that VNI 1000 is associated with it. As mentioned earlier in the post, VNI 1000 is mapped to VRF A on all switches, and this VNI is used for inter-VLAN routing.

leaf-1#sh ip route vrf A

VRF: A
Codes: C - connected, S - static, K - kernel, 
       O - OSPF, IA - OSPF inter area, E1 - OSPF external type 1,
       E2 - OSPF external type 2, N1 - OSPF NSSA external type 1,
       N2 - OSPF NSSA external type2, B I - iBGP, B E - eBGP,
       R - RIP, I L1 - IS-IS level 1, I L2 - IS-IS level 2,
       O3 - OSPFv3, A B - BGP Aggregate, A O - OSPF Summary,
       NG - Nexthop Group Static Route, V - VXLAN Control Service,
       DH - DHCP client installed default route, M - Martian,
       DP - Dynamic Policy Route

Gateway of last resort is not set

 C      100.1.1.0/24 is directly connected, Vlan100
 B E    100.1.12.0/24 [200/0] via VTEP 12.12.12.112 VNI 1000 router-mac 50:00:00:03:37:66

Ping from the server 100.1.1.6 in VLAN 100 to the server 100.1.12.8 in VLAN 200 works as expected.

server1-dc1> ping 100.1.12.8

84 bytes from 100.1.12.8 icmp_seq=1 ttl=63 time=533.109 ms
84 bytes from 100.1.12.8 icmp_seq=2 ttl=63 time=85.119 ms
84 bytes from 100.1.12.8 icmp_seq=3 ttl=63 time=87.279 ms
84 bytes from 100.1.12.8 icmp_seq=4 ttl=63 time=85.868 ms
84 bytes from 100.1.12.8 icmp_seq=5 ttl=63 time=78.452 ms

The outer IPs 11.11.11.111 and 12.12.12.112 are the VTEPs on the leaf1-dc1 and leaf2-dc1 switches respectively, and the packet capture clearly indicates that VNI 1000 was used for the data-plane traffic between both servers.
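If you want to reproduce such a capture yourself, filtering on the UDP port configured under the Vxlan1 interface is usually enough; Wireshark then decodes the VXLAN header and shows the VNI. For example, with tcpdump on a monitor-session destination or a transit host (the interface name is just a placeholder):

tcpdump -ni eth0 udp port 4789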

The MAC address table on leaf1-dc1 shows that the MAC address of the server 100.1.12.8 (0050.7966.6808) was learned through the Vxlan1 interface.

leaf-1#show mac address-table
          Mac Address Table
------------------------------------------------------------------

Vlan    Mac Address       Type        Ports      Moves   Last Move
----    -----------       ----        -----      -----   ---------
 100    0050.7966.6806    DYNAMIC     Et3        1       0:00:39 ago
 100    0050.7966.6807    DYNAMIC     Vx1        1       0:00:06 ago
 100    0050.7966.6808    DYNAMIC     Vx1        1       0:00:39 ago

The command below can be used to verify the VLAN-to-VNI mapping. As shown, VLAN 100 is mapped to VNI 1100 on both leafs and VLAN 200 is mapped to VNI 1200 on leaf2-dc1. VLAN 1007 was dynamically created because VRF A is mapped to VNI 1000.

leaf1-dc1#sh vxlan vni 
VNI to VLAN Mapping for Vxlan1
VNI        VLAN        Source       Interface       802.1Q Tag 
---------- ----------- ------------ --------------- ---------- 
1000       1007*       evpn         Vxlan1          1007       
1100       100         static       Ethernet3       untagged   

Note: * indicates a Dynamic VLAN


leaf2-dc1#show vxlan vni
VNI to VLAN Mapping for Vxlan1
VNI        VLAN        Source       Interface       802.1Q Tag 
---------- ----------- ------------ --------------- ---------- 
1000       1007*       evpn         Vxlan1          1007       
1100       100         static       Ethernet3       untagged   
1200       200         static       Ethernet4       untagged   

To verify which prefixes are learned for inter-VLAN routing, run the command sh bgp evpn route-type ip-prefix ipv4. The output below indicates that the prefix 100.1.12.0/24 for VLAN 200 on the leaf2-dc1 switch is learned from the VTEP 12.12.12.112 and was originated by BGP AS 200. Note that even though there is an eBGP EVPN session between leaf1-dc1 and spine1-dc1, the next-hop address was not changed and points directly to the remote VTEP. This is because next-hop-unchanged was added to the EVPN peer group on the spine.

leaf-1#sh bgp evpn route-type ip-prefix ipv4 
BGP routing table information for VRF default
Router identifier 11.11.11.11, local AS number 200
Route status codes: s - suppressed, * - valid, > - active, # - not installed, E - ECMP head, e - ECMP
                    S - Stale, c - Contributing to ECMP, b - backup
                    % - Pending BGP convergence
Origin codes: i - IGP, e - EGP, ? - incomplete
AS Path Attributes: Or-ID - Originator ID, C-LST - Cluster List, LL Nexthop - Link Local Nexthop

         Network             Next Hop         Metric  LocPref Weight Path
 * >     RD: 100:100 ip-prefix 100.1.1.0/24
                             -                -       -       0       i
 * >     RD: 100:100 ip-prefix 100.1.12.0/24
                             12.12.12.112     -       100     0      100 200 i

DC2 in this example uses a different approach, where an IGP is used for the underlay network and iBGP is used for the overlay network; this is the single-AS model described earlier.

Below is the config for leaf1-dc2

service routing protocols model multi-agent
!
hostname leaf1-dc2
!
vlan 100
!
vrf definition A
!
interface Ethernet1
   switchport access vlan 100
!
interface Ethernet2
   no switchport
   ip address 10.10.13.3/24
   ip ospf network point-to-point
   ip ospf area 0.0.0.0
!
interface Loopback0
   ip address 150.1.1.3/32
   ip ospf area 0.0.0.0
!
interface Loopback1
   ip address 151.1.1.3/32
   ip ospf area 0.0.0.0
!
interface Vlan100
   vrf forwarding A
   ip address 100.1.1.22/24
   ip virtual-router address 100.1.1.1
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan udp-port 4789
   vxlan vlan 100 vni 1100
   vxlan vrf A vni 1000
   vxlan flood vtep 11.11.11.111 12.12.12.112
!
ip virtual-router mac-address 00:00:00:00:01:00
!
ip routing
ip routing vrf A
!
router bgp 300
   router-id 150.1.1.3
   neighbor iBGP peer-group
   neighbor iBGP remote-as 300
   neighbor iBGP update-source Loopback0
   neighbor iBGP send-community extended
   neighbor iBGP maximum-routes 12000 
   neighbor 150.1.1.1 peer-group iBGP
   !
   vlan 100
      rd 100:100
      route-target both 100:100
      redistribute learned
   !
   address-family evpn
      neighbor iBGP activate
   !
   address-family ipv4
      neighbor iBGP activate
      network 150.1.1.3/32
      network 151.1.1.3/32
   !
   vrf A
      rd 100:100
      route-target import evpn 100:100
      route-target export evpn 100:100
      redistribute connected
!
router ospf 1
   router-id 150.1.1.3
   max-lsa 12000

And the following config is for spine1-dc2

service routing protocols model multi-agent
!
hostname spine1-dc2
!
interface Ethernet1
   no switchport
   ip address 172.16.1.2/30
!
interface Ethernet2
!
interface Ethernet3
   no switchport
   ip address 10.10.13.1/24
   ip ospf network point-to-point
   ip ospf area 0.0.0.0
!
interface Loopback0
   ip address 150.1.1.1/32
   ip ospf area 0.0.0.0
!
ip routing
!
router bgp 300
   neighbor iBGP peer-group
   neighbor iBGP remote-as 300
   neighbor iBGP update-source Loopback0
   neighbor iBGP route-reflector-client
   neighbor iBGP send-community extended
   neighbor iBGP maximum-routes 12000 
   neighbor 150.1.1.3 peer-group iBGP
   redistribute connected
   !
   address-family evpn
      neighbor DCI activate
      neighbor iBGP activate
   !
   address-family ipv4
      neighbor DCI activate
      neighbor iBGP activate
      network 150.1.1.1/32
!
router ospf 1
   router-id 150.1.1.1
   max-lsa 12000

At this stage there should be basic reachability between the spine and leaf loopbacks, and VLAN 100 on the leaf1-dc2 switch should be advertised via MP-BGP EVPN. Below is the output from the spine1-dc2 switch, which shows that it learned the correct EVPN information for the VLAN 100 prefix 100.1.1.0/24.

spine1-dc2#sh bgp evpn route-type ip-prefix 100.1.1.0/24
BGP routing table information for VRF default
Router identifier 150.1.1.1, local AS number 300
BGP routing table entry for ip-prefix 100.1.1.0/24, Route Distinguisher: 100:100
 Paths: 2 available
  Local (Received from a RR-client)
    151.1.1.3 from 150.1.1.3 (150.1.1.3)
      Origin IGP, metric -, localpref 100, weight 0, valid, internal, best
      Extended Community: Route-Target-AS:100:100 TunnelEncap:tunnelTypeVxlan EvpnRouterMac:50:00:00:15:f4:e8
      VNI: 1000

To provide L2 extension between both DCs, MP-BGP EVPN needs to be configured between the two locations so that they can exchange BGP EVPN end-host and prefix reachability information.

To do so, the following config is applied on the spine1-dc1 and spine1-dc2 switches respectively, which provides transport for the ipv4 and evpn address families between both locations. There is no requirement to run eBGP for ipv4 between DC1 and DC2, however it makes sense in order to avoid mutual redistribution between different routing protocols. It's worth mentioning that DCI connectivity does not have to terminate on the spine switches and can instead be done between dedicated border leafs. If for some reason you are not able to run BGP for IPv4 in the backbone between your DCs, any IGP routing protocol can be used to provide transport for the VTEP endpoints; in that scenario, multi-hop eBGP for the IPv4 and EVPN address families can be used to exchange BGP advertisements between the sites.

spine1-dc1

router bgp 100
   router-id 1.1.1.1
   neighbor DCI peer-group
   neighbor DCI remote-as 300
   neighbor DCI send-community extended
   neighbor DCI maximum-routes 12000 
   neighbor 172.16.1.2 peer-group DCI
   !
   address-family evpn
      neighbor DCI activate
   !
   address-family ipv4
      neighbor DCI activate
   

spine1-dc2

router bgp 300
   router-id 150.1.1.1
   neighbor DCI peer-group
   neighbor DCI remote-as 100
   neighbor DCI send-community extended
   neighbor DCI maximum-routes 12000 
   neighbor 172.16.1.1 peer-group DCI
   !
   address-family evpn
      neighbor DCI activate
   !
   address-family ipv4
      neighbor DCI activate

Once MP-BGP for the EVPN and IPv4 address families is configured between DC1 and DC2, both locations should start exchanging IPv4 and EVPN information for the underlay and overlay networks. The output from the spine1-dc1 switch shows that it learned the DC2 EVPN prefixes for the server 100.1.1.9 (0050.7966.6809) with a next-hop of 151.1.1.3, which is the leaf1-dc2 VTEP.

spine1-dc1#sh bgp evpn next-hop 151.1.1.3 
BGP routing table information for VRF default
Router identifier 1.1.1.1, local AS number 100
Route status codes: s - suppressed, * - valid, > - active, # - not installed, E - ECMP head, e - ECMP
                    S - Stale, c - Contributing to ECMP, b - backup
                    % - Pending BGP convergence
Origin codes: i - IGP, e - EGP, ? - incomplete
AS Path Attributes: Or-ID - Originator ID, C-LST - Cluster List, LL Nexthop - Link Local Nexthop

         Network             Next Hop         Metric  LocPref Weight Path
 * >     RD: 100:100 mac-ip 0050.7966.6809
                             151.1.1.3        -       100     0      300 i
 * >     RD: 100:100 mac-ip 0050.7966.6809 100.1.1.9
                             151.1.1.3        -       100     0      300 i
 * >     RD: 100:100 imet 151.1.1.3
                             151.1.1.3        -       100     0      300 i
 *       RD: 100:100 ip-prefix 100.1.1.0/24
                             151.1.1.3        -       100     0      300 i

Looking at the leaf1-dc2 switch's BGP EVPN routing table, you can see similar output, with entries for both VLAN 100 and VLAN 200.

leaf1-dc2#sh bgp evpn 
BGP routing table information for VRF default
Router identifier 150.1.1.3, local AS number 300
Route status codes: s - suppressed, * - valid, > - active, # - not installed, E - ECMP head, e - ECMP
                    S - Stale, c - Contributing to ECMP, b - backup
                    % - Pending BGP convergence
Origin codes: i - IGP, e - EGP, ? - incomplete
AS Path Attributes: Or-ID - Originator ID, C-LST - Cluster List, LL Nexthop - Link Local Nexthop

         Network             Next Hop         Metric  LocPref Weight Path
 * >     RD: 100:100 mac-ip 0050.7966.6806
                             11.11.11.111     -       100     0      100 200 i
 * >     RD: 100:100 mac-ip 0050.7966.6806 100.1.1.6
                             11.11.11.111     -       100     0      100 200 i
 * >     RD: 100:100 mac-ip 0050.7966.6807
                             12.12.12.112     -       100     0      100 200 i
 * >     RD: 100:100 mac-ip 0050.7966.6807 100.1.1.7
                             12.12.12.112     -       100     0      100 200 i
 * >     RD: 100:100 mac-ip 0050.7966.6808
                             12.12.12.112     -       100     0      100 200 i
 * >     RD: 100:100 mac-ip 0050.7966.6808 100.1.12.8
                             12.12.12.112     -       100     0      100 200 i
 * >     RD: 100:100 mac-ip 0050.7966.6809
                             -                -       -       0       i
 * >     RD: 100:100 mac-ip 0050.7966.6809 100.1.1.9
                             -                -       -       0       i
 * >     RD: 100:100 imet 11.11.11.111
                             11.11.11.111     -       100     0      100 200 i
 * >     RD: 100:100 imet 12.12.12.112
                             12.12.12.112     -       100     0      100 200 i
 * >     RD: 100:100 imet 151.1.1.3
                             -                -       -       0       i
 * >     RD: 100:100 ip-prefix 100.1.1.0/24
                             -                -       -       0       i
 * >     RD: 100:100 ip-prefix 100.1.12.0/24
                             12.12.12.112     -       100     0      100 200 i

From a logical perspective the topology looks like the one presented below, where BGP EVPN is configured between both locations and is responsible for propagating EVPN route-type 2 and route-type 5 routes. BGP EVPN works as the control plane and VXLAN as the data plane. The core DCI network does not have to support BGP EVPN functionality and can be IP-only. If the DCI core cannot support BGP EVPN, then multi-hop BGP EVPN must be configured between both sites, as sketched below.
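A minimal sketch of that multi-hop variant from the spine1-dc1 side, reusing the DCI peer group from this post; the loopback addresses are the ones already used here, the ebgp-multihop value of 5 is an assumption that depends on the number of hops in the core, and the underlay (static routes or an IGP towards the core) still has to provide reachability to the remote loopbacks and VTEP addresses. A mirrored configuration on spine1-dc2, peering with 1.1.1.11, completes the session.

router bgp 100
   neighbor DCI peer-group
   neighbor DCI remote-as 300
   neighbor DCI update-source Loopback1
   neighbor DCI ebgp-multihop 5
   neighbor DCI send-community extended
   neighbor 150.1.1.1 peer-group DCI
   !
   address-family evpn
      neighbor DCI activate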

As demonstrated above, L2 VXLAN extension with BGP EVPN between different locations using Arista switches is pretty straightforward. Below is a list of some of the benefits of using BGP EVPN as the control plane:

  • Standards-based BGP control plane for VXLAN
  • Reuse of well-known and mature MP-BGP concepts to deliver multi-tenant Layer 2 and Layer 3 VPNs
  • MAC address learning in the control plane using BGP, rather than flood-and-learn
  • Optional ARP (MAC-to-IP) learning/suppression to reduce traffic flooding across Layer 2 domains
  • MAC flapping prevention using address damping techniques
  • Support for active-active and active-standby multi-homing of end nodes, an important optimization over other L2VPNs that only provide one active path into and out of the VPN (to avoid loops)
