Data Center Interconnect (DCI) using E-VPN and PBB-EVPN

July 21, 2014 by Ixia Blog Team

In my previous blog, I highlighted how VXLAN can help to connect data center sites. Leading networking vendors (such as Cisco, Juniper, and Alcatel) are promoting two other technologies, EVPN and PBB-EVPN, to address challenges in the data center interconnect space.

Over the years, MPLS-based L2VPN services have been proven and successfully deployed by service providers and enterprise campuses. MPLS was initially deployed for fast switching, but its scalability, resiliency, and protocol-agnostic nature made it successful across the network.

VPLS (Virtual Private LAN Service) is an MPLS service offering that extends a broadcast domain from one site to multiple sites over the WAN. Data centers are often situated in different locations to be geo-redundant for workload mobility and business continuity. The physical location of a data center has to be transparent to users and applications, so Layer 2 connectivity is required between the sites.

While Ethernet over MPLS (EoMPLS) and VPLS have been used for this purpose, data center interconnect (DCI) presents new requirements and challenges that are not fully addressed today.

Current challenges with VPLS:

  1. Scaling to thousands of MAC addresses. With the growth of virtualization, a single server can host hundreds of VMs (virtual machines), and each VM requires one or more MAC addresses, which multiplies MAC-address scaling requirements (see the sizing sketch after this list).
  2. Optimal forwarding of multicast. Multicast LSPs can be built in conjunction with VPLS, but they are limited to point-to-multipoint, which consumes more network resources, as VPLS defines no parameters for creating multipoint-to-multipoint multicast LSPs.
  3. Multi-homing. This is another key challenge. VPLS supports only the active/standby BGP multi-homing type; all-active circuits are not possible. This means a user can use only 50% of the available bandwidth instead of running both links at 100%.
  4. C-MAC (customer MAC) transparency. Current VPLS solutions do not support transparency of customer MACs: every PE in the VPLS instance must learn all customer MAC addresses.
  5. Fast convergence. When a virtual machine or physical server fails, the network must re-converge, which may lead to MAC-flushing problems.
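
To put the first challenge in perspective, here is a back-of-the-envelope sketch with purely hypothetical numbers, showing how quickly C-MAC counts grow when every VPLS PE must learn every VM's MAC:

```python
# Hypothetical sizing only: every C-MAC is visible to every VPLS PE,
# because VPLS learns customer MACs in the data plane.
servers_per_site = 500
vms_per_server = 100      # "hundreds of VMs" per server
macs_per_vm = 2           # each VM needs one or more MACs
sites = 4

c_macs = sites * servers_per_site * vms_per_server * macs_per_vm
print(f"C-MACs each PE must hold: {c_macs:,}")  # 400,000
```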

EVPN and PBB-EVPN are solutions developed to address these challenges. Both are new drafts in the IETF L2VPN working group. The solutions still rely on MPLS forwarding, but use BGP to distribute customer MAC address reachability information over the MPLS cloud. Unlike existing L2VPN solutions, where MAC addresses are always learned through the data plane, in EVPN MAC addresses are learned via the control plane (i.e., MAC routing). Control-plane learning brings the advantage of BGP-based policy control: customers can build any topology using Route Targets, and a full mesh of pseudowires, often a scalability concern in VPLS, is no longer required.
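
To make MAC routing concrete, here is a minimal Python sketch (illustrative values, not from any product) of the EVPN Route Type 2, MAC Advertisement, NLRI that carries a C-MAC in BGP, following the field layout in the EVPN draft:

```python
import struct

def mac_advertisement_nlri(rd: bytes, esi: bytes, eth_tag: int,
                           mac: str, label: int) -> bytes:
    """Build an EVPN Route Type 2 (MAC Advertisement) NLRI with no
    optional IP address, following the draft's field layout."""
    assert len(rd) == 8 and len(esi) == 10
    mac_bytes = bytes(int(b, 16) for b in mac.split(":"))
    body = (
        rd                                   # Route Distinguisher, 8 octets
        + esi                                # Ethernet Segment Identifier, 10 octets
        + struct.pack("!I", eth_tag)         # Ethernet Tag ID, 4 octets
        + bytes([48]) + mac_bytes            # MAC length in bits, then MAC, 6 octets
        + bytes([0])                         # IP length = 0: no IP advertised
        + struct.pack("!I", label << 4)[1:]  # MPLS Label1, 3 octets, label in top 20 bits
    )
    # Every EVPN NLRI starts with a 1-octet route type and a 1-octet length.
    return bytes([2, len(body)]) + body

# Illustrative values: type-0 RD (2-octet AS 64512, assigned number 100).
rd = struct.pack("!HHI", 0, 64512, 100)
esi = bytes(10)                              # all-zero ESI: single-homed site
nlri = mac_advertisement_nlri(rd, esi, 0, "00:1e:08:0a:00:01", 16001)
print(nlri.hex())
```

Because the MAC travels as a BGP route, Route Target import/export policy applies to it exactly as it would to an IP prefix.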

Another key feature of EVPN is its multi-homing capability. It supports active-active redundancy per service and per flow, leading to better load balancing across peering PEs. It also supports multi-homed device (MHD) and multi-homed network (MHN) topologies with two or more routers in the same redundancy group.
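
One mechanism behind this, the default designated forwarder (DF) election from the EVPN draft (service carving by VLAN modulo the number of PEs on the Ethernet Segment), is simple enough to sketch:

```python
import ipaddress

def designated_forwarder(pe_ips: list[str], vlan: int) -> str:
    """Default DF election: order the PEs sharing an Ethernet Segment
    by IP address, then serve VLAN v from ordinal (v mod N)."""
    ordered = sorted(pe_ips, key=lambda ip: int(ipaddress.ip_address(ip)))
    return ordered[vlan % len(ordered)]

# Two PEs in the same redundancy group: VLANs alternate between them,
# so both links carry traffic instead of one sitting idle.
group = ["192.0.2.2", "192.0.2.1"]
print(designated_forwarder(group, 100))  # 192.0.2.1
print(designated_forwarder(group, 101))  # 192.0.2.2
```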

PBB-EVPN addresses the MAC-address scalability issue by combining Provider Backbone Bridging (PBB) and EVPN functionality in a single device. Using PBB's MAC-in-MAC encapsulation, PBB-EVPN separates customer MAC addresses (C-MACs) from backbone MAC addresses (B-MACs). In contrast to EVPN, PBB-EVPN uses BGP to advertise only B-MAC reachability, while data-plane learning is still used to bind remote C-MACs to remote B-MACs. As a result, the number of MAC addresses in the provider backbone is reduced to the number of PEs, typically hundreds, as opposed to millions of customer MAC addresses.
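
The MAC-in-MAC encapsulation itself is straightforward to sketch. The following illustrative Python (field values are hypothetical) prepends an 802.1ah header, so the backbone forwards on B-MACs while the C-MACs ride opaquely inside, keyed by a 24-bit service instance ID (I-SID):

```python
import struct

def pbb_encapsulate(b_da: bytes, b_sa: bytes, b_vid: int,
                    i_sid: int, customer_frame: bytes) -> bytes:
    """Prepend an 802.1ah (MAC-in-MAC) header to a full customer
    Ethernet frame. Backbone switches see only the B-MACs."""
    b_tag = struct.pack("!HH", 0x88A8, b_vid & 0x0FFF)    # B-TAG: S-VLAN ethertype + B-VID
    i_tag = struct.pack("!HI", 0x88E7, i_sid & 0xFFFFFF)  # I-TAG: ethertype, flags 0, 24-bit I-SID
    return b_da + b_sa + b_tag + i_tag + customer_frame

# Illustrative addresses: the PE's B-MAC fronts for every C-MAC behind it.
b_sa = bytes.fromhex("00163e000001")   # local PE backbone MAC
b_da = bytes.fromhex("00163e000002")   # remote PE backbone MAC (advertised via BGP in PBB-EVPN)
frame = pbb_encapsulate(b_da, b_sa, b_vid=10, i_sid=0x1000, customer_frame=b"...")
```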

EVPN operation

PBB-EVPN operation

How can Ixia help validate EVPN/PBB-EVPN solutions?

EVPN and PBB-EVPN support was first introduced in IxNetwork 7.10 EA. Its realistic emulation makes it possible to validate various implementations, such as single-homed and multi-homed scenarios.

It supports the following key features:

  • Open configuration for EVPN AFI/SAFI for BGP capability advertisement, and new NLRI encoding
  • Open RD, ESI, and EVI/Tag configuration
    • Supports many EVPN instances, each with many ESIs and, in turn, many EVI/Tags
  • Support for NLRI route types 1-4:
    • 0x1 – Ethernet Auto-Discovery Route
    • 0x2 – MAC Advertisement Route
    • 0x3 – Inclusive Multicast Route
    • 0x4 – Ethernet Segment Route
  • Options to enable/disable the new extended community (customer input of TLV)
  • Emulation of a large number of PE and CE devices to test the performance and scalability of the device under test (DUT)
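
For reference, the four route types above share a common type/length wrapper on the wire, so a test script checking captures can walk them generically. A minimal sketch of that framing (not IxNetwork code, just the draft's NLRI layout):

```python
EVPN_ROUTE_TYPES = {
    1: "Ethernet Auto-Discovery Route",
    2: "MAC Advertisement Route",
    3: "Inclusive Multicast Route",
    4: "Ethernet Segment Route",
}

def walk_evpn_nlri(data: bytes):
    """Yield (route type name, value bytes) for each entry in a
    concatenated series of EVPN NLRIs; each entry is a 1-octet type,
    a 1-octet length, then that many octets of value."""
    offset = 0
    while offset + 2 <= len(data):
        rtype, length = data[offset], data[offset + 1]
        yield (EVPN_ROUTE_TYPES.get(rtype, f"unknown ({rtype})"),
               data[offset + 2 : offset + 2 + length])
        offset += 2 + length
```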

Additional Resources:

Ixia infrastructure testing

IxNetwork