Multicast Mode

Chapter Overview

Multicast mode uses IP multicast in the underlay network to efficiently distribute BUM traffic across VXLAN networks. This approach leverages existing multicast infrastructure for optimal bandwidth utilization.

Multicast Mode Overview

In multicast mode, each VNI is associated with a multicast group in the underlay network:

VNI 10100 → Multicast Group 239.1.1.1 → All VTEPs → Local Segments

Multicast Group Assignment

VNI-to-multicast-group mapping can follow several strategies (each is sketched in code after the table):

Strategy | Description | Example | Pros/Cons
One-to-One | Each VNI maps to a unique group | VNI 10100 → 239.1.1.1 | Simple, but consumes many groups
Many-to-One | Multiple VNIs share one group | VNI 10100-10199 → 239.1.1.1 | Efficient, but weaker isolation
Hash-Based | A hash of the VNI selects the group | hash(VNI) → group | Automatic, even distribution
Range-Based | VNI ranges map to group ranges | VNI 10000-19999 → 239.1.1.x | Hierarchical, scalable
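
The strategies reduce to simple arithmetic on the VNI. Below is a minimal Python sketch of each; the base group 239.1.1.0, the VNI ranges, and the pool size are illustrative assumptions, not values from any particular deployment.

import ipaddress

BASE_GROUP = ipaddress.IPv4Address("239.1.1.0")   # illustrative start of the group range

def one_to_one(vni, first_vni=10100):
    # Each VNI gets its own group: VNI 10100 -> 239.1.1.1, 10101 -> 239.1.1.2, ...
    return str(BASE_GROUP + (vni - first_vni) + 1)

def many_to_one(vni, first_vni=10100, vnis_per_group=100):
    # Blocks of 100 VNIs share one group: VNI 10100-10199 -> 239.1.1.1
    return str(BASE_GROUP + (vni - first_vni) // vnis_per_group + 1)

def hash_based(vni, pool_size=256):
    # A hash of the VNI picks a group from a fixed pool (automatic, even spread)
    return str(BASE_GROUP + hash(vni) % pool_size)

def range_based(vni):
    # Each block of 1000 VNIs maps to a different last octet: 10000-10999 -> 239.1.1.0, ...
    return str(BASE_GROUP + (vni - 10000) // 1000)

for vni in (10100, 10150, 12345):
    print(vni, one_to_one(vni), many_to_one(vni), hash_based(vni), range_based(vni))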

Multicast Group Configuration

# One-to-One Mapping
interface nve1
  member vni 10100
    mcast-group 239.1.1.1
  member vni 10200
    mcast-group 239.1.1.2
  member vni 10300
    mcast-group 239.1.1.3

# Range-based / many-to-one mapping: a block of VNIs shares one group
# (exact range syntax is platform-dependent; NX-OS style shown)
interface nve1
  member vni 10100-10199
    mcast-group 239.1.1.1

PIM Configuration Requirements

Protocol Independent Multicast (PIM) is required for multicast routing in the underlay network. Key components include:

PIM Sparse Mode

Most common mode for VXLAN deployments:

  • Uses shared trees
  • Requires Rendezvous Point (RP)
  • Efficient for sparse groups
  • Supports source-specific trees

Anycast RP

Provides RP redundancy:

  • Multiple RPs with same address
  • MSDP for RP coordination
  • Fast convergence
  • Load distribution

Basic PIM Configuration

# Enable multicast routing
ip multicast-routing

# Configure PIM on interfaces
interface ethernet1/1
  ip pim sparse-mode

# Configure RP
ip pim rp-address 192.168.1.100 group-list 239.0.0.0/8

# IGMP runs automatically on PIM-enabled interfaces; set the version explicitly if needed
interface ethernet1/1
  ip igmp version 3

BUM Traffic Flow

When BUM traffic needs to be distributed across the VXLAN network, it takes the following path (a minimal encapsulation sketch follows the flow):

Host sends BUM → Ingress VTEP (single VXLAN encapsulation) → Multicast Group → All Member VTEPs → Local Hosts
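
The essential property of multicast mode is that the ingress VTEP encapsulates a BUM frame exactly once, with the VNI's group as the outer destination; the underlay multicast tree does the fan-out. A minimal sketch under those assumptions (the dict-based VNI table and packet structure are illustrative, not a real VTEP API):

from dataclasses import dataclass

VNI_TO_GROUP = {10100: "239.1.1.1", 10200: "239.1.1.2"}   # illustrative mapping table

@dataclass
class VxlanPacket:
    outer_src: str     # ingress VTEP IP
    outer_dst: str     # multicast group for BUM, remote VTEP IP for known unicast
    vni: int
    inner_frame: bytes

def forward_bum_multicast(vtep_ip, vni, frame):
    # One encapsulated copy, addressed to the VNI's group; PIM delivers it to all member VTEPs
    return VxlanPacket(outer_src=vtep_ip, outer_dst=VNI_TO_GROUP[vni],
                       vni=vni, inner_frame=frame)

pkt = forward_bum_multicast("192.168.1.1", 10100, b"\xff\xff\xff\xff\xff\xff payload")
print(pkt.outer_dst)   # 239.1.1.1 -- a single copy leaves the ingress VTEP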

Multicast Benefits

  • Bandwidth Efficiency: Single copy per network segment
  • Optimal Replication: Network handles distribution
  • Hardware Acceleration: ASIC-based multicast support
  • Proven Technology: Mature multicast protocols

Multicast Challenges

  • Complex Configuration: PIM, RP, and IGMP setup
  • Troubleshooting: Multicast-specific tools required
  • State Management: Multicast forwarding tables
  • Underlay Dependency: All devices must support multicast

Unicast Mode (Ingress Replication)

Section Overview

Unicast mode, also known as ingress replication, eliminates the need for multicast in the underlay by having the ingress VTEP replicate BUM traffic to all remote VTEPs using unicast.

Ingress Replication Process

When a BUM frame is received, the ingress VTEP produces one unicast VXLAN copy per remote VTEP (sketched in code after the flow):

BUM Traffic → Ingress VTEP → Replicated N times → N Remote VTEPs
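
In contrast to multicast mode, the fan-out happens on the ingress VTEP itself. A minimal, self-contained sketch (the peer IPs are illustrative; in practice the list comes from static configuration, BGP EVPN, or a controller, as described next):

def replicate_bum(vtep_ip, vni, remote_vteps):
    # Ingress replication: one unicast VXLAN copy (outer_src, outer_dst, vni) per remote VTEP
    return [(vtep_ip, peer, vni) for peer in remote_vteps]

peers = ["192.168.1.2", "192.168.1.3", "192.168.1.4"]   # illustrative peer list
for copy in replicate_bum("192.168.1.1", 10100, peers):
    print(copy)   # three copies leave the ingress VTEP, one per remote peer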

Peer Discovery Methods

VTEPs must know about remote peers for replication:

Method | Description | Configuration | Scalability
Static Configuration | Manually configured peer list | peer-ip x.x.x.x | Poor
BGP EVPN | BGP distributes peer information | bgp evpn | Excellent
Controller-based | SDN controller pushes the peer list | Via API/OVSDB | Good
Flood and Learn | Peers learned from received traffic | Auto-discovery | Limited

Ingress Replication Configuration

# Static peer configuration
interface nve1
  member vni 10100
    ingress-replication protocol static
      peer-ip 192.168.1.2
      peer-ip 192.168.1.3
      peer-ip 192.168.1.4

# BGP EVPN peer discovery
interface nve1
  member vni 10100
    ingress-replication protocol bgp

Bandwidth Implications

Ingress replication creates multiple copies of BUM traffic:

Bandwidth Usage

For N VTEPs in a VNI:

  • Ingress link: (N-1) × original traffic
  • Core links: Multiple copies
  • Scales poorly with VTEP count
  • Asymmetric bandwidth usage

Example Calculation

100 Mbps of BUM traffic, 10 VTEPs (worked through in the sketch after this list):

  • Original: 100 Mbps
  • Replicated: 900 Mbps
  • 9x bandwidth increase
  • Linear growth with VTEPs
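
The arithmetic is simply (N − 1) × rate for N VTEPs in the VNI; the short sketch below makes the linear growth explicit (the VTEP counts are illustrative).

def ingress_link_load(bum_mbps, total_vteps):
    # Traffic leaving the ingress VTEP's uplink: one copy per remote VTEP
    return bum_mbps * (total_vteps - 1)

for n in (2, 10, 50, 100):
    print(f"{n:>3} VTEPs: {ingress_link_load(100, n):>6.0f} Mbps on the ingress link")
# 10 VTEPs -> 900 Mbps, i.e. the 9x increase from the example above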

Unicast Mode Benefits

  • No Multicast Dependency: Pure unicast underlay
  • Simplified Network: Standard IP routing
  • Easy Troubleshooting: Standard unicast tools
  • Cloud-Friendly: Works over internet/WAN

BGP EVPN

Section Overview

BGP EVPN (Ethernet VPN) is the most advanced control plane for VXLAN, providing MAC/IP learning distribution, optimal forwarding, and advanced features like multihoming and mobility.

EVPN Route Types

BGP EVPN uses different route types for various purposes:

Type | Name | Purpose | VXLAN Usage
1 | Ethernet A-D | Ethernet auto-discovery | Multihoming
2 | MAC/IP Advertisement | Host reachability | MAC/IP learning
3 | Inclusive Multicast | Multicast tree building | BUM traffic distribution
4 | Ethernet Segment | ES route | Multihoming
5 | IP Prefix | IP route advertisement | Inter-subnet routing
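
For automation or tooling, the route types in the table can be captured as a small enumeration; a sketch (the class name is an arbitrary choice):

from enum import IntEnum

class EvpnRouteType(IntEnum):
    # BGP EVPN route types most relevant to VXLAN (RFC 7432 and related standards)
    ETHERNET_AUTO_DISCOVERY = 1   # multihoming
    MAC_IP_ADVERTISEMENT = 2      # host MAC/IP reachability
    INCLUSIVE_MULTICAST = 3       # BUM distribution
    ETHERNET_SEGMENT = 4          # multihoming (ES route)
    IP_PREFIX = 5                 # inter-subnet routing

print(EvpnRouteType(2).name)   # MAC_IP_ADVERTISEMENT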

Type-2 Route Structure

This is the most important route type for VXLAN MAC/IP learning (a structural sketch follows the example):

Type-2 Route Example

Route Type: 2 (MAC/IP Advertisement)
Route Distinguisher: 192.168.1.1:100
Ethernet Segment Identifier: 00:00:00:00:00:00:00:00:00:00
Ethernet Tag: 0
MAC Address Length: 48
MAC Address: 00:11:22:33:44:55
IP Address Length: 32
IP Address: 192.168.100.10
MPLS Label (carries the L2 VNI): 10100
Next Hop: 192.168.1.1
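
The same fields can be modeled as a plain data structure when generating or parsing these routes in automation. A sketch populated with the values from the example above (the field names are descriptive choices, not taken from any particular BGP library):

from dataclasses import dataclass
from typing import Optional

@dataclass
class EvpnType2Route:
    # EVPN Route Type 2 (MAC/IP Advertisement) as used for VXLAN
    route_distinguisher: str      # e.g. router-ID:EVI
    esi: str                      # all zeros for single-homed hosts
    ethernet_tag: int
    mac_address: str
    ip_address: Optional[str]     # the IP part is optional; MAC-only routes are valid
    l2_vni: int                   # carried in the MPLS Label field for VXLAN
    next_hop: str                 # the advertising VTEP

route = EvpnType2Route("192.168.1.1:100", "00:00:00:00:00:00:00:00:00:00", 0,
                       "00:11:22:33:44:55", "192.168.100.10", 10100, "192.168.1.1")
print(f"{route.mac_address} -> VTEP {route.next_hop} (VNI {route.l2_vni})")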

EVPN-VXLAN Benefits

Optimal Forwarding
  • Suppress unknown unicast flooding
  • Direct MAC-to-VTEP mapping
  • Proactive MAC learning
  • Optimal unicast paths

Advanced Features
  • Host mobility detection
  • Multihoming support
  • ARP/ND suppression
  • Inter-subnet routing

BGP EVPN Configuration

# Enable EVPN
nv overlay evpn

# BGP EVPN configuration (route-reflector-client is used on the spine/route reflector)
router bgp 65001
  neighbor 192.168.1.100
    remote-as 65001
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
      route-reflector-client

# Associate VNI with EVPN
evpn
  vni 10100 l2
    rd auto
    route-target import auto
    route-target export auto
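
The auto keywords derive the RD and route-targets from local values instead of per-VNI manual configuration; on NX-OS, for example, route-target auto for an L2 VNI resolves to ASN:VNI. A one-line sketch of that convention (ignoring 4-byte-AS encoding details):

def auto_route_target(asn, vni):
    # Auto-derived EVPN route-target for an L2 VNI, ASN:VNI convention
    return f"{asn}:{vni}"

print(auto_route_target(65001, 10100))   # 65001:10100 for the configuration above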

EVPN Advantages

  • Industry Standard: RFC 7432 and related standards
  • Vendor Neutral: Supported by major vendors
  • Feature Rich: Advanced L2/L3 services
  • Scalable: BGP-based distribution

Controller-Based Approaches

Section Overview

Controller-based VXLAN uses Software-Defined Networking (SDN) principles where a centralized controller manages the VXLAN fabric configuration and operations.

Controller Architecture

SDN Controller → Control Plane APIs → VTEP Configuration → Data Plane

Common Controller Protocols

Protocol | Description | Use Case | Examples
OVSDB | Open vSwitch Database protocol | Software VTEP management | OpenStack, OpenDaylight
OpenFlow | Flow-table programming | Forwarding control | ODL, ONOS
NETCONF | Network configuration protocol | Device configuration | Cisco NSO, Juniper
REST API | RESTful web services | Application integration | Vendor-specific
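
As a concrete illustration of the REST API row, the sketch below pushes a VNI definition to a controller. The endpoint URL, payload schema, and token are hypothetical; every real controller (OpenDaylight, NSX, Cisco NDFC, and so on) defines its own API.

import json
import urllib.request

CONTROLLER_URL = "https://controller.example.com/api/v1/vnis"   # hypothetical endpoint

def provision_vni(vni, vteps, token):
    # Push a VNI-to-VTEP binding to the controller as a JSON POST
    payload = json.dumps({"vni": vni, "vteps": vteps, "replication": "ingress"}).encode()
    request = urllib.request.Request(
        CONTROLLER_URL, data=payload, method="POST",
        headers={"Content-Type": "application/json", "Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(request) as response:
        return response.status

# provision_vni(10100, ["192.168.1.1", "192.168.1.2"], token="<api-token>")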

Controller Functions

Configuration
  • VNI provisioning
  • VTEP configuration
  • Policy enforcement
  • Service chaining

Monitoring
  • Topology discovery
  • Performance monitoring
  • Fault detection
  • Health checks

Intelligence
  • Path optimization
  • Load balancing
  • Auto-remediation
  • Analytics

Controller Benefits

  • Centralized Management: Single point of control
  • Automation: Programmatic configuration
  • Consistency: Uniform policies across fabric
  • Visibility: Global network view

Controller Challenges

  • Single Point of Failure: Controller availability
  • Scalability: Controller performance limits
  • Vendor Lock-in: Proprietary solutions
  • Complexity: Additional layer of abstraction

Control Plane Comparison

Section Overview

Each VXLAN control plane approach has distinct advantages and trade-offs. This comparison helps you choose the right approach for your environment.

Feature Comparison Matrix

Feature | Flood & Learn | Multicast | Ingress Replication | BGP EVPN | Controller
Complexity | Low | Medium | Low | High | High
Scalability | Poor | Excellent | Fair | Excellent | Good
Convergence | Slow | Medium | Medium | Fast | Fast
BUM Efficiency | Poor | Excellent | Poor | Good | Good
Underlay Req. | Basic IP | Multicast | Basic IP | BGP | API Support
Mobility | Limited | Limited | Limited | Excellent | Good

Deployment Scenarios

Enterprise Data Center

Recommended: BGP EVPN

  • Advanced features needed
  • VM mobility requirements
  • Skilled network team
  • Multi-vendor environment

Service Provider

Recommended: BGP EVPN + Multicast

  • Massive scale requirements
  • Multi-tenancy
  • SLA requirements
  • Advanced services

Greenfield Deployment

Recommended: BGP EVPN

  • Latest technology
  • Future-proof design
  • Optimal performance
  • Standard compliance

Legacy Migration

Recommended: Ingress Replication

  • No multicast support
  • Simple underlay
  • Phased migration
  • Limited resources

Selection Criteria

Key Decision Factors

  • Scale Requirements: Number of VTEPs, VNIs, and hosts
  • Performance Needs: Latency, bandwidth, convergence time
  • Feature Requirements: Mobility, multihoming, L3 services
  • Operational Model: Manual vs automated management
  • Existing Infrastructure: Underlay capabilities and constraints
  • Team Expertise: Available skills and training
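
These factors can be turned into a rough screening aid. The sketch below encodes the comparison matrix qualitatively and filters options against hard requirements; the ratings are the table above translated literally, while the scoring and weighting approach is purely illustrative.

RATING = {"Poor": 0, "Limited": 0, "Slow": 0, "Fair": 1, "Medium": 1,
          "Good": 2, "Fast": 2, "Excellent": 3}

MATRIX = {
    "Flood & Learn":       {"Scalability": "Poor", "Convergence": "Slow",
                            "BUM Efficiency": "Poor", "Mobility": "Limited"},
    "Multicast":           {"Scalability": "Excellent", "Convergence": "Medium",
                            "BUM Efficiency": "Excellent", "Mobility": "Limited"},
    "Ingress Replication": {"Scalability": "Fair", "Convergence": "Medium",
                            "BUM Efficiency": "Poor", "Mobility": "Limited"},
    "BGP EVPN":            {"Scalability": "Excellent", "Convergence": "Fast",
                            "BUM Efficiency": "Good", "Mobility": "Excellent"},
    "Controller":          {"Scalability": "Good", "Convergence": "Fast",
                            "BUM Efficiency": "Good", "Mobility": "Good"},
}

def rank(minimums):
    # Drop any option that misses a hard requirement, then sort the rest by total rating
    results = []
    for name, features in MATRIX.items():
        ratings = {k: RATING[v] for k, v in features.items()}
        if all(ratings[k] >= floor for k, floor in minimums.items()):
            results.append((name, sum(ratings.values())))
    return sorted(results, key=lambda item: item[1], reverse=True)

print(rank({"Mobility": 2}))   # VM mobility required: BGP EVPN first, Controller second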

Migration Paths

Evolution path between control plane approaches:

Flood & Learn → Ingress Replication → BGP EVPN

Migration Considerations

  • Gradual Migration: Phase-by-phase approach
  • Coexistence: Multiple modes during transition
  • Testing: Validate each phase thoroughly
  • Rollback Plan: Ability to revert if needed