Data Center Fabrics
ExaBGP enables EVPN-based data center fabrics for VXLAN control plane, multi-tenancy, and leaf-spine architectures.
- Overview
- Architecture
- Configuration Examples
- Multi-Tenancy
- Integration Patterns
- Troubleshooting
- See Also
Modern data centers require:
- Scalability: Support thousands of VMs/containers
- Multi-tenancy: Isolated networks for different customers/applications
- Mobility: VM/container migration across hosts
- Layer 2 extension: Stretch VLANs across the fabric
- Layer 3 routing: Efficient inter-subnet routing
- Automation: Dynamic network provisioning
EVPN (Ethernet VPN, RFC 7432) provides:
- MAC/IP advertisement: Distribute endpoint information via BGP
- Multi-tenancy: VNI (VXLAN Network Identifier) isolation
- ARP suppression: Reduce broadcast traffic
- Integrated routing and bridging (IRB): L2/L3 integration
- Multi-homing: Active-active host connectivity
ExaBGP acts as an EVPN control plane:
- Route advertisement: Advertise MAC/IP bindings via BGP EVPN
- Route learning: Receive EVPN routes from fabric switches
- Integration: Connect SDN controllers, orchestrators, or monitoring systems
- Automation: Dynamic endpoint provisioning via API
Important: ExaBGP does NOT manipulate forwarding tables. It exchanges EVPN routes via BGP. Your application must program VXLAN tunnels and forwarding rules (e.g., via Linux kernel, OVS, hardware ASICs).
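For example, on a Linux host an application that receives an EVPN Type 2 route could install the MAC-to-VTEP binding with iproute2. A minimal sketch (the device name `vxlan10` and both helper functions are illustrative, not part of ExaBGP):

```python
#!/usr/bin/env python3
"""Sketch: program a VXLAN forwarding entry for a learned EVPN Type 2 route."""
import subprocess

def fdb_add_command(mac, remote_vtep, device="vxlan10"):
    """Build the iproute2 command that maps a MAC to a remote VTEP."""
    # Equivalent to: bridge fdb add <mac> dev vxlan10 dst <vtep>
    return ["bridge", "fdb", "add", mac, "dev", device, "dst", remote_vtep]

def program_endpoint(mac, remote_vtep, device="vxlan10"):
    """Install a MAC/VTEP binding learned from an EVPN route (requires root)."""
    subprocess.run(fdb_add_command(mac, remote_vtep, device), check=True)
```

The same pattern applies to OVS or an ASIC SDK: ExaBGP hands your process the route, and your process translates it into forwarding state.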
Standard data center fabric architecture:
[Spine 1] [Spine 2]
| \/ |
| /\ |
[Leaf 1] [Leaf 2] [Leaf 3]
| | |
[VMs] [VMs] [VMs]
BGP Configuration:
- Leaf-Spine: eBGP (different AS per leaf) or iBGP (same AS)
- EVPN address family: l2vpn evpn
- Underlay: IPv4/IPv6 unicast for VTEP reachability
- Overlay: EVPN for MAC/IP distribution
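Underlay and overlay can share one BGP session; a sketch of a neighbor carrying both families (addresses and AS numbers illustrative, same syntax as the full examples below):

```
neighbor 10.0.0.1 {
    local-address 10.0.1.1;
    local-as 65001;
    peer-as 65000;
    family {
        ipv4 unicast;    # underlay: VTEP reachability
        l2vpn evpn;      # overlay: MAC/IP distribution
    }
}
```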
ExaBGP can integrate in multiple ways:
[Leaf Switch] <--EVPN--> [Compute Host with ExaBGP]
|
[Containers/VMs]
Use case: Advertise container/VM endpoints directly from hosts
[Controller with ExaBGP] <--EVPN--> [Leaf Switches]
|
[Orchestrator]
Use case: Centralized EVPN route injection for automation
[Monitor with ExaBGP] <--EVPN--> [Fabric]
|
[Analytics]
Use case: Collect EVPN routes for visibility and analytics
Configuration (/etc/exabgp/evpn-leaf.conf):
process evpn-controller {
run python3 /etc/exabgp/evpn-announce.py;
encoder json;
}
neighbor 10.0.0.1 {
router-id 10.0.1.1;
local-address 10.0.1.1;
local-as 65001;
peer-as 65000;
family {
l2vpn evpn;
}
api {
processes [ evpn-controller ];
}
}
neighbor 10.0.0.2 {
router-id 10.0.1.1;
local-address 10.0.1.1;
local-as 65001;
peer-as 65000;
family {
l2vpn evpn;
}
api {
processes [ evpn-controller ];
}
}

For larger fabrics, use route reflectors:
neighbor 10.0.0.100 {
router-id 10.0.1.1;
local-address 10.0.1.1;
local-as 65000;
peer-as 65000;
family {
l2vpn evpn;
}
    # Route reflection is configured on the RR itself;
    # this client simply peers iBGP (same AS) with it
    capability {
        route-refresh;
    }
api {
processes [ evpn-controller ];
}
}

Advertise endpoint with VNI for tenant isolation:
API Program (/etc/exabgp/evpn-announce.py):
#!/usr/bin/env python3
import sys
def announce_mac_ip(mac, ip, vni, rd, rt):
"""
Announce EVPN Type 2 route (MAC/IP Advertisement)
mac: MAC address (e.g., "00:11:22:33:44:55")
ip: IP address (e.g., "192.168.1.10")
vni: VXLAN Network Identifier (e.g., 10000)
rd: Route Distinguisher (e.g., "10.0.1.1:1")
rt: Route Target (e.g., "65000:10000")
"""
print(f"announce route-distinguisher {rd} "
f"mac {mac} ip {ip} "
f"label {vni} "
f"route-target {rt}", flush=True)
# Example: Announce VM endpoint
announce_mac_ip(
mac="00:11:22:33:44:55",
ip="192.168.1.10",
vni=10000,
rd="10.0.1.1:1",
rt="65000:10000"
)
# Keep running; exit when ExaBGP closes stdin
while True:
    line = sys.stdin.readline()
    if not line:  # EOF: ExaBGP is shutting down
        break

Advertise VTEP for BUM (Broadcast, Unknown Unicast, Multicast) traffic:
#!/usr/bin/env python3
import sys
def announce_imet(vtep_ip, vni, rd, rt):
"""
Announce EVPN Type 3 route (Inclusive Multicast Ethernet Tag)
vtep_ip: VTEP IP address (e.g., "10.0.1.1")
vni: VXLAN Network Identifier
rd: Route Distinguisher
rt: Route Target
"""
# EVPN Type 3 via text API
print(f"announce route-distinguisher {rd} "
f"multicast {vtep_ip} "
f"label {vni} "
f"route-target {rt}", flush=True)
# Example: Announce VTEP for VNI 10000
announce_imet(
vtep_ip="10.0.1.1",
vni=10000,
rd="10.0.1.1:1",
rt="65000:10000"
)
while True:
    line = sys.stdin.readline()
    if not line:  # EOF
        break

Isolate tenants using different VNIs and Route Targets:
#!/usr/bin/env python3
import sys
# Tenant configuration
TENANTS = {
'tenant-a': {
'vni': 10000,
'rt': '65000:10000',
'subnets': ['192.168.10.0/24']
},
'tenant-b': {
'vni': 20000,
'rt': '65000:20000',
'subnets': ['192.168.20.0/24']
},
'tenant-c': {
'vni': 30000,
'rt': '65000:30000',
'subnets': ['192.168.30.0/24']
}
}
VTEP_IP = "10.0.1.1"
RD_BASE = "10.0.1.1"
def announce_tenant_vtep(tenant_id):
"""Announce VTEP for tenant's VNI"""
config = TENANTS.get(tenant_id)
if not config:
return
vni = config['vni']
rt = config['rt']
rd = f"{RD_BASE}:{vni}"
print(f"announce route-distinguisher {rd} "
f"multicast {VTEP_IP} "
f"label {vni} "
f"route-target {rt}", flush=True)
# Announce all tenants
for tenant_id in TENANTS:
announce_tenant_vtep(tenant_id)
while True:
    line = sys.stdin.readline()
    if not line:  # EOF
        break

Integrate with Docker/Kubernetes networking:
#!/usr/bin/env python3
import sys
import json
import subprocess
VTEP_IP = "10.0.1.1"
VNI = 10000
RD = "10.0.1.1:1"
RT = "65000:10000"
def get_container_endpoints():
"""Get container network endpoints from Docker"""
try:
result = subprocess.run(
['docker', 'network', 'inspect', 'bridge'],
capture_output=True, text=True
)
network_info = json.loads(result.stdout)
containers = network_info[0].get('Containers', {})
endpoints = []
for container_id, info in containers.items():
endpoints.append({
'mac': info.get('MacAddress'),
'ip': info.get('IPv4Address', '').split('/')[0]
})
return endpoints
    except (OSError, json.JSONDecodeError, IndexError):
        # Docker unavailable or unexpected output
        return []
def announce_endpoint(mac, ip):
"""Announce container endpoint via EVPN"""
print(f"announce route-distinguisher {RD} "
f"mac {mac} ip {ip} "
f"label {VNI} "
f"route-target {RT}", flush=True)
# Announce all container endpoints
endpoints = get_container_endpoints()
for ep in endpoints:
if ep['mac'] and ep['ip']:
announce_endpoint(ep['mac'], ep['ip'])
while True:
    line = sys.stdin.readline()
    if not line:  # EOF
        break

Integrate with OpenStack Neutron:
#!/usr/bin/env python3
import sys
import time
from neutronclient.v2_0 import client as neutron_client
# OpenStack credentials
OS_AUTH_URL = "http://controller:5000/v3"
OS_USERNAME = "admin"
OS_PASSWORD = "secret"
OS_PROJECT_NAME = "admin"
VTEP_IP = "10.0.1.1"
def get_neutron_client():
"""Create Neutron client"""
return neutron_client.Client(
username=OS_USERNAME,
password=OS_PASSWORD,
project_name=OS_PROJECT_NAME,
auth_url=OS_AUTH_URL
)
def announce_neutron_ports():
"""Announce all Neutron ports via EVPN"""
neutron = get_neutron_client()
ports = neutron.list_ports()['ports']
for port in ports:
mac = port['mac_address']
fixed_ips = port.get('fixed_ips', [])
for ip_info in fixed_ips:
ip = ip_info.get('ip_address')
network_id = port['network_id']
# Derive VNI from network ID (simplified)
vni = int(network_id[:8], 16) % 16777215
rd = f"{VTEP_IP}:{vni}"
rt = f"65000:{vni}"
print(f"announce route-distinguisher {rd} "
f"mac {mac} ip {ip} "
f"label {vni} "
f"route-target {rt}", flush=True)
# Initial announcement
announce_neutron_ports()
# Periodic refresh
while True:
    time.sleep(300)  # Re-announce every 5 minutes
    announce_neutron_ports()

Handle VM migration by updating EVPN routes:
#!/usr/bin/env python3
import sys
import json
def handle_vm_migration(mac, old_ip, new_ip, vni):
"""Handle VM migration between hosts"""
rd = f"10.0.1.1:{vni}"
rt = f"65000:{vni}"
# Withdraw old endpoint
if old_ip:
print(f"withdraw route-distinguisher {rd} "
f"mac {mac} ip {old_ip}", flush=True)
# Announce new endpoint
print(f"announce route-distinguisher {rd} "
f"mac {mac} ip {new_ip} "
f"label {vni} "
f"route-target {rt}", flush=True)
# Listen for migration events (from orchestrator, message queue, etc.)
while True:
    line = sys.stdin.readline()
    if not line:  # EOF
        break
    line = line.strip()
    if not line:  # skip blank lines
        continue
try:
event = json.loads(line)
if event.get('type') == 'vm-migrate':
handle_vm_migration(
mac=event['mac'],
old_ip=event.get('old_ip'),
new_ip=event['new_ip'],
vni=event['vni']
)
    except (json.JSONDecodeError, KeyError):
        pass  # Ignore malformed events

Problem: Leaf switches don't receive EVPN routes from ExaBGP
Debugging:
# Check BGP session
exabgpcli show neighbor summary
# Check EVPN capability negotiation
exabgpcli show neighbor 10.0.0.1 capabilities
# Verify routes announced
exabgpcli show neighbor 10.0.0.1 advertised-routes

Solution:
- Ensure family { l2vpn evpn; } is configured
- Verify the peer supports EVPN
- Check route-target configuration
Problem: Duplicate MAC addresses or missing entries
Debugging:
# Check EVPN Type 2 routes
show bgp l2vpn evpn route-type 2
# Verify MAC/IP bindings
bridge fdb show

Solution:
- Ensure unique Route Distinguishers per VTEP
- Verify MAC addresses are correct
- Check for MAC flapping (VM migration loops)
Problem: No connectivity between endpoints
Debugging:
# Check VXLAN interfaces
ip -d link show type vxlan
# Verify VTEP reachability
ping 10.0.1.1
# Check VXLAN forwarding
bridge fdb show dev vxlan10

Solution:
- Verify underlay routing (IP reachability between VTEPs)
- Ensure VXLAN interface created with correct VNI
- Check firewall rules (UDP 4789 for VXLAN)
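These checks can be scripted. A sketch that parses `ip -d link` output to confirm a VXLAN device carries the expected VNI (the field names follow typical iproute2 output; the helper names are illustrative):

```python
#!/usr/bin/env python3
"""Sketch: verify a Linux VXLAN device's VNI, local VTEP, and UDP port."""
import re
import subprocess

def parse_vxlan_details(ip_link_output):
    """Extract VNI, local VTEP IP, and dstport from 'ip -d link' output."""
    details = {}
    m = re.search(r"vxlan id (\d+)", ip_link_output)
    if m:
        details["vni"] = int(m.group(1))
    m = re.search(r"local (\S+)", ip_link_output)
    if m:
        details["local"] = m.group(1)
    m = re.search(r"dstport (\d+)", ip_link_output)
    if m:
        details["dstport"] = int(m.group(1))
    return details

def check_vxlan(device, expected_vni):
    """Return True if the device exists and carries the expected VNI."""
    out = subprocess.run(["ip", "-d", "link", "show", device],
                         capture_output=True, text=True).stdout
    return parse_vxlan_details(out).get("vni") == expected_vni
```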
Monitor EVPN fabric health:
#!/usr/bin/env python3
from prometheus_client import start_http_server, Gauge
import json
import sys
# Metrics
evpn_routes = Gauge('evpn_routes_total', 'Total EVPN routes')
evpn_macs = Gauge('evpn_mac_addresses', 'Total MAC addresses')
evpn_vteps = Gauge('evpn_vteps_total', 'Total VTEPs')
start_http_server(9100)
# Parse ExaBGP updates
while True:
    line = sys.stdin.readline()
    if not line:  # EOF
        break
    try:
        data = json.loads(line)
        # Update metrics based on BGP updates
        # ... parsing logic ...
    except json.JSONDecodeError:
        pass  # Skip non-JSON lines

For large deployments:
- Route Reflectors: Use RR to reduce iBGP mesh
- Route Target Filtering: Use RT-Constrain to reduce routes
- Batching: Batch route announcements to reduce BGP churn
- Caching: Cache EVPN state locally to avoid re-computation
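Batching can be as simple as buffering commands and writing them to ExaBGP in bursts; a sketch (the class, batch size, and flush interval are all illustrative):

```python
#!/usr/bin/env python3
"""Sketch: batch route announcements to reduce per-route write overhead."""
import sys
import time

class RouteBatcher:
    """Buffer announce/withdraw commands and flush them in bursts."""

    def __init__(self, max_batch=100, max_delay=1.0):
        self.max_batch = max_batch      # flush after this many commands
        self.max_delay = max_delay      # or after this many seconds
        self.pending = []
        self.last_flush = time.monotonic()

    def add(self, command):
        self.pending.append(command)
        if (len(self.pending) >= self.max_batch
                or time.monotonic() - self.last_flush >= self.max_delay):
            self.flush()

    def flush(self):
        if self.pending:
            # One write and one flush per batch, not per route
            sys.stdout.write("\n".join(self.pending) + "\n")
            sys.stdout.flush()
            self.pending.clear()
        self.last_flush = time.monotonic()
```

Call `flush()` once more before exiting so a partially filled batch is not lost.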
- EVPN Overview - Detailed EVPN documentation
- Address Families Overview
- Data Center Interconnect - DCI with EVPN
- Multi-Tenant - Multi-tenancy patterns
- SDN Integration - SDN controller integration