N9 Loopback: Running SGWU and PGWU on Same Instance

Overview

OmniUPF supports running both SGWU (Serving Gateway User Plane) and PGWU (PDN Gateway User Plane) functions on the same instance with zero-latency N9 loopback. This deployment mode is ideal for:

  • Simplified 4G EPC deployments - Single UPF instance instead of two
  • Cost optimization - Reduced infrastructure and operational complexity
  • Edge computing - Minimize latency for local breakout scenarios
  • Lab/testing environments - Full EPC user plane on single server

When configured with the same IP address for both N3 and N9 interfaces, OmniUPF automatically detects traffic flowing between the SGWU and PGWU roles and processes it entirely in eBPF without ever sending packets to the network interface.


How It Works

Traditional Deployment (Two Instances)

Packet Flow:

  1. eNodeB → SGWU: GTP packet (TEID=100) arrives on S1-U
  2. SGWU: Matches uplink PDR, encapsulates in new GTP tunnel (TEID=200)
  3. Packet sent over physical N9 network to PGWU instance
  4. PGWU: Receives GTP (TEID=200), decapsulates, forwards to Internet
  5. Total: 2 XDP passes + 1 network hop

N9 Loopback Deployment (Single Instance)

Packet Flow with N9 Loopback:

  1. eNodeB → SGWU role: GTP packet (TEID=100) arrives on S1-U
  2. SGWU role: Matches uplink PDR
  3. Loopback detection: Destination IP = local IP (10.0.1.10)
  4. In-place processing: Update GTP TEID to 200 (PGWU session)
  5. PGWU role: Decapsulates, forwards to Internet
  6. Total: 1 XDP pass, zero network hops

Performance benefit: Sub-microsecond internal forwarding vs milliseconds for network round-trip


Packet Processing Details

Uplink (eNodeB → Internet)

eBPF Code Path: cmd/ebpf/xdp/n3n6_entrypoint.c lines 349-403

Key Steps:

  1. Receive: GTP packet from eNodeB with TEID=100
  2. PDR Match: Lookup uplink PDR for SGWU session (TEID=100)
  3. FAR Action: Encapsulate in GTP with TEID=200, forward to 10.0.1.10
  4. Loopback Check: is_local_ip(10.0.1.10) returns TRUE
  5. Update TEID: Change ctx->gtp->teid from 100 to 200 (in kernel memory)
  6. Re-Process: Lookup PDR for TEID=200 (PGWU session)
  7. FAR Action: Remove GTP header, forward to Internet
  8. Route: Send plain IP packet to N6 interface

Downlink (Internet → eNodeB)

eBPF Code Path: cmd/ebpf/xdp/n3n6_entrypoint.c lines 137-194 (IPv4), 265-322 (IPv6)

Key Steps:

  1. Receive: Plain IP packet from Internet destined to UE (10.60.0.1)
  2. PDR Match: Lookup downlink PDR by UE IP (PGWU session)
  3. FAR Action: Encapsulate in GTP with TEID=200, forward to 10.0.1.10
  4. Loopback Check: is_local_ip(10.0.1.10) returns TRUE
  5. Add GTP: Encapsulate packet with TEID=200
  6. Re-Process: Lookup PDR for TEID=200 (SGWU session)
  7. FAR Action: Update GTP tunnel to eNodeB TEID=100
  8. Route: Send GTP packet to S1-U interface (eNodeB)

Configuration

Requirements

Control Plane:

  • SGWU-C: Must connect to OmniUPF PFCP interface (e.g., 192.168.1.10:8805)
  • PGWU-C: Must connect to same OmniUPF PFCP interface

Network:

  • Single IP address for both N3 and N9 interfaces
  • Different IP addresses for SGWU-C and PGWU-C (if running on same host, use different ports)

OmniUPF Configuration

/etc/omniupf/runtime.exs:

# Network interfaces
xdp_interfaces = "eth0" # Single interface for S1-U and N9
xdp_attach_mode = "native" # Use native for best performance

# PFCP Interface
pfcp_address = "192.168.1.10" # OmniUPF's PFCP address
pfcp_port = 8805 # PFCP port
node_id = "192.168.1.10" # OmniUPF's PFCP Node ID

# User Plane Interfaces
n3_address = "10.0.1.10" # S1-U/N3 interface IP
n9_address = n3_address # N9 interface IP (SAME as N3)

# Resource Pools
feature_ueip = true
ueip_pool = "10.60.0.0/16" # UE IP address pool
feature_ftup = true
teid_pool_start = 1
teid_pool_end = 65_535

# Capacity
max_sessions = 100_000 # Maximum concurrent UE sessions

# API
api_port = 8080

Key Configuration:

  • n3_address and n9_address MUST be identical to enable loopback
  • Single PFCP listening address for both control planes
  • Sufficient max_sessions for combined SGWU + PGWU load

Control Plane Configuration

SGWU-C Configuration

# Point to OmniUPF PFCP interface
upf_pfcp_address: "192.168.1.10:8805"

# S1-U interface (same as OmniUPF n3_address)
sgwu_s1u_address: "10.0.1.10"

# N9 interface for forwarding to PGWU (same as OmniUPF)
sgwu_n9_address: "10.0.1.10"

PGWU-C Configuration

# Point to SAME OmniUPF PFCP interface
upf_pfcp_address: "192.168.1.10:8805"

# N9 interface (receives from SGWU)
pgwu_n9_address: "10.0.1.10"

# SGi interface for Internet connectivity
pgwu_sgi_address: "192.168.100.1"

Important:

  • Both control planes connect to the same PFCP endpoint (:8805)
  • OmniUPF creates separate PFCP associations for SGWU-C and PGWU-C
  • Sessions are isolated per control plane (tracked by Node ID)

Session Flow Example

UE Attach and PDU Session Establishment

Scenario: UE attaches to network, establishes data session

PFCP Sessions Created:

SGWU Session (from OmniSGW-C):

  • Uplink PDR: Match TEID=100 (from eNodeB) → FAR: Encapsulate TEID=200, dst=10.0.1.10
  • Downlink PDR: Match TEID=200 (from PGWU) → FAR: Update tunnel TEID=100, forward to eNodeB

PGWU Session (from OmniPGW-C):

  • Uplink PDR: Match TEID=200 (from SGWU) → FAR: Decapsulate, forward to Internet
  • Downlink PDR: Match UE IP=10.60.0.1 → FAR: Encapsulate TEID=200, dst=10.0.1.10

Monitoring and Verification

Verify N9 Loopback is Active

Check XDP Logs:

# View real-time eBPF debug output
sudo cat /sys/kernel/debug/tracing/trace_pipe | grep loopback

Expected output:

upf: [n3] session for teid:100 -> 200 remote:10.0.1.10
upf: [n9-loopback] self-forwarding detected, processing inline TEID:200
upf: [n9-loopback] decapsulated, routing to N6

upf: [n6] use mapping 10.60.0.1 -> teid:200
upf: [n6-loopback] downlink self-forwarding detected, processing inline TEID:200
upf: [n6-loopback] SGWU updating GTP tunnel to eNodeB TEID:100
upf: [n6-loopback] forwarding to eNodeB

Monitor Sessions via REST API

List PFCP Associations:

curl http://localhost:8080/api/v1/upf_pipeline | jq

Expected output:

{
  "associations": [
    {
      "node_id": "sgwc.example.com",
      "address": "192.168.1.20:8805",
      "sessions": 1000
    },
    {
      "node_id": "pgwc.example.com",
      "address": "192.168.1.21:8805",
      "sessions": 1000
    }
  ],
  "total_sessions": 2000
}

Verify two separate associations (one for SGWU-C, one for PGWU-C)


List Active Sessions:

curl http://localhost:8080/api/v1/sessions | jq '.sessions[] | {local_seid, ue_ip, uplink_teid}'

Expected output:

{
  "local_seid": 12345,
  "ue_ip": "10.60.0.1",
  "uplink_teid": 100
}
{
  "local_seid": 67890,
  "ue_ip": "10.60.0.1",
  "uplink_teid": 200
}

Each UE has TWO sessions:

  • Session from SGWU-C (TEID=100, S1-U interface)
  • Session from PGWU-C (TEID=200, N9 interface)

Performance Metrics

Check Packet Statistics:

curl http://localhost:8080/api/v1/xdp_stats | jq

Key metrics:

  • xdp_processed: Total packets processed in eBPF
  • xdp_pass: Packets passed to network stack (should be zero for loopback traffic)
  • xdp_redirect: Packets forwarded via XDP redirect
  • xdp_tx: Packets transmitted (loopback traffic uses this)

For N9 loopback traffic:

  • xdp_pass should be minimal (only non-loopback traffic)
  • xdp_tx or xdp_redirect counts loopback forwarding

Troubleshooting

N9 Traffic Going to Network Instead of Loopback

Symptom: Packets sent to network interface, high latency

Root Cause: n3_address ≠ n9_address — the addresses differ, so loopback detection never triggers

Solution (in runtime.exs):

# WRONG:
n3_address = "10.0.1.10"
n9_address = "10.0.1.20" # Different IP, no loopback!

# CORRECT:
n3_address = "10.0.1.10"
n9_address = n3_address # Same IP, enables loopback

Verification:

curl http://localhost:8080/api/v1/dataplane_config | jq

Should show:

{
  "n3_ipv4_address": "10.0.1.10",
  "n9_ipv4_address": "10.0.1.10"
}

PDR Not Found After Loopback

Symptom: Logs show [n9-loopback] no PDR for destination TEID

Root Cause: PGWU session not created or TEID mismatch

Diagnosis:

  1. Check PFCP Sessions:

    curl http://localhost:8080/api/v1/sessions | jq '.sessions[] | select(.uplink_teid == 200)'
  2. Verify FAR Configuration:

    curl http://localhost:8080/api/v1/far_map | jq '.[] | select(.teid == 200)'

Solution: Ensure PGWU-C creates a session whose uplink TEID matches the TEID that SGWU-C uses for N9 forwarding


High CPU Usage

Symptom: CPU usage higher than expected

Root Cause: eBPF program processing packets multiple times or excessive map lookups

Diagnosis:

# Check eBPF map access patterns
sudo bpftool map dump name pdr_map_teid_ip4 | wc -l
sudo bpftool map dump name far_map | wc -l

Solution:

  • Increase max_sessions if map is full (causes lookup failures)
  • Verify QER rate limiting is not causing drops and retransmits
  • Check for excessive packet buffering

Packet Loss During Handover

Symptom: Packets dropped during eNodeB handover

Root Cause: Buffering not configured or insufficient buffer limits

Configuration:

# In runtime.exs
buffer_port = 22152

Verification:

curl http://localhost:8080/api/v1/upf_buffer_info | jq

Benefits of N9 Loopback

Performance

  Metric        Two Instances                    Single Instance (N9 Loopback)   Improvement
  Latency       1-5 ms                           < 1 μs                          1000x faster
  Throughput    Limited by network               Limited by CPU/memory           2-3x higher
  CPU Usage     2× XDP passes + network stack    1× XDP pass                     40-50% reduction
  Packet Loss   Risk during network congestion   Zero (in-memory)                Eliminated

Operational

  • Simplified Deployment: Single OmniUPF instance instead of two
  • Reduced Infrastructure: Half the servers, network ports, IP addresses
  • Lower Complexity: Single configuration, single monitoring endpoint
  • Cost Savings: Reduced hardware, power, cooling, maintenance
  • Easier Troubleshooting: Single packet trace, single eBPF debug output

Use Cases

Ideal For:

  • Edge Computing: Minimize latency for local breakout
  • Small/Medium Deployments: < 100K subscribers
  • Lab/Testing: Full EPC user plane on single VM
  • Cost-Constrained: Limited hardware budget

Not Recommended For:

  • Geographic Redundancy: SGWU and PGWU in different data centers
  • Massive Scale: > 1M subscribers (consider horizontal scaling)
  • Regulatory Requirements: Mandated separation of SGW and PGW

Comparison with Other Deployment Modes

Single Instance (N9 Loopback) vs. Separated Instances


Summary

N9 Loopback enables carrier-grade 4G EPC user plane on a single OmniUPF instance by processing SGWU→PGWU traffic entirely in eBPF without network hops. This provides:

  • Sub-microsecond latency for inter-gateway forwarding
  • 40-50% CPU reduction compared to separated instances
  • Simplified operations - single instance, config, monitoring
  • Lower cost - half the infrastructure
  • Full 3GPP compliance - standard PFCP, GTP-U protocols

Configuration is automatic when n3_address == n9_address - no special flags or settings required. OmniUPF's eBPF datapath detects loopback conditions and processes packets inline.
