# N9 Loopback: Running SGWU and PGWU on the Same Instance

## Overview
OmniUPF supports running both SGWU (Serving Gateway User Plane) and PGWU (PDN Gateway User Plane) functions on the same instance with zero-latency N9 loopback. This deployment mode is ideal for:
- Simplified 4G EPC deployments - Single UPF instance instead of two
- Cost optimization - Reduced infrastructure and operational complexity
- Edge computing - Minimize latency for local breakout scenarios
- Lab/testing environments - Full EPC user plane on single server
When configured with the same IP address for both N3 and N9 interfaces, OmniUPF automatically detects traffic flowing between the SGWU and PGWU roles and processes it entirely in eBPF without ever sending packets to the network interface.
## How It Works

### Traditional Deployment (Two Instances)
Packet Flow:
1. eNodeB → SGWU: GTP packet (TEID=100) arrives on S1-U
2. SGWU: Matches the uplink PDR, encapsulates in a new GTP tunnel (TEID=200)
3. Packet is sent over the physical N9 network to the PGWU instance
4. PGWU: Receives GTP (TEID=200), decapsulates, forwards to the Internet

Total: 2 XDP passes + 1 network hop
### N9 Loopback Deployment (Single Instance)

Packet Flow with N9 Loopback:
1. eNodeB → SGWU role: GTP packet (TEID=100) arrives on S1-U
2. SGWU role: Matches the uplink PDR
3. Loopback detection: Destination IP = local IP (10.0.1.10)
4. In-place processing: GTP TEID is updated to 200 (PGWU session)
5. PGWU role: Decapsulates, forwards to the Internet

Total: 1 XDP pass, zero network hops
Performance benefit: sub-microsecond internal forwarding versus milliseconds for a network round trip.
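The loopback decision can be sketched as a tiny model. This is purely illustrative: the real check runs in the eBPF datapath (`cmd/ebpf/xdp/n3n6_entrypoint.c`), and the `LOCAL_IPS` and `forward_n9` names below are hypothetical.

```python
# Minimal model of the N9 loopback decision (illustrative only;
# the real logic runs in eBPF inside n3n6_entrypoint.c).

LOCAL_IPS = {"10.0.1.10"}  # loopback is active because n3_address == n9_address

def forward_n9(packet, far_dst_ip, new_teid):
    """Apply a FAR that forwards toward an N9 peer."""
    packet["teid"] = new_teid
    if far_dst_ip in LOCAL_IPS:
        # Destination is ourselves: keep the packet in memory and
        # re-process it in the same XDP pass, no network hop.
        return "loopback", packet
    # Otherwise the packet would be re-encapsulated and sent on the wire.
    return "network", packet

verdict, pkt = forward_n9({"teid": 100}, "10.0.1.10", 200)
```

With a non-local N9 peer the same call would return the `"network"` verdict, modeling the two-instance deployment above.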
## Packet Processing Details

### Uplink Flow: eNodeB → SGWU → PGWU → Internet

eBPF Code Path: `cmd/ebpf/xdp/n3n6_entrypoint.c`, lines 349-403
Key Steps:
1. Receive: GTP packet arrives from the eNodeB with TEID=100
2. PDR Match: Look up the uplink PDR for the SGWU session (TEID=100)
3. FAR Action: Encapsulate in GTP with TEID=200, forward to 10.0.1.10
4. Loopback Check: `is_local_ip(10.0.1.10)` returns TRUE
5. Update TEID: Change `ctx->gtp->teid` from 100 to 200 (in kernel memory)
6. Re-Process: Look up the PDR for TEID=200 (PGWU session)
7. FAR Action: Remove the GTP header, forward to the Internet
8. Route: Send the plain IP packet out the N6 interface
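The uplink steps can be condensed into a small lookup-chain sketch. Table and function names here are hypothetical; the real lookups use eBPF maps.

```python
# Illustrative uplink lookup chain (hypothetical names):
# SGWU session: TEID 100 -> re-tunnel as TEID 200 toward 10.0.1.10 (local)
# PGWU session: TEID 200 -> decapsulate and route to N6

PDR_BY_TEID = {
    100: {"far": ("encap", 200, "10.0.1.10")},  # SGWU uplink PDR/FAR
    200: {"far": ("decap", None, "n6")},        # PGWU uplink PDR/FAR
}
LOCAL_IPS = {"10.0.1.10"}

def process_uplink(teid):
    hops = []
    while True:
        action, new_teid, dst = PDR_BY_TEID[teid]["far"]
        if action == "encap" and dst in LOCAL_IPS:
            # Loopback: rewrite the TEID in place and re-process the
            # same packet without leaving the datapath.
            hops.append(f"loopback teid {teid}->{new_teid}")
            teid = new_teid
            continue
        hops.append(f"{action} -> {dst}")
        return hops
```

Calling `process_uplink(100)` walks both sessions in what corresponds to a single XDP pass.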
### Downlink Flow: Internet → PGWU → SGWU → eNodeB

eBPF Code Path: `cmd/ebpf/xdp/n3n6_entrypoint.c`, lines 137-194 (IPv4) and 265-322 (IPv6)
Key Steps:
1. Receive: Plain IP packet from the Internet destined to the UE (10.60.0.1)
2. PDR Match: Look up the downlink PDR by UE IP (PGWU session)
3. FAR Action: Encapsulate in GTP with TEID=200, forward to 10.0.1.10
4. Loopback Check: `is_local_ip(10.0.1.10)` returns TRUE
5. Add GTP: Encapsulate the packet with TEID=200
6. Re-Process: Look up the PDR for TEID=200 (SGWU session)
7. FAR Action: Update the GTP tunnel to eNodeB TEID=100
8. Route: Send the GTP packet out the S1-U interface (to the eNodeB)
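The downlink chain differs from the uplink one in that the first lookup is keyed by UE IP rather than TEID. A sketch under the same illustrative assumptions (hypothetical table names):

```python
# Illustrative downlink chain: the PGWU session is keyed by UE IP and adds
# the N9 GTP header; the SGWU session then rewrites the tunnel toward the
# eNodeB, all without leaving the datapath.

PDR_BY_UE_IP = {"10.60.0.1": ("encap", 200, "10.0.1.10")}  # PGWU downlink PDR
SGWU_PDR_BY_TEID = {200: ("set_tunnel", 100, "eNodeB")}    # SGWU downlink PDR
LOCAL_IPS = {"10.0.1.10"}

def process_downlink(ue_ip):
    action, teid, dst = PDR_BY_UE_IP[ue_ip]
    if dst in LOCAL_IPS:
        # Loopback: the freshly added TEID is immediately looked up again
        # against the SGWU session, and the tunnel is retargeted.
        action, out_teid, dst = SGWU_PDR_BY_TEID[teid]
        return {"teid": out_teid, "dst": dst}
    return {"teid": teid, "dst": dst}
```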
## Configuration

### Requirements

Control Plane:
- SGWU-C: Must connect to the OmniUPF PFCP interface (e.g., `192.168.1.10:8805`)
- PGWU-C: Must connect to the same OmniUPF PFCP interface
Network:
- Single IP address for both N3 and N9 interfaces
- Different IP addresses for SGWU-C and PGWU-C (if running on same host, use different ports)
### OmniUPF Configuration

`/etc/omniupf/runtime.exs`:
```elixir
# Network interfaces
xdp_interfaces = "eth0"      # Single interface for S1-U and N9
xdp_attach_mode = "native"   # Use native mode for best performance

# PFCP Interface
pfcp_address = "192.168.1.10"  # OmniUPF's PFCP address
pfcp_port = 8805               # PFCP port
node_id = "192.168.1.10"       # OmniUPF's PFCP Node ID

# User Plane Interfaces
n3_address = "10.0.1.10"  # S1-U/N3 interface IP
n9_address = n3_address   # N9 interface IP (SAME as N3)

# Resource Pools
feature_ueip = true
ueip_pool = "10.60.0.0/16"  # UE IP address pool
feature_ftup = true
teid_pool_start = 1
teid_pool_end = 65_535

# Capacity
max_sessions = 100_000  # Maximum concurrent UE sessions

# API
api_port = 8080
```
Key Configuration:
- `n3_address` and `n9_address` MUST be identical to enable loopback
- Single PFCP listening address for both control planes
- Sufficient `max_sessions` for the combined SGWU + PGWU load
### Control Plane Configuration

#### SGWU-C Configuration
```yaml
# Point to the OmniUPF PFCP interface
upf_pfcp_address: "192.168.1.10:8805"

# S1-U interface (same as OmniUPF n3_address)
sgwu_s1u_address: "10.0.1.10"

# N9 interface for forwarding to the PGWU (same as OmniUPF)
sgwu_n9_address: "10.0.1.10"
```
#### PGWU-C Configuration
```yaml
# Point to the SAME OmniUPF PFCP interface
upf_pfcp_address: "192.168.1.10:8805"

# N9 interface (receives from the SGWU)
pgwu_n9_address: "10.0.1.10"

# SGi interface for Internet connectivity
pgwu_sgi_address: "192.168.100.1"
```
Important:
- Both control planes connect to the same PFCP endpoint (`:8805`)
- OmniUPF creates a separate PFCP association for each of SGWU-C and PGWU-C
- Sessions are isolated per control plane (tracked by PFCP Node ID)
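Per-association isolation can be pictured as sessions keyed first by Node ID, then by SEID. The data model below is illustrative, not OmniUPF's actual internal structure:

```python
# Sketch of per-association session isolation (hypothetical data model).
from collections import defaultdict

sessions_by_node = defaultdict(dict)

def create_session(node_id, seid, rules):
    # Each control plane's sessions live under its own PFCP Node ID, so
    # SGWU-C and PGWU-C state never collides even on one PFCP endpoint.
    sessions_by_node[node_id][seid] = rules

create_session("sgwc.example.com", 12345, {"uplink_teid": 100})
create_session("pgwc.example.com", 67890, {"uplink_teid": 200})
```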
## Session Flow Example

### UE Attach and PDU Session Establishment
Scenario: a UE attaches to the network and establishes a data session.
PFCP Sessions Created:
SGWU Session (from OmniSGW-C):
- Uplink PDR: Match TEID=100 (from eNodeB) → FAR: Encapsulate TEID=200, dst=10.0.1.10
- Downlink PDR: Match TEID=200 (from PGWU) → FAR: Update tunnel TEID=100, forward to eNodeB
PGWU Session (from OmniPGW-C):
- Uplink PDR: Match TEID=200 (from SGWU) → FAR: Decapsulate, forward to Internet
- Downlink PDR: Match UE IP=10.60.0.1 → FAR: Encapsulate TEID=200, dst=10.0.1.10
## Monitoring and Verification

### Verify N9 Loopback Is Active
Check XDP Logs:

```shell
# View real-time eBPF debug output
sudo cat /sys/kernel/debug/tracing/trace_pipe | grep loopback
```
Expected output:

```text
upf: [n3] session for teid:100 -> 200 remote:10.0.1.10
upf: [n9-loopback] self-forwarding detected, processing inline TEID:200
upf: [n9-loopback] decapsulated, routing to N6
upf: [n6] use mapping 10.60.0.1 -> teid:200
upf: [n6-loopback] downlink self-forwarding detected, processing inline TEID:200
upf: [n6-loopback] SGWU updating GTP tunnel to eNodeB TEID:100
upf: [n6-loopback] forwarding to eNodeB
```
### Monitor Sessions via REST API

List PFCP Associations:

```shell
curl http://localhost:8080/api/v1/upf_pipeline | jq
```
Expected output:

```json
{
  "associations": [
    {
      "node_id": "sgwc.example.com",
      "address": "192.168.1.20:8805",
      "sessions": 1000
    },
    {
      "node_id": "pgwc.example.com",
      "address": "192.168.1.21:8805",
      "sessions": 1000
    }
  ],
  "total_sessions": 2000
}
```

Verify that two separate associations exist (one for SGWU-C, one for PGWU-C).
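A quick sanity check on that response can be scripted. The helper below is a hypothetical convenience, not part of OmniUPF; field names follow the example output above:

```python
def check_pipeline(pipeline):
    """Sanity-check /api/v1/upf_pipeline output for a loopback deployment:
    exactly two associations (SGWU-C and PGWU-C) whose session counts sum
    to total_sessions."""
    assocs = pipeline["associations"]
    return (len(assocs) == 2
            and sum(a["sessions"] for a in assocs) == pipeline["total_sessions"])
```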
List Active Sessions:

```shell
curl http://localhost:8080/api/v1/sessions | jq '.sessions[] | {local_seid, ue_ip, uplink_teid}'
```
Expected output:

```json
{
  "local_seid": 12345,
  "ue_ip": "10.60.0.1",
  "uplink_teid": 100
}
{
  "local_seid": 67890,
  "ue_ip": "10.60.0.1",
  "uplink_teid": 200
}
```
Each UE has TWO sessions:
- Session from SGWU-C (TEID=100, S1-U interface)
- Session from PGWU-C (TEID=200, N9 interface)
### Performance Metrics

Check Packet Statistics:

```shell
curl http://localhost:8080/api/v1/xdp_stats | jq
```
Key metrics:
- `xdp_processed`: Total packets processed in eBPF
- `xdp_pass`: Packets passed to the network stack (should be zero for loopback traffic)
- `xdp_redirect`: Packets forwarded via XDP redirect
- `xdp_tx`: Packets transmitted (loopback traffic uses this)

For N9 loopback traffic:
- `xdp_pass` should be minimal (only non-loopback traffic)
- `xdp_tx` or `xdp_redirect` counts loopback forwarding
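One way to read these counters is the fraction of processed packets that left via the fast path. The helper below is a hypothetical monitoring convenience; field names follow the `/api/v1/xdp_stats` output described above:

```python
def loopback_health(stats):
    """Return the fraction of processed packets forwarded via XDP TX/redirect.

    For a pure N9-loopback workload this should be close to 1.0, since
    loopback traffic never takes the xdp_pass path into the network stack.
    """
    forwarded = stats["xdp_tx"] + stats["xdp_redirect"]
    return forwarded / stats["xdp_processed"] if stats["xdp_processed"] else 0.0
```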
## Troubleshooting

### N9 Traffic Going to the Network Instead of Loopback
Symptom: Packets sent to the network interface; high latency

Root Cause: `n3_address` ≠ `n9_address`
Solution (in `runtime.exs`):

```elixir
# WRONG:
n3_address = "10.0.1.10"
n9_address = "10.0.1.20"  # Different IP, no loopback!

# CORRECT:
n3_address = "10.0.1.10"
n9_address = n3_address   # Same IP, enables loopback
```
Verification:

```shell
curl http://localhost:8080/api/v1/dataplane_config | jq
```
Should show:

```json
{
  "n3_ipv4_address": "10.0.1.10",
  "n9_ipv4_address": "10.0.1.10"
}
```
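That check can be automated when scripting deployment validation. The helper below is hypothetical; the key names follow the `/api/v1/dataplane_config` output shown above:

```python
def loopback_enabled(cfg):
    """True when the dataplane config reports identical N3 and N9 addresses,
    i.e. the condition under which N9 loopback is active."""
    return cfg.get("n3_ipv4_address") == cfg.get("n9_ipv4_address")
```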
### PDR Not Found After Loopback

Symptom: Logs show `[n9-loopback] no PDR for destination TEID`

Root Cause: The PGWU session was not created, or the TEIDs do not match
Diagnosis:

1. Check PFCP Sessions:

   ```shell
   curl http://localhost:8080/api/v1/sessions | jq '.sessions[] | select(.uplink_teid == 200)'
   ```

2. Verify FAR Configuration:

   ```shell
   curl http://localhost:8080/api/v1/far_map | jq '.[] | select(.teid == 200)'
   ```
Solution: Ensure PGWU-C creates a session whose uplink TEID matches the TEID SGWU-C uses for N9 forwarding
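Given the two API outputs above, a mismatch can be detected with a simple set difference. This helper is a hypothetical diagnostic aid, not an OmniUPF tool:

```python
def find_teid_mismatches(sgwu_n9_teids, pgwu_uplink_teids):
    """Return SGWU N9 FAR TEIDs that have no matching PGWU uplink PDR.

    An empty result means every N9-forwarded TEID will be matched after
    loopback; any entry here would trigger the 'no PDR' log above.
    """
    return sorted(t for t in sgwu_n9_teids if t not in pgwu_uplink_teids)
```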
### High CPU Usage
Symptom: CPU usage higher than expected
Root Cause: eBPF program processing packets multiple times or excessive map lookups
Diagnosis:

```shell
# Check eBPF map access patterns
sudo bpftool map dump name pdr_map_teid_ip4 | wc -l
sudo bpftool map dump name far_map | wc -l
```
Solution:
- Increase `max_sessions` if the map is full (a full map causes lookup failures)
- Verify QER rate limiting is not causing drops and retransmits
- Check for excessive packet buffering
### Packet Loss During Handover
Symptom: Packets dropped during eNodeB handover
Root Cause: Buffering not configured or insufficient buffer limits
Configuration:

```elixir
# In runtime.exs
buffer_port = 22152
```
Verification:

```shell
curl http://localhost:8080/api/v1/upf_buffer_info | jq
```
## Benefits of N9 Loopback

### Performance
| Metric | Two Instances | Single Instance (N9 Loopback) | Improvement |
|---|---|---|---|
| Latency | 1-5 ms | < 1 μs | 1000x faster |
| Throughput | Limited by network | Limited by CPU/memory | 2-3x higher |
| CPU Usage | 2× XDP passes + network stack | 1× XDP pass | 40-50% reduction |
| Packet Loss | Risk during network congestion | Zero (in-memory) | Eliminated |
### Operational
- Simplified Deployment: Single OmniUPF instance instead of two
- Reduced Infrastructure: Half the servers, network ports, IP addresses
- Lower Complexity: Single configuration, single monitoring endpoint
- Cost Savings: Reduced hardware, power, cooling, maintenance
- Easier Troubleshooting: Single packet trace, single eBPF debug output
## Use Cases
Ideal For:
- ✅ Edge Computing: Minimize latency for local breakout
- ✅ Small/Medium Deployments: < 100K subscribers
- ✅ Lab/Testing: Full EPC user plane on single VM
- ✅ Cost-Constrained: Limited hardware budget
Not Recommended For:
- ❌ Geographic Redundancy: SGWU and PGWU in different data centers
- ❌ Massive Scale: > 1M subscribers (consider horizontal scaling)
- ❌ Regulatory Requirements: Mandated separation of SGW and PGW
## Comparison with Other Deployment Modes

### Single Instance (N9 Loopback) vs. Separated Instances
## Summary
N9 Loopback enables carrier-grade 4G EPC user plane on a single OmniUPF instance by processing SGWU→PGWU traffic entirely in eBPF without network hops. This provides:
- ✅ Sub-microsecond latency for inter-gateway forwarding
- ✅ 40-50% CPU reduction compared to separated instances
- ✅ Simplified operations - single instance, config, monitoring
- ✅ Lower cost - half the infrastructure
- ✅ Full 3GPP compliance - standard PFCP, GTP-U protocols
Configuration is automatic when `n3_address == n9_address`; no special flags or settings are required. OmniUPF's eBPF datapath detects the loopback condition and processes packets inline.
For more information:
- Configuration: CONFIGURATION.md
- Architecture: ARCHITECTURE.md
- Metrics Reference: METRICS.md
- Monitoring: MONITORING.md
- Operations: OPERATIONS.md
- Troubleshooting: TROUBLESHOOTING.md