OmniUPF Troubleshooting Guide
Table of Contents
- Overview
- Diagnostic Tools
- Installation Issues
- Configuration Problems
- PFCP Association Issues
- Packet Processing Problems
- XDP and eBPF Issues
- Performance Issues
- Hypervisor-Specific Issues
- NIC and Driver Issues
- Session Establishment Failures
- Buffering Issues
- Advanced Debugging
- Getting Help
- Related Documentation
Overview
This guide provides systematic troubleshooting procedures for common OmniUPF issues. Each section includes symptoms, diagnosis steps, root causes, and resolution procedures.
Quick Diagnostic Checklist
Before deep troubleshooting, verify:
# 1. Check OmniUPF is running
systemctl status omniupf
# 2. Check PFCP association
curl http://localhost:8080/api/v1/upf_pipeline
# 3. Check eBPF maps are loaded
ls /sys/fs/bpf/
# 4. Check XDP program is attached
ip link show | grep -i xdp
# 5. Check kernel logs for errors
dmesg | tail -50
journalctl -u omniupf -n 50
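The checklist can be wrapped in a small script. A minimal sketch, assuming the default API port and the systemd unit name used throughout this guide:
#!/usr/bin/env bash
# quick_check.sh - run the checklist above, stopping at the first failure
set -e
systemctl is-active --quiet omniupf && echo "omniupf: active"
curl -sf http://localhost:8080/api/v1/upf_pipeline > /dev/null && echo "PFCP/API: reachable"
mount | grep -q /sys/fs/bpf && echo "bpffs: mounted"
ip link show | grep -qi xdp && echo "XDP: attached"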
Diagnostic Tools
OmniUPF REST API
Check UPF status:
curl http://localhost:8080/api/v1/upf_status
Check PFCP associations:
curl http://localhost:8080/api/v1/upf_pipeline
Check session count:
curl http://localhost:8080/api/v1/sessions | jq 'length'
Check eBPF map capacity:
curl http://localhost:8080/api/v1/map_info
Check packet statistics:
curl http://localhost:8080/api/v1/packet_stats
Check XDP statistics:
curl http://localhost:8080/api/v1/xdp_stats
eBPF Map Inspection
List all eBPF maps:
ls -lh /sys/fs/bpf/
bpftool map list
Show map details:
bpftool map show
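# Note: the kernel truncates BPF object names to 15 characters,
# which is why the downlink PDR map appears as "pdr_map_downlin"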
bpftool map dump name pdr_map_downlin
Count entries in map:
bpftool map dump name far_map | grep -c "key:"
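To count entries across every pinned map in one pass (this sketch assumes OmniUPF pins its maps directly under /sys/fs/bpf):
for m in /sys/fs/bpf/*; do
  echo "$m: $(sudo bpftool map dump pinned "$m" 2>/dev/null | grep -c 'key:') entries"
done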
XDP Program Inspection
Check if XDP program is attached:
ip link show eth0 | grep xdp
List all XDP programs:
bpftool net list
Show XDP program details:
bpftool prog show
Dump the translated XDP program instructions:
bpftool prog dump xlated name xdp_upf_func
Network Debugging
Capture PFCP traffic on N4 (control plane):
# PFCP is not processed by XDP, tcpdump works normally
tcpdump -i eth0 -n udp port 8805 -w /tmp/pfcp_traffic.pcap
Capture GTP-U traffic on N3 (requires out-of-band capture):
# WARNING: Standard tcpdump on UPF host CANNOT capture XDP-processed packets!
# XDP processes GTP-U before the kernel network stack sees packets.
# Use out-of-band capture instead:
# 1. Network TAP between gNB and UPF
# 2. Switch port mirroring/SPAN to copy N3 traffic
# 3. Virtual switch port mirroring to analyzer VM
# On analyzer/monitoring host (NOT on UPF):
# tcpdump -i <mirror_interface> -n udp port 2152 -w /tmp/n3_capture.pcap
# Or use statistics API for packet counts:
curl http://localhost:8080/api/v1/packet_stats
curl http://localhost:8080/api/v1/n3n6_stats
Monitor packet counters:
watch -n 1 'ip -s link show eth0'
Check routing table:
ip route show
ip route get 10.45.0.100 # Check route for UE IP
Check ARP table:
ip neigh show
Installation Issues
Issue: "eBPF filesystem not mounted"
Symptoms:
ERRO[0000] failed to load eBPF objects: mount bpf filesystem at /sys/fs/bpf
Cause: eBPF filesystem not mounted
Resolution:
# Mount eBPF filesystem
sudo mount bpffs /sys/fs/bpf -t bpf
# Make persistent (add to /etc/fstab)
echo "bpffs /sys/fs/bpf bpf defaults 0 0" | sudo tee -a /etc/fstab
# Verify mount
mount | grep bpf
Issue: Kernel version too old
Symptoms:
ERRO[0000] kernel version 5.4.0 is too old, minimum required is 5.15.0
Cause: Linux kernel version below minimum requirement
Resolution:
# Check kernel version
uname -r
# Upgrade kernel (Ubuntu/Debian)
sudo apt update
sudo apt install linux-generic-hwe-22.04
sudo reboot
# Verify new kernel
uname -r # Should be >= 5.15.0
Issue: Missing libbpf dependency
Symptoms:
error while loading shared libraries: libbpf.so.0: cannot open shared object file
Cause: libbpf library not installed
Resolution:
# Install libbpf (Ubuntu/Debian)
sudo apt update
sudo apt install libbpf-dev
# Verify installation
ldconfig -p | grep libbpf
Configuration Problems
Issue: Invalid configuration file
Symptoms:
ERRO[0000] unable to read config file: unmarshal errors
Cause: YAML syntax error in config file
Resolution:
# Validate YAML syntax
cat config.yml | python3 -c "import yaml, sys; yaml.safe_load(sys.stdin)"
# Common issues:
# - Incorrect indentation (use spaces, not tabs)
# - Missing colons after keys
# - Unquoted strings with special characters
# - List items without hyphens
# Example of correct YAML:
cat > config.yml <<EOF
interface_name: [eth0]
xdp_attach_mode: generic
api_address: :8080
pfcp_address: :8805
EOF
Issue: Interface name not found
Symptoms:
ERRO[0000] interface eth0 not found
Cause: Configured interface does not exist
Resolution:
# List all network interfaces
ip link show
# Check interface status
ip addr show eth0
# If interface has different name, update config.yml:
interface_name: [ens1f0] # Use actual interface name
# For VMs, check interface naming scheme
ls /sys/class/net/
Issue: Port already in use
Symptoms:
ERRO[0000] failed to start API server: address already in use
Cause: Port 8080, 8805, or 9090 already bound by another process
Resolution:
# Find process using port
sudo lsof -i :8080
sudo netstat -tulpn | grep :8080
# Kill conflicting process
sudo kill <PID>
# Or change OmniUPF port in config
api_address: :8081
pfcp_address: :8806
metrics_address: :9091
Issue: Invalid PFCP Node ID
Symptoms:
ERRO[0000] invalid pfcp_node_id: must be valid IPv4 address
Cause: PFCP Node ID is not a valid IPv4 address
Resolution:
# Correct: Use IP address (not hostname)
pfcp_node_id: 10.100.50.241
# Incorrect:
# pfcp_node_id: localhost
# pfcp_node_id: upf.example.com
PFCP Association Issues
Issue: No PFCP associations established
Symptoms:
- Web UI shows "No associations"
- SMF logs show "PFCP Association Setup failure"
Diagnosis:
# 1. Check if PFCP server is listening
sudo netstat -ulpn | grep 8805
# 2. Check firewall rules
sudo iptables -L -n | grep 8805
sudo ufw status
# 3. Capture PFCP traffic
tcpdump -i any -n udp port 8805 -vv
# 4. Check PFCP associations via API
curl http://localhost:8080/api/v1/upf_pipeline
Common Causes & Resolutions:
Firewall blocking PFCP
Resolution:
# Allow PFCP traffic (UDP 8805)
sudo ufw allow 8805/udp
sudo iptables -A INPUT -p udp --dport 8805 -j ACCEPT
Wrong PFCP Node ID
Resolution:
# Set PFCP Node ID to correct N4 interface IP
pfcp_node_id: 10.100.50.241 # Must match IP on N4 network
Network unreachable to SMF
Resolution:
# Test connectivity to SMF
ping <SMF_IP>
# Check routing to SMF
ip route get <SMF_IP>
# Add route if missing
sudo ip route add <SMF_NETWORK>/24 via <GATEWAY>
SMF configured with wrong UPF IP
Resolution:
- Check SMF configuration for UPF address
- Ensure SMF has the UPF's pfcp_node_id IP configured
- Verify SMF can route to UPF's N4 network (a quick verification sketch follows this list)
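A quick way to confirm both sides agree on the N4 address, assuming config.yml is in the current working directory:
# IP the UPF announces as its PFCP Node ID
grep pfcp_node_id config.yml
# Confirm that address is actually bound on this host's N4 interface
ip -4 addr show | grep "$(grep pfcp_node_id config.yml | awk '{print $2}')"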
Issue: PFCP heartbeat failures
Symptoms:
WARN[0030] PFCP heartbeat timeout for association 10.100.50.10
Diagnosis:
# Check PFCP statistics
curl http://localhost:8080/api/v1/upf_pipeline | jq '.associations[] | {remote_id, uplink_teid_count}'
# Monitor heartbeat logs
journalctl -u omniupf -f | grep heartbeat
Causes & Resolutions:
Network packet loss
Resolution:
# Check packet loss to SMF
ping -c 100 <SMF_IP> | grep loss
# If high loss, investigate network:
# - Check link status
# - Check switch/router health
# - Check for congestion
Heartbeat interval too aggressive
Resolution:
# Increase heartbeat interval
heartbeat_interval: 30 # Increase from 5 to 30 seconds
heartbeat_retries: 5 # Increase retries
heartbeat_timeout: 10 # Increase timeout
Packet Processing Problems
Issue: No packets flowing (RX/TX counters at 0)
Symptoms:
- Statistics page shows 0 RX/TX packets
- UE cannot establish data session
Diagnosis:
# 1. Check if XDP program is attached
ip link show eth0 | grep xdp
# 2. Check interface is UP
ip link show eth0
# 3. Check packet statistics (XDP-aware)
# Note: tcpdump cannot see XDP-processed GTP-U packets
curl http://localhost:8080/api/v1/packet_stats
Resolutions:
XDP program not attached
Resolution:
# Restart OmniUPF to re-attach XDP
sudo systemctl restart omniupf
# Verify attachment
ip link show eth0 | grep xdp
bpftool net list
Interface down or no link
Resolution:
# Bring interface up
sudo ip link set eth0 up
# Check link status
ethtool eth0 | grep "Link detected"
# If link down, check physical connection or VM network config
Wrong interface configured
Resolution:
# Update config.yml with correct interface
interface_name: [ens1f0] # Use actual interface name from 'ip link show'
Issue: Packets received but not forwarded (high drop rate)
Symptoms:
- RX counters increasing but TX counters not
- Drop rate > 1%
Diagnosis:
# Check drop statistics
curl http://localhost:8080/api/v1/xdp_stats | jq '.drop'
# Check route statistics
curl http://localhost:8080/api/v1/packet_stats | jq '.route_stats'
# Monitor packet drops
watch -n 1 'curl -s http://localhost:8080/api/v1/packet_stats | jq ".total_rx, .total_tx, .total_drop"'
Common Causes:
No PDR match (unknown TEID or UE IP)
Resolution:
# Check if sessions exist
curl http://localhost:8080/api/v1/sessions
# If no sessions, verify:
# - PFCP association is established
# - SMF has created sessions
# - Session establishment was successful
# Check PDR map entries
bpftool map dump name pdr_map_teid_ip | grep -c key
bpftool map dump name pdr_map_downlin | grep -c key
Routing failures
Resolution:
# Check FIB lookup failures
curl http://localhost:8080/api/v1/packet_stats | jq '.route_stats'
# Test routing for UE IP
ip route get 10.45.0.100
# Add missing route
sudo ip route add 10.45.0.0/16 dev eth1 # Route UE pool to N6
QER rate limiting
Symptoms:
- Throughput lower than expected
- Traffic capped at a specific rate
- URR volume counters show plateau behavior
- XDP drop counters increasing during traffic bursts
Diagnosis:
- Check configured MBR for the session:
# Find the session's QER ID
curl http://localhost:8080/api/v1/pfcp_sessions | jq '.data[] | select(.ue_ip == "10.45.0.1")'
# Look up the QER configuration
curl http://localhost:8080/api/v1/qer_map | jq '.data[] | select(.qer_id == 1)'
- Verify gate status:
# Gate status should be 0 (OPEN) for both uplink and downlink
curl http://localhost:8080/api/v1/qer_map | jq '.data[] | {qer_id, ul_gate: .ul_gate_status, dl_gate: .dl_gate_status}'
- Calculate actual throughput from URR (a sampling sketch follows this list):
# Query URR volume counters at two points in time
curl http://localhost:8080/api/v1/urr_map | jq '.data[] | select(.urr_id == 0)'
# Calculate throughput (manual):
# throughput_kbps = (volume_delta_bytes × 8) / time_delta_seconds / 1000
- Compare MBR vs. actual throughput:
- Expected throughput ≈ 95-98% of MBR (due to protocol overhead)
- If throughput is significantly below MBR, check for other bottlenecks
- If throughput matches MBR exactly, rate limiting is working as expected
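A minimal sampling helper for the throughput calculation above. The volume field name (.total_octets here) is an assumption; adjust it to match the actual urr_map response on your deployment:
#!/usr/bin/env bash
# urr_throughput.sh - sample URR volume twice and print average kbps
URR_ID=0
INTERVAL=10
V1=$(curl -s http://localhost:8080/api/v1/urr_map | jq ".data[] | select(.urr_id == $URR_ID) | .total_octets")
sleep "$INTERVAL"
V2=$(curl -s http://localhost:8080/api/v1/urr_map | jq ".data[] | select(.urr_id == $URR_ID) | .total_octets")
# throughput_kbps = (volume_delta_bytes × 8) / time_delta_seconds / 1000
echo "throughput_kbps: $(( (V2 - V1) * 8 / INTERVAL / 1000 ))"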
Resolution:
- If MBR is too low: Request SMF to update QER with higher MBR via PFCP Session Modification
- If gate is closed: Investigate why SMF closed the gate (policy, quota, or error)
- If rate limiting is unexpected: Verify SMF policy configuration and QoS profile
Understanding MBR Enforcement:
OmniUPF uses a sliding window algorithm to enforce MBR limits at nanosecond precision in the eBPF datapath. See the Rules Management Guide - MBR Enforcement Mechanism for a detailed explanation of the following (a worked example follows this list):
- How packet size and rate determine drop decisions
- Why observed throughput differs from configured MBR
- Per-direction (uplink/downlink) rate limiting
- 5ms sliding window behavior
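To make the window math concrete, a worked example using the 5 ms window described above (the MBR value is chosen purely for illustration):
# Assume MBR = 10 Mbps and a 5 ms sliding window:
#   window_budget = 10,000,000 bit/s × 0.005 s ÷ 8 = 6,250 bytes
# A 9,000-byte burst arriving inside one window exceeds the budget, so the
# excess packets are dropped even if the one-second average stays below MBR.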
Common Scenarios:
- VoIP calls dropping: Check if MBR is sufficient for codec bitrate (G.711 = ~80 kbps)
- Video streaming buffering: Ensure MBR > video bitrate + overhead (1080p = ~5-10 Mbps)
- Burst traffic: Small bursts allowed within 5ms window, sustained traffic rate-limited
Issue: One-way traffic (uplink works, downlink doesn't)
Symptoms:
- RX N3 packets but no TX N3 packets (downlink problem)
- RX N6 packets but no TX N6 packets (uplink problem)
Diagnosis:
# Check N3/N6 interface statistics (XDP-aware method)
curl http://localhost:8080/api/v1/n3n6_stats
curl http://localhost:8080/api/v1/packet_stats
# Note: Standard tcpdump cannot capture XDP-processed GTP-U traffic
# Use statistics API or xdpdump for traffic analysis
# See "Packet Capture with XDP" section for details
Uplink Failure (RX N3, no TX N6):
Cause: No FAR action or routing issue to N6
Resolution:
# Check FAR has FORWARD action
curl http://localhost:8080/api/v1/sessions | jq '.[].fars[] | select(.applied_action == 2)'
# Check N6 route exists
ip route get 8.8.8.8 # Test route to internet
# Add default route if missing
sudo ip route add default via <N6_GATEWAY> dev eth1
Downlink Failure (RX N6, no TX N3):
Cause: No downlink PDR or missing GTP encapsulation
Resolution:
# Check downlink PDR exists for UE IP
curl http://localhost:8080/api/v1/sessions | jq '.[].pdrs[] | select(.pdi.ue_ip_address)'
# Verify FAR has OUTER_HEADER_CREATION
curl http://localhost:8080/api/v1/sessions | jq '.[].fars[] | .outer_header_creation'
# Check gNB reachability
ping <GNB_N3_IP>
XDP and eBPF Issues
For detailed XDP configuration, mode selection, and troubleshooting, see the XDP Modes Guide.
Issue: XDP program failed to load
Symptoms:
ERRO[0000] failed to load XDP program: invalid argument
Diagnosis:
# Check kernel XDP support
grep -E "XDP|BPF" /boot/config-$(uname -r)
# Should show:
# CONFIG_XDP_SOCKETS=y
# CONFIG_BPF=y
# CONFIG_BPF_SYSCALL=y
# Check dmesg for detailed error
dmesg | grep -i bpf
Causes & Resolutions:
Kernel lacks XDP support
Resolution:
# Rebuild kernel with XDP support or upgrade to newer kernel
# Ubuntu 22.04+ has XDP enabled by default
sudo apt install linux-generic-hwe-22.04
sudo reboot
XDP program verification failure
Resolution:
# Check OmniUPF logs for verifier errors
journalctl -u omniupf | grep verifier
# Common issues:
# - eBPF complexity exceeds limits (increase kernel limits)
# - Invalid memory access (bug in eBPF code)
# Enable runtime statistics for loaded eBPF programs to aid debugging
sudo sysctl kernel.bpf_stats_enabled=1
Issue: XDP aborted count increasing
Symptoms:
- XDP stats show aborted > 0
- Packet drops increasing
Diagnosis:
# Check XDP aborted count
curl http://localhost:8080/api/v1/xdp_stats | jq '.aborted'
# Monitor XDP stats
watch -n 1 'curl -s http://localhost:8080/api/v1/xdp_stats'
Cause: eBPF program encountered runtime error
Resolution:
# Check kernel logs for eBPF errors
dmesg | grep -i bpf
# Restart OmniUPF to reload eBPF program
sudo systemctl restart omniupf
# If issue persists, enable eBPF logging (requires rebuild):
# Build OmniUPF with BPF_ENABLE_LOG=1
Issue: eBPF map full (capacity exhausted)
Symptoms:
- Session establishment fails
- Map capacity at 100%
Diagnosis:
# Check map capacity
curl http://localhost:8080/api/v1/map_info | jq '.[] | {map_name, capacity, used, usage_percent}'
# Identify full maps
curl http://localhost:8080/api/v1/map_info | jq '.[] | select(.usage_percent > 90)'
Immediate Mitigation:
# 1. Identify stale sessions
curl http://localhost:8080/api/v1/sessions | jq '.[] | {seid, uplink_teid, created_at}'
# 2. Request SMF to delete old sessions
# (via SMF admin interface or API)
# 3. Monitor map usage decrease
watch -n 5 'curl -s http://localhost:8080/api/v1/map_info | jq ".[] | select(.map_name==\"pdr_map_downlin\") | .usage_percent"'
Long-term Resolution:
# Increase map capacity in config.yml
max_sessions: 200000 # Increase from 100000
# Or set individual map sizes
pdr_map_size: 400000
far_map_size: 400000
qer_map_size: 200000
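# Rough sizing rule implied by the example above (verify against your
# own session profile before applying):
# pdr_map_size ≈ 2 × max_sessions   (uplink + downlink PDR per session)
# far_map_size ≈ 2 × max_sessions   (one FAR per PDR)
# qer_map_size ≈ 1 × max_sessions   (one QER per session)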
Important: Changing map sizes requires OmniUPF restart and clears all existing sessions.
Performance Issues
Issue: Low throughput (below expected)
Symptoms:
- Throughput < 1 Gbps despite capable NIC
- High CPU utilization
Diagnosis:
# Check packet rate
curl http://localhost:8080/api/v1/packet_stats | jq '.total_rx, .total_tx'
# Check NIC statistics
ethtool -S eth0 | grep -i drop
# Check XDP mode
ip link show eth0 | grep xdp
Resolutions:
Using generic XDP mode
Resolution:
# Switch to native mode for better performance
xdp_attach_mode: native # Requires XDP-capable NIC/driver
Single-core bottleneck
Resolution:
# Enable RSS (Receive Side Scaling) on NIC
ethtool -L eth0 combined 4 # Use 4 RX/TX queues
# Verify RSS enabled
ethtool -l eth0
# Pin interrupts to specific CPUs
# See /proc/interrupts and use irqbalance or manual affinity
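A manual affinity sketch; the IRQ number (45 here) and interface name are system-specific placeholders:
# Find eth0's IRQ numbers
grep eth0 /proc/interrupts
# Pin one RX queue IRQ to CPU 2 (replace 45 with a real IRQ from above)
echo 2 | sudo tee /proc/irq/45/smp_affinity_list
# If irqbalance is running, stop it first so it does not override the pin
sudo systemctl stop irqbalance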
Buffer bloat
Resolution:
# Reduce buffer limits to decrease latency
buffer_max_packets: 5000
buffer_packet_ttl: 15
Issue: High latency
Symptoms:
- Ping latency > 50ms
- User experience degradation
Diagnosis:
# Test latency to UE
ping -c 100 <UE_IP> | grep avg
# Check buffered packets
curl http://localhost:8080/api/v1/upf_buffer_info | jq '.total_packets_buffered'
# Check route cache performance
curl http://localhost:8080/api/v1/packet_stats | jq '.route_stats'
Resolutions:
Packets being buffered excessively
Resolution:
# Check why packets are buffered
curl http://localhost:8080/api/v1/upf_buffer_info | jq '.buffers[] | {far_id, packet_count, direction}'
# Clear buffers if stuck
# (restart OmniUPF or trigger PFCP session modification to apply FAR)
FIB lookup latency
Resolution:
# Ensure route cache is enabled (build-time option)
# Build with BPF_ENABLE_ROUTE_CACHE=1
# Optimize routing table
# Use fewer, more specific routes instead of many small routes
Issue: Packet drops under load
Symptoms:
- Drop rate increases with traffic
- RX errors on NIC
Diagnosis:
# Check NIC errors
ethtool -S eth0 | grep -E "drop|error|miss"
# Check ring buffer size
ethtool -g eth0
# Monitor drops in real-time
watch -n 1 'ethtool -S eth0 | grep -E "drop|miss"'
Resolution:
# Increase RX ring buffer size
ethtool -G eth0 rx 4096
# Increase TX ring buffer size
ethtool -G eth0 tx 4096
# Verify new settings
ethtool -g eth0
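Note that ethtool -G changes are lost on reboot. One persistence option is a small oneshot unit; the unit name and ethtool path below are illustrative:
cat <<'EOF' | sudo tee /etc/systemd/system/nic-rings.service
[Unit]
Description=Set NIC ring buffer sizes
After=network-pre.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -G eth0 rx 4096 tx 4096

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable nic-rings.service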
Hypervisor-Specific Issues
For step-by-step hypervisor configuration instructions, see the XDP Modes Guide.
Proxmox: XDP not working in VM
Symptoms:
- Cannot attach XDP program in native mode
- Only generic mode works
Cause: VM using bridged networking without SR-IOV
Resolution:
Option 1: Use generic mode (simplest)
xdp_attach_mode: generic
Option 2: Configure SR-IOV passthrough
# On Proxmox host:
# 1. Enable IOMMU
nano /etc/default/grub
# Add: intel_iommu=on iommu=pt
update-grub
reboot
# 2. Create VFs
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
# 3. Assign VF to VM in Proxmox UI
# Hardware → Add → PCI Device → Select VF
# In VM:
interface_name: [ens1f0] # SR-IOV VF
xdp_attach_mode: native
VMware: Promiscuous mode required
Symptoms:
- Packets not received by OmniUPF
Cause: vSwitch blocking non-matching MAC addresses
Resolution:
# Enable promiscuous mode on vSwitch (in vSphere Client):
# 1. Select vSwitch → Edit Settings
# 2. Security → Promiscuous Mode: Accept
# 3. Security → MAC Address Changes: Accept
# 4. Security → Forged Transmits: Accept
VirtualBox: Performance very low
Symptoms:
- Throughput < 100 Mbps
Cause: VirtualBox does not support SR-IOV or native XDP
Resolution:
# Use generic mode (only option)
xdp_attach_mode: generic
# Optimize VirtualBox settings:
# - Use VirtIO-Net adapter (if available)
# - Enable "Allow All" promiscuous mode
# - Allocate more CPU cores to VM
# - Use bridged networking instead of NAT
# Consider migrating to KVM/Proxmox for better performance
NIC and Driver Issues
Issue: NIC driver does not support XDP
Symptoms:
ERRO[0000] failed to attach XDP program: operation not supported
Diagnosis:
# Check NIC driver
ethtool -i eth0 | grep driver
# Check if driver supports XDP
modinfo <driver_name> | grep -i xdp
# List XDP-capable interfaces
ip link show | grep -B 1 "xdpgeneric\|xdpdrv\|xdpoffload"
Resolution:
Option 1: Use generic mode
xdp_attach_mode: generic
Option 2: Update NIC driver
# Check for driver updates (Ubuntu)
sudo apt update
sudo apt install linux-modules-extra-$(uname -r)
# Or install vendor-specific driver
# Example for Intel:
# Download from https://downloadcenter.intel.com/
Option 3: Replace NIC
# Use XDP-capable NIC:
# - Intel X710, E810
# - Mellanox ConnectX-5, ConnectX-6
# - Broadcom BCM57xxx (bnxt_en driver)
Issue: Driver crashes or kernel panics
Symptoms:
- Kernel panic after attaching XDP
- NIC stops responding
Diagnosis:
# Check kernel logs
dmesg | tail -100
# Check for driver bugs
journalctl -k | grep -E "BUG:|panic:"
Resolution:
# 1. Update kernel and drivers
sudo apt update
sudo apt upgrade
sudo reboot
# 2. Disable XDP offload (use native only)
xdp_attach_mode: native
# 3. Use generic mode as workaround
xdp_attach_mode: generic
# 4. Report bug to NIC vendor or Linux kernel team
Session Establishment Failures
Issue: Session establishment fails
Symptoms:
- SMF reports session establishment failure
- UE cannot establish PDU session
See PFCP Cause Codes Reference for common failure scenarios and resolutions.
Diagnosis:
# Check OmniUPF logs for session errors
journalctl -u omniupf | grep -i "session establishment"
# Check PFCP session count
curl http://localhost:8080/api/v1/sessions | jq 'length'
# Capture PFCP traffic during session establishment
tcpdump -i any -n udp port 8805 -w /tmp/pfcp_session.pcap
Common Causes:
Map capacity full
Resolution:
# Check map usage
curl http://localhost:8080/api/v1/map_info | jq '.[] | select(.usage_percent > 90)'
# Increase capacity (see eBPF map full section above)
Invalid PDR/FAR parameters
Resolution:
# Check OmniUPF logs for validation errors
journalctl -u omniupf | grep -E "invalid|error" | tail -20
# Common issues:
# - Invalid UE IP address (0.0.0.0 or duplicate)
# - Invalid TEID (0 or duplicate)
# - Missing FAR for PDR
# - Invalid FAR action
# Verify SMF configuration and session parameters
Feature not supported (UEIP/FTUP)
Resolution:
# Enable required features if needed
feature_ueip: true # UE IP allocation by UPF
ueip_pool: 10.60.0.0/16
feature_ftup: true # F-TEID allocation by UPF
teid_pool: 100000
Buffering Issues
Issue: Packets stuck in buffer
Symptoms:
- Buffered packet count increasing
- Packets not delivered after handover
Diagnosis:
# Check buffer statistics
curl http://localhost:8080/api/v1/upf_buffer_info
# Check individual FAR buffers
curl http://localhost:8080/api/v1/upf_buffer_info | jq '.buffers[] | {far_id, packet_count, oldest_packet_ms}'
# Monitor buffer size
watch -n 5 'curl -s http://localhost:8080/api/v1/upf_buffer_info | jq ".total_packets_buffered"'
Causes & Resolutions:
FAR never updated to FORWARD
Cause: SMF never sent PFCP Session Modification to apply FAR
Resolution:
# Check FAR status
curl http://localhost:8080/api/v1/sessions | jq '.[].fars[] | {far_id, applied_action}'
# Action BUFF = 1 (buffering)
# Action FORW = 2 (forwarding)
# If stuck in BUFF state, request SMF to:
# - Send PFCP Session Modification Request
# - Update FAR with FORW action
Buffer TTL expired
Cause: Packets expired before FAR update
Resolution:
# Increase buffer TTL
buffer_packet_ttl: 60 # Increase from 30 to 60 seconds
Buffer overflow
Cause: Too many packets buffered per FAR
Resolution:
# Increase buffer limits
buffer_max_packets: 20000 # Per FAR
buffer_max_total: 200000 # Global limit
Advanced Debugging
Enable Debug Logging
logging_level: debug # trace | debug | info | warn | error
# Restart OmniUPF with debug logging
sudo systemctl restart omniupf
# Monitor logs in real-time
journalctl -u omniupf -f --output cat
eBPF Program Tracing
# Trace eBPF program execution (requires bpftrace)
sudo bpftrace -e 'tracepoint:xdp:* { @[probe] = count(); }'
# List probes available for tracing map operations (map tracepoints were
# removed in modern kernels; kprobe names vary by kernel version)
sudo bpftrace -l '*map_lookup*'
Packet Capture with XDP
Understanding XDP Packet Capture Limitations:
XDP processes packets before the kernel network stack, so standard tcpdump cannot see XDP-processed traffic. GTP-U packets (UDP port 2152) on N3 are processed by XDP and will not appear in tcpdump on the UPF host.
Recommended Methods for Traffic Analysis:
# Method 1: Use statistics API for monitoring (RECOMMENDED)
curl http://localhost:8080/api/v1/xdp_stats
curl http://localhost:8080/api/v1/packet_stats | jq
curl http://localhost:8080/api/v1/n3n6_stats
# Method 2: Capture PFCP traffic (not affected by XDP)
tcpdump -i any -n udp port 8805 -w /tmp/pfcp.pcap
# Method 3: Out-of-band packet capture (RECOMMENDED for GTP-U)
# Use network TAP or switch port mirroring to capture traffic
# Examples:
# - Physical TAP between gNB and UPF
# - Switch SPAN/mirror port copying N3 traffic to analyzer
# - Virtual switch port mirroring in hypervisor
#
# On capture host (NOT the UPF):
# tcpdump -i <mirror_interface> -n udp port 2152 -w /tmp/n3_mirror.pcap
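If the xdp-tools package is available, xdpdump (mentioned above) can also capture at the XDP hook itself on the UPF host. A sketch using the documented xdp-tools flags; the interface name is an example:
# Method 4: xdpdump from the xdp-tools package
sudo apt install xdp-tools
# Capture packets at XDP program entry and exit on the N3 interface
sudo xdpdump -i eth0 --rx-capture entry,exit -w /tmp/xdp_n3.pcap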
Out-of-Band Capture Setup Examples:
Physical Network:
# Use a network TAP or configure switch port mirroring
# Example: Cisco switch SPAN configuration
(config)# monitor session 1 source interface Gi1/0/1
(config)# monitor session 1 destination interface Gi1/0/24
# On monitoring host connected to Gi1/0/24:
tcpdump -i eth0 -n udp port 2152 -w /tmp/n3_capture.pcap
Virtual Environment (VMware, KVM, etc.):
# Configure virtual switch port mirroring to send UPF traffic to analyzer VM
# Example: Linux bridge with tcpdump on different VM
# On hypervisor, mirror UPF's N3 interface to analyzer interface
# On analyzer VM:
tcpdump -i eth1 -n udp port 2152 -w /tmp/n3_virtual.pcap
Why Out-of-Band is Required:
- XDP bypasses the kernel network stack entirely
- Packets are processed in the NIC driver or hardware
- Host-based tcpdump sees packets AFTER XDP processing (too late)
- Out-of-band capture sees raw wire traffic before UPF processing
What You CAN Capture on UPF Host:
- ✅ PFCP traffic (UDP 8805) - control plane, not processed by XDP
- ✅ API responses and metrics
- ❌ GTP-U traffic (UDP 2152) - dataplane, processed by XDP
Getting Help
If troubleshooting steps do not resolve your issue:
- Collect diagnostic information:
# System info
uname -a
cat /etc/os-release
# OmniUPF info
curl http://localhost:8080/api/v1/upf_status
curl http://localhost:8080/api/v1/map_info
curl http://localhost:8080/api/v1/packet_stats
# Logs
journalctl -u omniupf --since "1 hour ago" > /tmp/omniupf.log
dmesg > /tmp/dmesg.log
# Network info
ip addr > /tmp/network.txt
ip route >> /tmp/network.txt
ethtool eth0 >> /tmp/network.txt
- Report issue with:
- OmniUPF version
- Linux kernel version
- Network topology diagram
- Configuration file (redact sensitive info)
- Relevant log excerpts
- Steps to reproduce
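To attach everything in one archive, assuming the file paths written by the collection step above and config.yml in the working directory (redact sensitive values first):
tar czf /tmp/omniupf_diag.tar.gz /tmp/omniupf.log /tmp/dmesg.log /tmp/network.txt config.yml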
Related Documentation
- Configuration Guide - Configuration parameters and examples
- Architecture Guide - eBPF/XDP internals and performance tuning
- Monitoring Guide - Statistics, capacity, and alerting
- Metrics Reference - Prometheus metrics for troubleshooting
- PFCP Cause Codes - PFCP error codes and troubleshooting
- Rules Management Guide - PDR, FAR, QER, URR concepts
- Operations Guide - UPF architecture and overview