OmniUPF Operations Guide
Table of Contents
- Overview
- Understanding User Plane Architecture
- UPF Components
- PFCP Protocol and SMF Integration
- Common Operations
- Troubleshooting
- Additional Documentation
- Glossary
Overview
OmniUPF (eBPF-based User Plane Function) is a high-performance 5G/LTE User Plane Function that provides carrier-grade packet forwarding, QoS enforcement, and traffic management for mobile networks. Built on Linux eBPF (extended Berkeley Packet Filter) technology and enhanced with comprehensive management capabilities, OmniUPF delivers the core packet processing infrastructure required for 5G SA, 5G NSA, and LTE networks.
What is a User Plane Function?
The User Plane Function (UPF) is the 3GPP-standardized network element responsible for packet processing and forwarding in 5G and LTE networks. It provides:
- High-speed packet forwarding between mobile devices and data networks
- Quality of Service (QoS) enforcement for different traffic types
- Traffic detection and routing based on packet filters and rules
- Usage reporting for charging and analytics
- Packet buffering for mobility and session management scenarios
- Lawful intercept support for regulatory compliance
OmniUPF implements the full UPF functionality defined in 3GPP TS 23.501 (5G) and TS 23.401 (LTE), providing a complete, production-ready user plane solution using Linux kernel eBPF technology for maximum performance.
OmniUPF Key Capabilities
Packet Processing:
- Full 3GPP-compliant user plane packet processing
- eBPF-based datapath for kernel-level performance
- GTP-U (GPRS Tunnelling Protocol, User Plane) encapsulation and decapsulation
- IPv4 and IPv6 support for both access and data networks
- XDP (eXpress Data Path) for ultra-low latency processing
- Multi-threaded packet processing
QoS and Traffic Management:
- QoS Enforcement Rules (QER) for bandwidth management
- Packet Detection Rules (PDR) for traffic classification
- Forwarding Action Rules (FAR) for routing decisions
- Service Data Flow (SDF) filtering for application-specific routing
- Usage Reporting Rules (URR) for volume tracking and charging
Control and Management:
- PFCP (Packet Forwarding Control Protocol) interface to SMF/PGW-C
- RESTful API for monitoring and diagnostics
- Real-time statistics and metrics
- eBPF map capacity monitoring
- Web-based control panel
Performance Features:
- Zero-copy packet processing via eBPF
- Kernel-level packet forwarding (no userspace overhead)
- Multi-core scalability
- Offload-capable for hardware acceleration
- Optimized for cloud-native deployments
For detailed control panel usage, see Web UI Operations.
Understanding User Plane Architecture
OmniUPF is a unified user plane solution providing carrier-grade packet forwarding for 5G Standalone (SA), 5G NSA, and 4G LTE/EPC networks. OmniUPF is a single product that can simultaneously function as:
- UPF (User Plane Function) - 5G/NSA user plane (controlled by OmniSMF via N4/PFCP)
- PGW-U (PDN Gateway User Plane) - 4G EPC gateway to external networks (controlled by OmniPGW-C via Sxb/PFCP)
- SGW-U (Serving Gateway User Plane) - 4G EPC serving gateway (controlled by OmniSGW-C via Sxa/PFCP)
OmniUPF can operate in any combination of these modes:
- UPF-only: Pure 5G deployment
- PGW-U + SGW-U: Combined 4G gateway (typical EPC deployment)
- UPF + PGW-U + SGW-U: Simultaneous 4G and 5G support (migration scenario)
All modes use the same eBPF-based packet processing engine and PFCP protocol, providing consistent high performance whether operating as UPF, PGW-U, SGW-U, or all three simultaneously.
5G Network Architecture (SA Mode)
The OmniUPF solution sits at the data plane of 5G networks, providing the high-speed packet forwarding layer that connects mobile devices to data networks and services.
4G LTE/EPC Network Architecture
OmniUPF also supports 4G LTE and EPC (Evolved Packet Core) deployments, functioning as either OmniPGW-U or OmniSGW-U depending on the network architecture.
Combined PGW-U/SGW-U Mode (Typical 4G Deployment)
In this mode, OmniUPF acts as both SGW-U and PGW-U, controlled by separate control plane functions.
Separated SGW-U and PGW-U Mode (Roaming/Multi-Site)
In roaming or multi-site deployments, two separate OmniUPF instances can be deployed - one as SGW-U and one as PGW-U.
How User Plane Functions Work in the Network
The user plane function (OmniUPF, OmniPGW-U, or OmniSGW-U) operates as the forwarding plane controlled by the respective control plane:
1. Session Establishment
- 5G: OmniSMF establishes PFCP association via N4 interface with OmniUPF
- 4G: OmniPGW-C or OmniSGW-C establishes PFCP association via Sxb/Sxa with OmniPGW-U/OmniSGW-U
- Control plane creates PFCP sessions for each UE PDU session (5G) or PDP context (4G)
- User plane receives PDR, FAR, QER, and URR rules via PFCP
- eBPF maps are populated with forwarding rules
2. Uplink Packet Processing (UE → Data Network)
- 5G: Packets arrive on N3 interface from gNB with GTP-U encapsulation
- 4G: Packets arrive on S1-U interface (SGW-U) or S5/S8 interface (PGW-U) from eNodeB with GTP-U encapsulation
- User plane matches packets against uplink PDRs based on TEID
- eBPF program applies QER (rate limiting, marking)
- FAR determines forwarding action (forward, drop, buffer, duplicate)
- GTP-U tunnel removed, packets forwarded to N6 (5G) or SGi (4G) interface
- URR tracks packet and byte counts for charging
3. Downlink Packet Processing (Data Network → UE)
- 5G: Packets arrive on N6 interface as native IP
- 4G: Packets arrive on SGi interface as native IP
- User plane matches packets against downlink PDRs based on UE IP address
- SDF filters may further classify traffic by port, protocol, or application
- FAR determines GTP-U tunnel and forwarding parameters
- GTP-U encapsulation added with appropriate TEID
- 5G: Packets forwarded to N3 interface toward gNB
- 4G: Packets forwarded to S1-U (SGW-U) or S5/S8 (PGW-U) toward eNodeB
4. Mobility and Handover
- 5G: OmniSMF updates PDR/FAR rules during handover scenarios
- 4G: OmniSGW-C/OmniPGW-C updates rules during inter-eNodeB handover or TAU (Tracking Area Update)
- User plane may buffer packets during path switch
- Seamless transition between base stations without packet loss
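The uplink and downlink lookup chains described in steps 2 and 3 can be sketched in a few lines of Python. This is purely illustrative (the real datapath is eBPF code running in the kernel, and the map layouts shown in the next section are richer); the field names, TEIDs, and addresses below are made-up examples.

```python
# Simplified model of the PDR -> FAR lookup chain described above.
# Illustrative Python only, not the in-kernel eBPF datapath; rule fields
# and values are assumptions for the sketch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FAR:
    action: str                        # "FORWARD", "DROP", "BUFFER", ...
    outer_teid: Optional[int] = None   # set for downlink (GTP-U encapsulation)
    peer_addr: Optional[str] = None    # gNB/eNodeB address for downlink

@dataclass
class PDR:
    far_id: int
    qer_id: Optional[int] = None
    urr_id: Optional[int] = None

uplink_pdr_map = {0x1001: PDR(far_id=1, urr_id=10)}        # keyed by N3 TEID
downlink_pdr_map = {"10.60.0.5": PDR(far_id=2, qer_id=5)}  # keyed by UE IP
far_map = {
    1: FAR("FORWARD"),
    2: FAR("FORWARD", outer_teid=0x2002, peer_addr="192.0.2.10"),
}

def handle_uplink(teid: int) -> str:
    """Uplink: match by TEID, strip GTP-U, forward toward N6/SGi."""
    pdr = uplink_pdr_map.get(teid)
    if pdr is None:
        return "drop: no uplink PDR for TEID"
    far = far_map[pdr.far_id]
    return f"{far.action.lower()} decapsulated packet to data network"

def handle_downlink(ue_ip: str) -> str:
    """Downlink: match by UE IP, add GTP-U header, forward toward N3/S1-U."""
    pdr = downlink_pdr_map.get(ue_ip)
    if pdr is None:
        return "drop: no downlink PDR for UE IP"
    far = far_map[pdr.far_id]
    return f"{far.action.lower()} with outer TEID {far.outer_teid:#x} to {far.peer_addr}"

print(handle_uplink(0x1001))
print(handle_downlink("10.60.0.5"))
```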
Integration with Control Plane (4G and 5G)
OmniUPF integrates with both 5G and 4G control plane functions via standard 3GPP interfaces:
5G Interfaces
| Interface | From → To | Purpose | 3GPP Spec |
|---|---|---|---|
| N4 | OmniSMF ↔ OmniUPF | PFCP session establishment, modification, deletion | TS 29.244 |
| N3 | gNB → OmniUPF | User plane traffic from RAN (GTP-U) | TS 29.281 |
| N6 | OmniUPF → Data Network | User plane traffic to DN (native IP) | TS 23.501 |
| N9 | OmniUPF ↔ OmniUPF | Inter-UPF communication for roaming/edge | TS 23.501 |
4G/EPC Interfaces
| Interface | From → To | Purpose | 3GPP Spec |
|---|---|---|---|
| Sxa | OmniSGW-C ↔ OmniUPF (SGW-U mode) | PFCP session control for serving gateway | TS 29.244 |
| Sxb | OmniPGW-C ↔ OmniUPF (PGW-U mode) | PFCP session control for PDN gateway | TS 29.244 |
| S1-U | eNodeB → OmniUPF (SGW-U mode) | User plane traffic from RAN (GTP-U) | TS 29.281 |
| S5/S8 | OmniUPF (SGW-U) ↔ OmniUPF (PGW-U) | Inter-gateway user plane (GTP-U) | TS 29.281 |
| SGi | OmniUPF (PGW-U mode) → PDN | User plane traffic to data network (native IP) | TS 23.401 |
Note: All PFCP interfaces (N4, Sxa, Sxb) use the same PFCP protocol defined in TS 29.244. The interface names differ, but the protocol and message formats are identical.
For PFCP session management, see PFCP Operations.
UPF Components
eBPF Datapath
The eBPF datapath is the core packet processing engine that runs in the Linux kernel for maximum performance.
Core Functions:
- GTP-U Processing: Encapsulation and decapsulation of GTP-U tunnels
- Packet Classification: Matching packets against PDR rules using TEID, UE IP, or SDF filters
- QoS Enforcement: Apply rate limiting and packet marking per QER rules
- Forwarding Decisions: Execute FAR actions (forward, drop, buffer, duplicate, notify)
- Usage Tracking: Increment URR counters for volume-based charging
eBPF Maps: The datapath uses eBPF maps (hash tables in kernel memory) for rule storage:
| Map Name | Purpose | Key | Value |
|---|---|---|---|
| uplink_pdr_map | Uplink PDRs | TEID (32-bit) | PDR info (FAR ID, QER ID, URR IDs) |
| downlink_pdr_map | Downlink PDRs (IPv4) | UE IP address | PDR info |
| downlink_pdr_map_ip6 | Downlink PDRs (IPv6) | UE IPv6 address | PDR info |
| far_map | Forwarding rules | FAR ID | Forwarding parameters (action, tunnel info) |
| qer_map | QoS rules | QER ID | QoS parameters (MBR, GBR, marking) |
| urr_map | Usage tracking | URR ID | Volume counters (uplink, downlink, total) |
| sdf_filter_map | SDF filters | PDR ID | Application filters (ports, protocols) |
Performance Characteristics:
- Zero-copy: Packets processed entirely in kernel space
- XDP support: Attach at network driver level for sub-microsecond latency
- Multi-core: Scales across CPU cores with per-CPU map support
- Capacity: Millions of PDRs/FARs in eBPF maps (limited by kernel memory)
For capacity monitoring, see Capacity Management.
PFCP Interface Handler
The PFCP interface implements 3GPP TS 29.244 for communication with the SMF, PGW-C, or SGW-C.
Core Functions:
- Association Management: PFCP heartbeat and association setup/release
- Session Lifecycle: Create, modify, and delete PFCP sessions
- Rule Installation: Translate PFCP IEs into eBPF map entries
- Event Reporting: Notify SMF of usage thresholds, errors, or session events
PFCP Message Support:
| Message Type | Direction | Purpose |
|---|---|---|
| Association Setup | SMF → UPF | Establish PFCP control association |
| Association Release | SMF → UPF | Tear down PFCP association |
| Heartbeat | Bidirectional | Keep association alive |
| Session Establishment | SMF → UPF | Create new PDU session with PDR/FAR/QER/URR |
| Session Modification | SMF → UPF | Update rules for mobility, QoS changes |
| Session Deletion | SMF → UPF | Remove session and all associated rules |
| Session Report | UPF → SMF | Report usage, errors, or events |
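As a concrete illustration of the wire format these messages share, the sketch below hand-encodes a PFCP Heartbeat Request per TS 29.244: a node-related header (version 1, no SEID) followed by a Recovery Time Stamp IE. This is a protocol illustration only, not OmniUPF source code.

```python
# Hand-encode a PFCP Heartbeat Request (TS 29.244) to show the message layout.
import struct
import time

def pfcp_heartbeat_request(seq: int) -> bytes:
    # Recovery Time Stamp IE: type 96, length 4, seconds since 1900 (NTP era)
    recovery = int(time.time()) + 2208988800
    ie = struct.pack("!HHI", 96, 4, recovery)

    flags = 0x20     # version 001, S=0 (node-related message, no SEID present)
    msg_type = 1     # Heartbeat Request
    body = seq.to_bytes(3, "big") + b"\x00" + ie   # sequence (3 octets) + spare + IEs
    length = len(body)                             # excludes the first 4 header octets
    return struct.pack("!BBH", flags, msg_type, length) + body

msg = pfcp_heartbeat_request(seq=42)
print(msg.hex())
# A real SMF/UPF sends this over UDP port 8805 and expects a Heartbeat
# Response (message type 2) carrying the peer's Recovery Time Stamp.
```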
Information Elements (IE) Supported:
- Create PDR, FAR, QER, URR
- Update PDR, FAR, QER, URR
- Remove PDR, FAR, QER, URR
- Packet Detection Information (UE IP, F-TEID, SDF filter)
- Forwarding Parameters (network instance, outer header creation)
- QoS Parameters (MBR, GBR, QFI)
- Usage Report Triggers (volume threshold, time threshold)
For detailed PFCP operations, see PFCP Operations Guide.
REST API Server
The REST API provides programmatic access to UPF state and operations.
Core Functions:
- Session Monitoring: Query active PFCP sessions and associations
- Rule Inspection: View PDR, FAR, QER, URR configurations
- Statistics: Retrieve packet counters, route stats, XDP stats
- Buffer Management: View and control packet buffers
- Map Information: Monitor eBPF map usage and capacity
API Endpoints (34 total):
| Category | Endpoints | Description |
|---|---|---|
| Health | /health | Health check and status |
| Config | /config | UPF configuration |
| Sessions | /pfcp_sessions, /pfcp_associations | PFCP session/association data |
| PDRs | /uplink_pdr_map, /downlink_pdr_map, /downlink_pdr_map_ip6, /uplink_pdr_map_ip6 | Packet detection rules |
| FARs | /far_map | Forwarding action rules |
| QERs | /qer_map | QoS enforcement rules |
| URRs | /urr_map | Usage reporting rules |
| Buffers | /buffer | Packet buffer status and control |
| Statistics | /packet_stats, /route_stats, /xdp_stats, /n3n6_stats | Performance metrics |
| Capacity | /map_info | eBPF map capacity and usage |
| Dataplane | /dataplane_config | N3/N9 interface addresses |
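A minimal polling script against these endpoints might look like the sketch below, using only the Python standard library. The endpoint paths come from the table above; the base URL, port, and response shapes are assumptions, and deployments may expose the paths under a versioned prefix such as /api/v1 (see the Quick Reference later in this guide).

```python
# Minimal monitoring sketch; adjust BASE and paths to your deployment.
import json
import urllib.request

BASE = "http://localhost:8080"   # assumed API address (see api_address in config)

def get(path: str):
    with urllib.request.urlopen(f"{BASE}{path}", timeout=5) as resp:
        return json.loads(resp.read())

health = get("/health")
sessions = get("/pfcp_sessions")
map_info = get("/map_info")

print("health:", health)
print("active PFCP sessions:", len(sessions) if isinstance(sessions, list) else sessions)
print("map capacity info:", map_info)
```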
For API details and usage, see PFCP Operations Guide and Monitoring Guide.
Web Control Panel
The Web Control Panel provides a real-time dashboard for UPF monitoring and management.
Features:
- Sessions View: Browse active PFCP sessions with UE IP, TEID, and rule counts
- Rules Management: View and manage PDRs, FARs, QERs, and URRs across all sessions
- Buffer Monitoring: Track buffered packets and control buffering per FAR
- Statistics Dashboard: Real-time packet, route, XDP, and N3/N6 interface statistics
- Capacity Monitoring: eBPF map usage with color-coded capacity indicators
- Configuration View: Display UPF configuration and dataplane addresses
- Logs Viewer: Live log streaming for troubleshooting
For detailed UI operations, see Web UI Operations Guide.
PFCP Protocol and SMF Integration
PFCP Association
Before sessions can be created, the SMF must establish a PFCP association with the UPF.
Association Lifecycle:
Key Points:
- Each SMF establishes one association with the UPF
- UPF tracks association by Node ID (FQDN or IP address)
- Heartbeat messages maintain association liveness
- All sessions under an association are deleted if association is released
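The bookkeeping implied by these points can be sketched as follows; the data structures, timeout value, and callback names are illustrative assumptions, not the UPF's internal implementation.

```python
# Track associations by Node ID, refresh liveness on heartbeat, and cascade
# session removal when an association is released.
import time

HEARTBEAT_TIMEOUT = 30.0   # assumed liveness window in seconds

associations = {}   # node_id -> {"last_heartbeat": float, "sessions": set()}

def on_association_setup(node_id: str) -> None:
    associations[node_id] = {"last_heartbeat": time.time(), "sessions": set()}

def on_heartbeat(node_id: str) -> None:
    if node_id in associations:
        associations[node_id]["last_heartbeat"] = time.time()

def on_session_establishment(node_id: str, seid: int) -> None:
    associations[node_id]["sessions"].add(seid)

def on_association_release(node_id: str) -> set:
    # Releasing the association removes every session created under it.
    released = associations.pop(node_id, {"sessions": set()})
    return released["sessions"]

def stale_associations(now: float) -> list:
    return [n for n, a in associations.items()
            if now - a["last_heartbeat"] > HEARTBEAT_TIMEOUT]

on_association_setup("smf1.example.org")
on_session_establishment("smf1.example.org", seid=0x1)
print(on_association_release("smf1.example.org"))   # {1}
```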
For viewing associations, see Sessions View.
PFCP Session Creation
When a UE establishes a PDU session (5G) or PDP context (LTE), the SMF creates a PFCP session at the UPF.
Session Establishment Flow:
Typical Session Contents:
- Uplink PDR: Match on N3 TEID, forward via FAR to N6
- Downlink PDR: Match on UE IP address, forward via FAR to N3 with GTP-U encapsulation
- FAR: Forwarding parameters (outer header creation, network instance)
- QER: QoS limits (MBR, GBR) and packet marking (QFI)
- URR: Volume reporting for charging (optional)
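For orientation, the sketch below shows what such a session might look like once installed, expressed as a simplified Python structure. The field names and values are illustrative and do not follow the exact IE encoding of TS 29.244.

```python
# A simplified, assumed view of a newly established PFCP session's contents,
# mirroring the "Typical Session Contents" list above.
typical_session = {
    "seid": {"local": 0x1001, "remote": 0x9001},
    "pdrs": [
        {   # Uplink: match GTP-U traffic from the gNB by TEID, send to N6
            "pdr_id": 1, "direction": "uplink",
            "match": {"teid": 0x2A, "source_interface": "N3"},
            "far_id": 1, "qer_id": 1, "urr_id": 1,
        },
        {   # Downlink: match native IP traffic by UE address, tunnel toward N3
            "pdr_id": 2, "direction": "downlink",
            "match": {"ue_ip": "10.60.0.5", "source_interface": "N6"},
            "far_id": 2, "qer_id": 1, "urr_id": 1,
        },
    ],
    "fars": [
        {"far_id": 1, "action": "FORWARD", "destination": "N6"},
        {"far_id": 2, "action": "FORWARD", "destination": "N3",
         "outer_header_creation": {"teid": 0x3B, "peer": "192.0.2.10"}},
    ],
    "qers": [{"qer_id": 1, "mbr_ul_kbps": 100_000, "mbr_dl_kbps": 200_000, "qfi": 9}],
    "urrs": [{"urr_id": 1, "volume_threshold_bytes": 1_000_000_000}],
}
```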
For session monitoring, see PFCP Operations.
PFCP Session Modification
SMF can modify sessions for mobility events (handover), QoS changes, or service updates.
Common Modification Scenarios:
1. Handover (N2-based)
- Update uplink FAR with new gNB tunnel endpoint (F-TEID)
- Optionally buffer packets during path switch
- Flush buffer to new path when ready
2. QoS Change
- Update QER with new MBR/GBR values
- May add/remove SDF filters in PDR for application-specific QoS
3. Service Update
- Add new PDRs for additional traffic flows
- Modify FARs for routing changes
Session Modification Flow:
For rule management, see Rules Management Guide.
PFCP Session Deletion
When a PDU session is released, SMF deletes the PFCP session at UPF.
Session Deletion Flow:
Cleanup Performed:
- All PDRs removed (uplink and downlink)
- All FARs, QERs, URRs removed
- Packet buffers cleared
- Final usage report sent to SMF for charging
Common Operations
OmniUPF provides comprehensive operational capabilities through its web-based control panel and REST API. This section covers common operational tasks and their significance.
Session Monitoring
Understanding PFCP Sessions:
PFCP sessions represent active UE PDU sessions (5G) or PDP contexts (LTE). Each session contains:
- Local and remote SEIDs (Session Endpoint Identifiers)
- PDRs for packet classification
- FARs for forwarding decisions
- QERs for QoS enforcement (optional)
- URRs for usage tracking (optional)
Key Session Operations:
- View all sessions with UE IP addresses, TEIDs, and rule counts
- Filter sessions by IP address or TEID
- Inspect session details including full PDR/FAR/QER/URR configurations
- Monitor session counts per PFCP association
For detailed session procedures, see Sessions View.
Rule Management
Packet Detection Rules (PDR):
PDRs determine which packets match specific traffic flows. Operators can:
- View uplink PDRs keyed by TEID from N3 interface
- View downlink PDRs keyed by UE IP address (IPv4 and IPv6)
- Inspect SDF filters for application-specific classification
- Monitor PDR counts and capacity usage
Forwarding Action Rules (FAR):
FARs define what to do with matched packets. Operators can:
- View FAR actions (FORWARD, DROP, BUFFER, DUPLICATE, NOTIFY)
- Inspect forwarding parameters (outer header creation, destination)
- Monitor buffering status per FAR
- Toggle buffering for specific FARs during troubleshooting
QoS Enforcement Rules (QER):
QERs apply bandwidth limits and packet marking. Operators can:
- View QoS parameters (MBR, GBR, packet delay budget)
- Monitor active QERs per session
- Inspect QFI markings for 5G QoS flows
Usage Reporting Rules (URR):
URRs track data volumes for charging. Operators can:
- View volume counters (uplink, downlink, total bytes)
- Monitor usage thresholds and reporting triggers
- Inspect active URRs across all sessions
For rule operations, see Rules Management Guide.
Packet Buffering
Why Buffering is Critical for UPF
Packet buffering is one of the most important functions of a UPF because it prevents packet loss during mobility events and session reconfigurations. Without buffering, mobile users would experience dropped connections, interrupted downloads, and failed real-time communications every time they move between cell towers or when network conditions change.
The Problem: Packet Loss During Mobility
In mobile networks, users are constantly moving. When a device moves from one cell tower to another (handover), or when the network needs to reconfigure the data path, there's a critical window where packets are in flight but the new path isn't ready yet:
Without buffering: Packets arriving during this critical window would be dropped, causing:
- TCP connections to stall or reset (web browsing, downloads interrupted)
- Video calls to freeze or drop (Zoom, Teams, WhatsApp calls fail)
- Gaming sessions to disconnect (online gaming, real-time apps fail)
- VoIP calls to have gaps or drop entirely (phone calls interrupted)
- Downloads to fail and need to restart
With buffering: OmniUPF temporarily holds packets until the new path is established, then forwards them seamlessly. The user experiences zero interruption.
When Buffering Happens
OmniUPF buffers packets in these critical scenarios:
1. N2-Based Handover (5G) / X2-Based Handover (4G)
When a UE moves between cell towers:
Timeline:
- T+0ms: Old path still active
- T+10ms: SMF tells UPF to buffer (old path closing, new path not ready)
- T+10-50ms: Critical buffering window - packets arrive but can't be forwarded
- T+50ms: New path ready, SMF tells UPF to forward
- T+50ms+: UPF flushes buffered packets to new path, then forwards new packets normally
Without buffering: ~40ms of packets (potentially thousands) would be lost. With buffering: Zero packet loss, seamless handover.
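A quick back-of-the-envelope calculation shows why the loss is "potentially thousands" of packets; the bitrates and packet size below are illustrative assumptions.

```python
# How many downlink packets arrive during the buffering window at a given bitrate.
def packets_in_window(bitrate_bps: float, window_ms: float, pkt_bytes: int = 1400) -> int:
    return int(bitrate_bps / 8 / pkt_bytes * (window_ms / 1000))

for rate in (50e6, 300e6, 1e9):   # 50 Mbps, 300 Mbps, 1 Gbps downlink
    print(f"{rate / 1e6:>5.0f} Mbps -> {packets_in_window(rate, 40)} packets in 40 ms")
# ~178 packets at 50 Mbps, ~1071 at 300 Mbps, ~3571 at 1 Gbps, all of which
# would be lost without buffering.
```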
2. Session Modification (QoS Change, Path Update)
When the network needs to change session parameters:
- QoS upgrade/downgrade: User moves from 4G to 5G coverage (NSA mode)
- Policy change: Enterprise user enters corporate campus (traffic steering changes)
- Network optimization: Core network reroutes traffic to closer UPF (ULCL update)
During the modification, the control plane may need to update multiple rules atomically. Buffering ensures packets aren't forwarded with partial/inconsistent rule sets.
3. Downlink Data Notification (Idle Mode Recovery)
When a UE is in idle mode (screen off, battery saving) and downlink data arrives:
Without buffering: The initial packet that triggered the notification would be lost, requiring the sender to retransmit (adds latency). With buffering: The packet that woke up the UE is delivered immediately when the UE reconnects.
4. Inter-RAT Handover (4G ↔ 5G)
When a UE moves between 4G and 5G coverage:
- Architecture changes (eNodeB ↔ gNB)
- Tunnel endpoints change (different TEID allocation)
- Buffering ensures smooth transition between RAT types
How Buffering Works in OmniUPF
Technical Mechanism:
OmniUPF uses a two-stage buffering architecture:
- eBPF Stage (Kernel): Detects packets requiring buffering based on FAR action flags
- Userspace Stage: Stores and manages buffered packets in memory
Buffering Process:
Key Details:
- Buffer Port: UDP port 22152 (packets sent from eBPF to userspace)
- Encapsulation: Packets wrapped in GTP-U with FAR ID as TEID
- Storage: In-memory per-FAR buffers with metadata (timestamp, direction, packet size)
- Limits:
- Per-FAR limit: 10,000 packets (default)
- Global limit: 100,000 packets across all FARs
- TTL: 30 seconds (default) - packets older than TTL are discarded
- Cleanup: Background process removes expired packets every 60 seconds
Buffer Lifecycle:
- Buffering Enabled: SMF sets FAR action BUFF=1 (bit 2) via PFCP Session Modification
- Packets Buffered: eBPF detects BUFF flag, encapsulates packets, sends to port 22152
- Userspace Storage: Buffer manager stores packets with FAR ID, timestamp, direction
- Buffering Disabled: SMF sets FAR action FORW=1, BUFF=0 with new forwarding parameters
- Flush Buffer: Userspace replays buffered packets using new FAR rules (new tunnel endpoint)
- Resume Normal: New packets forwarded immediately via new path
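The userspace stage can be sketched roughly as follows: listen on the buffer port, read the FAR ID from the GTP-U TEID field, store packets with a timestamp, and replay them on flush. This is a simplified illustration built around the defaults listed above, not the product's implementation.

```python
# Sketch of a userspace buffer manager for packets redirected by the eBPF stage.
import socket
import struct
import time
from collections import defaultdict, deque

BUFFER_PORT = 22152
MAX_PER_FAR = 10_000
PACKET_TTL = 30.0          # seconds

buffers = defaultdict(deque)   # far_id -> deque of (timestamp, inner_packet)

def store(datagram: bytes) -> None:
    # Minimal GTP-U header: flags, message type, length, TEID (carries the FAR ID here)
    if len(datagram) < 8:
        return
    _flags, _msg_type, _length, far_id = struct.unpack("!BBHI", datagram[:8])
    q = buffers[far_id]
    if len(q) >= MAX_PER_FAR:
        q.popleft()                       # drop the oldest when the per-FAR limit is hit
    q.append((time.time(), datagram[8:]))

def expire(now: float) -> None:
    # Background cleanup: discard packets older than the TTL.
    for q in buffers.values():
        while q and now - q[0][0] > PACKET_TTL:
            q.popleft()

def flush(far_id: int) -> list:
    # Called when the FAR switches back to FORWARD: replay in arrival order.
    q = buffers.pop(far_id, deque())
    return [pkt for _ts, pkt in q]

def serve() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", BUFFER_PORT))
    while True:
        datagram, _addr = sock.recvfrom(65535)
        store(datagram)
```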
Why This Matters for User Experience
Real-World Impact:
| Scenario | Without Buffering | With Buffering |
|---|---|---|
| Video Call During Handover | Call freezes for 1-2 seconds, may drop | Seamless, no interruption |
| File Download at Cell Edge | Download fails, must restart | Download continues uninterrupted |
| Online Gaming While Moving | Connection drops, kicked from game | Smooth gameplay, no disconnects |
| VoIP Call in Car | Call drops every handover | Crystal clear, no drops |
| Streaming Video on Train | Video buffers, quality drops | Smooth playback |
| Mobile Hotspot for Laptop | SSH session drops, video call fails | All connections maintained |
Network Operator Benefits:
- Reduced Call Drop Rate (CDR): Critical KPI for network quality
- Higher Customer Satisfaction: Users don't notice handovers
- Lower Support Costs: Fewer complaints about dropped connections
- Competitive Advantage: "Best network for coverage" marketing
Buffer Management Operations
Operators can monitor and control buffering via the Web UI and API:
Monitoring:
- View buffered packets per FAR ID (count, bytes, age)
- Track buffer usage against limits (per-FAR, global)
- Alert on buffer overflow or excessive buffering duration
- Identify stuck buffers (packets buffered > TTL threshold)
Control Operations:
- Flush buffers: Manually trigger buffer replay (troubleshooting)
- Clear buffers: Discard buffered packets (clean up stuck buffers)
- Adjust TTL: Change packet expiration time
- Modify limits: Increase per-FAR or global buffer capacity
Troubleshooting:
- Buffer not flushing: Check if SMF sent FAR update to disable buffering
- Buffer overflow: Increase limits or investigate why buffering duration is excessive
- Old packets in buffer: TTL may be too high, or FAR update delayed
- Excessive buffering: May indicate mobility issues or SMF problems
For detailed buffer operations, see Buffer Management Guide.
Buffer Configuration
Configure buffering behavior in config.yml:
```yaml
# Buffer settings
buffer_port: 22152            # UDP port for buffered packets (default)
buffer_max_packets: 10000     # Max packets per FAR (prevent memory exhaustion)
buffer_max_total: 100000      # Max total packets across all FARs
buffer_packet_ttl: 30         # TTL in seconds (discard old packets)
buffer_cleanup_interval: 60   # Cleanup interval in seconds
```
Recommendations:
- High-mobility networks (highways, trains): Increase `buffer_max_packets` to 20,000+
- Dense urban areas (frequent handovers): Decrease `buffer_packet_ttl` to 15s
- Low-latency applications: Set `buffer_packet_ttl` to 10s to prevent stale data
- IoT networks: Decrease limits (IoT devices generate less traffic during handover)
For complete configuration options, see Configuration Guide.
Statistics and Monitoring
Packet Statistics:
Real-time packet processing metrics including:
- RX packets: Total received from all interfaces
- TX packets: Total transmitted to all interfaces
- Dropped packets: Packets discarded due to errors or policy
- GTP-U packets: Tunneled packet counts
Route Statistics:
Per-route forwarding metrics:
- Route hits: Packets matched by each route
- Forwarding counts: Success/failure per destination
- Error counters: Invalid TEIDs, unknown UE IPs
XDP Statistics:
eXpress Data Path performance metrics:
- XDP processed: Packets handled at XDP layer
- XDP passed: Packets sent to network stack
- XDP dropped: Packets dropped at XDP layer
- XDP aborted: Processing errors
N3/N6 Interface Statistics:
Per-interface traffic counters:
- N3 RX/TX: Traffic to/from RAN (gNB/eNodeB)
- N6 RX/TX: Traffic to/from data network
- Total packet counts: Aggregate interface statistics
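These counters are most useful when reduced to a drop rate and compared against the thresholds listed in the Quick Reference later in this guide. The sketch below shows one way to do that; the 1-5% "degraded" band is an assumed interpolation between the documented thresholds.

```python
# Reduce raw packet counters to a drop-rate health signal.
def drop_rate_pct(rx_packets: int, dropped_packets: int) -> float:
    return 100.0 * dropped_packets / rx_packets if rx_packets else 0.0

def classify(rate_pct: float) -> str:
    if rate_pct < 0.1:
        return "excellent"
    if rate_pct <= 1.0:
        return "good - minor issues"
    if rate_pct <= 5.0:
        return "degraded - investigate"       # assumed band between documented thresholds
    return "critical - investigate immediately"

rate = drop_rate_pct(rx_packets=12_500_000, dropped_packets=4_800)
print(f"drop rate {rate:.3f}% -> {classify(rate)}")   # 0.038% -> excellent
```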
For monitoring details, see Monitoring Guide.
Capacity Management
eBPF Map Capacity Monitoring:
UPF performance depends on eBPF map capacity. Operators can:
- Monitor map usage with real-time percentage indicators
- View capacity limits for each eBPF map
- Color-coded alerts:
- Green (<50%): Normal
- Yellow (50-70%): Caution
- Amber (70-90%): Warning
- Red (>90%): Critical
Critical Maps to Monitor:
- `uplink_pdr_map`: Uplink traffic classification
- `downlink_pdr_map`: Downlink IPv4 traffic classification
- `far_map`: Forwarding rules
- `qer_map`: QoS rules
- `urr_map`: Usage tracking
Capacity Planning:
- Each PDR consumes one map entry (key size + value size)
- Map capacity is configured at UPF startup (kernel memory limit)
- Exceeding capacity causes session establishment failures
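The points above translate directly into a sizing estimate: memory is roughly entries × (key size + value size), and usage should be tracked against the color-coded zones. The key/value sizes and session counts below are illustrative assumptions; use the sizes reported by the map information endpoint for real planning.

```python
# Rough eBPF map sizing math and capacity-zone classification.
def map_memory_mb(max_entries: int, key_bytes: int, value_bytes: int) -> float:
    # Ignores per-entry kernel overhead, which adds further headroom in practice.
    return max_entries * (key_bytes + value_bytes) / (1024 * 1024)

def usage_zone(used: int, capacity: int) -> str:
    pct = 100.0 * used / capacity
    if pct < 50:
        return "green"
    if pct < 70:
        return "yellow"
    if pct < 90:
        return "amber"
    return "red"

# Example: sizing PDR maps for 100,000 sessions (one uplink + one downlink PDR each)
sessions = 100_000
for name, key_bytes in (("uplink_pdr_map", 4), ("downlink_pdr_map", 4)):
    print(f"{name}: ~{map_memory_mb(sessions, key_bytes, 64):.1f} MB for {sessions} entries")
print("zone at 85,000 / 100,000 entries:", usage_zone(85_000, 100_000))   # amber
```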
For capacity monitoring, see Capacity Management.
Configuration Management
UPF Configuration:
View and verify UPF operational parameters:
- N3 Interface: IP address for RAN connectivity (GTP-U)
- N6 Interface: IP address for data network connectivity
- N9 Interface: IP address for inter-UPF communication (optional)
- PFCP Interface: IP address for SMF connectivity
- API Port: REST API listening port
- Metrics Endpoint: Prometheus metrics port
Dataplane Configuration:
Active eBPF datapath parameters:
- Active N3 address: Runtime N3 interface binding
- Active N9 address: Runtime N9 interface binding (if enabled)
For configuration viewing, see Configuration View.
Troubleshooting
This section covers common operational issues and their resolution strategies.
Session Establishment Failures
Symptoms: PFCP sessions fail to create, UE cannot establish data connectivity
Common Root Causes:
1. PFCP Association Not Established
- Verify SMF can reach UPF PFCP interface (port 8805)
- Check PFCP association status in Sessions view
- Verify Node ID configuration matches between SMF and UPF
2. eBPF Map Capacity Exhausted
- Check Capacity view for red (>90%) map usage
- Increase eBPF map sizes in UPF configuration
- Delete stale sessions if map is full
3. Invalid PDR/FAR Configuration
- Verify UE IP address is unique and valid
- Check TEID allocation doesn't conflict
- Ensure FAR references valid network instances
4. Interface Configuration Issues
- Verify N3 interface IP is reachable from gNB
- Check routing tables for N6 connectivity to data network
- Confirm GTP-U traffic is not blocked by firewall
For detailed troubleshooting, see Troubleshooting Guide.
Packet Loss or Forwarding Issues
Symptoms: UE has connectivity but experiences packet loss or no traffic flow
Common Root Causes:
1. PDR Misconfiguration
- Verify uplink PDR TEID matches gNB-assigned TEID
- Check downlink PDR UE IP matches assigned IP
- Inspect SDF filters for overly restrictive rules
2. FAR Action Issues
- Verify FAR action is FORWARD (not DROP or BUFFER)
- Check outer header creation parameters for GTP-U
- Ensure destination endpoint is correct
3. QoS Limits Exceeded
- Check QER MBR (Maximum Bit Rate) settings
- Verify GBR (Guaranteed Bit Rate) allocation
- Monitor packet drops due to rate limiting
4. Interface MTU Issues
- Verify GTP-U overhead (40-50 bytes) doesn't cause fragmentation
- Check N3/N6 interface MTU configuration
- Monitor for ICMP fragmentation needed messages
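The arithmetic behind the MTU check is worth making explicit: GTP-U encapsulation adds an outer IP + UDP + GTP-U header, so either the transport MTU must be raised or the UE-facing MTU lowered to avoid fragmentation. The header sizes below are standard; the interface MTUs are example values.

```python
# Effective inner payload after GTP-U encapsulation.
OUTER_IPV4 = 20
OUTER_IPV6 = 40
UDP = 8
GTPU = 8          # base header; optional extension headers add a few more bytes

def max_inner_payload(transport_mtu: int, ipv6_outer: bool = False) -> int:
    outer = (OUTER_IPV6 if ipv6_outer else OUTER_IPV4) + UDP + GTPU
    return transport_mtu - outer

print(max_inner_payload(1500))         # 1464: UE packets larger than this fragment
print(max_inner_payload(1500, True))   # 1444 with an IPv6 outer header
print(max_inner_payload(1600))         # 1564: a 1600-byte N3 MTU leaves headroom
```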
Buffer-Related Issues
Symptoms: Packets buffered indefinitely, buffer overflow
Common Root Causes:
1. Buffering Not Disabled After Handover
- Check FAR buffering flag (bit 2)
- Verify SMF sent Session Modification to disable buffering
- Manually disable buffering via control panel if stuck
2. Buffer TTL Expiration
- Check packet age in buffer view
- Verify buffer TTL configuration (default may be too long)
- Clear expired buffers manually
3. Buffer Capacity Exhausted
- Monitor total buffer usage and per-FAR limits
- Check for misconfigured rules causing excessive buffering
- Adjust the per-FAR (`buffer_max_packets`) and global (`buffer_max_total`) buffer limits
For buffer troubleshooting, see Buffer Operations.
Statistics Anomalies
Symptoms: Unexpected packet counters, missing statistics
Common Root Causes:
1. Counter Overflow
- eBPF maps use 64-bit counters (should not overflow)
- Check for counter reset events in logs
- Verify URR reporting is functioning
2. Route Statistics Not Updating
- Verify eBPF program is attached to interfaces
- Check kernel version supports required eBPF features
- Review XDP statistics for processing errors
3. Interface Statistics Mismatch
- Compare N3/N6 stats with kernel interface counters
- Check for traffic bypassing eBPF (e.g., local routing)
- Verify all traffic flows through XDP hooks
Performance Degradation
Symptoms: High latency, low throughput, CPU saturation
Diagnosis:
- Monitor XDP Statistics: Check for XDP drops or aborts
- Check eBPF Map Access Time: Hash lookups should be sub-microsecond
- Review CPU Utilization: eBPF should distribute across cores
- Analyze Network Interface: Verify NIC supports XDP offload
Scalability Considerations:
- XDP Performance: 10M+ packets per second per core
- PDR Capacity: Millions of PDRs limited only by kernel memory
- Session Count: Thousands of concurrent sessions per UPF instance
- Throughput: Multi-gigabit throughput with proper NIC offload
For performance tuning, see Architecture Guide.
Additional Documentation
Component-Specific Operations Guides
For detailed operations and troubleshooting for each UPF component:
Configuration Guide
Complete configuration reference including:
- Configuration parameters (YAML, environment variables, CLI)
- Operating modes (UPF/PGW-U/SGW-U)
- XDP attachment modes (generic/native/offload)
- Hypervisor compatibility (Proxmox, VMware, KVM, Hyper-V, VirtualBox)
- NIC compatibility and XDP driver support
- Configuration examples for different scenarios
- Map sizing and capacity planning
Architecture Guide
Deep technical dive including:
- eBPF technology foundation and program lifecycle
- XDP packet processing pipeline with tail calls
- PFCP protocol implementation
- Buffering architecture (GTP-U encapsulation to port 22152)
- QoS sliding window rate limiting (5ms window)
- Performance characteristics (3.5μs latency, 10 Mpps/core)
Rules Management Guide
PFCP rules reference including:
- Packet Detection Rules (PDR) - Traffic classification
- Forwarding Action Rules (FAR) - Routing decisions with action flags
- QoS Enforcement Rules (QER) - Bandwidth management (MBR/GBR)
- Usage Reporting Rules (URR) - Volume tracking and reporting
- Uplink and downlink packet flow diagrams
- Rule processing logic and precedence
Monitoring Guide
Statistics and capacity management including:
- N3/N6 interface statistics and traffic distribution
- XDP processing statistics (pass/drop/redirect/abort)
- eBPF map capacity monitoring with color-coded zones
- Performance metrics (packet rate, throughput, drop rate)
- Capacity planning formulas and session estimation
- Alerting thresholds and best practices
Web UI Operations Guide
Control panel usage including:
- Dashboard overview and navigation
- Sessions monitoring (healthy/unhealthy states)
- Rules inspection (PDR, FAR, QER, URR details)
- Buffer monitoring and packet buffering state
- Real-time statistics dashboard
- eBPF map capacity visualization
- Configuration viewing
API Documentation
Complete REST API reference including:
- OpenAPI/Swagger interactive documentation
- PFCP sessions and associations endpoints
- Packet Detection Rules (PDR) - IPv4 and IPv6
- Forwarding Action Rules (FAR)
- QoS Enforcement Rules (QER)
- Usage Reporting Rules (URR)
- Packet buffer management
- Statistics and monitoring endpoints
- Route management and FRR integration
- eBPF map information
- Configuration management
- Authentication and security guidelines
- Common API workflows and examples
UE Route Management Guide
FRR routing integration including:
- FRR (Free Range Routing) overview and architecture
- UE route synchronization lifecycle
- Automatic route sync to routing daemon
- Route advertisement via OSPF and BGP
- OSPF neighbor monitoring
- OSPF External LSA database verification
- BGP peer session management
- Web UI route monitoring interface
- Manual route sync operations
- Mermaid diagrams for route flow and architecture
Troubleshooting Guide
Comprehensive problem diagnosis including:
- Quick diagnostic checklist and tools
- Installation and configuration issues
- PFCP association failures
- Packet processing problems
- XDP and eBPF errors
- Performance degradation
- Hypervisor-specific issues (Proxmox, VMware, VirtualBox)
- NIC and driver problems
- Step-by-step resolution procedures
Documentation by Use Case
Installing and Configuring OmniUPF
- Start with this guide for overview
- Configuration Guide for setup parameters
- Web UI Guide to access control panel
Deploying on Proxmox
- Configuration Guide - Hypervisor Compatibility
- Configuration Guide - Proxmox SR-IOV Setup
- Troubleshooting - Proxmox Issues
Optimizing Performance
- Architecture Guide - Performance Optimization
- Configuration Guide - XDP Modes
- Monitoring Guide - Performance Metrics
- Troubleshooting - Performance Issues
Understanding Packet Processing
- Architecture Guide - Packet Processing Pipeline
- Rules Management Guide
- Monitoring Guide - Statistics
Planning Capacity
- Configuration Guide - Map Sizing
- Monitoring Guide - Capacity Planning
- Monitoring Guide - Session Capacity Estimation
Managing UE Routes and FRR Integration
- UE Route Management Guide - Complete routing integration guide
- API Documentation - Route Management - Route API endpoints
- Web UI Guide - Routes page operations
- UE Route Management - FRR Verification - OSPF LSA verification
Using the REST API
- API Documentation - Complete API reference
- API Documentation - Swagger UI - Interactive API explorer
- API Documentation - Common Workflows - API usage examples
- Web UI Guide - Web interface as API client example
Troubleshooting Issues
- Troubleshooting Guide - Start here
- Monitoring Guide - Check statistics and capacity
- Web UI Guide - Use control panel diagnostics
Quick Reference
Common API Endpoints
OmniUPF provides a REST API for monitoring and management:
```
# Status and health
GET http://localhost:8080/api/v1/upf_status

# PFCP associations
GET http://localhost:8080/api/v1/upf_pipeline

# Sessions
GET http://localhost:8080/api/v1/sessions

# Statistics
GET http://localhost:8080/api/v1/packet_stats
GET http://localhost:8080/api/v1/xdp_stats

# Capacity monitoring
GET http://localhost:8080/api/v1/map_info

# Buffer statistics
GET http://localhost:8080/api/v1/upf_buffer_info
```
For complete API documentation, access the Swagger UI at http://<upf-ip>:8080/swagger/index.html
Essential Configuration Parameters
```yaml
# Network interfaces
interface_name: [eth0]        # Interfaces for N3/N6/N9 traffic
xdp_attach_mode: native       # generic|native|offload
n3_address: 10.100.50.233     # N3 interface IP
pfcp_address: :8805           # PFCP listen address
pfcp_node_id: 10.100.50.241   # PFCP Node ID

# Capacity
max_sessions: 100000          # Maximum concurrent sessions

# API and monitoring
api_address: :8080            # REST API port
metrics_address: :9090        # Prometheus metrics port
```
Important Monitoring Thresholds
- eBPF Map Capacity < 70%: Normal operation
- eBPF Map Capacity 70-90%: Plan capacity increase within 1 week
- eBPF Map Capacity > 90%: Critical - immediate action required
- Packet Drop Rate < 0.1%: Excellent
- Packet Drop Rate 0.1-1%: Good - minor issues
- Packet Drop Rate > 5%: Critical - investigate immediately
- XDP Aborted > 0: Critical issue with eBPF program
3GPP Standards Reference
OmniUPF implements the following 3GPP specifications:
| Specification | Title | Relevance |
|---|---|---|
| TS 23.501 | System architecture for the 5G System (5GS) | 5G UPF architecture and interfaces |
| TS 23.401 | General Packet Radio Service (GPRS) enhancements for E-UTRAN access | LTE UPF (PGW-U) architecture |
| TS 29.244 | Interface between the Control Plane and the User Plane nodes (PFCP) | N4 PFCP protocol |
| TS 29.281 | General Packet Radio System (GPRS) Tunnelling Protocol User Plane (GTPv1-U) | GTP-U encapsulation |
| TS 23.503 | Policy and charging control framework for the 5G System (5GS) | QoS and charging |
| TS 29.212 | Policy and Charging Control (PCC) | QoS enforcement |
Glossary
5G Architecture Terms
- 3GPP: 3rd Generation Partnership Project - Standards body for mobile telecommunications
- AMF: Access and Mobility Management Function - 5G core network element for access control
- CHF: Charging Function - 5G charging system
- DN: Data Network - External network (Internet, IMS, enterprise)
- eNodeB: Evolved Node B - LTE base station
- F-TEID: Fully Qualified Tunnel Endpoint Identifier - GTP-U tunnel ID with IP address
- gNB: Next Generation Node B - 5G base station
- GTP-U: GPRS Tunnelling Protocol User Plane - Tunneling protocol for user data
- MBR: Maximum Bit Rate - QoS parameter for maximum allowed bandwidth
- GBR: Guaranteed Bit Rate - QoS parameter for guaranteed minimum bandwidth
- N3: Interface between RAN and UPF (user plane traffic)
- N4: Interface between SMF and UPF (PFCP control)
- N6: Interface between UPF and Data Network (user plane traffic)
- N9: Interface between two UPFs (inter-UPF user plane traffic)
- PCF: Policy Control Function - 5G policy server
- PDU: Protocol Data Unit - Data session in 5G
- PGW-C: PDN Gateway Control Plane - LTE control plane equivalent to SMF
- PGW-U: PDN Gateway User Plane - LTE user plane (UPF equivalent)
- QFI: QoS Flow Identifier - 5G QoS flow marking
- QoS: Quality of Service - Traffic prioritization and bandwidth management
- RAN: Radio Access Network - Base station network (gNB/eNodeB)
- SEID: Session Endpoint Identifier - PFCP session ID
- SMF: Session Management Function - 5G core network element for session control
- TEID: Tunnel Endpoint Identifier - GTP-U tunnel ID
- UE: User Equipment - Mobile device
- UPF: User Plane Function - 5G packet forwarding network element
PFCP Protocol Terms
- Association: Control relationship between SMF and UPF
- FAR: Forwarding Action Rule - Determines packet forwarding behavior
- IE: Information Element - PFCP message component
- Node ID: UPF or SMF identifier (FQDN or IP address)
- PDR: Packet Detection Rule - Classifies packets into flows
- PFCP: Packet Forwarding Control Protocol - N4 control protocol
- QER: QoS Enforcement Rule - Applies bandwidth limits and marking
- SDF: Service Data Flow - Application-specific traffic filter
- Session: PFCP session representing UE PDU session or PDP context
- URR: Usage Reporting Rule - Tracks data volumes for charging
eBPF and Linux Kernel Terms
- BPF: Berkeley Packet Filter - Kernel packet filtering technology
- eBPF: Extended BPF - Programmable kernel data path
- Hash Map: eBPF key-value store for fast lookups
- XDP: eXpress Data Path - Kernel packet processing at driver level
- Verifier: Kernel component that validates eBPF programs for safety
- Map: eBPF data structure shared between kernel and userspace
- Zero-copy: Packet processing without copying to userspace
OmniUPF Product Terms
- OmniUPF: eBPF-based User Plane Function (this product)
- Datapath: Packet processing engine (eBPF programs)
- Control Plane: PFCP handler and session management
- REST API: HTTP API for monitoring and management
- Web UI: Browser-based control panel