Protocol Reference: Transport for Data with Deadlines
This is a quick-reference list of protocols for applications where data has a deadline, and late delivery is failure—whether that deadline is 10μs (servo loop), 20ms (audio buffer), or 300ms (video call).
For taxonomy, analysis, and context, see When Packets Can’t Wait. This page is just the inventory—a living reference that grows as protocols become relevant to JitterTrap development.
Transport Layer Protocols
UDP
- Full name: User Datagram Protocol
- Standard: RFC 768 (1980)
- Layer: Transport (kernel)
- Philosophy: Best-effort—no reliability, no ordering, no congestion control
- Data type: Datagrams (any application data)
- Status: Ubiquitous
- Notes: Foundation for most application-layer delay-sensitive protocols. Minimal overhead (8-byte header). Applications must handle loss, reordering, and congestion themselves—but gain full control over timing.
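As a concrete illustration of that trade-off, a minimal sketch using only the Python standard library: the sender stamps each datagram, and the receiver drops anything that has missed an illustrative 20 ms deadline instead of waiting for it. The address and deadline are arbitrary, and the one-way delay measurement assumes both ends share a clock (here, the same host).

```python
import socket
import struct
import time

ADDR = ("127.0.0.1", 5005)   # arbitrary endpoint for the sketch
DEADLINE_S = 0.020           # illustrative playout deadline (20 ms)

def send(sock: socket.socket, seq: int, payload: bytes) -> None:
    """Prefix each datagram with a sequence number and a send timestamp.
    UDP itself adds only its 8-byte header on top of this."""
    header = struct.pack("!Id", seq, time.time())
    sock.sendto(header + payload, ADDR)

def receive_once(sock: socket.socket) -> None:
    """Deliver on-time datagrams; drop late ones rather than wait.
    Assumes sender and receiver clocks agree (same host in this sketch)."""
    data, _ = sock.recvfrom(2048)
    seq, sent_at = struct.unpack("!Id", data[:12])
    age = time.time() - sent_at
    if age > DEADLINE_S:
        print(f"seq {seq}: {1000 * age:.1f} ms old, dropped")
    else:
        print(f"seq {seq}: delivered after {1000 * age:.1f} ms")

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(ADDR)
send(tx, seq=1, payload=b"sample")
receive_once(rx)
```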
TCP
- Full name: Transmission Control Protocol
- Standard: RFC 793 (1981), updated by RFC 9293 (2022)
- Layer: Transport (kernel)
- Philosophy: Guaranteed delivery—reliable, ordered byte stream with congestion control
- Data type: Byte streams (any application data)
- Status: Ubiquitous
- Notes: The baseline for comparison. Guarantees every byte arrives in order, but latency is unbounded during loss recovery. Congestion control (CUBIC, BBR) adapts to network conditions. Not delay-sensitive—will wait forever for a lost packet rather than skip it.
RDP
- Full name: Reliable Data Protocol
- Standard: RFC 908 (1984), RFC 1151 (1990)
- Layer: Transport
- Philosophy: Guaranteed delivery of datagrams
- Data type: Reliable datagrams (message boundaries preserved)
- Status: Historic (experimental, never widely deployed)
- Notes: Early attempt at “reliable UDP”—preserving message boundaries unlike TCP’s byte stream. Influenced later protocols like RUDP, UDT, and SCTP. Interesting historically but no modern implementations.
SCTP
- Full name: Stream Control Transmission Protocol
- Standard: RFC 4960 (2007), updated by RFC 9260 (2022); RFC 3758 (PR-SCTP)
- Layer: Transport (kernel)
- Philosophy: Guaranteed delivery with multi-streaming; PR-SCTP adds partial reliability
- Data type: Messages within multiple streams
- Status: Active, niche (SS7 signaling, WebRTC data channels, some HPC)
- Notes: TCP features (reliability, congestion control) plus: message boundaries, multiple independent streams (no head-of-line blocking), multi-homing (path redundancy). PR-SCTP extension enables timed reliability and limited retransmissions—making it delay-sensitive capable. Kernel support limited; WebRTC uses it over DTLS/UDP.
DCCP
- Full name: Datagram Congestion Control Protocol
- Standard: RFC 4340 (2006)
- Layer: Transport (kernel)
- Philosophy: Best-effort datagrams with congestion control
- Data type: Unreliable datagrams with congestion feedback
- Status: Deprecated (removed from Linux 6.16 in 2025)
- Notes: Elegant solution to “UDP with congestion control”—applications get datagram semantics while being network-friendly. Pluggable congestion control (TFRC for smooth rates, CCID2 for TCP-like). Failed due to poor adoption, middlebox issues, and the rise of userspace solutions like QUIC. A cautionary tale about kernel protocol deployment.
Application Layer Protocols
QUIC
- Full name: QUIC (originally “Quick UDP Internet Connections”)
- Standard: RFC 9000 (2021)
- Layer: Application (over UDP, userspace)
- Philosophy: Guaranteed delivery per-stream, no head-of-line blocking between streams
- Delivery: Push
- Topology: P2P
- Multiplexing: Streams (independent byte streams with stream IDs)
- Data type: Multiplexed byte streams (web content, API calls)
- Status: Ubiquitous (HTTP/3, YouTube, Cloudflare, Google)
- Notes: Modern TCP replacement for web traffic. Key innovation: stream multiplexing without head-of-line blocking—loss in one stream doesn’t stall others. 0-RTT connection establishment, mandatory TLS 1.3 encryption, connection migration across networks. Userspace implementation enables rapid iteration. Still guarantees delivery within each stream—not suitable for live video where late data should be dropped. Optimized for web latency (page loads), not media latency (playout deadlines).
MoQ
- Full name: Media over QUIC
- Standard: IETF MoQ WG (draft, 2022-present)
- Layer: Application (over QUIC)
- Philosophy: Bounded-time recovery—pub/sub with per-object expiry and relay fan-out
- Delivery: Hybrid (SUBSCRIBE pushes new objects; FETCH pulls historical objects)
- Topology: Relay (publisher → relays → subscribers)
- Multiplexing: Objects (tracks contain groups, which contain objects)
- Data type: Media objects (video frames, audio samples, metadata)
- Status: Emerging (IETF standardization in progress, experimental implementations)
- Notes: Bridges real-time (WebRTC) and buffered streaming (HLS/DASH) by combining low-latency push delivery with CDN-scalable relay architecture. Key insight: deliver media as discrete objects (frames, GOP groups) rather than byte streams—objects have priorities and can expire, allowing relays to drop late objects before delivery. Unlike HLS/DASH which relies on HTTP caching, MoQ achieves scalability through relay fan-out. Unlike WebRTC which is P2P, MoQ’s relay infrastructure handles thousands of subscribers. SUBSCRIBE pushes live objects; FETCH pulls specific ranges for late-join or seeking. MoQ Transport (moq-transport) handles delivery; catalog formats define media encapsulation. Still maturing—spec and implementations evolving rapidly.
- Resources: IETF MoQ Working Group, MoQ Transport Draft
RTP
- Full name: Real-time Transport Protocol
- Standard: RFC 3550 (2003)
- Layer: Application (over UDP)
- Philosophy: Best-effort—sequence numbers and timestamps for ordering, no reliability
- Delivery: Push
- Topology: P2P (or multicast)
- Multiplexing: SSRC (Synchronization Source identifiers)
- Data type: Continuous media streams (audio, video, any timed data)
- Status: Ubiquitous (foundation for WebRTC, VoIP, IPTV, broadcast)
- Extensions: RFC 4585 (AVPF feedback), RFC 4588 (retransmission), SMPTE 2022 (FEC)
- Notes: The lingua franca of real-time media. Provides sequence numbers (detect loss/reordering), timestamps (synchronization), and payload type identification. Deliberately minimal—reliability is layered on top via profiles (AVPF for NACK, SMPTE 2022 for FEC). RTCP companion protocol provides sender/receiver reports for congestion feedback. Almost every media protocol builds on or is influenced by RTP.
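The fixed 12-byte header described above is compact enough to pack by hand; a sketch using only Python's struct module, with an arbitrary dynamic payload type, SSRC, and 90 kHz clock step:

```python
import struct

def build_rtp_header(seq: int, timestamp: int, ssrc: int,
                     payload_type: int = 96, marker: bool = False) -> bytes:
    """Pack the fixed 12-byte RTP header (RFC 3550):
    version 2, no padding, no extension, zero CSRC entries."""
    byte0 = 2 << 6                                      # V=2, P=0, X=0, CC=0
    byte1 = ((1 if marker else 0) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1,
                       seq & 0xFFFF,                    # wraps at 65535
                       timestamp & 0xFFFFFFFF,          # media clock units
                       ssrc & 0xFFFFFFFF)               # stream identifier

# Two consecutive video packets, 3000 ticks apart on a 90 kHz clock (~33 ms).
p1 = build_rtp_header(seq=1, timestamp=0, ssrc=0x1234ABCD)
p2 = build_rtp_header(seq=2, timestamp=3000, ssrc=0x1234ABCD)
assert len(p1) == len(p2) == 12
```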
RTMP
- Full name: Real-Time Messaging Protocol
- Standard: Adobe RTMP Specification (2012, originally proprietary ~2002)
- Layer: Application (over TCP, port 1935)
- Philosophy: Guaranteed delivery—persistent TCP connection with chunked multiplexing
- Delivery: Push
- Topology: P2P (client to ingest server)
- Multiplexing: Chunks (interleaved audio/video/data message fragments)
- Data type: Continuous video/audio streams (live ingest)
- Status: Declining for ingest, being replaced by SRT
- Notes: The dominant live video ingest protocol for over a decade. Multiplexes audio, video, and data over a single TCP connection, breaking streams into chunks (64-128 bytes default) for interleaving. TCP guarantees delivery but introduces unbounded latency during loss—exactly the problem delay-sensitive protocols solve. Limited to H.264/AAC codecs. No built-in encryption (RTMPS adds TLS). Flash heritage meant universal platform support, but SRT is displacing it for professional ingest because UDP-based ARQ provides bounded latency over unreliable networks. Still widely supported by YouTube, Twitch, Facebook for ingest, then transcoded to HLS/DASH for delivery.
- Resources: Adobe RTMP Specification, RTMP vs SRT Comparison
HLS
- Full name: HTTP Live Streaming
- Standard: RFC 8216 (2017), Apple HLS Specification
- Layer: Application (over HTTP/TCP)
- Philosophy: Guaranteed delivery—buffered, segment-based streaming
- Delivery: Pull (client requests segments via HTTP GET)
- Topology: CDN (origin → edge servers → clients)
- Multiplexing: Segments (2-10 second chunks listed in m3u8 playlists)
- Data type: Pre-segmented video/audio (VOD, near-live)
- Status: Ubiquitous (Netflix, YouTube, Twitch delivery, all major streaming services)
- Notes: Adaptive bitrate streaming developed by Apple. Video is segmented into 2-10 second chunks, listed in m3u8 playlists. Player downloads segments over HTTP, buffering ahead for smooth playback. Quality adapts dynamically based on bandwidth. Typical latency: 10-30+ seconds. Low-Latency HLS (LL-HLS) reduces to 2-5 seconds using partial segments and blocking playlist reload. Not delay-sensitive—designed for reliability and quality over latency.
- Resources: Apple HLS Authoring Specification
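To make the segment-plus-playlist model concrete, a sketch that emits a minimal live-style media playlist in the RFC 8216 format; the segment names and durations are invented, and a real origin would regenerate the file as new segments appear:

```python
def media_playlist(first_seq: int, durations: list[float]) -> str:
    """Minimal HLS media playlist. Omitting #EXT-X-ENDLIST marks it as live,
    so players keep reloading it to discover new segments."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{round(max(durations))}",
        f"#EXT-X-MEDIA-SEQUENCE:{first_seq}",
    ]
    for i, duration in enumerate(durations):
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(f"segment{first_seq + i}.ts")   # hypothetical segment URIs
    return "\n".join(lines) + "\n"

# Three 6-second segments: the player must buffer at least one full segment,
# which is where the latency floor of segment-based streaming comes from.
print(media_playlist(first_seq=42, durations=[6.0, 6.0, 6.006]))
```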
DASH
- Full name: Dynamic Adaptive Streaming over HTTP
- Standard: ISO/IEC 23009-1 (MPEG-DASH), DASH Industry Forum
- Layer: Application (over HTTP/TCP)
- Philosophy: Guaranteed delivery—buffered, segment-based streaming
- Delivery: Pull (client requests segments via HTTP GET)
- Topology: CDN (origin → edge servers → clients)
- Multiplexing: Segments (described in MPD manifests)
- Data type: Pre-segmented video/audio (VOD, near-live)
- Status: Ubiquitous (YouTube, Netflix, Amazon, most major streaming platforms)
- Notes: Open standard alternative to HLS with similar architecture. MPD (Media Presentation Description) manifests describe available segments and quality levels. Player downloads segments over HTTP, adapting quality to bandwidth. Typical latency: 10-30+ seconds. Low-Latency DASH (LL-DASH) reduces to 2-5 seconds using chunked transfer encoding and early segment availability. CMAF (Common Media Application Format) enables single encoding for both HLS and DASH delivery.
- Resources: DASH Industry Forum, DASH-IF Guidelines
MQTT
- Full name: Message Queuing Telemetry Transport
- Standard: OASIS MQTT 5.0 (2019), ISO/IEC 20922
- Layer: Application (over TCP)
- Philosophy: Configurable—QoS 0 (at-most-once), QoS 1 (at-least-once), QoS 2 (exactly-once)
- Delivery: Push (after subscription, broker pushes matching messages)
- Topology: Broker (all messages flow through central broker)
- Multiplexing: Topics (hierarchical topic strings with wildcards)
- Data type: Discrete messages (sensor readings, events, commands)
- Status: Ubiquitous (Industrial IoT, smart home, telemetry)
- Notes: Event-driven pub/sub for telemetry. Designed for constrained devices and unreliable networks. Messages are independent—no continuous stream semantics. Broker-mediated (not peer-to-peer). Delay-sensitive for control commands (QoS 0), reliable for state updates (QoS 1/2).
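A sketch of the per-message QoS choice using the paho-mqtt client package (an assumed dependency; the broker hostname and topics are invented). The constructor shown follows the paho 1.x API; paho 2.x additionally expects a CallbackAPIVersion argument.

```python
import paho.mqtt.client as mqtt

client = mqtt.Client()                         # paho-mqtt 1.x style constructor
client.connect("broker.example.com", 1883)     # hypothetical broker
client.loop_start()                            # network loop in a background thread

# QoS 0, at-most-once: no acknowledgement, no retransmission, lowest latency.
client.publish("plant/line1/temp", b"21.7", qos=0)

# QoS 1, at-least-once: broker must send PUBACK; client retransmits until it does.
client.publish("plant/line1/setpoint", b"22.0", qos=1)

# QoS 2, exactly-once: four-packet handshake (PUBREC/PUBREL/PUBCOMP), highest latency.
client.publish("plant/line1/recipe", b"batch-7", qos=2)

client.loop_stop()
client.disconnect()
```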
CoAP
- Full name: Constrained Application Protocol
- Standard: RFC 7252 (2014), RFC 8323 (CoAP over TCP/TLS/WebSockets)
- Layer: Application (over UDP, or TCP/WebSockets)
- Philosophy: Best-effort with optional confirmable messages—RESTful request/response
- Delivery: Pull (request/response); Observe extension adds Push
- Topology: Client-Server
- Multiplexing: Resources (URI paths)
- Data type: RESTful resources (sensor readings, actuator commands)
- Status: Active (constrained IoT devices, LwM2M, Thread/Matter)
- Notes: “HTTP for constrained devices”—RESTful semantics over UDP. Compact binary format with 4-byte base header (vs HTTP’s verbose text). Supports GET/PUT/POST/DELETE on resources. Two message types: Confirmable (CON) gets acknowledgement with retransmission, Non-confirmable (NON) is fire-and-forget. Designed for devices with limited memory, power, and bandwidth. DTLS for security. Observe extension (RFC 7641) enables pub/sub-like notifications without polling. One-to-one client/server model contrasts with MQTT’s many-to-many broker-mediated pub/sub.
- Resources: RFC 7252, CoAP Technology
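The 4-byte base header is simple enough to show directly; a sketch that packs a Confirmable versus a Non-confirmable GET using the RFC 7252 framing, with an arbitrary message ID and no token or options:

```python
import struct

COAP_VERSION = 1
TYPE_CON, TYPE_NON = 0, 1      # Confirmable vs Non-confirmable
CODE_GET = 0x01                # code 0.01 = GET

def coap_base_header(msg_type: int, message_id: int,
                     code: int = CODE_GET, token_length: int = 0) -> bytes:
    """Pack the fixed CoAP header: Ver(2 bits) | Type(2) | TKL(4), Code, Message ID."""
    byte0 = (COAP_VERSION << 6) | (msg_type << 4) | (token_length & 0x0F)
    return struct.pack("!BBH", byte0, code, message_id)

con_get = coap_base_header(TYPE_CON, message_id=0x1234)  # peer must ACK; sender retransmits
non_get = coap_base_header(TYPE_NON, message_id=0x1235)  # fire-and-forget
assert con_get == b"\x40\x01\x12\x34" and len(non_get) == 4
```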
RTPS/DDS
- Full name: Real-Time Publish-Subscribe / Data Distribution Service
- Standard: OMG DDSI-RTPS 2.5 (2022)
- Layer: Application (over UDP)
- Philosophy: Configurable—QoS policies for reliability, history, deadlines, lifespan
- Delivery: Push (after subscription, samples pushed to subscribers)
- Topology: Decentralized (peer discovery, then direct communication)
- Multiplexing: Topics (typed, with QoS policies per topic)
- Data type: Typed data samples (sensor fusion, state, telemetry)
- Status: Active (ROS2, autonomous vehicles, aerospace, defense)
- Notes: Data-centric pub/sub middleware. Key insight: data has a type and lifecycle, not just bytes. QoS policies control timing (DEADLINE), reliability (RELIABLE/BEST_EFFORT), history (KEEP_LAST/KEEP_ALL), and sample lifespan. Decentralized discovery (no broker). Designed for distributed real-time systems where multiple consumers need consistent views of changing state—not continuous media streams. Overhead justified by semantic richness.
zenoh
- Full name: Eclipse zenoh
- Standard: Open specification, zenoh.io (2017)
- Layer: Application (over UDP, TCP, or any transport)
- Philosophy: Configurable—pub/sub + queries + storage with selectable reliability
- Delivery: Push (pub/sub); Pull (queries)
- Topology: Decentralized (optional routers for scaling)
- Multiplexing: Keys (hierarchical key expressions with wildcards)
- Data type: Named resources (key/value samples, queryables)
- Status: Growing (ROS 2 alternative middleware, robotics, edge computing)
- Notes: Designed to fix DDS scalability issues—verbose discovery and fully-connected participant mesh don’t scale to large deployments or constrained networks. Decentralized pub/sub with optional routing infrastructure. Unifies data in motion (pub/sub), data at rest (storage), and data in use (queries). Selected as official ROS 2 alternative middleware in 2024 (rmw_zenoh in Jazzy Jalisco). Research shows lower delay and overhead than DDS over mesh networks with dynamic topology. Rust-based core with zero-copy on same-machine communication.
- Resources: zenoh.io, zenoh-plugin-ros2dds
ROS 2
- Full name: Robot Operating System 2
- Standard: Open source, ros.org (2017, evolved from ROS 1)
- Layer: Middleware framework (default: DDS/RTPS transport)
- Philosophy: Configurable—inherits QoS from underlying middleware (DDS or zenoh)
- Delivery: Push (topics); Pull (services, actions)
- Topology: Decentralized (peer discovery via DDS or zenoh)
- Multiplexing: Topics (typed message streams), Services (request/response), Actions (long-running goals)
- Data type: Typed messages (sensor data, commands, state, transforms)
- Status: Ubiquitous (robotics, autonomous systems, research, industrial automation)
- Notes: Not a protocol itself—a middleware framework with pluggable transport. Default RMW (ROS Middleware) uses DDS/RTPS (Fast DDS, Cyclone DDS, Connext). Alternative: rmw_zenoh available from Jazzy Jalisco (2024). QoS profiles map to underlying middleware: Sensor Data (best-effort, volatile), Services (reliable), Parameters (reliable, transient local). Key abstractions: topics for pub/sub, services for RPC, actions for long-running tasks with feedback, transforms (tf2) for coordinate frames. The de facto standard for robot software—used in autonomous vehicles, drones, manipulators, and research. Delay-sensitive behavior depends entirely on QoS configuration and middleware choice.
- Resources: ROS 2 Documentation, ROS 2 Design, QoS Policies
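A sketch of how those QoS profiles look in rclpy, with invented node and topic names: the sensor subscription uses the predefined best-effort profile, while the command publisher asks for reliable, keep-last delivery.

```python
import rclpy
from rclpy.node import Node
from rclpy.qos import (HistoryPolicy, QoSProfile, ReliabilityPolicy,
                       qos_profile_sensor_data)
from std_msgs.msg import String


class QosDemo(Node):
    def __init__(self) -> None:
        super().__init__("qos_demo")   # hypothetical node name

        # Sensor data: best-effort, volatile, shallow history.
        # A late sample is simply superseded by the next one.
        self.scan_sub = self.create_subscription(
            String, "scan", self.on_scan, qos_profile_sensor_data)

        # Commands: reliable, keep-last(10). The middleware retransmits
        # until the subscriber acknowledges.
        reliable = QoSProfile(
            depth=10,
            history=HistoryPolicy.KEEP_LAST,
            reliability=ReliabilityPolicy.RELIABLE)
        self.cmd_pub = self.create_publisher(String, "cmd", reliable)

    def on_scan(self, msg: String) -> None:
        self.get_logger().info(f"scan: {msg.data}")


def main() -> None:
    rclpy.init()
    rclpy.spin(QosDemo())


if __name__ == "__main__":
    main()
```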
GVSP
- Full name: GigE Vision Streaming Protocol
- Standard: AIA GigE Vision 2.2 (2016)
- Layer: Application (over UDP)
- Philosophy: Bounded-time recovery—PACKETRESEND for missing packets, per-frame timeout
- Delivery: Push (camera streams frames to host)
- Topology: P2P
- Multiplexing: Frames (block_id identifies complete images)
- Data type: Image frames (machine vision cameras)
- Status: Active (factory automation, inspection, robotics)
- Notes: Designed for industrial cameras streaming to vision systems. Frame-oriented: block_id identifies complete images, not continuous flow. PACKETRESEND mechanism with configurable retry count and timeout. Assumes low-latency LAN (<1ms RTT). Bridges media (video frames) and industrial (deterministic timing) domains.
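The per-frame recovery decision is worth seeing in miniature. The sketch below is not from any GigE Vision SDK; it only illustrates the behaviour the notes describe: collect packets for one block_id, request PACKETRESEND for gaps while the frame deadline holds, and drop the frame once it passes.

```python
import time

class FrameAssembler:
    """Illustrative only. Tracks GVSP packets belonging to one block_id
    and decides between resend, completion, and dropping the frame."""

    def __init__(self, block_id: int, expected_packets: int, timeout_s: float = 0.010):
        self.block_id = block_id
        self.expected = expected_packets
        self.received: set[int] = set()
        self.deadline = time.monotonic() + timeout_s     # per-frame deadline

    def add(self, packet_id: int) -> None:
        self.received.add(packet_id)

    def missing(self) -> list[int]:
        return [i for i in range(1, self.expected + 1) if i not in self.received]

    def resolve(self) -> str:
        gaps = self.missing()
        if not gaps:
            return "frame complete"
        if time.monotonic() < self.deadline:
            return f"PACKETRESEND {gaps}"   # bounded-time recovery on a low-latency LAN
        return "drop frame"                 # deadline passed: discard, wait for next block_id
```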
EtherNet/IP
- Full name: EtherNet/IP (Industrial Protocol)
- Standard: ODVA EtherNet/IP with CIP
- Layer: Application (over TCP + UDP)
- Philosophy: Hybrid—explicit messaging (TCP, reliable) + implicit I/O (UDP, cyclic real-time)
- Delivery: Push (implicit I/O); Pull (explicit messaging)
- Topology: P2P
- Multiplexing: Connections (CIP connection IDs)
- Data type: Cyclic I/O data (PLC inputs/outputs), acyclic messages (configuration)
- Status: Active (dominant in North American manufacturing)
- Notes: CIP (Common Industrial Protocol) over standard Ethernet. Two communication modes: explicit messaging (TCP, request/response, configuration) and implicit I/O (UDP multicast, cyclic, real-time process data). CIP Motion adds deterministic servo control with IEEE 1588 time sync. Delay-sensitive for cyclic I/O—typical cycle times 1-10ms.
- Resources: ODVA EtherNet/IP
Modbus TCP
- Full name: Modbus TCP/IP
- Standard: Modbus Organization (1999)
- Layer: Application (over TCP, port 502)
- Philosophy: Guaranteed delivery—request/response over TCP
- Delivery: Poll (client polls server for register values)
- Topology: Client-Server (master/slave)
- Multiplexing: Registers (addressed by function code + register number)
- Data type: Register reads/writes (coils, holding registers, input registers)
- Status: Ubiquitous (SCADA, PLCs, building automation, energy)
- Notes: Master/slave polling protocol—no pub/sub, no events. Client polls server for register values. Simple and universal but not optimized for real-time: polling interval determines latency floor. ADU limited to 260 bytes (253 data + 7 header). Originally serial (1979), adapted to TCP. Openly published, royalty-free. Delay-sensitive applications must poll frequently, wasting bandwidth.
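Both the framing and the polling latency floor are easy to see in code; a sketch that builds a Read Holding Registers request by hand and polls on a fixed period (the device address, unit ID, register range, and 100 ms period are all illustrative):

```python
import socket
import struct
import time

def read_holding_registers(sock: socket.socket, txn_id: int,
                           unit: int, start: int, count: int) -> bytes:
    """MBAP header (7 bytes) + PDU: function 0x03, start address, register count."""
    pdu = struct.pack("!BHH", 0x03, start, count)
    mbap = struct.pack("!HHHB", txn_id, 0x0000, len(pdu) + 1, unit)
    sock.sendall(mbap + pdu)
    return sock.recv(260)                 # ADU is capped at 260 bytes

def poll_forever(host: str = "192.0.2.10", period_s: float = 0.100) -> None:
    """Classic Modbus polling: even a perfect network leaves data up to period_s old."""
    sock = socket.create_connection((host, 502))
    txn = 0
    while True:
        txn = (txn + 1) & 0xFFFF
        reply = read_holding_registers(sock, txn, unit=1, start=0x0000, count=4)
        print(reply.hex())
        time.sleep(period_s)              # the poll period is the latency floor
```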
OPC UA
- Full name: OPC Unified Architecture
- Standard: OPC Foundation / IEC 62541
- Layer: Application (over TCP; PubSub over UDP)
- Philosophy: Configurable—client/server (TCP, reliable) or PubSub (UDP, real-time)
- Delivery: Both (client/server: request/response; PubSub: push)
- Topology: Configurable (client/server or decentralized PubSub)
- Multiplexing: Nodes (hierarchical address space with NodeIds)
- Data type: Semantic data model (typed variables, methods, events, alarms)
- Status: Growing (Industry 4.0, IIoT, cross-vendor interoperability)
- Notes: More than a protocol—a semantic data model with type system, methods, and events. Client/server mode uses TCP for browsing and subscription. PubSub mode uses UDP multicast for deterministic real-time (combined with TSN for hard real-time). Key value: machine-readable information models enable interoperability without custom integration. Delay-sensitive in PubSub mode for control loops.
RAVENNA
- Full name: RAVENNA
- Standard: Open technology, ravenna-network.com (2010)
- Layer: Application (over UDP)
- Philosophy: Network-designed—PTPv2 synchronization, no retransmission
- Delivery: Push
- Topology: P2P (or multicast)
- Multiplexing: SSRC (RTP-based, like AES67)
- Data type: Continuous audio streams (up to 384 kHz, 32-bit)
- Status: Active (broadcast, live production, installed sound)
- Notes: Superset of AES67 for professional audio-over-IP. Sample-accurate synchronization via PTPv2 (IEEE 1588). No error recovery—network must not drop packets. Requires managed infrastructure: QoS, VLANs, traffic shaping. Supports higher sample rates and channel counts than AES67 baseline.
NDI
- Full name: Network Device Interface
- Standard: Proprietary, ndi.video (NewTek/Vizrt, 2015)
- Layer: Application (over TCP + UDP)
- Philosophy: Hybrid—reliable discovery/control (TCP), low-latency video (UDP with optional reliability)
- Delivery: Push
- Topology: P2P (with mDNS discovery)
- Multiplexing: Sources (named video/audio sources discoverable on LAN)
- Data type: Video and audio streams (typically 1080p60, some 4K support)
- Status: Ubiquitous (live production, streaming, prosumer video)
- Notes: Dominant protocol for software-based video production. Zero-configuration discovery via mDNS—sources appear automatically on the network. Designed for ease of use over absolute minimum latency: typical latency 2-4 frames (~40-80ms at 60fps). Supports multiple quality modes: full bandwidth (~100+ Mbps for 1080p60), HX (H.264/HEVC compressed, ~20 Mbps), and Proxy (low-bandwidth monitoring). Built into OBS, vMix, Wirecast, TriCaster, and hundreds of applications. Free SDK, but protocol is proprietary—no public specification. Works over standard networks without QoS, unlike AES67/RAVENNA which require managed infrastructure.
- Resources: NDI Central, NDI SDK
SMPTE ST 2110
- Full name: SMPTE ST 2110 Professional Media Over Managed IP Networks
- Standard: SMPTE ST 2110 suite (2017)
- Layer: Application (over UDP/RTP)
- Philosophy: Network-designed—uncompressed essence, PTPv2 synchronization, no retransmission
- Delivery: Push
- Topology: P2P (or multicast)
- Multiplexing: Separate streams (video, audio, ancillary data on independent RTP flows)
- Data type: Uncompressed video (ST 2110-20), audio (ST 2110-30), ancillary data (ST 2110-40)
- Status: Active (broadcast facilities, OB trucks, master control)
- Notes: Professional broadcast standard replacing SDI infrastructure with IP. Key difference from NDI: carries uncompressed video (up to 12 Gbps for 4K60), separate essence streams (video, audio, data travel independently), and requires broadcast-grade infrastructure. PTP synchronization (ST 2110-10) enables frame-accurate switching. NMOS (IS-04/IS-05) provides discovery and connection management. Designed for facilities where quality and timing precision are paramount—not for general IT networks. Complementary to AES67 (ST 2110-30 audio is AES67-compatible). The “IP replacement for SDI” that major broadcasters are deploying.
- Resources: SMPTE ST 2110 Overview, Alliance for IP Media Solutions
WebRTC
- Full name: Web Real-Time Communication
- Standard: W3C WebRTC + IETF RFC 8825 (2021)
- Layer: Application (over UDP)
- Philosophy: Bounded-time recovery—adaptive NACK/FEC based on RTT and loss rate
- Delivery: Push
- Topology: P2P (with STUN/TURN for NAT traversal)
- Multiplexing: Tracks (MediaStreamTrack for audio/video; DataChannel for data)
- Data type: Interactive audio/video streams, data channels
- Status: Ubiquitous (browsers, video conferencing, telehealth)
- Notes: Complete real-time media stack built into browsers. RTP/RTCP transport with adaptive error recovery: switches between NACK (retransmission) and FEC based on network conditions. GCC (Google Congestion Control) adapts bitrate. Mandatory encryption (DTLS-SRTP). P2P with STUN/TURN for NAT traversal. Optimized for interactive (<300ms) latency, not broadcast.
- Resources: WebRTC for the Curious
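The NACK-versus-FEC trade-off reduces to one inequality. The function below is not any browser's actual algorithm, only a sketch of the reasoning: a retransmission costs roughly an extra round trip, so it is only useful while that still fits inside the playout budget; otherwise proactive FEC is what remains. The thresholds are invented.

```python
def choose_recovery(rtt_ms: float, loss_rate: float, playout_budget_ms: float) -> str:
    """Illustrative decision only, with made-up thresholds."""
    retransmit_time_ms = 1.5 * rtt_ms            # NACK plus resend, with some margin
    if retransmit_time_ms < playout_budget_ms and loss_rate < 0.05:
        return "nack"        # cheap: only lost packets cost extra bandwidth
    if loss_rate < 0.20:
        return "fec"         # pay constant overhead, but never wait for a resend
    return "fec+nack"        # heavy loss: combine both

assert choose_recovery(rtt_ms=30, loss_rate=0.01, playout_budget_ms=150) == "nack"
assert choose_recovery(rtt_ms=200, loss_rate=0.01, playout_budget_ms=150) == "fec"
```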
AES67
- Full name: AES67 High-Performance Audio-over-IP
- Standard: AES67-2023
- Layer: Application (over UDP)
- Philosophy: Network-designed—PTPv2 synchronization, DiffServ QoS, no retransmission
- Delivery: Push
- Topology: P2P (or multicast)
- Multiplexing: SSRC (RTP Synchronization Source identifiers)
- Data type: Continuous audio streams (48 kHz baseline, up to 96 kHz)
- Status: Active (broadcast, professional audio interoperability)
- Notes: Interoperability standard bridging Dante, Livewire, RAVENNA, and other AoIP systems. Defines common baseline for clock sync (PTPv2), encoding (L24/L16), and packet timing. No loss recovery—relies on network engineering. DiffServ marking (EF class) for QoS. Latency typically 1-5ms on managed networks. The “glue” that lets different vendor ecosystems exchange audio.
KCP
- Full name: KCP (Fast and Reliable ARQ Protocol)
- Standard: Open source, github.com/skywind3000/kcp (2014)
- Layer: Application (over UDP)
- Philosophy: Fast ARQ—aggressive retransmission, prioritizes latency over bandwidth
- Delivery: Push
- Topology: P2P
- Multiplexing: Streams (conversation IDs)
- Data type: Reliable message streams (game state, real-time sync)
- Status: Active (gaming, VPN tunnels, especially in Asia)
- Notes: TCP-like reliability with lower latency. Key optimizations: faster retransmit (no delayed ACK), selective repeat, configurable RTO calculation, no congestion window during recovery. Trades 10-20% extra bandwidth for 30-40% latency reduction. Pure algorithm—bring your own UDP transport and FEC. Popular in games and tools like kcptun for accelerating connections over lossy links.
ENet
- Full name: ENet
- Standard: Open source, enet.bespin.org (2002)
- Layer: Application (over UDP)
- Philosophy: Configurable—per-packet reliability flags (reliable, unreliable, unsequenced)
- Delivery: Push
- Topology: P2P
- Multiplexing: Channels (up to 255 independent channels per connection)
- Data type: Packets/messages (game state, player actions, events)
- Status: Active (multiplayer games, real-time applications)
- Notes: Reliable UDP library developed for Cube FPS. Key features: per-packet reliability flags, connection management with RTT/loss monitoring, packet fragmentation and reassembly, flow control with in-flight data window. Multiple independent channels provide separate sequencing—reliable packets on one channel don’t block unreliable packets on another. Progressive retry timeouts adapt to network turbulence. Pure C library, cross-platform, BSD-licensed, no royalties. Widely used in game engines—has official Unreal Engine plugin.
- Resources: ENet Features, ENet GitHub
Aeron
- Full name: Aeron
- Standard: Open source, aeron.io (2014)
- Layer: Application (over UDP, IPC, or InfiniBand)
- Philosophy: Configurable—reliable by default (NAK-based recovery), optional unreliable mode (reliable=false)
- Delivery: Push
- Topology: P2P (multi-publisher/multi-subscriber)
- Multiplexing: Streams (stream IDs within channels)
- Data type: Messages (market data, order flow, events)
- Status: Active (financial trading, high-frequency systems)
- Notes: Message-oriented transport for latency-critical systems. Default mode is reliable: NAK-based recovery delivers all messages in order, like TCP but optimized for speed. Optional reliable=false mode gap-fills losses with padding—subscribers skip lost data without waiting for retransmission (useful for multicast where some loss is acceptable). Configurable linger controls how long publications retain data for retransmission. Multi-publisher/multi-subscriber with ordered delivery within streams. IPC mode for same-machine messaging at <1μs. UDP mode achieves <100μs in cloud. Lock-free, zero-allocation design.
- Resources: Message Delivery Assurances, Handling Data Loss
ZeroMQ
- Full name: ZeroMQ (ØMQ)
- Standard: ZMTP 3.1 (2015)
- Layer: Application (over TCP, IPC, inproc, PGM multicast)
- Philosophy: Best-effort—brokerless messaging with multiple patterns
- Delivery: Push (PUB/SUB, PUSH/PULL); Pull (REQ/REP)
- Topology: Configurable (P2P, proxy patterns, or broker via DEALER/ROUTER)
- Multiplexing: Messages (multipart message frames)
- Data type: Messages (any binary payload)
- Status: Ubiquitous (financial systems, distributed computing, DevOps tooling)
- Notes: Brokerless message queue library with socket abstraction. Multiple patterns: REQ/REP (request-reply), PUB/SUB (publish-subscribe), PUSH/PULL (pipeline), DEALER/ROUTER (async). Lock-free queues for inter-thread, IPC for same-machine, TCP/PGM for network. No broker means no double network hop—designed for low latency. Adaptive batching: turns batching off at low message rates to minimize latency, batches at high rates for throughput. Financial industry heritage (iMatix, original AMQP designers). Not a message broker—no persistence, no guaranteed delivery across process crashes, no built-in discovery.
- Resources: ZeroMQ Guide, ZeroMQ RFCs
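A sketch of the brokerless PUB/SUB pattern using the pyzmq binding (an assumed dependency; the port and topic prefix are arbitrary). It also shows the pattern's best-effort nature: without the short settle delay, the just-connected subscriber would silently miss the first messages.

```python
import time
import zmq

ctx = zmq.Context()

# Publisher: bind once, fan out to however many subscribers connect. No broker hop.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5556")          # arbitrary port

# Subscriber: connect and filter by topic prefix.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "ticks.")

time.sleep(0.2)   # let the subscription propagate (the classic "slow joiner" issue)

pub.send_string("ticks.EURUSD 1.0842")    # matches the subscription
pub.send_string("logs.debug ignored")     # filtered out before delivery

print(sub.recv_string())                  # -> "ticks.EURUSD 1.0842"
```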
SRT
- Full name: Secure Reliable Transport
- Standard: IETF draft-sharabayko-srt (expired), github.com/Haivision/srt (2017)
- Layer: Application (over UDP)
- Philosophy: Bounded-time recovery—configurable latency budget, NAK-based ARQ, TLPKTDROP
- Delivery: Push
- Topology: P2P
- Multiplexing: Streams (single stream per connection)
- Data type: Continuous video/audio streams (contribution, distribution)
- Status: Growing (broadcast contribution, OBS, encoders, CDNs)
- Notes: Purpose-built for live video over unpredictable networks. Key mechanism: TLPKTDROP (Too-Late Packet Drop)—packets past their deadline are discarded, not delivered. Latency budget configurable (default 120ms, minimum ~4×RTT). AES-128/256 encryption built-in. UDT heritage gives robust congestion control. Widely adopted in broadcast and streaming workflows.
- Resources: Why We Created SRT
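No libsrt calls here; the sketch below is only the receiver-side idea behind TLPKTDROP in plain Python: a packet's delivery deadline is its source timestamp plus the configured latency budget, and anything past that deadline is skipped rather than delivered.

```python
import time

LATENCY_BUDGET_S = 0.120   # matches SRT's default 120 ms latency budget

def deliver_or_drop(packets, now=time.monotonic):
    """Illustration of Too-Late Packet Drop, not libsrt code. `packets` is an
    iterable of (source_timestamp, payload) pairs whose timestamps are already
    on the receiver's clock (real SRT exchanges timing during the handshake)."""
    for sent_at, payload in packets:
        if now() > sent_at + LATENCY_BUDGET_S:
            continue            # too late: the decoder has moved past this slot
        yield payload

packets = [(time.monotonic() - 0.050, b"on time"),
           (time.monotonic() - 0.500, b"too late")]
print(list(deliver_or_drop(packets)))   # -> [b'on time']
```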
RIST
- Full name: Reliable Internet Stream Transport
- Standard: VSF TR-06-1 (Simple Profile), TR-06-2 (Main Profile). Licensed CC BY-ND 4.0.
- Layer: Application (over UDP)
- Philosophy: Bounded-time recovery—RTP-based, ARQ + optional FEC
- Delivery: Push
- Topology: P2P (with optional bonding across multiple paths)
- Multiplexing: Streams (RTP-based, supports multiple flows)
- Data type: Continuous video/audio streams (broadcast distribution)
- Status: Growing (broadcast, content delivery, remote production)
- Notes: Broadcast industry’s answer to reliable streaming. RTP/SMPTE-2022 heritage means existing infrastructure compatibility. Features: ARQ retransmission, optional FEC, connection bonding (multiple paths), seamless failover, multicast support. DTLS for authentication/encryption. Developed by Video Services Forum consortium—interoperability is a design goal. Profiles: Simple (basic ARQ), Main (tunneling, bonding), Advanced (in development).
- Resources: RIST Forum, RIST vs SRT Comparison
See When Packets Can’t Wait for the survey that contextualizes these protocols.