Enhanced Real-Time Streaming Engine with Proactive FEC

This repository contains the source code for a high-performance, real-time video streaming engine built in C++ on GStreamer. It is designed from the ground up for maximum resilience and quality under extreme network conditions, such as high packet loss and severe latency fluctuations.

Our core philosophy is simple: prioritize a stable, high-quality visual experience over chasing the lowest possible latency. We believe that for many professional and critical applications, a crystal-clear image arriving 300ms late is infinitely more valuable than a pixelated, artifact-ridden image arriving in 150ms. This engine is built to deliver on that promise.

Core Features

  • Advanced Forward Error Correction (FEC): Utilizes a powerful block-based FEC mechanism (based on Reed-Solomon principles via Cauchy Caterpillar) capable of recovering from massive burst packet loss (upwards of 50-60%) where traditional systems would fail completely. A simplified block-coding sketch follows this list.
  • Proactive Congestion Control: Unlike purely reactive systems that only respond after packet loss occurs, our engine features an intelligent adaptive controller that monitors the delay trend (the rate of change in RTT) as a leading indicator of network congestion. This allows it to proactively raise protection levels before loss happens (see the delay-trend sketch below).
  • Adaptive Bitrate with Bandwidth Probing: The system starts at a safe bitrate and intelligently probes the network to discover its true available capacity. It climbs to utilize the full potential of the connection and pulls back rapidly when it detects congestion, ensuring it never overfills the pipe (see the probing sketch below).
  • Dynamic Latency Management: The engine treats latency not as an enemy but as a flexible resource. Under duress, it signals the receiver to dynamically enlarge its jitter buffer, absorbing massive network jitter and reordering delays to deliver a smooth, uninterrupted stream, even if it arrives slightly delayed (see the jitter-buffer sketch below).
  • Content-Aware FEC: The sender identifies critical video frames (I-frames) and automatically applies a much stronger FEC protection multiplier to ensure they survive transmission, preventing catastrophic video corruption (see the keyframe sketch below).
  • Optimized Low-Level I/O: Leverages sendmmsg and recvmmsg for efficient batch packet processing, minimizing syscall overhead at high packet rates (see the sendmmsg sketch below).
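
The snippet below is a deliberately simplified, single-parity XOR encoder meant only to illustrate the block-coding idea (group k data packets, then emit repair data for the block). It is not the Cauchy Caterpillar API, and it can recover only one lost packet per block, far less than the real Reed-Solomon-style code; the class and its parameters are hypothetical.

```cpp
// Simplified single-parity block FEC sketch (NOT the Cauchy Caterpillar API):
// every k data packets, emit one XOR repair packet. Recovers exactly one
// lost packet per block; the real Reed-Solomon code recovers many more.
#include <algorithm>
#include <cstdint>
#include <vector>

struct FecBlockEncoder {
    FecBlockEncoder(size_t k, size_t packet_size)
        : k_(k), parity_(packet_size, 0) {}

    // Feed one data packet; returns true when a repair packet is ready.
    bool add_packet(const std::vector<uint8_t>& pkt) {
        for (size_t i = 0; i < pkt.size() && i < parity_.size(); ++i)
            parity_[i] ^= pkt[i];
        return ++count_ == k_;
    }

    // Emit the repair packet and reset the block.
    std::vector<uint8_t> take_repair() {
        std::vector<uint8_t> out = parity_;
        std::fill(parity_.begin(), parity_.end(), 0);
        count_ = 0;
        return out;
    }

private:
    size_t k_;
    size_t count_ = 0;
    std::vector<uint8_t> parity_;
};
```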
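
A minimal sketch of the delay-trend idea: fit a least-squares line to recent RTT samples and treat a sustained positive slope as a leading congestion signal. The class name, window size, and threshold are illustrative assumptions, not the engine's actual tuning.

```cpp
// Delay-trend sketch: least-squares slope of recent RTT samples.
// A sustained positive slope (queues filling) is treated as a leading
// congestion signal, before any packet is actually lost.
#include <deque>
#include <utility>

class DelayTrend {
public:
    void add_sample(double t_sec, double rtt_ms) {
        samples_.emplace_back(t_sec, rtt_ms);
        if (samples_.size() > kWindow) samples_.pop_front();
    }

    // RTT slope in ms per second over the window (0 if not enough data).
    double slope() const {
        const size_t n = samples_.size();
        if (n < 2) return 0.0;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (const auto& [x, y] : samples_) {
            sx += x; sy += y; sxx += x * x; sxy += x * y;
        }
        const double denom = n * sxx - sx * sx;
        return denom != 0.0 ? (n * sxy - sx * sy) / denom : 0.0;
    }

    // Threshold is an illustrative value, not the engine's tuning.
    bool congestion_building() const { return slope() > 20.0; }

private:
    static constexpr size_t kWindow = 50;
    std::deque<std::pair<double, double>> samples_;
};
```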
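
A hedged sketch of the probe-and-back-off behaviour described above: climb multiplicatively while the link looks clean, drop sharply on a congestion signal. All names and constants are placeholders, not the repository's real controller.

```cpp
// Bandwidth probing sketch: start at a safe bitrate, probe upward while the
// link looks healthy, and pull back hard on a congestion signal.
// All constants are illustrative placeholders.
#include <algorithm>
#include <cstdint>

class BitrateProber {
public:
    uint32_t update(bool congestion_detected) {
        if (congestion_detected) {
            // Multiplicative decrease: back off below the last good rate.
            bitrate_bps_ = std::max<uint32_t>(
                kMinBps, static_cast<uint32_t>(bitrate_bps_ * 0.7));
        } else {
            // Probe: climb a step toward (and slightly past) the known ceiling.
            bitrate_bps_ = std::min<uint32_t>(
                kMaxBps, static_cast<uint32_t>(bitrate_bps_ * 1.08));
        }
        return bitrate_bps_;
    }

private:
    static constexpr uint32_t kMinBps = 300'000;     // 300 kbit/s floor
    static constexpr uint32_t kMaxBps = 20'000'000;  // 20 Mbit/s cap
    uint32_t bitrate_bps_ = 1'000'000;               // safe starting rate
};
```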
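
On the receiver side, one plausible way to grow the buffer is GStreamer's rtpjitterbuffer, whose "latency" property (in milliseconds) controls how long packets are held for reordering. The signalling path and the step sizes below are assumptions for illustration.

```cpp
// Receiver-side jitter buffer sketch: when the sender signals duress, grow
// the rtpjitterbuffer depth so late or reordered packets can still be
// played out. "latency" is a real rtpjitterbuffer property; the policy
// and step sizes here are illustrative.
#include <gst/gst.h>

void set_jitter_buffer_latency(GstElement* jitterbuffer, guint latency_ms) {
    // rtpjitterbuffer exposes its depth as the "latency" property (milliseconds).
    g_object_set(jitterbuffer, "latency", latency_ms, NULL);
}

// Example policy: expand quickly under duress, shrink slowly once the
// network calms down (bounds are hypothetical).
guint next_latency(guint current_ms, bool network_under_duress) {
    if (network_under_duress) return current_ms < 800 ? current_ms + 100 : 800;
    return current_ms > 100 ? current_ms - 20 : 100;
}
```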
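
A sketch of keyframe-aware protection using a GStreamer pad probe: buffers without the DELTA_UNIT flag are keyframes (I-frames), and those get a larger repair overhead. The g_fec_overhead variable and the multiplier values are hypothetical; only the buffer-flag check is standard GStreamer API.

```cpp
// Content-aware FEC sketch: a pad probe checks the DELTA_UNIT flag to spot
// keyframes (I-frames) and bumps the FEC overhead for those packets.
// The overhead values are illustrative, not this repository's interface.
#include <gst/gst.h>

static double g_fec_overhead = 0.20;  // baseline 20% repair packets (illustrative)

static GstPadProbeReturn on_encoded_buffer(GstPad* /*pad*/, GstPadProbeInfo* info,
                                           gpointer /*user_data*/) {
    GstBuffer* buf = GST_PAD_PROBE_INFO_BUFFER(info);
    if (buf == nullptr) return GST_PAD_PROBE_OK;

    // Buffers WITHOUT the DELTA_UNIT flag are keyframes (I-frames).
    const gboolean is_keyframe =
        !GST_BUFFER_FLAG_IS_SET(buf, GST_BUFFER_FLAG_DELTA_UNIT);

    // Apply a much stronger protection multiplier to keyframes.
    g_fec_overhead = is_keyframe ? 0.60 : 0.20;
    return GST_PAD_PROBE_OK;
}
```

Such a probe would typically be attached to the encoder's source pad with gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER, on_encoded_buffer, nullptr, nullptr).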
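
Finally, a small example of the batched I/O mentioned in the last bullet: sendmmsg(2) submits an entire burst of UDP packets in a single syscall (recvmmsg is its mirror image on the receive side). The helper below assumes a connected UDP socket and is a sketch, not the engine's actual I/O layer.

```cpp
// Batched UDP send sketch using sendmmsg(2): one syscall pushes a whole
// burst of packets, cutting per-packet syscall overhead at high rates.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1
#endif
#include <sys/socket.h>
#include <sys/uio.h>
#include <cstring>
#include <vector>

// Sends all packets in `pkts` on a connected UDP socket;
// returns how many the kernel accepted (or -1 on error).
int send_batch(int fd, const std::vector<std::vector<unsigned char>>& pkts) {
    std::vector<iovec> iov(pkts.size());
    std::vector<mmsghdr> msgs(pkts.size());
    for (size_t i = 0; i < pkts.size(); ++i) {
        iov[i].iov_base = const_cast<unsigned char*>(pkts[i].data());
        iov[i].iov_len = pkts[i].size();
        std::memset(&msgs[i], 0, sizeof(mmsghdr));
        msgs[i].msg_hdr.msg_iov = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }
    return sendmmsg(fd, msgs.data(), static_cast<unsigned int>(msgs.size()), 0);
}
```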

How It Compares to WebRTC

WebRTC is a phenomenal general-purpose tool, designed to be a "Swiss Army knife" for peer-to-peer communication. Our engine is a "surgeon's scalpel," engineered for one specific purpose: delivering the most robust, highest-quality video stream possible when network conditions are hostile.

| Feature / Scenario | Standard WebRTC (Google Congestion Control) | Enhanced Streaming Engine | Winner |
|---|---|---|---|
| Philosophy | Lowest latency is king: sacrifices quality aggressively to keep latency minimal for interactive chat. | Quality and resilience are king: sacrifices low latency to preserve video quality and stream integrity. | Scenario dependent |
| High packet loss (>20%) | System collapses: bitrate plummets to unusable levels and video becomes a slideshow of artifacts. | Shines: powerful block FEC and interleaving reconstruct the stream, often with zero perceived loss to the user. | Our engine (by a landslide) |
| High jitter / reordering | Drops late packets to maintain low latency, resulting in freezes and frame skips. | Absorbs the chaos: dynamically expands the receiver buffer to wait for and reorder late packets, ensuring smooth playback. | Our engine |
| Bandwidth management | Tends to be reactive and overly conservative, often underutilizing available bandwidth after a network event. | Proactive and opportunistic: finds the true bandwidth ceiling via probing and operates confidently at that limit. | Our engine |
| Target applications | General video chat, web conferencing. | Professional and critical streaming: UAV/drone feeds, remote surveillance, high-quality broadcasting, industrial IoT, telemedicine. | N/A |

Project Status: Beta

The core components of the CPU-based media engine are complete and have demonstrated exceptional performance in simulated high-loss, high-latency environments. The system is now in beta. We are focusing on real-world testing and gathering performance data to further refine the adaptive algorithms.

The Future: The Zero-Copy GPU Pipeline (Project "AlienTech")

We are not stopping here. The main performance bottlenecks in any high-resolution media pipeline are the memory copies between the CPU, GPU, and NIC. Our ultimate goal is to eliminate them entirely.

We have begun internal development on the alpha version of our next-generation engine. This involves a complete move to a Zero-Copy GPU pipeline. We are in the process of forking and heavily modifying core GStreamer elements to create a direct data path:

VRAM <-> GPU <-> NIC

This visionary approach bypasses the traditional, inefficient path of VRAM -> RAM -> CPU -> RAM -> NIC. By processing and transmitting video frames directly from GPU memory to the network card, we aim to achieve an unprecedented level of performance and efficiency, enabling multiple 4K streams on low-power hardware.

This is our "inside information" and our promise for the future of this project. Stay tuned.