This repo contains the framework introduced in the paper "Towards Low-Latency GPU-Aware Pub/Sub Communication for Real-Time Edge Computing", accepted at RTCSA 2025.
There are two implementations of GAPS in different branches:
- `main`: GAPS-z, built on top of Zenoh-cpp (with Zenoh-pico as backend)
- `iceoryx`: GAPS-i, built on top of Iceoryx
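For instance (assuming you have already cloned the repository), switching to the Iceoryx-based variant is just a branch checkout:

```sh
git checkout iceoryx   # GAPS-i; the default `main` branch holds GAPS-z
```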
A Docker environment is provided for development and testing. To set it up, make sure the following requirements are met (a quick sanity check is sketched after the list):
- Docker is installed
- Docker Compose is installed
- NVIDIA Container Toolkit is installed
- Follow the installation and configuration steps here
- CUDA 12.6 is supported by your host's NVIDIA driver
- CUDA Driver APIs like `cuMemCreate` and `cuMemExportToShareableHandle` are supported by your NVIDIA GPU
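A quick sanity check for most of these requirements, assuming the standard `docker`, `nvidia-ctk`, and `nvidia-smi` command-line tools are installed on the host (support for the `cuMem*` driver APIs is best confirmed from your GPU's documentation):

```sh
docker --version           # Docker Engine
docker compose version     # Docker Compose plugin
nvidia-ctk --version       # NVIDIA Container Toolkit CLI
nvidia-smi                 # driver version and the highest CUDA version it supports (needs >= 12.6)
```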
There are two environments under env/:
- `x86`: for x86 machines with an NVIDIA GPU
- `jetson`: for NVIDIA Jetson embedded systems like the Jetson AGX Orin

  ⚠️ Please see `env/jetson/README.md` for how to build its base image before proceeding.
Change directory to either one, and then run the following commands:
```sh
docker compose up -d
ssh ubuntu@localhost -p 22222
```

To destroy the environment, run the following command in the same directory:

```sh
docker compose down
```

To configure the build with CMake and build the project with Ninja, run the following commands in the parent directory of `GAPS/`:

```sh
cmake GAPS -B build -G Ninja
ninja -C build
```

The compiled executables will be generated under `./build/src`.
CMake Build Options:
- `PROFILING=[on|(off)]`: whether to profile the publisher's put and the subscriber's callback
- `BUILD_DEBUG=[on|(off)]`: whether to build with debugging code
- `BUILD_TORCH_SUPPORT=[on|(off)]`: whether to build PyTorch support (i.e., to build PyGAPS)
- `BUILD_EXAMPLES=[(on)|off]`: whether to build the example code
**Note**: The value wrapped in parentheses is the default.
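As an illustrative example (assuming these options are ordinary CMake cache variables, so they are passed with `-D`), a configuration with profiling and PyTorch support enabled might look like:

```sh
cmake GAPS -B build -G Ninja -DPROFILING=on -DBUILD_TORCH_SUPPORT=on
ninja -C build
```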
Pre-commit is used to set up `clang-format` pre-commit hooks.
- Install pre-commit in your Python virtual environment.
- Run `pre-commit install` to install the hooks.
- Each time you commit, the files being committed will be formatted with `clang-format`.
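A minimal sketch of the workflow, assuming pre-commit is installed from PyPI into an active virtual environment:

```sh
pip install pre-commit
pre-commit install            # register the git hook
pre-commit run --all-files    # optional: format the whole tree once
```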
Thanks to the following works for making this project possible.
- This project depends on the following third-party libraries:
- Example code in this project depends on the following third-party libraries:
- Thanks to tlsf-bsd for showing how to implement the TLSF allocator.
- Thanks to jetson-containers for providing machine learning containers for NVIDIA Jetson embedded systems.