This release enables autonomous steering: the EgoLanes neural network detects lane lines and the vehicle follows the road at a set cruise speed.
In short, it provides autonomous lane keeping combined with cruise control.
A multi-threaded lane detection inference system with an ONNX Runtime backend.
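The multi-threaded pipeline can be sketched as a bounded producer/consumer queue between a capture stage and an inference stage. This is an illustrative Python sketch, not the C++ implementation in `main.cpp`; the stage names and queue size are assumptions.

```python
import queue
import threading

# Hypothetical stand-ins for the pipeline stages: a capture thread pushes
# frames into a bounded queue and an inference thread drains it.
frames = queue.Queue(maxsize=4)   # bounded so capture cannot outrun inference
results = []

def capture(n_frames):
    for i in range(n_frames):
        frames.put(f"frame-{i}")  # a real pipeline would push decoded images
    frames.put(None)              # sentinel: no more frames

def infer():
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(f"lanes({frame})")  # placeholder for model inference

producer = threading.Thread(target=capture, args=(8,))
consumer = threading.Thread(target=infer)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(len(results))  # → 8
```

The bounded queue is the key design choice: it applies backpressure so a fast video decoder cannot buffer unbounded frames ahead of a slower model.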
- Set ONNX Runtime path:

  export ONNXRUNTIME_ROOT=/path/to/onnxruntime-linux-x64-gpu-1.22.0

- Build:

  mkdir -p build && cd build
  cmake ..
  make -j$(nproc)
  cd ..

- Configure and Run:

  # Edit run.sh to set paths and options
  ./run.sh

0.5/
├── src/
│ ├── inference/ # Pure inference backend (no visualization)
│ │ ├── onnxruntime_session.cpp/hpp
│ │ ├── onnxruntime_engine.cpp/hpp
│ │ └── README.md
│ └── visualization/ # Visualization module (separate)
│ └── draw_lanes.cpp/hpp
├── scripts/ # Python utilities
├── main.cpp # Multi-threaded pipeline
├── CMakeLists.txt # Build configuration
└── run.sh # Runner script
- VIDEO_PATH: Input video file
- MODEL_PATH: ONNX model (.onnx)
- PROVIDER: cpu or tensorrt
- PRECISION: fp32 or fp16 (TensorRT only)
- DEVICE_ID: GPU device ID
- CACHE_DIR: TensorRT engine cache directory
- THRESHOLD: Segmentation threshold (default: 0.0)
- MEASURE_LATENCY: Enable performance metrics
- ENABLE_VIZ: Enable visualization window
- SAVE_VIDEO: Save annotated output video
- OUTPUT_VIDEO: Output video path
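A sketch of how such environment-variable options could be read and validated. The defaults and the two validation rules (provider whitelist, fp16 only with TensorRT) are assumptions for illustration; the real `run.sh` may differ.

```python
import os

# Hypothetical defaults mirroring a subset of the run.sh variables above.
DEFAULTS = {
    "PROVIDER": "cpu",
    "PRECISION": "fp32",
    "DEVICE_ID": "0",
    "THRESHOLD": "0.0",
}

def load_config(env=None):
    """Merge environment variables over the defaults and sanity-check them."""
    env = os.environ if env is None else env
    cfg = {key: env.get(key, default) for key, default in DEFAULTS.items()}
    if cfg["PROVIDER"] not in ("cpu", "tensorrt"):
        raise ValueError(f"unsupported PROVIDER: {cfg['PROVIDER']}")
    if cfg["PRECISION"] == "fp16" and cfg["PROVIDER"] != "tensorrt":
        raise ValueError("fp16 requires the tensorrt provider")
    return cfg

print(load_config({"PROVIDER": "tensorrt", "PRECISION": "fp16"}))
```

Validating the provider/precision combination up front gives a clear error instead of a failed session creation deep inside the inference backend.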
- CPU: 20-40ms per frame
- TensorRT FP16: 2-5ms per frame (200-500 FPS capable)
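Per-frame numbers like these are typically gathered by timing each inference call and reporting aggregates, which is presumably what MEASURE_LATENCY enables. A minimal sketch, with `do_inference` as a stand-in for the ONNX Runtime call:

```python
import statistics
import time

def do_inference(frame):
    time.sleep(0.002)  # simulate ~2 ms of model time (placeholder only)
    return frame

def measure(frames):
    """Time each call and return (mean, worst) latency in milliseconds."""
    latencies_ms = []
    for frame in frames:
        t0 = time.perf_counter()
        do_inference(frame)
        latencies_ms.append((time.perf_counter() - t0) * 1e3)
    return statistics.mean(latencies_ms), max(latencies_ms)

mean_ms, worst_ms = measure(range(50))
print(f"mean {mean_ms:.2f} ms, worst {worst_ms:.2f} ms")
```

Reporting the worst case alongside the mean matters here: a real-time steering loop is constrained by its slowest frame, not its average one.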
3-channel lane segmentation (320x640):
- Channel 0: Ego left lane (blue)
- Channel 1: Ego right lane (magenta)
- Channel 2: Other lanes (green)
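Decoding this output amounts to picking the strongest channel per pixel and keeping it only if it clears the segmentation threshold. A pure-Python sketch on a tiny tile; the argmax-then-threshold rule and the BGR colour values are assumptions chosen to match the legend above:

```python
# Channel legend: 0 = ego left (blue), 1 = ego right (magenta), 2 = other (green)
COLORS = {0: (255, 0, 0),    # blue in BGR order (assumed)
          1: (255, 0, 255),  # magenta
          2: (0, 255, 0)}    # green

def colorize(logits, threshold=0.0):
    """logits: H x W x 3 nested lists; returns H x W of BGR tuples or None."""
    out = []
    for row in logits:
        out_row = []
        for px in row:
            best = max(range(3), key=lambda c: px[c])  # strongest channel
            out_row.append(COLORS[best] if px[best] > threshold else None)
        out.append(out_row)
    return out

tile = [[(1.2, -0.3, 0.1), (-1.0, -2.0, -0.5)]]  # one row, two pixels
print(colorize(tile))  # → [[(255, 0, 0), None]]
```

The real pipeline would run this over the full 320x640 map (vectorized, in C++), but the per-pixel rule is the same: the first pixel is confidently ego-left, the second clears no channel and stays unlabelled.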