
# VisionPilot 0.5 - EgoLanes Production Release

This release enables autonomous steering: the EgoLanes neural network detects lane lines and navigates the road at a predetermined, desired speed. In other words, it provides autonomous lane keeping with cruise control.

## C++ Inference Pipeline

A multi-threaded lane-detection inference system with an ONNX Runtime backend.

### Quick Start

1. Set the ONNX Runtime path:

   ```bash
   export ONNXRUNTIME_ROOT=/path/to/onnxruntime-linux-x64-gpu-1.22.0
   ```

2. Build:

   ```bash
   mkdir -p build && cd build
   cmake ..
   make -j$(nproc)
   cd ..
   ```

3. Configure and run:

   ```bash
   # Edit run.sh to set paths and options
   ./run.sh
   ```
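A mis-set `ONNXRUNTIME_ROOT` is the most common build failure in step 2. A minimal sanity check, assuming the usual layout of the extracted ONNX Runtime archive (`include/` and `lib/` subdirectories — an assumption, not something this README specifies):

```shell
# check_ort: verify that $1 looks like an extracted ONNX Runtime tree.
# Assumes the archive's usual include/ and lib/ layout.
check_ort() {
  [ -n "$1" ] || { echo "ONNXRUNTIME_ROOT is not set" >&2; return 1; }
  [ -d "$1/include" ] && [ -d "$1/lib" ] || {
    echo "$1 does not look like an ONNX Runtime install" >&2
    return 1
  }
  echo "Using ONNX Runtime at $1"
}

check_ort "${ONNXRUNTIME_ROOT:-}" || echo "Set ONNXRUNTIME_ROOT before running cmake."
```

Running this before `cmake ..` turns a cryptic missing-header error into an immediate, readable message.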

### Directory Structure

```
0.5/
├── src/
│   ├── inference/          # Pure inference backend (no visualization)
│   │   ├── onnxruntime_session.cpp/hpp
│   │   ├── onnxruntime_engine.cpp/hpp
│   │   └── README.md
│   └── visualization/      # Visualization module (separate)
│       └── draw_lanes.cpp/hpp
├── scripts/                # Python utilities
├── main.cpp                # Multi-threaded pipeline
├── CMakeLists.txt          # Build configuration
└── run.sh                  # Runner script
```

### Configuration (run.sh)

- `VIDEO_PATH`: Input video file
- `MODEL_PATH`: ONNX model (`.onnx`)
- `PROVIDER`: `cpu` or `tensorrt`
- `PRECISION`: `fp32` or `fp16` (TensorRT only)
- `DEVICE_ID`: GPU device ID
- `CACHE_DIR`: TensorRT engine cache directory
- `THRESHOLD`: Segmentation threshold (default: 0.0)
- `MEASURE_LATENCY`: Enable performance metrics
- `ENABLE_VIZ`: Enable the visualization window
- `SAVE_VIDEO`: Save annotated output video
- `OUTPUT_VIDEO`: Output video path
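As a concrete example, a TensorRT FP16 configuration in `run.sh` might look like the sketch below. The variable names are the ones documented above; the paths and the 0/1 flag convention are assumptions for illustration:

```shell
# Hypothetical run.sh settings for a TensorRT FP16 run.
# Paths are placeholders; adjust to your setup.
VIDEO_PATH="data/highway.mp4"
MODEL_PATH="models/egolanes.onnx"
PROVIDER="tensorrt"      # or "cpu"
PRECISION="fp16"         # TensorRT only; use "fp32" on CPU
DEVICE_ID=0
CACHE_DIR=".trt_cache"   # cached engines skip the TensorRT build step
THRESHOLD=0.0            # raise to suppress low-confidence lane pixels
MEASURE_LATENCY=1
ENABLE_VIZ=1
SAVE_VIDEO=1
OUTPUT_VIDEO="out/annotated.mp4"
```

The first TensorRT run pays a one-time engine-build cost; subsequent runs reuse the engine from `CACHE_DIR`.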

### Performance

- CPU: 20-40 ms per frame
- TensorRT FP16: 2-5 ms per frame (capable of 200-500 FPS)
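The FPS figures follow directly from per-frame latency (`fps = 1000 / latency_ms`); a quick check of the numbers above:

```shell
# Convert per-frame latency (ms) to the implied frame rate.
for ms in 2 5 20 40; do
  awk -v ms="$ms" 'BEGIN { printf "%g ms/frame -> %d FPS\n", ms, 1000 / ms }'
done
```

By the same arithmetic, the CPU range of 20-40 ms corresponds to 25-50 FPS, so a 30 FPS input stream is borderline real-time on CPU and comfortable under TensorRT FP16.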

### Model Output

3-channel lane segmentation (320x640):

- Channel 0: Ego left lane (blue)
- Channel 1: Ego right lane (magenta)
- Channel 2: Other lanes (green)