A real-time multi-person detection and tracking pipeline using an RPLidar laser scanner. Detects, counts, and tracks multiple people in proximity and streams results to a dashboard.
This project clusters LiDAR point clouds, classifies clusters as person/not-person, and tracks detections across frames:
LiDAR Scan → Clustering (DBSCAN) → Feature Extraction → Classification → Tracking → Output
The system performs:
- Point clustering (DBSCAN) to segment objects
- Feature extraction (26 geometric features per cluster)
- Person classification (Random Forest model)
- Multi-object tracking with persistent IDs
- Real-time streaming to a web dashboard (Flask + Socket.IO + Next.js)
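A minimal sketch of this per-frame flow (illustrative only: the function names, DBSCAN parameters, and the toy feature set below are not the repo's actual API, and the real classifier expects the 26 features it was trained on):

```python
# Sketch of the per-frame pipeline: cluster -> features -> classify.
# Tracking (persistent IDs) is omitted here.
import pickle

import numpy as np
from sklearn.cluster import DBSCAN

# Load the trained Random Forest person classifier (path matches config.MODEL_PATH).
with open("training/models/person_detector.pkl", "rb") as f:
    model = pickle.load(f)

def extract_features(cluster: np.ndarray) -> np.ndarray:
    """Toy stand-in for the repo's 26 geometric features; the real model
    expects the exact feature vector it was trained on."""
    width, depth = cluster.max(axis=0) - cluster.min(axis=0)
    mean_range = np.linalg.norm(cluster, axis=1).mean()
    return np.array([width, depth, len(cluster), mean_range])

def detect_people(points: np.ndarray, threshold: float = 0.7) -> list[np.ndarray]:
    """points: (N, 2) Cartesian scan points in mm -> clusters classified as people."""
    labels = DBSCAN(eps=150, min_samples=5).fit_predict(points)  # eps in mm, a guess
    people = []
    for label in set(labels) - {-1}:                 # -1 marks DBSCAN noise
        cluster = points[labels == label]
        score = model.predict_proba(extract_features(cluster).reshape(1, -1))[0, 1]
        if score >= threshold:                       # DETECTION_THRESHOLD in config.py
            people.append(cluster)
    return people                                    # the tracker assigns IDs to these next
```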
Repo layout (high level):
rplidar-people-detect/
├── config.py # Hardware + detection + tracking + UI settings
├── requirements.txt # Python dependencies
├── src/ # Clustering, features, detection, tracking
├── examples/ # Connection checks + demos
├── training/ # Data collection + model training
└── ui/ # Backend API + Next.js dashboard
Hardware:
- RPLidar (A1 tested; other models may work if supported by the driver)
- USB connection + stable mount (avoid vibration/tilt)
Software:
- Python 3.10+ and pip
- Node.js 18+ and npm (for the dashboard UI)
Setup:
- Install Python dependencies:
python -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r requirements.txt
- Patch the local rplidar driver (required to prevent serial buffer freezes):
python patch_rplidar.py
- Install the dashboard dependencies (optional but recommended):
cd ui/frontend
npm install
Configuration:
The main configuration is in config.py. At minimum, set the correct serial port:
# Hardware
LIDAR_PORT = "COM7" # e.g. "COM7" (Windows)
LIDAR_BAUDRATE = 115200
# Detection
SCAN_TYPE = "normal" # or "express" (if supported by your device/driver)
DETECTION_THRESHOLD = 0.7
MAX_DISTANCE = 4000 # mm
# Model
MODEL_PATH = "training/models/person_detector.pkl"
# UI / API
WEB_PORT = 5000 # frontend expects 5000 unless you update UI URLs
UPDATE_RATE = 10 # Hz
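Not sure which port the scanner is on? pyserial (pulled in by the rplidar driver) can list candidates; this helper is not part of the repo:

```python
# Quick helper (not part of the repo): list serial ports so you can pick
# the right LIDAR_PORT value for config.py.
from serial.tools import list_ports

for port in list_ports.comports():
    print(port.device, "-", port.description)  # e.g. "COM7 - Silicon Labs CP210x ..."
```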
Training:
- Ensure you have a trained model at training/models/person_detector.pkl (required for detection). If you don't, collect data and train:
python training/collect_clusters.py
python training/train_model.py
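train_model.py is the supported path; as a rough sketch of what it amounts to (file names and data layout here are assumptions, not the repo's actual format), the collected cluster features are fit with a Random Forest and pickled to MODEL_PATH:

```python
# Rough sketch only: fit a Random Forest on collected cluster features and
# save it where config.MODEL_PATH points. The .npz file name and its column
# layout are assumptions, not the repo's actual output format.
import pickle

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = np.load("training/data/clusters.npz")   # hypothetical output of collect_clusters.py
X, y = data["features"], data["labels"]        # (n_samples, 26), (n_samples,)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

with open("training/models/person_detector.pkl", "wb") as f:
    pickle.dump(clf, f)
```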
Run:
- Sanity-check the LiDAR connection (recommended before running the UI):
python examples/check_connection.py
python examples/animate.py
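check_connection.py is the supported check; for reference, a bare-bones equivalent using the standard rplidar package API looks roughly like this:

```python
# Bare-bones connection check (assumes the standard `rplidar` package API and
# the port/baudrate values from config.py).
from rplidar import RPLidar

lidar = RPLidar("COM7", baudrate=115200)
print(lidar.get_info())    # model, firmware, hardware, serial number
print(lidar.get_health())  # ('Good', 0) when the device is healthy
lidar.stop()
lidar.stop_motor()
lidar.disconnect()
```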
- Start the backend (Flask + Socket.IO):
python ui/backend/server.py
- Start the frontend (Next.js):
cd ui/frontend
npm run dev
Open http://localhost:3000.
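The dashboard consumes the backend's Socket.IO stream. If you'd rather consume it from a script, something like the following works with python-socketio; the event name and payload shape are assumptions, so check ui/backend/server.py for what the backend actually emits:

```python
# Subscribe to the backend's Socket.IO stream from a script instead of the dashboard.
# The event name "detections" and the payload shape are assumptions.
import socketio

sio = socketio.Client()

@sio.on("detections")
def on_detections(data):
    print("tracked people:", data)

sio.connect("http://localhost:5000")  # WEB_PORT from config.py
sio.wait()                            # block and keep receiving events
```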
Optional runners (no web UI):
- Console output:
python examples/detect.py
- OpenCV visualization:
python examples/visualize.py
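As a rough idea of what the OpenCV view involves (not the repo's code): each scan sample is a (quality, angle, distance) tuple, and rendering is just a polar-to-pixel conversion onto a top-down canvas:

```python
# Illustrative only: draw one scan (a list of (quality, angle_deg, dist_mm) tuples,
# as yielded by the rplidar driver's iter_scans) onto a top-down canvas.
import math

import cv2
import numpy as np

def draw_scan(scan, max_dist_mm=4000, size=800):
    img = np.zeros((size, size, 3), dtype=np.uint8)
    cx = cy = size // 2
    scale = (size / 2) / max_dist_mm              # mm -> pixels
    for _, angle, dist in scan:
        if dist == 0 or dist > max_dist_mm:       # drop invalid / out-of-range returns
            continue
        rad = math.radians(angle)
        x = int(cx + dist * scale * math.cos(rad))
        y = int(cy + dist * scale * math.sin(rad))
        cv2.circle(img, (x, y), 2, (0, 255, 0), -1)
    return img
```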
Troubleshooting:
- Can't connect to the LiDAR / wrong port: update LIDAR_PORT in config.py; check Device Manager for the COM port.
- Backend runs but the UI shows no data: confirm the backend is on WEB_PORT (default 5000); the frontend hardcodes http://localhost:5000 in several places.
- "Too many bytes in the input buffer" / freezes: patch_rplidar.py patches the local rplidar.py to flush the serial buffer (see the sketch after this list); the script is Windows-path-specific and may need editing before running.
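For context on that last point (an illustration of the idea, not a replacement for running patch_rplidar.py): "flushing the serial buffer" means discarding stale bytes queued by the OS so reads keep up with the scanner, which in pyserial terms is:

```python
# Illustration of the idea behind patch_rplidar.py (not the patch itself):
# clear stale bytes from the OS serial input buffer so reads don't fall behind.
# Port and baudrate should match config.py.
import serial

ser = serial.Serial("COM7", 115200, timeout=1)
ser.reset_input_buffer()   # drop anything already buffered by the driver
ser.close()
```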