danielquzhao/sentinel
RPLidar People Detection System

A real-time multi-person detection and tracking pipeline using an RPLidar laser scanner. It detects, counts, and tracks multiple people within sensor range and streams the results to a web dashboard.

Overview

This project clusters LiDAR point clouds, classifies clusters as person/not-person, and tracks detections across frames:

LiDAR Scan → Clustering (DBSCAN) → Feature Extraction → Classification → Tracking → Output

The system performs:

  • Point clustering (DBSCAN) to segment objects
  • Feature extraction (26 geometric features per cluster)
  • Person classification (Random Forest model)
  • Multi-object tracking with persistent IDs
  • Real-time streaming to a web dashboard (Flask + Socket.IO + Next.js)
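
The clustering and feature-extraction stages can be sketched as below. This is a minimal illustration, not the project's actual API: the function names, the eps/min_samples values, and the particular features shown are assumptions (the real model uses 26 geometric features).

```python
# Sketch of scan -> clusters -> features (names and parameters are illustrative).
import numpy as np
from sklearn.cluster import DBSCAN

def polar_to_cartesian(angles_deg, distances_mm):
    """Convert raw LiDAR polar samples to Cartesian (x, y) points in mm."""
    angles = np.radians(angles_deg)
    return np.column_stack((distances_mm * np.cos(angles),
                            distances_mm * np.sin(angles)))

def cluster_scan(points_mm, eps_mm=150.0, min_samples=5):
    """Segment one scan into object clusters; DBSCAN labels noise as -1."""
    return DBSCAN(eps=eps_mm, min_samples=min_samples).fit_predict(points_mm)

def cluster_features(points_mm):
    """A few of the kinds of geometric features a cluster classifier might use."""
    width, depth = points_mm.max(axis=0) - points_mm.min(axis=0)
    return {
        "n_points": len(points_mm),
        "width_mm": float(width),
        "depth_mm": float(depth),
        "mean_range_mm": float(np.linalg.norm(points_mm, axis=1).mean()),
    }
```

Each non-noise cluster's feature vector is then fed to the classifier, and surviving detections go to the tracker.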

Repo layout (high level):

rplidar-people-detect/
├── config.py              # Hardware + detection + tracking + UI settings
├── requirements.txt       # Python dependencies
├── src/                   # Clustering, features, detection, tracking
├── examples/              # Connection checks + demos
├── training/              # Data collection + model training
└── ui/                    # Backend API + Next.js dashboard
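
The tracking stage in src/ maintains persistent IDs across frames. A minimal nearest-neighbor association sketch (class and parameter names are hypothetical, not the repo's actual implementation):

```python
# Minimal tracker sketch: match each detection to the nearest existing track
# within a gate distance, otherwise spawn a new persistent ID.
import numpy as np

class NearestNeighborTracker:
    def __init__(self, max_match_dist_mm=500.0):
        self.max_match_dist_mm = max_match_dist_mm
        self.tracks = {}     # track ID -> last known (x, y) centroid in mm
        self._next_id = 0

    def update(self, detections_mm):
        """detections_mm: list of (x, y) centroids; returns one track ID per detection."""
        assigned, used = [], set()
        for det in detections_mm:
            best_id, best_d = None, self.max_match_dist_mm
            for tid, pos in self.tracks.items():
                if tid in used:
                    continue
                d = float(np.hypot(det[0] - pos[0], det[1] - pos[1]))
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:           # no track close enough: new person
                best_id = self._next_id
                self._next_id += 1
            self.tracks[best_id] = det    # update track position
            used.add(best_id)
            assigned.append(best_id)
        return assigned
```

A production tracker would also age out stale tracks and smooth positions (e.g. with a Kalman filter), but the ID-persistence idea is the same.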

Requirements

Hardware:

  • RPLidar (A1 tested; other models may work if supported by the driver)
  • USB connection + stable mount (avoid vibration/tilt)

Software:

  • Python 3.10+ and pip
  • Node.js 18+ and npm (dashboard UI)

Installation

  1. Install Python dependencies:
python -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r requirements.txt
  2. Patch the local rplidar driver (required to prevent serial buffer freezes):
python patch_rplidar.py
  3. Install the dashboard dependencies (optional but recommended):
cd ui/frontend
npm install

Configuration

The main configuration is in config.py. At minimum, set the correct serial port:

# Hardware
LIDAR_PORT = "COM7"          # serial port (check Device Manager on Windows)
LIDAR_BAUDRATE = 115200

# Detection
SCAN_TYPE = "normal"         # or "express" (if supported by your device/driver)
DETECTION_THRESHOLD = 0.7
MAX_DISTANCE = 4000          # mm

# Model
MODEL_PATH = "training/models/person_detector.pkl"

# UI / API
WEB_PORT = 5000              # frontend expects 5000 unless you update UI URLs
UPDATE_RATE = 10             # Hz

Usage

  1. Ensure you have a trained model at training/models/person_detector.pkl (required for detection). If you don’t, collect data and train:
python training/collect_clusters.py
python training/train_model.py
  2. Sanity-check the LiDAR connection (recommended before running the UI):
python examples/check_connection.py
python examples/animate.py
  3. Start the backend (Flask + Socket.IO):
python ui/backend/server.py
  4. Start the frontend (Next.js):
cd ui/frontend
npm run dev

Open http://localhost:3000.
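
The training step above fits the Random Forest classifier on labeled cluster features. train_model.py's internals aren't shown here; the following is a plausible sketch of the core idea (the function name, split, and hyperparameters are assumptions):

```python
# Sketch of the training core: fit a Random Forest on labeled 26-feature
# cluster vectors, report holdout accuracy, and persist the model.
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_person_detector(X, y, model_path="person_detector.pkl"):
    """X: (n_samples, 26) geometric features; y: 1 = person, 0 = not person."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)
    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_train, y_train)
    print(f"holdout accuracy: {clf.score(X_test, y_test):.2f}")
    joblib.dump(clf, model_path)    # saved model is loaded via MODEL_PATH
    return clf
```

At runtime the detector loads this pickle (see MODEL_PATH in config.py) and calls predict_proba on each cluster's feature vector, thresholding at DETECTION_THRESHOLD.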

Optional runners (no web UI):

  • Console output: python examples/detect.py
  • OpenCV visualization: python examples/visualize.py

Troubleshooting

  • Can't connect to the LiDAR / wrong port: update config.py LIDAR_PORT; check Device Manager for the COM port.
  • Backend runs, UI shows no data: confirm backend is on WEB_PORT (default 5000); the frontend hardcodes http://localhost:5000 in several places.
  • “Too many bytes in the input buffer” / freezes: patch_rplidar.py patches a local rplidar.py to flush the serial buffer; it’s Windows-path-specific and may need editing before running.
