Real-time people counter for a fixed camera pointed at a doorway or staircase entrance. Detects every person crossing a virtual vertical line, increments entry/exit counters, persists a full event log to SQLite, and runs on anything from a laptop to a Raspberry Pi 5.
Two independent pipelines are provided:
| Pipeline | Entry point | Detection engine | Use case |
|---|---|---|---|
| Body tracking | `main.py` | MobileNet-SSD (OpenCV DNN) | Fast, CPU-only, no GPU needed |
| Face-ID tracking | `face_main.py` | MediaPipe FaceDetection + embedding | Named-person re-identification |
`demo.mp4`: body-tracking pipeline running on a webcam. Orange boxes show tracked people; the yellow vertical line triggers ENTRY (green) and EXIT (red) events.
- System Architecture
- Folder Structure
- Requirements
- Quick Start
- Command-Line Reference
- Keyboard Shortcuts
- How Crossing Detection Works
- Configuration Reference
- Night-Vision Mode
- Face-ID Pipeline
- Database Schema
- Raspberry Pi Deployment
- Troubleshooting
```mermaid
flowchart TD
    A[Camera <br/> webcam / Pi cam / video file] --> B[Night-Vision Pre-processor <br/> night_vision.py]
    B --> C[MobileNet-SSD Person Detector <br/> detector.py]
    C -- "[x, y, w, h] boxes" --> D[Centroid Tracker <br/> tracker.py]
    D -- "{track_id: Track}" --> E[Line-Crossing Detector <br/> entry_exit.py <br/> vertical line @ LINE_X_FRACTION]
    E -- "ENTRY / EXIT events" --> F[(SQLite Database <br/> database.py <br/> logs/people.db)]
    F --> G[OpenCV HUD <br/> main.py <br/> Counters · IDs · FPS · Mode badge]

    classDef default fill:#1f2328,stroke:#d0d7de,stroke-width:1px,color:#c9d1d9;
    classDef db fill:#1f2328,stroke:#2f81f7,stroke-width:2px,color:#c9d1d9;
    class F db;
```
Key design principles

- Zero heavy dependencies: only `opencv-python` + `numpy` for the body pipeline.
- Threaded capture: a background thread always holds the freshest frame, eliminating input-queue lag.
- No GPU required: OpenCV DNN runs purely on CPU; OpenCL is attempted on supported hardware.
- Persistent cooldown: a crossing during the post-event cooldown window is never silently discarded; it fires the moment the cooldown expires.
```
entry and exit/
│
├── main.py              - Body-tracking pipeline (start here)
├── face_main.py         - Face-ID pipeline
│
├── config.py            - Single file for ALL settings
├── detector.py          - MobileNet-SSD person detector
├── tracker.py           - Centroid-based multi-object tracker
├── entry_exit.py        - Virtual vertical-line crossing logic
├── database.py          - SQLite runtime + history database
├── night_vision.py      - Low-light pre-processing (CLAHE + gamma)
├── face_id.py           - MediaPipe face detector + embedding matcher
├── face_database.py     - SQLite database for the face-ID pipeline
│
├── download_models.py   - One-time model downloader
├── requirements.txt
│
├── models/
│   ├── ssd_mobilenet_v1.pbtxt         - TF graph config (~63 KB)
│   ├── ssd_mobilenet_v1_frozen.pb     - Frozen weights (~27 MB)
│   └── blaze_face_short_range.tflite  - MediaPipe face model
│
└── logs/
    └── people.db        - Created automatically at runtime
```
Python 3.9 or later.
```
pip install -r requirements.txt
```

Contents of `requirements.txt`:

```
opencv-python>=4.7.0
opencv-contrib-python>=4.7.0
numpy>=1.24.0
mediapipe>=0.10.0    # face_main.py only
```
> **Note:** `mediapipe` is only needed for `face_main.py`. The body-tracking pipeline (`main.py`) runs on OpenCV and NumPy alone.
```
python download_models.py
```

This fetches the TF SSD MobileNet V1 COCO frozen graph (~27 MB) and places the two required files into `models/`.
If the automatic download fails, place them manually:
| File | Source |
|---|---|
| `ssd_mobilenet_v1.pbtxt` | opencv_extra / testdata / dnn |
| `ssd_mobilenet_v1_frozen.pb` | TF Model Zoo tar.gz: extract `frozen_inference_graph.pb` and rename accordingly |
```
python main.py
```

A preview window opens showing:
- Orange bounding boxes around each tracked person
- Yellow vertical line: the virtual crossing boundary
- Green ENTRY → and red EXIT ← direction arrows
- Counter overlay (top-left) and FPS / mode banner (bottom)
python main.py [--source SOURCE] [--no-preview] [--night] [--pi]
| Argument | Default | Description |
|---|---|---|
| `--source SOURCE` | `0` (default webcam) | Camera index (0, 1, …), path to a video file, or RTSP/HTTP stream URL |
| `--no-preview` | off | Disable the OpenCV window (headless / server mode) |
| `--night` | off | Force night-vision mode on from the first frame |
| `--pi` | off | Apply Raspberry Pi optimisations (lower resolution + FPS) |
```
# Second USB camera
python main.py --source 1

# Test with a recorded video
python main.py --source footage.mp4

# Headless IP camera stream
python main.py --source "rtsp://192.168.1.100:554/stream" --no-preview

# Raspberry Pi, headless, night mode
python main.py --pi --no-preview --night
```

These work while the preview window is focused:
| Key | Action |
|---|---|
| `q` / `ESC` | Quit and print final database summary |
| `s` | Print live database status to the console |
| `r` | Reset entry/exit counters to zero (history on disk is preserved) |
| `n` | Toggle night-vision mode on/off manually |
A vertical line is drawn at `LINE_X_FRACTION` × frame_width (default: 0.5 = centre).
A buffer zone of `CROSSING_BUFFER` pixels on either side creates a dead-band that
absorbs jitter for people standing near the boundary.
```
x=0                  x=320                  x=640
 │                     │                     │
 │   (LEFT side)   buf─┼─buf   (RIGHT side)  │
 │                  ───┼───                  │
 │                     │                     │
 │      ← EXIT         │        ENTRY →      │
 │                  (LINE)                   │
```
Crossing logic (default `ENTRY_DIRECTION = "right"`):

| Movement | Event |
|---|---|
| Centroid crosses left → right | ENTRY +1 |
| Centroid crosses right → left | EXIT +1 |

Set `ENTRY_DIRECTION = "left"` in `config.py` to reverse for a camera facing
the opposite direction.
Cooldown: after an event fires on a track, the same track cannot fire again
for `_COOLDOWN_FRAMES` frames (default: 8 frames ≈ 0.4 s at 20 FPS).
Unlike a naive implementation, a crossing that occurs during the cooldown is
not lost: the confirmed side is held at its pre-event value, so the pending
crossing registers the instant the cooldown expires.
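Putting the dead-band, direction, and persistent cooldown together, the logic can be sketched as follows. This is illustrative only; the names and structure are assumptions, not the actual `entry_exit.py` code:

```python
LINE_X = 320          # LINE_X_FRACTION * frame width (0.5 * 640)
BUFFER = 30           # CROSSING_BUFFER dead-band half-width (px)
COOLDOWN_FRAMES = 8   # _COOLDOWN_FRAMES


def side_of_line(cx):
    """Return 'left'/'right' once the centroid clears the dead-band, else None."""
    if cx < LINE_X - BUFFER:
        return "left"
    if cx > LINE_X + BUFFER:
        return "right"
    return None  # inside the buffer: jitter is absorbed


class Track:
    def __init__(self):
        self.confirmed_side = None  # last side cleanly occupied
        self.cooldown = 0           # frames until the track may fire again


def update(track, cx, entry_direction="right"):
    """Process one frame's centroid; return 'ENTRY', 'EXIT', or None."""
    if track.cooldown > 0:
        track.cooldown -= 1
        # confirmed_side is deliberately NOT refreshed during cooldown,
        # so a crossing made now still fires once the cooldown expires.
    side = side_of_line(cx)
    if side is None:
        return None
    if track.confirmed_side is None:
        track.confirmed_side = side  # first clean observation
        return None
    if side != track.confirmed_side and track.cooldown == 0:
        event = "ENTRY" if side == entry_direction else "EXIT"
        track.confirmed_side = side
        track.cooldown = COOLDOWN_FRAMES
        return event
    return None
```

Note how holding `confirmed_side` through the cooldown is exactly what makes the cooldown "persistent" rather than lossy.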
All settings live in config.py. No other file needs editing for normal
deployment changes.
| Setting | Default | Description |
|---|---|---|
| `CAMERA_SOURCE` | `0` | Webcam index, video path, or RTSP URL |
| `FRAME_WIDTH` | `640` | Capture width (pixels) |
| `FRAME_HEIGHT` | `480` | Capture height (pixels) |
| `TARGET_FPS` | `20` | Requested frame rate |
| `USE_THREADING_CAPTURE` | `True` | Read frames in a background thread |
| Setting | Default | Description |
|---|---|---|
| `CONFIDENCE_THRESHOLD` | `0.50` | Minimum detection score to keep a box |
| `NMS_THRESHOLD` | `0.40` | Non-max suppression overlap threshold |
| `PERSON_CLASS_ID` | `1` | COCO class index for "person" (1-indexed) |
| Setting | Default | Description |
|---|---|---|
| `MAX_DISAPPEARED` | `40` | Frames a track can go undetected before being dropped |
| `MAX_DISTANCE` | `150` | Maximum centroid distance (px) for association |
| Setting | Default | Description |
|---|---|---|
| `LINE_X_FRACTION` | `0.50` | Line position as a fraction of frame width |
| `CROSSING_BUFFER` | `30` | Dead-band half-width in pixels |
| `ENTRY_DIRECTION` | `"right"` | Direction of travel that counts as ENTRY (`"right"` or `"left"`) |
| Setting | Default | Description |
|---|---|---|
| `NIGHT_VISION_ENABLED` | `False` | Force night mode on (auto-detects otherwise) |
| `CLAHE_CLIP_LIMIT` | `3.0` | CLAHE contrast limit |
| `CLAHE_TILE_SIZE` | `(8, 8)` | CLAHE tile grid size |
| `GAMMA_VALUE` | `1.5` | Gamma brightening factor (>1 = brighter) |
| Setting | Default | Description |
|---|---|---|
| `IS_RASPBERRY_PI` | `False` | Enable Pi optimisations (auto-set by `--pi`) |
| `PI_FRAME_WIDTH` | `480` | Resolution width override on Pi |
| `PI_FRAME_HEIGHT` | `360` | Resolution height override on Pi |
| `PI_TARGET_FPS` | `10` | FPS override on Pi |
| Setting | Default | Description |
|---|---|---|
| `SHOW_PREVIEW` | `True` | Show the OpenCV window |
| `DRAW_BBOXES` | `True` | Draw person bounding boxes |
| `DRAW_CENTROIDS` | `True` | Draw centroid dots |
| `DRAW_IDS` | `True` | Draw track ID labels |
| `DRAW_CROSSING_LINE` | `True` | Draw the vertical crossing line |
| `LINE_COLOR_BGR` | yellow | Crossing line colour |
| `ENTRY_COLOR_BGR` | green | Entry counter / arrow colour |
| `EXIT_COLOR_BGR` | red | Exit counter / arrow colour |
The system checks mean frame brightness every 30 frames. If it drops below
55/255, night mode activates automatically (and deactivates when light
returns). You can also force it with `--night` or toggle it live with `n`.
Pre-processing stack applied in night mode:

```
Raw frame
   │
   ▼ 1. Greyscale conversion
   │
   ▼ 2. Gamma correction (γ = GAMMA_VALUE, default 1.5)
   │       lifts dark midtones and shadows non-linearly
   │
   ▼ 3. CLAHE
   │       adaptive per-tile histogram equalisation
   │       boosts local contrast without over-brightening bright spots
   │
   ▼ 4. Gaussian denoise (3×3 kernel)
   │       removes IR sensor speckle
   │
   ▼ 5. Grey → BGR copy
           restores 3-channel input expected by MobileNet-SSD
```
face_main.py extends the system with named-person re-identification using
facial embeddings, so a person who leaves and returns is counted as the same
individual rather than a new entry.
```
# Requires mediapipe in the active environment
python face_main.py
python face_main.py --source 1
python face_main.py --source video.mp4
```

| Extra key | Action |
|---|---|
| `r` | Manually register the largest visible face as ENTERED |
| `s` | Print live face-database status |
| `q` / `ESC` | Quit |
How it works:
- MediaPipe FaceDetection locates face bounding boxes each frame.
- FaceIdentifier (`face_id.py`) computes a lightweight embedding and matches it against known embeddings via cosine similarity, so returning or re-appearing people are linked to their original track ID.
- Matched face boxes feed the same CentroidTracker and LineCrossingDetector used by the body pipeline.
- Events are stored in a separate FacePeopleDB (`face_database.py`).
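The cosine-similarity matching step can be sketched as follows. This is illustrative only; `match_face`, the `known` dict layout, and the 0.8 threshold are assumptions, not the actual `face_id.py` interface:

```python
import numpy as np


def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_face(embedding, known, threshold=0.8):
    """Return the best-matching known person ID, or None if nothing is close enough.

    `known` maps person IDs to stored embeddings (an assumed layout).
    """
    best_id, best_score = None, threshold
    for person_id, stored in known.items():
        score = cosine_similarity(embedding, stored)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

Taking the best match above a fixed threshold, rather than the first match, keeps re-identification stable when two stored embeddings are both similar to the query.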
Both pipelines write to logs/people.db (SQLite, created automatically on
first run).
Table `active_people` (people currently inside):

| Column | Type | Description |
|---|---|---|
| `id` | INTEGER PK | Track ID from the centroid tracker |
| `entry_time` | TEXT | ISO-8601 UTC timestamp of the entry event |
| `last_seen` | TEXT | Updated every frame the person is visible |
| `cx`, `cy` | INTEGER | Last known centroid position (pixels) |
Table `history` (completed crossings):

| Column | Type | Description |
|---|---|---|
| `id` | INTEGER | Track ID |
| `entry_time` | TEXT | ISO-8601 UTC entry timestamp |
| `exit_time` | TEXT | ISO-8601 UTC exit timestamp |
| `duration_secs` | REAL | Time spent between entry and exit |
| `direction` | TEXT | `"ENTRY"` or `"EXIT"` |
```sql
-- How many people are currently on the steps?
SELECT COUNT(*) FROM active_people;

-- Last 20 crossing events, most recent first
SELECT id, entry_time, exit_time, duration_secs
FROM history
ORDER BY exit_time DESC
LIMIT 20;

-- Average dwell time today
SELECT AVG(duration_secs)
FROM history
WHERE entry_time LIKE '2026-02-28%';
```

Press `s` while the system is running to print a live console table:
```
════════════════════════════════════════════════════════════
  PEOPLE ON GIRNAR STEPS: 2
════════════════════════════════════════════════════════════
  ID   Entry Time            Last Seen             Pos (cx,cy)
  7    2026-02-28 09:12:03   2026-02-28 09:12:45   (320, 210)
  12   2026-02-28 09:13:11   2026-02-28 09:13:44   (410, 190)
════════════════════════════════════════════════════════════
```
```
# 1. Install system packages (Pi OS Bookworm)
sudo apt update
sudo apt install python3-opencv python3-numpy -y

# 2. Copy project files to the Pi, then download models
python download_models.py

# 3. Run headless with Pi optimisations
python main.py --pi --no-preview
```

What `--pi` does automatically:
| Parameter | Laptop value | Pi value |
|---|---|---|
| Capture resolution | 640 × 480 | 480 × 360 |
| Target FPS | 20 | 10 |
| Threaded capture | ✔ | ✔ |
Tips for better Pi performance:
- Set `CONFIDENCE_THRESHOLD = 0.45` to compensate for smaller detected regions at lower resolution.
- Reduce `MAX_DISAPPEARED = 20` so stale tracks are freed sooner.
- Use `raspi-config` → Performance → enable the V4L2 camera driver for the Pi Camera Module.
- For Pi 5 + Hailo AI Kit, replace the DNN backend lines in `detector.py` with `cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE` / `DNN_TARGET_MYRIAD`.
| Symptom | Fix |
|---|---|
| `[ERROR] Model files not found` | Run `python download_models.py` |
| Camera won't open | Try `--source 1` or `--source 2`; make sure no other application holds the camera |
| Very low FPS | Lower `FRAME_WIDTH` / `FRAME_HEIGHT` in `config.py`; use `--pi` on slow machines |
| People not detected | Lower `CONFIDENCE_THRESHOLD` to 0.35–0.45 |
| Exit counter not incrementing | Ensure the person walks fully past the buffer zone; increase `CROSSING_BUFFER` if the line is in a narrow space |
| Double-counting at the line | Increase `CROSSING_BUFFER` (try 40–50) |
| Entry and exit are swapped | Set `ENTRY_DIRECTION = "left"` in `config.py` |
| Night mode stays on permanently | Set `NIGHT_VISION_ENABLED = False`; the system will auto-detect |
| `mediapipe` not found in `face_main.py` | `pip install "mediapipe>=0.10.0"` |
| Stale tracks accumulate | Reduce `MAX_DISAPPEARED` (try 20) |