Nikunj2608/entry-exit-counter
Girnar Steps – Real-Time Entry/Exit Tracking System

Real-time people counter for a fixed camera pointed at a doorway or staircase entrance. Detects every person crossing a virtual vertical line, increments entry/exit counters, persists a full event log to SQLite, and runs on anything from a laptop to a Raspberry Pi 5.

Two independent pipelines are provided:

| Pipeline | Entry point | Detection engine | Use case |
|---|---|---|---|
| Body tracking | main.py | MobileNet-SSD (OpenCV DNN) | Fast, CPU-only, no GPU needed |
| Face-ID tracking | face_main.py | MediaPipe FaceDetection + embedding | Named-person re-identification |

Demo

demo.mp4

Body-tracking pipeline running on a webcam: orange boxes show tracked people, and the yellow vertical line triggers ENTRY (green) and EXIT (red) events.


Table of Contents

  1. System Architecture
  2. Folder Structure
  3. Requirements
  4. Quick Start
  5. Command-Line Reference
  6. Keyboard Shortcuts
  7. How Crossing Detection Works
  8. Configuration Reference
  9. Night-Vision Mode
  10. Face-ID Pipeline
  11. Database Schema
  12. Raspberry Pi Deployment
  13. Troubleshooting

System Architecture

flowchart TD
    A[Camera <br/> webcam / Pi cam / video file] --> B[Night-Vision Pre-processor <br/> night_vision.py]
    B --> C[MobileNet-SSD Person Detector <br/> detector.py]
    C -- "[x, y, w, h] boxes" --> D[Centroid Tracker <br/> tracker.py]
    D -- "{track_id: Track}" --> E[Line-Crossing Detector <br/> entry_exit.py <br/> vertical line @ LINE_X_FRACTION]
    E -- "ENTRY / EXIT events" --> F[(SQLite Database <br/> database.py <br/> logs/people.db)]
    F --> G[OpenCV HUD <br/> main.py <br/> Counters · IDs · FPS · Mode badge]
    
    classDef default fill:#1f2328,stroke:#d0d7de,stroke-width:1px,color:#c9d1d9;
    classDef db fill:#1f2328,stroke:#2f81f7,stroke-width:2px,color:#c9d1d9;
    class F db;

Key design principles

  • Zero heavy dependencies – only opencv-python + numpy for the body pipeline.
  • Threaded capture – a background thread always holds the freshest frame, eliminating input-queue lag.
  • No GPU required – OpenCV DNN runs purely on CPU; OpenCL is attempted on supported hardware.
  • Persistent cooldown – a crossing during the post-event cooldown window is never silently discarded; it fires the moment the cooldown expires.
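The threaded-capture principle can be sketched in a few lines. This is a generic illustration, not the actual code in main.py; `FreshestFrame` and `read_fn` are names invented here, with `read_fn` following the `(ok, frame)` convention of `cv2.VideoCapture.read`:

```python
import threading

class FreshestFrame:
    """Background reader that keeps only the newest frame.

    `read_fn` is any zero-argument callable returning (ok, frame),
    mirroring the cv2.VideoCapture.read() convention.
    """

    def __init__(self, read_fn):
        self._read_fn = read_fn
        self._frame = None
        self._lock = threading.Lock()
        self._stopped = threading.Event()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while not self._stopped.is_set():
            ok, frame = self._read_fn()
            if not ok:
                break
            with self._lock:
                self._frame = frame  # overwrite in place: no queue, no backlog

    def latest(self):
        """Return the most recently captured frame (or None before the first read)."""
        with self._lock:
            return self._frame

    def stop(self):
        self._stopped.set()
        self._thread.join(timeout=1.0)
```

With a real camera you would pass something like `cv2.VideoCapture(0).read`; because only the newest frame is retained, the detector never burns time on a stale input backlog.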

Folder Structure

entry and exit/
│
├── main.py               ← Body-tracking pipeline (start here)
├── face_main.py          ← Face-ID pipeline
│
├── config.py             ← Single file for ALL settings
├── detector.py           ← MobileNet-SSD person detector
├── tracker.py            ← Centroid-based multi-object tracker
├── entry_exit.py         ← Virtual vertical-line crossing logic
├── database.py           ← SQLite runtime + history database
├── night_vision.py       ← Low-light pre-processing (CLAHE + gamma)
├── face_id.py            ← MediaPipe face detector + embedding matcher
├── face_database.py      ← SQLite database for the face-ID pipeline
│
├── download_models.py    ← One-time model downloader
├── requirements.txt
│
├── models/
│   ├── ssd_mobilenet_v1.pbtxt          ← TF graph config  (~63 KB)
│   ├── ssd_mobilenet_v1_frozen.pb      ← Frozen weights   (~27 MB)
│   └── blaze_face_short_range.tflite   ← MediaPipe face model
│
└── logs/
    └── people.db                        ← Created automatically at runtime

Requirements

Python version

Python 3.9 or later.

Packages

pip install -r requirements.txt

Contents of requirements.txt:

opencv-python>=4.7.0
opencv-contrib-python>=4.7.0
numpy>=1.24.0
mediapipe>=0.10.0        # face_main.py only

Note – mediapipe is only needed for face_main.py.
The body-tracking pipeline (main.py) runs on OpenCV and NumPy alone.


Quick Start

1 Β· Download model weights

python download_models.py

This fetches the TF SSD MobileNet V1 COCO frozen graph (~27 MB) and places the two required files into models/.

If the automatic download fails, place them manually:

| File | Source |
|---|---|
| ssd_mobilenet_v1.pbtxt | opencv_extra / testdata / dnn |
| ssd_mobilenet_v1_frozen.pb | TF Model Zoo tar.gz → extract frozen_inference_graph.pb, rename accordingly |

2 Β· Run

python main.py

A preview window opens showing:

  • Orange bounding boxes around each tracked person
  • Yellow vertical line marking the virtual crossing boundary
  • Green ENTRY → and red ← EXIT direction arrows
  • Counter overlay (top-left) and FPS / mode banner (bottom)

Command-Line Reference

python main.py [--source SOURCE] [--no-preview] [--night] [--pi]
| Argument | Default | Description |
|---|---|---|
| --source SOURCE | 0 (default webcam) | Camera index (0, 1, …), path to a video file, or RTSP/HTTP stream URL |
| --no-preview | off | Disable the OpenCV window (headless / server mode) |
| --night | off | Force night-vision mode on from the first frame |
| --pi | off | Apply Raspberry Pi optimisations (lower resolution + FPS) |

Examples

# Second USB camera
python main.py --source 1

# Test with a recorded video
python main.py --source footage.mp4

# Headless IP camera stream
python main.py --source "rtsp://192.168.1.100:554/stream" --no-preview

# Raspberry Pi, headless, night mode
python main.py --pi --no-preview --night
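For reference, the documented flags could be wired up with the standard-library `argparse` module roughly as follows. This is a sketch; the real parser in main.py may differ in details such as help strings:

```python
import argparse

def parse_args(argv=None):
    """Parse the CLI flags documented above (illustrative sketch)."""
    p = argparse.ArgumentParser(description="Real-time entry/exit people counter")
    p.add_argument("--source", default="0",
                   help="camera index, video file path, or RTSP/HTTP stream URL")
    p.add_argument("--no-preview", action="store_true",
                   help="run headless (no OpenCV window)")
    p.add_argument("--night", action="store_true",
                   help="force night-vision mode from the first frame")
    p.add_argument("--pi", action="store_true",
                   help="apply Raspberry Pi optimisations")
    args = p.parse_args(argv)
    # A bare integer means a local camera index; anything else is a path or URL.
    if args.source.isdigit():
        args.source = int(args.source)
    return args
```

The integer/string distinction at the end matters because `cv2.VideoCapture` treats `0` (device index) and `"0"` (file name) differently.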

Keyboard Shortcuts

These work while the preview window is focused:

| Key | Action |
|---|---|
| q / ESC | Quit and print final database summary |
| s | Print live database status to the console |
| r | Reset entry/exit counters to zero (history on disk is preserved) |
| n | Toggle night-vision mode on/off manually |

How Crossing Detection Works

A vertical line is drawn at LINE_X_FRACTION × frame_width (default: 0.5 = centre).
A buffer zone of CROSSING_BUFFER pixels on either side creates a dead-band that absorbs jitter for people standing near the boundary.

  x=0                   x=320                   x=640
   │                      │                        │
   │    (LEFT side)    buf│buf   (RIGHT side)      │
   │                   ───┼───                     │
   │                      │                        │
   │   ← EXIT             │            ENTRY →     │
   │                   (LINE)                      │

Crossing logic (default ENTRY_DIRECTION = "right"):

| Movement | Event |
|---|---|
| Centroid crosses left → right | ENTRY +1 |
| Centroid crosses right → left | EXIT +1 |

Set ENTRY_DIRECTION = "left" in config.py to reverse for a camera facing the opposite direction.

Cooldown: after an event fires on a track, the same track cannot fire again for _COOLDOWN_FRAMES frames (default: 8 frames ≈ 0.4 s at 20 FPS).
Unlike a naive implementation, a crossing that occurs during the cooldown is not lost: the confirmed side is held at its pre-event value, so the pending crossing registers the instant the cooldown expires.
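Condensed into code, the side/dead-band/cooldown rules described above look roughly like this. This is a behavioural sketch, not the actual entry_exit.py; class and attribute names here are illustrative:

```python
COOLDOWN_FRAMES = 8  # per the default described above

class LineCrossingSketch:
    """Fires 'ENTRY'/'EXIT' when a centroid crosses line_x (entry = left -> right)."""

    def __init__(self, line_x, buffer_px=30):
        self.line_x = line_x
        self.buffer = buffer_px
        self.side = {}      # track_id -> last *confirmed* side: 'L' or 'R'
        self.cooldown = {}  # track_id -> frames of cooldown remaining

    def _side_of(self, cx):
        if cx < self.line_x - self.buffer:
            return "L"
        if cx > self.line_x + self.buffer:
            return "R"
        return None  # inside the dead-band: no side change

    def update(self, track_id, cx):
        """Return 'ENTRY', 'EXIT', or None for this track on this frame."""
        if self.cooldown.get(track_id, 0) > 0:
            # The confirmed side is deliberately NOT updated here, so a
            # crossing made during cooldown still fires once it expires.
            self.cooldown[track_id] -= 1
            return None
        new = self._side_of(cx)
        if new is None:
            return None
        old = self.side.get(track_id)
        self.side[track_id] = new
        if old is not None and new != old:
            self.cooldown[track_id] = COOLDOWN_FRAMES
            return "ENTRY" if new == "R" else "EXIT"
        return None
```

Note how `update()` freezes the confirmed side while the cooldown is active; that is exactly what makes a crossing made during the cooldown fire on the first frame after it expires.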


Configuration Reference

All settings live in config.py. No other file needs editing for normal deployment changes.

Camera

| Setting | Default | Description |
|---|---|---|
| CAMERA_SOURCE | 0 | Webcam index, video path, or RTSP URL |
| FRAME_WIDTH | 640 | Capture width (pixels) |
| FRAME_HEIGHT | 480 | Capture height (pixels) |
| TARGET_FPS | 20 | Requested frame rate |
| USE_THREADING_CAPTURE | True | Read frames in a background thread |

Detection

| Setting | Default | Description |
|---|---|---|
| CONFIDENCE_THRESHOLD | 0.50 | Minimum detection score to keep a box |
| NMS_THRESHOLD | 0.40 | Non-max suppression overlap threshold |
| PERSON_CLASS_ID | 1 | COCO class index for "person" (1-indexed) |

Tracker

| Setting | Default | Description |
|---|---|---|
| MAX_DISAPPEARED | 40 | Frames a track can go undetected before being dropped |
| MAX_DISTANCE | 150 | Maximum centroid distance (px) for association |

Crossing Line

| Setting | Default | Description |
|---|---|---|
| LINE_X_FRACTION | 0.50 | Line position as a fraction of frame width |
| CROSSING_BUFFER | 30 | Dead-band half-width in pixels |
| ENTRY_DIRECTION | "right" | Direction of travel that counts as ENTRY ("right" or "left") |

Night Vision

| Setting | Default | Description |
|---|---|---|
| NIGHT_VISION_ENABLED | False | Force night mode on (auto-detects otherwise) |
| CLAHE_CLIP_LIMIT | 3.0 | CLAHE contrast limit |
| CLAHE_TILE_SIZE | (8, 8) | CLAHE tile grid size |
| GAMMA_VALUE | 1.5 | Gamma brightening factor (>1 = brighter) |

Raspberry Pi

| Setting | Default | Description |
|---|---|---|
| IS_RASPBERRY_PI | False | Enable Pi optimisations (auto-set by --pi) |
| PI_FRAME_WIDTH | 480 | Resolution width override on Pi |
| PI_FRAME_HEIGHT | 360 | Resolution height override on Pi |
| PI_TARGET_FPS | 10 | FPS override on Pi |

Visualisation

| Setting | Default | Description |
|---|---|---|
| SHOW_PREVIEW | True | Show the OpenCV window |
| DRAW_BBOXES | True | Draw person bounding boxes |
| DRAW_CENTROIDS | True | Draw centroid dots |
| DRAW_IDS | True | Draw track ID labels |
| DRAW_CROSSING_LINE | True | Draw the vertical crossing line |
| LINE_COLOR_BGR | yellow | Crossing line colour |
| ENTRY_COLOR_BGR | green | Entry counter / arrow colour |
| EXIT_COLOR_BGR | red | Exit counter / arrow colour |

Night-Vision Mode

The system checks mean frame brightness every 30 frames. If it drops below 55 / 255, night mode activates automatically (and deactivates when light returns). You can also force it with --night or toggle live with n.
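That auto-detection amounts to a periodic mean-brightness test. A minimal sketch (the function name and the channel-averaging shortcut are assumptions; the real code may convert to greyscale first):

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 55  # mean grey level (out of 255) that triggers night mode
CHECK_EVERY = 30           # frames between brightness checks, per the text above

def is_low_light(frame_bgr):
    """True if mean frame brightness falls below the night-mode threshold.

    Averaging all BGR channels approximates greyscale brightness closely
    enough for a coarse day/night decision.
    """
    return float(np.mean(frame_bgr)) < BRIGHTNESS_THRESHOLD
```

In the main loop this check would only run on frames where `frame_count % CHECK_EVERY == 0`, keeping its cost negligible.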

Pre-processing stack applied in night mode:

Raw frame
   │
   ▼  1. Greyscale conversion
   │
   ▼  2. Gamma correction  (γ = GAMMA_VALUE, default 1.5)
   │     – lifts dark midtones and shadows non-linearly
   │
   ▼  3. CLAHE
   │     – adaptive per-tile histogram equalisation
   │     – boosts local contrast without over-brightening bright spots
   │
   ▼  4. Gaussian denoise  (3×3 kernel)
   │     – removes IR sensor speckle
   │
   ▼  5. Grey → BGR copy
        – restores the 3-channel input expected by MobileNet-SSD

Face-ID Pipeline

face_main.py extends the system with named-person re-identification using facial embeddings, so a person who leaves and returns is counted as the same individual rather than a new entry.

# Requires mediapipe in the active environment
python face_main.py
python face_main.py --source 1
python face_main.py --source video.mp4
| Extra key | Action |
|---|---|
| r | Manually register the largest visible face as ENTERED |
| s | Print live face-database status |
| q / ESC | Quit |

How it works:

  1. MediaPipe FaceDetection locates face bounding boxes each frame.
  2. FaceIdentifier (face_id.py) computes a lightweight embedding and matches it against known embeddings via cosine similarity; returning or re-appearing people are linked to their original track ID.
  3. Matched face boxes feed the same CentroidTracker and LineCrossingDetector used by the body pipeline.
  4. Events are stored in a separate FacePeopleDB (face_database.py).
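Step 2 (cosine-similarity matching) can be illustrated as follows; `MATCH_THRESHOLD` and the function names are assumptions for this sketch, not identifiers taken from face_id.py:

```python
import numpy as np

MATCH_THRESHOLD = 0.80  # illustrative cut-off; the real value lives in face_id.py

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_embedding(embedding, known):
    """Return the best-matching known person ID, or None if nobody is close enough.

    `known` maps person_id -> stored embedding vector.
    """
    best_id, best_sim = None, MATCH_THRESHOLD
    for pid, ref in known.items():
        sim = cosine_similarity(embedding, ref)
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return best_id
```

Returning `None` below the threshold is what lets the pipeline register a genuinely new face instead of force-matching everyone to the nearest known person.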

Database Schema

Both pipelines write to logs/people.db (SQLite, created automatically on first run).

active_people – who is currently on the steps

| Column | Type | Description |
|---|---|---|
| id | INTEGER PK | Track ID from the centroid tracker |
| entry_time | TEXT | ISO-8601 UTC timestamp of the entry event |
| last_seen | TEXT | Updated every frame the person is visible |
| cx, cy | INTEGER | Last known centroid position (pixels) |

history – complete entry/exit log

| Column | Type | Description |
|---|---|---|
| id | INTEGER | Track ID |
| entry_time | TEXT | ISO-8601 UTC entry timestamp |
| exit_time | TEXT | ISO-8601 UTC exit timestamp |
| duration_secs | REAL | Time spent between entry and exit |
| direction | TEXT | "ENTRY" or "EXIT" |

Useful queries

-- How many people are currently on the steps?
SELECT COUNT(*) FROM active_people;

-- Last 20 crossing events, most recent first
SELECT id, entry_time, exit_time, duration_secs
FROM   history
ORDER  BY exit_time DESC
LIMIT  20;

-- Average dwell time today
SELECT AVG(duration_secs)
FROM   history
WHERE  entry_time LIKE '2026-02-28%';
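The same queries can be run from Python with the standard-library sqlite3 module. The schema below is reconstructed from the column tables above; the actual DDL in database.py may differ:

```python
import sqlite3

def open_db(path="logs/people.db"):
    """Connect and ensure the two documented tables exist."""
    con = sqlite3.connect(path)
    con.executescript("""
        CREATE TABLE IF NOT EXISTS active_people (
            id         INTEGER PRIMARY KEY,   -- track ID from the centroid tracker
            entry_time TEXT,
            last_seen  TEXT,
            cx         INTEGER,
            cy         INTEGER
        );
        CREATE TABLE IF NOT EXISTS history (
            id            INTEGER,
            entry_time    TEXT,
            exit_time     TEXT,
            duration_secs REAL,
            direction     TEXT                -- 'ENTRY' or 'EXIT'
        );
    """)
    return con

def people_on_steps(con):
    """Current occupancy: the first query from the list above."""
    (count,) = con.execute("SELECT COUNT(*) FROM active_people").fetchone()
    return count
```

Passing `":memory:"` as the path gives a throwaway database, which is handy for testing queries without touching logs/people.db.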

Press s while the system is running to print a live console table:

────────────────────────────────────────────────────────────
  PEOPLE ON GIRNAR STEPS: 2
────────────────────────────────────────────────────────────
     ID        Entry Time           Last Seen      Pos (cx,cy)
      7  2026-02-28 09:12:03  2026-02-28 09:12:45  (320, 210)
     12  2026-02-28 09:13:11  2026-02-28 09:13:44  (410, 190)
────────────────────────────────────────────────────────────

Raspberry Pi Deployment

# 1. Install system packages (Pi OS Bookworm)
sudo apt update
sudo apt install python3-opencv python3-numpy -y

# 2. Copy project files to the Pi, then download models
python download_models.py

# 3. Run headless with Pi optimisations
python main.py --pi --no-preview

What --pi does automatically:

| Parameter | Laptop value | Pi value |
|---|---|---|
| Capture resolution | 640 × 480 | 480 × 360 |
| Target FPS | 20 | 10 |
| Threaded capture | ✓ | ✓ |

Tips for better Pi performance:

  • Set CONFIDENCE_THRESHOLD = 0.45 to compensate for smaller detected regions at lower resolution.
  • Reduce MAX_DISAPPEARED = 20 so stale tracks are freed sooner.
  • Use raspi-config → Performance → enable the V4L2 camera driver for the Pi Camera Module.
  • For Pi 5 + Hailo AI Kit, replace the DNN backend lines in detector.py with cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE / DNN_TARGET_MYRIAD.

Troubleshooting

| Symptom | Fix |
|---|---|
| [ERROR] Model files not found | Run python download_models.py |
| Camera won't open | Try --source 1 or --source 2; make sure no other application holds the camera |
| Very low FPS | Lower FRAME_WIDTH / FRAME_HEIGHT in config.py; use --pi on slow machines |
| People not detected | Lower CONFIDENCE_THRESHOLD to 0.35–0.45 |
| Exit counter not incrementing | Ensure the person walks fully past the buffer zone; increase CROSSING_BUFFER if the line is in a narrow space |
| Double-counting at the line | Increase CROSSING_BUFFER (try 40–50) |
| Entry and exit are swapped | Set ENTRY_DIRECTION = "left" in config.py |
| Night mode stays on permanently | Set NIGHT_VISION_ENABLED = False; the system will auto-detect |
| mediapipe not found in face_main.py | pip install "mediapipe>=0.10.0" |
| Stale tracks accumulate | Reduce MAX_DISAPPEARED (try 20) |
