Glove Detection Project

This project uses YOLOv8 to detect hands with and without gloves in images. It processes a directory of images and outputs annotated images along with JSON files containing detection details.

Demo


Sample Output

| Input Image | Detected Output |
| --- | --- |
| *(input image)* | *(annotated output)* |

Installation

This project uses uv for package management.

  1. Install uv:
    pip install uv

Usage

To run glove detection on a folder of images, use the main.py script.

uv run main.py <path_to_your_image_folder>

For example, to process the sample images provided:

uv run main.py sample_images

The annotated images will be saved in the output/ directory, and the detection logs in JSON format will be saved in the logs/ directory.
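As a rough illustration of the per-image JSON logs, the helper below writes one log file per processed image. The exact schema used in `logs/` is an assumption; the field names here are illustrative only.

```python
import json
from pathlib import Path

def write_detection_log(image_name, detections, log_dir="logs"):
    """Write one JSON log per image. Each detection carries a class label,
    a confidence score, and a bounding box. Schema is illustrative, not
    necessarily the repository's exact format."""
    Path(log_dir).mkdir(exist_ok=True)
    log_path = Path(log_dir) / f"{Path(image_name).stem}.json"
    with open(log_path, "w") as f:
        json.dump({"image": image_name, "detections": detections}, f, indent=2)
    return log_path

# Example with a hypothetical detection
log_path = write_detection_log("sample.jpg", [
    {"label": "gloved_hand", "confidence": 0.91, "bbox": [34, 50, 210, 230]},
])
```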

Dataset

The dataset used for this project is a custom dataset focused on glove detection, originally sourced from Roboflow: Hang and Glove Detect. It contains images of hands, which are categorized into the classes 'gloved_hand' and 'bare_hand' after dataset cleaning.

Model

The primary model used for detection is YOLOv8s (YOLOv8 small), chosen for its balance of performance and efficiency. This model was fine-tuned from a pretrained yolov8s.pt checkpoint.

Preprocessing and Training

The training process involved several steps:

  1. Data Cleaning: Initial data cleaning was performed using clean_dataset.py to remove duplicates or corrupted entries and to rename the classes.
  2. Data Splitting: The dataset originally included only train and test splits; a validation set was created from 20% of the training set.
  3. Training: The YOLOv8s model was trained for 75 epochs using the train.py script. Key hyperparameters were configured as follows (from runs/detect/train3/args.yaml):
    • epochs: 75
    • batch: 16
    • imgsz: 640
    • model: yolov8s
    • cos_lr: True (cosine learning rate scheduler)
    • mixup: 0.1
    • flipud: 0.5 (vertical flip augmentation)
    • fliplr: 0.5 (horizontal flip augmentation)
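The 80/20 split in step 2 can be sketched as follows. This is a minimal, deterministic version; the repository's actual split logic may differ.

```python
import random

def split_train_val(image_names, val_fraction=0.2, seed=0):
    """Split a list of training image names into (train, val) lists,
    moving val_fraction of them into validation. Deterministic for a
    given seed, so the split is reproducible across runs."""
    rng = random.Random(seed)
    names = sorted(image_names)
    val = set(rng.sample(names, int(len(names) * val_fraction)))
    return [n for n in names if n not in val], sorted(val)

# Example: 10 images -> 8 train, 2 validation
train_set, val_set = split_train_val([f"img_{i}.jpg" for i in range(10)])
```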

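The hyperparameters above translate to an Ultralytics training call roughly like the sketch below. This is a hypothetical reconstruction of `train.py`, assuming the Ultralytics Python API; the `data.yaml` path is an assumption.

```python
# Hyperparameters mirroring runs/detect/train3/args.yaml
hyp = dict(
    epochs=75,    # training epochs
    batch=16,     # batch size
    imgsz=640,    # input image size
    cos_lr=True,  # cosine learning-rate schedule
    mixup=0.1,    # mixup augmentation probability
    flipud=0.5,   # vertical-flip augmentation probability
    fliplr=0.5,   # horizontal-flip augmentation probability
)

def train(data_yaml="data.yaml"):
    # Fine-tune from the pretrained small checkpoint (yolov8s.pt).
    from ultralytics import YOLO  # requires the ultralytics package
    model = YOLO("yolov8s.pt")
    return model.train(data=data_yaml, **hyp)
```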
What Worked and What Didn't

  • YOLOv8s Performance: The YOLOv8s model demonstrated strong performance, achieving high Mean Average Precision (mAP) scores on the validation set.
  • Hyperparameter Tuning: Extensive hyperparameter tuning was conducted across different training runs (train, train2, train3, train4).
  • Model Comparison: While YOLOv8m (from train4) showed marginally higher mAP50-95, YOLOv8s (from train3) was selected as the preferred model due to its comparable performance and significantly smaller size, making it more efficient for deployment.
  • Initial Models: Earlier training runs with YOLOv8n models showed lower performance compared to the 's' and 'm' variants, indicating the benefit of using larger models for this dataset.
