This repository contains utilities to convert labeled images to a YOLO dataset, train a YOLO model using the Ultralytics API, and visualize inference results in a Streamlit dashboard.
- `data_convert.py`: convert LabelMe-style JSONs into a YOLO-format dataset under `YOLO_Dataset/`.
- `train.py`: simple training entry using `ultralytics.YOLO` (adjust the model path and training hyperparameters inside if needed).
- `app.py`: Streamlit app for visualizing video/image inference using trained `.pt` weights.
- `YOLO_Dataset/`: default output folder for converted datasets (`data.yaml`, `images/`, `labels/`).
Install dependencies from `requirements.txt`:

```bash
pip install -r requirements.txt
```

Make sure you have a compatible GPU driver and a matching PyTorch/cuDNN build if you plan to train on a GPU.
Convert a folder tree of labeled images (LabelMe JSONs next to images) into the YOLO dataset layout.
Example:

```bash
python data_convert.py --input ./my_images_root --output ./YOLO_Dataset --ratio 0.8
```

What it does:
- Scans top-level subdirectories under the `--input` path (ignores the output folder).
- Finds image files (`.jpg`, `.png`, `.jpeg`) and their corresponding `.json` LabelMe files.
- Converts rectangle annotations to YOLO normalized bbox format and writes `.txt` labels.
- Copies images and labels into `YOLO_Dataset/images/{train,val}` and `YOLO_Dataset/labels/{train,val}`.
- Writes `YOLO_Dataset/data.yaml` with the `train`, `val`, and `names` mapping.
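The rectangle-to-YOLO conversion step above can be sketched as follows (a minimal illustration; the function names and point handling are assumptions, not the script's actual code):

```python
def labelme_rect_to_yolo(points, img_w, img_h):
    """Convert a LabelMe rectangle (two corner points) into the normalized
    YOLO bbox format: (x_center, y_center, width, height), each in [0, 1]."""
    (x1, y1), (x2, y2) = points
    # LabelMe corners may arrive in any order, so normalize to min/max first.
    xmin, xmax = min(x1, x2), max(x1, x2)
    ymin, ymax = min(y1, y2), max(y1, y2)
    return (
        (xmin + xmax) / 2 / img_w,  # x_center
        (ymin + ymax) / 2 / img_h,  # y_center
        (xmax - xmin) / img_w,      # width
        (ymax - ymin) / img_h,      # height
    )

def rect_to_label_line(class_id, points, img_w, img_h):
    """Format one annotation as a YOLO .txt label line."""
    xc, yc, w, h = labelme_rect_to_yolo(points, img_w, img_h)
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```

One such line is written per rectangle, so each image's `.txt` file mirrors its JSON annotations.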
Notes:
- If a subfolder has many images, the script shuffles and splits them according to `--ratio`.
- The converter removes the output folder if it already exists, so back up any existing data first.
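The shuffle-and-split behavior can be illustrated roughly like this (the `seed` parameter is an assumption added for reproducibility; the actual script may split differently):

```python
import random

def split_dataset(image_paths, ratio=0.8, seed=None):
    """Shuffle image paths and split them into (train, val) lists,
    where `ratio` is the fraction assigned to the training set."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * ratio)
    return paths[:cut], paths[cut:]
```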
Training is provided by `train.py` using the Ultralytics YOLO API. By default it loads `yolo12n.pt`.
Quick start:

- Ensure `YOLO_Dataset/data.yaml` exists (created by the converter).
- Edit `train.py` to change the base model or hyperparameters if required.
- Run `python train.py`.

Important:

- `train.py` uses `data=os.path.abspath('./YOLO_Dataset/data.yaml')`; make sure that path resolves correctly.
- Tune `batch`, `imgsz`, `epochs`, and `device` in `train.py` for your hardware.
- Training outputs are saved under `runs/detect/<project>/<name>` by default.
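A minimal training sketch in the spirit of `train.py` (the hyperparameter values below are illustrative assumptions; adjust them for your hardware):

```python
import os

# Illustrative hyperparameters; tune batch/imgsz/epochs/device as noted above.
HYPERPARAMS = {
    "data": os.path.abspath("./YOLO_Dataset/data.yaml"),
    "epochs": 100,
    "imgsz": 640,
    "batch": 16,
    "device": 0,  # GPU index; use "cpu" to train without a GPU
}

def train():
    # Deferred import so the sketch can be read without ultralytics installed.
    from ultralytics import YOLO
    model = YOLO("yolo12n.pt")  # base model mentioned above
    model.train(**HYPERPARAMS)  # outputs land under runs/detect/ by default
```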
The Streamlit app in `app.py` provides two tabs: live video detection (upload a video) and single image detection.
Run the app:

```bash
streamlit run app.py
```

How to use:
- Place trained `.pt` weight files under `runs/detect/**/weights/` (the app searches `runs/detect/**/weights/*.pt`).
- Use the sidebar to select a weights file, set the confidence threshold, and choose the render width.
- Upload a surveillance video (mp4/avi/mov) for looped inference, or upload an image for single-frame detection.
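The weight-file discovery the sidebar performs can be sketched with a recursive glob (the function name is illustrative):

```python
import glob

def find_weight_files(pattern="runs/detect/**/weights/*.pt"):
    """Return all trained weight files matching the app's search pattern."""
    # recursive=True lets "**" match arbitrarily nested experiment folders.
    return sorted(glob.glob(pattern, recursive=True))
```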
Notes:
- The app encodes frames as JPEG base64 for smoother browser rendering and includes simple frame-rate throttling to avoid freezing.
- If no `.pt` files are found, the sidebar will display an error and the app will stop.
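The frame-rate throttling mentioned above can be approximated like this (the function shape and target FPS are assumptions, not the app's exact code):

```python
import time

def throttle(prev_ts, target_fps=25.0):
    """Sleep just long enough to cap the render loop at `target_fps`.
    Returns a fresh timestamp to carry into the next loop iteration."""
    interval = 1.0 / target_fps
    elapsed = time.monotonic() - prev_ts
    if elapsed < interval:
        time.sleep(interval - elapsed)
    return time.monotonic()
```

Calling `throttle` once per frame keeps the browser from being flooded faster than it can render.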
- If `data_convert.py` fails to find JSONs, ensure each JSON filename matches its image name (same basename).
- For low GPU memory, reduce `batch` in `train.py` to 8 or 4.
- Check `runs/` after training to locate saved weights (`runs/detect/<project>/<exp>/weights/best.pt`).
This project uses permissive personal-use conventions. Feel free to open issues or suggest improvements.