This repository contains two Google Colab notebooks, as well as sections for troubleshooting and references:
- 1. YOLO11n Training on Google Colab
- 2. YOLO11n Full Integer Quantization and VELA Conversion for Grove Vision AI V2
- 3. Troubleshooting
- 4. References
A notebook to train an Ultralytics YOLO11n object detection model on a custom dataset in Google Colab.
- Dataset Structure: Organize your dataset with the following folder structure:

  ```
  🗂️ dataset
  ├── 🗂️ train
  │   ├── 🗂️ images
  │   └── 🗂️ labels
  ├── 🗂️ valid
  │   ├── 🗂️ images
  │   └── 🗂️ labels
  └── data.yaml
  ```

  Ensure `data.yaml` is present in the `dataset` folder.
- Zip the Dataset: Compress the `dataset` folder into a `dataset.zip` file. On macOS, use the following command to exclude hidden files:

  ```bash
  zip -r dataset.zip . -x "*.DS_Store" "__MACOSX/*" ".Trashes/*" ".Spotlight-V100/*" ".TemporaryItems/*"
  ```
- Google Drive Setup:
  - Create a folder named `yolo` in your Google Drive's `MyDrive` (i.e., `/content/drive/MyDrive/yolo`).
  - Copy the `dataset.zip` file into `/content/drive/MyDrive/yolo`.
  - Your trained YOLO model (e.g., `best.pt`) will also be saved here.
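As a rough illustration of what the training notebook does, the sketch below mounts Google Drive, unpacks `dataset.zip`, trains YOLO11n, and copies `best.pt` back to the `yolo` folder. The `data.yaml` layout, epoch count, image size, and the default Ultralytics output path (`runs/detect/train/weights/best.pt`) are assumptions for illustration, not the notebook's exact settings.

```python
# Minimal Colab sketch (illustrative, not the exact notebook code).
import shutil
import zipfile

from google.colab import drive
from ultralytics import YOLO

# Mount Google Drive so /content/drive/MyDrive/yolo is available.
drive.mount("/content/drive")

# Unpack dataset.zip (created from inside the dataset folder) into /content/dataset.
with zipfile.ZipFile("/content/drive/MyDrive/yolo/dataset.zip") as zf:
    zf.extractall("/content/dataset")

# Assumed data.yaml contents (Ultralytics detection dataset config):
#   path: /content/dataset
#   train: train/images
#   val: valid/images
#   names:
#     0: class_a
#     1: class_b

# Train YOLO11n from the pretrained checkpoint; epochs/imgsz are example values.
model = YOLO("yolo11n.pt")
model.train(data="/content/dataset/data.yaml", epochs=100, imgsz=640)

# Copy the best weights back to Google Drive (default Ultralytics output path).
shutil.copy("runs/detect/train/weights/best.pt", "/content/drive/MyDrive/yolo/best.pt")
```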
This notebook handles the full integer quantization of your trained YOLO11n model and its conversion with the Arm VELA compiler for deployment on the Himax WiseEye2 (WE2) chip. The result is a `full_integer_quant_vela.tflite` file (a minimal sketch follows the prerequisites below).
- Python 3.10 Environment: This notebook requires Python 3.10 due to dependencies on the `imp` module, which is removed in newer Python versions. The notebook sets up a virtual environment (`env_yolo11`) with Python 3.10.
- Dataset Preparation: The same dataset structure and zipping (`dataset.zip`) as described in Section 1.1 are required for creating a calibration image set.
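As a rough sketch of what this notebook automates, the example below exports a full-integer-quantized TFLite model with Ultralytics (the images referenced by `data.yaml` serve as the calibration set) and then compiles it with the `vela` command from the ethos-u-vela package. The accelerator config `ethos-u55-64`, the input size, and the output file names are assumptions for illustration; use the exact Vela options from the notebook and the Himax tutorial.

```python
# Illustrative sketch (run inside the Python 3.10 env_yolo11 environment).
import subprocess

from ultralytics import YOLO

# Full integer quantization: Ultralytics uses the images referenced by
# data.yaml as the calibration set for the int8 TFLite export.
model = YOLO("best.pt")
model.export(format="tflite", int8=True, data="dataset/data.yaml", imgsz=192)

# The int8 export is written next to the model, e.g.
# best_saved_model/best_full_integer_quant.tflite (exact name can vary by version).
tflite_path = "best_saved_model/best_full_integer_quant.tflite"

# Compile for the Ethos-U NPU on the WE2. ethos-u55-64 is an assumed
# accelerator config -- take the real options from the notebook/Himax tutorial.
subprocess.run(
    [
        "vela",
        tflite_path,
        "--accelerator-config", "ethos-u55-64",
        "--output-dir", "vela_out",
    ],
    check=True,
)
# Result: vela_out/best_full_integer_quant_vela.tflite
```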
- If you use the Himax AI web toolkit, you may find that your custom YOLO11n model still shows the COCO classes (person, bicycle, car, etc.). This is because the class names are hard-coded in the toolkit. You can find the list of class names in `Himax_AI_web_toolkit/assets/index-legacy.51f14f00.js`; search for `person` in this file and replace the COCO class names with the classes you trained your model on.
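If you prefer to script that edit, a plain text substitution like the sketch below can work, assuming the COCO class names appear as ordinary quoted strings in the bundled file. The replacement class names are purely illustrative; check the result afterwards, since short names such as "car" may also occur elsewhere in the minified code.

```python
# Hypothetical helper: swap the hard-coded COCO class names in the bundled
# JS file for your own classes. Assumes the names appear as quoted strings.
from pathlib import Path

js_path = Path("Himax_AI_web_toolkit/assets/index-legacy.51f14f00.js")

# Illustrative mapping from COCO names to your trained classes.
replacements = {
    '"person"': '"my_class_0"',
    '"bicycle"': '"my_class_1"',
    '"car"': '"my_class_2"',
}

text = js_path.read_text(encoding="utf-8")
for old, new in replacements.items():
    text = text.replace(old, new)
js_path.write_text(text, encoding="utf-8")
```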
- How to build the environment on your local computer to create the image file and flash it to the Grove Vision AI V2 (macOS, Windows, or Linux).
- Detailed information can be found in the GitHub repository YOLO11n on WE2.
- Install the YOLO11 environment on a local PC.
- The output int8 VELA TFLite model, which you can open with Netron.
- The original `YOLO11_on_WE2_Tutorial.ipynb` notebook on Colab.