How to add your custom ReID model to BoxMOT
This guide walks you through the repository changes required to plug a brand-new person or vehicle re-identification (ReID) model into BoxMOT. By the end you will be able to select your weights with --reid-model across the CLI and Python APIs just like any built-in backbone.
 
Tip: Keep your implementation modular. If your model can be exported to ONNX, OpenVINO, TorchScript, or TensorRT later, follow the existing modules for clean separation between the backbone definition, the factory, and configuration metadata.
 
Create a new module that exposes your architecture. Use existing implementations, such as `boxmot/appearance/backbones/osnet.py` or `mobilenetv2.py`, as references for structure, type hints, and docstrings. Two key rules apply:
 
- Inference must return feature vectors. The forward pass must emit embeddings rather than classification logits so that trackers can compare appearance descriptors directly.
- Follow PyTorch conventions. Your module should subclass `torch.nn.Module`, accept `torch.Tensor` inputs shaped like `[batch, channels, height, width]`, and run on the device provided by upstream code. If your model needs helper functions (e.g., custom blocks), keep them in the same module or a dedicated subpackage so they can be imported without side effects.
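The rules above can be sketched as a minimal backbone module. This is an illustrative skeleton, not a production architecture: the class name `MyBackbone`, the layer sizes, and the `feature_dim` parameter are all placeholders you would replace with your own design.

```python
import torch
import torch.nn as nn


class MyBackbone(nn.Module):
    """Hypothetical minimal ReID backbone; layer sizes are illustrative only."""

    def __init__(self, feature_dim: int = 512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.fc = nn.Linear(32, feature_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Return embeddings, not classification logits, so trackers can
        # compare appearance descriptors directly.
        x = self.conv(x)          # [batch, 32, H/2, W/2]
        x = self.pool(x).flatten(1)  # [batch, 32]
        return self.fc(x)            # [batch, feature_dim]
```

Note that the module takes tensors on whatever device upstream code placed them on; it holds no device logic of its own.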
 
Add an import for your backbone and register it in the MODEL_FACTORY map:
 
```python
from boxmot.appearance.backbones.my_backbone import MyBackbone

MODEL_FACTORY = {
    # ...existing entries...
    "my_backbone": MyBackbone,
}
```
The key string ("my_backbone" in the example) becomes the public identifier that users pass through configuration files or CLI flags. Make sure the constructor signature matches how you expect the model to be instantiated (e.g., keyword arguments for number of classes, pretrained weights, or input size).
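The lookup-then-instantiate pattern can be sketched in isolation. The stand-in class and its `feature_dim`/`pretrained` keyword arguments below are hypothetical; the point is that whatever kwargs you expect callers to pass must match your constructor's signature.

```python
# Self-contained sketch of the factory lookup pattern; MyBackbone here is a
# stand-in class, not the real BoxMOT import.
class MyBackbone:
    def __init__(self, feature_dim: int = 512, pretrained: bool = False):
        # Hypothetical kwargs -- align these with how callers instantiate you.
        self.feature_dim = feature_dim
        self.pretrained = pretrained


MODEL_FACTORY = {"my_backbone": MyBackbone}

# The registered key maps to the class; calling it builds the model.
model_cls = MODEL_FACTORY["my_backbone"]
model = model_cls(feature_dim=256)
```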
 
Update two structures so that the rest of BoxMOT knows about the new backbone and any pretrained checkpoints you distribute:
- Append the same identifier used in the factory to `MODEL_TYPES`.
- Add any downloadable weights to `TRAINED_URLS`. Use stable, publicly accessible URLs (GitHub Releases, Google Drive with `uc?id=...`, Hugging Face, etc.). For traceability, add the dataset name to the registered weights filename. For example:
```python
MODEL_TYPES = [
    # ...
    "my_backbone",
]

TRAINED_URLS = {
    # ...
    "my_backbone_market1501.pt": "https://github.com/your-org/your-repo/releases/download/v1.0/my_backbone_market1501.pt",
}
```
Before opening a pull request, validate the end-to-end workflow:
- Instantiate from the factory. Note that `MODEL_FACTORY` maps the identifier to the class, so it must be called to build the model:

```python
from boxmot.appearance.reid.factory import MODEL_FACTORY

model = MODEL_FACTORY["my_backbone"]()  # pass your constructor kwargs here
model.eval()
```

- Run a forward pass with a dummy tensor to confirm that embeddings are returned without errors.
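Such a smoke test can look like the following. The `nn.Sequential` model here is only a placeholder standing in for your instantiated backbone, and the `[4, 3, 256, 128]` crop shape is an assumed input size; substitute your own model and dimensions.

```python
import torch
import torch.nn as nn

# Placeholder standing in for your instantiated backbone.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 256 * 128, 512))
model.eval()

# Typical ReID input: a batch of person crops shaped [batch, channels, height, width].
dummy = torch.randn(4, 3, 256, 128)
with torch.no_grad():
    embeddings = model(dummy)

print(embeddings.shape)  # expect one feature vector per crop, e.g. [4, 512]
```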
- Track with the CLI on a short video to ensure the new model plays nicely with the tracker of your choice, for example:

```shell
boxmot track --source assets/MOT17-mini/track.mp4 \
  --yolo-model yolov8n.pt \
  --reid-model my_backbone_market1501.pt \
  --tracking-method botsort
```
- Optional exports: run `boxmot export --weights my_backbone_market1501.pt --include onnx` if you intend to distribute additional formats.