Measure tracker quality with standard MOT metrics to get reproducible scores for development and benchmarking.
What you'll learn:
- Download ground-truth annotations and detections for evaluation
- Run tracking on pre-computed detections
- Evaluate tracking results against ground truth
*Visualization of ground-truth annotations for MOT17.*
Get started by installing the package.
```bash
pip install trackers
```
For more options, see the install guide.
Use trackers download to pull ground-truth annotations and detections from supported benchmarks like MOT17.
=== "CLI"
Fetch MOT17 validation annotations and detections from the command line.
```text
trackers download mot17 \
--split val \
--asset annotations,detections \
--output ./data
```
=== "Python"
Fetch MOT17 validation annotations and detections from Python.
```python
from trackers import Dataset, DatasetAsset, DatasetSplit, download_dataset
download_dataset(
dataset=Dataset.MOT17,
split=DatasetSplit.VAL,
asset=[DatasetAsset.ANNOTATIONS, DatasetAsset.DETECTIONS],
output="./data",
)
```
After downloading, your data directory will look like this:

```text
data/
└── mot17/
    └── val/
        ├── MOT17-02-FRCNN/
        │   ├── det/
        │   │   └── det.txt
        │   └── gt/
        │       └── gt.txt
        ├── MOT17-04-FRCNN/
        │   ├── det/
        │   │   └── det.txt
        │   └── gt/
        │       └── gt.txt
        └── ...
```
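This layout is easy to traverse programmatically. As a small sketch using only the standard library (the helper name `sequence_files` is ours, not part of the trackers API), pairing each sequence's ground-truth file with its detections file:

```python
from pathlib import Path

def sequence_files(root):
    """Yield (sequence_name, det_path, gt_path) for each sequence under root.

    Assumes the layout produced by the download step:
    <root>/<SEQUENCE>/det/det.txt and <root>/<SEQUENCE>/gt/gt.txt.
    """
    root = Path(root)
    for seq_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        det = seq_dir / "det" / "det.txt"
        gt = seq_dir / "gt" / "gt.txt"
        if det.exists() and gt.exists():
            yield seq_dir.name, det, gt
```

Calling `sequence_files("./data/mot17/val")` would then yield one entry per downloaded sequence.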
For more download options, see the download guide.
Feed the pre-computed detections into a tracker and write the results to a file for evaluation.
Pass --detections to provide input detections and --mot-output to save the tracker output in MOT format.
```bash
trackers track \
    --detections ./data/mot17/val/MOT17-02-FRCNN/det/det.txt \
    --tracker bytetrack \
    --mot-output results/MOT17-02-FRCNN.txt
```
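Under the hood, a tracker consumes these detections one frame at a time. As a rough sketch in plain Python (independent of the trackers API; `detections_by_frame` is a hypothetical helper), grouping the rows of a det.txt by frame number looks like this:

```python
from collections import defaultdict

def detections_by_frame(det_lines):
    """Group MOT-format detection rows by frame number.

    Each row is "<frame>,<id>,<bb_left>,<bb_top>,<bb_width>,<bb_height>,<conf>,...".
    Returns {frame: [(left, top, width, height, conf), ...]}.
    """
    frames = defaultdict(list)
    for line in det_lines:
        fields = line.strip().split(",")
        frame = int(fields[0])
        left, top, w, h = map(float, fields[2:6])
        conf = float(fields[6])
        frames[frame].append((left, top, w, h, conf))
    return frames
```

In det.txt the id column is -1 because identities have not been assigned yet; assigning consistent IDs across frames is exactly the tracker's job.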
Compare the tracker output against ground truth to compute standard MOT metrics.
```bash
trackers eval \
    --gt ./data/mot17/val/MOT17-02-FRCNN/gt/gt.txt \
    --tracker results/MOT17-02-FRCNN.txt \
    --metrics CLEAR HOTA Identity \
    --columns MOTA HOTA IDF1
```
Output:
```text
       MOTA     HOTA     IDF1
----------------------------------------------------
gt     30.192   35.475   38.515
```
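For intuition about the CLEAR family: MOTA combines three error counts relative to the number of ground-truth boxes, MOTA = 1 − (FN + FP + IDSW) / GT. A quick illustration (the counts below are made up for the example, not taken from the run above):

```python
def mota(fn, fp, idsw, num_gt):
    """MOTA = 1 - (FN + FP + IDSW) / GT, as a percentage.

    fn: missed ground-truth boxes, fp: false detections,
    idsw: identity switches, num_gt: total ground-truth boxes.
    """
    return 100.0 * (1.0 - (fn + fp + idsw) / num_gt)

# Hypothetical error counts, for illustration only:
print(mota(fn=400, fp=200, idsw=25, num_gt=1250))  # 50.0
```

Note that MOTA can go negative when the combined errors exceed the number of ground-truth boxes.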
Ground-truth and tracker output files use the MOT Challenge text format.
```text
<frame>,<id>,<bb_left>,<bb_top>,<bb_width>,<bb_height>,<conf>,<x>,<y>,<z>
```

Example:

```text
1,1,100,200,50,80,1,-1,-1,-1
1,2,300,150,60,90,1,-1,-1,-1
2,1,105,198,50,80,1,-1,-1,-1
```
Each line contains the frame number, object ID, bounding box (left, top, width, height), confidence score, and 3D position (set to -1 when unused).
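Parsing this format takes only a few lines of plain Python. A minimal sketch (the function name `parse_mot_line` is ours, not part of the trackers API):

```python
def parse_mot_line(line):
    """Parse one MOT Challenge text row into a dict."""
    f = line.strip().split(",")
    return {
        "frame": int(f[0]),
        "id": int(f[1]),
        "bb_left": float(f[2]),
        "bb_top": float(f[3]),
        "bb_width": float(f[4]),
        "bb_height": float(f[5]),
        "conf": float(f[6]),
        "xyz": tuple(float(v) for v in f[7:10]),  # (-1, -1, -1) when unused
    }

row = parse_mot_line("1,1,100,200,50,80,1,-1,-1,-1")
print(row["frame"], row["id"], row["bb_width"])  # 1 1 50.0
```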
Evaluate all sequences at once and get per-sequence results plus a combined aggregate.
```bash
trackers eval \
    --gt-dir ./data/mot17/val \
    --tracker-dir results \
    --metrics CLEAR HOTA Identity \
    --columns MOTA HOTA IDF1 \
    --output results.json
```
Output:
```text
Sequence          MOTA     HOTA     IDF1
----------------------------------------------------
MOT17-02-FRCNN    30.192   35.475   38.515
MOT17-04-FRCNN    48.912   55.096   61.854
MOT17-05-FRCNN    52.755   45.515   55.705
MOT17-09-FRCNN    51.441   50.108   57.038
MOT17-10-FRCNN    51.832   49.648   55.797
MOT17-11-FRCNN    55.501   49.401   55.061
MOT17-13-FRCNN    60.488   58.651   69.884
----------------------------------------------------
COMBINED          47.406   50.355   56.600
```
Use --output to save the full results to a JSON file for later analysis.
All arguments accepted by trackers eval:

| Argument | Description | Default |
|---|---|---|
| --gt | Path to a single ground-truth file in MOT format. | — |
| --tracker | Path to a single tracker predictions file in MOT format. | — |
| --gt-dir | Directory containing ground-truth files for multi-sequence evaluation. | — |
| --tracker-dir | Directory containing tracker prediction files for multi-sequence evaluation. | — |
| --seqmap | Sequence map file listing sequences to evaluate. If omitted, all sequences in the directory are evaluated. | all |
| --metrics | Metric families to compute. Options: CLEAR, HOTA, Identity. | CLEAR |
| --threshold | IoU threshold for CLEAR and Identity matching. HOTA evaluates across multiple thresholds internally. | 0.5 |
| --columns | Metric columns to display. If omitted, all columns for the selected metrics are shown. | auto |
| --output | Save results to a JSON file at the given path. | none |
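The --threshold argument sets the minimum IoU (intersection over union) for counting a predicted box as a match to a ground-truth box. A minimal sketch of IoU for MOT-style (left, top, width, height) boxes:

```python
def iou(box_a, box_b):
    """IoU of two (left, top, width, height) boxes, as used in MOT files."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extents along each axis (zero if the boxes are disjoint).
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# A prediction shifted a few pixels from its ground-truth box still
# clears the default 0.5 threshold:
print(iou((100, 200, 50, 80), (105, 198, 50, 80)) >= 0.5)  # True
```

HOTA does not use this single cutoff; it averages its association scores over a range of IoU thresholds, which is why the flag only affects CLEAR and Identity.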