Search before asking
- I have searched the RF-DETR issues and found no similar feature requests.
Description
The current MetricsTensorBoardSink only logs high‑level metrics (loss, AP, AR, EMA metrics), but does not expose per‑class metrics to TensorBoard. This makes it difficult to understand how individual classes perform during training and validation.
I would like to add support for logging:
- per‑class mAP@50
- per‑class mAP@50:95
- per‑class precision / recall
- per‑class F1
- support for both Base and EMA model results
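As a minimal sketch of what the per-class logging could look like, the helper below flattens per-class metric arrays into TensorBoard scalar tags with class-first grouping. The names `per_class_tags` and the metric keys are illustrative assumptions, not the actual RF-DETR API; each resulting tag would then be passed to `SummaryWriter.add_scalar(tag, value, step)`.

```python
# Hypothetical helper: flatten per-class detection metrics into
# TensorBoard scalar tags. Function and argument names are
# illustrative, not the actual MetricsTensorBoardSink API.
def per_class_tags(class_names, metrics, prefix="val"):
    """Map each (class, metric) pair to a TensorBoard tag.

    metrics: dict of metric name -> list of per-class values, e.g.
    {"mAP@50": [...], "mAP@50:95": [...], "precision": [...]}.
    None/NaN entries are skipped so classes without detections
    do not pollute the dashboard.
    """
    tags = {}
    for name, values in metrics.items():
        for cls, value in zip(class_names, values):
            if value is None or value != value:  # value != value filters NaN
                continue
            # Class-first grouping: all metrics for one class share a panel.
            tags[f"{prefix}/{cls}/{name}"] = float(value)
    return tags
```

With a prefix of `val`, the tag `val/person/mAP@50` groups every metric for `person` under one TensorBoard panel, which is the "class-first grouping for readability" mentioned below.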
Use case
When training a multi‑class detector, I want to understand which classes perform well and which need more data.
Without per‑class metrics in TensorBoard, it’s difficult to:
- identify underperforming classes,
- spot class imbalance issues,
- detect noisy annotations for specific classes,
- choose which classes need extra training samples,
- monitor how EMA vs Base behaves per class,
- diagnose why global mAP improves or drops.
Per‑class visual feedback is extremely helpful during dataset development and active learning workflows.
Additional
I am preparing a PR that adds this functionality with:
- minimal modifications to the existing file,
- a clean helper method for per‑class logging,
- dynamic handling of all numeric fields (e.g., if new metrics are added later),
- class‑first TensorBoard grouping for readability,
- optional behavior (runs only if test_results_json / ema_test_results_json is present).
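The "dynamic handling of all numeric fields" could be sketched as below: walk a per-class result record (as might be parsed from `test_results_json`) and keep every numeric entry, so metrics added later are logged without further code changes. The record keys shown are hypothetical examples.

```python
# Hypothetical sketch of dynamically picking up numeric fields from a
# per-class result record, so newly added metrics are logged
# automatically. Field names in the example are assumptions.
def numeric_fields(record):
    """Return only the numeric entries of a per-class result record."""
    return {
        key: float(value)
        for key, value in record.items()
        # bool is a subclass of int in Python, so exclude it explicitly
        if isinstance(value, (int, float)) and not isinstance(value, bool)
    }
```

This keeps the sink forward-compatible: a new field such as an F1 score appearing in the results JSON would be logged without touching the sink code.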
I personally want this change because the current TensorBoard sink provides no visibility into per‑class performance. Having this insight allows me to add the right training samples and improve weak classes more efficiently.
Are you willing to submit a PR?
- Yes I'd like to help by submitting a PR!