Description
Describe the documentation issue
The example for stage-specific metrics suggests using:
```python
val_metrics = [
    AUROC(fields=["pred_score", "gt_label"]),  # Image-level AUROC
    F1Score(fields=["pred_label", "gt_label"])  # Image-level F1
]
```
While this configuration works for `test_metrics`, running the example stops training with the following error:

```
ValueError: Cannot update metric of type <class 'anomalib.metrics.f1_score.F1Score'>. Passed dataclass instance does not have a value for field with name pred_label.
```
I'm not sure what the right approach is here, or whether `F1Score` makes sense in validation at all. I tried `F1Max` instead (with Dinomaly, so there are actual training epochs, logged with MLFlow), and its curve looks reasonable to me compared to the behaviour of `F1Score`:
Elsewhere in the documentation, `F1Score` is used differently. I found the following variants:
- `f1 = F1Score(fields=["pred_score", "gt_label"])` (source) (I used that in the example above, but I think the validation step does not use the correct threshold)
- `f1 = F1Score(fields=["pred_score", "gt_label"], threshold=0.5)` (source) (a fixed threshold probably doesn't make sense)
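To illustrate why a fixed threshold is problematic, here is a plain-Python sketch (independent of anomalib, with made-up scores and labels) of the difference between F1 at a fixed threshold and a max-over-thresholds F1, which is what `F1Max` computes:

```python
# Hypothetical illustration: F1 at a fixed threshold vs. the best F1 over
# all candidate thresholds. The data below is invented for the example.

def f1_at_threshold(scores, labels, threshold):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and l == 1 for p, l in zip(preds, labels))
    fp = sum(p == 1 and l == 0 for p, l in zip(preds, labels))
    fn = sum(p == 0 and l == 1 for p, l in zip(preds, labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def f1_max(scores, labels):
    # Try every observed score as a threshold and keep the best F1.
    return max(f1_at_threshold(scores, labels, t) for t in scores)

scores = [0.1, 0.2, 0.35, 0.4, 0.45]  # anomaly scores, all below 0.5
labels = [0, 0, 1, 1, 1]              # ground-truth labels

print(f1_at_threshold(scores, labels, 0.5))  # 0.0: fixed threshold misses everything
print(f1_max(scores, labels))                # 1.0: best threshold separates perfectly
```

If the model's score range shifts during training (as it typically does), a fixed threshold such as 0.5 can report F1 = 0 even for a model that separates the classes perfectly, whereas the threshold-searched value stays meaningful.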
URL of the documentation page
Suggested improvement
Use `F1Max` in the `val_metrics` instead of `F1Score`.
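Concretely, the documentation example could read as follows. This is a sketch, assuming `F1Max` accepts the same `fields` argument style as `AUROC` in the snippet above (it cannot run without anomalib installed, so treat it as a config fragment):

```python
from anomalib.metrics import AUROC, F1Max

val_metrics = [
    AUROC(fields=["pred_score", "gt_label"]),  # Image-level AUROC
    F1Max(fields=["pred_score", "gt_label"])   # threshold-independent image-level F1
]
```

Since `F1Max` works on raw scores rather than thresholded labels, it avoids both the missing-`pred_label` error and the fixed-threshold problem during validation.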
Code of Conduct
- [x] I agree to follow this project's Code of Conduct