Bug Description
During the evaluation stage of the MOT17 pedestrian tracking example, calculating the F-1 score crashes with a ZeroDivisionError whenever the sum of precision and recall is zero.
Root Cause
In examples/MOT17/multiedge_inference_bench/pedestrian_tracking/testenv/tracking/f1_score.py (line 51), the F-1 score calculation lacks a protective guard:
round(2*((precision*recall)/(precision+recall)), 4)
If a tracking model produces zero true positives for a given batch (which occurs frequently in edge-case testing, empty predictions, or fully mismatched sequences), both precision and recall evaluate to 0.0. Attempting to divide by (precision + recall) immediately results in a fatal ZeroDivisionError.
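A minimal standalone snippet reproducing the failure mode (the function name and signature here are illustrative; the real code lives in f1_score.py):

```python
def f1_score(precision: float, recall: float) -> float:
    # Mirrors the unguarded expression from line 51.
    return round(2 * ((precision * recall) / (precision + recall)), 4)

# With zero true positives, precision and recall are both 0.0:
try:
    f1_score(0.0, 0.0)
except ZeroDivisionError as exc:
    print("crashed:", exc)  # division by zero
```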
Impact
High. The evaluation stage crashes outright on degenerate or mismatched outputs, breaking the testing pipeline instead of gracefully scoring those edge cases as 0.0.
Expected Behavior
If both precision and recall are zero, the algorithm should safely return an F-1 score of 0.0 rather than causing an unhandled arithmetic exception.
Proposed Fix
Add a guard clause before the F-1 return statement:
if precision + recall == 0:
    return 0.0
return round(2*((precision*recall)/(precision+recall)), 4)
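The guarded calculation can be sketched as a standalone function (the name and signature are illustrative, not the file's actual interface):

```python
def f1_score(precision: float, recall: float) -> float:
    # Degenerate case: zero true positives make both metrics 0.0,
    # so return 0.0 instead of dividing by zero.
    if precision + recall == 0:
        return 0.0
    return round(2 * ((precision * recall) / (precision + recall)), 4)

print(f1_score(0.0, 0.0))  # 0.0 instead of ZeroDivisionError
print(f1_score(0.5, 0.5))  # 0.5
```

Returning 0.0 for the degenerate case matches the convention used by common metric libraries, which treat F-1 as 0 when no positive predictions or labels exist.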
File affected:
examples/MOT17/multiedge_inference_bench/pedestrian_tracking/testenv/tracking/f1_score.py
Reproduction Steps
- Run MOT17 evaluation with a model producing zero true positives (empty predictions or complete mismatches)
- Observe ZeroDivisionError during F-1 score calculation
- Evaluation pipeline crashes instead of continuing with a 0.0 score