
ZeroDivisionError in MOT17 f1_score.py when precision and recall equal zero #401

@ARYANPATEL-BIT

Description

Bug Description

During the evaluation stage of the MOT17 pedestrian tracking example, calculating the F-1 score crashes with a ZeroDivisionError whenever the sum of precision and recall is zero.

Root Cause

In examples/MOT17/multiedge_inference_bench/pedestrian_tracking/testenv/tracking/f1_score.py (line 51), the F-1 score calculation lacks a protective guard:

round(2*((precision*recall)/(precision+recall)), 4)

If a tracking model produces zero true positives for a given batch (which occurs frequently in edge-case testing, empty predictions, or fully mismatched sequences), both precision and recall evaluate to 0.0. Attempting to divide by (precision + recall) immediately results in a fatal ZeroDivisionError.
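For illustration, the failure can be reproduced outside the benchmark with plain Python; the variable names below are illustrative, not taken from the repository:

precision = 0.0  # zero true positives make precision 0.0
recall = 0.0     # zero true positives make recall 0.0

# Mirrors line 51 of f1_score.py; raises ZeroDivisionError because
# precision + recall == 0.
f1 = round(2*((precision*recall)/(precision+recall)), 4)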

Impact

High. The evaluation stage crashes on degenerate or mismatched outputs, breaking the testing pipeline instead of gracefully scoring those runs as 0.0.

Expected Behavior

If both precision and recall are zero, the algorithm should safely return an F-1 score of 0.0 rather than causing an unhandled arithmetic exception.

Proposed Fix

Add a guard before the F-1 return statement:

if precision + recall == 0:
    return 0.0
return round(2*((precision*recall)/(precision+recall)), 4)

File affected:
examples/MOT17/multiedge_inference_bench/pedestrian_tracking/testenv/tracking/f1_score.py
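For reference, here is a minimal sketch of the guarded computation as a helper function; the function name and signature are illustrative and may not match the actual code in f1_score.py:

def f1_score(precision: float, recall: float) -> float:
    """Return the rounded F-1 score, or 0.0 when precision + recall is 0."""
    # Guard against the degenerate case (zero true positives for a batch),
    # which would otherwise raise ZeroDivisionError.
    if precision + recall == 0:
        return 0.0
    return round(2*((precision*recall)/(precision+recall)), 4)

With the guard in place, batches with no true positives are scored as 0.0 and the evaluation pipeline continues instead of crashing.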

Reproduction Steps

  1. Run MOT17 evaluation with a model producing zero true positives (empty predictions or complete mismatches)
  2. Observe ZeroDivisionError during F-1 score calculation
  3. The evaluation pipeline crashes instead of continuing with a score of 0.0 (a regression-test sketch for the guarded behavior follows below)
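A hedged regression-test sketch for the expected behavior; it defines the guarded helper locally rather than importing the real f1_score.py, so all names here are illustrative:

def _guarded_f1(precision: float, recall: float) -> float:
    # Same guard as in the proposed fix above.
    if precision + recall == 0:
        return 0.0
    return round(2*((precision*recall)/(precision+recall)), 4)

def test_f1_is_zero_when_precision_and_recall_are_zero():
    # Degenerate batch: no true positives, so precision == recall == 0.0.
    assert _guarded_f1(0.0, 0.0) == 0.0

def test_f1_matches_harmonic_mean_otherwise():
    # Sanity check on a non-degenerate case: precision = recall = 0.5 -> F-1 = 0.5.
    assert _guarded_f1(0.5, 0.5) == 0.5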

Metadata

Labels

kind/bug: Categorizes issue or PR as related to a bug.
