
Evaluation metrics were very bad #348

Description

@dong-hub-png

Hello, and thank you for your contribution. I trained bevfusion-mit on my own dataset, and after quantization an anomaly occurred: all evaluation values dropped to 0.
```
mAP: 0.0000
mATE: 1.0448
mASE: 0.9949
mAOE: 1.0000
NDS: 0.0006
Eval time: 1.2s

Per-class results:
Object Class              AP     ATE    ASE    AOE
Car                       0.000  1.000  1.000  1.000
igv_without_box           0.000  1.000  1.000  1.000
igv_with_box              0.000  1.000  1.000  1.000
pillar                    0.000  1.436  0.979  nan
cones                     0.000  1.012  0.970  nan
pedestrian                0.000  1.000  1.000  nan
truck_head                0.000  1.000  1.000  1.000
truck_tail_without_box    0.000  1.000  1.000  1.000
truck_tail_with_halfbox   0.000  1.000  1.000  1.000
truck_tail_with_box       0.000  1.000  1.000  1.000
```

Where might this problem occur?

The test results of the model I trained (before quantization) are:

```
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 275/275, 2.8 task/s, elapsed: 99s, ETA: 0s
mAP: 0.8512
mATE: 0.1395
mASE: 0.1361
mAOE: 0.1294
NDS: 0.8564
Eval time: 2.1s

Per-class results:
Object Class              AP     ATE    ASE    AOE
Car                       0.993  0.076  0.074  0.028
igv_without_box           0.326  0.092  0.074  0.013
igv_with_box              0.978  0.143  0.070  0.045
pillar                    0.773  0.040  0.174  nan
cones                     0.640  0.283  0.352  nan
pedestrian                0.995  0.076  0.217  nan
truck_head                0.971  0.166  0.124  0.263
truck_tail_without_box    0.939  0.173  0.115  0.159
truck_tail_with_halfbox   0.920  0.186  0.075  0.124
truck_tail_with_box       0.978  0.160  0.085  0.274
```
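
One way to narrow down where the quantized path goes wrong is to run the FP32 model and the quantized model on the same sample and compare intermediate outputs layer by layer: if an early layer already produces all-zero or wildly scaled activations, its quantization parameters (or a calibration set that does not match the custom dataset) are the likely cause. The sketch below is not from this repository; `fp32_model`, `quant_model`, and `sample_input` are placeholders for however you build and feed the two models, and it assumes plain PyTorch modules whose forward hooks see tensor outputs.

```python
# Minimal diagnostic sketch (assumptions: plain PyTorch modules; `fp32_model`,
# `quant_model`, and `sample_input` are placeholders you supply).
import torch

def capture_leaf_outputs(model, sample_input):
    """Run one forward pass and record each leaf module's output tensor."""
    records, hooks = {}, []
    for name, module in model.named_modules():
        if len(list(module.children())) == 0:  # leaf modules only
            def hook(mod, inp, out, name=name):
                if isinstance(out, torch.Tensor):
                    t = out.dequantize() if out.is_quantized else out
                    records[name] = t.detach().float()
            hooks.append(module.register_forward_hook(hook))
    model.eval()
    with torch.no_grad():
        model(sample_input)
    for h in hooks:
        h.remove()
    return records

fp32_acts = capture_leaf_outputs(fp32_model, sample_input)
quant_acts = capture_leaf_outputs(quant_model, sample_input)

# Print the layers whose outputs diverge the most. An early layer whose
# quantized output is all zeros (or off by orders of magnitude) usually
# explains an mAP collapse to 0 after quantization.
for name, ref in fp32_acts.items():
    q = quant_acts.get(name)
    if q is not None and q.shape == ref.shape:
        err = (ref - q).abs().max().item()
        scale = max(ref.abs().max().item(), 1e-6)
        print(f"{name}: max abs err {err:.4g}  (rel {err / scale:.4g})")
```

If every layer matches closely, the problem is more likely in the export or post-processing path (e.g. box decoding or score thresholds) than in the quantization itself.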
