[Bug]: NaN values often yield very good interpretation metrics #575

@fhausmann

Description

Contact details

No response

What happened?

When a NaN value is passed to a metric with an interpretation, a very good (or very bad) interpretation is often reported, because the *_analysis functions do not check for NaN.

Please let me know if I should look deeper into this and provide a pull request.
A potential fix would be: https://github.com/fhausmann/pycm/tree/fix_na_interpretation

Steps to reproduce

import pycm
import numpy as np
auc_values = 0.6 
pycm.interpret.AUC_analysis(auc_values)
# 'Fair'
auc_values = np.nan
pycm.interpret.AUC_analysis(auc_values)
# 'Excellent'
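The likely cause is IEEE-754 comparison semantics: every comparison with NaN evaluates to False, so a threshold chain falls through to its final branch. A minimal sketch of the pattern (auc_analysis_sketch is illustrative and only mirrors the behavior reported above, not pycm's actual code):

```python
def auc_analysis_sketch(auc):
    # Hypothetical threshold chain in the style of pycm's *_analysis
    # helpers; every `NaN < x` comparison is False, so a NaN input
    # skips every early return and lands on the last branch.
    if auc < 0.6:
        return "Poor"
    if auc < 0.7:
        return "Fair"
    if auc < 0.8:
        return "Good"
    if auc < 0.9:
        return "Very Good"
    return "Excellent"

print(auc_analysis_sketch(0.6))           # Fair
print(auc_analysis_sketch(float("nan")))  # Excellent
```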

Expected behavior

Some metrics, such as pearson_C_analysis, return "None" in this case, which I think is the correct behavior.
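A NaN guard at the top of each *_analysis function would give all metrics this behavior. A sketch of the idea (the function name and the "None" return value are assumptions drawn from this report, not taken from the linked fix branch):

```python
import math

def auc_analysis_guarded(auc):
    # Hypothetical guard: report "None" for NaN before the threshold
    # chain runs, matching what pearson_C_analysis already does.
    if isinstance(auc, float) and math.isnan(auc):
        return "None"
    if auc < 0.6:
        return "Poor"
    if auc < 0.7:
        return "Fair"
    if auc < 0.8:
        return "Good"
    if auc < 0.9:
        return "Very Good"
    return "Excellent"

print(auc_analysis_guarded(float("nan")))  # None
print(auc_analysis_guarded(0.95))          # Excellent
```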

Actual behavior

Either the best or the worst interpretation is reported, depending on the metric.

Operating system

Linux

Python version

Python 3.12

PyCM version

PyCM 4.1

Relevant log output

No response
