Replies: 1 comment
Optimizing just one of them leads to degenerate solutions: using the definitions from here, classifying everything as positive gives a recall of 1.0 since there are no false negatives; likewise, classifying everything as negative gives a precision of 1.0 (by the usual convention) as there are no false positives.
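These degenerate cases are easy to reproduce with scikit-learn (a minimal sketch; the labels below are made up for illustration):

```python
# A classifier that predicts only the positive class makes no false
# negatives, so its recall is 1.0; one that predicts only the negative
# class makes no false positives, so its precision is 1.0 under the
# zero_division=1 convention (it never makes a positive call at all).
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 0, 0, 1, 0, 0, 1]
all_positive = [1] * len(y_true)
all_negative = [0] * len(y_true)

print(recall_score(y_true, all_positive))                      # 1.0
print(precision_score(y_true, all_negative, zero_division=1))  # 1.0
```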
Can automl optimize models for precision and recall? It doesn't seem like precision and recall are available options for metrics.
Ex:
automl_settings = {
    "time_budget": 30,
    "metric": 'recall',
    "task": 'classification',
    "log_file_name": "cell.log",
}
gives the error
'recall is not an built-in sklearn metric and nlp is not installed. Currently built-in sklearn metrics are: r2, rmse, mae, mse, accuracy, roc_auc, roc_auc_ovr, roc_auc_ovo,log_loss, mape, f1, micro_f1, macro_f1, ap. If the metric is an nlp metric, please pip install flaml[nlp] ', 'or pass a customized metric function to AutoML.fit(metric=func)'
Based on the error message, it doesn't seem like automl includes 'precision' or 'recall' as available metrics.
Based on https://learn.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml, I would think that 'precision' and 'recall' would be included.
In addition, it's interesting that metrics like f1, which are derived from precision and recall, are included as available metrics, so I wonder why precision and recall themselves wouldn't be.
I guess in practice it's favorable to optimize for both precision and recall jointly (hence metrics like f1), rather than for precision or recall alone.
Am I missing something?
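The error message itself points at a workaround: pass a customized metric function to AutoML.fit(metric=func). A minimal sketch, assuming FLAML's documented custom-metric convention (a function returning a value to minimize plus a dict of metrics to log; the exact signature may vary by FLAML version):

```python
# Sketch of a customized recall metric for FLAML's AutoML.fit(metric=func).
# FLAML minimizes the first return value, so the recall is negated; the
# second return value is a dict of metrics to record in the log.
from sklearn.metrics import recall_score

def recall_metric(X_val, y_val, estimator, labels,
                  X_train, y_train, weight_val=None, weight_train=None,
                  *args, **kwargs):
    y_pred = estimator.predict(X_val)
    recall = recall_score(y_val, y_pred)
    return -recall, {"recall": recall}
```

It would then be passed in place of a metric name, e.g. automl.fit(X_train, y_train, task='classification', metric=recall_metric).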