Revise compare_models() for Bayesian models #716

Open
@DominiqueMakowski

Description

The current output would benefit from some streamlining.

``` r
> performance::compare_performance(model, model2, model3)
# Comparison of Model Performance Indices

Name   |   Model |     ELPD | ELPD_SE | LOOIC (weights) | LOOIC_SE | WAIC (weights) |    R2 | R2 (adj.) |  RMSE | Sigma
-----------------------------------------------------------------------------------------------------------------------
model  | brmsfit | -104.387 |   9.532 |   208.8 (0.092) |   19.063 |  208.8 (<.001) | 0.927 |     0.926 | 0.475 | 0.487
model2 | brmsfit |  -89.212 |  10.543 |   178.4 (<.001) |   21.085 |  178.4 (<.001) | 0.941 |     0.940 | 0.425 | 0.467
model3 | brmsfit |  -64.580 |  11.569 |   129.2 (0.908) |   23.137 |  129.1 (>.999) | 0.958 |     0.957 | 0.353 | 0.363
```
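
(For context: if I recall its behaviour correctly, individual indices can already be selected on a per-call basis via the `metrics` argument of `compare_performance()`, so the suggestions below are really about picking better defaults. A hedged example, where the metric names are assumptions based on the column headers above:)

``` r
# Assumed usage; the exact metric names accepted by `metrics` may differ
performance::compare_performance(model, model2, model3,
                                 metrics = c("R2", "RMSE"))
```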

Suggestions:

  1. Remove RMSE and Sigma by default (they are not typically useful, nor what you would expect, when comparing models).
  2. Move R2 and R2 (adj.) to the front, right after the Model column.
  3. My current understanding is that LOO and WAIC are both methods of estimating the ELPD, which in turn can be used to compare models, in particular via the ELPD difference (see [Feature] Add report.compare.loo report#419 for the report support recently added). As such, to avoid redundant output, I would directly display ELPD_DIFF + (SE), computed via LOO by default (although we could add an option to use WAIC for faster computation), and that's it: drop LOOIC, WAIC and the raw ELPD, which are redundant indices that simply add noise. A minimal sketch of that computation follows below.
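
A minimal sketch of how that ELPD difference could be obtained with the existing loo machinery (the model names and formulas here are hypothetical placeholders, not the models from the output above):

``` r
library(brms)

# Hypothetical brmsfit objects; any set of fitted brms models works the same way
model  <- brm(mpg ~ wt,      data = mtcars, refresh = 0)
model2 <- brm(mpg ~ wt + hp, data = mtcars, refresh = 0)

# PSIS-LOO for each model (brms::loo() wraps loo::loo())
loo1 <- loo(model)
loo2 <- loo(model2)

# loo_compare() reports elpd_diff and se_diff relative to the best-fitting
# model, i.e. exactly the ELPD_DIFF + (SE) proposed above
loo_compare(loo1, loo2)
```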

What do you think?

Metadata

Labels

Consistency 🍏 🍎: Expected output across functions could be more similar
Enhancement 💥: Implemented features can be improved or revised
