
Learner fallback wishlist #803

Open
@jemus42

Description

While setting up my current benchmark, I learned a few things I wish I had read in the book beforehand:

  1. When doing nested resampling with an AutoTuner, the "inner" learner can have a fallback, which triggers if errors occur during the inner resampling loop.
    However, if errors occur during the outer resampling loop, the AutoTuner itself also needs a fallback; otherwise a single error can crash the entire tuning process (see the first sketch after this list).

  2. When constructing a GraphLearner, the fallback should be set on the *finished* GraphLearner object. If the base learner gets a fallback and is only then wrapped into a GraphLearner, the GraphLearner's $fallback will be NULL, and errors will be silently swallowed rather than showing up in the errors column of the ResampleResult (see the second sketch below).
    This is the worst kind of failure: the silent one 🙃
    To my mind this feels like a good candidate for a note box or something similar.
    Big ⚠️ and 🚨 and everything.
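
Here is a minimal sketch of the first point, assuming the mlr3tuning `auto_tuner()` helper and the field-based encapsulation API (`$encapsulate` / `$fallback`; newer mlr3 releases replace these with an `$encapsulate()` method that takes a fallback argument). The task, base learner, and tuning budget are placeholders:

```r
library(mlr3)
library(mlr3tuning)

# Inner learner: encapsulate it and give it a fallback so that errors
# during the inner resampling loop do not abort the tuning run.
learner = lrn("classif.rpart", cp = to_tune(0.001, 0.1))
learner$encapsulate = c(train = "evaluate", predict = "evaluate")
learner$fallback = lrn("classif.featureless")

at = auto_tuner(
  tuner = tnr("random_search"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),  # inner resampling
  measure = msr("classif.ce"),
  term_evals = 10
)

# The AutoTuner is itself a Learner: without its own encapsulation and
# fallback, an error in the outer loop crashes the whole process.
at$encapsulate = c(train = "evaluate", predict = "evaluate")
at$fallback = lrn("classif.featureless")

rr = resample(tsk("sonar"), at, rsmp("cv", folds = 3))  # outer resampling
```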
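
And a sketch of the second point, under the same API assumptions, contrasting the wrong and the right place to attach the fallback:

```r
library(mlr3)
library(mlr3pipelines)

base = lrn("classif.rpart")
# Wrong: setting the fallback on the base learner *before* wrapping.
# The resulting GraphLearner's $fallback stays NULL and errors are
# silently ignored:
# base$fallback = lrn("classif.featureless")

glrn = as_learner(po("scale") %>>% base)

# Right: encapsulation and fallback go on the finished GraphLearner.
glrn$encapsulate = c(train = "evaluate", predict = "evaluate")
glrn$fallback = lrn("classif.featureless")

rr = resample(tsk("sonar"), glrn, rsmp("cv", folds = 3))
rr$errors  # failures are now recorded instead of vanishing silently
```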
