Description
During my current benchmark setup, I have learned a few things I wish I had read in the book before:
- When doing nested resampling with an `AutoTuner`, the "inner" learner can have a fallback, which will trigger if there are errors during the inner resampling loop. However, if there are errors during the outer resampling loop, the `AutoTuner` itself also needs a fallback; otherwise it can crash the entire tuning process.
- When constructing a `GraphLearner`, the fallback should be added to the "finished" `GraphLearner` object. If the base learner gets a fallback and is then wrapped into a `GraphLearner`, the `GraphLearner`'s `$fallback` will be `NULL`, and errors will be silently ignored and not show up in the `error` column of the `ResampleResult`.
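For reference, a minimal sketch of the first point (nested resampling with fallbacks on both levels). The learner, task, and tuning settings are just illustrative, and the exact encapsulation API may differ between mlr3 versions:

```r
library(mlr3)
library(mlr3tuning)

# Inner learner with a fallback: catches errors during the inner resampling loop
learner = lrn("classif.rpart", cp = to_tune(0.001, 0.1))
learner$fallback = lrn("classif.featureless")
learner$encapsulate = c(train = "evaluate", predict = "evaluate")

at = AutoTuner$new(
  learner = learner,
  resampling = rsmp("cv", folds = 3),      # inner resampling
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 10),
  tuner = tnr("random_search")
)

# The AutoTuner itself also needs a fallback: catches errors
# during the outer resampling loop
at$fallback = lrn("classif.featureless")
at$encapsulate = c(train = "evaluate", predict = "evaluate")

rr = resample(tsk("sonar"), at, rsmp("cv", folds = 3))
```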
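And a sketch of the second point: set the fallback on the finished `GraphLearner`, not on the base learner before wrapping (pipeline and learner choices are illustrative):

```r
library(mlr3)
library(mlr3pipelines)

# Wrong: fallback set on the base learner is NOT propagated --
# the wrapping GraphLearner's $fallback stays NULL
base = lrn("classif.rpart")
base$fallback = lrn("classif.featureless")
glrn_wrong = as_learner(po("scale") %>>% base)
glrn_wrong$fallback  # NULL -> errors are silently swallowed

# Right: build the GraphLearner first, then attach the fallback to it
glrn = as_learner(po("scale") %>>% lrn("classif.rpart"))
glrn$fallback = lrn("classif.featureless")
glrn$encapsulate = c(train = "evaluate", predict = "evaluate")
```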
This is the worst kind of failure: the silent one 🙃
In my mind, this feels like a potential use case for a note box or something. Big ⚠️ and 🚨 and everything.