EP‑01: Pluggable Training Infrastructure for sbi
#1673
janfb
started this conversation in
Enhancement Proposals
Replies: 1 comment
Some discussion on this EP happened in the corresponding PR: #1674
Summary
This Enhancement Proposal (EP) proposes refactoring sbi’s training infrastructure to reduce duplication,
improve type safety, and enable pluggable logging and early stopping. It introduces
typed configurations, a unified training loop shared across NPE/NLE/NRE, and clean
interfaces for logging backends (TensorBoard, WandB, MLflow, stdout) and early stopping
strategies, while maintaining full backward compatibility.
Why
The current training code is largely duplicated across inference methods, which hurts type safety and makes maintenance and contributions hard.
Proposal
- Typed dataclasses (e.g. `TrainConfig`) replace loosely typed kwargs for training configuration.
- Existing `.train(...)` kwargs continue to work.
- A shared `run_training(config, model, loss_fn, loaders, …) -> (model, summary)` entry point, to avoid further bloating `NeuralInference` and to improve testability/extensibility.
API sketch
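The body of this section did not survive the page export. Below is a minimal illustrative sketch, not the proposal's actual code: only the names `TrainConfig` and `run_training` come from the proposal text; the `TrainConfig` fields, the `TrainingLogger` protocol, and the `EarlyStopping` class are assumptions chosen to show the pluggable shape described in the Summary.

```python
# Illustrative sketch only: `TrainConfig` and `run_training` are named in the
# proposal; all fields, signatures, and helper classes here are assumptions.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class TrainConfig:
    """Typed training configuration replacing loose kwargs (fields assumed)."""
    max_epochs: int = 100
    learning_rate: float = 5e-4
    batch_size: int = 200
    validation_fraction: float = 0.1


class TrainingLogger(Protocol):
    """Pluggable logging backend (TensorBoard, WandB, MLflow, stdout, ...)."""

    def log_scalar(self, name: str, value: float, step: int) -> None: ...


class StdOutLogger:
    """Simplest backend: print scalars to stdout."""

    def log_scalar(self, name: str, value: float, step: int) -> None:
        print(f"epoch={step} {name}={value:.4f}")


class EarlyStopping:
    """Stop when validation loss has not improved for `patience` epochs."""

    def __init__(self, patience: int = 20) -> None:
        self.patience = patience
        self.best = float("inf")
        self.stale_epochs = 0

    def should_stop(self, val_loss: float) -> bool:
        if val_loss < self.best:
            self.best = val_loss
            self.stale_epochs = 0
        else:
            self.stale_epochs += 1
        return self.stale_epochs >= self.patience


def run_training(config, model, loss_fn, loaders,
                 logger=None, early_stopping=None):
    """Unified loop shared across NPE/NLE/NRE (gradient step elided)."""
    train_loader, val_loader = loaders
    summary = {"val_losses": []}
    for epoch in range(config.max_epochs):
        for batch in train_loader:
            loss_fn(model, batch)  # optimizer/backprop step would go here
        val_loss = sum(loss_fn(model, b) for b in val_loader) / len(val_loader)
        summary["val_losses"].append(val_loss)
        if logger is not None:
            logger.log_scalar("val_loss", val_loss, epoch)
        if early_stopping is not None and early_stopping.should_stop(val_loss):
            break
    return model, summary
```

Because the loop depends only on the `TrainingLogger` protocol and an early-stopping object with a `should_stop` method, backends and stopping strategies can be swapped without touching `run_training` itself, which is the extensibility goal stated in the Summary.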
What we’d like feedback on
Should the pluggable interfaces (logging, early stopping) be defined as protocols or as a base class?
Links
- `TrainConfig` and `LossArgs`: Training dataclasses #1668