Repository for the paper "Robust Amortized Bayesian Inference with Self-consistency Losses on Unlabeled Data".


bayesflow-org/self-consistency-real


Robust ABI with self-consistency losses on unlabeled data

Introduction

Amortized Bayesian inference (ABI) with neural networks can solve probabilistic inverse problems orders of magnitude faster than classical methods. However, ABI is not yet sufficiently robust for widespread and safe application. When performing inference on observations outside the scope of the simulated training data, posterior approximations are likely to become highly biased, which cannot be corrected by additional simulations due to the poor pre-asymptotic behavior of current neural posterior estimators. In this paper, we propose a semi-supervised approach that enables training not only on labeled simulated data generated from the model, but also on *unlabeled* data originating from any source, including real data. To achieve this, we leverage Bayesian self-consistency properties that can be transformed into strictly proper losses that do not require knowledge of ground-truth parameters. We test our approach on several real-world case studies, including applications to high-dimensional time-series and image data. Our results show that semi-supervised learning with unlabeled data drastically improves the robustness of ABI in the out-of-simulation regime. Notably, inference remains accurate even when evaluated on observations far away from the labeled and unlabeled data seen during training.
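The self-consistency idea mentioned above can be sketched on a toy conjugate-normal model. This is an illustrative example, not code from this repository: the function names (`self_consistency_loss`, `norm_logpdf`), the model, and all numbers are assumptions made for the sketch. It uses the marginal-likelihood identity — by Bayes' rule, log p(y) = log p(θ) + log p(y|θ) − log p(θ|y) must yield the same constant for every θ — whose variance across posterior draws forms a loss that needs no ground-truth parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_logpdf(x, mu, sd):
    # Log density of N(mu, sd^2).
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2)

y = 1.3  # a single "observed" data point (unlabeled: no true theta needed)

def self_consistency_loss(post_mean, post_sd, n=1000):
    # Draw parameters from the candidate posterior q(theta | y).
    theta = rng.normal(post_mean, post_sd, size=n)
    # Bayes' rule rearranged: log p(y) = log p(theta) + log p(y|theta) - log q(theta|y).
    # If q equals the true posterior, this is the same constant for every draw.
    log_marg = (
        norm_logpdf(theta, 0.0, 1.0)               # prior: theta ~ N(0, 1)
        + norm_logpdf(y, theta, 1.0)               # likelihood: y | theta ~ N(theta, 1)
        - norm_logpdf(theta, post_mean, post_sd)   # candidate posterior q(theta | y)
    )
    # Penalize variation of the marginal-likelihood estimate across draws.
    return log_marg.var()

# Conjugate model: the exact posterior is N(y/2, sqrt(1/2)).
sc_loss_exact = self_consistency_loss(y / 2, np.sqrt(0.5))
# A miscalibrated (too wide) posterior violates the identity.
sc_loss_wrong = self_consistency_loss(y / 2, 1.0)
```

With the exact posterior the loss is numerically zero, while the miscalibrated posterior yields a clearly positive loss, so minimizing it over unlabeled observations pushes the approximation toward self-consistency.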

Citation

TBD

Authors

Aayush Mishra, Daniel Habermann, Marvin Schmitt, Stefan T. Radev, Paul-Christian Bürkner

Instructions

  1. All experiments live in the experiments directory:
  • multivariate_normal: multivariate normal model
  • air_traffic: forecasting air passenger traffic with an autoregressive model with predictors
  • hodgkin_huxley: Hodgkin-Huxley model of neuron activation
  • image_denoising: Bayesian denoising of MNIST images
  2. Each experiment directory contains a README.md with specific instructions for running the experiments and performing inference.

