Very concentrated posterior #1725
Hello, I have a moderately simple model where the NN learns the posterior almost "perfectly" when I feed it a lot of data using the NPE method. Then, I guess because the learned posterior becomes very concentrated, I get a warning about the rejection-sampling acceptance rate (which I had seen with SNPE, but it now happens with NPE too). Ironically, I don't get this message when I provide less training data to the NN (and, obviously, the validation performance is then worse). But sampling becomes too slow if I switch to MCMC. So I was wondering if you have any insights on how to prevent this. Thank you.
Hello @ali-akhavan89,

It looks like your posterior is so concentrated that the default rejection sampling struggles to find valid samples strictly within the prior bounds. Since the network learned "too well", it may be placing peak probability mass right on the edge of, or slightly outside, your prior (leakage).

You can work around this by disabling the rejection check with the reject_outside_prior argument, which was recently added to sbi (#1705) for exactly this situation:

# Disable the rejection loop to avoid the 0% acceptance warning
samples = posterior.sample((10000,), reject_outside_prior=False)

This lets the flow return samples immediately, but they might be slightly outside the prior support. If your simulator is strict (e.g., crashes on negative values), you may need to manually clamp these samples to the bounds. Hope this helps!
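As an illustration (not from the original thread), here is a minimal sketch of that clamping step. It assumes a box prior with known low/high bounds (the example values below are hypothetical) and a posterior object, already trained with NPE as in the question, whose sample() accepts the reject_outside_prior flag described above:

import torch

# Hypothetical prior bounds; replace with the bounds of your own prior.
low = torch.tensor([0.0, 0.0])
high = torch.tensor([1.0, 1.0])

# `posterior` is assumed to be the trained NPE posterior from the user's setup.
# Draw samples without the rejection loop; some may fall slightly outside the prior.
samples = posterior.sample((10000,), reject_outside_prior=False)

# Quick diagnostic: what fraction of samples actually leaked outside the prior box?
inside = ((samples >= low) & (samples <= high)).all(dim=1)
print(f"Fraction inside prior support: {inside.float().mean():.3f}")

# Clamp leaked samples back onto the bounds before passing them to a strict simulator.
clamped = torch.clamp(samples, min=low, max=high)

Clamping is a crude fix that piles leaked mass onto the boundary, so checking the leaked fraction first (as above) tells you whether it matters in practice for your model.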