Description
**Dynesty version**
2.1.4 (pip)
I compute an ndim = 100-dimensional integral as a Bayesian evidence. The integral takes as input an external parameter, b. I compute the integral on a grid of b, because the resulting logZ values form a posterior on b. I noticed that the posterior shifts when I change rstate. I understand this happens because of the finite sampling of the prior. The number of live points I use is 50 * ndim (per the instructions in RTD), and I also use bootstrap = 50. Increasing the number of live points does not make my posterior robust against the choice of rstate. How can I obtain a reliable posterior as a function of b? Running for multiple values of rstate and averaging is not an option, because the computation takes very long. I also can't shrink my sampling prior any further to speed up the computation. I am running as follows:
```python
sampler = dynesty.DynamicNestedSampler(
    logl, pt, ndim=ndim, rstate=rstate,
    pool=executor, queue_size=cpus + 1,
    logl_args=(b,),  # trailing comma: logl_args must be a tuple
    bound='multi', sample='rslice',
    bootstrap=50, slices=3 + ndim)
# the dynamic sampler takes the live-point count and stopping tolerance
# through run_nested() rather than the constructor
sampler.run_nested(nlive_init=50 * ndim, dlogz_init=0.001)
```
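For completeness, here is a stripped-down sketch of the full scan over b described above (`b_grid` is a placeholder for my actual grid, `logl`, `pt`, `executor`, `cpus`, and `rstate` are as in the snippet above, and the flat prior on b is my own assumption):

```python
import numpy as np
import dynesty

logz_grid, logzerr_grid = [], []
for b in b_grid:  # b_grid: placeholder grid over the external parameter
    sampler = dynesty.DynamicNestedSampler(
        logl, pt, ndim=ndim, rstate=rstate,
        pool=executor, queue_size=cpus + 1,
        logl_args=(b,), bound='multi', sample='rslice',
        bootstrap=50, slices=3 + ndim)
    sampler.run_nested(nlive_init=50 * ndim, dlogz_init=0.001)
    res = sampler.results
    logz_grid.append(res.logz[-1])        # final log-evidence at this b
    logzerr_grid.append(res.logzerr[-1])  # reported uncertainty at this b

# Interpret logZ(b) as an unnormalized log-posterior over b
# (assuming a flat prior on b):
logpost = np.asarray(logz_grid)
post = np.exp(logpost - logpost.max())
post /= post.sum()
```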
Below I provide logZ as a function of slices, bootstrap, and rstate for a fixed value of the external parameter b and ndim = 10. Judging from issues #289, #285, and #367, the default number of slices (3 + ndim) should suffice, at least in this low-dimensional case, but I am not sure which configuration to trust, especially for the full setup where ndim = 100 or so.
[plots: logZ vs. slices, bootstrap, and rstate at fixed b, ndim = 10]
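As a cross-check on the reported logzerr values, my understanding is that dynesty can reconstruct the scatter in logZ from a single finished run via `dynesty.utils` (`jitter_run`, `resample_run`, and `simulate_run`); a sketch, where `res` is a results object from one of the runs above and the repeat count of 100 is arbitrary:

```python
import numpy as np
from dynesty import utils as dyutils

rng = np.random.default_rng(5)

# jitter_run resimulates the unknown prior-volume compression (statistical
# uncertainty); resample_run bootstraps over the live-point "strands"
# (sampling uncertainty); simulate_run does both at once.
logz_jit = [dyutils.jitter_run(res, rstate=rng).logz[-1] for _ in range(100)]
logz_rs = [dyutils.resample_run(res, rstate=rng).logz[-1] for _ in range(100)]

print("statistical scatter in logZ:", np.std(logz_jit))
print("sampling scatter in logZ:  ", np.std(logz_rs))
```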
The dynamic nested sampler produces logZ values in the same range as the last plot across different rstates, but with slightly smaller error bars (0.13 instead of 0.20). Ultimately, I would like to reach 0.01-0.1 accuracy in logZ across different rstates. Currently, this seems to be feasible only with nlive > 400 * ndim, which yields a logzerr of about 0.1. So my question is twofold:
- How can I achieve a 0.01-0.1 error in logZ for ndim = 100 (is it possible at all?), given that I see no improvement from slices, bootstrap, or the dynamic sampler unless I massively increase nlive? Is increasing nlive the only way to achieve such tight errors on logZ? (See the scaling sketch after this list.)
- Which estimate should I trust, given that the variation with the above parameters is significant? Will it shrink if I achieve a 0.01-0.1 error in logZ?
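On the first question: if I apply the standard nested-sampling error estimate, sigma(lnZ) ≈ sqrt(H / nlive), with H the information (reported in `res.information`), then the target accuracy pins down nlive directly, which would explain why nothing short of raising nlive helps. A back-of-the-envelope sketch (the value of H here is hypothetical and should be read off a pilot run):

```python
# sigma(lnZ) ~ sqrt(H / nlive)  =>  nlive ~ H / sigma**2
H = 50.0  # hypothetical information in nats; in practice use res.information[-1]
for sigma in (0.1, 0.01):
    print(f"target sigma = {sigma}: nlive ~ {H / sigma**2:,.0f}")
# going from 0.1 to 0.01 in sigma costs a factor of 100 in nlive
```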