
Commit 5d61dbf

Merge branch 'master' into unif_parallel

2 parents 9977da3 + b19df2d

17 files changed

Lines changed: 538 additions & 281 deletions

CHANGELOG.md

Lines changed: 13 additions & 3 deletions
@@ -4,15 +4,25 @@ All notable changes to dynesty will be documented in this file.

 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

-
 [Unreleased]
 ### Added
+
 ### Changed
+
 ### Fixed
+
+[2.1.5 - 2024-12-17]
+
+### Fixed
+
+- Fix an issue with merge_runs when dynamic runs with different numbers of batches are merged (#481, reported by @rodleiva, fixed by @segasai)
+- Fix an issue with leaking file descriptors when using the dynesty pool, which can lead to a 'too many open files' problem (#379, reported by @bencebecsy, fixed by @segasai)
+
+[2.1.4 - 2024-06-26]
+
 ### Added

 ### Changed
 - Get rid of the npdim option, which at some point may have allowed the prior transformation to return a higher-dimensional vector than the inputs. Note that due to this change, restoring a checkpoint from a previous version of dynesty won't be possible (issues #456, #457) (original issue reported by @MichaelDAlbrow, fixed by @segasai)

 ### Fixed
-- Fix the way the additional arguments are treated when working with dynesty's pool. Previously those only could have been passed through dynesty.pool.Pool() constructor. Now they can still be provided directly to the sampler (not recommended) (reported by @eteq, fixed by @segasai )
+- Fix the way additional arguments are treated when working with dynesty's pool. Previously they could only be passed through the dynesty.pool.Pool() constructor. Now they can still be provided directly to the sampler (not recommended) (#464, reported by @eteq, fixed by @segasai)
+- Change the .ptp() method to the np.ptp() function, as the method is deprecated in numpy 2.0 (#478, reported and patched by @joezuntz)
+- Fix an error when run_nested() is used several times (i.e. with the maxiter option) while using blob=True (#475, reported by @carlosRmelo)

 ## [2.1.3] - 2023-10-04

 ### Added

README.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
-[![Build Status](https://github.com/joshspeagle/dynesty/workflows/Dynesty/badge.svg)](https://github.com/joshspeagle/dynesty/actions)
+[![Build Status](https://github.com/joshspeagle/dynesty/actions/workflows/test.yml/badge.svg)](https://github.com/joshspeagle/dynesty/actions/)
 [![Documentation Status](https://readthedocs.org/projects/dynesty/badge/?version=latest)](https://dynesty.readthedocs.io/en/latest/?badge=latest)
-[![Coverage Status](https://coveralls.io/repos/github/joshspeagle/dynesty/badge.svg?branch=master)](https://coveralls.io/github/joshspeagle/dynesty?branch=master)[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6609296.svg)](https://doi.org/10.5281/zenodo.3348367)
+[![Coverage Status](https://coveralls.io/repos/github/joshspeagle/dynesty/badge.svg?branch=master)](https://coveralls.io/github/joshspeagle/dynesty?branch=master)[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3348367.svg)](https://doi.org/10.5281/zenodo.3348367)


 dynesty
@@ -35,7 +35,7 @@ of the code can be found

 If you find the package useful in your research, please cite at least *both* of these references:
 * The original paper [Speagle (2020)](https://ui.adsabs.harvard.edu/abs/2020MNRAS.493.3132S/abstract)
-* The python implementation [Koposov et al. (2023)](https://doi.org/10.5281/zenodo.3348367) (the citation info is at the bottom of the page on the right)
+* The python implementation [Koposov et al. (2024)](https://doi.org/10.5281/zenodo.3348367) (the citation info is at the bottom of the page on the right)

 and ideally also papers describing the underlying methods (see the [documentation](https://dynesty.readthedocs.io/en/latest/references.html) for more details)

RELEASE.md

Lines changed: 2 additions & 1 deletion
@@ -5,7 +5,8 @@ Essential steps for the release (order is important):

 * update the changelog.md and changelog in the docs
 * change the internal version number ( py/dynesty/_version.py )
 * git tag
-* release on pypi and github
+* release on pypi and github (Note it may not be a good idea to make a release on github,
+  as I believe that creates one more zenodo record)

 Things to check before the release

demos/Demo 3 - Errors.ipynb

Lines changed: 291 additions & 108 deletions
Large diffs are not rendered by default.

docs/source/examples.rst

Lines changed: 1 addition & 1 deletion
@@ -140,7 +140,7 @@ Linear regression is ubiquitous in research. In this example we'll fit a line
 to data where the error bars have been over/underestimated by some fraction
 of the observed value :math:`f` and need to be decreased/increased.
 Note that this example is taken directly from the ``emcee`` `documentation
-<http://dan.iel.fm/emcee/current/user/line/>`_.
+<https://emcee.readthedocs.io/en/stable/>`_.

 .. image:: ../images/examples_line_001.png
    :align: center

docs/source/faq.rst

Lines changed: 15 additions & 15 deletions
@@ -41,28 +41,28 @@ efficiency of 10%, but that threshold can be adjusted using the

 **Is there an easy way to add more samples to an existing set of results?**

-Yes! There are actually a bunch of ways to do this. If you have the static
-`NestedSampler` currently initialized, just executing `run_nested()` will start
-adding samples where you left off. If you're instead interested in adding
-more samples to a previous part of the run, the best strategy is to just
-start a new independent run and then "combine" the old and new runs together
+Yes! There are actually a bunch of ways to do this.
+If you have a static `NestedSampler` run, the best strategy is to just
+start a new independent run and then combine the old and new runs together
 into a single (improved) run using the :meth:`~dynesty.utils.merge_runs`
 function.

-If you're using the `DynamicNestedSampler`, executing `run_nested` will
-automatically add more dynamically-allocated samples based on your
-target weight function as long as the stopping criteria hasn't been met.
-If you would like to add a new batch of samples manually,
-running `add_batch` will assign a new set of samples.
-You can also specifically add a new batch corresponding to a certain likelihood
-range (i.e. corresponding to where your posterior is concentrated).
-Also, if you are primarily interested in the posterior, you can use larger
-values of the n_effective parameter of `run_nested` as that will ensure your
-posterior is less noisy.
+If you're using the `DynamicNestedSampler`, you can add as many batches as
+you want by running `add_batch`. `add_batch` has an automatic mode, as well as
+a manual mode where you can add a new batch corresponding to a certain
+likelihood range (i.e. where your posterior is concentrated, for example).
+
+Alternatively, if you are primarily interested in the posterior, you can use
+larger values of the n_effective parameter of `run_nested`, as that will
+ensure your posterior is less noisy.
 Finally, :meth:`~dynesty.utils.merge_runs` also works with results generated
 from Dynamic Nested Sampling, so it is just as easy to set off a new run and
 combine it with your original result.

+**I have a very multi-modal posterior, or dynesty cannot find the mode of my posterior. What should I do?**
+
+If you have a large number of posterior modes, or a very narrow posterior mode
+occupying a tiny fraction of the prior volume, it may be difficult to find.
+Typically using more live points will help in those cases, as you have a
+higher chance of discovering the mode with one of the live points.
+Also, if you have a really large number of modes, you can use the `add_batch`
+functionality with `logl_bounds` of (-inf, inf) to essentially do repeated
+sampling of the posterior. In this case you have a good chance of discovering
+all the modes in your data.
+
 **There are inf values in my lower/upper log-likelihood bounds!
 Should I be concerned?**

docs/source/index.rst

Lines changed: 22 additions & 7 deletions
@@ -98,6 +98,21 @@ Changelog
 .. image:: ../images/logo.gif
    :align: center

+2.1.5 (2024-12-17)
+------------------
+Bug fix release
+
+- Fix an issue with merge_runs when dynamic runs with different numbers of batches are merged (#481, reported by @rodleiva, fixed by @segasai)
+- Fix an issue with leaking file descriptors when using the dynesty pool, which can lead to a 'too many open files' problem (#379, reported by @bencebecsy, fixed by @segasai)
+
+2.1.4 (2024-06-26)
+------------------
+- Get rid of the npdim option, which at some point may have allowed the prior transformation to return a higher-dimensional vector than the inputs. Note that due to this change, restoring a checkpoint from a previous version of dynesty won't be possible (issues #456, #457) (original issue reported by @MichaelDAlbrow, fixed by @segasai)
+- Fix the way additional arguments are treated when working with dynesty's pool. Previously they could only be passed through the dynesty.pool.Pool() constructor. Now they can still be provided directly to the sampler (not recommended) (reported by @eteq, fixed by @segasai)
+- Change the .ptp() method to the np.ptp() function, as the method is deprecated in numpy 2.0 (reported and patched by @joezuntz)
+- Fix an error when run_nested() is used several times (i.e. with the maxiter option) while using blob=True (#475, reported by @carlosRmelo)
+
 2.1.3 (2023-10-04)
 ------------------
 Bug fix release
@@ -106,7 +121,7 @@ Bug fix release
 - Warning is emitted if maxcall/maxiter stop the iterations too early ( @segasai )
 - The clustering/K-means for ellipsoid decomposition is now done in scaled space of points divided by stddev along each dimension ( @segasai)
 - Update the initialisation of points in the case where some fraction of prior volume has log(L)=-inf; this should increase the accuracy of log(Z) estimates in some cases
-- Fix a rare possible bug where due to numerical instability in uniform sampling a q==0 error could have occured ( @segasai)
+- Fix a rare possible bug where due to numerical instability in uniform sampling a q==0 error could have occurred ( @segasai)
 - Fix a FileExistsError when using check-points on Windows ( #450, reported by @rodleiva, fixed by @segasai )

@@ -116,7 +131,7 @@ Bug fix release

 - Fix the restoration of the dynamic sampler from the checkpoint with the pool. Previously after restoring the sampler, the pool was not used. (#438 ; by @segasai)
 - Fix the issue with checkpointing from the dynamic sampler. Previously if one batch took shorter than the checkpoint_every seconds then the checkpoints were not created (by @segasai)
-- Fix the restoration from checkpoint that could have occassiounaly lead to one skipped point from nested run (by @segasai)
+- Fix the restoration from a checkpoint that could have occasionally led to one skipped point from the nested run (by @segasai)

 2.1.1 (2023-04-16)
@@ -127,7 +142,7 @@ Mostly bug fix release
 - Refactor the bound update code which will lead to more consistent boundary updates (#428 by @segasai )
 - Fix some pathological cases when a uniform distribution is sampled with very low log-likelihood values
 - Fix a problem when a very small nlive is used, leading to an error (#424 , reported by @frgsimpson)
-- Fix the incorrect update_interval calculation leading to too unfrequent updates of bounds when using dynamic sampler (report by @ajw278, analysis and fix by @segasai)
+- Fix the incorrect update_interval calculation leading to too infrequent updates of bounds when using the dynamic sampler (report by @ajw278, analysis and fix by @segasai)
 - If you try to resume a previously finished dynamic run, a warning will be raised and the sampler will exit (previously an error could have occurred in this case)

@@ -185,8 +200,8 @@ All the individual changes are listed below:
 - The Monte-Carlo volume calculations by RadFriends/SupFriends/MultiEllipsoid were inaccurate (fix #398; #399 ; by @segasai )
 - Setting n_effective for Sampler.run_nested() and DynamicSampler.sample_initial(), and n_effective_init for DynamicSampler.run_nested(), are deprecated ( #379 ; by @edbennett )
 - The slice sampling can now switch to the doubling interval expansion algorithm from Neal (2003) if at any point of the sampling the interval was expanded more than 1000 times. It should help slice/rslice sampling of difficult posteriors ( #382 ; by @segasai )
-- The accumulation of statistics using to tune proposal distribution is now more robust when multi-threading/pool is used. Previously statistics from every queue_size call were used and all other were discarded. Now the statistics are accumulated from all the parallel sampling calls. That should help sampling of complex distributions. ( #385 ; by @segasai )
-- The .update_proposal() function that updates the states of samplers now has an additional keyword which allows to either just accumulate the statistics from repeated function calls or actual update of the proposal. This was needed to not loose information when queue_size>1 ( #385 ; by @segasai )
+- The accumulation of statistics used to tune the proposal distribution is now more robust when multi-threading/a pool is used. Previously statistics from every queue_size call were used and all others were discarded. Now the statistics are accumulated from all the parallel sampling calls. That should help sampling of complex distributions. ( #385 ; by @segasai )
+- The .update_proposal() function that updates the states of samplers now has an additional keyword which allows either just accumulating the statistics from repeated function calls or actually updating the proposal. This was needed to not lose information when queue_size>1 ( #385 ; by @segasai )
 - The ellipsoid bounding has been sped up by not using the Cholesky transform; also a check was added and a test expanded for possible numerical issues when sampling from multiple ellipsoids potentially causing assert q>0 ( #397 ; by @segasai )
 - The individual samplers now take as input a special NamedTuple, SamplerArgument, rather than just a tuple ( #400 ; by @segasai ).

@@ -221,11 +236,11 @@ Small bug fix release
 1.2.0 (2022-03-31)
 ------------------

-This version has multiple changes that should improve stability and speed. The default dynamic sampling behaviour has been changed to focus on the effective number of posterior samples as opposed to KL divergence. The rstagger sampler has been removed and the default choice of the sampler may be different compared to previous releases depending on the dimensionality of the problem. dynesty should now provide 100% reproduceable results if the rstate object is provided. It needs to be a new generation Random Generator (as opposed to numpy.RandomState)
+This version has multiple changes that should improve stability and speed. The default dynamic sampling behaviour has been changed to focus on the effective number of posterior samples as opposed to KL divergence. The rstagger sampler has been removed and the default choice of sampler may differ from previous releases depending on the dimensionality of the problem. dynesty should now provide 100% reproducible results if the rstate object is provided. It needs to be a new-generation random Generator (as opposed to numpy.RandomState)

 Most of the changes in the release have been contributed by [Sergey Koposov](https://github.com/segasai) who has joined the dynesty project.

-- Saving likelihood. It is now possible to save likelihood calls history during sampling into HDF5 file (this is not compatible with parallel sampling yet). The relevant options are save_history=False, history_filename=None (#235)
+- Saving likelihood. It is now possible to save the likelihood call history during sampling into an HDF5 file (this is not compatible with parallel sampling yet). The relevant options are save_history=False, history_filename=None (#235)
 - The add_batch() function now has a mode parameter that allows you to manually choose the logl range for the batch (#328)
 - More testing with code coverage of >90%, plus validation on test problems
 - Internal refactoring reducing code duplication (saved_run, integral calculations, different samplers, etc.)

docs/source/quickstart.rst

Lines changed: 4 additions & 4 deletions
@@ -176,7 +176,7 @@ Live Points
 -----------

 Similar to ensemble sampling methods such as
-`emcee <http://dan.iel.fm/emcee/current/>`_, the behavior of Nested Sampling
+`emcee <https://emcee.readthedocs.io/en/stable/>`_, the behavior of Nested Sampling
 can also be sensitive to the number of live points used. Increasing the number
 of live points leads to smaller changes in the prior volume :math:`\ln X` over
 time. This improves the effective resolution while simultaneously increasing
@@ -295,11 +295,11 @@ the provided bounds which can be passed via the `sample` argument:
 In addition, `dynesty` also supports passing **custom callable functions**
 to the `sample` argument, provided they follow the same format as the
 default sampling functions defined `here
-<https://github.com/joshspeagle/dynesty/blob/master/dynesty/sampling.py>`__.
+<https://github.com/joshspeagle/dynesty/blob/master/py/dynesty/sampling.py>`__.
 These can also be accompanied by custom "update functions" that try to
 adaptively scale proposals to ensure better overall sampling efficiency.
 See `here
-<https://github.com/joshspeagle/dynesty/blob/master/dynesty/nestedsamplers.py>`__
+<https://github.com/joshspeagle/dynesty/blob/master/py/dynesty/nestedsamplers.py>`__
 for examples of some of the functions that are associated with the default
 sampling methods described above.

@@ -613,7 +613,7 @@ The numpy blob can return arbitrary 1D numpy arrays. They can be record arrays as well.
 Running Externally
 ------------------

-Similar to `emcee <http://dan.iel.fm/emcee/current/>`_, `sampler` objects in
+Similar to `emcee <https://emcee.readthedocs.io/en/stable/>`_, `sampler` objects in
 ``dynesty`` can also be run externally as a **generator** via the
 :meth:`~dynesty.sampler.Sampler.sample` function. This might look something
like::

docs/source/references.rst

Lines changed: 5 additions & 4 deletions
@@ -4,8 +4,9 @@ References and Acknowledgements

 **The release paper describing the code corresponding to dynesty 1.0 can be found**
 `here <https://github.com/joshspeagle/dynesty/tree/master/paper/dynesty.pdf>`_.
-We remark that more recent dynesty versions have multiple changes with respect to the paper. Therefore
-please ensure that you cite the paper and the specific version of dynesty you used through `zenodo <https://doi.org/10.5281/zenodo.3348367>`_
+Since recent dynesty versions have significant changes with respect to the paper, unless
+you are using the 1.0 version, you must *also* cite the dynesty code you used through `zenodo <https://doi.org/10.5281/zenodo.3348367>`_

 A list of papers that you should cite can always be generated directly
 from the `sampler` object by calling::
@@ -85,8 +86,8 @@ put in by `Kyle Barbary <http://kylebarbary.com/>`_ and `other contributors
 <https://github.com/joshspeagle/dynesty/blob/master/AUTHORS.md>`_.

 Much of the API is inspired by the ensemble MCMC package
-`emcee <http://dan.iel.fm/emcee/current/>`_ as well as other work by
-`Daniel Foreman-Mackey <http://dan.iel.fm/>`_.
+`emcee <https://emcee.readthedocs.io/en/stable/>`_ as well as other work by
+`Daniel Foreman-Mackey <http://dfm.io/>`_.

 Many of the plotting utilities draw heavily upon Daniel Foreman-Mackey's
 wonderful `corner <http://corner.readthedocs.io>`_ package.

py/dynesty/_version.py

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-__version__ = "2.1.3"
+__version__ = "2.1.5"
