CHANGELOG.md

All notable changes to dynesty will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]
### Added

### Changed

### Fixed

## [2.1.5] - 2024-12-17

### Fixed

- Fix an issue with merge_runs() when dynamic runs with different numbers of batches are merged (#481, reported by @rodleiva, fixed by @segasai)
- Fix an issue with leaking file descriptors when using the dynesty pool, which could lead to a 'too many open files' error (#379, reported by @bencebecsy, fixed by @segasai)

## [2.1.4] - 2024-06-26

### Added

### Changed

- Get rid of the npdim option, which at some point may have allowed the prior transformation to return a higher-dimensional vector than its input. Note that due to this change, restoring a checkpoint from a previous version of dynesty won't be possible (issues #456, #457; original issue reported by @MichaelDAlbrow, fixed by @segasai)

### Fixed

- Fix the way additional arguments are treated when working with dynesty's pool. Previously they could only be passed through the dynesty.pool.Pool() constructor. Now they can still be provided directly to the sampler (not recommended) (#464, reported by @eteq, fixed by @segasai)
- Change the .ptp() method to the np.ptp() function, as the method is deprecated in numpy 2.0 (#478, reported and patched by @joezuntz)
- Fix an error when run_nested() is used several times (i.e. with the maxiter option) while using blob=True (#475, reported by @carlosRmelo)
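As context for the `.ptp()` entry above, here is a minimal sketch (pure NumPy, not dynesty code, with illustrative values): NumPy 2.0 drops the `ndarray.ptp()` method in favour of the `np.ptp()` function, which computes the same peak-to-peak (max minus min) range:

```python
import numpy as np

arr = np.array([3.0, -1.0, 7.0])

# Before NumPy 2.0 one could write arr.ptp(); the method form is gone
# in NumPy 2.0, so code now calls the function form instead:
span = np.ptp(arr)  # peak-to-peak range: max(arr) - min(arr)
print(span)  # 8.0
```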
If you find the package useful in your research, please cite at least *both* of these references:

* The original paper [Speagle (2020)](https://ui.adsabs.harvard.edu/abs/2020MNRAS.493.3132S/abstract)
* The python implementation [Koposov et al. (2024)](https://doi.org/10.5281/zenodo.3348367) (the citation info is at the bottom of the page on the right)

and ideally also papers describing the underlying methods (see the [documentation](https://dynesty.readthedocs.io/en/latest/references.html) for more details)
docs/source/faq.rst

**Is there an easy way to add more samples to an existing set of results?**

Yes! There are actually a bunch of ways to do this.
If you have a static `NestedSampler` run, the best strategy is to just
start a new independent run and then combine the old and new runs together
into a single (improved) run using the :meth:`~dynesty.utils.merge_runs`
function.

If you're using the `DynamicNestedSampler`, you can add as many batches as
you want by running `add_batch`. `add_batch` has an automatic mode, as well
as a manual mode where you can add a new batch covering a certain likelihood
range (i.e. where your posterior is concentrated, for example).

Alternatively, if you are primarily interested in the posterior, you can use
larger values of the n_effective parameter of `run_nested`, as that will
ensure your posterior is less noisy.

Finally, :meth:`~dynesty.utils.merge_runs` also works with results generated
from Dynamic Nested Sampling, so it is just as easy to set off a new run and
combine it with your original result.

**I have a very multi-modal posterior, or dynesty cannot find the mode of my posterior. What should I do?**

If you have a large number of posterior modes, or a very narrow posterior mode occupying a tiny fraction of the prior volume, it may be difficult to find.
Typically, using more live-points will help in those cases, as you have a higher chance of discovering the mode with one of the live-points.
Also, if you have a really large number of modes, you can use the `add_batch` functionality with `logl_bounds` of (-inf, inf) to do repeated sampling of the posterior. In this case you have a good chance of discovering all the modes in your data.

**There are inf values in my lower/upper log-likelihood bounds!
docs/source/index.rst

Changelog

.. image:: ../images/logo.gif
   :align: center

2.1.5 (2024-12-17)
------------------
Bug fix release

- Fix an issue with merge_runs() when dynamic runs with different numbers of batches are merged (#481, reported by @rodleiva, fixed by @segasai)
- Fix an issue with leaking file descriptors when using the dynesty pool, which could lead to a 'too many open files' error (#379, reported by @bencebecsy, fixed by @segasai)


2.1.4 (2024-06-26)
------------------
- Get rid of the npdim option, which at some point may have allowed the prior transformation to return a higher-dimensional vector than its input. Note that due to this change, restoring a checkpoint from a previous version of dynesty won't be possible (issues #456, #457; original issue reported by @MichaelDAlbrow, fixed by @segasai)
- Fix the way additional arguments are treated when working with dynesty's pool. Previously they could only be passed through the dynesty.pool.Pool() constructor. Now they can still be provided directly to the sampler (not recommended) (reported by @eteq, fixed by @segasai)
- Change the .ptp() method to the np.ptp() function, as the method is deprecated in numpy 2.0 (reported and patched by @joezuntz)
- Fix an error when run_nested() is used several times (i.e. with the maxiter option) while using blob=True (#475, reported by @carlosRmelo)

116
2.1.3 (2023-10-04)
102
117
------------------
103
118
Bug fix release
@@ -106,7 +121,7 @@ Bug fix release
106
121
- Warning is emitted if maxcall/maxiter stop the iterations too early ( @segasai )
107
122
- The clustering/K-means for ellipsoid decomposition is now done in scaled space of points divided by stddev along each dimension ( @segasai)
108
123
- Update the initialisation of points in the case where some fraction of prior volume has log(L)=-inf this should increase the accuracy of log(Z) estimates in some cases
109
-
- Fix a rare possible bug where due to numerical instability in uniform sampling a q==0 error could have occured ( @segasai)
124
+
- Fix a rare possible bug where due to numerical instability in uniform sampling a q==0 error could have occurred ( @segasai)
110
125
- Fix a FileExistsError when using check-points on Windows ( #450, reported by @rodleiva, fixed by @segasai )
111
126
112
127

Bug fix release

- Fix the restoration of the dynamic sampler from the checkpoint with the pool. Previously, after restoring the sampler, the pool was not used (#438; by @segasai)
- Fix an issue with checkpointing from the dynamic sampler. Previously, if one batch took less than checkpoint_every seconds, the checkpoints were not created (by @segasai)
- Fix the restoration from checkpoint that could have occasionally led to one skipped point from the nested run (by @segasai)

2.1.1 (2023-04-16)
------------------
Mostly bug fix release

- Refactor the bound update code, which will lead to more consistent boundary updates (#428, by @segasai)
- Fix some pathological cases when the uniform distribution is sampled with very low log-likelihood values
- Fix a problem when a very small nlive is used, leading to an error (#424, reported by @frgsimpson)
- Fix the incorrect update_interval calculation, which led to too-infrequent updates of bounds when using the dynamic sampler (report by @ajw278, analysis and fix by @segasai)
- If you try to resume a previously finished dynamic run, a warning will be raised and the sampler will exit (previously an error could have occurred in this case)

All the individual changes are listed below:

- The Monte-Carlo volume calculations by RadFriends/SupFriends/MultiEllipsoid were inaccurate (fix #398; #399; by @segasai)
- Setting n_effective for Sampler.run_nested() and DynamicSampler.sample_initial(), and n_effective_init for DynamicSampler.run_nested(), is deprecated (#379; by @edbennett)
- The slice sampling can now switch to the doubling interval expansion algorithm from Neal (2003) if at any point of the sampling the interval was expanded more than 1000 times. It should help slice/rslice sampling of difficult posteriors (#382; by @segasai)
- The accumulation of statistics used to tune the proposal distribution is now more robust when multi-threading/a pool is used. Previously, statistics from every queue_size calls were used and all others were discarded. Now the statistics are accumulated from all the parallel sampling calls. That should help sampling of complex distributions (#385; by @segasai)
- The .update_proposal() function that updates the states of samplers now has an additional keyword which allows one to either just accumulate the statistics from repeated function calls or actually update the proposal. This was needed to not lose information when queue_size>1 (#385; by @segasai)
- The ellipsoid bounding has been sped up by not using the Cholesky transform; also a check was added and a test expanded for possible numerical issues when sampling from multiple ellipsoids potentially causing an assert q>0 failure (#397; by @segasai)
- The individual samplers now take as input a special NamedTuple, SamplerArgument, rather than just a tuple (#400; by @segasai)

Small bug fix release

1.2.0 (2022-03-31)
------------------
This version has multiple changes that should improve stability and speed. The default dynamic sampling behaviour has been changed to focus on the effective number of posterior samples as opposed to KL divergence. The rstagger sampler has been removed, and the default choice of sampler may be different compared to previous releases depending on the dimensionality of the problem. dynesty should now provide 100% reproducible results if the rstate object is provided. It needs to be a new-generation random Generator (as opposed to numpy.RandomState).

Most of the changes in the release have been contributed by [Sergey Koposov](https://github.com/segasai), who has joined the dynesty project.

- Saving likelihood. It is now possible to save the likelihood call history during sampling into an HDF5 file (this is not compatible with parallel sampling yet). The relevant options are save_history=False, history_filename=None (#235)
- The add_batch() function now has a mode parameter that allows you to manually choose the logl range for the batch (#328)
- More testing with code coverage of >90%, plus validation on test problems
- Internal refactoring reducing code duplication (saved_run, integral calculations, different samplers, etc.)