Commit 5b9d5fe

committed: doc fixes
1 parent c236ea5 commit 5b9d5fe

3 files changed: 4 additions & 38 deletions

docs/source/dynamic.rst

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ We can illustrate this directly using the same example from
     # Define our 3-D correlated multivariate normal log-likelihood.
     C = np.identity(ndim)
     C[C==0] = 0.95
-    Cinv = linalg.inv(C)
+    Cinv = np.linalg.inv(C)
     lnorm = -0.5 * (np.log(2 * np.pi) * ndim +
                     np.log(np.linalg.det(C)))
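The one-line change above exists because the docs' example imports NumPy but not `scipy.linalg`, so a bare `linalg.inv` raises a `NameError`. A minimal runnable sketch of the corrected snippet (`ndim = 3` is assumed here, matching the docs' 3-D example; the `loglike` function is an illustrative completion of the surrounding example, not quoted from the diff):

```python
import numpy as np

ndim = 3  # 3-D example, as in the docs

# Define our 3-D correlated multivariate normal log-likelihood.
C = np.identity(ndim)
C[C == 0] = 0.95
Cinv = np.linalg.inv(C)  # fixed: np.linalg, not a bare `linalg`
lnorm = -0.5 * (np.log(2 * np.pi) * ndim +
                np.log(np.linalg.det(C)))

def loglike(x):
    """Log-likelihood of a zero-mean correlated multivariate normal."""
    return -0.5 * np.dot(x, np.dot(Cinv, x)) + lnorm
```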

docs/source/faq.rst

Lines changed: 0 additions & 34 deletions
@@ -261,40 +261,6 @@ with (3) a large number of varying live points can make the stopping criteria
 difficult to evaluate quickly. See
 :ref:`Nested Sampling Errors` for additional details.
 
-**I'm trying to sample using gradients but getting extremely poor performance.
-I thought gradients were supposed to make sampling more efficient!
-What gives?**
-
-While gradients are extremely useful in terms of substantially improving
-the scaling of most sampling methods with dimensionality (gradient-based
-methods have better polynomial scaling than non-gradient slice sampling, both
-of which are *substantially* better over the runaway exponential scaling
-of random walks), it can take a while for these benefits to really kick in.
-These scaling arguments generally ignore the constant prefactor, which
-can be quite large for many gradient-based approaches that require
-integrating along some trajectory, often resulting in (at least) dozens of
-function calls per sample. This often makes it more efficient to run simpler
-sampling techniques on lower-dimensional problems. In general, Nested Sampling
-methods are also unable to exploit gradient-based information to the same
-degree as Hamiltonian Monte Carlo approaches, which further degrades
-performance and scaling relative to what you might naively expect.
-
-If you feel like your performance is poorer than expected even given these
-caveats, or if you notice other results that make you highly suspicious of the
-resulting samples, please double-check the :ref:`Sampling with Gradients`
-page to make sure you've passed in the correct log-likelihood gradient and are
-dealing with the unit cube Jacobian properly. Failing
-to apply this (or applying it twice) violates conservation of energy and
-momentum and leads to the integration timesteps along the trajectories
-changing in undesirable ways.
-It's also possible the numerical errors in the Jacobian (if you've set
-`compute_jac=True`) might be propagating through to the computed trajectories.
-If so, consider trying to compute the analytic Jacobian by hand to reduce
-the impact of numerical errors.
-
-If you still find subpar performance, please feel free to
-`open an issue <https://github.com/joshspeagle/dynesty/issues>`_.
 
 Live Point Questions
 --------------------
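The deleted FAQ entry's warning about the unit-cube Jacobian is easy to trip over in practice: gradients must be expressed with respect to the unit-cube coordinates, and for an axis-aligned uniform prior the Jacobian dx/du is simply the prior width per dimension, applied exactly once via the chain rule. A hedged sketch of that bookkeeping (the [-10, 10] prior bounds and the function names here are illustrative assumptions, not taken from the docs):

```python
import numpy as np

# Hypothetical 3-D setup mirroring the docs' correlated-normal example.
ndim = 3
C = np.identity(ndim)
C[C == 0] = 0.95
Cinv = np.linalg.inv(C)

# Uniform prior on [-10, 10] in each dimension, mapped from the unit cube.
lo, hi = -10.0, 10.0

def prior_transform(u):
    """Map unit-cube coordinates u in [0, 1]^ndim to parameters x."""
    return lo + (hi - lo) * u

def grad_x(x):
    """Gradient of the correlated-normal log-likelihood w.r.t. x."""
    return -np.dot(Cinv, x)

def grad_u(u):
    """Gradient w.r.t. the unit-cube coordinates u.

    The chain rule multiplies by the Jacobian dx/du, which for this
    uniform prior is the constant width (hi - lo) on each axis.
    Applying it zero or two times is exactly the bug the FAQ warned about.
    """
    x = prior_transform(u)
    return grad_x(x) * (hi - lo)
```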

docs/source/quickstart.rst

Lines changed: 3 additions & 3 deletions
@@ -140,7 +140,7 @@ define the :meth:`loglikelihood` and :meth:`prior_transform` functions::
     # Define our 3-D correlated multivariate normal log-likelihood.
     C = np.identity(ndim)
     C[C==0] = 0.95
-    Cinv = linalg.inv(C)
+    Cinv = np.linalg.inv(C)
     lnorm = -0.5 * (np.log(2 * np.pi) * ndim +
                     np.log(np.linalg.det(C)))

@@ -452,7 +452,7 @@ of parallelizing is using a dynesty provided pool (which is a thin wrapper around
 python's multiprocessing pool)::
 
     with Pool(10, loglike, ptform) as pool:
-        sampler = NestedSampler(pool.loglikehood, pool.prior_transform,
+        sampler = NestedSampler(pool.loglikelihood, pool.prior_transform,
                                 ndim, pool = pool)
         sampler.run_nested()
@@ -470,7 +470,7 @@ pickling of those arguments::
               logl_kwargs=logl_kwargs,
               ptform_args=ptform_args,
               ptform_kwargs=ptform_kwargs) as pool:
-        sampler = NestedSampler(pool.loglikehood, pool.prior_transform,
+        sampler = NestedSampler(pool.loglikelihood, pool.prior_transform,
                                 ndim, pool = pool)
         sampler.run_nested()
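The `logl_args`/`logl_kwargs` arguments in the hunk above are handed to the Pool rather than the sampler so they are shipped to each worker once instead of being re-pickled with every call; the alternative of baking arguments into a wrapper function does not survive pickling at all. A small stdlib-only sketch of that distinction (the toy `loglike` and `data` here are hypothetical, not dynesty API):

```python
import pickle

def loglike(x, data):
    """Toy log-likelihood taking a bulky extra argument `data`."""
    return -0.5 * sum((xi - d) ** 2 for xi, d in zip(x, data))

data = [0.0, 1.0, 2.0]

# Plain-data arguments pickle fine, so passing them to the pool once
# (e.g. via logl_args=...) serializes them a single time per worker.
args_payload = pickle.dumps((data,))

# Baking the arguments into a lambda instead makes the callable
# unpicklable with the stdlib pickler, which is exactly what the
# pool-side argument passing avoids.
wrapper = lambda x: loglike(x, data)
try:
    pickle.dumps(wrapper)
    wrapper_picklable = True
except Exception:
    wrapper_picklable = False
```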
