
Commit 28f32a2

Documenter.jl committed: build based on 52d9692
1 parent 61bff7d · commit 28f32a2

File tree

13 files changed (+68 / -68 lines)


dev/.documenter-siteinfo.json

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.11.5","generation_timestamp":"2025-06-17T06:32:27","documenter_version":"1.12.0"}}
+{"documenter":{"julia_version":"1.11.5","generation_timestamp":"2025-06-22T05:58:43","documenter_version":"1.13.0"}}

dev/API/approximatedistributions/index.html

Lines changed: 5 additions & 5 deletions
Large diffs are not rendered by default.

dev/API/architectures/index.html

Lines changed: 13 additions & 13 deletions
Large diffs are not rendered by default.

dev/API/core/index.html

Lines changed: 20 additions & 20 deletions
Large diffs are not rendered by default.

dev/API/index.html

Lines changed: 1 addition & 1 deletion
Large diffs are not rendered by default.

dev/API/loss/index.html

Lines changed: 3 additions & 3 deletions
Large diffs are not rendered by default.

dev/API/simulation/index.html

Lines changed: 6 additions & 6 deletions
Large diffs are not rendered by default.

dev/API/utility/index.html

Lines changed: 14 additions & 14 deletions
Large diffs are not rendered by default.

dev/index.html

Lines changed: 1 addition & 1 deletion
@@ -49,4 +49,4 @@
 volume = {78},
 pages = {1--14},
 doi = {10.1080/00031305.2023.2249522},
-}</code></pre><h2 id="Contributing"><a class="docs-heading-anchor" href="#Contributing">Contributing</a><a id="Contributing-1"></a><a class="docs-heading-anchor-permalink" href="#Contributing" title="Permalink"></a></h2><p>If you encounter a bug or have a suggestion, please consider <a href="https://github.com/msainsburydale/NeuralEstimators.jl/issues">opening an issue</a> or submitting a pull request. Instructions for contributing to the documentation can be found in <a href="https://github.com/msainsburydale/NeuralEstimators.jl/tree/main/docs/README.md">docs/README.md</a>. When adding functionality to the package, you may wish to add unit tests to the file <a href="https://github.com/msainsburydale/NeuralEstimators.jl/tree/main/test/runtests.jl">test/runtests.jl</a>. You can then run these tests locally by executing the following command from the root folder:</p><pre><code class="language-bash hljs">julia --project=. -e &quot;using Pkg; Pkg.test()&quot;</code></pre><h3 id="Papers-using-NeuralEstimators"><a class="docs-heading-anchor" href="#Papers-using-NeuralEstimators">Papers using NeuralEstimators</a><a id="Papers-using-NeuralEstimators-1"></a><a class="docs-heading-anchor-permalink" href="#Papers-using-NeuralEstimators" title="Permalink"></a></h3><ul><li><p><strong>Likelihood-free parameter estimation with neural Bayes estimators</strong> <a href="https://doi.org/10.1080/00031305.2023.2249522">[paper]</a> <a href="https://github.com/msainsburydale/NeuralBayesEstimators">[code]</a></p></li><li><p><strong>Neural methods for amortized inference</strong> <a href="https://doi.org/10.1146/annurev-statistics-112723-034123">[paper]</a><a href="https://github.com/andrewzm/Amortised_Neural_Inference_Review">[code]</a></p></li><li><p><strong>Neural Bayes estimators for irregular spatial data using graph neural networks</strong> <a href="https://doi.org/10.1080/10618600.2024.2433671">[paper]</a><a href="https://github.com/msainsburydale/NeuralEstimatorsGNN">[code]</a></p></li><li><p><strong>Neural Bayes estimators for censored inference with peaks-over-threshold models</strong> <a href="https://jmlr.org/papers/v25/23-1134.html">[paper]</a> <a href="https://github.com/Jbrich95/CensoredNeuralEstimators">[code]</a></p></li><li><p><strong>Neural parameter estimation with incomplete data</strong> <a href="https://arxiv.org/abs/2501.04330">[paper]</a><a href="https://github.com/msainsburydale/NeuralIncompleteData">[code]</a></p></li></ul></article><nav class="docs-footer"><a class="docs-footer-nextpage" href="methodology/">Methodology »</a><div class="flexbox-break"></div><p class="footer-message">Powered by <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> and the <a href="https://julialang.org/">Julia Programming Language</a>.</p></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="auto">Automatic (OS)</option><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option><option value="catppuccin-latte">catppuccin-latte</option><option value="catppuccin-frappe">catppuccin-frappe</option><option value="catppuccin-macchiato">catppuccin-macchiato</option><option value="catppuccin-mocha">catppuccin-mocha</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> version 1.12.0 on <span class="colophon-date" title="Tuesday 17 June 2025 06:32">Tuesday 17 June 2025</span>. Using Julia version 1.11.5.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
+}</code></pre><h2 id="Contributing"><a class="docs-heading-anchor" href="#Contributing">Contributing</a><a id="Contributing-1"></a><a class="docs-heading-anchor-permalink" href="#Contributing" title="Permalink"></a></h2><p>If you encounter a bug or have a suggestion, please consider <a href="https://github.com/msainsburydale/NeuralEstimators.jl/issues">opening an issue</a> or submitting a pull request. Instructions for contributing to the documentation can be found in <a href="https://github.com/msainsburydale/NeuralEstimators.jl/tree/main/docs/README.md">docs/README.md</a>. When adding functionality to the package, you may wish to add unit tests to the file <a href="https://github.com/msainsburydale/NeuralEstimators.jl/tree/main/test/runtests.jl">test/runtests.jl</a>. You can then run these tests locally by executing the following command from the root folder:</p><pre><code class="language-bash hljs">julia --project=. -e &quot;using Pkg; Pkg.test()&quot;</code></pre><h3 id="Papers-using-NeuralEstimators"><a class="docs-heading-anchor" href="#Papers-using-NeuralEstimators">Papers using NeuralEstimators</a><a id="Papers-using-NeuralEstimators-1"></a><a class="docs-heading-anchor-permalink" href="#Papers-using-NeuralEstimators" title="Permalink"></a></h3><ul><li><p><strong>Likelihood-free parameter estimation with neural Bayes estimators</strong> <a href="https://doi.org/10.1080/00031305.2023.2249522">[paper]</a> <a href="https://github.com/msainsburydale/NeuralBayesEstimators">[code]</a></p></li><li><p><strong>Neural methods for amortized inference</strong> <a href="https://doi.org/10.1146/annurev-statistics-112723-034123">[paper]</a><a href="https://github.com/andrewzm/Amortised_Neural_Inference_Review">[code]</a></p></li><li><p><strong>Neural Bayes estimators for irregular spatial data using graph neural networks</strong> <a href="https://doi.org/10.1080/10618600.2024.2433671">[paper]</a><a href="https://github.com/msainsburydale/NeuralEstimatorsGNN">[code]</a></p></li><li><p><strong>Neural Bayes estimators for censored inference with peaks-over-threshold models</strong> <a href="https://jmlr.org/papers/v25/23-1134.html">[paper]</a> <a href="https://github.com/Jbrich95/CensoredNeuralEstimators">[code]</a></p></li><li><p><strong>Neural parameter estimation with incomplete data</strong> <a href="https://arxiv.org/abs/2501.04330">[paper]</a><a href="https://github.com/msainsburydale/NeuralIncompleteData">[code]</a></p></li></ul></article><nav class="docs-footer"><a class="docs-footer-nextpage" href="methodology/">Methodology »</a><div class="flexbox-break"></div><p class="footer-message">Powered by <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> and the <a href="https://julialang.org/">Julia Programming Language</a>.</p></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="auto">Automatic (OS)</option><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option><option value="catppuccin-latte">catppuccin-latte</option><option value="catppuccin-frappe">catppuccin-frappe</option><option value="catppuccin-macchiato">catppuccin-macchiato</option><option value="catppuccin-mocha">catppuccin-mocha</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> version 1.13.0 on <span class="colophon-date" title="Sunday 22 June 2025 05:58">Sunday 22 June 2025</span>. Using Julia version 1.11.5.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>

dev/methodology/index.html

Lines changed: 1 addition & 1 deletion
@@ -8,4 +8,4 @@
 \underset{c(\cdot, \cdot)}{\mathrm{arg\,min}} \sum_{y\in\{0, 1\}} \textrm{Pr}(Y = y) \int_\Theta\int_\mathcal{Z}L_{\textrm{BCE}}\{y, c(\boldsymbol{Z}, \boldsymbol{\theta})\}p(\boldsymbol{Z}, \boldsymbol{\theta} \mid Y = y)\textrm{d} \boldsymbol{Z} \textrm{d} \boldsymbol{\theta}\\
 &amp;=
 \underset{c(\cdot, \cdot)}{\mathrm{arg\,min}} - \int_\Theta\int_\mathcal{Z}\Big[\log\{c(\boldsymbol{Z}, \boldsymbol{\theta})\}p(\boldsymbol{Z}, \boldsymbol{\theta}) + \log\{1 - c(\boldsymbol{Z}, \boldsymbol{\theta})\}p(\boldsymbol{Z})p(\boldsymbol{\theta}) \Big]\textrm{d} \boldsymbol{Z} \textrm{d} \boldsymbol{\theta},
-\end{aligned}\]</p><p>where <span>$L_{\textrm{BCE}}(y, c) \equiv -y\log(c) - (1 - y) \log(1 - c)$</span>. It can be shown (e.g., <a href="https://proceedings.mlr.press/v119/hermans20a.html">Hermans et al., 2020</a>, App. B) that the Bayes classifier is given by </p><p class="math-container">\[c^*(\boldsymbol{Z}, \boldsymbol{\theta}) = \frac{p(\boldsymbol{Z}, \boldsymbol{\theta})}{p(\boldsymbol{Z}, \boldsymbol{\theta}) + p(\boldsymbol{\theta})p(\boldsymbol{Z})}, \quad \boldsymbol{Z} \in \mathcal{Z}, \boldsymbol{\theta} \in \Theta,\]</p><p>and, hence,</p><p class="math-container">\[r(\boldsymbol{Z}, \boldsymbol{\theta}) = \frac{c^*(\boldsymbol{Z}, \boldsymbol{\theta})}{1 - c^*(\boldsymbol{Z}, \boldsymbol{\theta})}, \quad \boldsymbol{Z} \in \mathcal{Z}, \boldsymbol{\theta} \in \Theta.\]</p><p>This connection links the likelihood-to-evidence ratio to the average-risk-optimal solution of a standard binary classification problem, and consequently provides a foundation for approximating the ratio using neural networks. Specifically, let <span>$c_{\boldsymbol{\gamma}}: \mathcal{Z} \times \Theta \to (0, 1)$</span> denote a neural network parametrised by <span>$\boldsymbol{\gamma}$</span>. Then the Bayes classifier may be approximated by <span>$c_{\boldsymbol{\gamma}^*}(\cdot, \cdot)$</span>, where </p><p class="math-container">\[ \boldsymbol{\gamma}^* \equiv \underset{\boldsymbol{\gamma}}{\mathrm{arg\,min}} -\sum_{k=1}^K \Big[\log\{c_{\boldsymbol{\gamma}}(\boldsymbol{Z}^{(k)}, \boldsymbol{\theta}^{(k)})\} + \log\{1 - c_{\boldsymbol{\gamma}}(\boldsymbol{Z}^{(\sigma(k))}, \boldsymbol{\theta}^{(k)})\} \Big],\]</p><p>with each <span>$\boldsymbol{\theta}^{(k)}$</span> sampled independently from a &quot;proposal&quot; distribution <span>$p(\boldsymbol{\theta})$</span>, <span>$\boldsymbol{Z}^{(k)} \sim p(\boldsymbol{Z} \mid \boldsymbol{\theta}^{(k)})$</span>, and <span>$\sigma(\cdot)$</span> a random permutation of <span>$\{1, \dots, K\}$</span>. The proposal distribution <span>$p(\boldsymbol{\theta})$</span> does not necessarily correspond to the prior distribution <span>$\pi(\boldsymbol{\theta})$</span>, which is specified in the downstream inference algorithm (see below). In theory, any <span>$p(\boldsymbol{\theta})$</span> with support over <span>$\Theta$</span> can be used. However, with finite training data, the choice of <span>$p(\boldsymbol{\theta})$</span> is important, as it determines where the parameters <span>$\{\boldsymbol{\theta}^{(k)}\}$</span> are most densely sampled and, hence, where the neural network <span>$c_{\boldsymbol{\gamma}^*}(\cdot, \cdot)$</span> best approximates the Bayes classifier. Further, since neural networks are only reliable within the support of their training samples, a <span>$p(\boldsymbol{\theta})$</span> lacking full support over <span>$\Theta$</span> essentially acts as a &quot;soft prior&quot;. </p><p>Once the neural network is trained, <span>$r_{\boldsymbol{\gamma}^*}(\boldsymbol{Z}, \boldsymbol{\theta}) \equiv c_{\boldsymbol{\gamma}^*}(\boldsymbol{Z}, \boldsymbol{\theta})\{1 - c_{\boldsymbol{\gamma}^*}(\boldsymbol{Z}, \boldsymbol{\theta})\}^{-1}$</span>, <span>$\boldsymbol{Z} \in \mathcal{Z}, \boldsymbol{\theta} \in \Theta$</span>, may be used to quickly approximate the likelihood-to-evidence ratio, and therefore it is called a <em>neural ratio estimator</em>. </p><p>Inference based on a neural ratio estimator may proceed in a frequentist setting via maximum likelihood and likelihood ratios (e.g., <a href="https://doi.org/10.1016/j.spasta.2024.100848">Walchessen et al., 2024</a>), and in a Bayesian setting by facilitating the computation of transition probabilities in Hamiltonian Monte Carlo and MCMC algorithms (e.g., <a href="https://proceedings.mlr.press/v119/hermans20a.html">Hermans et al., 2020</a>). Further, an approximate posterior distribution can be obtained via the identity <span>${p(\boldsymbol{\theta} \mid \boldsymbol{Z})} = \pi(\boldsymbol{\theta}) r(\boldsymbol{\theta}, \boldsymbol{Z})$</span>, and sampled from using standard sampling techniques (e.g., <a href="https://doi.org/10.1214/20-BA1238">Thomas et al., 2022</a>).</p></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../">« NeuralEstimators</a><a class="docs-footer-nextpage" href="../workflow/overview/">Overview »</a><div class="flexbox-break"></div><p class="footer-message">Powered by <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> and the <a href="https://julialang.org/">Julia Programming Language</a>.</p></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="auto">Automatic (OS)</option><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option><option value="catppuccin-latte">catppuccin-latte</option><option value="catppuccin-frappe">catppuccin-frappe</option><option value="catppuccin-macchiato">catppuccin-macchiato</option><option value="catppuccin-mocha">catppuccin-mocha</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> version 1.12.0 on <span class="colophon-date" title="Tuesday 17 June 2025 06:32">Tuesday 17 June 2025</span>. Using Julia version 1.11.5.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
+\end{aligned}\]</p><p>where <span>$L_{\textrm{BCE}}(y, c) \equiv -y\log(c) - (1 - y) \log(1 - c)$</span>. It can be shown (e.g., <a href="https://proceedings.mlr.press/v119/hermans20a.html">Hermans et al., 2020</a>, App. B) that the Bayes classifier is given by </p><p class="math-container">\[c^*(\boldsymbol{Z}, \boldsymbol{\theta}) = \frac{p(\boldsymbol{Z}, \boldsymbol{\theta})}{p(\boldsymbol{Z}, \boldsymbol{\theta}) + p(\boldsymbol{\theta})p(\boldsymbol{Z})}, \quad \boldsymbol{Z} \in \mathcal{Z}, \boldsymbol{\theta} \in \Theta,\]</p><p>and, hence,</p><p class="math-container">\[r(\boldsymbol{Z}, \boldsymbol{\theta}) = \frac{c^*(\boldsymbol{Z}, \boldsymbol{\theta})}{1 - c^*(\boldsymbol{Z}, \boldsymbol{\theta})}, \quad \boldsymbol{Z} \in \mathcal{Z}, \boldsymbol{\theta} \in \Theta.\]</p><p>This connection links the likelihood-to-evidence ratio to the average-risk-optimal solution of a standard binary classification problem, and consequently provides a foundation for approximating the ratio using neural networks. Specifically, let <span>$c_{\boldsymbol{\gamma}}: \mathcal{Z} \times \Theta \to (0, 1)$</span> denote a neural network parametrised by <span>$\boldsymbol{\gamma}$</span>. Then the Bayes classifier may be approximated by <span>$c_{\boldsymbol{\gamma}^*}(\cdot, \cdot)$</span>, where </p><p class="math-container">\[ \boldsymbol{\gamma}^* \equiv \underset{\boldsymbol{\gamma}}{\mathrm{arg\,min}} -\sum_{k=1}^K \Big[\log\{c_{\boldsymbol{\gamma}}(\boldsymbol{Z}^{(k)}, \boldsymbol{\theta}^{(k)})\} + \log\{1 - c_{\boldsymbol{\gamma}}(\boldsymbol{Z}^{(\sigma(k))}, \boldsymbol{\theta}^{(k)})\} \Big],\]</p><p>with each <span>$\boldsymbol{\theta}^{(k)}$</span> sampled independently from a &quot;proposal&quot; distribution <span>$p(\boldsymbol{\theta})$</span>, <span>$\boldsymbol{Z}^{(k)} \sim p(\boldsymbol{Z} \mid \boldsymbol{\theta}^{(k)})$</span>, and <span>$\sigma(\cdot)$</span> a random permutation of <span>$\{1, \dots, K\}$</span>. The proposal distribution <span>$p(\boldsymbol{\theta})$</span> does not necessarily correspond to the prior distribution <span>$\pi(\boldsymbol{\theta})$</span>, which is specified in the downstream inference algorithm (see below). In theory, any <span>$p(\boldsymbol{\theta})$</span> with support over <span>$\Theta$</span> can be used. However, with finite training data, the choice of <span>$p(\boldsymbol{\theta})$</span> is important, as it determines where the parameters <span>$\{\boldsymbol{\theta}^{(k)}\}$</span> are most densely sampled and, hence, where the neural network <span>$c_{\boldsymbol{\gamma}^*}(\cdot, \cdot)$</span> best approximates the Bayes classifier. Further, since neural networks are only reliable within the support of their training samples, a <span>$p(\boldsymbol{\theta})$</span> lacking full support over <span>$\Theta$</span> essentially acts as a &quot;soft prior&quot;. </p><p>Once the neural network is trained, <span>$r_{\boldsymbol{\gamma}^*}(\boldsymbol{Z}, \boldsymbol{\theta}) \equiv c_{\boldsymbol{\gamma}^*}(\boldsymbol{Z}, \boldsymbol{\theta})\{1 - c_{\boldsymbol{\gamma}^*}(\boldsymbol{Z}, \boldsymbol{\theta})\}^{-1}$</span>, <span>$\boldsymbol{Z} \in \mathcal{Z}, \boldsymbol{\theta} \in \Theta$</span>, may be used to quickly approximate the likelihood-to-evidence ratio, and therefore it is called a <em>neural ratio estimator</em>. </p><p>Inference based on a neural ratio estimator may proceed in a frequentist setting via maximum likelihood and likelihood ratios (e.g., <a href="https://doi.org/10.1016/j.spasta.2024.100848">Walchessen et al., 2024</a>), and in a Bayesian setting by facilitating the computation of transition probabilities in Hamiltonian Monte Carlo and MCMC algorithms (e.g., <a href="https://proceedings.mlr.press/v119/hermans20a.html">Hermans et al., 2020</a>). Further, an approximate posterior distribution can be obtained via the identity <span>${p(\boldsymbol{\theta} \mid \boldsymbol{Z})} = \pi(\boldsymbol{\theta}) r(\boldsymbol{\theta}, \boldsymbol{Z})$</span>, and sampled from using standard sampling techniques (e.g., <a href="https://doi.org/10.1214/20-BA1238">Thomas et al., 2022</a>).</p></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../">« NeuralEstimators</a><a class="docs-footer-nextpage" href="../workflow/overview/">Overview »</a><div class="flexbox-break"></div><p class="footer-message">Powered by <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> and the <a href="https://julialang.org/">Julia Programming Language</a>.</p></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="auto">Automatic (OS)</option><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option><option value="catppuccin-latte">catppuccin-latte</option><option value="catppuccin-frappe">catppuccin-frappe</option><option value="catppuccin-macchiato">catppuccin-macchiato</option><option value="catppuccin-mocha">catppuccin-mocha</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> version 1.13.0 on <span class="colophon-date" title="Sunday 22 June 2025 05:58">Sunday 22 June 2025</span>. Using Julia version 1.11.5.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
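The methodology page in this hunk derives the neural ratio estimator from a binary classification objective: dependent pairs (Z^(k), θ^(k)) are labelled 1, and permuted pairs (Z^(σ(k)), θ^(k)) are labelled 0. A minimal numerical sketch of that objective on a toy Gaussian model, where the Bayes classifier is available in closed form (all names are illustrative and not the NeuralEstimators.jl API; the loss is averaged over K here rather than summed):

    using Random

    # Toy model: proposal p(θ) = N(0, 1) and Z | θ ~ N(θ, 1)
    K = 10_000
    θ = randn(K)            # θ^(k) sampled independently from the proposal
    Z = θ .+ randn(K)       # Z^(k) ~ p(Z | θ^(k))
    perm = randperm(K)      # random permutation σ(·) breaks the pairing

    logistic(x) = 1 / (1 + exp(-x))

    # Monte Carlo estimate of the binary cross-entropy objective:
    # dependent pairs labelled 1, permuted pairs labelled 0
    nre_objective(c) =
        -sum(log(c(Z[k], θ[k])) + log(1 - c(Z[perm[k]], θ[k])) for k in 1:K) / K

    # Exact log likelihood-to-evidence ratio for this toy model:
    # log r(Z, θ) = log N(Z; θ, 1) - log N(Z; 0, 2), and c* = logistic(log r)
    logr(Z, θ) = 0.5 * log(2) - 0.5 * (Z - θ)^2 + Z^2 / 4
    cstar(Z, θ) = logistic(logr(Z, θ))

    nre_objective(cstar)              # ≈ the minimal attainable risk
    nre_objective((Z, θ) -> 0.5)      # uninformative classifier: 2 log 2 ≈ 1.386

    # Once trained, the classifier yields the ratio and an unnormalised posterior:
    r(Z, θ) = cstar(Z, θ) / (1 - cstar(Z, θ))
    # p(θ | Z) ∝ π(θ) r(Z, θ), for a prior π supported within the proposal

Comparing the objective at cstar against the uninformative constant classifier illustrates the risk gap that training a neural classifier c_γ is meant to close.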
