docs/src/inference.md: 9 additions & 3 deletions
```diff
@@ -23,7 +23,9 @@ m = sfcc_model(Trange, θtrue) # condition on data
 # draw 1,000 samples using the No U-Turn Sampler; gradients are computed automatically by Turing using forward-mode automatic differentiation (ForwardDiff.jl).
 chain = sample(rng, m, NUTS(), 1000)
 display(chain)
-# output
+```
+Output:
+```
 Chains MCMC chain (1000×18×1 Array{Float64, 3}):
 
 Iterations = 501:1:1500
```
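For readers following along outside the full docs page, here is a minimal, self-contained sketch of the sampling step this hunk documents. The package's `sfcc_model` is not defined in this diff, so a hypothetical `toy_model` stands in for it; the `sample(rng, m, NUTS(), 1000)` call and the ForwardDiff-based gradients are as described in the docs text above.

```julia
# Minimal sketch of the NUTS sampling step documented above.
# `toy_model` is a hypothetical stand-in for the package's `sfcc_model`,
# which is not part of this diff.
using Turing, Random

@model function toy_model(y)
    μ ~ Normal(0, 1)          # prior on the mean
    σ ~ Exponential(1)        # prior on the noise scale
    for i in eachindex(y)
        y[i] ~ Normal(μ, σ)   # likelihood for each observation
    end
end

rng = Random.MersenneTwister(1234)
y = 0.5 .+ randn(rng, 100)    # synthetic data
m = toy_model(y)              # condition on data

# draw 1,000 samples with the No-U-Turn Sampler; Turing computes gradients
# automatically (forward-mode AD via ForwardDiff.jl by default)
chain = sample(rng, m, NUTS(), 1000)
display(chain)
```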
```diff
@@ -75,7 +77,9 @@ If you aren't interested in the full posterior distribution, Turing also provide
 using Optim
 
 optimize(m, MAP(), LBFGS())
-# output
+```
+Output:
+```
 ModeResult with maximized lp of 394.29
 6-element Named Vector{Float64}
 A │
```
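A corresponding sketch of the MAP step, reusing the hypothetical `toy_model` and data `y` from the sampling sketch above; the `optimize(m, MAP(), LBFGS())` call mirrors the one shown in this hunk.

```julia
# Sketch of the MAP estimation step shown above, reusing the hypothetical
# `toy_model` and data `y` from the previous sketch.
using Turing, Optim

m = toy_model(y)                               # condition on data
map_estimate = optimize(m, MAP(), LBFGS())     # maximum a posteriori via L-BFGS
display(map_estimate)                          # ModeResult with the optimized parameters
```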
```diff
@@ -93,7 +97,9 @@ Alternatively, one can ignore the prior entirely and just get a *maximum likelih
 ```julia
 # here we use the common LBFGS optimizer; see Optim.jl docs for more options
```
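The third hunk is truncated here, so only the lead comment of the maximum-likelihood example is visible. As a hedged sketch only, the usual pattern with Turing's `MLE()` mode and the same hypothetical model would look like this; the concrete call in the docs is not shown in this diff.

```julia
# Hedged sketch of the maximum-likelihood variant; the concrete call is not
# visible in the truncated hunk above, but Turing's MLE() mode with L-BFGS is
# the usual pattern. Reuses the hypothetical `toy_model` and `y` from above.
using Turing, Optim

m = toy_model(y)
# here we use the common LBFGS optimizer; see Optim.jl docs for more options
mle_estimate = optimize(m, MLE(), LBFGS())
display(mle_estimate)
```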