docs/src/GaussianProcessEmulator.md
    kernel = my_kernel )
```

!!! note "Kernel hyperparameter bounds and optimizer kwargs (GPJL)"

    `Optim.jl` is used to perform the optimization, and keyword arguments are passed in through `optimize_hyperparameters!`.

    Kernel bounds are provided by default, but they can be adjusted via the `kernbounds` keyword. This should be formatted in accordance with the `GaussianProcesses.jl` conventions, for example with a snippet such as:
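    As an illustrative sketch only (the original snippet is not shown here): `GaussianProcesses.jl` takes bounds as a pair of vectors `[lower, upper]` over the log-scale kernel hyperparameters, one entry per hyperparameter. The emulator name and numerical values below are placeholders.

    ```julia
    # Hypothetical bounds for a kernel with two hyperparameters,
    # given as [lower_bounds, upper_bounds] on the log-scale parameters.
    # `gauss_proc` and the values are placeholders for illustration.
    optimize_hyperparameters!(
        gauss_proc,
        kernbounds = [[-2.0, -2.0], [2.0, 2.0]],
    )
    ```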
Alternatively, if you are using the `ScikitLearn.jl` package, you can [find the list of kernels here](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.gaussian_process).
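As a sketch of how such a kernel might be constructed through `ScikitLearn.jl` (the particular kernel choice and hyperparameter values here are illustrative, not prescribed by this package):

```julia
using ScikitLearn: @sk_import

# Import scikit-learn Gaussian process kernels into Julia.
@sk_import gaussian_process.kernels: (RBF, WhiteKernel)

# An RBF kernel plus a white-noise term; the length scale (1.0)
# and noise level (1e-3) are placeholder values.
my_kernel = RBF(1.0) + WhiteKernel(1e-3)
```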
Autodifferentiable emulators are used by our differentiable samplers. Currently, the only supported autodifferentiable Gaussian process emulator in Julia (autodifferentiable within the `predict()` method, not during hyperparameter optimization) is `AbstractGPs.jl`. As `AbstractGPs.jl` has no optimization routines for kernels, we instead apply the following (temporary) recipe:
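For context, a minimal `AbstractGPs.jl` prediction with fixed (externally chosen) kernel hyperparameters looks like the sketch below; the data and hyperparameter values are placeholders, and this is not the recipe itself:

```julia
using AbstractGPs  # re-exports kernels from KernelFunctions.jl

# Kernel with fixed hyperparameters; AbstractGPs does not optimize these.
kernel = SqExponentialKernel()
f = GP(kernel)

x = rand(10)             # toy training inputs (placeholder data)
y = sin.(2pi .* x)       # toy training targets
fx = f(x, 1e-6)          # finite projection with observation-noise variance
p_fx = posterior(fx, y)  # exact GP posterior conditioned on (x, y)

# Posterior mean and variance at new inputs; these predictions
# are the autodifferentiable part of the emulator.
m = mean(p_fx([0.25]))
v = var(p_fx([0.25]))
```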