19 changes: 19 additions & 0 deletions docs/src/GaussianProcessEmulator.md
@@ -83,6 +83,18 @@ gauss_proc = GaussianProcess(
kernel = my_kernel )
```

!!! note "Kernel hyperparameter bounds, and optimizer kwargs (GPJL)"
`Optim.jl` is used to perform optimization, and keywords are passed in to `optimize_hyperparameters!`.
Kernel bounds are provided by default, but can be adjusted by providing the `kernbounds` keyword. This should be formatted in accordance with `GaussianProcesses.jl` formats, for example, by using the snippet:
```julia
using Emulators
n_hparams = length(Emulators.get_params(gauss_proc)[1])
low = repeat([log(low_bound)], n_hparams) # low_bound, up_bound are user-chosen; bounds are provided in log space
high = repeat([log(up_bound)], n_hparams)
# then, having built `emulator` from `gauss_proc`:
optimize_hyperparameters!(emulator; kernbounds = (low, high))
```
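Positional arguments are forwarded too; as the warning on `optimize_hyperparameters!` states, the first positional argument must be the `Optim` method (default `LBFGS()`). A minimal sketch, assuming the bounds above and that `Optim` is loaded:
```julia
using Optim
# swap the default LBFGS() for another Optim method, keeping the custom bounds
optimize_hyperparameters!(emulator, Optim.BFGS(); kernbounds = (low, high))
```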

## SKLJL
Alternatively, if you are using the `ScikitLearn.jl` package, you can [find the list of kernels here](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.gaussian_process).
These kernels require the following preamble:
@@ -103,6 +115,13 @@ gauss_proc = GaussianProcess(
```
You can also combine multiple ScikitLearn kernels via linear operations in the same way as above.
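For instance, a product of a constant kernel and an RBF, mirroring the package's default squared-exponential construction (a sketch; `pykernels` comes from the preamble above):
```julia
var_kern = pykernels.ConstantKernel(constant_value = 1.0)
rbf = pykernels.RBF(length_scale = 1.0)
my_kernel = var_kern * rbf # kernels combine via linear operations
```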

!!! note "Kernel hyperparameter bounds (SKLJL)"
Default bounds are provided, however, bounds are adjusted by providing new kernels, for example,
```julia
my_kernel = pykernels.RBF(length_scale = 1, length_scale_bounds=(low_bound, up_bound))
```
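The adjusted kernel is then passed in when constructing the Gaussian process, as in the example above (a sketch, assuming the same constructor pattern):
```julia
gauss_proc = GaussianProcess(
    SKLJL(); # the ScikitLearn.jl backend
    kernel = my_kernel)
```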


## [AGPJL](@id agpjl)

Autodifferentiable emulators are used by our differentiable samplers. Currently, the only autodifferentiable Gaussian process emulator supported in Julia (differentiable through the `predict()` method, though not through hyperparameter optimization) is `AbstractGPs.jl`. As `AbstractGPs.jl` has no optimization routines for kernels, we instead apply the following (temporary) recipe:
14 changes: 12 additions & 2 deletions src/GaussianProcess.jl
@@ -233,12 +233,22 @@ Optimize Gaussian process hyperparameters using built-in package method.
Warning: if one uses `GPJL()` and wishes to pass positional arguments, the first positional argument must be the `Optim` method (default `LBFGS()`).
"""
function optimize_hyperparameters!(gp::GaussianProcess{GPJL}, args...; kwargs...)

if !(haskey(kwargs, :kernbounds)) # if no bounds defined
n_hparams = length(get_params(gp)[1])
low = repeat([log(1e-5)], n_hparams) # bounds provided in log space
high = repeat([log(1e5)], n_hparams)
ext_kwargs = merge((; kwargs...), (; kernbounds = (low, high)))
else
ext_kwargs = (; kwargs...)
end
N_models = length(gp.models)

for i in 1:N_models
# always regress with noise_learn=false; if gp was created with noise_learn=true
# we've already explicitly added noise to the kernel

optimize!(gp.models[i], args...; noise = false, kwargs...)
optimize!(gp.models[i], args...; noise = false, ext_kwargs...)
println("optimized hyperparameters of GP: ", i)
println(gp.models[i].kernel)
end
@@ -304,7 +314,7 @@ function build_models!(
const_value = 1.0
var_kern = pykernels.ConstantKernel(constant_value = const_value, constant_value_bounds = (1e-5, 1e4))
rbf_len = ones(size(input_values, 2))
rbf = pykernels.RBF(length_scale = rbf_len, length_scale_bounds = (1e-5, 1e4))
rbf = pykernels.RBF(length_scale = rbf_len, length_scale_bounds = (1e-5, 1e5))
kern = var_kern * rbf
println("Using default squared exponential kernel:", kern)
else
15 changes: 12 additions & 3 deletions test/GaussianProcess/runtests.jl
@@ -96,7 +96,6 @@ using CalibrateEmulateSample.Utilities
# skip rebuild:
@test_logs (:warn,) Emulator(agp, iopairs, encoder_schedule = [], kernel_params = kernel_params)


μ1b, σ1b² = Emulators.predict(em_agp_from_gp1, new_inputs)

# gp1 and agp_from_gp2 should give similar predictions
@@ -187,8 +186,18 @@ using CalibrateEmulateSample.Utilities

em4_noise_from_Σ = Emulator(gp4, iopairs2; encoder_kwargs = (; obs_noise_cov = Σ))

Emulators.optimize_hyperparameters!(em4_noise_learnt)
Emulators.optimize_hyperparameters!(em4_noise_from_Σ)
# add some kernel bounds
n_hparams = length(Emulators.get_params(gp4)[1])
low = repeat([log(1e-4)], n_hparams) # bounds provided in log space
high = repeat([log(1e4)], n_hparams)
kernbounds = (low, high)
n_hparams2 = length(Emulators.get_params(gp4_noise_learnt)[1])
low2 = repeat([log(1e-4)], n_hparams2) # bounds provided in log space
high2 = repeat([log(1e4)], n_hparams2)
kernbounds2 = (low2, high2)

Emulators.optimize_hyperparameters!(em4_noise_from_Σ; kernbounds = kernbounds)
Emulators.optimize_hyperparameters!(em4_noise_learnt; kernbounds = kernbounds2)

new_inputs = zeros(2, 4)
new_inputs[:, 2] = [π / 2, π]