Commit 9f65dae

Add default kernel bounds for GP (#393)

* add GP lengthscales to the GPJL code, and modify default lengthscales for SKLJL
* typo
* format
* tests
* docs for adding bounds
* format

1 parent d31a2dc commit 9f65dae

File tree

3 files changed: +43 −5 lines

docs/src/GaussianProcessEmulator.md

Lines changed: 19 additions & 0 deletions

@@ -83,6 +83,18 @@ gauss_proc = GaussianProcess(
     kernel = my_kernel )
 ```

+!!! note "Kernel hyperparameter bounds, and optimizer kwargs (GPJL)"
+    `Optim.jl` is used to perform optimization, and keywords are passed in to `optimize_hyperparameters!`.
+    Kernel bounds are provided by default, but can be adjusted by providing the `kernbounds` keyword. This should be formatted in accordance with `GaussianProcesses.jl` formats, for example, by using the snippet:
+    ```julia
+    using Emulators
+    n_hparams = length(Emulators.get_params(gauss_proc)[1])
+    low = repeat([log(low_bound)], n_hparams) # bounds provided in log space
+    high = repeat([log(up_bound)], n_hparams)
+    # then, having built an emulator with `gauss_proc`:
+    optimize_hyperparameters!(emulator; kernbounds = (low, high))
+    ```
+
 ## SKLJL
 Alternatively if you are using the `ScikitLearn.jl` package, you can [find the list of kernels here](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.gaussian_process).
 These need this preamble:
@@ -103,6 +115,13 @@ gauss_proc = GaussianProcess(
 ```
 You can also combine multiple ScikitLearn kernels via linear operations in the same way as above.

+!!! note "Kernel hyperparameter bounds (SKLJL)"
+    Default bounds are provided; however, bounds are adjusted by providing new kernels, for example:
+    ```julia
+    my_kernel = pykernels.RBF(length_scale = 1, length_scale_bounds = (low_bound, up_bound))
+    ```
+
 ## [AGPJL](@id agpjl)

 Autodifferentiable emulators are used by our differentiable samplers. Currently the only support for autodifferentiable Gaussian process emulators in Julia (within the `predict()` method, not hyperparameter optimization) is to use `AbstractGPs.jl`. As `AbstractGPs.jl` has no optimization routines for kernels, we instead apply the following (temporary) recipe:
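The two notes added above expect bounds in different conventions: the GPJL path takes per-hyperparameter vectors in log space via `kernbounds`, while the SKLJL kernels take a single linear-space `(low, high)` tuple on the kernel itself. As a rough illustration of that difference only (plain Python, not the package's API; `low_bound` and `up_bound` are placeholder values):

```python
import math

# GPJL-style: one log-space entry per kernel hyperparameter
def gpjl_kernbounds(n_hparams, low_bound, up_bound):
    low = [math.log(low_bound)] * n_hparams
    high = [math.log(up_bound)] * n_hparams
    return (low, high)

# SKLJL-style: a single linear-space tuple attached to each kernel
def skljl_length_scale_bounds(low_bound, up_bound):
    return (low_bound, up_bound)

low, high = gpjl_kernbounds(3, 1e-5, 1e5)  # three hyperparameters, log-space bounds
```

Note the GPJL vectors must match the number of kernel hyperparameters (hence the `length(Emulators.get_params(...)[1])` query in the docs snippet), whereas the SKLJL tuple is broadcast by scikit-learn over the kernel's length scales.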

src/GaussianProcess.jl

Lines changed: 12 additions & 2 deletions

@@ -233,12 +233,22 @@ Optimize Gaussian process hyperparameters using in-built package method.
 Warning: if one uses `GPJL()` and wishes to modify positional arguments, the first positional argument must be the `Optim` method (default `LBFGS()`).
 """
 function optimize_hyperparameters!(gp::GaussianProcess{GPJL}, args...; kwargs...)
+    if !(haskey(kwargs, :kernbounds)) # if no bounds defined
+        n_hparams = length(get_params(gp)[1])
+        low = repeat([log(1e-5)], n_hparams) # bounds provided in log space
+        high = repeat([log(1e5)], n_hparams)
+        ext_kwargs = merge((; kwargs...), (; kernbounds = (low, high)))
+    else
+        ext_kwargs = (; kwargs...)
+    end
     N_models = length(gp.models)
+
     for i in 1:N_models
         # always regress with noise_learn=false; if gp was created with noise_learn=true
         # we've already explicitly added noise to the kernel
-        optimize!(gp.models[i], args...; noise = false, kwargs...)
+        optimize!(gp.models[i], args...; noise = false, ext_kwargs...)
         println("optimized hyperparameters of GP: ", i)
         println(gp.models[i].kernel)
     end
@@ -304,7 +314,7 @@ function build_models!(
     const_value = 1.0
     var_kern = pykernels.ConstantKernel(constant_value = const_value, constant_value_bounds = (1e-5, 1e4))
     rbf_len = ones(size(input_values, 2))
-    rbf = pykernels.RBF(length_scale = rbf_len, length_scale_bounds = (1e-5, 1e4))
+    rbf = pykernels.RBF(length_scale = rbf_len, length_scale_bounds = (1e-5, 1e5))
     kern = var_kern * rbf
     println("Using default squared exponential kernel:", kern)
 else
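The fallback added to `optimize_hyperparameters!` above — supply log-space default bounds only when the caller has not passed `kernbounds`, otherwise forward the caller's keywords untouched — can be sketched as a keyword merge. A minimal Python analogue of that control flow (not the package's implementation; the function name is invented for illustration):

```python
import math

def with_default_kernbounds(n_hparams, **kwargs):
    """If no 'kernbounds' keyword was given, merge in log-space
    defaults spanning [1e-5, 1e5]; otherwise keep the caller's bounds."""
    if "kernbounds" not in kwargs:
        low = [math.log(1e-5)] * n_hparams   # bounds in log space
        high = [math.log(1e5)] * n_hparams
        kwargs = {**kwargs, "kernbounds": (low, high)}
    return kwargs

defaults = with_default_kernbounds(2)                            # defaults injected
custom = with_default_kernbounds(2, kernbounds=([0.0], [1.0]))   # caller's bounds win
```

Merging rather than overwriting is what makes user-supplied `kernbounds` take precedence, mirroring the `haskey(kwargs, :kernbounds)` branch in the Julia code.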

test/GaussianProcess/runtests.jl

Lines changed: 12 additions & 3 deletions

@@ -96,7 +96,6 @@ using CalibrateEmulateSample.Utilities
     # skip rebuild:
     @test_logs (:warn,) Emulator(agp, iopairs, encoder_schedule = [], kernel_params = kernel_params)

-
     μ1b, σ1b² = Emulators.predict(em_agp_from_gp1, new_inputs)

     # gp1 and agp_from_gp2 should give similar predictions
@@ -187,8 +186,18 @@ using CalibrateEmulateSample.Utilities

     em4_noise_from_Σ = Emulator(gp4, iopairs2; encoder_kwargs = (; obs_noise_cov = Σ))

-    Emulators.optimize_hyperparameters!(em4_noise_learnt)
-    Emulators.optimize_hyperparameters!(em4_noise_from_Σ)
+    # add some kernel bounds
+    n_hparams = length(Emulators.get_params(gp4)[1])
+    low = repeat([log(1e-4)], n_hparams) # bounds provided in log space
+    high = repeat([log(1e4)], n_hparams)
+    kernbounds = (low, high)
+    n_hparams2 = length(Emulators.get_params(gp4_noise_learnt)[1])
+    low2 = repeat([log(1e-4)], n_hparams2) # bounds provided in log space
+    high2 = repeat([log(1e4)], n_hparams2)
+    kernbounds2 = (low2, high2)
+
+    Emulators.optimize_hyperparameters!(em4_noise_from_Σ; kernbounds = kernbounds)
+    Emulators.optimize_hyperparameters!(em4_noise_learnt; kernbounds = kernbounds2)

     new_inputs = zeros(2, 4)
     new_inputs[:, 2] = [π / 2, π]
