Releases · pytorch/botorch
Improved Multi-Objective Optimization, Support for categorical/mixed domains, robust/risk-aware optimization, efficient MTGP sampling
Compatibility
- Require PyTorch >=1.8.1 (#832).
- Require GPyTorch >=1.5 (#848).
- Changes to how input transforms are applied: `transform_inputs` is applied in `model.forward` if the model is in `train` mode, otherwise it is applied in the `posterior` call (#819, #835) (see the sketch after this list).
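For illustration, a minimal sketch of attaching an input transform to a model, showing where the transform is applied; the model, data, and use of `Normalize` are illustrative assumptions, not part of the release notes:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize

train_X = torch.rand(20, 2)
train_Y = train_X.sum(dim=-1, keepdim=True)

# The transform is stored on the model; with this release it is applied
# inside model.forward during training and inside posterior() in eval mode.
model = SingleTaskGP(train_X, train_Y, input_transform=Normalize(d=2))

model.eval()
posterior = model.posterior(torch.rand(5, 2))  # inputs are normalized here
```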
New Features
- Improved multi-objective optimization capabilities:
  - `qNoisyExpectedHypervolumeImprovement` acquisition function that improves on `qExpectedHypervolumeImprovement` in terms of tolerating observation noise and speeding up computation for large `q`-batches (#797, #822) (see the sketch after this list).
  - `qMultiObjectiveMaxValueEntropy` acquisition function (913aa0e, #760).
  - Heuristic for reference point selection (#830).
  - `FastNondominatedPartitioning` for hypervolume computations (#699).
  - `DominatedPartitioning` for partitioning the dominated space (#726).
  - `BoxDecompositionList` for handling box decompositions of varying sizes (#712).
  - Direct, batched dominated partitioning for the two-outcome case (#739).
  - `get_default_partitioning_alpha` utility providing a heuristic for selecting the approximation level for partitioning algorithms (#793).
  - New method for computing Pareto frontiers with less memory overhead (#842, #846).
- New `qLowerBoundMaxValueEntropy` acquisition function (a.k.a. GIBBON), a lightweight variant of Multi-fidelity Max-Value Entropy Search using a Determinantal Point Process approximation (#724, #737, #749).
- Support for discrete and mixed input domains:
  - `CategoricalKernel` for categorical inputs (#771).
  - `MixedSingleTaskGP` for mixed search spaces (containing both categorical and ordinal parameters) (#772, #847).
  - `optimize_acqf_discrete` for optimizing acquisition functions over fully discrete domains (#777).
  - Extend `optimize_acqf_mixed` to allow batch optimization (#804).
- Support for robust / risk-aware optimization:
  - Risk measures for robust / risk-averse optimization (#821).
  - `AppendFeatures` transform (#820).
  - `InputPerturbation` input transform for risk-averse BO with implementation errors (#827).
  - Tutorial notebook for Bayesian optimization of risk measures (#823).
  - Tutorial notebook for risk-averse Bayesian optimization under input perturbations (#828).
- More scalable multi-task modeling and sampling:
- Various changes to simplify and streamline integration with Ax:
- Random Fourier Feature (RFF) utilities for fast (approximate) GP function sampling (#750).
- `DelaunayPolytopeSampler` for fast uniform sampling from (simple) polytopes (#741).
- Add `evaluate` method to `ScalarizedObjective` (#795).
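A minimal usage sketch for `qNoisyExpectedHypervolumeImprovement`, assuming a fitted two-outcome model; the data, outcomes, and reference point below are made-up examples, not values from the release notes:

```python
import torch
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.models import SingleTaskGP

train_X = torch.rand(20, 3)
# Two hypothetical objectives to maximize.
train_Y = torch.stack([train_X.sum(-1), -train_X.prod(-1)], dim=-1)
model = SingleTaskGP(train_X, train_Y)

# ref_point is problem-specific; [0.0, -1.0] is an illustrative choice.
acqf = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[0.0, -1.0],
    X_baseline=train_X,
)
value = acqf(torch.rand(1, 2, 3))  # joint value of a q=2 candidate batch
```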
Bug Fixes
- Handle the case when all features are fixed in `optimize_acqf` (#770).
- Pass `fixed_features` to initial candidate generation functions (#806).
- Handle batch empty Pareto frontier in `FastPartitioning` (#740).
- Handle empty Pareto set in `is_non_dominated` (#743).
- Handle edge case of no or a single observation in `get_chebyshev_scalarization` (#762) (see the sketch after this list).
- Fix an issue in `gen_candidates_torch` that caused problems with acquisition functions using fantasy models (#766).
- Fix `HigherOrderGP` `dtype` bug (#728).
- Normalize before clamping in `Warp` input warping transform (#722).
- Fix bug in GP sampling (#764).
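A brief sketch of `get_chebyshev_scalarization`, the utility patched in #762; the weights and observations below are illustrative assumptions:

```python
import torch
from botorch.utils.multi_objective.scalarization import (
    get_chebyshev_scalarization,
)

Y = torch.rand(10, 2)  # hypothetical two-objective observations
weights = torch.tensor([0.5, 0.5])

# Returns a callable mapping outcomes to a scalarized objective,
# normalized using the observed Y (e.g. for a qParEGO-style loop).
scalarization = get_chebyshev_scalarization(weights=weights, Y=Y)
scalar_obj = scalarization(Y)  # shape: (10,)
```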
Other Changes
- Modify input transforms to support one-to-many transforms (#819, #835).
- Make initial conditions for acquisition function optimization honor parameter constraints (#752).
- Perform optimization only over unfixed features if `fixed_features` is passed (#839).
- Refactor Max Value Entropy Search methods (#734).
- Use linear algebra functions from the `torch.linalg` module (#735).
- Use PyTorch's `Kumaraswamy` distribution (#746) (see the sketch after this list).
- Improved capabilities and some bugfixes for batched models (#723, #767).
- Pass `callback` argument to `scipy.optimize.minimize` in `gen_candidates_scipy` (#744).
- Modify behavior of `X_pending` in multi-objective acquisition functions (#747).
- Allow multi-dimensional batch shapes in test functions (#757).
- Utility for converting batched multi-output models into batched single-output models (#759).
- Explicitly raise `NotPSDError` in `_scipy_objective_and_grad` (#787).
- Make `raw_samples` optional if `batch_initial_conditions` is passed (#801).
- Use powers of 2 in qMC docstrings & examples (#812).
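For reference, a small sketch of sampling from PyTorch's `Kumaraswamy` distribution, which BoTorch now uses internally (e.g. for input warping); the concentration values are arbitrary:

```python
import torch
from torch.distributions import Kumaraswamy

# Kumaraswamy(a, b) is a Beta-like distribution on (0, 1) with a
# closed-form CDF, which makes it convenient for input warping.
dist = Kumaraswamy(
    concentration1=torch.tensor([1.5]),
    concentration0=torch.tensor([3.0]),
)
samples = dist.rsample(torch.Size([8]))  # differentiable samples in (0, 1)
```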
High Order GP model, multi-step look-ahead acquisition function
Compatibility
New Features
- `HigherOrderGP` - High-Order Gaussian Process (HOGP) model for high-dimensional output regression (#631, #646, #648, #680).
- `qMultiStepLookahead` acquisition function for general look-ahead optimization approaches (#611, #659).
- `ScalarizedPosteriorMean` and `project_to_sample_points` for more advanced MFKG functionality (#645).
- Large-scale Thompson sampling tutorial (#654, #713).
- Tutorial for optimizing mixed continuous/discrete domains (application to multi-fidelity KG with discrete fidelities) (#716).
- `GPDraw` utility for sampling from (exact) GP priors (#655).
- Add `X` as optional arg to call signature of `MCAcquisitionObjective` (#487).
- `OSY` synthetic test problem (#679).
Bug Fixes
- Fix matrix multiplication in `scalarize_posterior` (#638).
- Set `X_pending` in `get_acquisition_function` in `qEHVI` (#662).
- Make contextual kernel device-aware (#666).
- Do not use an `MCSampler` in `MaxPosteriorSampling` (#701) (see the sketch after this list).
- Add ability to subset outcome transforms (#711).
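A minimal Thompson-sampling sketch using `MaxPosteriorSampling` (the strategy touched by #701); the model and candidate set are illustrative assumptions:

```python
import torch
from botorch.generation import MaxPosteriorSampling
from botorch.models import SingleTaskGP

train_X = torch.rand(20, 2)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

# Thompson sampling over a discrete candidate set: draw a joint posterior
# sample at the candidates and keep the maximizing points.
X_cand = torch.rand(1000, 2)
strategy = MaxPosteriorSampling(model=model, replacement=False)
X_next = strategy(X_cand, num_samples=5)  # shape: (5, 2)
```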
Performance Improvements
- Batchify box decomposition for 2d case (#642).
Other Changes
- Use scipy distribution in MES quantile bisect (#633).
- Use new closure definition for GPyTorch priors (#634).
- Allow enabling of approximate root decomposition in `posterior` calls (#652).
- Support for upcoming 21201-dimensional PyTorch `SobolEngine` (#672, #674) (see the sketch after this list).
- Refactored various MOO utilities to allow future additions (#656, #657, #658, #661).
- Support `input_transform` in `PairwiseGP` (#632).
- Output shape checks for `t_batch_mode_transform` (#577).
- Check for NaN in `gen_candidates_scipy` (#688).
- Introduce `base_sample_shape` property to `Posterior` objects (#718).
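A short sketch of drawing quasi-random points with PyTorch's `SobolEngine`; the dimension and sample count are arbitrary (sample counts that are powers of 2 preserve the sequence's balance properties, hence the powers-of-2 convention in the qMC docs):

```python
from torch.quasirandom import SobolEngine

# Scrambled Sobol points in [0, 1]^d; newer PyTorch versions support
# dimensions up to 21201.
engine = SobolEngine(dimension=16, scramble=True, seed=0)
samples = engine.draw(1024)  # 1024 = 2**10 points, shape: (1024, 16)
```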
Contextual Bayesian Optimization, Input Warping, TuRBO, sampling from polytopes.
Compatibility
New Features
- Models (LCE-A, LCE-M, and SAC) for Contextual Bayesian Optimization (#581).
  - Implements core models from: High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization. Q. Feng, B. Letham, H. Mao, E. Bakshy. NeurIPS 2020.
  - See Ax for usage of these models.
- Hit and run sampler for uniform sampling from a polytope (#592) (see the sketch after this list).
- Input warping:
- TuRBO-1 tutorial (#598).
  - Implements the method from: Scalable Global Optimization via Local Bayesian Optimization. D. Eriksson, M. Pearce, J. Gardner, R. D. Turner, M. Poloczek. NeurIPS 2019.
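A sketch of the hit-and-run polytope sampler, assuming the `HitAndRunPolytopeSampler` class in `botorch.utils.sampling` and the `(A, b)` convention `A x <= b` for inequality constraints; the constraint values and tensor shapes are illustrative assumptions:

```python
import torch
from botorch.utils.sampling import HitAndRunPolytopeSampler

# Polytope {x : A x <= b}; here the 2-d unit box 0 <= x_i <= 1.
A = torch.tensor([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = torch.tensor([[1.0], [0.0], [1.0], [0.0]])

sampler = HitAndRunPolytopeSampler(inequality_constraints=(A, b))
samples = sampler.draw(64)  # 64 approximately uniform points in the box
```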
Bug fixes
Other changes
- Add `train_inputs` option to `qMaxValueEntropy` (#593).
- Enable GPyTorch settings to override BoTorch defaults for `fast_pred_var` and `debug` (#595).
- Rename `set_train_data_transform` -> `preprocess_transform` (#575).
- Modify `_expand_bounds()` shape checks to work with >2-dim bounds (#604).
- Add `batch_shape` property to models (#588).
- Modify `qMultiFidelityKnowledgeGradient.evaluate()` to work with `project`, `expand` and `cost_aware_utility` (#594).
- Add list of papers using BoTorch to website docs (#617).
Maintenance Release
New Features
- Add `PenalizedAcquisitionFunction` wrapper (#585)
- Input transforms
- Differentiable approximate rounding for integers (#561)
Bug fixes
- Fix sign error in UCB when `maximize=False` (a4bfacbfb2109d3b89107d171d2101e1995822bb)
- Fix `batch_range` sample shape logic (#574)
Other changes
- Better support for two-stage sampling in preference learning (0cd13d0)
- Remove noise term in `PairwiseGP` and add `ScaleKernel` by default (#571)
- Rename `prior` to `task_covar_prior` in `MultiTaskGP` and `FixedNoiseMultiTaskGP` (16573fe)
- Support only transforming inputs on training or evaluation (#551)
- Add `equals` method for `InputTransform` (#552)
Maintenance Release
New Features
- Constrained Multi-Objective tutorial (#493)
- Multi-fidelity Knowledge Gradient tutorial (#509)
- Support for batch qMC sampling (#510)
- New `evaluate` method for `qKnowledgeGradient` (#515)
Compatibility
- Require PyTorch >=1.6 (#535)
- Require GPyTorch >=1.2 (#535)
- Remove deprecated `botorch.gen` module (#532)
Bug fixes
- Fix bad backward-indexing of `task_feature` in `MultiTaskGP` (#485)
- Fix bounds in constrained Branin-Currin test function (#491)
- Fix `max_hv` for C2DTLZ2 and make `Hypervolume` always return a float (#494)
- Fix bug in `draw_sobol_samples` that did not use the proper effective dimension (#505)
- Fix constraints for `q>1` in `qExpectedHypervolumeImprovement` (c80c4fd)
- Only use feasible observations in partitioning for `qExpectedHypervolumeImprovement` in `get_acquisition_function` (#523)
- Improved GPU compatibility for `PairwiseGP` (#537)
Performance Improvements
- Reduce memory footprint in `qExpectedHypervolumeImprovement` (#522)
- Add `(q)ExpectedHypervolumeImprovement` to nonnegative functions [for better initialization] (#496)
Other changes
- Support batched `best_f` in `qExpectedImprovement` (#487) (see the sketch after this list)
- Allow returning the full tree of solutions in `OneShotAcquisitionFunction` (#488)
- Added `construct_inputs` class method to models to programmatically construct the inputs to the constructor from a standardized `TrainingData` representation (#477, #482, 3621198)
- Acquisition function constructors now accept catch-all `**kwargs` options (#478, e5b6935)
- Use `psd_safe_cholesky` in `qMaxValueEntropy` for better numerical stability (#518)
- Added `WeightedMCMultiOutputObjective` (81d91fd)
- Add ability to specify `outcomes` to all multi-output objectives (#524)
- Return optimization output in `info_dict` for `fit_gpytorch_scipy` (#534)
- Use `setuptools_scm` for versioning (#539)
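A small sketch of `qExpectedImprovement`, whose `best_f` argument can now be batched; the model, data, and scalar `best_f` here are illustrative:

```python
import torch
from botorch.acquisition import qExpectedImprovement
from botorch.models import SingleTaskGP

train_X = torch.rand(20, 2)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

# best_f may be a scalar or, for batched models, a batched tensor.
qEI = qExpectedImprovement(model=model, best_f=train_Y.max())
value = qEI(torch.rand(4, 3, 2))  # 4 batches of q=3 candidates -> shape (4,)
```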
Multi-Objective Bayesian Optimization
New Features
- Multi-Objective Acquisition Functions (#466)
  - q-Expected Hypervolume Improvement
  - q-ParEGO
  - Analytic Expected Hypervolume Improvement with auto-differentiation
- Multi-Objective Utilities (#466)
  - Pareto Computation
  - Hypervolume Calculation (see the sketch after this list)
  - Box Decomposition algorithm
- Multi-Objective Test Functions (#466)
  - Suite of synthetic test functions for multi-objective, constrained optimization
- Multi-Objective Tutorial (#468)
- Abstract `ConstrainedBaseTestProblem` (#454)
- Add `optimize_acqf_list` method for sequentially, greedily optimizing one candidate from each provided acquisition function (d10aec9)
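A minimal sketch of the new Pareto and hypervolume utilities; the data and reference point are illustrative assumptions:

```python
import torch
from botorch.utils.multi_objective.hypervolume import Hypervolume
from botorch.utils.multi_objective.pareto import is_non_dominated

Y = torch.rand(50, 2)  # hypothetical two-objective outcomes (maximization)

# Boolean mask of non-dominated points, then the hypervolume they
# dominate relative to a problem-specific reference point.
pareto_mask = is_non_dominated(Y)
hv = Hypervolume(ref_point=torch.zeros(2))
volume = hv.compute(Y[pareto_mask])
```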
Bug fixes
- Fixed re-arranging mean in MultiTask multi-output models (#450).
Other changes
Bugfix Release
Bug fixes
- There was a mysterious issue with the 0.2.3 wheel on PyPI, where part of the `botorch/optim/utils.py` file was not included, which resulted in an `ImportError` for many central components of the code. Interestingly, the source dist (built with the same command) did not have this issue.
- Preserve order in `ChainedOutcomeTransform` (#440).
New Features
- Utilities for estimating the feasible volume under outcome constraints (#437).
Pairwise GP for Preference Learning, Sampling Strategies
Introduces a new Pairwise GP model for Preference Learning with pair-wise preferential feedback, as well as a Sampling Strategies abstraction for generating candidates from a discrete candidate set.
Compatibility
New Features
- Add `PairwiseGP` for preference learning with pair-wise comparison data (#388) (see the sketch after this list).
- Add `SamplingStrategy` abstraction for sampling-based generation strategies, including `MaxPosteriorSampling` (i.e. Thompson Sampling) and `BoltzmannSampling` (#218, #407).
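A minimal construction sketch for `PairwiseGP`; the data and the comparison convention assumed here (each row of `comparisons` lists the preferred item's index first) are illustrative:

```python
import torch
from botorch.models.pairwise_gp import PairwiseGP

datapoints = torch.rand(6, 2)  # 6 items with 2 features each
# Each row [i, j] encodes "item i was preferred over item j".
comparisons = torch.tensor([[0, 1], [2, 3], [4, 5]])

model = PairwiseGP(datapoints, comparisons)
posterior = model.posterior(torch.rand(4, 2))  # latent utility posterior
```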
Deprecations
- The existing `botorch.gen` module is moved to `botorch.generation.gen` and imports from `botorch.gen` will raise a warning (an error in the next release) (#218).
Bug fixes
- Fix & update a number of tutorials (#394, #398, #393, #399, #403).
- Fix CUDA tests (#404).
- Fix Sobol maxdim limitation in `prune_baseline` (#419).
Other changes
- Better stopping criteria for stochastic optimization (#392).
- Improve numerical stability of `LinearTruncatedFidelityKernel` (#409).
- Allow batched `best_f` in `qExpectedImprovement` and `qProbabilityOfImprovement` (#411).
- Introduce new logger framework (#412).
- Faster indexing in some situations (#414).
- More generic `BaseTestProblem` (9e604fe).
Require Python 3.7 and new features
Requires Python 3.7 and adds new features for active learning and multi-fidelity optimization, along with a number of bug fixes.
Compatibility
New Features
- Add `qNegIntegratedPosteriorVariance` for Bayesian active learning (#377).
- Add `FixedNoiseMultiFidelityGP`, analogous to `SingleTaskMultiFidelityGP` (#386).
- Support `scalarize_posterior` for m>1 and q>1 posteriors (#374).
- Support `subset_output` method on multi-fidelity models (#372).
- Add utilities for sampling from the simplex and hypersphere (#369) (see the sketch after this list).
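A quick sketch of the new sampling utilities, assuming `sample_simplex` and `sample_hypersphere` in `botorch.utils.sampling`; the dimensions and counts are arbitrary:

```python
from botorch.utils.sampling import sample_hypersphere, sample_simplex

# 8 points on the 3-dimensional probability simplex (rows sum to 1).
w = sample_simplex(d=3, n=8, qmc=True)

# 8 points on the surface of the unit sphere in R^3.
x = sample_hypersphere(d=3, n=8, qmc=True)
```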
Bug fixes
- Fix `TestLoader` local test discovery (#376).
- Fix batch-list conversion of `SingleTaskMultiFidelityGP` (#370).
- Validate tensor args before checking input scaling for more informative error messages (#368).
- Fix flaky `qNoisyExpectedImprovement` test (#362).
- Fix test function in closed-loop tutorial (#360).
- Fix `num_output` attribute in BoTorch/Ax tutorial (#355).