
Releases: pytorch/botorch

Improved Multi-Objective Optimization, Support for categorical/mixed domains, robust/risk-aware optimization, efficient MTGP sampling

29 Jun 19:31

Compatibility

  • Require PyTorch >=1.8.1 (#832).
  • Require GPyTorch >=1.5 (#848).
  • Changes to how input transforms are applied: transform_inputs is applied in model.forward if the model is in train mode; otherwise it is applied in the posterior call (#819, #835). See the sketch below.
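
The example below is a minimal sketch of this behavior, assuming a SingleTaskGP with a Normalize input transform; the data and scaling are made up for illustration:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize

# Toy data on an un-normalized domain (illustrative only).
train_X = torch.rand(10, 2) * 5.0
train_Y = train_X.sum(dim=-1, keepdim=True)

# The transform is stored on the model: it is applied in model.forward
# while the model is in train mode, and in the posterior call otherwise.
model = SingleTaskGP(train_X, train_Y, input_transform=Normalize(d=2))

model.eval()
posterior = model.posterior(torch.rand(4, 2) * 5.0)  # inputs normalized here
```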

New Features

  • Improved multi-objective optimization capabilities:
    • qNoisyExpectedHypervolumeImprovement acquisition function that improves on qExpectedHypervolumeImprovement in terms of tolerating observation noise and speeding up computation for large q-batches (#797, #822); see the sketch after this list.
    • qMultiObjectiveMaxValueEntropy acquisition function (913aa0e, #760).
    • Heuristic for reference point selection (#830).
    • FastNondominatedPartitioning for Hypervolume computations (#699).
    • DominatedPartitioning for partitioning the dominated space (#726).
    • BoxDecompositionList for handling box decompositions of varying sizes (#712).
    • Direct, batched dominated partitioning for the two-outcome case (#739).
    • get_default_partitioning_alpha utility providing heuristic for selecting approximation level for partitioning algorithms (#793).
    • New method for computing Pareto frontiers with less memory overhead (#842, #846).
  • New qLowerBoundMaxValueEntropy acquisition function (a.k.a. GIBBON), a lightweight variant of Multi-fidelity Max-Value Entropy Search using a Determinantal Point Process approximation (#724, #737, #749).
  • Support for discrete and mixed input domains:
    • CategoricalKernel for categorical inputs (#771).
    • MixedSingleTaskGP for mixed search spaces (containing both categorical and ordinal parameters) (#772, #847).
    • optimize_acqf_discrete for optimizing acquisition functions over fully discrete domains (#777).
    • Extend optimize_acqf_mixed to allow batch optimization (#804).
  • Support for robust / risk-aware optimization:
    • Risk measures for robust / risk-averse optimization (#821).
    • AppendFeatures transform (#820).
    • InputPerturbation input transform for risk-averse BO with implementation errors (#827).
    • Tutorial notebook for Bayesian Optimization of risk measures (#823).
    • Tutorial notebook for risk-averse Bayesian Optimization under input perturbations (#828).
  • More scalable multi-task modeling and sampling:
    • KroneckerMultiTaskGP model for efficient multi-task modeling for block-design settings (all tasks observed at all inputs) (#637).
    • Support for transforms in Multi-Task GP models (#681).
    • Posterior sampling based on Matheron's rule for Multi-Task GP models (#841).
  • Various changes to simplify and streamline integration with Ax:
    • Handle non-block designs in TrainingData (#794).
    • Acquisition function input constructor registry (#788, #802, #845).
  • Random Fourier Feature (RFF) utilities for fast (approximate) GP function sampling (#750).
  • DelaunayPolytopeSampler for fast uniform sampling from (simple) polytopes (#741).
  • Add evaluate method to ScalarizedObjective (#795).
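
As an illustration of the headline feature, here is a minimal sketch of constructing qNoisyExpectedHypervolumeImprovement on a toy two-objective problem. The data, reference point, and sampler settings are made up for illustration, model fitting is omitted, and the num_samples constructor reflects the MC sampler API as of this release:

```python
import torch
from botorch.acquisition.multi_objective import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.models import SingleTaskGP
from botorch.sampling import SobolQMCNormalSampler

# Toy two-objective data (illustrative only; model fitting omitted).
train_X = torch.rand(20, 3)
train_Y = torch.stack([train_X.sum(-1), -train_X.prod(-1)], dim=-1)
model = SingleTaskGP(train_X, train_Y)

acqf = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[0.0, -1.0],  # problem-specific in practice
    X_baseline=train_X,     # previously evaluated designs
    sampler=SobolQMCNormalSampler(num_samples=128),
)
value = acqf(torch.rand(2, 3))  # joint value of a q=2 candidate batch
```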

Bug Fixes

  • Handle the case when all features are fixed in optimize_acqf (#770).
  • Pass fixed_features to initial candidate generation functions (#806).
  • Handle batched empty Pareto frontiers in FastPartitioning (#740).
  • Handle empty Pareto set in is_non_dominated (#743).
  • Handle the edge case of zero or one observations in get_chebyshev_scalarization (#762).
  • Fix an issue in gen_candidates_torch that caused problems with acquisition functions using fantasy models (#766).
  • Fix HigherOrderGP dtype bug (#728).
  • Normalize before clamping in Warp input warping transform (#722).
  • Fix bug in GP sampling (#764).

Other Changes

  • Modify input transforms to support one-to-many transforms (#819, #835).
  • Make initial conditions for acquisition function optimization honor parameter constraints (#752).
  • Perform optimization only over unfixed features if fixed_features is passed (#839).
  • Refactor Max Value Entropy Search Methods (#734).
  • Use Linear Algebra functions from the torch.linalg module (#735).
  • Use PyTorch's Kumaraswamy distribution (#746).
  • Improved capabilities and some bugfixes for batched models (#723, #767).
  • Pass callback argument to scipy.optim.minimize in gen_candidates_scipy (#744).
  • Modify behavior of X_pending in multi-objective acquisition functions (#747).
  • Allow multi-dimensional batch shapes in test functions (#757).
  • Utility for converting batched multi-output models into batched single-output models (#759).
  • Explicitly raise NotPSDError in _scipy_objective_and_grad (#787).
  • Make raw_samples optional if batch_initial_conditions is passed (#801).
  • Use powers of 2 in qMC docstrings & examples (#812).

Higher-order GP model, multi-step look-ahead acquisition function

23 Feb 21:33

Compatibility

  • Require PyTorch >=1.7.1 (#714).
  • Require GPyTorch >=1.4 (#714).

New Features

  • HigherOrderGP - High-Order Gaussian Process (HOGP) model for
    high-dimensional output regression (#631, #646, #648, #680).
  • qMultiStepLookahead acquisition function for general look-ahead
    optimization approaches (#611, #659).
  • ScalarizedPosteriorMean and project_to_sample_points for more
    advanced MFKG functionality (#645).
  • Large-scale Thompson sampling tutorial (#654, #713).
  • Tutorial for optimizing mixed continuous/discrete domains (application
    to multi-fidelity KG with discrete fidelities) (#716).
  • GPDraw utility for sampling from (exact) GP priors (#655); see the sketch after this list.
  • Add X as optional arg to call signature of MCAcquisitionObjective (#487).
  • OSY synthetic test problem (#679).
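
A minimal sketch of the GPDraw utility, assuming a toy SingleTaskGP (model fitting omitted); GPDraw yields a function whose successive evaluations all come from one consistent GP sample path:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.utils.gp_sampling import GPDraw

# Toy model (illustrative only; model fitting omitted).
train_X = torch.rand(5, 1)
train_Y = torch.sin(6 * train_X)
gp_sample = GPDraw(SingleTaskGP(train_X, train_Y), seed=0)

Y1 = gp_sample(torch.rand(3, 1))  # draws from one consistent sample path
Y2 = gp_sample(torch.rand(3, 1))  # consistent with the earlier evaluations
```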

Bug Fixes

  • Fix matrix multiplication in scalarize_posterior (#638).
  • Set X_pending in get_acquisition_function in qEHVI (#662).
  • Make contextual kernel device-aware (#666).
  • Do not use an MCSampler in MaxPosteriorSampling (#701).
  • Add ability to subset outcome transforms (#711).

Performance Improvements

  • Batchify box decomposition for 2d case (#642).

Other Changes

  • Use scipy distribution in MES quantile bisect (#633).
  • Use new closure definition for GPyTorch priors (#634).
  • Allow enabling of approximate root decomposition in posterior calls (#652).
  • Support for upcoming 21201-dimensional PyTorch SobolEngine (#672, #674).
  • Refactored various MOO utilities to allow future additions (#656, #657, #658, #661).
  • Support input_transform in PairwiseGP (#632).
  • Output shape checks for t_batch_mode_transform (#577).
  • Check for NaN in gen_candidates_scipy (#688).
  • Introduce base_sample_shape property to Posterior objects (#718).

Contextual Bayesian Optimization, Input Warping, TuRBO, sampling from polytopes

08 Dec 06:42

Compatibility

  • Require PyTorch >=1.7 (#614).
  • Require GPyTorch >=1.3 (#614).

New Features

Bug fixes

  • Fix bounds of HolderTable synthetic function (#596).
  • Fix device issue in MOO tutorial (#621).

Other changes

  • Add train_inputs option to qMaxValueEntropy (#593).
  • Enable gpytorch settings to override BoTorch defaults for fast_pred_var and debug (#595).
  • Rename set_train_data_transform -> preprocess_transform (#575).
  • Modify _expand_bounds() shape checks to work with >2-dim bounds (#604).
  • Add batch_shape property to models (#588).
  • Modify qMultiFidelityKnowledgeGradient.evaluate() to work with project, expand and cost_aware_utility (#594).
  • Add list of papers using BoTorch to website docs (#617).

Maintenance Release

26 Oct 04:28

New Features

  • Add PenalizedAcquisitionFunction wrapper (#585)
  • Input transforms
    • Reversible input transform (#550)
    • Rounding input transform (#562)
    • Log input transform (#563)
  • Differentiable approximate rounding for integers (#561); see the sketch below.
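
A minimal sketch of the rounding transform with differentiable approximate rounding; the integer_indices argument name follows the current API and, like the tau value, is an assumption here:

```python
import torch
from botorch.models.transforms.input import Round

# Round dimension 1 to integers; approximate=True uses a differentiable
# (tanh-based) approximation so gradients can flow through the transform.
round_tf = Round(integer_indices=[1], approximate=True, tau=1e-2)

X = torch.tensor([[0.3, 2.7], [0.9, 4.2]])
X_rounded = round_tf(X)  # column 1 is (approximately) rounded
```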

Bug fixes

  • Fix sign error in UCB when maximize=False (a4bfacbfb2109d3b89107d171d2101e1995822bb)
  • Fix batch_range sample shape logic (#574)

Other changes

  • Better support for two-stage sampling in preference learning (0cd13d0)
  • Remove noise term in PairwiseGP and add ScaleKernel by default (#571)
  • Rename prior to task_covar_prior in MultiTaskGP and FixedNoiseMultiTaskGP
    (16573fe)
  • Support only transforming inputs on training or evaluation (#551)
  • Add equals method for InputTransform (#552)

Maintenance Release

16 Sep 01:58

New Features

  • Constrained Multi-Objective tutorial (#493)
  • Multi-fidelity Knowledge Gradient tutorial (#509)
  • Support for batch qMC sampling (#510)
  • New evaluate method for qKnowledgeGradient (#515); see the sketch below.
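
A minimal sketch of the new evaluate method on a toy model (model fitting omitted; the inner-optimization options are illustrative). Unlike forward, evaluate computes the KG value at X by explicitly optimizing the inner fantasy problem:

```python
import torch
from botorch.acquisition import qKnowledgeGradient
from botorch.models import SingleTaskGP

# Toy model (illustrative only; model fitting omitted).
train_X = torch.rand(10, 2)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

qKG = qKnowledgeGradient(model, num_fantasies=16)
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])
# evaluate() solves the inner optimization instead of relying on the
# one-shot fantasy solutions used by forward().
value = qKG.evaluate(
    torch.rand(1, 2), bounds=bounds, num_restarts=4, raw_samples=64
)
```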

Compatibility

  • Require PyTorch >=1.6 (#535)
  • Require GPyTorch >=1.2 (#535)
  • Remove deprecated botorch.gen module (#532)

Bug fixes

  • Fix bad backward-indexing of task_feature in MultiTaskGP (#485)
  • Fix bounds in constrained Branin-Currin test function (#491)
  • Fix max_hv for C2DTLZ2 and make Hypervolume always return a float (#494)
  • Fix bug in draw_sobol_samples that did not use the proper effective dimension (#505)
  • Fix constraints for q>1 in qExpectedHypervolumeImprovement (c80c4fd)
  • Only use feasible observations in partitioning for qExpectedHypervolumeImprovement
    in get_acquisition_function (#523)
  • Improved GPU compatibility for PairwiseGP (#537)

Performance Improvements

  • Reduce memory footprint in qExpectedHypervolumeImprovement (#522)
  • Add (q)ExpectedHypervolumeImprovement to nonnegative functions (for better initialization) (#496)

Other changes

  • Support batched best_f in qExpectedImprovement (#487)
  • Allow to return full tree of solutions in OneShotAcquisitionFunction (#488)
  • Added construct_inputs class method to models to programmatically construct the
    inputs to the constructor from a standardized TrainingData representation
    (#477, #482, 3621198)
  • Acquisition function constructors now accept catch-all **kwargs options
    (#478, e5b6935)
  • Use psd_safe_cholesky in qMaxValueEntropy for better numerical stability (#518)
  • Added WeightedMCMultiOutputObjective (81d91fd)
  • Add ability to specify outcomes to all multi-output objectives (#524)
  • Return optimization output in info_dict for fit_gpytorch_scipy (#534)
  • Use setuptools_scm for versioning (#539)

Multi-Objective Bayesian Optimization

06 Jul 21:31

New Features

  • Multi-Objective Acquisition Functions (#466)
    • q-Expected Hypervolume Improvement
    • q-ParEGO
    • Analytic Expected Hypervolume Improvement with auto-differentiation
  • Multi-Objective Utilities (#466); see the sketch after this list
    • Pareto Computation
    • Hypervolume Calculation
    • Box Decomposition algorithm
  • Multi-Objective Test Functions (#466)
    • Suite of synthetic test functions for multi-objective, constrained optimization
  • Multi-Objective Tutorial (#468)
  • Abstract ConstrainedBaseTestProblem (#454)
  • Add optimize_acqf_list method for sequentially and greedily optimizing one candidate
    from each provided acquisition function (d10aec9)
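
A minimal sketch of the Pareto and hypervolume utilities on a made-up set of (maximization) outcomes:

```python
import torch
from botorch.utils.multi_objective.hypervolume import Hypervolume
from botorch.utils.multi_objective.pareto import is_non_dominated

Y = torch.tensor([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [1.5, 1.5]])
pareto_Y = Y[is_non_dominated(Y)]  # drops the dominated point [1.5, 1.5]

# Hypervolume dominated by the Pareto front w.r.t. the reference point.
hv = Hypervolume(ref_point=torch.tensor([0.0, 0.0]))
volume = hv.compute(pareto_Y)  # 6.0 for this toy example
```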

Bug fixes

  • Fixed re-arranging mean in MultiTask multi-output models (#450).

Other changes

  • Move gpt_posterior_settings into models.utils (#449)
  • Allow specifications of batch dims to collapse in samplers (#457)
  • Remove outcome transform before model-fitting for sequential model fitting
    in multi-output models (#458)

Bugfix Release

14 May 18:12

Bug fixes

  • Fixed issue with broken wheel build (#444).

Other changes

  • Changed code style to use absolute imports throughout (#443).

Bugfix Release

13 May 17:08

Bug fixes

  • There was a mysterious issue with the 0.2.3 wheel on PyPI: part of the botorch/optim/utils.py file was not included, resulting in an ImportError for many central components of the code. (Interestingly, the source dist, built with the same command, did not have this issue.)
  • Preserve order in ChainedOutcomeTransform (#440).

New Features

  • Utilities for estimating the feasible volume under outcome constraints (#437).

Pairwise GP for Preference Learning, Sampling Strategies

26 Apr 18:34

Introduces a new PairwiseGP model for preference learning with pairwise preferential feedback, as well as a SamplingStrategy abstraction for generating candidates from a discrete candidate set.
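
A minimal sketch of the SamplingStrategy abstraction, using MaxPosteriorSampling (i.e., Thompson sampling) over a made-up discrete candidate set (model fitting omitted):

```python
import torch
from botorch.generation.sampling import MaxPosteriorSampling
from botorch.models import SingleTaskGP

# Toy model (illustrative only; model fitting omitted).
train_X = torch.rand(10, 2)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

X_cand = torch.rand(100, 2)  # discrete candidate set
strategy = MaxPosteriorSampling(model=model, replacement=False)
X_next = strategy(X_cand, num_samples=3)  # 3 Thompson-sampled candidates
```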

Compatibility

  • Require PyTorch >=1.5 (#423).
  • Require GPyTorch >=1.1.1 (#425).

New Features

  • Add PairwiseGP for preference learning with pairwise comparison data (#388).
  • Add SamplingStrategy abstraction for sampling-based generation strategies, including
    MaxPosteriorSampling (i.e. Thompson Sampling) and BoltzmannSampling (#218, #407).

Deprecations

  • The existing botorch.gen module is moved to botorch.generation.gen and imports
    from botorch.gen will raise a warning (an error in the next release) (#218).

Bug fixes

  • Fix & update a number of tutorials (#394, #398, #393, #399, #403).
  • Fix CUDA tests (#404).
  • Fix Sobol maxdim limitation in prune_baseline (#419).

Other changes

  • Better stopping criteria for stochastic optimization (#392).
  • Improve numerical stability of LinearTruncatedFidelityKernel (#409).
  • Allow batched best_f in qExpectedImprovement and qProbabilityOfImprovement
    (#411).
  • Introduce new logger framework (#412).
  • Faster indexing in some situations (#414).
  • More generic BaseTestProblem (9e604fe).

Require Python 3.7 and new features

09 Mar 16:58

Requires Python 3.7 and adds new features for active learning and multi-fidelity optimization, along with a number of bug fixes.

Compatibility

  • Require PyTorch >=1.4 (#379).
  • Require Python >=3.7 (#378).

New Features

  • Add qNegIntegratedPosteriorVariance for Bayesian active learning (#377).
  • Add FixedNoiseMultiFidelityGP, analogous to SingleTaskMultiFidelityGP (#386).
  • Support scalarize_posterior for m>1 and q>1 posteriors (#374).
  • Support subset_output method on multi-fidelity models (#372).
  • Add utilities for sampling from the simplex and hypersphere (#369); see the sketch below.
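
A minimal sketch of the new sampling utilities:

```python
from botorch.utils.sampling import sample_hypersphere, sample_simplex

# 5 points with d=3 nonnegative coordinates summing to 1 (the simplex) ...
W = sample_simplex(d=3, n=5, qmc=True)
# ... and 5 points on the surface of the unit sphere in R^3.
S = sample_hypersphere(d=3, n=5, qmc=True)
```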

Bug fixes

  • Fix TestLoader local test discovery (#376).
  • Fix batch-list conversion of SingleTaskMultiFidelityGP (#370).
  • Validate tensor args before checking input scaling for more
    informative error messages (#368).
  • Fix flaky qNoisyExpectedImprovement test (#362).
  • Fix test function in closed-loop tutorial (#360).
  • Fix num_output attribute in BoTorch/Ax tutorial (#355).

Other changes

  • Require output dimension in MultiTaskGP (#383).
  • Update code of conduct (#380).
  • Remove deprecated joint_optimize and sequential_optimize (#363).