
The release log for BoTorch.

## [0.2.0] - Dec 20, 2019

Max-value entropy acquisition function, cost-aware / multi-fidelity optimization,
subsetting models, outcome transforms.

#### Compatibility
* Require PyTorch >=1.3.1 (#313).
* Require GPyTorch >=1.0 (#342).

#### New Features
* Add cost-aware KnowledgeGradient (`qMultiFidelityKnowledgeGradient`) for
  multi-fidelity optimization (#292).
* Add `qMaxValueEntropy` and `qMultiFidelityMaxValueEntropy` max-value entropy
  search acquisition functions (#298).
* Add `subset_output` functionality to (most) models (#324).
* Add outcome transforms and input transforms (#321).
* Add `outcome_transform` kwarg to model constructors for automatic outcome
  transformation and un-transformation (#327).
* Add cost-aware utilities for cost-sensitive acquisition functions (#289).
* Add `DeterministicModel` and `DeterministicPosterior` abstractions (#288).
* Add `AffineFidelityCostModel` (f838eacb4258f570c3086d7cbd9aa3cf9ce67904).
* Add `project_to_target_fidelity` and `expand_trace_observations` utilities for
  use in multi-fidelity optimization (1ca12ac0736e39939fff650cae617680c1a16933).

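As a rough illustration of the idea behind the cost-aware utilities above (#289), a cost-aware acquisition divides (or otherwise discounts) a candidate's raw acquisition value by its evaluation cost, so that cheap low-fidelity evaluations can win over expensive high-fidelity ones. The sketch below is a hypothetical plain-Python helper, not BoTorch's actual API:

```python
# Hypothetical sketch of an inverse-cost-weighted utility: the acquisition
# value of each candidate is divided by its (expected) evaluation cost.
# `inverse_cost_weighted_utility` and `min_cost` are illustrative names,
# not part of BoTorch's API.

def inverse_cost_weighted_utility(acq_values, costs, min_cost=1e-2):
    # Clamp costs from below to avoid division blow-ups for near-free points.
    return [v / max(c, min_cost) for v, c in zip(acq_values, costs)]

# Two candidates with equal raw acquisition value; after cost weighting,
# the cheaper (e.g., lower-fidelity) one is preferred.
print(inverse_cost_weighted_utility([1.0, 1.0], [0.5, 2.0]))  # [2.0, 0.5]
```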
#### Performance Improvements
* New `prune_baseline` option for pruning `X_baseline` in
  `qNoisyExpectedImprovement` (#287).
* Do not use approximate MLL computation for deterministic fitting (#314).
* Avoid re-evaluating the acquisition function in `gen_candidates_torch` (#319).
* Use CPU where possible in `gen_batch_initial_conditions` to avoid memory
  issues on the GPU (#323).

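The intuition behind `prune_baseline` (#287) is that baseline points which are never the best point under any posterior sample cannot affect the noisy-EI baseline maximum, so they can be dropped to shrink the evaluation cost. A hypothetical minimal sketch of that pruning rule (not BoTorch's implementation, which works on model posteriors):

```python
# Hypothetical sketch of inferior-point pruning: keep only baseline points
# that achieve the maximum in at least one posterior sample.
# `prune_inferior_points` here is an illustrative stand-in.

def prune_inferior_points(samples):
    # samples: list of posterior sample vectors, one value per baseline point.
    n = len(samples[0])
    keep = set()
    for s in samples:
        keep.add(max(range(n), key=lambda i: s[i]))  # argmax of this sample
    return sorted(keep)

# Point 2 is best in both samples, so points 0 and 1 are pruned.
print(prune_inferior_points([[0.1, 0.3, 0.9], [0.2, 0.1, 0.8]]))  # [2]
```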
#### Bug fixes
* Properly register `NoiseModelAddedLossTerm` in `HeteroskedasticSingleTaskGP`
  (671c93a203b03ef03592ce322209fc5e71f23a74).
* Fix batch mode for `MultiTaskGPyTorchModel` (#316).
* Honor `propagate_grads` argument in `fantasize` of `FixedNoiseGP` (#303).
* Properly handle `diag` arg in `LinearTruncatedFidelityKernel` (#320).

#### Other changes
* Consolidate and simplify multi-fidelity models (#308).
* New license header style (#309).
* Validate shape of `best_f` in `qExpectedImprovement` (#299).
* Support specifying observation noise explicitly for all models (#256).
* Add `num_outputs` property to the `Model` API (#330).
* Validate output shape of models upon instantiating acquisition functions (#331).

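The output-shape validation added in #331 pairs naturally with the new `num_outputs` property (#330): a single-objective acquisition function can fail fast when handed a multi-output model instead of producing silently wrong values. A hypothetical sketch of that check (illustrative only, not BoTorch's code):

```python
# Hypothetical sketch of output-shape validation at acquisition-function
# construction time. `validate_single_output` is an illustrative helper.

def validate_single_output(num_outputs):
    if num_outputs != 1:
        raise ValueError(
            f"Expected a single-output model, got {num_outputs} outputs."
        )

validate_single_output(1)  # a single-output model passes silently
```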
#### Tests
* Silence warnings outside of explicit tests (#290).
* Enforce full sphinx docs coverage in CI (#294).

## [0.1.4] - Oct 1, 2019

Knowledge Gradient acquisition function (one-shot), various maintenance