
Commit 5272700

Balandat authored and facebook-github-bot committed

Release v0.2.0 (#343)

Summary: Update changelog, bump version number.
Pull Request resolved: #343
Reviewed By: lena-kashtelyan
Differential Revision: D19200221
Pulled By: Balandat
fbshipit-source-id: 7286b99e8832459df5d86882970da77fd7410b2a

1 parent ab306fe commit 5272700
File tree

3 files changed: +54 −2 lines changed


CHANGELOG.md

+52
@@ -2,6 +2,58 @@
The release log for BoTorch.

## [0.2.0] - Dec 20, 2019

Max-value entropy acquisition function, cost-aware / multi-fidelity optimization,
subsetting models, outcome transforms.

#### Compatibility
* Require PyTorch >=1.3.1 (#313).
* Require GPyTorch >=1.0 (#342).

#### New Features
* Add cost-aware KnowledgeGradient (`qMultiFidelityKnowledgeGradient`) for
  multi-fidelity optimization (#292).
* Add `qMaxValueEntropy` and `qMultiFidelityMaxValueEntropy` max-value entropy
  search acquisition functions (#298).
* Add `subset_output` functionality to (most) models (#324).
* Add outcome transforms and input transforms (#321).
* Add `outcome_transform` kwarg to model constructors for automatic outcome
  transformation and un-transformation (#327).
* Add cost-aware utilities for cost-sensitive acquisition functions (#289).
* Add `DeterministicModel` and `DeterministicPosterior` abstractions (#288).
* Add `AffineFidelityCostModel` (f838eacb4258f570c3086d7cbd9aa3cf9ce67904).
* Add `project_to_target_fidelity` and `expand_trace_observations` utilities for
  use in multi-fidelity optimization (1ca12ac0736e39939fff650cae617680c1a16933).
#### Performance Improvements
* New `prune_baseline` option for pruning `X_baseline` in
  `qNoisyExpectedImprovement` (#287).
* Do not use approximate MLL computation for deterministic fitting (#314).
* Avoid re-evaluating the acquisition function in `gen_candidates_torch` (#319).
* Use CPU where possible in `gen_batch_initial_conditions` to avoid memory
  issues on the GPU (#323).
#### Bug fixes
* Properly register `NoiseModelAddedLossTerm` in `HeteroskedasticSingleTaskGP`
  (671c93a203b03ef03592ce322209fc5e71f23a74).
* Fix batch mode for `MultiTaskGPyTorchModel` (#316).
* Honor `propagate_grads` argument in `fantasize` of `FixedNoiseGP` (#303).
* Properly handle `diag` arg in `LinearTruncatedFidelityKernel` (#320).

#### Other changes
* Consolidate and simplify multi-fidelity models (#308).
* New license header style (#309).
* Validate shape of `best_f` in `qExpectedImprovement` (#299).
* Support specifying observation noise explicitly for all models (#256).
* Add `num_outputs` property to the `Model` API (#330).
* Validate output shape of models upon instantiating acquisition functions (#331).
#### Tests
* Silence warnings outside of explicit tests (#290).
* Enforce full sphinx docs coverage in CI (#294).

## [0.1.4] - Oct 1, 2019

Knowledge Gradient acquisition function (one-shot), various maintenance

README.md

+1 −1

@@ -51,7 +51,7 @@ Optimization simply use Ax.
  **Installation Requirements**
  - Python >= 3.6
  - PyTorch >= 1.3.1
- - gpytorch >= 0.3.5
+ - gpytorch >= 1.0
  - scipy

botorch/__init__.py

+1 −1

@@ -19,7 +19,7 @@
  from .utils import manual_seed


- __version__ = "0.1.4"
+ __version__ = "0.2.0"


  __all__ = [

0 commit comments
