
Commit 9e777fa

Balandat authored and facebook-github-bot committed
Version 0.1.4 (#282)
Summary: In addition to the version bump, excludes a test that requires gpytorch master from this release.

Pull Request resolved: #282
Reviewed By: eytan
Differential Revision: D17702906
Pulled By: Balandat
fbshipit-source-id: c47d24fd1232da5b5cd3516745ede745f38e2f1d
1 parent bfb1dfe commit 9e777fa

File tree

4 files changed: 51 additions (+), 7 deletions (-)

.conda/meta.yaml

+1
@@ -32,6 +32,7 @@ test:
     - botorch.utils
     - botorch.fit
     - botorch.gen
+    - botorch.settings
 
 about:
   home: https://botorch.org
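The recipe's `test.imports` list is a post-install smoke test: conda-build imports each listed module, so the new `botorch.settings` entry makes the build fail if that module cannot be imported. A minimal sketch of an equivalent local check (the script itself is not part of the commit; the module list mirrors the recipe excerpt above):

```python
import importlib

# Modules from the recipe's test.imports section (excerpt);
# botorch.settings is the entry added in this commit.
modules = ["botorch.utils", "botorch.fit", "botorch.gen", "botorch.settings"]

for name in modules:
    importlib.import_module(name)  # raises ImportError if anything is broken
print("all imports OK")
```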

CHANGELOG.md

+42
@@ -2,6 +2,48 @@
 
 The release log for BoTorch.
 
+## [0.1.4] - Oct 1, 2019
+
+Knowledge Gradient acquisition function (one-shot), various maintenance
+
+#### Breaking Changes
+* Require explicit output dimensions in BoTorch models (#238)
+* Make `joint_optimize` / `sequential_optimize` return acquisition function
+  values (#149) [note deprecation notice below]
+* `standardize` now works on the second to last dimension (#263)
+* Refactor synthetic test functions (#273)
+
+#### New Features
+* Add `qKnowledgeGradient` acquisition function (#272, #276)
+* Add input scaling check to standard models (#267)
+* Add `cyclic_optimize`, convergence criterion class (#269)
+* Add `settings.debug` context manager (#242)
+
+#### Deprecations
+* Consolidate `sequential_optimize` and `joint_optimize` into `optimize_acqf`
+  (#150)
+
+#### Bug fixes
+* Properly pass noise levels to GPs using a `FixedNoiseGaussianLikelihood` (#241)
+  [requires gpytorch > 0.3.5]
+* Fix q-batch dimension issue in `ConstrainedExpectedImprovement`
+  (6c067185f56d3a244c4093393b8a97388fb1c0b3)
+* Fix parameter constraint issues on GPU (#260)
+
+#### Minor changes
+* Add decorator for concatenating pending points (#240)
+* Draw independent sample from prior for each hyperparameter (#244)
+* Allow `dim > 1111` for `gen_batch_initial_conditions` (#249)
+* Allow `optimize_acqf` to use `q>1` for `AnalyticAcquisitionFunction` (#257)
+* Allow excluding parameters in fit functions (#259)
+* Track the final iteration objective value in `fit_gpytorch_scipy` (#258)
+* Error out on unexpected dims in parameter constraint generation (#270)
+* Compute acquisition values in gen_ functions w/o grad (#274)
+
+#### Tests
+* Introduce BotorchTestCase to simplify test code (#243)
+* Refactor tests to have monolithic cuda tests (#261)
+
 
 ## [0.1.3] - Aug 9, 2019
 
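To make the headline 0.1.4 items above concrete, here is a hedged sketch combining the new one-shot `qKnowledgeGradient` (#272, #276) with the consolidated `optimize_acqf` entry point (#150), which now also returns the acquisition value (#149). The toy data, bounds, and hyperparameter values are illustrative only, not taken from the release:

```python
import torch
from gpytorch.mlls import ExactMarginalLogLikelihood

from botorch.acquisition import qKnowledgeGradient
from botorch.fit import fit_gpytorch_model
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

# Toy data; per the 0.1.4 breaking change (#238), the output dimension
# is explicit, i.e. train_Y is n x 1 rather than n.
train_X = torch.rand(10, 2)
train_Y = train_X.sum(dim=-1, keepdim=True).sin()

model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_model(mll)

# New in 0.1.4: one-shot Knowledge Gradient (#272, #276).
qKG = qKnowledgeGradient(model, num_fantasies=32)

# optimize_acqf consolidates joint_optimize / sequential_optimize (#150)
# and returns the acquisition value along with the candidates (#149).
bounds = torch.stack([torch.zeros(2), torch.ones(2)])
candidates, acq_value = optimize_acqf(
    acq_function=qKG,
    bounds=bounds,
    q=2,
    num_restarts=10,
    raw_samples=128,
)
```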

botorch/__init__.py

+1 -1
@@ -17,7 +17,7 @@
 from .utils import manual_seed
 
 
-__version__ = "0.1.3"
+__version__ = "0.1.4"
 
 
 __all__ = [
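The only functional change in this file is the version string. A trivial hedged check to confirm which release is installed:

```python
import botorch

# Should print "0.1.4" after upgrading to this release.
print(botorch.__version__)
```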

test/models/test_model_list_gp_regression.py

+7 -6
@@ -155,12 +155,13 @@ def test_ModelListGP_fixed_noise(self):
         self.assertIsInstance(posterior, GPyTorchPosterior)
         self.assertIsInstance(posterior.mvn, MultitaskMultivariateNormal)
 
-        # test output_indices
-        posterior = model.posterior(
-            test_x, output_indices=[0], observation_noise=True
-        )
-        self.assertIsInstance(posterior, GPyTorchPosterior)
-        self.assertIsInstance(posterior.mvn, MultivariateNormal)
+        # TODO: Add test back in once gpytorch > 0.3.5 is released
+        # # test output_indices
+        # posterior = model.posterior(
+        #     test_x, output_indices=[0], observation_noise=True
+        # )
+        # self.assertIsInstance(posterior, GPyTorchPosterior)
+        # self.assertIsInstance(posterior.mvn, MultivariateNormal)
 
         # test condition_on_observations
         f_x = torch.rand(2, 1, **tkwargs)
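The commit disables the `output_indices` assertions by commenting them out until a gpytorch release newer than 0.3.5 is available. As an alternative pattern (not what this commit does), the block could be version-gated so it re-enables itself automatically; a sketch, assuming `packaging` is available and that `gpytorch.__version__` is an acceptable gate, with illustrative class and test names:

```python
import unittest

import gpytorch
from packaging.version import parse as parse_version

# True once a gpytorch release newer than 0.3.5 is installed.
GPYTORCH_GT_035 = parse_version(gpytorch.__version__) > parse_version("0.3.5")


class TestModelListGPFixedNoise(unittest.TestCase):
    @unittest.skipUnless(GPYTORCH_GT_035, "requires gpytorch > 0.3.5")
    def test_output_indices(self):
        # the commented-out posterior / output_indices assertions would go here
        ...
```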
