Runner should only need to know about outcomes, not objectives vs. constraints (facebook#2963)
Summary:
Context: In theory, a `BenchmarkRunner` should not have to know which metrics are objectives and which are constraints, and a test function should not have to be aware of that either; they just generate data. A `BenchmarkProblem` should store knowledge of objectives and constraints only on its `OptimizationConfig`, so that different `OptimizationConfig`s can be used without changing the runner or the test function.
For historical reasons, runners track objectives and constraints separately and add noise to them separately, mimicking how BoTorch test functions handle noise. However, we can and should now isolate the quirks of BoTorch test functions to `BoTorchTestProblem`.
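As a rough illustration of the target design, here is a minimal sketch; it does not use the actual Ax classes, and the problem class, parameter names, and outcomes below are hypothetical. The point is that the test problem only generates data, and the objective/constraint split lives entirely on the `OptimizationConfig`:

```python
from dataclasses import dataclass

import torch


@dataclass
class BoothTestProblem:
    """Hypothetical stand-in for a `ParamBasedTestProblem` subclass.

    It only generates data: `evaluate_true` returns every outcome (here one
    objective value and one constraint slack) in a single tensor, with no
    knowledge of which element plays which role.
    """

    def evaluate_true(self, params: dict[str, float]) -> torch.Tensor:
        x, y = params["x"], params["y"]
        objective = (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2
        constraint_slack = 10.0 - (x**2 + y**2)
        return torch.tensor([objective, constraint_slack])


# Which outcome is the objective and which is the constraint is decided
# elsewhere (on the `OptimizationConfig`), not by the test problem or runner.
print(BoothTestProblem().evaluate_true({"x": 1.0, "y": 3.0}))
```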
This diff:
* Updates `ParamBasedTestProblem.evaluate_true` to return all outcomes, not just objectives, and gets rid of `ParamBasedTestProblem.evaluate_slack_true`, which was only used for constraints
* Removes `num_objectives` from `ParamBasedTestProblem`, leaving `ParamBasedTestProblem` with nothing but an `evaluate_true` method
* Removes the argument `constraint_noise_std` from `create_problem_from_botorch` and from `ParamBasedTestProblemRunner`, in favor of just using `noise_std` (see the sketch after this list)
* Updates argument validation
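For example, after this change a problem with outcome constraints would be constructed with a single `noise_std` for all outcomes. This is a sketch only; the import path and the arguments other than `noise_std` are assumptions based on typical usage, not the exact signature:

```python
from ax.benchmark.benchmark_problem import create_problem_from_botorch  # assumed path
from botorch.test_functions.synthetic import ConstrainedHartmann

# A single `noise_std` now covers every outcome; there is no separate
# `constraint_noise_std`. Argument names other than `noise_std` are
# illustrative assumptions.
problem = create_problem_from_botorch(
    test_problem_class=ConstrainedHartmann,
    test_problem_kwargs={"dim": 6},
    noise_std=0.1,
    num_trials=30,
)
```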
Tangentially related changes:
* For simplicity, makes `get_noise_stds` always return a dict
* Stops allowing `noise_std` to be `None` and defaults it to zero (previously it was eventually set to zero whenever it was `None`); see the sketch below
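A minimal sketch of the simplified noise handling, assuming (hypothetically) that the runner normalizes `noise_std` into a per-outcome dict; the helper below is illustrative, not the actual implementation:

```python
from typing import Union


def get_noise_stds(
    outcome_names: list[str],
    noise_std: Union[float, dict[str, float]] = 0.0,
) -> dict[str, float]:
    """Always return a dict mapping outcome name to noise level.

    A bare float (defaulting to 0.0, never None) is broadcast to every
    outcome; a dict is validated and passed through.
    """
    if isinstance(noise_std, dict):
        missing = set(outcome_names) - set(noise_std)
        if missing:
            raise ValueError(f"Missing noise_std for outcomes: {missing}")
        return dict(noise_std)
    return {name: float(noise_std) for name in outcome_names}


print(get_noise_stds(["objective", "constraint_slack"], 0.1))
# {'objective': 0.1, 'constraint_slack': 0.1}
```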
Differential Revision: D64919207