Description
If the generation strategy uses MBM with a single-objective acquisition function on an MOO problem, the outputs are silently summed together in the acquisition function using a `ScalarizedPosteriorTransform`.
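To illustrate the effect, here is a minimal plain-Python sketch (not the actual BoTorch class) of what a `ScalarizedPosteriorTransform` with weights `[1., 1.]` does to two-objective predictions: it collapses them into a single weighted sum, so the optimizer sees one objective.

```python
def scalarize(outputs, weights):
    """Weighted sum over the outcome dimension, mimicking the transform."""
    return [sum(w * y for w, y in zip(weights, ys)) for ys in outputs]

# Two candidate points, each with predicted means for objectives "a" and "b".
means = [[0.25, 0.75], [0.5, 0.25]]
print(scalarize(means, [1.0, 1.0]))  # [1.0, 0.75] -- the objectives are just added
```

With weights of 1.0 for every objective, the "scalarization" is a plain sum, which has no meaningful interpretation for an MOO problem unless the user chose those weights deliberately.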
Discovered while investigating #2514
Repro:
Notebook for Meta employees: N5489742
Set up the problem using `AxClient`:
```python
import random

from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient, ObjectiveProperties
from botorch.acquisition.monte_carlo import qNoisyExpectedImprovement

generation_strategy = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.SOBOL,
            num_trials=2,
            min_trials_observed=1,
        ),
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            model_kwargs={
                "botorch_acqf_class": qNoisyExpectedImprovement,
            },
        ),
    ]
)

ax_client = AxClient(generation_strategy=generation_strategy)
ax_client.create_experiment(
    name="test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
    ],
    objectives={
        "a": ObjectiveProperties(
            minimize=False,
        ),
        "b": ObjectiveProperties(
            minimize=False,
        ),
    },
)


def evaluate(parameters):
    # Dummy metrics: (mean, SEM) for each objective.
    return {"a": (random.random(), 0.0), "b": (random.random(), 0.0)}


for i in range(5):
    parameterization, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(
        trial_index=trial_index, raw_data=evaluate(parameterization)
    )
```
This runs fine and generates candidates.
Investigate the arguments passed to the acquisition function:
```python
from unittest import mock

with mock.patch.object(
    qNoisyExpectedImprovement, "__init__", side_effect=Exception
) as mock_acqf:
    parameterization, trial_index = ax_client.get_next_trial()
```
This raises an exception. Ignore it and inspect the captured kwargs:
```python
mock_acqf.call_args.kwargs["posterior_transform"]
```
This is a `ScalarizedPosteriorTransform` with `weights=tensor([1., 1.], dtype=torch.float64)`.
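The mock-based introspection trick above is handy in general; here is a toy, Ax-free version showing how patching `__init__` with `side_effect=Exception` both aborts construction and records the kwargs the caller passed:

```python
from unittest import mock


class Acqf:
    """Stand-in for an acquisition function class."""

    def __init__(self, **kwargs):
        pass


def build():
    # Stand-in for the library code that constructs the acquisition function.
    Acqf(posterior_transform="scalarized")


with mock.patch.object(Acqf, "__init__", side_effect=Exception) as mock_init:
    try:
        build()
    except Exception:
        pass  # Expected: the patched __init__ raises immediately.

# The mock recorded the call, so the kwargs are available for inspection.
print(mock_init.call_args.kwargs["posterior_transform"])  # scalarized
```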
We can check the optimization config to verify that this is not an experiment-setup issue:
```python
ax_client.experiment.optimization_config
# MultiObjectiveOptimizationConfig(objective=MultiObjective(objectives=[Objective(metric_name="a", minimize=False), Objective(metric_name="b", minimize=False)]), outcome_constraints=[], objective_thresholds=[])
```
Expected behavior
We can't do MOO with a single-objective acquisition function, so the outputs should not be silently scalarized. Instead, this should raise an informative error.
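A hypothetical sketch of the kind of validation that could run before the acquisition function is constructed (function name and the name-based MOO check are illustrative assumptions, not the actual Ax API):

```python
def check_acqf_compatibility(acqf_class_name: str, objective_names: list) -> None:
    """Reject single-objective acquisition classes on MOO problems
    instead of silently scalarizing the outputs (illustrative sketch)."""
    multi_objective = len(objective_names) > 1
    # Assumed convention for this sketch: multi-objective acquisition classes
    # advertise it in their name (e.g. qNoisyExpectedHypervolumeImprovement).
    supports_moo = "Hypervolume" in acqf_class_name
    if multi_objective and not supports_moo:
        raise ValueError(
            f"{acqf_class_name} is a single-objective acquisition function, "
            f"but the experiment has {len(objective_names)} objectives "
            f"({', '.join(objective_names)}). Use a multi-objective "
            "acquisition function, or scalarize the objectives explicitly."
        )


check_acqf_compatibility("qNoisyExpectedImprovement", ["a"])  # OK: single objective
```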