
[GENERAL SUPPORT]: Running into user warning exception stating an objective was not 'observed' #2794

Open
@allen-yun

Description

Question

Hello,
I'm trying to run a MOBO experiment using the Service API and have run into an issue. Everything seems to be running properly, but warnings are being thrown, which makes me wonder where the problem is coming from.

The specific message is:

[INFO 09-27 00:05:42] ax.modelbridge.transforms.standardize_y: Outcome x is constant, within tolerance.
[INFO 09-27 00:05:42] ax.modelbridge.transforms.standardize_y: Outcome y is constant, within tolerance.
C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\ax\modelbridge\cross_validation.py:462: UserWarning:

Encountered exception in computing model fit quality: Outcome `x` was not observed.
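
In case it's relevant, here's how I could escalate that specific warning into an exception so Python prints a full traceback to its origin (a minimal standard-library sketch, nothing Ax-specific):

import warnings

# Turn the model-fit-quality warning into an exception so the traceback
# points at the exact call inside ax.modelbridge.cross_validation.
warnings.filterwarnings(
    "error",
    message="Encountered exception in computing model fit quality",
    category=UserWarning,
)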

I've attached my code below for further context. It runs in a Jupyter Notebook: the user manually enters an initial reference trial, and the remaining ten trials are completed through a user dialog. The surrogate is a SingleTaskGP and the acquisition function is qNEHVI (not sure if I've implemented it correctly).

Please provide any relevant code snippet if applicable.

# Imports (paths for recent Ax / BoTorch releases; they may differ slightly by version)
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.models.torch.botorch_modular.surrogate import Surrogate
from ax.service.ax_client import AxClient, ObjectiveProperties
from botorch.acquisition.multi_objective.logei import qLogNoisyExpectedHypervolumeImprovement
from botorch.models.gp_regression import SingleTaskGP

gs = GenerationStrategy(
    steps=[
        # Bayesian optimization step using the custom surrogate and acquisition function
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,  # No limit on how many trials this step can produce
            model_kwargs={
                "surrogate": Surrogate(SingleTaskGP),
                "botorch_acqf_class": qLogNoisyExpectedHypervolumeImprovement,
            },
        ),
    ]
)

ax_client = AxClient(generation_strategy=gs)
ax_client.create_experiment(
    name="lc_optimization",
    parameters=[
        {
            "name": "a",
            "type": "range",
            "bounds": [100, 400],
        },
        {
            "name": "b",
            "type": "range",
            "bounds": [0.3, 500.00],
        },
        {
            "name": "c",
            "type": "range",
            "bounds": [0.0, 10.0],
        },
        {
            "name": "d",
            "type": "range",
            "bounds": [0, 5],
        },
        {
            "name": "e",
            "type": "range",
            "bounds": [0, 5],
        },
        {
            "name": "f",
            "type": "range",
            "bounds": [0.0, 3.0],
        },
        {
            "name": "g",
            "type": "range",
            "bounds": [0.0, 10.0],
        },
    ],
    objectives={
        # `threshold` arguments are optional
        "x": ObjectiveProperties(minimize=True, threshold=1.0),
        "y": ObjectiveProperties(minimize=True, threshold=1.0),
    },
    overwrite_existing_experiment=True,
    is_test=True,
)

initial_trial_parameters = {
    "a": 200,
    "b": 250.0,
    "c": 0.0,
    "d": 1,
    "e": 1,
    "f": 1.5,
    "g": 1.0,
}
initial_trial_results = {"x": (1.5, None), "y": (2.5, None)}

# Attach the manually entered reference trial and use the returned trial index
# instead of hard-coding 0.
_, trial_index = ax_client.attach_trial(parameters=initial_trial_parameters)
ax_client.complete_trial(trial_index=trial_index, raw_data=initial_trial_results)

for i in range(10):  # Number of trials
    parameters, trial_index = ax_client.get_next_trial()
    print(f"Trial {i+1}: {parameters}")

    # Pause for user input via dialog (user-defined helper functions, not shown)
    trial_x = get_user_inputX("Please enter resulting x")
    trial_y = get_user_inputY("Please enter resulting y")

    # Record the user-provided results for this trial
    ax_client.complete_trial(trial_index=trial_index, raw_data={"x": (trial_x, None), "y": (trial_y, None)})
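
For debugging, this is the kind of check I can run afterwards to see which outcomes the experiment has actually recorded (a minimal sketch, assuming the ax_client defined above):

# List the metrics Ax has recorded for the experiment so far.
data_df = ax_client.experiment.fetch_data().df
print(data_df["metric_name"].unique())  # should include 'x' and 'y'
print(data_df[["trial_index", "metric_name", "mean", "sem"]])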

Code of Conduct

  • I agree to follow Ax's Code of Conduct

Metadata

Labels: question (Further information is requested)