
Error in MFPBenchObjectiveFunction #217

@LukasFehring

Description

In the constructor of MFPBenchObjectiveFunction, the attribute self.metrics is assigned the metric argument as-is:

class MFPBenchObjectiveFunction(ObjectiveFunction):
    """MF-Prior-Bench ObjectiveFunction class."""

    def __init__(
        self,
        benchmark_name: str,
        metric: str | list[str],
        benchmark: str | None = None,
        budget_type: str | None = None,
        prior: str | Path | C | Mapping[str, Any] | None = None,
        perturb_prior: float | None = None,
        benchmark_kwargs: dict | None = None,
        loggers: list[AbstractLogger] | None = None,
    ) -> None:
        """Initialize a MF-Prior-Bench objective function."""
        super().__init__(loggers)

        self.benchmark_name = benchmark_name
        self.budget_type = budget_type
        self.benchmark = benchmark
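        # note: a bare string metric is stored unchanged here and later iterated like a list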
        self.metrics = metric
        ...

If a string is passed as metric, e.g. "valid_error_rate", it is later iterated as if it were a list of metrics, yielding the single characters ["v", "a", ...] and causing errors in the evaluation.
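For illustration, iterating over a Python string yields its characters, which is exactly what the failing line ret = [result[metric] for metric in self.metrics] ends up doing:

metrics = "valid_error_rate"
# Iterating a string yields single characters, not metric names:
print([m for m in metrics][:3])  # ['v', 'a', 'l']
print(metrics[0])                # 'v' -- the key in the KeyError below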

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[7], line 8
      5 baseline = obj.configspace.get_default_configuration()
      7 print("Create explanation task and instantiate HyperSHAP")
----> 8 hypershap = HyperSHAP(explanation_task_from_carps_objective_function(obj))
     10 print("Tunability")
     11 hypershap.tunability(baseline)

Cell In[2], line 6
      4 def explanation_task_from_carps_objective_function(objective_function: ObjectiveFunction) -> ExplanationTask:
      5     wrapper = ObjectiveFunctionWrapper(objective_function)
----> 6     return ExplanationTask.from_function(
      7         config_space=objective_function.configspace, function=wrapper.evaluate, n_samples=10_000
      8     )

File ~/Desktop/MergePriorLocations/.venv/lib/python3.13/site-packages/hypershap/task.py:149, in ExplanationTask.from_function(config_space, function, n_samples, base_model)
    136 """Create an ExplanationTask from a function that evaluates configurations.
    137 
    138 Args:
   (...)    146 
    147 """
    148 samples: list[Configuration] = config_space.sample_configuration(n_samples)
--> 149 values: list[float] = [function(config) for config in samples]
    150 data: list[tuple[Configuration, float]] = list(zip(samples, values, strict=False))
...
File ~/Desktop/MergePriorLocations/.venv/lib/python3.13/site-packages/carps/objective_functions/mfpbench.py:139
--> 139 ret = [result[metric] for metric in self.metrics]
    140 if len(ret) == 1:
    141     ret = ret[0]

KeyError: 'v'
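A minimal fix sketch (my assumption, not a confirmed patch): normalize the argument once in __init__ so that downstream iteration always sees metric names rather than characters:

# Hypothetical fix inside MFPBenchObjectiveFunction.__init__:
# wrap a bare string in a list before storing it.
self.metrics = [metric] if isinstance(metric, str) else list(metric)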
