MO-SMAC merge #1222
base: development
Conversation
This is a safety measure. Normally, every time we update the runhistory, the objective bounds are updated so that the value to normalize falls inside the bounds.
Created a helper method to create a set with preserved order from a list
Previously: random scalarization of MO costs, because ParEGO was the only MO algorithm. Now: a separate function which can be overridden.
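Such an order-preserving "set" helper (name hypothetical, not necessarily the one added in this PR) can be a one-liner in modern Python, since dicts preserve insertion order:

```python
def ordered_unique(items):
    """Return the unique elements of `items`, preserving first-seen order."""
    # dict.fromkeys keeps the first occurrence of each key (Python 3.7+)
    return list(dict.fromkeys(items))
```

Unlike `set(items)`, this keeps the original ordering, which matters when the elements feed into a deterministic procedure such as intensification.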
Also, reset obtain to the kwargs (no magic numbers). Better debug message.
Updates incumbents of runhistory automatically if updated
Conflicts: smac/acquisition/maximizer/local_search.py
…e retraining and forces the maximization procedure to run immediately to time the computation cost.
Early termination of unfruitful configurations.
…e intensifier class.
…eved from the RunHistoryEncoder (which could mean they are not normalized). Therefore the predictions are not either. Hence, we need to normalize the predictions of the objectives here before computing the hypervolumes.
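A min-max normalization against tracked objective bounds could look like the following sketch (function name and bounds format are illustrative, not the PR's actual code):

```python
import numpy as np

def normalize_objectives(predictions, bounds):
    """Min-max normalize each objective column to [0, 1].

    predictions: (n_configs, n_objectives) array of predicted costs.
    bounds: list of (lower, upper) tuples, one per objective.
    """
    predictions = np.asarray(predictions, dtype=float)
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    spans = np.where(highs > lows, highs - lows, 1.0)  # avoid division by zero
    return (predictions - lows) / spans
```

After this step all objectives live on a comparable scale, so hypervolumes computed from the predictions are not dominated by whichever objective happens to have the largest raw range.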
…y play around with different implementations.
Conflicts: smac/acquisition/maximizer/local_search.py, smac/intensifier/abstract_intensifier.py
Conflicts: README.md, smac/intensifier/abstract_intensifier.py, smac/utils/multi_objective.py
rename file to hypervolume
@@ -153,3 +156,62 @@ def sort_by_crowding_distance(
    config_with_crowding = sorted(config_with_crowding, key=lambda x: x[1], reverse=True)

    return [c for c, _ in config_with_crowding]


def sort_by_hypervolume_contribution(
None of these functions seem to be implemented yet.
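For reference, sorting by exclusive hypervolume contribution can be sketched for the two-objective minimization case as follows (a standalone illustration, not the PR's implementation; it assumes every point is weakly better than the reference point in both objectives):

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Area dominated by a set of 2-d cost points (minimization) w.r.t. ref."""
    if len(points) == 0:
        return 0.0
    pts = np.asarray(points, dtype=float)
    pts = pts[np.argsort(pts[:, 0])]          # sweep along the first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                        # only non-dominated steps add area
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def sort_by_hypervolume_contribution(points, ref):
    """Indices of points, largest exclusive hypervolume contribution first."""
    pts = list(points)
    total = hypervolume_2d(pts, ref)
    # Contribution of point i = total HV minus HV without point i
    contribs = [total - hypervolume_2d(pts[:i] + pts[i + 1:], ref)
                for i in range(len(pts))]
    return sorted(range(len(pts)), key=lambda i: contribs[i], reverse=True)
```

Cutting incumbents by dropping the smallest contributor first keeps the subset that preserves the most hypervolume, which is the usual motivation for this sort over crowding distance.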
In general:
- add docstrings for classes and private functions
- abstract the intermediate update and something else
- check all TODOs
- fix tests
- write new tests, the coverage should be higher than before
README.md
We are excited to introduce the new major release and look forward to developing new features on the new code base.
We hope you enjoy this new user experience as much as we do. 🚀

MO-SMAC is implemented directly into SMAC3. This repository is forked from the [SMAC3 repository](https://github.com/automl/SMAC3) and therefore contains references and copyright information to those authors.
- update README
objectives="accuracy",
# min_budget=1,   # Train the MLP using a hyperparameter configuration for at least 1 epoch
# max_budget=25,  # Train the MLP using a hyperparameter configuration for at most 25 epochs
n_workers=4,
- test example
Works
@@ -123,7 +118,7 @@ def _maximize(
        # Sort according to acq value
        configs_acq.sort(reverse=True, key=lambda x: x[0])
        for a, inc in configs_acq:
-           inc.origin = "Acquisition Function Maximizer: Local Search"
+           inc.origin = "Local Search"
- change back to AF Max --> make consistent for all (TODO @benjamc )
Fixed this already to make the pytests work again
@@ -115,7 +116,12 @@ def get_intensifier(
        max_incumbents : int, defaults to 10
            How many incumbents to keep track of in the case of multi-objective.
        """
-       return Intensifier(
+       class NewIntensifier(intermediate_decision.NewCostDominatesOldCost,
- rename intensifier
LekkerMOACIntensifier
Combine with abstraction of the intensifier
        return init_points


    def _create_sort_keys(self, costs: np.array) -> list[list[float]]:
What impact does the way of sorting have? Any intuition?
This is to get points to perform the local search with, based on earlier runs, so it can make a difference. This code was not implemented by me, by the way; it was probably only moved.
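For context, the non-dominated ranks that such sort keys are typically built from can be computed as in this standalone sketch (illustrative only, not the moved code):

```python
import numpy as np

def nondominated_rank(costs):
    """Rank each cost vector by Pareto front index (0 = non-dominated), minimization."""
    costs = np.asarray(costs, dtype=float)
    n = len(costs)
    ranks = np.full(n, -1)
    remaining = set(range(n))
    front = 0
    while remaining:
        # A point is on the current front if no other remaining point dominates it
        current = [
            i for i in remaining
            if not any(
                np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i])
                for j in remaining if j != i
            )
        ]
        for i in current:
            ranks[i] = front
        remaining -= set(current)
        front += 1
    return ranks.tolist()
```

Sorting start points by `(rank, tie_breaker)` prefers configurations near the current Pareto front, which is one plausible reason the sorting strategy affects local-search quality.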
# # TODO adjust
# raise NotImplementedError


def _cut_incumbents(self, incumbent_ids: list[int], all_incumbent_isb_keys: list[list[InstanceSeedBudgetKey]]) -> list[int]:
- add docstring
# if len(previous_incumbents) == len(new_incumbents):
#     if previous_incumbents == new_incumbents:
#         # No changes in the incumbents
#         self._remove_rejected_config(config_id)  # This means that the challenger is not rejected!!
- remove comments
@@ -612,20 +634,279 @@ def update_incumbents(self, config: Configuration) -> None:
        # Cut incumbents: We only want to keep a specific number of incumbents
        # We use the crowding distance for that
        if len(new_incumbents) > self._max_incumbents:
            new_incumbents = sort_by_crowding_distance(rh, new_incumbents, all_incumbent_isb_keys)
`sort_by_crowding_distance` got replaced by what?
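For reference, the NSGA-II crowding-distance sort used here can be sketched as follows (a standalone illustration; the PR's `sort_by_crowding_distance` additionally takes the runhistory and isb keys):

```python
import numpy as np

def crowding_distance_order(costs):
    """Indices of cost vectors sorted by crowding distance, most isolated first."""
    costs = np.asarray(costs, dtype=float)
    n, m = costs.shape
    distance = np.zeros(n)
    for k in range(m):
        order = np.argsort(costs[:, k])
        # Boundary points get infinite distance so they are always kept
        distance[order[0]] = distance[order[-1]] = np.inf
        span = costs[order[-1], k] - costs[order[0], k]
        if span == 0:
            continue
        for idx in range(1, n - 1):
            # Distance between the two neighbors along objective k, normalized
            distance[order[idx]] += (costs[order[idx + 1], k]
                                     - costs[order[idx - 1], k]) / span
    return sorted(range(n), key=lambda i: distance[i], reverse=True)
```

Truncating this ordering to `max_incumbents` keeps extreme points and spreads the rest, which is what the cut step above relies on.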
elif len(previous_incumbents) < len(new_incumbents):
    # Config becomes a new incumbent; nothing is rejected in this case
-   self._remove_rejected_config(config_id)
+   self._remove_rejected_config(config)
We are not working on ids? Why?
Several pytests fail. All of them yield an …
Extra note: After merge add additional functions such as separate surrogate models and runhistories (log) for the different objectives.
def _create_sort_keys(self, costs: np.array) -> list[list[float]]:
    """Non-Dominated Sorting of Costs

    In case of the predictive model returning the prediction for more than one objective per configuration
Update text to comply with workings of function
        config_hash = get_config_hash(config)

        # Do not compare very early in the process
        if len(config_isb_keys) < 4:
Chosen empirically. But the mixin is likely to override this function anyway.
        isb_keys = self.get_incumbent_instance_seed_budget_keys(compare=True)

        n_samples = 1000
        if len(isb_keys) < 7:  # When there are only a limited number of trials available, we run all combinations
I believe 7 is the minimum number you need to have at least 1000 distinct samples.
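That arithmetic checks out if the samples are distinct orderings (permutations) of the isb keys, which is an assumption here: 6! = 720 < 1000 while 7! = 5040, so 7 is the smallest count giving at least 1000 distinct permutations. A quick check:

```python
import math

# Smallest n with n! >= 1000 distinct permutations
n = 1
while math.factorial(n) < 1000:
    n += 1
print(n, math.factorial(n))  # -> 7 5040
```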
Multi-objective SMAC as described in https://doi.org/10.1162/evco_a_00371