
Conversation

@AnushaChattoHeidelberg

What does this implement/fix?

Adds the ability to use the Asynchronous Successive Halving Algorithm (ASHA).

Using random or grid search as the initializer, one can run the ASHA algorithm for hyperparameter optimization.

Additional information

  • To import ASHA, use
    from seml.utils import asha

An example experiment is included in the folder examples/asha_example.

@AnushaChattoHeidelberg AnushaChattoHeidelberg changed the title adding asha to utils and added experiment adding asha to utils including experiment example Nov 25, 2025
@AnushaChattoHeidelberg AnushaChattoHeidelberg changed the title adding asha to utils including experiment example Implementing The Asynchronous Successive Halving Algorithm (including experiment example) Nov 25, 2025
@AnushaChatto

Here is some more general information about the hyperparameter optimizer:

Example script

To get a feeling for how the algorithm works, one can run the included experiment in examples/asha_example.

  • main.py is the main experiment file
  • experiment_1.yaml contains the configuration
  • model.py is a sample model that matches main.py

ASHA is configured in the .yaml file as follows, in addition to the random search parameters:
fixed:
  asha_collection_name: asha_import_test
  num_stages: 10
  asha:
    eta: 3                  # elimination fraction
    min_r: 1                # first elimination rung
    max_r: 20               # last elimination rung
    metric_increases: True  # whether the evaluated metric is better when it increases (e.g. accuracy: True) or decreases (e.g. loss: False)
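To make the role of these parameters concrete, here is a minimal, self-contained sketch of the standard ASHA rung schedule and promotion rule, reusing the parameter names from the YAML above (eta, min_r, max_r, metric_increases). This only illustrates the algorithm's logic and is not the implementation added in this PR; the data structures are assumptions made for the example.

```python
def rung_levels(eta: int, min_r: int, max_r: int) -> list[int]:
    """Resource levels ("rungs") at which configurations are evaluated and compared."""
    levels, r = [], min_r
    while r < max_r:
        levels.append(r)
        r *= eta
    levels.append(max_r)
    return levels


def should_promote(metric: float, rung_results: list[float],
                   eta: int, metric_increases: bool) -> bool:
    """Promote a configuration if its metric is among the top 1/eta at its current rung."""
    ranked = sorted(rung_results + [metric], reverse=metric_increases)
    cutoff = max(1, len(ranked) // eta)  # number of configurations that advance
    return metric in ranked[:cutoff]


# With the example config (eta=3, min_r=1, max_r=20) the rungs are [1, 3, 9, 20];
# set metric_increases=True for accuracy-like metrics and False for loss-like metrics.
print(rung_levels(eta=3, min_r=1, max_r=20))  # -> [1, 3, 9, 20]
```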

Results of another experiment done during development: [screenshot]

To sum up, this is how the architecture currently operates: [flowchart: hyperparameter simulation / system architecture]

Things to do / open questions:

  • Is the way in which ASHA is currently integrated compatible with the philosophy of SEML or should it be done in a different way?
  • There is currently no direct way to see the status of an ASHA run when using seml ... status; this still needs to be added.
  • Intermediate metrics (as per the system architecture) are currently stored in a separate MongoDB collection, which requires specifying a distinct collection name in the experiment.yaml configuration. However, this collection is not removed when running seml experiment delete. What is the best approach to handle this? Should the intermediate metrics be integrated into metricsdb with an experiment-specific tag, or into the seml collection itself? (A sketch of the current storage pattern follows this list.)
  • From a maintainer perspective: What would need to be done to make this extension to SEML easily maintainable and fit well into the current project structure?
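Regarding the intermediate-metrics question above, here is a minimal sketch of the kind of storage pattern described: a separate MongoDB collection, written to once per stage, using the collection name from the example config. The document layout (experiment id, stage index, metric value) is an assumption made for illustration and may differ from what this PR actually stores.

```python
from pymongo import MongoClient

client = MongoClient("localhost", 27017)          # connection details are placeholders
collection = client["seml"]["asha_import_test"]   # collection name taken from the example config


def report_stage_metric(exp_id: int, stage: int, value: float) -> None:
    """Store one intermediate metric so the ASHA scheduler can rank configurations."""
    collection.insert_one({"exp_id": exp_id, "stage": stage, "metric": value})


def best_metrics_at_stage(stage: int, metric_increases: bool = True, top_k: int = 5):
    """Fetch the best-performing configurations recorded at a given stage."""
    direction = -1 if metric_increases else 1     # -1 sorts descending (higher is better)
    return list(collection.find({"stage": stage}).sort("metric", direction).limit(top_k))
```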
