
Optimizing weights #28

@gian1312

Description


Sorry to bother you. This is not an issue, but rather a matter of understanding on my part.

First of all, thanks a lot for the example; great job. I am trying to derive my own strategy based on it.

Before basing my way forward on a false understanding of the strategy, I would like to ask the following question:

The weights w0-w8 are hyperoptable (at least they appear to be). If I am not mistaken, they are also used to derive the target score.
Ignoring the fact that the sample parameters don't sum to one (only the default values do), I am wondering what this implies for the strategy.
Let's assume we want to hyperopt the parameters.

  • The model is trained with the sample parameters given in the strategy.
  • In a second step, hyperopt tries to optimize those same parameters, obviously without retraining the model.
    -> This is unlikely to work.
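To make the concern concrete, here is a minimal, framework-free sketch of what I mean. The weight names, signal values, and the scoring function are hypothetical, not taken from the example strategy; the point is only that raw hyperopt-proposed weights need not sum to one, so the target score can be computed from normalized weights instead:

```python
def target_score(signals, weights):
    """Combine per-indicator signals into a single target score.

    Weights are normalized so they always sum to one, regardless of the
    raw values hyperopt proposes (hypothetical illustration, not the
    example strategy's actual scoring code)."""
    total = sum(weights)
    normalized = [w / total for w in weights]
    return sum(w * s for w, s in zip(normalized, signals))

# Hypothetical weights as hyperopt might propose them (sum to 1.2, not 1.0).
weights = [0.5, 0.3, 0.4]
signals = [1.0, -1.0, 0.5]
score = target_score(signals, weights)
```

If the model was trained on targets built from one set of weights, re-scoring with different weights after the fact would change the target definition without retraining, which is exactly the mismatch described above.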

My question is now the following: did I understand this correctly, in particular how the target score used to train the model is generated, or did my train of thought derail completely?

Best regards and thanks a lot.
