Result inconsistency with NeuralProphet #1576
Unanswered
AhmedGabal
asked this question in Q&A - forecasting best practices
Replies: 1 comment 2 replies
-
Yes, I had this issue and found that the seed-reset function in NeuralProphet did not reset the random state of all the involved libraries. I wrote my own reset function. Call it with the seed of your choice before fitting your model and it will give consistent results every time.
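A minimal sketch of such a reset function. The name `reset_random_state` and the exact set of libraries seeded are assumptions, not NeuralProphet's API; since NeuralProphet trains on PyTorch, seeding Python's `random`, NumPy, and torch covers the usual sources of nondeterminism:

```python
import random

import numpy as np


def reset_random_state(seed: int = 42) -> None:
    """Seed every RNG the training stack touches (hypothetical helper,
    not part of NeuralProphet itself)."""
    random.seed(seed)          # Python stdlib RNG
    np.random.seed(seed)       # NumPy RNG (data shuffling, etc.)
    try:
        import torch           # NeuralProphet's backend
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Optional: trade speed for bitwise-reproducible GPU kernels
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass  # CPU-only environment: Python and NumPy are still seeded


# Call before every fit; identical seeds now give identical draws
reset_random_state(123)
a = np.random.rand(3)
reset_random_state(123)
b = np.random.rand(3)
```

With the same seed, both draws (`a` and `b`) are identical, so two fits started after the same reset see the same weight initialization and data order.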
-
I have an issue where running the same model multiple times yields very different results, not even similar to each other. I believe this happens because the neural network initializes its weights with a different set of numbers on each run, so the gradient descent starts from a different point every time, leading to varying outcomes. For example, when I run the same model with identical hyperparameters three times, the results are (-5, 13, 50). The result of 13 is close to the actual value, but the other two runs are significantly worse. If I fix the random seed, every run reproduces the first result (-5), which is not accurate either; the model simply learns different things on each run.
Has anyone else experienced this inconsistency with NeuralProphet, and found a solution?