Hello everyone!
I am currently working on solving the double-pendulum problem. After exploring the literature, I found that curriculum regularization (progressively increasing the complexity of the system during training) and sequence-to-sequence learning (dividing the time domain into small parts and training them one by one in sequence) appear to be effective techniques for tackling chaotic and high-frequency systems. To implement these strategies, I am attempting to save the weights obtained from a previous round of training and use them as initialization for a new round.
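For reference, the sequence-to-sequence loop I have in mind looks roughly like the sketch below. `split_domain` and `train_one_window` are placeholder names of my own (the real training call depends on the library); the point is only that each sub-interval is warm-started from the weights of the previous one.

```python
import numpy as np

def split_domain(t0, t1, n_windows):
    """Split [t0, t1] into n_windows consecutive sub-intervals."""
    edges = np.linspace(t0, t1, n_windows + 1)
    return list(zip(edges[:-1], edges[1:]))

def train_one_window(window, init_weights=None):
    """Placeholder for one training round on a sub-interval.

    A real implementation would build the network, load init_weights
    if given, train on `window`, and return the trained weights; here
    it returns dummy arrays just to keep the sketch runnable."""
    a, b = window
    return {"dummy": np.array([a, b])}

# Each window is warm-started from the previous window's weights.
weights = None  # None -> random initialization on the first window
for window in split_domain(0.0, 10.0, 5):
    weights = train_one_window(window, init_weights=weights)
```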
Unfortunately, I'm encountering difficulties in saving the weights after the training process. I have tried replicating some of the examples provided in the FAQ, but I have not been successful so far. I am running my code on Colab and I have included the relevant code below.
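As a fallback while the library-specific saving keeps failing, one thing that always works on Colab is to extract the trained parameters as plain arrays and save them with NumPy. The dictionary keys and the file name below are just placeholders; in practice the arrays would be pulled from the trained model.

```python
import os
import tempfile
import numpy as np

# Stand-in for the trained parameters of a small network
# (in practice, extract these from your trained model).
trained = {
    "W1": np.random.randn(2, 16),
    "b1": np.zeros(16),
    "W2": np.random.randn(16, 2),
    "b2": np.zeros(2),
}

# Save every array under its name in a single .npz file.
path = os.path.join(tempfile.gettempdir(), "pendulum_weights.npz")
np.savez(path, **trained)

# Later (even in a fresh Colab session): load the arrays back.
restored = dict(np.load(path))
```

On Colab, saving the file to a mounted Drive folder instead of a temporary directory keeps it available across sessions.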
Additionally, I would appreciate guidance on how to initialize a new network with the saved weights and train it with different parameters in the ODE or on a different time domain.
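On the second point, my understanding of the pattern is: build the new network with the same architecture, overwrite its initial parameters with the saved ones, then run the optimizer on the new time domain or ODE parameters. A toy, framework-free illustration with a one-parameter model fit by gradient descent (all names here are my own):

```python
import numpy as np

def train(t, y, w_init=None, lr=0.1, steps=200):
    """Fit y ~ w * t by gradient descent on the squared error.

    w_init warm-starts the parameter, mimicking initializing a new
    network from previously saved weights."""
    w = 0.0 if w_init is None else float(w_init)
    for _ in range(steps):
        grad = np.mean(2.0 * (w * t - y) * t)  # d(loss)/dw
        w -= lr * grad
    return w

# First round: train on the time window [0, 1].
t1 = np.linspace(0.0, 1.0, 50)
w_round1 = train(t1, 3.0 * t1)

# New round: same model, new time window [1, 2], warm-started
# from the weights of the first round.
t2 = np.linspace(1.0, 2.0, 50)
w_round2 = train(t2, 3.0 * t2, w_init=w_round1)
```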