Correct way to save a trained model? #2620
Unanswered · carschandler asked this question in Q&A · 0 replies
I've looked around a decent bit and it's possible that I've somehow missed this, but what is the correct way to save a model which is performing well at the end of a training loop, and how would I go about re-using it later? I now understand how to save replay buffer contents, but to my understanding, this doesn't save any information about the weights of the models themselves, right?
I thought that maybe using `torch.save` might be the route to go, but what do I use it on? For example, I have a policy/actor which is a `QValueActor` with a `DuelingCnnDQNet` value net inside of it. I also have a `policy_explore` which is a `Seq(policy, EGreedyModule)`. Is saving the `QValueActor` enough to get up and running again? Do I also need to save the `policy_explore`? What about the value net?
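For concreteness, here is roughly what I have in mind. This is just a sketch: the network arguments, `in_keys`, `action_space`, and file name are placeholders rather than my actual setup, and I'm assuming the torchrl modules behave like ordinary `nn.Module`s so that `state_dict()` / `load_state_dict()` apply to them.

```python
import torch
from torchrl.modules import DuelingCnnDQNet, QValueActor

# Placeholder network/actor; in my real code the CNN kwargs, in_keys, etc. differ.
value_net = DuelingCnnDQNet(out_features=4)
policy = QValueActor(value_net, in_keys=["pixels"], action_space="one_hot")

# ... training loop updates the parameters of `policy` (and therefore `value_net`) ...

# Save only the learned weights?
torch.save(policy.state_dict(), "dqn_policy.pt")

# Later: rebuild the same modules and load the weights back in?
value_net_new = DuelingCnnDQNet(out_features=4)
policy_new = QValueActor(value_net_new, in_keys=["pixels"], action_space="one_hot")
policy_new.load_state_dict(torch.load("dqn_policy.pt"))
```

Is something along these lines the intended workflow, or is there a torchrl-specific mechanism I should be using instead?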