Replies: 1 comment 2 replies
Try deleting (or just moving) the most recent partially completed checkpoint file from the checkpoint directory, then rerun training.
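If it helps, here is a minimal sketch of that step in Python. The checkpoint directory path and the .bak backup naming are assumptions rather than anything F5-TTS prescribes, so point ckpt_dir at wherever your run actually writes its .pt files.

# Minimal sketch: set aside the newest (possibly corrupt) checkpoint before rerunning.
# The directory below is a placeholder -- use your own run's checkpoint folder.
from pathlib import Path
import shutil

ckpt_dir = Path("ckpts/my_project")  # hypothetical path, adjust to your setup
checkpoints = sorted(ckpt_dir.glob("*.pt"), key=lambda p: p.stat().st_mtime)

if checkpoints:
    latest = checkpoints[-1]
    backup = latest.with_name(latest.name + ".bak")
    shutil.move(str(latest), str(backup))  # keep a copy rather than deleting outright
    print(f"Moved {latest.name} -> {backup.name}; rerun training to resume from the previous checkpoint.")
else:
    print(f"No .pt checkpoints found in {ckpt_dir}")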
I'm using the Gradio interface for training F5-TTS, and I want to resume training after stopping it. For example, if I stop after 20 epochs, I want to continue from that point with the same parameters to improve the model further.
I tried simply going to the training tab and starting again, but I got this error:
Fatal Python error: config_init_hash_seed: PYTHONHASHSEED must be "random" or an integer in range [0; 4294967295]
Python runtime state: preinitialized
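For context, the interpreter only accepts PYTHONHASHSEED values of "random" or an integer in [0, 4294967295], so whatever value the training process inherits in its environment is apparently something else. Below is a minimal, hypothetical sketch of launching a child process with a valid value, just to illustrate the constraint; the command is a placeholder, not the actual F5-TTS training invocation.

import os
import subprocess
import sys

env = os.environ.copy()
env["PYTHONHASHSEED"] = "0"  # must be "random" or an integer in [0, 4294967295]

# Placeholder command -- substitute the real training command your setup runs.
cmd = [sys.executable, "-c", "import os; print('PYTHONHASHSEED =', os.environ.get('PYTHONHASHSEED'))"]
subprocess.run(cmd, env=env, check=True)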
I also tried enabling the Finetune checkbox and setting the last model's .pt file as the Path to the Pretrained Checkpoint, but I wasn't sure what to put in the Tokenizer field.
Do I need to prepare the data again, or can I continue without reprocessing it?
Note: I'm not changing the dataset or any parameters; I just want to resume training from where I left off.
Can someone guide me on the correct steps to resume training using the Gradio interface?
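In case it's useful for anyone answering: a quick way to check whether a saved checkpoint carries enough state to resume (model weights, optimizer state, step or epoch counters) is to inspect its keys. This is a generic sketch, not F5-TTS specific; the path is a placeholder and key names vary between trainers, so treat whatever it prints as the source of truth.

import torch

ckpt_path = "ckpts/my_project/model_last.pt"  # placeholder -- point at your own checkpoint
ckpt = torch.load(ckpt_path, map_location="cpu")

print(type(ckpt))
if isinstance(ckpt, dict):
    # Typical trainer checkpoints store several state dicts plus bookkeeping counters.
    for key in ckpt:
        print(key)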