This repository was archived by the owner on Nov 3, 2023. It is now read-only.

Keep multiple checkpoints during training #4970

Closed
@hamjam

Description


Hi,
Is there an option, or some other way, to keep e.g. the 5 best checkpoints during training with train_model.py, rather than only the single best model checkpoint?
As far as I understand, train_model.py has no option to keep multiple checkpoints: whatever options are set, each new checkpoint overwrites the last one saved.
Should I add this feature (and open a pull request if it's wanted), or is there a reason behind train_model.py keeping only one checkpoint?
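For context, one generic way to implement "keep the k best checkpoints" outside of any particular framework is a small min-heap bookkeeper that tracks the k best (score, path) pairs and deletes a checkpoint file once it falls out of the top k. This is only an illustrative sketch, not part of train_model.py; the class and method names are made up:

```python
import heapq
import os


class TopKCheckpoints:
    """Keep only the k best-scoring checkpoints on disk (illustrative sketch)."""

    def __init__(self, k):
        self.k = k
        # min-heap of (score, path): the worst of the kept checkpoints sits on top
        self.heap = []

    def update(self, score, path):
        """Record a new checkpoint; return True if it is kept, False if discarded."""
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, (score, path))
            return True
        if score > self.heap[0][0]:
            # new checkpoint beats the current worst: swap it in, delete the old file
            _, evicted = heapq.heappushpop(self.heap, (score, path))
            if os.path.exists(evicted):
                os.remove(evicted)
            return True
        return False

    def kept_paths(self):
        return {path for _, path in self.heap}
```

The training loop would call `update(validation_score, checkpoint_path)` after saving each checkpoint; anything not in the top k gets removed, so disk usage stays bounded at k files.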
