Why the t_total is 437600? #49

@reroze

Description

In config.yaml, the training set's instances_per_epoch is 65536 and batch_size is 16, so after 100 epochs only 65536 / 16 × 100 = 409600 batches are used during training. Shouldn't t_total be 409600 instead of 437600?
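A minimal sketch of the arithmetic behind the question, assuming t_total is meant to be the total number of optimizer steps, i.e. batches per epoch times epochs (the variable names mirror the config keys; the formula itself is an assumption, not confirmed by the repo):

```python
# Values taken from config.yaml as quoted in the issue.
instances_per_epoch = 65536
batch_size = 16
num_epochs = 100

# Assumed definition: one optimizer step per batch.
steps_per_epoch = instances_per_epoch // batch_size  # 4096
t_total = steps_per_epoch * num_epochs

print(t_total)  # 409600
```

Under this definition t_total comes out to 409600, which is why the configured value of 437600 looks inconsistent; if the code computes t_total differently (e.g. including warmup or a larger dataset count), that would explain the gap.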
