
Address stochasticity in training #37

@matsen

The current models give rather different outcomes upon re-training. One possibility is that we aren't fully optimizing them, in which case we could work harder at optimization using something like CyclicLR (see the sketch below). OTOH, that's at odds with an early-stopping regularization procedure, if we wanted to go there.
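
For reference, here's a minimal sketch of what a CyclicLR schedule looks like in PyTorch. The model, data, and hyperparameter values are placeholders, not anything from this repo; the point is just the scheduler wiring (it steps per batch, not per epoch):

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import CyclicLR

torch.manual_seed(0)  # pin one source of run-to-run variation

# Hypothetical stand-ins for the real model and data loader.
model = nn.Linear(16, 1)
loader = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(100)]

# CyclicLR cycles momentum by default, so use an optimizer that has it.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# Triangular schedule: the LR sweeps between base_lr and max_lr every
# 2 * step_size_up batches, which can help escape shallow local minima.
scheduler = CyclicLR(optimizer, base_lr=1e-4, max_lr=1e-2,
                     step_size_up=200, mode="triangular")
loss_fn = nn.MSELoss()

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # per-batch step, unlike most epoch-level schedulers
```

Note the tension mentioned above: a cyclic LR keeps perturbing the loss late in training, so a validation-loss early-stopping criterion would likely trigger on the LR cycle rather than on genuine convergence.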
