
Support asynchronous evaluation during training #377

@athitten

Description


The goal is to better track how the model is performing during training/finetuning compared to the base model, and to report evaluation metrics from the eval benchmarks.
This is also a request from the Customizer team.
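
As a rough illustration only, here is a minimal sketch of what non-blocking evaluation inside a training loop could look like, in plain Python. Every name in it (`run_benchmarks`, `train_with_async_eval`, the weight snapshot) is a hypothetical placeholder for this discussion, not an existing API in this repo:

```python
# Hypothetical sketch: run benchmark evaluation in a background thread so the
# training loop is never blocked. None of these names exist in the codebase.
from concurrent.futures import ThreadPoolExecutor, Future
from typing import Optional


def run_benchmarks(weights: dict) -> dict:
    """Hypothetical: score a snapshot of the weights on the eval benchmarks."""
    return {"accuracy": 0.0}  # placeholder metrics


def train_with_async_eval(num_steps: int, eval_every: int) -> None:
    executor = ThreadPoolExecutor(max_workers=1)  # one background eval at a time
    pending: Optional[Future] = None

    for step in range(num_steps):
        # ... one optimizer / finetuning step would run here ...

        # Launch an eval on a weight snapshot without pausing training.
        if step % eval_every == 0 and pending is None:
            weights = {}  # hypothetical snapshot of the current model weights
            pending = executor.submit(run_benchmarks, weights)

        # Report metrics as soon as a background eval has finished.
        if pending is not None and pending.done():
            print(f"step {step}: eval metrics {pending.result()}")
            pending = None

    executor.shutdown(wait=True)  # let any eval still in flight finish
```

In the same spirit, the same `run_benchmarks` call could be applied once to the base checkpoint so the per-step metrics have a baseline to compare against, which is the comparison this issue asks for.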


Labels: enhancement (New feature or request)
