
ML Pipeline Logging and Inferencing Example #15


Description

@kordless

This issue replaces #14, #10, #4 and #8.

We want to ingest a large amount of data. Evaluate candidate datasets, which should lead to:

  1. Choosing a dataset on Kaggle that has a large amount of training data.
  2. Choosing a set of models from HuggingFace that can be run on the dataset for initial labeling (see the labeling sketch after this list).
  3. Building a Jupyter notebook that uses the labeled data + reports from FB to train a new model.
  4. Logging the training of the new model (see the logging sketch after this list).
  5. Adding a cron-triggered process that performs rollups.
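
A minimal sketch of the initial labeling step (item 2), assuming a hypothetical Kaggle export `reviews.csv` with a `text` column and an off-the-shelf HuggingFace sentiment model; the actual dataset and model set are still to be chosen:

```python
# Sketch of bootstrapping labels with a HuggingFace pipeline (item 2).
# Assumes a hypothetical Kaggle CSV "reviews.csv" with a "text" column;
# the real dataset and model choices are still open.
import pandas as pd
from transformers import pipeline

df = pd.read_csv("reviews.csv")

# Off-the-shelf sentiment model used only to produce initial labels.
labeler = pipeline("sentiment-analysis",
                   model="distilbert-base-uncased-finetuned-sst-2-english")

# Label in batches so a large dataset doesn't exhaust memory.
labels = []
for start in range(0, len(df), 256):
    batch = df["text"].iloc[start:start + 256].tolist()
    labels.extend(r["label"] for r in labeler(batch, truncation=True))

df["initial_label"] = labels
df.to_csv("reviews_labeled.csv", index=False)
```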

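For the training logging (item 4), one possible shape is an append-only JSON-lines log that the cron rollup job in item 5 can aggregate later; the field names and file path here are assumptions, and the notebook may instead use whatever logger the chosen training framework provides:

```python
# Sketch of training-run logging (item 4): one JSON line per epoch,
# written to a file the rollup job can aggregate. Field names are assumptions.
import json
import time
import uuid

RUN_ID = uuid.uuid4().hex
LOG_PATH = "training_runs.jsonl"

def log_epoch(epoch, train_loss, eval_accuracy):
    record = {
        "run_id": RUN_ID,
        "ts": time.time(),
        "epoch": epoch,
        "train_loss": train_loss,
        "eval_accuracy": eval_accuracy,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example call from inside the (hypothetical) training loop:
# log_epoch(epoch=1, train_loss=0.42, eval_accuracy=0.87)
```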
Query exploration for user reporting is desired in a simple UI. This feature depends on the graphing capability from #2 for plotting the results; a rough sketch follows.
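
A rough sketch of the graphing piece behind that UI, assuming query results arrive as rows with hypothetical `bucket` and `count` fields; the concrete result shape depends on the query layer from #2:

```python
# Sketch of graphing query results for the exploration UI.
# Assumes rows shaped like {"bucket": ..., "count": ...};
# the real shape depends on the query layer referenced in #2.
import pandas as pd
import matplotlib.pyplot as plt

def plot_rollup(rows, title="Query results"):
    df = pd.DataFrame(rows)
    ax = df.plot(x="bucket", y="count", kind="bar", legend=False)
    ax.set_title(title)
    ax.set_ylabel("count")
    plt.tight_layout()
    plt.savefig("query_result.png")  # the UI would serve or embed this image

plot_rollup([{"bucket": "2024-01", "count": 120},
             {"bucket": "2024-02", "count": 95}])
```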
