Make all Datasets Running with All Models and Evaluate All #114

@zenzeii

Description

User story

  1. As a developer
  2. I want to have an overview of all results from all datasets with all models
  3. So that I can understand which model is best for which kind of dataset

Acceptance criteria

  • All datasets have been applied to all models (trained and forecasted).
  • The splits (into training, test, and validation sets) are identical for a given dataset.
  • The model configuration is the same for a given model across all datasets.
  • All results are documented.
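The criteria above can be sketched as a small evaluation harness: every model runs on every dataset, the chronological train/validation/test split is computed once per dataset (so it is identical for all models), and each model keeps one fixed configuration. This is a minimal illustrative sketch; the dataset names, placeholder models, and the MAE metric are assumptions, not part of the issue or the RTDIP API.

```python
# Hypothetical sketch of the "all datasets x all models" evaluation grid.
# Datasets, models, and the metric below are illustrative placeholders.

def fixed_split(series, train=0.7, val=0.15):
    """Chronological split; identical for a dataset no matter which
    model is being evaluated (acceptance criterion on splits)."""
    n = len(series)
    n_train = int(n * train)
    n_val = int(n * val)
    return (series[:n_train],
            series[n_train:n_train + n_val],
            series[n_train + n_val:])

def naive_forecast(train, horizon):
    # Placeholder "model": repeat the last observed value.
    return [train[-1]] * horizon

def mean_forecast(train, horizon):
    # Placeholder "model": forecast the training mean.
    m = sum(train) / len(train)
    return [m] * horizon

def mae(actual, predicted):
    # Mean absolute error over the test horizon.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative datasets (real runs would load the project's datasets).
datasets = {
    "sine_like": [i % 10 for i in range(100)],
    "trend": list(range(100)),
}
# One fixed configuration per model (acceptance criterion on configs).
models = {"naive": naive_forecast, "mean": mean_forecast}

results = {}
for ds_name, series in datasets.items():
    train, val, test = fixed_split(series)  # same split for every model
    for model_name, model in models.items():
        preds = model(train, len(test))
        results[(ds_name, model_name)] = mae(test, preds)

# Document all results (acceptance criterion on documentation).
for (ds, m), score in sorted(results.items()):
    print(f"{ds:10s} {m:6s} MAE={score:.3f}")
```

The printed table makes the "which model is best for which kind of dataset" comparison direct: for each dataset, the model with the lowest error wins under this metric.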

Definition of done (DoD)

  • Acceptance criteria are met.
  • Work is pushed to the GitHub repository.
  • A branch is created for each backlog item (coding).
  • A pull request is created for each related branch.
  • The work products in the pull requests are reviewed.
  • The corresponding branches are merged and closed.
  • The bill of materials section of the planning documents is updated.
  • The software architecture is updated based on feature changes.
  • Work is documented in the corresponding wiki section.
  • For new features, unit tests have to be written.
  • Update our forked repository with the latest RTDIP release.
  • All unit tests must pass successfully in the CI pipeline.
  • If the task involves coding, the implementation is integrated into the RTDIP framework and verified to function correctly within it.

DoD general criteria

  • Feature has been fully implemented
  • Feature has been merged into the mainline
  • All acceptance criteria were met
  • Product owner has approved the features
  • All tests are passing
  • Developers agreed to release

Metadata

  • Labels: none
  • Type: none
  • Status: Feature Archive
  • Milestone: none
  • Relationships: none yet
  • Development: no branches or pull requests