
Capture whether a Tool implements an ensemble strategy #259

@tschaffter

Description


This proposal comes after reading the article On the “usefulness” of the Netflix Prize, which highlights that complicated ensemble methods may not be suitable for production-ready applications. We experienced a similar situation with the Digital Mammography DREAM Challenge, where the final method was an ensemble of 11 tools/Docker images. This strategy is adopted in most DREAM challenges, where the final published model is an ensemble of the best-performing models submitted during the competitive or collaborative phase.

There are two pieces of information that we may want to capture, possibly as properties of the Tool schema (a minimal sketch follows the list below):

  • Whether the "tool" (the submitted Docker image) is an ensemble that combines the outputs of multiple algorithms.
    • This may also help flag tools that take a long time to run, though we ultimately plan to capture and report tool runtime.
  • Whether the submitted tool can be trained, re-trained, and/or fine-tuned.
    • This distinguishes a tool that has not yet been trained and must be trained before it can be used for inference from a tool that has already been trained and can be further re-trained or fine-tuned, for example periodically on new data.
    • Enabling submitted tools to train on private data from data sites should be possible before the end of 2021.
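
Below is a minimal sketch of how these two properties could sit alongside existing Tool schema fields, written as a Python dataclass purely for illustration. The property names (`is_ensemble`, `training_support`), the enum values, and the example fields are assumptions for discussion, not decided names.

```python
from dataclasses import dataclass
from enum import Enum


class TrainingSupport(Enum):
    """How much (re-)training a submitted tool supports (values are illustrative)."""
    NONE = "none"                # pre-trained, inference only
    REQUIRED = "required"        # must be trained before it can be used for inference
    RETRAINABLE = "retrainable"  # already trained, can be re-trained or fine-tuned on new data


@dataclass
class Tool:
    """Hypothetical subset of the Tool schema with the two proposed properties added."""
    name: str
    docker_image: str
    # True if the submitted Docker image combines the outputs of multiple algorithms
    is_ensemble: bool = False
    # Whether the tool can be trained, re-trained, and/or fine-tuned
    training_support: TrainingSupport = TrainingSupport.NONE


# Example: an ensemble that has already been trained and can be re-trained on new data
example = Tool(
    name="dm-challenge-ensemble",
    docker_image="example/dm-ensemble:latest",
    is_ensemble=True,
    training_support=TrainingSupport.RETRAINABLE,
)
```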
