Ready-to-use workflow template for evaluating prediction files submitted on Synapse.org
The data-to-model (d2m) workflow is typically used when participants are able to download the challenge data, train their models locally, and submit a predictions file for evaluation against the hidden ground-truth data.
- Customize evaluation logic: modify the scoring and validation scripts within the `evaluation` folder (a minimal scoring sketch follows this list)
- Configure workflow: adapt `workflow.cwl` (and `writeup-workflow.cwl`, if applicable) to define the inputs and steps specific to your challenge
- Test your changes: use `cwltool` to test your CWL scripts within the `steps` folder
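As a starting point, a scoring script in the `evaluation` folder might look like the minimal sketch below. The file name (`score.py`), the command-line flags, the `id`/`label`/`prediction` column names, and the AUC metric are illustrative assumptions only; adapt them to the interface your `steps` CWL scripts actually pass to the container.

```python
#!/usr/bin/env python3
"""Hypothetical scoring sketch -- file name, flags, columns, and metric
are illustrative assumptions, not part of this template."""
import argparse
import json

import pandas as pd
from sklearn.metrics import roc_auc_score


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--predictions_file", required=True,
                        help="Participant's submitted predictions (CSV)")
    parser.add_argument("--goldstandard", required=True,
                        help="Hidden ground-truth file (CSV)")
    parser.add_argument("--output", default="results.json",
                        help="Where to write the scores as JSON")
    args = parser.parse_args()

    # Join predictions to the ground truth on a shared identifier column.
    pred = pd.read_csv(args.predictions_file)   # assumed columns: id, prediction
    gold = pd.read_csv(args.goldstandard)       # assumed columns: id, label
    merged = gold.merge(pred, on="id")

    # Compute whatever metrics your challenge requires.
    scores = {"auc": roc_auc_score(merged["label"], merged["prediction"])}

    # Write results so a downstream CWL step can annotate the submission.
    with open(args.output, "w") as out:
        json.dump({"submission_status": "SCORED", **scores}, out)


if __name__ == "__main__":
    main()
```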
This template provides all necessary components for a full challenge pipeline:
```
.
├── evaluation            // core scoring and validation scripts
├── README.md
├── steps                 // individual CWL scripts (called by the main workflow CWL)
├── workflow.cwl          // CWL workflow for evaluating submissions
└── writeup-workflow.cwl  // CWL workflow to validate and archive writeup submissions
```
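The validation scripts in `evaluation` could follow a similar pattern, as in the sketch below. The file name (`validate.py`), the expected columns, and the output keys are assumptions for illustration; adjust them to your challenge's submission format and to whatever annotations your workflow steps consume.

```python
#!/usr/bin/env python3
"""Hypothetical validation sketch -- file name, flags, expected columns,
and output keys are illustrative assumptions, not part of this template."""
import argparse
import json

import pandas as pd

EXPECTED_COLUMNS = {"id", "prediction"}  # assumed submission format


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--predictions_file", required=True)
    parser.add_argument("--output", default="results.json")
    args = parser.parse_args()

    errors = []
    try:
        pred = pd.read_csv(args.predictions_file)
        missing = EXPECTED_COLUMNS - set(pred.columns)
        if missing:
            errors.append(f"Missing required column(s): {', '.join(sorted(missing))}")
        elif pred["prediction"].isna().any():
            errors.append("Predictions must not contain missing values")
    except Exception as exc:  # unreadable or malformed file
        errors.append(f"Could not read predictions file: {exc}")

    # Report a status and any errors for the workflow to act on.
    status = "VALIDATED" if not errors else "INVALID"
    with open(args.output, "w") as out:
        json.dump({"submission_status": status,
                   "submission_errors": "\n".join(errors)}, out)


if __name__ == "__main__":
    main()
```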
This template is built with CWL and Docker, and is designed to be run by the SynapseWorkflowOrchestrator. For more information on these core technologies, refer to their documentation below:
- CWL: https://www.commonwl.org/user_guide/
- Docker: https://docs.docker.com/get-started/
- SynapseWorkflowOrchestrator: https://github.com/Sage-Bionetworks/SynapseWorkflowOrchestrator