To help the most users understand how to get their data into NWB, I propose that we target a handful of dominant preprocessing pipelines (i.e., spike sorting and calcium segmentation/trace extraction) and give examples of getting that data into NWB.
- One of the pain points of getting data into NWB is accounting for provenance information related to a processing pipeline (e.g., saving the ROI masks and the neuropil signal in addition to the calcium traces).
- While we won't promise a script that does a conversion, this should get users of these existing processing pipelines 90% of the way there.
- Each of these pieces of software outputs a fairly standardized set of files.
I'd like to brainstorm and identify a handful of target "sources" for ingest into NWB.
We can then either write examples ourselves or solicit the developers of the respective tools to contribute.
My initial brainstorm:
kilosort / phy TemplateGUI
This is the dominant spike sorting software for dense ephys recordings.
- modality: ephys
- output files: npy & tsv
- docs on outputs: http://phy-contrib.readthedocs.io/en/latest/template-gui/#analysis
- developers: @kwikteam
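As a starting point for discussion, here is a minimal sketch of reading phy TemplateGUI output and grouping spikes per unit, which is the shape an NWB Units table wants (one spike-time vector per unit, passed to something like `nwbfile.add_unit(spike_times=...)`). The file names (`spike_times.npy`, `spike_clusters.npy`, `cluster_group.tsv`) follow the phy docs; the arrays and sampling rate below are synthetic stand-ins, not real data.

```python
# Sketch: load phy TemplateGUI output and group spike times per unit.
# Synthetic files are written first so the example is self-contained.
import csv
import tempfile
from pathlib import Path

import numpy as np


def load_phy_units(folder, sampling_rate):
    """Return ({cluster_id: spike_times_sec}, {cluster_id: group_label})."""
    folder = Path(folder)
    spike_samples = np.load(folder / "spike_times.npy").ravel()
    spike_clusters = np.load(folder / "spike_clusters.npy").ravel()

    labels = {}
    with open(folder / "cluster_group.tsv") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            labels[int(row["cluster_id"])] = row["group"]

    units = {}
    for cid in np.unique(spike_clusters):
        units[int(cid)] = spike_samples[spike_clusters == cid] / sampling_rate
    return units, labels


# Synthetic phy-style output for demonstration (placeholder values).
tmp = Path(tempfile.mkdtemp())
np.save(tmp / "spike_times.npy", np.array([10, 40, 55, 90]))  # sample indices
np.save(tmp / "spike_clusters.npy", np.array([0, 1, 0, 1]))   # unit assignment
(tmp / "cluster_group.tsv").write_text("cluster_id\tgroup\n0\tgood\n1\tmua\n")

units, labels = load_phy_units(tmp, sampling_rate=1000.0)
# Each entry of `units` could now feed one nwbfile.add_unit(...) call,
# with `labels` carried along as a curation-provenance column.
print(units[0])  # spike times of cluster 0, in seconds
```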
suite2p
AFAIK, this is the dominant segmentation and trace-extraction software right now.
- modality: 2P
- output files: mat
- docs on outputs: https://github.com/cortex-lab/Suite2P#iii-outputs
- developers: @marius10p
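To make the provenance point concrete: for a suite2p-style pipeline we would want to store the raw ROI fluorescence and the neuropil signal alongside the corrected traces, not just the final output. A minimal sketch, using synthetic arrays (suite2p's documented default neuropil coefficient is 0.7, but treat the specifics here as assumptions):

```python
# Sketch: suite2p-style neuropil correction, F_corrected = F - c * F_neu.
# Keeping F, F_neu, and c (not just the result) preserves the pipeline step.
import numpy as np


def correct_traces(F, Fneu, neuropil_coeff=0.7):
    """Neuropil-subtracted fluorescence per ROI (rows) per frame (columns)."""
    return F - neuropil_coeff * Fneu


# Two ROIs x three frames of synthetic raw and neuropil fluorescence.
F = np.array([[10.0, 12.0, 11.0],
              [20.0, 19.0, 21.0]])
Fneu = np.array([[2.0, 2.0, 2.0],
                 [4.0, 4.0, 4.0]])

Fcorr = correct_traces(F, Fneu)
# In NWB, F and Fneu could each become their own trace series and the ROI
# masks a segmentation table, so every step of the pipeline stays recoverable.
```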