diff --git a/.github/workflows/test.yaml b/.github/workflows/test.yaml
index 8a4ec130..66a4f739 100644
--- a/.github/workflows/test.yaml
+++ b/.github/workflows/test.yaml
@@ -1,10 +1,12 @@
name: Test
on:
push:
+ branches:
+ - main
pull_request:
+ branches:
+ - main
workflow_dispatch:
- schedule:
- - cron: "0 8 * * 1"
jobs:
devcontainer-build:
uses: datajoint/.github/.github/workflows/devcontainer-build.yaml@main
@@ -13,12 +15,7 @@ jobs:
strategy:
matrix:
py_ver: ["3.9", "3.10"]
- mysql_ver: ["8.0", "5.7"]
- include:
- - py_ver: "3.8"
- mysql_ver: "5.7"
- - py_ver: "3.7"
- mysql_ver: "5.7"
+ mysql_ver: ["8.0"]
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{matrix.py_ver}}
@@ -34,4 +31,3 @@ jobs:
python_version=${{matrix.py_ver}}
black element_calcium_imaging --check --verbose --target-version py${python_version//.}
black notebooks --check --verbose --target-version py${python_version//.}
-
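The `black --target-version` lines in the workflow above rely on Bash parameter expansion to strip the dot from the Python version (e.g. `3.10` becomes `py310`). A minimal sketch of that substitution, runnable in any Bash shell:

```sh
# ${var//pattern/replacement} replaces ALL matches of the pattern; with an
# empty replacement, every "." is removed, so "3.10" becomes "310".
python_version="3.10"
echo "py${python_version//.}"
```

Note that the single-slash form `${var/.}` would remove only the first dot, which happens to give the same result here but would differ for versions with more than one dot.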
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 6b311ec4..1d3c58d8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -3,6 +3,11 @@
Observes [Semantic Versioning](https://semver.org/spec/v2.0.0.html) standard and
[Keep a Changelog](https://keepachangelog.com/en/1.0.0/) convention.
+## [0.11.0] - 2025-01-31
+
++ Feat - Single `imaging` module for all imaging data
++ Update - Code fixes and improvements throughout
+
## [0.10.1] - 2024-06-20
+ Fix - cleaner plotting in tutorial notebook
@@ -220,6 +225,7 @@ Observes [Semantic Versioning](https://semver.org/spec/v2.0.0.html) standard and
+ Add - `scan` and `imaging` modules
+ Add - Readers for `ScanImage`, `ScanBox`, `Suite2p`, `CaImAn`
+[0.11.0]: https://github.com/datajoint/element-calcium-imaging/releases/tag/0.11.0
[0.10.0]: https://github.com/datajoint/element-calcium-imaging/releases/tag/0.10.0
[0.9.5]: https://github.com/datajoint/element-calcium-imaging/releases/tag/0.9.5
[0.9.4]: https://github.com/datajoint/element-calcium-imaging/releases/tag/0.9.4
diff --git a/README.md b/README.md
index dcb09f6c..2b475f3a 100644
--- a/README.md
+++ b/README.md
@@ -1,18 +1,17 @@
# DataJoint Element for Functional Calcium Imaging
-DataJoint Element for functional calcium imaging with
-[ScanImage](https://docs.scanimage.org/),
-[Scanbox](https://scanbox.org/),
-[Nikon NIS-Elements](https://www.microscope.healthcare.nikon.com/products/software/nis-elements),
-and `Bruker Prairie View` acquisition software; and
-[Suite2p](https://github.com/MouseLand/suite2p),
+DataJoint Element for functional calcium imaging with support for
+[ScanImage](https://docs.scanimage.org/), [Scanbox](https://scanbox.org/), [Nikon
+NIS-Elements](https://www.microscope.healthcare.nikon.com/products/software/nis-elements),
+and `Bruker Prairie View` acquisition software; and
+[Suite2p](https://github.com/MouseLand/suite2p),
[CaImAn](https://github.com/flatironinstitute/CaImAn), and
-[EXTRACT](https://github.com/schnitzer-lab/EXTRACT-public) analysis
-software. DataJoint Elements collectively standardize and automate
-data collection and analysis for neuroscience experiments. Each Element is a modular
-pipeline for data storage and processing with corresponding database tables that can be
-combined with other Elements to assemble a fully functional pipeline. This repository
-also provides a tutorial environment and notebooks to learn the pipeline.
+[EXTRACT](https://github.com/schnitzer-lab/EXTRACT-public) analysis software. DataJoint
+Elements collectively standardize and automate data collection and analysis for
+neuroscience experiments. Each Element is a modular pipeline for data storage and
+processing with corresponding database tables that can be combined with other Elements
+to assemble a fully functional pipeline. This repository also provides a tutorial
+environment and notebooks to learn the pipeline.
## Experiment Flowchart
@@ -20,13 +19,16 @@ also provides a tutorial environment and notebooks to learn the pipeline.
## Data Pipeline Diagram
-
+
-+ We have designed three variations of the pipeline to handle different use cases.
-Displayed above is the default `imaging` schema. Details on all of the `imaging`
-schemas can be found in the [Data
-Pipeline](https://datajoint.com/docs/elements/element-calcium-imaging/latest/pipeline/)
-documentation page.
+### Legacy Support
+
++ Three variations of the pipeline were designed and supported through December 2024 to
+  handle different use cases. Based on community feedback, only one pipeline will be
+  supported after December 2024. However, all three pipeline versions will remain
+  available in their own branches for reference and use.
++ Displayed above is the default `imaging` schema. Please see the other branches for
+  the `imaging` module with the `Curation` table and the `imaging_preprocess` module.
## Getting Started
@@ -55,37 +57,64 @@ or contact our team by email at support@datajoint.com.
## Interactive Tutorial
-+ The easiest way to learn about DataJoint Elements is to use the tutorial notebooks within the included interactive environment configured using [Dev Container](https://containers.dev/).
++ The easiest way to learn about DataJoint Elements is to use the tutorial notebooks
+ within the included interactive environment configured using [Dev
+ Container](https://containers.dev/).
### Launch Environment
Here are some options that provide a great experience:
-- (*recommended*) Cloud-based Environment
- - Launch using [GitHub Codespaces](https://github.com/features/codespaces) using the `+` option which will `Create codespace on main` in the codebase repository on your fork with default options. For more control, see the `...` where you may create `New with options...`.
- - Build time for a codespace is a few minutes. This is done infrequently and cached for convenience.
- - Start time for a codespace is less than 1 minute. This will pull the built codespace from cache when you need it.
- - *Tip*: Each month, GitHub renews a [free-tier](https://docs.github.com/en/billing/managing-billing-for-github-codespaces/about-billing-for-github-codespaces#monthly-included-storage-and-core-hours-for-personal-accounts) quota of compute and storage. Typically we run into the storage limits before anything else since Codespaces consume storage while stopped. It is best to delete Codespaces when not actively in use and recreate when needed. We'll soon be creating prebuilds to avoid larger build times. Once any portion of your quota is reached, you will need to wait for it to be reset at the end of your cycle or add billing info to your GitHub account to handle overages.
- - *Tip*: GitHub auto names the codespace but you can rename the codespace so that it is easier to identify later.
-
-- Local Environment
- > *Note: Access to example data is currently limited to MacOS and Linux due to the s3fs utility. Windows users are recommended to use the above environment.*
- - Install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
- - Install [Docker](https://docs.docker.com/get-docker/)
- - Install [VSCode](https://code.visualstudio.com/)
- - Install the VSCode [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
- - `git clone` the codebase repository and open it in VSCode
- - Use the `Dev Containers extension` to `Reopen in Container` (More info is in the `Getting started` included with the extension.)
-
-You will know your environment has finished loading once you either see a terminal open related to `Running postStartCommand` with a final message of `Done` or the `README.md` is opened in `Preview`.
++ (*recommended*) Cloud-based Environment
+ Launch using [GitHub Codespaces](https://github.com/features/codespaces) via the
+ `+` option, which will `Create codespace on main` in the codebase repository on your
+ fork with default options. For more control, see the `...` menu, where you may
+ create `New with options...`.
+ + Build time for a codespace is a few minutes. This is done infrequently and cached
+ for convenience.
+ + Start time for a codespace is less than 1 minute. This will pull the built codespace
+ from cache when you need it.
+ + *Tip*: Each month, GitHub renews a
+ [free-tier](https://docs.github.com/en/billing/managing-billing-for-github-codespaces/about-billing-for-github-codespaces#monthly-included-storage-and-core-hours-for-personal-accounts)
+ quota of compute and storage. Typically we run into the storage limits before
+ anything else since Codespaces consume storage while stopped. It is best to delete
+ Codespaces when not actively in use and recreate when needed. We'll soon be creating
+ prebuilds to avoid larger build times. Once any portion of your quota is reached,
+ you will need to wait for it to be reset at the end of your cycle or add billing
+ info to your GitHub account to handle overages.
+ *Tip*: GitHub auto-names the codespace, but you can rename it so that it is
easier to identify later.
+
++ Local Environment
+ > *Note: Access to example data is currently limited to MacOS and Linux due to the
> s3fs utility. We recommend that Windows users use the cloud-based environment above.*
+ + Install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
+ + Install [Docker](https://docs.docker.com/get-docker/)
+ + Install [VSCode](https://code.visualstudio.com/)
+ + Install the VSCode [Dev Containers
+ extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
+ + `git clone` the codebase repository and open it in VSCode
+ + Use the `Dev Containers extension` to `Reopen in Container` (More info is in the
+ `Getting started` included with the extension.)
+
+You will know your environment has finished loading once you either see a terminal open
+related to `Running postStartCommand` with a final message of `Done` or the `README.md`
+is opened in `Preview`.
Once the environment has launched, please run the following command in the terminal:
-```
+```sh
MYSQL_VER=8.0 docker compose -f docker-compose-db.yaml up --build -d
```
### Instructions
-1. We recommend you start by navigating to the `notebooks` directory on the left panel and go through the `tutorial.ipynb` Jupyter notebook. Execute the cells in the notebook to begin your walk through of the tutorial.
+1. We recommend you start by navigating to the `notebooks` directory on the left panel
+ and go through the `tutorial.ipynb` Jupyter notebook. Execute the cells in the
notebook to begin your walkthrough of the tutorial.
-1. Once you are done, see the options available to you in the menu in the bottom-left corner. For example, in Codespace you will have an option to `Stop Current Codespace` but when running Dev Container on your own machine the equivalent option is `Reopen folder locally`. By default, GitHub will also automatically stop the Codespace after 30 minutes of inactivity. Once the Codespace is no longer being used, we recommend deleting the Codespace.
+2. Once you are done, see the options available to you in the menu in the bottom-left
corner. For example, in a Codespace you will have an option to `Stop Current Codespace`
+ but when running Dev Container on your own machine the equivalent option is `Reopen
+ folder locally`. By default, GitHub will also automatically stop the Codespace after
+ 30 minutes of inactivity. Once the Codespace is no longer being used, we recommend
+ deleting the Codespace.
diff --git a/docs/src/partnerships.md b/docs/src/partnerships.md
index addc8aa0..b7c56da7 100644
--- a/docs/src/partnerships.md
+++ b/docs/src/partnerships.md
@@ -1,6 +1,9 @@
# Key partnerships
-Several labs have developed DataJoint-based data management and processing pipelines for two-photon calcium imaging. Our team collaborated with several of them during their projects. Additionally, we interviewed these teams to understand their experiment workflow, pipeline design, associated tools, and interfaces.
+Several labs have developed DataJoint-based data management and processing pipelines for
+two-photon calcium imaging. Our team collaborated with several of them during their
+projects. Additionally, we interviewed these teams to understand their experiment
+workflow, pipeline design, associated tools, and interfaces.
These teams include:
diff --git a/docs/src/pipeline.md b/docs/src/pipeline.md
index 5c5163e2..6a5f77b5 100644
--- a/docs/src/pipeline.md
+++ b/docs/src/pipeline.md
@@ -5,9 +5,7 @@ corresponding table in the database. Within the pipeline, Element Calcium Imagi
connects to upstream Elements including Lab, Animal, Session, and Event. For more
detailed documentation on each table, see the API docs for the respective schemas.
-The Element is composed of two main schemas, `scan` and `imaging`. To handle
-several use cases of this pipeline, we have designed two alternatives to the `imaging`
-schema, including `imaging_no_curation` and `imaging_preprocess`.
+The Element is composed of two main schemas, `scan` and `imaging`.
## Diagrams
@@ -15,23 +13,15 @@ schema, including `imaging_no_curation` and `imaging_preprocess`.
- Multiple scans are acquired during each session and each scan is processed independently.
- 
-
-### `imaging_no_curation` module
-
-- Same as the `imaging` module, but without the `Curation` table.
-

-### `imaging_preprocess` module
-
-- Same as the `imaging` module, and additional pre-processing steps can be performed on each scan prior to processing with Suite2p or CaImAn.
-
- 
-
### `multi-scan-processing` branch
-- The processing pipeline is typically performed on a per-scan basis, however, depending on the nature of the research questions, different labs may opt to perform processing/segmentation on a concatenated set of data from multiple scans. To this end, we have extended the Calcium Imaging Element and provided a design version capable of supporting a multi-scan processing scheme.
+- The processing pipeline is typically performed on a per-scan basis; however, depending
+ on the nature of the research questions, different labs may opt to perform
+ processing/segmentation on a concatenated set of data from multiple scans. To this
+ end, we have extended the Calcium Imaging Element and provided a design version
+ capable of supporting a multi-scan processing scheme.
## Table descriptions
@@ -89,7 +79,6 @@ schema, including `imaging_no_curation` and `imaging_preprocess`.
| MaskType | Available labels for segmented masks |
| ProcessingTask | Task defined by a combination of Scan and ProcessingParamSet |
| Processing | The core table that executes a ProcessingTask |
-| Curation | Curated results |
| MotionCorrection | Results of the motion correction procedure |
| MotionCorrection.RigidMotionCorrection | Details of the rigid motion correction performed on the imaging data |
| MotionCorrection.NonRigidMotionCorrection | Details of nonrigid motion correction performed on the imaging data |
diff --git a/docs/src/roadmap.md b/docs/src/roadmap.md
index 518209a8..528afc70 100644
--- a/docs/src/roadmap.md
+++ b/docs/src/roadmap.md
@@ -18,9 +18,12 @@ the common motifs to create Element Calcium Imaging. Major features include:
- [ ] Deepinterpolation
- [x] Data export to NWB
- [x] Data publishing to DANDI
-- [x] Widgets for manual ROI mask creation and curation for cell segmentation of Fluorescent voltage sensitive indicators, neurotransmitter imaging, and neuromodulator imaging
-- [ ] Expand creation widget to provide pixel weights for each mask based on Fluorescence intensity traces at each pixel
+- [x] Widgets for manual ROI mask creation and curation for cell segmentation of
+  fluorescent voltage-sensitive indicators, neurotransmitter imaging, and
+  neuromodulator imaging
+- [ ] Expand creation widget to provide pixel weights for each mask based on
+  fluorescence intensity traces at each pixel
Further development of this Element is community driven. Upon user requests and based on
-guidance from the Scientific Steering Group we will continue adding features to this
+guidance from the Scientific Steering Group, we will continue adding features to this
Element.
diff --git a/element_calcium_imaging/__init__.py b/element_calcium_imaging/__init__.py
index e69de29b..09e87035 100644
--- a/element_calcium_imaging/__init__.py
+++ b/element_calcium_imaging/__init__.py
@@ -0,0 +1,3 @@
+from . import imaging
+
+imaging_no_curation = imaging  # alias for backward compatibility
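The one-line alias added above keeps legacy `from element_calcium_imaging import imaging_no_curation` imports working after the module merge. A minimal sketch of the pattern, using a stand-in module object rather than the real package (the `Processing` attribute is hypothetical, for illustration only):

```python
import types

# Stand-in for the merged `imaging` module; the real package binds its
# actual module object instead.
imaging = types.ModuleType("imaging")
imaging.Processing = object()  # hypothetical attribute for illustration

# Old name re-exported as an alias: both names refer to the SAME module
# object, so attribute access through either name stays in sync.
imaging_no_curation = imaging

assert imaging_no_curation is imaging
assert imaging_no_curation.Processing is imaging.Processing
```

Because the alias binds the module object itself (not a copy), any table class later added to `imaging` is immediately visible through `imaging_no_curation` as well.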
diff --git a/element_calcium_imaging/analysis.py b/element_calcium_imaging/analysis.py
deleted file mode 100644
index 6615ce2f..00000000
--- a/element_calcium_imaging/analysis.py
+++ /dev/null
@@ -1,220 +0,0 @@
-import importlib
-import inspect
-
-import datajoint as dj
-import numpy as np
-
-schema = dj.schema()
-
-_linking_module = None
-
-
-def activate(
- schema_name, *, create_schema=True, create_tables=True, linking_module=None
-):
- """Activate this schema.
-
- Args:
- schema_name (str): Schema name on the database server to activate the `subject`
- element.
- create_schema (bool): When True (default), create schema in the database if it
- does not yet exist.
- create_tables (bool): When True (default), create tables in the database if they
- do not yet exist.
- linking_module (str): A module name or a module containing the required
- dependencies to activate the `subject` element: Upstream schema: scan,
- session, trial.
- """
- if isinstance(linking_module, str):
- linking_module = importlib.import_module(linking_module)
- assert inspect.ismodule(linking_module), (
- "The argument 'dependency' must " + "be a module's name or a module"
- )
-
- global _linking_module
- _linking_module = linking_module
-
- schema.activate(
- schema_name,
- create_schema=create_schema,
- create_tables=create_tables,
- add_objects=linking_module.__dict__,
- )
-
-
-@schema
-class ActivityAlignmentCondition(dj.Manual):
- """Activity alignment condition.
-
- Attributes:
- imaging.Activity (foreign key): Primary key from imaging.Activity.
- event.AlignmentEvent (foreign key): Primary key from event.AlignmentEvent.
- trial_condition (str): User-friendly name of condition.
- condition_description (str). Optional. Description. Default is ''.
- bin_size (float): bin-size (in second) used to compute the PSTH,
- """
-
- definition = """
- -> imaging.Activity
- -> event.AlignmentEvent
- trial_condition: varchar(128) # user-friendly name of condition
- ---
- condition_description='': varchar(1000)
- bin_size=0.04: float # bin-size (in second) used to compute the PSTH
- """
-
- class Trial(dj.Part):
- """Trial
-
- Attributes:
- ActivityAlignmentCondition (foreign key): Primary key from
- ActivityAlignmentCondition.
- trial.Trial: Primary key from trial.Trial.
- """
-
- definition = """ # Trials (or subset) to compute event-aligned activity
- -> master
- -> trial.Trial
- """
-
-
-@schema
-class ActivityAlignment(dj.Computed):
- """
- Attributes:
- ActivityAlignmentCondition (foreign key): Primary key from
- ActivityAlignmentCondition.
- aligned_timestamps (longblob): Aligned timestamps.
- """
-
- definition = """
- -> ActivityAlignmentCondition
- ---
- aligned_timestamps: longblob
- """
-
- class AlignedTrialActivity(dj.Part):
- """Aligned trial activity.
-
- Attributes:
- ActivityAlignment (foreign key): Primary key from ActivityAlignment.
- imaging.Activity.Trace (foreign key): Primary key from
- imaging.Activity.Trace.
- ActivityAlignmentCondition.Trial (foreign key): Primary key from
- ActivityAlignmentCondition.Trial.
- aligned_trace (longblob): Calcium activity aligned to the event time (s).
- """
-
- definition = """
- -> master
- -> imaging.Activity.Trace
- -> ActivityAlignmentCondition.Trial
- ---
- aligned_trace: longblob # (s) Calcium activity aligned to the event time
- """
-
- def make(self, key):
- sess_time, scan_time, nframes, frame_rate = (
- _linking_module.scan.ScanInfo * _linking_module.session.Session & key
- ).fetch1("session_datetime", "scan_datetime", "nframes", "fps")
-
- trialized_event_times = (
- _linking_module.trial.get_trialized_alignment_event_times(
- key,
- _linking_module.trial.Trial & (ActivityAlignmentCondition.Trial & key),
- )
- )
-
- min_limit = (trialized_event_times.event - trialized_event_times.start).max()
- max_limit = (trialized_event_times.end - trialized_event_times.event).max()
-
- aligned_timestamps = np.arange(-min_limit, max_limit, 1 / frame_rate)
- nsamples = len(aligned_timestamps)
-
- trace_keys, activity_traces = (
- _linking_module.imaging.Activity.Trace & key
- ).fetch("KEY", "activity_trace", order_by="mask")
- activity_traces = np.vstack(activity_traces)
-
- aligned_trial_activities = []
- for _, r in trialized_event_times.iterrows():
- if r.event is None or np.isnan(r.event):
- continue
- alignment_start_idx = int((r.event - min_limit) * frame_rate)
- roi_aligned_activities = activity_traces[
- :, alignment_start_idx : (alignment_start_idx + nsamples)
- ]
- if roi_aligned_activities.shape[-1] != nsamples:
- shape_diff = nsamples - roi_aligned_activities.shape[-1]
- roi_aligned_activities = np.pad(
- roi_aligned_activities,
- ((0, 0), (0, shape_diff)),
- mode="constant",
- constant_values=np.nan,
- )
-
- aligned_trial_activities.extend(
- [
- {**key, **r.trial_key, **trace_key, "aligned_trace": aligned_trace}
- for trace_key, aligned_trace in zip(
- trace_keys, roi_aligned_activities
- )
- ]
- )
-
- self.insert1({**key, "aligned_timestamps": aligned_timestamps})
- self.AlignedTrialActivity.insert(aligned_trial_activities)
-
- def plot_aligned_activities(self, key, roi, axs=None, title=None):
- """Plot event-aligned activities for selected trials, and trial-averaged
- activity (e.g. dF/F, neuropil-corrected dF/F, Calcium events, etc.).
-
- Args:
- key (dict): key of ActivityAlignment master table
- roi (int): imaging segmentation mask
- axs (matplotlib.ax): optional definition of axes for plot.
- Default is plt.subplots(2, 1, figsize=(12, 8))
- title (str): Optional title label
-
- Returns:
- fig (matplotlib.pyplot.figure): Figure of the event aligned activities.
- """
- import matplotlib.pyplot as plt
-
- fig = None
- if axs is None:
- fig, (ax0, ax1) = plt.subplots(2, 1, figsize=(12, 8))
- else:
- ax0, ax1 = axs
-
- aligned_timestamps = (self & key).fetch1("aligned_timestamps")
- trial_ids, aligned_spikes = (
- self.AlignedTrialActivity & key & {"mask": roi}
- ).fetch("trial_id", "aligned_trace", order_by="trial_id")
-
- aligned_spikes = np.vstack(aligned_spikes)
-
- ax0.imshow(
- aligned_spikes,
- cmap="inferno",
- interpolation="nearest",
- aspect="auto",
- extent=(
- aligned_timestamps[0],
- aligned_timestamps[-1],
- 0,
- aligned_spikes.shape[0],
- ),
- )
- ax0.axvline(x=0, linestyle="--", color="white")
- ax0.set_axis_off()
-
- ax1.plot(aligned_timestamps, np.nanmean(aligned_spikes, axis=0))
- ax1.axvline(x=0, linestyle="--", color="black")
- ax1.set_xlabel("Time (s)")
- ax1.set_xlim(aligned_timestamps[0], aligned_timestamps[-1])
-
- if title:
- plt.suptitle(title)
-
- return fig
diff --git a/element_calcium_imaging/export/nwb/nwb.py b/element_calcium_imaging/export/nwb/nwb.py
index 77462348..143400eb 100644
--- a/element_calcium_imaging/export/nwb/nwb.py
+++ b/element_calcium_imaging/export/nwb/nwb.py
@@ -10,19 +10,12 @@
TwoPhotonSeries,
)
-from ... import scan, imaging_no_curation
+from ... import scan, imaging
from ...scan import get_calcium_imaging_files, get_imaging_root_data_dir
logger = dj.logger
-if imaging_no_curation.schema.is_activated():
- imaging = imaging_no_curation
-else:
- raise DataJointError(
- "This export function is designed for the `imaging_no_curation` module."
- )
-
def imaging_session_to_nwb(
session_key,
diff --git a/element_calcium_imaging/imaging.py b/element_calcium_imaging/imaging.py
index 453a050e..506057bd 100644
--- a/element_calcium_imaging/imaging.py
+++ b/element_calcium_imaging/imaging.py
@@ -365,6 +365,7 @@ def make(self, key):
task_mode, output_dir = (ProcessingTask & key).fetch1(
"task_mode", "processing_output_dir"
)
+ acq_software = (scan.Scan & key).fetch1("acq_software")
if not output_dir:
output_dir = ProcessingTask.infer_output_dir(key, relative=True, mkdir=True)
@@ -459,8 +460,8 @@ def make(self, key):
"Caiman pipeline is not yet capable of analyzing 3D scans."
)
- # handle multi-channel tiff image before running CaImAn
- if nchannels > 1:
+ if acq_software == "ScanImage" and nchannels > 1:
+ # handle multi-channel tiff image before running CaImAn
channel_idx = caiman_params.get("channel_to_process", 0)
tmp_dir = pathlib.Path(output_dir) / "channel_separated_tif"
tmp_dir.mkdir(exist_ok=True)
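The channel-separation branch above splits a multi-channel ScanImage stack before handing a single channel to CaImAn. Conceptually (assuming frames are interleaved by channel, the typical layout of multi-channel ScanImage TIFFs), selecting one channel reduces to a strided slice; the variable names mirror the diff but the array is a toy stand-in:

```python
import numpy as np

nchannels = 2
channel_idx = 1  # mirrors caiman_params.get("channel_to_process", 0) above

# Toy stack: 6 interleaved frames (C0, C1, C0, C1, ...) of 2x2 pixels
stack = np.arange(6 * 4).reshape(6, 2, 2)

# Take every nchannels-th frame, starting at the chosen channel
single_channel = stack[channel_idx::nchannels]
assert single_channel.shape == (3, 2, 2)
```

In the actual pipeline the per-channel frames are written out as separate TIFFs under `channel_separated_tif` rather than kept in memory, but the frame selection follows the same stride.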
@@ -538,79 +539,6 @@ def make(self, key):
self.insert1({**key, "package_version": ""})
-@schema
-class Curation(dj.Manual):
- """Curated results
-
- If no curation is applied, the curation_output_dir can be set to
- the value of processing_output_dir.
-
- Attributes:
- Processing (foreign key): Primary key from Processing.
- curation_id (int): Unique curation ID.
- curation_time (datetime): Time of generation of this set of curated results.
- curation_output_dir (str): Output directory of the curated results, relative to
- root data directory.
- manual_curation (bool): If True, manual curation has been performed on this
- result.
- curation_note (str, optional): Notes about the curation task.
- """
-
- definition = """# Curation(s) results
- -> Processing
- curation_id: int
- ---
- curation_time: datetime # Time of generation of this set of curated results
- curation_output_dir: varchar(255) # Output directory of the curated results, relative to root data directory
- manual_curation: bool # Has manual curation been performed on this result?
- curation_note='': varchar(2000)
- """
-
- def create1_from_processing_task(self, key, is_curated=False, curation_note=""):
- """Create a Curation entry for a given ProcessingTask key.
-
- Args:
- key (dict): Primary key set of an entry in the ProcessingTask table.
- is_curated (bool): When True, indicates a manual curation.
- curation_note (str): User's note on the specifics of the curation.
- """
- if key not in Processing():
- raise ValueError(
- f"No corresponding entry in Processing available for: {key};"
- f"Please run `Processing.populate(key)`"
- )
-
- output_dir = (ProcessingTask & key).fetch1("processing_output_dir")
- method, imaging_dataset = get_loader_result(key, ProcessingTask)
-
- if method == "suite2p":
- suite2p_dataset = imaging_dataset
- curation_time = suite2p_dataset.creation_time
- elif method == "caiman":
- caiman_dataset = imaging_dataset
- curation_time = caiman_dataset.creation_time
- elif method == "extract":
- extract_dataset = imaging_dataset
- curation_time = extract_dataset.creation_time
- else:
- raise NotImplementedError("Unknown method: {}".format(method))
-
- # Synthesize curation_id
- curation_id = (
- dj.U().aggr(self & key, n="ifnull(max(curation_id)+1,1)").fetch1("n")
- )
- self.insert1(
- {
- **key,
- "curation_id": curation_id,
- "curation_time": curation_time,
- "curation_output_dir": output_dir,
- "manual_curation": is_curated,
- "curation_note": curation_note,
- }
- )
-
-
# -------------- Motion Correction --------------
@@ -619,13 +547,13 @@ class MotionCorrection(dj.Imported):
"""Results of motion correction shifts performed on the imaging data.
Attributes:
- Curation (foreign key): Primary key from Curation.
+ Processing (foreign key): Primary key from Processing.
scan.Channel.proj(motion_correct_channel='channel') (int): Channel used for
motion correction in this processing task.
"""
definition = """# Results of motion correction
- -> Curation
+ -> Processing
---
-> scan.Channel.proj(motion_correct_channel='channel') # channel used for motion correction in this processing task
"""
@@ -747,7 +675,7 @@ class Summary(dj.Part):
def make(self, key):
"""Populate MotionCorrection with results parsed from analysis outputs"""
- method, imaging_dataset = get_loader_result(key, Curation)
+ method, imaging_dataset = get_loader_result(key, ProcessingTask)
field_keys, _ = (scan.ScanInfo.Field & key).fetch(
"KEY", "field_z", order_by="field_z"
@@ -852,7 +780,7 @@ def make(self, key):
# -- summary images --
motion_correction_key = (
- scan.ScanInfo.Field * Curation & key & field_keys[plane]
+ scan.ScanInfo.Field * Processing & key & field_keys[plane]
).fetch1("KEY")
summary_images.append(
{
@@ -1087,11 +1015,11 @@ class Segmentation(dj.Computed):
"""Result of the Segmentation process.
Attributes:
- Curation (foreign key): Primary key from Curation.
+ Processing (foreign key): Primary key from Processing.
"""
definition = """# Different mask segmentations.
- -> Curation
+ -> Processing
"""
class Mask(dj.Part):
@@ -1130,7 +1058,7 @@ class Mask(dj.Part):
def make(self, key):
"""Populate the Segmentation with the results parsed from analysis outputs."""
- method, imaging_dataset = get_loader_result(key, Curation)
+ method, imaging_dataset = get_loader_result(key, ProcessingTask)
if method == "suite2p":
suite2p_dataset = imaging_dataset
@@ -1352,7 +1280,7 @@ class Trace(dj.Part):
def make(self, key):
"""Populate the Fluorescence with the results parsed from analysis outputs."""
- method, imaging_dataset = get_loader_result(key, Curation)
+ method, imaging_dataset = get_loader_result(key, ProcessingTask)
if method == "suite2p":
suite2p_dataset = imaging_dataset
@@ -1499,7 +1427,7 @@ def key_source(self):
def make(self, key):
"""Populate the Activity with the results parsed from analysis outputs."""
- method, imaging_dataset = get_loader_result(key, Curation)
+ method, imaging_dataset = get_loader_result(key, ProcessingTask)
if method == "suite2p":
if key["extraction_method"] == "suite2p_deconvolution":
diff --git a/element_calcium_imaging/imaging_no_curation.py b/element_calcium_imaging/imaging_no_curation.py
deleted file mode 100644
index 506057bd..00000000
--- a/element_calcium_imaging/imaging_no_curation.py
+++ /dev/null
@@ -1,1646 +0,0 @@
-import importlib
-import inspect
-import pathlib
-from collections.abc import Callable
-
-import datajoint as dj
-import numpy as np
-from element_interface.utils import dict_to_uuid, find_full_path, find_root_directory
-
-from . import imaging_report, scan
-from .scan import (
- get_calcium_imaging_files,
- get_imaging_root_data_dir,
- get_processed_root_data_dir,
-)
-
-schema = dj.schema()
-
-_linking_module = None
-
-
-def activate(
- imaging_schema_name: str,
- scan_schema_name: str = None,
- *,
- create_schema: bool = True,
- create_tables: bool = True,
- linking_module: str = None,
-):
- """Activate this schema.
-
- Args:
- imaging_schema_name (str): Schema name on the database server to activate the
- `imaging` module.
- scan_schema_name (str): Schema name on the database server to activate the
- `scan` module. Omitted, if the `scan` module is already activated.
- create_schema (bool): When True (default), create schema in the database if it
- does not yet exist.
- create_tables (bool): When True (default), create tables in the database if they
- do not yet exist.
- linking_module (str): A module name or a module containing the required
- dependencies to activate the `imaging` module: + all that are required by
- the `scan` module.
-
- Dependencies:
- Upstream tables:
- + Session: A parent table to Scan, identifying a scanning session.
- + Equipment: A parent table to Scan, identifying a scanning device.
- """
-
- if isinstance(linking_module, str):
- linking_module = importlib.import_module(linking_module)
- assert inspect.ismodule(
- linking_module
- ), "The argument 'dependency' must be a module's name or a module"
-
- global _linking_module
- _linking_module = linking_module
-
- scan.activate(
- scan_schema_name,
- create_schema=create_schema,
- create_tables=create_tables,
- linking_module=linking_module,
- )
- schema.activate(
- imaging_schema_name,
- create_schema=create_schema,
- create_tables=create_tables,
- add_objects=_linking_module.__dict__,
- )
- imaging_report.activate(f"{imaging_schema_name}_report", imaging_schema_name)
-
-
-# -------------- Table declarations --------------
-
-
-@schema
-class ProcessingMethod(dj.Lookup):
- """Package used for processing of calcium imaging data (e.g. Suite2p, CaImAn, etc.).
-
- Attributes:
- processing_method (str): Processing method.
- processing_method_desc (str): Processing method description.
- """
-
- definition = """# Package used for processing of calcium imaging data (e.g. Suite2p, CaImAn, etc.).
- processing_method: char(8)
- ---
- processing_method_desc: varchar(1000) # Processing method description
- """
-
- contents = [
- ("suite2p", "suite2p analysis suite"),
- ("caiman", "caiman analysis suite"),
- ("extract", "extract analysis suite"),
- ]
-
-
-@schema
-class ProcessingParamSet(dj.Lookup):
- """Parameter set used for the processing of the calcium imaging scans,
- including both the analysis suite and its respective input parameters.
-
-    A hash of the parameter set is also stored in order to avoid
-    duplicate entries.
-
- Attributes:
- paramset_idx (int): Unique parameter set ID.
- ProcessingMethod (foreign key): A primary key from ProcessingMethod.
- paramset_desc (str): Parameter set description.
- param_set_hash (uuid): A universally unique identifier for the parameter set.
- params (longblob): Parameter Set, a dictionary of all applicable parameters to
- the analysis suite.
- """
-
- definition = """# Processing Parameter Set
- paramset_idx: smallint # Unique parameter set ID.
- ---
- -> ProcessingMethod
- paramset_desc: varchar(1280) # Parameter-set description
- param_set_hash: uuid # A universally unique identifier for the parameter set
- unique index (param_set_hash)
- params: longblob # Parameter Set, a dictionary of all applicable parameters to the analysis suite.
- """
-
- @classmethod
- def insert_new_params(
- cls,
- processing_method: str,
- paramset_idx: int,
- paramset_desc: str,
- params: dict,
- ):
- """Insert a parameter set into ProcessingParamSet table.
-
- This function automates the parameter set hashing and avoids insertion of an
- existing parameter set.
-
-        Args:
- processing_method (str): Processing method/package used for processing of
- calcium imaging.
- paramset_idx (int): Unique parameter set ID.
- paramset_desc (str): Parameter set description.
- params (dict): Parameter Set, all applicable parameters to the analysis
- suite.
- """
-        if processing_method == "extract":
-            if params.get("extract") is None or params.get("suite2p") is None:
-                raise ValueError(
-                    "Please provide the processing parameters in the"
-                    " {'suite2p': {...}, 'extract': {...}} dictionary format."
-                )
-
-            # Restrict Suite2p to motion correction only (no ROI detection);
-            # EXTRACT performs the segmentation step.
-            params["suite2p"]["do_registration"] = True
-            params["suite2p"]["roidetect"] = False
-
- param_dict = {
- "processing_method": processing_method,
- "paramset_idx": paramset_idx,
- "paramset_desc": paramset_desc,
- "params": params,
- "param_set_hash": dict_to_uuid(params),
- }
- q_param = cls & {"param_set_hash": param_dict["param_set_hash"]}
-
-        if q_param:  # If the specified param-set already exists
-            existing_idx = q_param.fetch1("paramset_idx")
-            if existing_idx == paramset_idx:  # Same set, same index: job done
-                return
-            else:  # Same params under a different index: likely human error
-                raise dj.DataJointError(
-                    "The specified param-set already exists"
-                    " - paramset_idx: {}".format(existing_idx)
-                )
- else:
- cls.insert1(param_dict)
-
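The `param_set_hash` deduplication above depends on a deterministic hash of the parameter dictionary. A minimal stdlib sketch of such a dict-to-UUID hash (an illustration only, not the actual `element_interface.utils.dict_to_uuid` implementation):

```python
import hashlib
import json
import uuid


def dict_to_uuid_sketch(params: dict) -> uuid.UUID:
    """Hash a parameter dictionary into a deterministic UUID.

    Keys are sorted so that logically identical dictionaries always map
    to the same UUID, regardless of insertion order.
    """
    encoded = json.dumps(params, sort_keys=True).encode()
    return uuid.UUID(hashlib.md5(encoded).hexdigest())


# Reordered keys produce the same hash, so re-inserting an identical
# parameter set can be detected before it reaches the database.
a = dict_to_uuid_sketch({"do_registration": True, "roidetect": False})
b = dict_to_uuid_sketch({"roidetect": False, "do_registration": True})
assert a == b
```

This mirrors the dedup check in `insert_new_params`: the restriction `cls & {"param_set_hash": ...}` finds an existing entry whenever the incoming parameters hash to the same value.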
-
-@schema
-class CellCompartment(dj.Lookup):
- """Cell compartments that can be imaged (e.g. 'axon', 'soma', etc.)
-
- Attributes:
- cell_compartment (str): Cell compartment.
- """
-
- definition = """# Cell compartments
- cell_compartment: char(16)
- """
-
- contents = zip(["axon", "soma", "bouton"])
-
-
-@schema
-class MaskType(dj.Lookup):
- """Available labels for segmented masks (e.g. 'soma', 'axon', 'dendrite', 'neuropil').
-
- Attributes:
- mask_type (str): Mask type.
- """
-
- definition = """# Possible types of a segmented mask
- mask_type: varchar(16)
- """
-
- contents = zip(["soma", "axon", "dendrite", "neuropil", "artefact", "unknown"])
-
-
-# -------------- Trigger a processing routine --------------
-
-
-@schema
-class ProcessingTask(dj.Manual):
- """A pairing of processing params and scans to be loaded or triggered
-
- This table defines a calcium imaging processing task for a combination of a
- `Scan` and a `ProcessingParamSet` entries, including all the inputs (scan, method,
- method's parameters). The task defined here is then run in the downstream table
- `Processing`. This table supports definitions of both loading of pre-generated results
- and the triggering of new analysis for all supported analysis methods.
-
- Attributes:
- scan.Scan (foreign key): Primary key from scan.Scan.
- ProcessingParamSet (foreign key): Primary key from ProcessingParamSet.
- processing_output_dir (str): Output directory of the processed scan relative to the root data directory.
- task_mode (str): One of 'load' (load computed analysis results) or 'trigger'
- (trigger computation).
- """
-
- definition = """# Manual table for defining a processing task ready to be run
- -> scan.Scan
- -> ProcessingParamSet
- ---
- processing_output_dir: varchar(255) # Output directory of the processed scan relative to root data directory
- task_mode='load': enum('load', 'trigger') # 'load': load computed analysis results, 'trigger': trigger computation
- """
-
- @classmethod
- def infer_output_dir(cls, key, relative=False, mkdir=False):
- """Infer an output directory for an entry in ProcessingTask table.
-
- Args:
- key (dict): Primary key from the ProcessingTask table.
- relative (bool): If True, processing_output_dir is returned relative to
- imaging_root_dir. Default False.
-            mkdir (bool): If True, create the processing_output_dir directory.
-                Default False.
-
-        Returns:
-            output_dir (pathlib.Path): A default output directory for the processed
-                results (processing_output_dir in ProcessingTask) based on the
-                following convention:
-                processed_dir / scan_dir / {processing_method}_{paramset_idx}
-                e.g.: sub4/sess1/scan0/suite2p_0
- """
- acq_software = (scan.Scan & key).fetch1("acq_software")
- scan_dir = find_full_path(
- get_imaging_root_data_dir(),
- get_calcium_imaging_files(key, acq_software)[0],
- ).parent
- root_dir = find_root_directory(get_imaging_root_data_dir(), scan_dir)
-
- method = (
- (ProcessingParamSet & key).fetch1("processing_method").replace(".", "-")
- )
-
- processed_dir = pathlib.Path(get_processed_root_data_dir())
- output_dir = (
- processed_dir
- / scan_dir.relative_to(root_dir)
- / f'{method}_{key["paramset_idx"]}'
- )
-
- if mkdir:
- output_dir.mkdir(parents=True, exist_ok=True)
-
- return output_dir.relative_to(processed_dir) if relative else output_dir
-
- @classmethod
- def generate(cls, scan_key, paramset_idx=0):
- """Generate a ProcessingTask for a Scan using an parameter ProcessingParamSet
-
- Generate an entry in the ProcessingTask table for a particular scan using an
- existing parameter set from the ProcessingParamSet table.
-
- Args:
- scan_key (dict): Primary key from Scan table.
- paramset_idx (int): Unique parameter set ID.
- """
- key = {**scan_key, "paramset_idx": paramset_idx}
-
- processed_dir = get_processed_root_data_dir()
- output_dir = cls.infer_output_dir(key, relative=False, mkdir=True)
-
- method = (ProcessingParamSet & {"paramset_idx": paramset_idx}).fetch1(
- "processing_method"
- )
-
- try:
- if method == "suite2p":
- from element_interface import suite2p_loader
-
- suite2p_loader.Suite2p(output_dir)
- elif method == "caiman":
- from element_interface import caiman_loader
-
- caiman_loader.CaImAn(output_dir)
- elif method == "extract":
- from element_interface import extract_loader
-
- extract_loader.EXTRACT(output_dir)
-
- else:
- raise NotImplementedError(
- "Unknown/unimplemented method: {}".format(method)
- )
- except FileNotFoundError:
- task_mode = "trigger"
- else:
- task_mode = "load"
-
- cls.insert1(
- {
- **key,
- "processing_output_dir": output_dir.relative_to(
- processed_dir
- ).as_posix(),
- "task_mode": task_mode,
- }
- )
-
- auto_generate_entries = generate
-
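The directory convention documented by `infer_output_dir` (processed_dir / scan_dir / {processing_method}_{paramset_idx}) can be illustrated with plain `pathlib`. The paths below are hypothetical; the real method resolves them via `get_imaging_root_data_dir`, `find_full_path`, and `find_root_directory`:

```python
import pathlib


def sketch_output_dir(
    processed_root: pathlib.Path,
    root_dir: pathlib.Path,
    scan_dir: pathlib.Path,
    method: str,
    paramset_idx: int,
) -> pathlib.Path:
    """Mirror the convention: processed_dir / scan_dir / {method}_{paramset_idx}."""
    return processed_root / scan_dir.relative_to(root_dir) / f"{method}_{paramset_idx}"


out = sketch_output_dir(
    pathlib.Path("/processed"),       # hypothetical processed root
    pathlib.Path("/raw"),             # hypothetical raw-data root
    pathlib.Path("/raw/sub4/sess1/scan0"),
    "suite2p",
    0,
)
assert out.as_posix() == "/processed/sub4/sess1/scan0/suite2p_0"
```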
-
-@schema
-class Processing(dj.Computed):
- """Perform the computation of an entry (task) defined in the ProcessingTask table.
- The computation is performed only on the scans with ScanInfo inserted.
-
- Attributes:
- ProcessingTask (foreign key): Primary key from ProcessingTask.
- processing_time (datetime): Process completion datetime.
- package_version (str, optional): Version of the analysis package used in
- processing the data.
- """
-
- definition = """
- -> ProcessingTask
- ---
- processing_time : datetime # Time of generation of this set of processed, segmented results
- package_version='' : varchar(16)
- """
-
- # Run processing only on Scan with ScanInfo inserted
- @property
- def key_source(self):
- """Limit the Processing to Scans that have their metadata ingested to the
- database."""
-
- return ProcessingTask & scan.ScanInfo
-
- def make(self, key):
- """Execute the calcium imaging analysis defined by the ProcessingTask."""
-
- task_mode, output_dir = (ProcessingTask & key).fetch1(
- "task_mode", "processing_output_dir"
- )
- acq_software = (scan.Scan & key).fetch1("acq_software")
-
- if not output_dir:
- output_dir = ProcessingTask.infer_output_dir(key, relative=True, mkdir=True)
- # update processing_output_dir
- ProcessingTask.update1(
- {**key, "processing_output_dir": output_dir.as_posix()}
- )
-
- try:
- output_dir = find_full_path(
- get_imaging_root_data_dir(), output_dir
- ).as_posix()
- except FileNotFoundError as e:
- if task_mode == "trigger":
- processed_dir = pathlib.Path(get_processed_root_data_dir())
- output_dir = processed_dir / output_dir
- output_dir.mkdir(parents=True, exist_ok=True)
- else:
- raise e
-
- if task_mode == "load":
- method, imaging_dataset = get_loader_result(key, ProcessingTask)
- if method == "suite2p":
- if (scan.ScanInfo & key).fetch1("nrois") > 0:
- raise NotImplementedError(
- "Suite2p ingestion error - Unable to handle"
- + " ScanImage multi-ROI scanning mode yet"
- )
- suite2p_dataset = imaging_dataset
- key = {**key, "processing_time": suite2p_dataset.creation_time}
- elif method == "caiman":
- caiman_dataset = imaging_dataset
- key = {**key, "processing_time": caiman_dataset.creation_time}
- elif method == "extract":
- raise NotImplementedError(
- "To use EXTRACT with this DataJoint Element please set `task_mode=trigger`"
- )
- else:
- raise NotImplementedError("Unknown method: {}".format(method))
- elif task_mode == "trigger":
- method = (ProcessingParamSet * ProcessingTask & key).fetch1(
- "processing_method"
- )
-
- image_files = (scan.ScanInfo.ScanFile & key).fetch("file_path")
- image_files = [
- find_full_path(get_imaging_root_data_dir(), image_file)
- for image_file in image_files
- ]
-
- if method == "suite2p":
- import suite2p
-
- suite2p_params = (ProcessingTask * ProcessingParamSet & key).fetch1(
- "params"
- )
- suite2p_params["save_path0"] = output_dir
- (
- suite2p_params["fs"],
- suite2p_params["nplanes"],
- suite2p_params["nchannels"],
- ) = (scan.ScanInfo & key).fetch1("fps", "ndepths", "nchannels")
-
- input_format = pathlib.Path(image_files[0]).suffix
- suite2p_params["input_format"] = input_format[1:]
-
- suite2p_paths = {
- "data_path": [image_files[0].parent.as_posix()],
- "tiff_list": [f.as_posix() for f in image_files],
- }
-
- suite2p.run_s2p(ops=suite2p_params, db=suite2p_paths) # Run suite2p
-
- _, imaging_dataset = get_loader_result(key, ProcessingTask)
- suite2p_dataset = imaging_dataset
- key = {**key, "processing_time": suite2p_dataset.creation_time}
-
- elif method == "caiman":
- from element_interface.caiman_loader import _process_scanimage_tiff
- from element_interface.run_caiman import run_caiman
-
- caiman_params = (ProcessingTask * ProcessingParamSet & key).fetch1(
- "params"
- )
- sampling_rate, ndepths, nchannels = (scan.ScanInfo & key).fetch1(
- "fps", "ndepths", "nchannels"
- )
-
- is3D = bool(ndepths > 1)
- if is3D:
- raise NotImplementedError(
- "Caiman pipeline is not yet capable of analyzing 3D scans."
- )
-
- if acq_software == "ScanImage" and nchannels > 1:
- # handle multi-channel tiff image before running CaImAn
- channel_idx = caiman_params.get("channel_to_process", 0)
- tmp_dir = pathlib.Path(output_dir) / "channel_separated_tif"
- tmp_dir.mkdir(exist_ok=True)
- _process_scanimage_tiff(
- [f.as_posix() for f in image_files], output_dir=tmp_dir
- )
- image_files = tmp_dir.glob(f"*_chn{channel_idx}.tif")
-
- run_caiman(
- file_paths=[f.as_posix() for f in image_files],
- parameters=caiman_params,
- sampling_rate=sampling_rate,
- output_dir=output_dir,
- is3D=is3D,
- )
-
- _, imaging_dataset = get_loader_result(key, ProcessingTask)
- caiman_dataset = imaging_dataset
- key["processing_time"] = caiman_dataset.creation_time
-
- elif method == "extract":
- import suite2p
- from element_interface.extract_trigger import EXTRACT_trigger
- from scipy.io import savemat
-
- # Motion Correction with Suite2p
- params = (ProcessingTask * ProcessingParamSet & key).fetch1("params")
-
- params["suite2p"]["save_path0"] = output_dir
- (
- params["suite2p"]["fs"],
- params["suite2p"]["nplanes"],
- params["suite2p"]["nchannels"],
- ) = (scan.ScanInfo & key).fetch1("fps", "ndepths", "nchannels")
-
- input_format = pathlib.Path(image_files[0]).suffix
- params["suite2p"]["input_format"] = input_format[1:]
-
- suite2p_paths = {
- "data_path": [image_files[0].parent.as_posix()],
- "tiff_list": [f.as_posix() for f in image_files],
- }
-
- suite2p.run_s2p(ops=params["suite2p"], db=suite2p_paths)
-
-            # Convert data.bin to registered_scan.mat
- scanfile_fullpath = pathlib.Path(output_dir) / "suite2p/plane0/data.bin"
-
- data_shape = (scan.ScanInfo * scan.ScanInfo.Field & key).fetch1(
- "nframes", "px_height", "px_width"
- )
- data = np.memmap(scanfile_fullpath, shape=data_shape, dtype=np.int16)
-
- scan_matlab_fullpath = scanfile_fullpath.parent / "registered_scan.mat"
-
- # Save the motion corrected movie (data.bin) in a .mat file
- savemat(
- scan_matlab_fullpath,
- {"M": np.transpose(data, axes=[1, 2, 0])},
- )
-
- # Execute EXTRACT
-
- ex = EXTRACT_trigger(
- scan_matlab_fullpath, params["extract"], output_dir
- )
- ex.run()
-
- _, extract_dataset = get_loader_result(key, ProcessingTask)
- key["processing_time"] = extract_dataset.creation_time
-
- else:
- raise ValueError(f"Unknown task mode: {task_mode}")
-
- self.insert1({**key, "package_version": ""})
-
-
-# -------------- Motion Correction --------------
-
-
-@schema
-class MotionCorrection(dj.Imported):
- """Results of motion correction shifts performed on the imaging data.
-
- Attributes:
- Processing (foreign key): Primary key from Processing.
- scan.Channel.proj(motion_correct_channel='channel') (int): Channel used for
- motion correction in this processing task.
- """
-
- definition = """# Results of motion correction
- -> Processing
- ---
- -> scan.Channel.proj(motion_correct_channel='channel') # channel used for motion correction in this processing task
- """
-
- class RigidMotionCorrection(dj.Part):
- """Details of rigid motion correction performed on the imaging data.
-
- Attributes:
- MotionCorrection (foreign key): Primary key from MotionCorrection.
- outlier_frames (longblob): Mask with true for frames with outlier shifts
- (already corrected).
- y_shifts (longblob): y motion correction shifts (pixels).
- x_shifts (longblob): x motion correction shifts (pixels).
- z_shifts (longblob, optional): z motion correction shifts (z-drift, pixels).
- y_std (float): standard deviation of y shifts across all frames (pixels).
- x_std (float): standard deviation of x shifts across all frames (pixels).
- z_std (float, optional): standard deviation of z shifts across all frames
- (pixels).
- """
-
- definition = """# Details of rigid motion correction performed on the imaging data
- -> master
- ---
- outlier_frames=null : longblob # mask with true for frames with outlier shifts (already corrected)
- y_shifts : longblob # (pixels) y motion correction shifts
- x_shifts : longblob # (pixels) x motion correction shifts
- z_shifts=null : longblob # (pixels) z motion correction shifts (z-drift)
- y_std : float # (pixels) standard deviation of y shifts across all frames
- x_std : float # (pixels) standard deviation of x shifts across all frames
- z_std=null : float # (pixels) standard deviation of z shifts across all frames
- """
-
- class NonRigidMotionCorrection(dj.Part):
- """Piece-wise rigid motion correction - tile the FOV into multiple 3D
- blocks/patches.
-
- Attributes:
- MotionCorrection (foreign key): Primary key from MotionCorrection.
-            outlier_frames (longblob, optional): Mask with true for frames with
-                outlier shifts (already corrected).
- block_height (int): Block height in pixels.
- block_width (int): Block width in pixels.
- block_depth (int): Block depth in pixels.
- block_count_y (int): Number of blocks tiled in the y direction.
- block_count_x (int): Number of blocks tiled in the x direction.
- block_count_z (int): Number of blocks tiled in the z direction.
- """
-
- definition = """# Details of non-rigid motion correction performed on the imaging data
- -> master
- ---
- outlier_frames=null : longblob # mask with true for frames with outlier shifts (already corrected)
- block_height : int # (pixels)
- block_width : int # (pixels)
- block_depth : int # (pixels)
- block_count_y : int # number of blocks tiled in the y direction
- block_count_x : int # number of blocks tiled in the x direction
- block_count_z : int # number of blocks tiled in the z direction
- """
-
- class Block(dj.Part):
- """FOV-tiled blocks used for non-rigid motion correction.
-
- Attributes:
- NonRigidMotionCorrection (foreign key): Primary key from
- NonRigidMotionCorrection.
- block_id (int): Unique block ID.
-            block_y (longblob): y_start and y_end in pixels for this block.
-            block_x (longblob): x_start and x_end in pixels for this block.
-            block_z (longblob): z_start and z_end in pixels for this block.
-            y_shifts (longblob): y motion correction shifts for every frame in pixels.
-            x_shifts (longblob): x motion correction shifts for every frame in pixels.
-            z_shifts (longblob, optional): z motion correction shifts for every frame
-                in pixels.
-            y_std (float): standard deviation of y shifts across all frames in pixels.
-            x_std (float): standard deviation of x shifts across all frames in pixels.
-            z_std (float, optional): standard deviation of z shifts across all frames
-                in pixels.
- """
-
- definition = """# FOV-tiled blocks used for non-rigid motion correction
- -> master.NonRigidMotionCorrection
- block_id : int
- ---
- block_y : longblob # (y_start, y_end) in pixel of this block
- block_x : longblob # (x_start, x_end) in pixel of this block
- block_z : longblob # (z_start, z_end) in pixel of this block
- y_shifts : longblob # (pixels) y motion correction shifts for every frame
- x_shifts : longblob # (pixels) x motion correction shifts for every frame
-        z_shifts=null : longblob # (pixels) z motion correction shifts for every frame
- y_std : float # (pixels) standard deviation of y shifts across all frames
- x_std : float # (pixels) standard deviation of x shifts across all frames
- z_std=null : float # (pixels) standard deviation of z shifts across all frames
- """
-
- class Summary(dj.Part):
- """Summary images for each field and channel after corrections.
-
- Attributes:
- MotionCorrection (foreign key): Primary key from MotionCorrection.
- scan.ScanInfo.Field (foreign key): Primary key from scan.ScanInfo.Field.
- ref_image (longblob): Image used as alignment template.
- average_image (longblob): Mean of registered frames.
- correlation_image (longblob, optional): Correlation map (computed during
- cell detection).
- max_proj_image (longblob, optional): Max of registered frames.
- """
-
- definition = """# Summary images for each field and channel after corrections
- -> master
- -> scan.ScanInfo.Field
- ---
- ref_image : longblob # image used as alignment template
- average_image : longblob # mean of registered frames
- correlation_image=null : longblob # correlation map (computed during cell detection)
- max_proj_image=null : longblob # max of registered frames
- """
-
- def make(self, key):
- """Populate MotionCorrection with results parsed from analysis outputs"""
-
- method, imaging_dataset = get_loader_result(key, ProcessingTask)
-
- field_keys, _ = (scan.ScanInfo.Field & key).fetch(
- "KEY", "field_z", order_by="field_z"
- )
-
- if method in ["suite2p", "extract"]:
- suite2p_dataset = imaging_dataset
-
- motion_correct_channel = suite2p_dataset.planes[0].alignment_channel
-
- # ---- iterate through all s2p plane outputs ----
- rigid_correction, nonrigid_correction, nonrigid_blocks = {}, {}, {}
- summary_images = []
- for idx, (plane, s2p) in enumerate(suite2p_dataset.planes.items()):
- # -- rigid motion correction --
- if idx == 0:
- rigid_correction = {
- **key,
- "y_shifts": s2p.ops["yoff"],
- "x_shifts": s2p.ops["xoff"],
- "z_shifts": np.full_like(s2p.ops["xoff"], 0),
- "y_std": np.nanstd(s2p.ops["yoff"]),
- "x_std": np.nanstd(s2p.ops["xoff"]),
- "z_std": np.nan,
- "outlier_frames": s2p.ops["badframes"],
- }
- else:
- rigid_correction["y_shifts"] = np.vstack(
- [rigid_correction["y_shifts"], s2p.ops["yoff"]]
- )
- rigid_correction["y_std"] = np.nanstd(
- rigid_correction["y_shifts"].flatten()
- )
- rigid_correction["x_shifts"] = np.vstack(
- [rigid_correction["x_shifts"], s2p.ops["xoff"]]
- )
- rigid_correction["x_std"] = np.nanstd(
- rigid_correction["x_shifts"].flatten()
- )
- rigid_correction["outlier_frames"] = np.logical_or(
- rigid_correction["outlier_frames"], s2p.ops["badframes"]
- )
- # -- non-rigid motion correction --
- if s2p.ops["nonrigid"]:
- if idx == 0:
- nonrigid_correction = {
- **key,
- "block_height": s2p.ops["block_size"][0],
- "block_width": s2p.ops["block_size"][1],
- "block_depth": 1,
- "block_count_y": s2p.ops["nblocks"][0],
- "block_count_x": s2p.ops["nblocks"][1],
- "block_count_z": len(suite2p_dataset.planes),
- "outlier_frames": s2p.ops["badframes"],
- }
- else:
- nonrigid_correction["outlier_frames"] = np.logical_or(
- nonrigid_correction["outlier_frames"],
- s2p.ops["badframes"],
- )
- for b_id, (b_y, b_x, bshift_y, bshift_x) in enumerate(
- zip(
- s2p.ops["xblock"],
- s2p.ops["yblock"],
- s2p.ops["yoff1"].T,
- s2p.ops["xoff1"].T,
- )
- ):
- if b_id in nonrigid_blocks:
- nonrigid_blocks[b_id]["y_shifts"] = np.vstack(
- [nonrigid_blocks[b_id]["y_shifts"], bshift_y]
- )
- nonrigid_blocks[b_id]["y_std"] = np.nanstd(
- nonrigid_blocks[b_id]["y_shifts"].flatten()
- )
- nonrigid_blocks[b_id]["x_shifts"] = np.vstack(
- [nonrigid_blocks[b_id]["x_shifts"], bshift_x]
- )
- nonrigid_blocks[b_id]["x_std"] = np.nanstd(
- nonrigid_blocks[b_id]["x_shifts"].flatten()
- )
- else:
- nonrigid_blocks[b_id] = {
- **key,
- "block_id": b_id,
- "block_y": b_y,
- "block_x": b_x,
- "block_z": np.full_like(b_x, plane),
- "y_shifts": bshift_y,
- "x_shifts": bshift_x,
- "z_shifts": np.full(
- (
- len(suite2p_dataset.planes),
- len(bshift_x),
- ),
- 0,
- ),
- "y_std": np.nanstd(bshift_y),
- "x_std": np.nanstd(bshift_x),
- "z_std": np.nan,
- }
-
- # -- summary images --
- motion_correction_key = (
- scan.ScanInfo.Field * Processing & key & field_keys[plane]
- ).fetch1("KEY")
- summary_images.append(
- {
- **motion_correction_key,
- "ref_image": s2p.ref_image,
- "average_image": s2p.mean_image,
- "correlation_image": s2p.correlation_map,
- "max_proj_image": s2p.max_proj_image,
- }
- )
-
- self.insert1({**key, "motion_correct_channel": motion_correct_channel})
- if rigid_correction:
- self.RigidMotionCorrection.insert1(rigid_correction)
- if nonrigid_correction:
- self.NonRigidMotionCorrection.insert1(nonrigid_correction)
- self.Block.insert(nonrigid_blocks.values())
- self.Summary.insert(summary_images)
- elif method == "caiman":
- caiman_dataset = imaging_dataset
-
- self.insert1(
- {
- **key,
- "motion_correct_channel": caiman_dataset.alignment_channel,
- }
- )
-
- is3D = caiman_dataset.params.motion["is3D"]
- if not caiman_dataset.params.motion["pw_rigid"]:
- # -- rigid motion correction --
- rigid_correction = {
- **key,
- "x_shifts": caiman_dataset.motion_correction["shifts_rig"][:, 0],
- "y_shifts": caiman_dataset.motion_correction["shifts_rig"][:, 1],
- "z_shifts": (
- caiman_dataset.motion_correction["shifts_rig"][:, 2]
- if is3D
- else np.full_like(
- caiman_dataset.motion_correction["shifts_rig"][:, 0],
- 0,
- )
- ),
- "x_std": np.nanstd(
- caiman_dataset.motion_correction["shifts_rig"][:, 0]
- ),
- "y_std": np.nanstd(
- caiman_dataset.motion_correction["shifts_rig"][:, 1]
- ),
- "z_std": (
- np.nanstd(caiman_dataset.motion_correction["shifts_rig"][:, 2])
- if is3D
- else np.nan
- ),
- "outlier_frames": None,
- }
-
- self.RigidMotionCorrection.insert1(rigid_correction)
- else:
- # -- non-rigid motion correction --
- nonrigid_correction = {
- **key,
- "block_height": (
- caiman_dataset.params.motion["strides"][0]
- + caiman_dataset.params.motion["overlaps"][0]
- ),
- "block_width": (
- caiman_dataset.params.motion["strides"][1]
- + caiman_dataset.params.motion["overlaps"][1]
- ),
- "block_depth": (
- caiman_dataset.params.motion["strides"][2]
- + caiman_dataset.params.motion["overlaps"][2]
- if is3D
- else 1
- ),
- "block_count_x": len(
- set(caiman_dataset.motion_correction["coord_shifts_els"][:, 0])
- ),
- "block_count_y": len(
- set(caiman_dataset.motion_correction["coord_shifts_els"][:, 2])
- ),
- "block_count_z": (
- len(
- set(
- caiman_dataset.motion_correction["coord_shifts_els"][
- :, 4
- ]
- )
- )
- if is3D
- else 1
- ),
- "outlier_frames": None,
- }
-
- nonrigid_blocks = []
- for b_id in range(
- len(caiman_dataset.motion_correction["x_shifts_els"][0, :])
- ):
- nonrigid_blocks.append(
- {
- **key,
- "block_id": b_id,
- "block_x": np.arange(
- *caiman_dataset.motion_correction["coord_shifts_els"][
- b_id, 0:2
- ]
- ),
- "block_y": np.arange(
- *caiman_dataset.motion_correction["coord_shifts_els"][
- b_id, 2:4
- ]
- ),
- "block_z": (
- np.arange(
- *caiman_dataset.motion_correction[
- "coord_shifts_els"
- ][b_id, 4:6]
- )
- if is3D
- else np.full_like(
- np.arange(
- *caiman_dataset.motion_correction[
- "coord_shifts_els"
- ][b_id, 0:2]
- ),
- 0,
- )
- ),
- "x_shifts": caiman_dataset.motion_correction[
- "x_shifts_els"
- ][:, b_id],
- "y_shifts": caiman_dataset.motion_correction[
- "y_shifts_els"
- ][:, b_id],
- "z_shifts": (
- caiman_dataset.motion_correction["z_shifts_els"][
- :, b_id
- ]
- if is3D
- else np.full_like(
- caiman_dataset.motion_correction["x_shifts_els"][
- :, b_id
- ],
- 0,
- )
- ),
- "x_std": np.nanstd(
- caiman_dataset.motion_correction["x_shifts_els"][
- :, b_id
- ]
- ),
- "y_std": np.nanstd(
- caiman_dataset.motion_correction["y_shifts_els"][
- :, b_id
- ]
- ),
- "z_std": (
- np.nanstd(
- caiman_dataset.motion_correction["z_shifts_els"][
- :, b_id
- ]
- )
- if is3D
- else np.nan
- ),
- }
- )
-
- self.NonRigidMotionCorrection.insert1(nonrigid_correction)
- self.Block.insert(nonrigid_blocks)
-
- # -- summary images --
- summary_images = [
- {
- **key,
- **fkey,
- "ref_image": ref_image,
- "average_image": ave_img,
- "correlation_image": corr_img,
- "max_proj_image": max_img,
- }
- for fkey, ref_image, ave_img, corr_img, max_img in zip(
- field_keys,
- (
- caiman_dataset.motion_correction["reference_image"].transpose(
- 2, 0, 1
- )
- if is3D
- else caiman_dataset.motion_correction["reference_image"][...][
- np.newaxis, ...
- ]
- ),
- (
- caiman_dataset.motion_correction["average_image"].transpose(
- 2, 0, 1
- )
- if is3D
- else caiman_dataset.motion_correction["average_image"][...][
- np.newaxis, ...
- ]
- ),
- (
- caiman_dataset.motion_correction["correlation_image"].transpose(
- 2, 0, 1
- )
- if is3D
- else caiman_dataset.motion_correction["correlation_image"][...][
- np.newaxis, ...
- ]
- ),
- (
- caiman_dataset.motion_correction["max_image"].transpose(2, 0, 1)
- if is3D
- else caiman_dataset.motion_correction["max_image"][...][
- np.newaxis, ...
- ]
- ),
- )
- ]
- self.Summary.insert(summary_images)
- else:
- raise NotImplementedError("Unknown/unimplemented method: {}".format(method))
-
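In the CaImAn piecewise-rigid branch above, block dimensions are derived as `strides + overlaps`, falling back to a depth of 1 for planar scans. A tiny sketch of that geometry (illustrative numbers only):

```python
def block_dims(strides, overlaps, is3D=False):
    """Block size along each axis is stride plus overlap (CaImAn convention);
    depth falls back to 1 for planar (2D) scans."""
    height = strides[0] + overlaps[0]
    width = strides[1] + overlaps[1]
    depth = strides[2] + overlaps[2] if is3D else 1
    return height, width, depth


assert block_dims((48, 48), (24, 24)) == (72, 72, 1)
assert block_dims((48, 48, 4), (24, 24, 2), is3D=True) == (72, 72, 6)
```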
-
-# -------------- Segmentation --------------
-
-
-@schema
-class Segmentation(dj.Computed):
- """Result of the Segmentation process.
-
- Attributes:
- Processing (foreign key): Primary key from Processing.
- """
-
- definition = """# Different mask segmentations.
- -> Processing
- """
-
- class Mask(dj.Part):
- """Details of the masks identified from the Segmentation procedure.
-
- Attributes:
- Segmentation (foreign key): Primary key from Segmentation.
- mask (int): Unique mask ID.
- scan.Channel.proj(segmentation_channel='channel') (foreign key): Channel
- used for segmentation.
- mask_npix (int): Number of pixels in ROIs.
- mask_center_x (int): Center x coordinate in pixel.
- mask_center_y (int): Center y coordinate in pixel.
- mask_center_z (int): Center z coordinate in pixel.
- mask_xpix (longblob): X coordinates in pixels.
- mask_ypix (longblob): Y coordinates in pixels.
- mask_zpix (longblob): Z coordinates in pixels.
- mask_weights (longblob): Weights of the mask at the indices above.
- """
-
- definition = """ # A mask produced by segmentation.
- -> master
- mask : smallint
- ---
- -> scan.Channel.proj(segmentation_channel='channel') # channel used for segmentation
- mask_npix : int # number of pixels in ROIs
- mask_center_x : int # center x coordinate in pixel
- mask_center_y : int # center y coordinate in pixel
- mask_center_z=null : int # center z coordinate in pixel
- mask_xpix : longblob # x coordinates in pixels
- mask_ypix : longblob # y coordinates in pixels
- mask_zpix=null : longblob # z coordinates in pixels
- mask_weights : longblob # weights of the mask at the indices above
- """
-
- def make(self, key):
- """Populate the Segmentation with the results parsed from analysis outputs."""
-
- method, imaging_dataset = get_loader_result(key, ProcessingTask)
-
- if method == "suite2p":
- suite2p_dataset = imaging_dataset
-
- # ---- iterate through all s2p plane outputs ----
- masks, cells = [], []
- for plane, s2p in suite2p_dataset.planes.items():
-                mask_count = len(masks)  # increment mask id across planes
- for mask_idx, (is_cell, cell_prob, mask_stat) in enumerate(
- zip(s2p.iscell, s2p.cell_prob, s2p.stat)
- ):
- masks.append(
- {
- **key,
- "mask": mask_idx + mask_count,
- "segmentation_channel": s2p.segmentation_channel,
- "mask_npix": mask_stat["npix"],
- "mask_center_x": mask_stat["med"][1],
- "mask_center_y": mask_stat["med"][0],
- "mask_center_z": mask_stat.get("iplane", plane),
- "mask_xpix": mask_stat["xpix"],
- "mask_ypix": mask_stat["ypix"],
- "mask_zpix": np.full(
- mask_stat["npix"],
- mask_stat.get("iplane", plane),
- ),
- "mask_weights": mask_stat["lam"],
- }
- )
- if is_cell:
- cells.append(
- {
- **key,
- "mask_classification_method": "suite2p_default_classifier",
- "mask": mask_idx + mask_count,
- "mask_type": "soma",
- "confidence": cell_prob,
- }
- )
-
- self.insert1(key)
- self.Mask.insert(masks, ignore_extra_fields=True)
-
- if cells:
- MaskClassification.insert1(
- {
- **key,
- "mask_classification_method": "suite2p_default_classifier",
- },
- allow_direct_insert=True,
- )
- MaskClassification.MaskType.insert(
- cells, ignore_extra_fields=True, allow_direct_insert=True
- )
- elif method == "caiman":
- caiman_dataset = imaging_dataset
-
- # infer "segmentation_channel" - from params if available, else from caiman loader
- params = (ProcessingParamSet * ProcessingTask & key).fetch1("params")
- segmentation_channel = params.get(
- "segmentation_channel", caiman_dataset.segmentation_channel
- )
-
- masks, cells = [], []
- for mask in caiman_dataset.masks:
- masks.append(
- {
- **key,
- "segmentation_channel": segmentation_channel,
- "mask": mask["mask_id"],
- "mask_npix": mask["mask_npix"],
- "mask_center_x": mask["mask_center_x"],
- "mask_center_y": mask["mask_center_y"],
- "mask_center_z": mask["mask_center_z"],
- "mask_xpix": mask["mask_xpix"],
- "mask_ypix": mask["mask_ypix"],
- "mask_zpix": mask["mask_zpix"],
- "mask_weights": mask["mask_weights"],
- }
- )
- if caiman_dataset.cnmf.estimates.idx_components is not None:
- if mask["mask_id"] in caiman_dataset.cnmf.estimates.idx_components:
- cells.append(
- {
- **key,
- "mask_classification_method": "caiman_default_classifier",
- "mask": mask["mask_id"],
- "mask_type": "soma",
- }
- )
-
- self.insert1(key)
- self.Mask.insert(masks, ignore_extra_fields=True)
-
- if cells:
- MaskClassification.insert1(
- {
- **key,
- "mask_classification_method": "caiman_default_classifier",
- },
- allow_direct_insert=True,
- )
- MaskClassification.MaskType.insert(
- cells, ignore_extra_fields=True, allow_direct_insert=True
- )
- elif method == "extract":
- extract_dataset = imaging_dataset
- masks = [
- dict(
- **key,
- segmentation_channel=0,
- mask=mask["mask_id"],
- mask_npix=mask["mask_npix"],
- mask_center_x=mask["mask_center_x"],
- mask_center_y=mask["mask_center_y"],
- mask_center_z=mask["mask_center_z"],
- mask_xpix=mask["mask_xpix"],
- mask_ypix=mask["mask_ypix"],
- mask_zpix=mask["mask_zpix"],
- mask_weights=mask["mask_weights"],
- )
- for mask in extract_dataset.load_results()
- ]
-
- self.insert1(key)
- self.Mask.insert(masks, ignore_extra_fields=True)
- else:
- raise NotImplementedError(f"Unknown/unimplemented method: {method}")
-
-
-@schema
-class MaskClassificationMethod(dj.Lookup):
- """Available mask classification methods.
-
- Attributes:
- mask_classification_method (str): Mask classification method.
- """
-
- definition = """
- mask_classification_method: varchar(48)
- """
-
- contents = zip(["suite2p_default_classifier", "caiman_default_classifier"])
-
-
-@schema
-class MaskClassification(dj.Computed):
- """Classes assigned to each mask.
-
- Attributes:
- Segmentation (foreign key): Primary key from Segmentation.
- MaskClassificationMethod (foreign key): Primary key from
- MaskClassificationMethod.
- """
-
- definition = """
- -> Segmentation
- -> MaskClassificationMethod
- """
-
- class MaskType(dj.Part):
- """Type assigned to each mask.
-
- Attributes:
- MaskClassification (foreign key): Primary key from MaskClassification.
- Segmentation.Mask (foreign key): Primary key from Segmentation.Mask.
- MaskType (foreign key): Mask type from the MaskType lookup table.
- confidence (float, optional): Confidence level of the mask classification.
- """
-
- definition = """
- -> master
- -> Segmentation.Mask
- ---
- -> MaskType
- confidence=null: float
- """
-
- def make(self, key):
- pass
-
-
-# -------------- Activity Trace --------------
-
-
-@schema
-class Fluorescence(dj.Computed):
- """Fluorescence traces.
-
- Attributes:
- Segmentation (foreign key): Primary key from Segmentation.
- """
-
- definition = """# Fluorescence traces before spike extraction or filtering
- -> Segmentation
- """
-
- class Trace(dj.Part):
- """Traces obtained from segmented region of interests.
-
- Attributes:
- Fluorescence (foreign key): Primary key from Fluorescence.
- Segmentation.Mask (foreign key): Primary key from Segmentation.Mask.
- scan.Channel.proj(fluo_channel='channel') (int): The channel that this trace
- comes from.
- fluorescence (longblob): Fluorescence trace associated with this mask.
- neuropil_fluorescence (longblob, optional): Neuropil fluorescence trace.
- """
-
- definition = """
- -> master
- -> Segmentation.Mask
- -> scan.Channel.proj(fluo_channel='channel') # The channel that this trace comes from
- ---
- fluorescence : longblob # Fluorescence trace associated with this mask
- neuropil_fluorescence=null : longblob # Neuropil fluorescence trace
- """
-
- def make(self, key):
- """Populate the Fluorescence with the results parsed from analysis outputs."""
-
- method, imaging_dataset = get_loader_result(key, ProcessingTask)
-
- if method == "suite2p":
- suite2p_dataset = imaging_dataset
-
- # ---- iterate through all s2p plane outputs ----
- fluo_traces, fluo_chn2_traces = [], []
- for s2p in suite2p_dataset.planes.values():
- mask_count = len(fluo_traces) # continue mask ids across planes
- for mask_idx, (f, fneu) in enumerate(zip(s2p.F, s2p.Fneu)):
- fluo_traces.append(
- {
- **key,
- "mask": mask_idx + mask_count,
- "fluo_channel": 0,
- "fluorescence": f,
- "neuropil_fluorescence": fneu,
- }
- )
- if len(s2p.F_chan2):
- mask_chn2_count = len(
- fluo_chn2_traces
- ) # increment mask id from all planes
- for mask_idx, (f2, fneu2) in enumerate(
- zip(s2p.F_chan2, s2p.Fneu_chan2)
- ):
- fluo_chn2_traces.append(
- {
- **key,
- "mask": mask_idx + mask_chn2_count,
- "fluo_channel": 1,
- "fluorescence": f2,
- "neuropil_fluorescence": fneu2,
- }
- )
-
- self.insert1(key)
- self.Trace.insert(fluo_traces + fluo_chn2_traces)
- elif method == "caiman":
- caiman_dataset = imaging_dataset
-
- # infer "segmentation_channel" - from params if available, else from caiman loader
- params = (ProcessingParamSet * ProcessingTask & key).fetch1("params")
- segmentation_channel = params.get(
- "segmentation_channel", caiman_dataset.segmentation_channel
- )
-
- fluo_traces = []
- for mask in caiman_dataset.masks:
- fluo_traces.append(
- {
- **key,
- "mask": mask["mask_id"],
- "fluo_channel": segmentation_channel,
- "fluorescence": mask["inferred_trace"],
- }
- )
-
- self.insert1(key)
- self.Trace.insert(fluo_traces)
- elif method == "extract":
- extract_dataset = imaging_dataset
-
- fluo_traces = [
- {
- **key,
- "mask": mask_id,
- "fluo_channel": 0,
- "fluorescence": fluorescence,
- }
- for mask_id, fluorescence in enumerate(extract_dataset.T)
- ]
-
- self.insert1(key)
- self.Trace.insert(fluo_traces)
-
- else:
- raise NotImplementedError("Unknown/unimplemented method: {}".format(method))
-
-
-@schema
-class ActivityExtractionMethod(dj.Lookup):
- """Available activity extraction methods.
-
- Attributes:
- extraction_method (str): Extraction method.
- """
-
- definition = """# Activity extraction method
- extraction_method: varchar(32)
- """
-
- contents = zip(["suite2p_deconvolution", "caiman_deconvolution", "caiman_dff"])
-
-
-@schema
-class Activity(dj.Computed):
- """Inferred neural activity from fluorescence trace (e.g. dff, spikes, etc.).
-
- Attributes:
- Fluorescence (foreign key): Primary key from Fluorescence.
- ActivityExtractionMethod (foreign key): Primary key from
- ActivityExtractionMethod.
- """
-
- definition = """# Neural Activity
- -> Fluorescence
- -> ActivityExtractionMethod
- """
-
- class Trace(dj.Part):
- """Trace(s) for each mask.
-
- Attributes:
- Activity (foreign key): Primary key from Activity.
- Fluorescence.Trace (foreign key): Primary key from Fluorescence.Trace.
- activity_trace (longblob): Neural activity from fluorescence trace.
- """
-
- definition = """
- -> master
- -> Fluorescence.Trace
- ---
- activity_trace: longblob
- """
-
- @property
- def key_source(self):
- suite2p_key_source = (
- Fluorescence
- * ActivityExtractionMethod
- * ProcessingParamSet.proj("processing_method")
- & 'processing_method = "suite2p"'
- & 'extraction_method LIKE "suite2p%"'
- )
- caiman_key_source = (
- Fluorescence
- * ActivityExtractionMethod
- * ProcessingParamSet.proj("processing_method")
- & 'processing_method = "caiman"'
- & 'extraction_method LIKE "caiman%"'
- )
- return suite2p_key_source.proj() + caiman_key_source.proj()
-
- def make(self, key):
- """Populate the Activity with the results parsed from analysis outputs."""
-
- method, imaging_dataset = get_loader_result(key, ProcessingTask)
-
- if method == "suite2p":
- if key["extraction_method"] == "suite2p_deconvolution":
- suite2p_dataset = imaging_dataset
- # ---- iterate through all s2p plane outputs ----
- spikes = [
- dict(
- key,
- mask=mask_idx,
- fluo_channel=0,
- activity_trace=spks,
- )
- for mask_idx, spks in enumerate(
- s
- for plane in suite2p_dataset.planes.values()
- for s in plane.spks
- )
- ]
-
- self.insert1(key)
- self.Trace.insert(spikes)
- elif method == "caiman":
- caiman_dataset = imaging_dataset
-
- if key["extraction_method"] in (
- "caiman_deconvolution",
- "caiman_dff",
- ):
- attr_mapper = {
- "caiman_deconvolution": "spikes",
- "caiman_dff": "dff",
- }
-
- # infer "segmentation_channel" - from params if available, else from caiman loader
- params = (ProcessingParamSet * ProcessingTask & key).fetch1("params")
- segmentation_channel = params.get(
- "segmentation_channel", caiman_dataset.segmentation_channel
- )
-
- self.insert1(key)
- self.Trace.insert(
- dict(
- key,
- mask=mask["mask_id"],
- fluo_channel=segmentation_channel,
- activity_trace=mask[attr_mapper[key["extraction_method"]]],
- )
- for mask in caiman_dataset.masks
- )
- else:
- raise NotImplementedError("Unknown/unimplemented method: {}".format(method))
-
-
-@schema
-class ProcessingQualityMetrics(dj.Computed):
- """Quality metrics used to evaluate the results of the calcium imaging analysis pipeline.
-
- Attributes:
- Fluorescence (foreign key): Primary key from Fluorescence.
- """
-
- definition = """
- -> Fluorescence
- """
-
- class Mask(dj.Part):
- """Quality metrics used to evaluate the masks.
-
- Attributes:
- Fluorescence (foreign key): Primary key from Fluorescence.
- Segmentation.Mask (foreign key): Primary key from Segmentation.Mask.
- mask_area (float): Mask area in square micrometer.
- roundness (float): Roundness between 0 and 1. Values closer to 1 are rounder.
- """
-
- definition = """
- -> master
- -> Segmentation.Mask
- ---
- mask_area=null: float # Mask area in square micrometer.
- roundness: float # Roundness between 0 and 1. Values closer to 1 are rounder.
- """
-
- class Trace(dj.Part):
- """Quality metrics used to evaluate the fluorescence traces.
-
- Attributes:
- Fluorescence (foreign key): Primary key from Fluorescence.
- Fluorescence.Trace (foreign key): Primary key from Fluorescence.Trace.
- skewness (float): Skewness of the fluorescence trace.
- variance (float): Variance of the fluorescence trace.
- """
-
- definition = """
- -> master
- -> Fluorescence.Trace
- ---
- skewness: float # Skewness of the fluorescence trace.
- variance: float # Variance of the fluorescence trace.
- """
-
- def make(self, key):
- """Populate the ProcessingQualityMetrics table and its part tables."""
- from scipy.stats import skew
-
- (
- mask_xpixs,
- mask_ypixs,
- mask_weights,
- fluorescence,
- fluo_channels,
- mask_ids,
- mask_npix,
- px_height,
- px_width,
- um_height,
- um_width,
- ) = (Segmentation.Mask * scan.ScanInfo.Field * Fluorescence.Trace & key).fetch(
- "mask_xpix",
- "mask_ypix",
- "mask_weights",
- "fluorescence",
- "fluo_channel",
- "mask",
- "mask_npix",
- "px_height",
- "px_width",
- "um_height",
- "um_width",
- )
-
- norm_mean = lambda x: x.mean() / x.max()
- roundnesses = [
- norm_mean(np.linalg.eigvals(np.cov(x, y, aweights=w)))
- for x, y, w in zip(mask_xpixs, mask_ypixs, mask_weights)
- ]
-
- fluorescence = np.stack(fluorescence)
-
- self.insert1(key)
-
- self.Mask.insert(
- dict(key, mask=mask_id, mask_area=mask_area, roundness=roundness)
- for mask_id, mask_area, roundness in zip(
- mask_ids,
- mask_npix * (um_height / px_height) * (um_width / px_width),
- roundnesses,
- )
- )
-
- self.Trace.insert(
- dict(
- key,
- fluo_channel=fluo_channel,
- mask=mask_id,
- skewness=skewness,
- variance=variance,
- )
- for fluo_channel, mask_id, skewness, variance in zip(
- fluo_channels,
- mask_ids,
- skew(fluorescence, axis=1),
- fluorescence.var(axis=1),
- )
- )
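The roundness metric in `ProcessingQualityMetrics.make` is the mean-to-max ratio of the eigenvalues of the weighted pixel covariance: a circular mask has equal eigenvalues (ratio near 1), while an elongated mask drives the ratio toward 0.5. A standalone NumPy sketch of the same computation (not part of the pipeline; `eigvalsh` is used here because the covariance matrix is symmetric, which is equivalent to the `eigvals` call in the deleted code for this input):

```python
import numpy as np

def mask_roundness(xpix, ypix, weights):
    """Mean/max ratio of the eigenvalues of the weighted pixel covariance.

    Equal eigenvalues (circular mask) give ~1.0; a highly elongated mask
    approaches 0.5 (mean of eigenvalues (a, ~0) over max a).
    """
    eigvals = np.linalg.eigvalsh(np.cov(xpix, ypix, aweights=weights))
    return eigvals.mean() / eigvals.max()

rng = np.random.default_rng(0)
w = np.ones(500)  # uniform mask weights for illustration

# Roughly circular cloud of pixel coordinates -> roundness near 1
circ = rng.normal(size=(2, 500))
round_score = mask_roundness(circ[0], circ[1], w)

# Strongly elongated cloud -> roundness near 0.5
elong = np.vstack([rng.normal(size=500), 0.05 * rng.normal(size=500)])
elong_score = mask_roundness(elong[0], elong[1], w)
```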
-
-
-# ---------------- HELPER FUNCTIONS ----------------
-
-
-_table_attribute_mapper = {
- "ProcessingTask": "processing_output_dir",
- "Curation": "curation_output_dir",
-}
-
-
-def get_loader_result(key: dict, table: dj.Table) -> tuple:
- """Retrieve the processed imaging results from a suite2p, caiman, or extract loader.
-
- Args:
- key (dict): The `key` to one entry of ProcessingTask or Curation
- table (dj.Table): A datajoint table to retrieve the loaded results from (e.g.
- ProcessingTask, Curation)
-
- Raises:
- NotImplementedError: If the processing_method is not 'suite2p', 'caiman',
- or 'extract'.
-
- Returns:
- A tuple of (method, loaded_dataset), where loaded_dataset is a loader object
- of the loaded results (e.g. suite2p.Suite2p or caiman.CaImAn; see
- element-interface for more information on the loaders).
- """
- method, output_dir = (ProcessingParamSet * table & key).fetch1(
- "processing_method", _table_attribute_mapper[table.__name__]
- )
-
- output_path = find_full_path(get_imaging_root_data_dir(), output_dir)
-
- if method == "suite2p" or (
- method == "extract" and table.__name__ == "MotionCorrection"
- ):
- from element_interface import suite2p_loader
-
- loaded_dataset = suite2p_loader.Suite2p(output_path)
- elif method == "caiman":
- from element_interface import caiman_loader
-
- loaded_dataset = caiman_loader.CaImAn(output_path)
- elif method == "extract":
- from element_interface import extract_loader
-
- loaded_dataset = extract_loader.EXTRACT(output_path)
- else:
- raise NotImplementedError("Unknown/unimplemented method: {}".format(method))
-
- return method, loaded_dataset
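The if/elif dispatch in `get_loader_result` can equivalently be written as a constructor lookup. The sketch below uses stand-in loader classes (the real ones are `element_interface`'s `suite2p_loader.Suite2p`, `caiman_loader.CaImAn`, and `extract_loader.EXTRACT`) and omits the special case where EXTRACT motion correction is read through the Suite2p loader:

```python
# Stand-in loader classes, named after (but not identical to) the
# element_interface loaders used by get_loader_result.
class Suite2p:
    def __init__(self, path): self.path = path

class CaImAn:
    def __init__(self, path): self.path = path

class EXTRACT:
    def __init__(self, path): self.path = path

_loaders = {"suite2p": Suite2p, "caiman": CaImAn, "extract": EXTRACT}

def load_dataset(method: str, output_path: str):
    """Return (method, loaded_dataset), mirroring get_loader_result's contract."""
    try:
        loader_cls = _loaders[method]
    except KeyError:
        raise NotImplementedError(f"Unknown/unimplemented method: {method}")
    return method, loader_cls(output_path)
```

The lookup-table form keeps the supported methods in one place, which also makes the `NotImplementedError` branch fall out naturally from the `KeyError`.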
diff --git a/element_calcium_imaging/imaging_preprocess.py b/element_calcium_imaging/imaging_preprocess.py
deleted file mode 100644
index 54614f66..00000000
--- a/element_calcium_imaging/imaging_preprocess.py
+++ /dev/null
@@ -1,1931 +0,0 @@
-import importlib
-import inspect
-import pathlib
-from collections.abc import Callable
-
-import datajoint as dj
-import numpy as np
-from element_interface.utils import dict_to_uuid, find_full_path, find_root_directory
-
-from . import imaging_report, scan
-from .scan import (
- get_calcium_imaging_files,
- get_imaging_root_data_dir,
- get_processed_root_data_dir,
-)
-
-schema = dj.schema()
-
-_linking_module = None
-
-
-def activate(
- imaging_schema_name: str,
- scan_schema_name: str = None,
- *,
- create_schema: bool = True,
- create_tables: bool = True,
- linking_module: str = None,
-):
- """Activate this schema.
-
- Args:
- imaging_schema_name (str): Schema name on the database server to activate the
- `imaging` module.
- scan_schema_name (str): Schema name on the database server to activate the
- `scan` module. Omitted if the `scan` module is already activated.
- create_schema (bool): When True (default), create schema in the database if it
- does not yet exist.
- create_tables (bool): When True (default), create tables in the database if they
- do not yet exist.
- linking_module (str): A module name or a module containing the required
- dependencies to activate the `imaging` module, including all dependencies
- required by the `scan` module.
-
- Dependencies:
- Upstream tables:
- + Session: A parent table to Scan, identifying a scanning session.
- + Equipment: A parent table to Scan, identifying a scanning device.
- """
-
- if isinstance(linking_module, str):
- linking_module = importlib.import_module(linking_module)
- assert inspect.ismodule(
- linking_module
- ), "The argument 'linking_module' must be a module name or a module"
-
- global _linking_module
- _linking_module = linking_module
-
- scan.activate(
- scan_schema_name,
- create_schema=create_schema,
- create_tables=create_tables,
- linking_module=linking_module,
- )
- schema.activate(
- imaging_schema_name,
- create_schema=create_schema,
- create_tables=create_tables,
- add_objects=_linking_module.__dict__,
- )
- imaging_report.activate(f"{imaging_schema_name}_report", imaging_schema_name)
-
-
-# -------------- Table declarations --------------
-
-
-@schema
-class PreprocessMethod(dj.Lookup):
- """Method(s) used for preprocessing of calcium imaging data.
-
- Attributes:
- preprocess_method (str): Preprocessing method.
- preprocess_method_desc (str): Processing method description.
- """
-
- definition = """ # Method/package used for pre-processing
- preprocess_method: varchar(16)
- ---
- preprocess_method_desc: varchar(1000)
- """
-
-
-@schema
-class PreprocessParamSet(dj.Lookup):
- """Parameter set used for the preprocessing of the calcium imaging scans.
-
- A hash of the parameters of the analysis suite is also stored in order
- to avoid duplicated entries.
-
- Attributes:
- paramset_idx (int): Unique parameter set ID.
- PreprocessMethod (foreign key): A primary key from PreprocessMethod.
- paramset_desc (str): Parameter set description.
- param_set_hash (uuid): A universally unique identifier for the parameter set.
- params (longblob): Parameter Set, a dictionary of all applicable parameters to
- the analysis suite.
- """
-
- definition = """ # Parameter set used for pre-processing of calcium imaging data
- paramset_idx: smallint
- ---
- -> PreprocessMethod
- paramset_desc: varchar(128)
- param_set_hash: uuid
- unique index (param_set_hash)
- params: longblob # dictionary of all applicable parameters
- """
-
- @classmethod
- def insert_new_params(
- cls,
- preprocess_method: str,
- paramset_idx: int,
- paramset_desc: str,
- params: dict,
- ):
- """Insert a parameter set into PreprocessParamSet table.
- This function automates the parameter set hashing and avoids insertion of an
- existing parameter set.
-
- Args:
- preprocess_method (str): Method used for processing of calcium imaging scans.
- paramset_idx (int): Unique parameter set ID.
- paramset_desc (str): Parameter set description.
- params (dict): Parameter Set, all applicable parameters.
- """
- param_dict = {
- "preprocess_method": preprocess_method,
- "paramset_idx": paramset_idx,
- "paramset_desc": paramset_desc,
- "params": params,
- "param_set_hash": dict_to_uuid(params),
- }
- q_param = cls & {"param_set_hash": param_dict["param_set_hash"]}
-
- if q_param: # If the specified param-set already exists
- p_name = q_param.fetch1("paramset_idx")
- if p_name == paramset_idx: # If the existing set has the same paramset_idx: job done
- return
- else: # Otherwise: human error, trying to add the same paramset under a different ID
- raise dj.DataJointError(
- "The specified param-set already exists - name: {}".format(p_name)
- )
- else:
- cls.insert1(param_dict)
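The dedup logic above relies on `dict_to_uuid` from `element_interface` producing the same hash for the same parameter content. One simple way to get such a stable UUID (a sketch only; the actual `dict_to_uuid` implementation may differ) is to hash a canonical JSON encoding of the dict:

```python
import hashlib
import json
import uuid

def params_to_uuid(params: dict) -> uuid.UUID:
    """Stable UUID for a parameter dict: hash a canonical JSON encoding.

    Sorting keys makes the hash insensitive to key insertion order, which is
    what lets the unique index on param_set_hash catch duplicate paramsets.
    """
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
    return uuid.UUID(hashlib.md5(canonical.encode()).hexdigest())
```

Two dicts with the same key-value pairs in different insertion order then map to the same `param_set_hash`, so a re-insert of an existing paramset is detected before hitting the database.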
-
-
-@schema
-class PreprocessParamSteps(dj.Manual):
- """Ordered list of paramset_idx values that will be run.
-
- When pre-processing is not performed, do not create an entry in the `Step` part table.
-
- Attributes:
- preprocess_param_steps_id (int): Unique ID for this ordered set of steps.
- preprocess_param_steps_name (str): Name of this set of steps.
- preprocess_param_steps_desc (str): Description of this set of steps.
- """
-
- definition = """
- preprocess_param_steps_id: smallint
- ---
- preprocess_param_steps_name: varchar(32)
- preprocess_param_steps_desc: varchar(128)
- """
-
- class Step(dj.Part):
- """A single pre-processing step and its position in the sequence.
-
- Attributes:
- PreprocessParamSteps (foreign key): A primary key from PreprocessParamSteps.
- step_number (int): Order of operations.
- PreprocessParamSet (foreign key): A primary key from PreprocessParamSet.
- """
-
- definition = """
- -> master
- step_number: smallint # Order of operations
- ---
- -> PreprocessParamSet
- """
-
-
-@schema
-class PreprocessTask(dj.Manual):
- """This table defines a calcium imaging preprocessing task for a combination of
- `Scan` and `PreprocessParamSteps` entries, including all the inputs (scan, method,
- steps). The task defined here is then run in the downstream table
- Preprocess. This table supports loading of pre-generated results,
- triggering of new analysis, or skipping the preprocessing step.
-
- Attributes:
- Scan (foreign key): A primary key from Scan.
- PreprocessParamSteps (foreign key): A primary key from PreprocessParamSteps.
- preprocess_output_dir (str): Output directory for the results of preprocessing.
- task_mode (str, optional): One of 'load' (load computed analysis results), 'trigger'
- (trigger computation), 'none' (no pre-processing). Default none.
- """
-
- definition = """
- # Manual table for defining a pre-processing task ready to be run
- -> scan.Scan
- -> PreprocessParamSteps
- ---
- preprocess_output_dir: varchar(255) # Pre-processing output directory relative
- # to the root data directory
- task_mode='none': enum('none','load', 'trigger') # 'none': no pre-processing
- # 'load': load analysis results
- # 'trigger': trigger computation
- """
-
-
-@schema
-class Preprocess(dj.Imported):
- """Perform the computation of an entry (task) defined in the PreprocessTask table.
-
- + If `task_mode == "none"`: no pre-processing performed
- + If `task_mode == "trigger"`: Not implemented
- + If `task_mode == "load"`: Not implemented
-
- Attributes:
- PreprocessTask (foreign key): Primary key from PreprocessTask.
- preprocess_time (datetime, optional): Time of generation of pre-processing results.
- package_version (str, optional): Version of the analysis package used in
- processing the data.
- """
-
- definition = """
- -> PreprocessTask
- ---
- preprocess_time=null: datetime # Time of generation of pre-processing results
- package_version='': varchar(16)
- """
-
- def make(self, key):
- """Execute the preprocessing analysis steps defined in PreprocessTask."""
-
- task_mode, output_dir = (PreprocessTask & key).fetch1(
- "task_mode", "preprocess_output_dir"
- )
- _ = find_full_path(get_imaging_root_data_dir(), output_dir)
-
- if task_mode == "none":
- print(f"No pre-processing run on entry: {key}")
- elif task_mode in ["load", "trigger"]:
- raise NotImplementedError(
- "Pre-processing steps are not implemented. "
- "Please overwrite this `make` function with "
- "desired pre-processing steps."
- )
- else:
- raise ValueError(f"Unknown task mode: {task_mode}")
-
- self.insert1({**key, "package_version": ""})
-
-
-@schema
-class ProcessingMethod(dj.Lookup):
- """Package used for processing of calcium imaging data (e.g. Suite2p, CaImAn, etc.).
-
- Attributes:
- processing_method (str): Processing method.
- processing_method_desc (str): Processing method description.
- """
-
- definition = """# Package used for processing of calcium imaging data (e.g. Suite2p, CaImAn, etc.).
- processing_method: char(8)
- ---
- processing_method_desc: varchar(1000) # Processing method description
- """
-
- contents = [
- ("suite2p", "suite2p analysis suite"),
- ("caiman", "caiman analysis suite"),
- ("extract", "extract analysis suite"),
- ]
-
-
-@schema
-class ProcessingParamSet(dj.Lookup):
- """Parameter set used for the processing of the calcium imaging scans,
- including both the analysis suite and its respective input parameters.
-
- A hash of the parameters of the analysis suite is also stored in order
- to avoid duplicated entries.
-
- Attributes:
- paramset_idx (int): Unique parameter set ID.
- ProcessingMethod (foreign key): A primary key from ProcessingMethod.
- paramset_desc (str): Parameter set description.
- param_set_hash (uuid): A universally unique identifier for the parameter set.
- params (longblob): Parameter Set, a dictionary of all applicable parameters to
- the analysis suite.
- """
-
- definition = """# Processing Parameter Set
- paramset_idx: smallint # Unique parameter set ID.
- ---
- -> ProcessingMethod
- paramset_desc: varchar(1280) # Parameter-set description
- param_set_hash: uuid # A universally unique identifier for the parameter set
- unique index (param_set_hash)
- params: longblob # Parameter Set, a dictionary of all applicable parameters to the analysis suite.
- """
-
- @classmethod
- def insert_new_params(
- cls,
- processing_method: str,
- paramset_idx: int,
- paramset_desc: str,
- params: dict,
- ):
- """Insert a parameter set into ProcessingParamSet table.
-
- This function automates the parameter set hashing and avoids insertion of an
- existing parameter set.
-
- Args:
- processing_method (str): Processing method/package used for processing of
- calcium imaging.
- paramset_idx (int): Unique parameter set ID.
- paramset_desc (str): Parameter set description.
- params (dict): Parameter Set, all applicable parameters to the analysis
- suite.
- """
- if processing_method == "extract":
- if params.get("extract") is None or params.get("suite2p") is None:
- raise ValueError(
- "Please provide the processing parameters in the {'suite2p': {...}, 'extract': {...}} dictionary format."
- )
-
- # Force Suite2p to only run motion correction.
- params["suite2p"]["do_registration"] = True
- params["suite2p"]["roidetect"] = False
-
- param_dict = {
- "processing_method": processing_method,
- "paramset_idx": paramset_idx,
- "paramset_desc": paramset_desc,
- "params": params,
- "param_set_hash": dict_to_uuid(params),
- }
- q_param = cls & {"param_set_hash": param_dict["param_set_hash"]}
-
- if q_param: # If the specified param-set already exists
- p_name = q_param.fetch1("paramset_idx")
- if p_name == paramset_idx: # If the existing set has the same paramset_idx: job done
- return
- else: # Otherwise: human error, trying to add the same paramset under a different ID
- raise dj.DataJointError(
- "The specified param-set already exists - name: {}".format(p_name)
- )
- else:
- cls.insert1(param_dict)
-
-
-@schema
-class CellCompartment(dj.Lookup):
- """Cell compartments that can be imaged (e.g. 'axon', 'soma', etc.)
-
- Attributes:
- cell_compartment (str): Cell compartment.
- """
-
- definition = """# Cell compartments
- cell_compartment: char(16)
- """
-
- contents = zip(["axon", "soma", "bouton"])
-
-
-@schema
-class MaskType(dj.Lookup):
- """Available labels for segmented masks (e.g. 'soma', 'axon', 'dendrite', 'neuropil').
-
- Attributes:
- mask_type (str): Mask type.
- """
-
- definition = """# Possible types of a segmented mask
- mask_type: varchar(16)
- """
-
- contents = zip(["soma", "axon", "dendrite", "neuropil", "artefact", "unknown"])
-
-
-# -------------- Trigger a processing routine --------------
-
-
-@schema
-class ProcessingTask(dj.Manual):
- """A pairing of processing params and scans to be loaded or triggered
-
- This table defines a calcium imaging processing task for a combination of
- `Scan` and `ProcessingParamSet` entries, including all the inputs (scan, method,
- method's parameters). The task defined here is then run in the downstream table
- `Processing`. This table supports definitions of both loading of pre-generated results
- and the triggering of new analysis for all supported analysis methods.
-
- Attributes:
- Preprocess (foreign key): Primary key from Preprocess.
- ProcessingParamSet (foreign key): Primary key from ProcessingParamSet.
- processing_output_dir (str): Output directory of the processed scan relative to the root data directory.
- task_mode (str): One of 'load' (load computed analysis results) or 'trigger'
- (trigger computation).
- """
-
- definition = """# Manual table for defining a processing task ready to be run
- -> Preprocess
- -> ProcessingParamSet
- ---
- processing_output_dir: varchar(255) # Output directory of the processed scan relative to root data directory
- task_mode='load': enum('load', 'trigger') # 'load': load computed analysis results, 'trigger': trigger computation
- """
-
- @classmethod
- def infer_output_dir(cls, key, relative=False, mkdir=False):
- """Infer an output directory for an entry in ProcessingTask table.
-
- Args:
- key (dict): Primary key from the ProcessingTask table.
- relative (bool): If True, processing_output_dir is returned relative to
- imaging_root_dir. Default False.
- mkdir (bool): If True, create the processing_output_dir directory.
- Default False.
-
- Returns:
- dir (pathlib.Path): A default output directory for the processed results (processing_output_dir
- in ProcessingTask) based on the following convention:
- processed_dir / scan_dir / {processing_method}_{paramset_idx}
- e.g.: sub4/sess1/scan0/suite2p_0
- """
- acq_software = (scan.Scan & key).fetch1("acq_software")
- scan_dir = find_full_path(
- get_imaging_root_data_dir(),
- get_calcium_imaging_files(key, acq_software)[0],
- ).parent
- root_dir = find_root_directory(get_imaging_root_data_dir(), scan_dir)
-
- method = (
- (ProcessingParamSet & key).fetch1("processing_method").replace(".", "-")
- )
-
- processed_dir = pathlib.Path(get_processed_root_data_dir())
- output_dir = (
- processed_dir
- / scan_dir.relative_to(root_dir)
- / f'{method}_{key["paramset_idx"]}'
- )
-
- if mkdir:
- output_dir.mkdir(parents=True, exist_ok=True)
-
- return output_dir.relative_to(processed_dir) if relative else output_dir
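The directory convention used by `infer_output_dir` can be reproduced with plain `pathlib`; the sketch below uses made-up paths purely for illustration:

```python
import pathlib

def build_output_dir(processed_root, root_dir, scan_dir, method, paramset_idx):
    """Mirror the ProcessingTask.infer_output_dir convention:
    processed_root / <scan_dir relative to its root> / <method>_<paramset_idx>
    """
    scan_rel = pathlib.Path(scan_dir).relative_to(root_dir)
    return pathlib.Path(processed_root) / scan_rel / f"{method}_{paramset_idx}"

out = build_output_dir(
    processed_root="/data/processed",
    root_dir="/data/raw",
    scan_dir="/data/raw/sub4/sess1/scan0",
    method="suite2p",
    paramset_idx=0,
)
# -> /data/processed/sub4/sess1/scan0/suite2p_0
```

Keeping the processed tree parallel to the raw tree means the relative output path stored in `processing_output_dir` can be resolved against either root with `find_full_path`.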
-
- @classmethod
- def generate(cls, scan_key, paramset_idx=0):
- """Generate a ProcessingTask for a Scan using an existing ProcessingParamSet entry.
-
- Generate an entry in the ProcessingTask table for a particular scan using an
- existing parameter set from the ProcessingParamSet table.
-
- Args:
- scan_key (dict): Primary key from Scan table.
- paramset_idx (int): Unique parameter set ID.
- """
- key = {**scan_key, "paramset_idx": paramset_idx}
-
- processed_dir = get_processed_root_data_dir()
- output_dir = cls.infer_output_dir(key, relative=False, mkdir=True)
-
- method = (ProcessingParamSet & {"paramset_idx": paramset_idx}).fetch1(
- "processing_method"
- )
-
- try:
- if method == "suite2p":
- from element_interface import suite2p_loader
-
- suite2p_loader.Suite2p(output_dir)
- elif method == "caiman":
- from element_interface import caiman_loader
-
- caiman_loader.CaImAn(output_dir)
- elif method == "extract":
- from element_interface import extract_loader
-
- extract_loader.EXTRACT(output_dir)
-
- else:
- raise NotImplementedError(
- "Unknown/unimplemented method: {}".format(method)
- )
- except FileNotFoundError:
- task_mode = "trigger"
- else:
- task_mode = "load"
-
- cls.insert1(
- {
- **key,
- "processing_output_dir": output_dir.relative_to(
- processed_dir
- ).as_posix(),
- "task_mode": task_mode,
- }
- )
-
- auto_generate_entries = generate
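`generate` probes the output directory with the appropriate loader and falls back to `task_mode="trigger"` when no results exist. The try/except/else pattern in isolation, with a generic stand-in for the loader call:

```python
def decide_task_mode(load_results):
    """Return 'load' if existing results can be read, else 'trigger'.

    load_results is any zero-argument callable that raises FileNotFoundError
    when the output directory holds no results, as the element_interface
    loaders do when pointed at an empty directory.
    """
    try:
        load_results()
    except FileNotFoundError:
        return "trigger"  # no results yet: a fresh run must be triggered
    else:
        return "load"  # results exist: just load them
```

Using `else:` (rather than returning inside `try:`) makes it explicit that `"load"` is chosen only when the probe succeeded, not when some unrelated code path completed.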
-
-
-@schema
-class Processing(dj.Computed):
- """Perform the computation of an entry (task) defined in the ProcessingTask table.
- The computation is performed only on the scans with ScanInfo inserted.
-
- Attributes:
- ProcessingTask (foreign key): Primary key from ProcessingTask.
- processing_time (datetime): Process completion datetime.
- package_version (str, optional): Version of the analysis package used in
- processing the data.
- """
-
- definition = """
- -> ProcessingTask
- ---
- processing_time : datetime # Time of generation of this set of processed, segmented results
- package_version='' : varchar(16)
- """
-
- # Run processing only on Scan with ScanInfo inserted
- @property
- def key_source(self):
- """Limit the Processing to Scans that have their metadata ingested to the
- database."""
-
- return ProcessingTask & scan.ScanInfo
-
- def make(self, key):
- """Execute the calcium imaging analysis defined by the ProcessingTask."""
-
- task_mode, output_dir = (ProcessingTask & key).fetch1(
- "task_mode", "processing_output_dir"
- )
-
- if not output_dir:
- output_dir = ProcessingTask.infer_output_dir(key, relative=True, mkdir=True)
- # update processing_output_dir
- ProcessingTask.update1(
- {**key, "processing_output_dir": output_dir.as_posix()}
- )
-
- try:
- output_dir = find_full_path(
- get_imaging_root_data_dir(), output_dir
- ).as_posix()
- except FileNotFoundError as e:
- if task_mode == "trigger":
- processed_dir = pathlib.Path(get_processed_root_data_dir())
- output_dir = processed_dir / output_dir
- output_dir.mkdir(parents=True, exist_ok=True)
- else:
- raise e
-
- if task_mode == "load":
- method, imaging_dataset = get_loader_result(key, ProcessingTask)
- if method == "suite2p":
- if (scan.ScanInfo & key).fetch1("nrois") > 0:
- raise NotImplementedError(
- "Suite2p ingestion error - Unable to handle"
- + " ScanImage multi-ROI scanning mode yet"
- )
- suite2p_dataset = imaging_dataset
- key = {**key, "processing_time": suite2p_dataset.creation_time}
- elif method == "caiman":
- caiman_dataset = imaging_dataset
- key = {**key, "processing_time": caiman_dataset.creation_time}
- elif method == "extract":
- raise NotImplementedError(
- "To use EXTRACT with this DataJoint Element please set `task_mode=trigger`"
- )
- else:
- raise NotImplementedError("Unknown method: {}".format(method))
- elif task_mode == "trigger":
- method = (ProcessingParamSet * ProcessingTask & key).fetch1(
- "processing_method"
- )
-
- preprocess_paramsets = (
- PreprocessParamSteps.Step()
- & dict(preprocess_param_steps_id=key["preprocess_param_steps_id"])
- ).fetch("paramset_idx")
-
- if len(preprocess_paramsets) == 0:
- # No pre-processing steps were performed on the acquired dataset, so process the raw/acquired files.
- image_files = (scan.ScanInfo.ScanFile & key).fetch("file_path")
- image_files = [
- find_full_path(get_imaging_root_data_dir(), image_file)
- for image_file in image_files
- ]
-
- else:
- preprocess_output_dir = (PreprocessTask & key).fetch1(
- "preprocess_output_dir"
- )
-
- preprocess_output_dir = find_full_path(
- get_imaging_root_data_dir(), preprocess_output_dir
- )
-
- if not preprocess_output_dir.exists():
- raise FileNotFoundError(
- f"Pre-processed output directory not found ({preprocess_output_dir})"
- )
-
- image_files = list(preprocess_output_dir.glob("*.tif"))
-
- if method == "suite2p":
- import suite2p
-
- suite2p_params = (ProcessingTask * ProcessingParamSet & key).fetch1(
- "params"
- )
- suite2p_params["save_path0"] = output_dir
- (
- suite2p_params["fs"],
- suite2p_params["nplanes"],
- suite2p_params["nchannels"],
- ) = (scan.ScanInfo & key).fetch1("fps", "ndepths", "nchannels")
-
- input_format = pathlib.Path(image_files[0]).suffix
- suite2p_params["input_format"] = input_format[1:]
-
- suite2p_paths = {
- "data_path": [image_files[0].parent.as_posix()],
- "tiff_list": [f.as_posix() for f in image_files],
- }
-
- suite2p.run_s2p(ops=suite2p_params, db=suite2p_paths) # Run suite2p
-
- _, imaging_dataset = get_loader_result(key, ProcessingTask)
- suite2p_dataset = imaging_dataset
- key = {**key, "processing_time": suite2p_dataset.creation_time}
-
- elif method == "caiman":
- from element_interface.caiman_loader import _process_scanimage_tiff
- from element_interface.run_caiman import run_caiman
-
- caiman_params = (ProcessingTask * ProcessingParamSet & key).fetch1(
- "params"
- )
- sampling_rate, ndepths, nchannels = (scan.ScanInfo & key).fetch1(
- "fps", "ndepths", "nchannels"
- )
-
- is3D = bool(ndepths > 1)
- if is3D:
- raise NotImplementedError(
- "Caiman pipeline is not yet capable of analyzing 3D scans."
- )
-
- # handle multi-channel tiff image before running CaImAn
- if nchannels > 1:
- channel_idx = caiman_params.get("channel_to_process", 0)
- tmp_dir = pathlib.Path(output_dir) / "channel_separated_tif"
- tmp_dir.mkdir(exist_ok=True)
- _process_scanimage_tiff(
- [f.as_posix() for f in image_files], output_dir=tmp_dir
- )
- image_files = tmp_dir.glob(f"*_chn{channel_idx}.tif")
-
- run_caiman(
- file_paths=[f.as_posix() for f in image_files],
- parameters=caiman_params,
- sampling_rate=sampling_rate,
- output_dir=output_dir,
- is3D=is3D,
- )
-
- _, imaging_dataset = get_loader_result(key, ProcessingTask)
- caiman_dataset = imaging_dataset
- key["processing_time"] = caiman_dataset.creation_time
-
- elif method == "extract":
- import suite2p
- from element_interface.extract_trigger import EXTRACT_trigger
- from scipy.io import savemat
-
- # Motion Correction with Suite2p
- params = (ProcessingTask * ProcessingParamSet & key).fetch1("params")
-
- params["suite2p"]["save_path0"] = output_dir
- (
- params["suite2p"]["fs"],
- params["suite2p"]["nplanes"],
- params["suite2p"]["nchannels"],
- ) = (scan.ScanInfo & key).fetch1("fps", "ndepths", "nchannels")
-
- input_format = pathlib.Path(image_files[0]).suffix
- params["suite2p"]["input_format"] = input_format[1:]
-
- suite2p_paths = {
- "data_path": [image_files[0].parent.as_posix()],
- "tiff_list": [f.as_posix() for f in image_files],
- }
-
- suite2p.run_s2p(ops=params["suite2p"], db=suite2p_paths)
-
- # Convert data.bin to registered_scan.mat
- scanfile_fullpath = pathlib.Path(output_dir) / "suite2p/plane0/data.bin"
-
- data_shape = (scan.ScanInfo * scan.ScanInfo.Field & key).fetch1(
- "nframes", "px_height", "px_width"
- )
- data = np.memmap(scanfile_fullpath, shape=data_shape, dtype=np.int16)
-
- scan_matlab_fullpath = scanfile_fullpath.parent / "registered_scan.mat"
-
- # Save the motion corrected movie (data.bin) in a .mat file
- savemat(
- scan_matlab_fullpath,
- {"M": np.transpose(data, axes=[1, 2, 0])},
- )
-
- # Execute EXTRACT
-
- ex = EXTRACT_trigger(
- scan_matlab_fullpath, params["extract"], output_dir
- )
- ex.run()
-
- _, extract_dataset = get_loader_result(key, ProcessingTask)
- key["processing_time"] = extract_dataset.creation_time
-
- else:
- raise ValueError(f"Unknown task mode: {task_mode}")
-
- self.insert1({**key, "package_version": ""})
-
-
-@schema
-class Curation(dj.Manual):
- """Curated results
-
- If no curation is applied, the curation_output_dir can be set to
- the value of processing_output_dir.
-
- Attributes:
- Processing (foreign key): Primary key from Processing.
- curation_id (int): Unique curation ID.
- curation_time (datetime): Time of generation of this set of curated results.
- curation_output_dir (str): Output directory of the curated results, relative to
- root data directory.
- manual_curation (bool): If True, manual curation has been performed on this
- result.
- curation_note (str, optional): Notes about the curation task.
- """
-
- definition = """# Curation(s) results
- -> Processing
- curation_id: int
- ---
- curation_time: datetime # Time of generation of this set of curated results
- curation_output_dir: varchar(255) # Output directory of the curated results, relative to root data directory
- manual_curation: bool # Has manual curation been performed on this result?
- curation_note='': varchar(2000)
- """
-
- def create1_from_processing_task(self, key, is_curated=False, curation_note=""):
- """Create a Curation entry for a given ProcessingTask key.
-
- Args:
- key (dict): Primary key set of an entry in the ProcessingTask table.
- is_curated (bool): When True, indicates a manual curation.
- curation_note (str): User's note on the specifics of the curation.
- """
- if key not in Processing():
- raise ValueError(
- f"No corresponding entry in Processing available for: {key};"
- f"Please run `Processing.populate(key)`"
- )
-
- output_dir = (ProcessingTask & key).fetch1("processing_output_dir")
- method, imaging_dataset = get_loader_result(key, ProcessingTask)
-
- if method == "suite2p":
- suite2p_dataset = imaging_dataset
- curation_time = suite2p_dataset.creation_time
- elif method == "caiman":
- caiman_dataset = imaging_dataset
- curation_time = caiman_dataset.creation_time
- elif method == "extract":
- extract_dataset = imaging_dataset
- curation_time = extract_dataset.creation_time
- else:
- raise NotImplementedError("Unknown method: {}".format(method))
-
- # Synthesize curation_id
- curation_id = (
- dj.U().aggr(self & key, n="ifnull(max(curation_id)+1,1)").fetch1("n")
- )
- self.insert1(
- {
- **key,
- "curation_id": curation_id,
- "curation_time": curation_time,
- "curation_output_dir": output_dir,
- "manual_curation": is_curated,
- "curation_note": curation_note,
- }
- )
-
-
-# -------------- Motion Correction --------------
-
-
-@schema
-class MotionCorrection(dj.Imported):
- """Results of motion correction shifts performed on the imaging data.
-
- Attributes:
- Curation (foreign key): Primary key from Curation.
- scan.Channel.proj(motion_correct_channel='channel') (foreign key): Channel
- used for motion correction in this processing task.
- """
-
- definition = """# Results of motion correction
- -> Curation
- ---
- -> scan.Channel.proj(motion_correct_channel='channel') # channel used for motion correction in this processing task
- """
-
- class RigidMotionCorrection(dj.Part):
- """Details of rigid motion correction performed on the imaging data.
-
- Attributes:
- MotionCorrection (foreign key): Primary key from MotionCorrection.
- outlier_frames (longblob): Mask with true for frames with outlier shifts
- (already corrected).
- y_shifts (longblob): y motion correction shifts (pixels).
- x_shifts (longblob): x motion correction shifts (pixels).
- z_shifts (longblob, optional): z motion correction shifts (z-drift, pixels).
- y_std (float): standard deviation of y shifts across all frames (pixels).
- x_std (float): standard deviation of x shifts across all frames (pixels).
- z_std (float, optional): standard deviation of z shifts across all frames
- (pixels).
- """
-
- definition = """# Details of rigid motion correction performed on the imaging data
- -> master
- ---
- outlier_frames=null : longblob # mask with true for frames with outlier shifts (already corrected)
- y_shifts : longblob # (pixels) y motion correction shifts
- x_shifts : longblob # (pixels) x motion correction shifts
- z_shifts=null : longblob # (pixels) z motion correction shifts (z-drift)
- y_std : float # (pixels) standard deviation of y shifts across all frames
- x_std : float # (pixels) standard deviation of x shifts across all frames
- z_std=null : float # (pixels) standard deviation of z shifts across all frames
- """
-
- class NonRigidMotionCorrection(dj.Part):
- """Piece-wise rigid motion correction - tile the FOV into multiple 3D
- blocks/patches.
-
- Attributes:
- MotionCorrection (foreign key): Primary key from MotionCorrection.
- outlier_frames (longblob, optional): Mask with true for frames with outlier
- shifts (already corrected).
- block_height (int): Block height in pixels.
- block_width (int): Block width in pixels.
- block_depth (int): Block depth in pixels.
- block_count_y (int): Number of blocks tiled in the y direction.
- block_count_x (int): Number of blocks tiled in the x direction.
- block_count_z (int): Number of blocks tiled in the z direction.
- """
-
- definition = """# Details of non-rigid motion correction performed on the imaging data
- -> master
- ---
- outlier_frames=null : longblob # mask with true for frames with outlier shifts (already corrected)
- block_height : int # (pixels)
- block_width : int # (pixels)
- block_depth : int # (pixels)
- block_count_y : int # number of blocks tiled in the y direction
- block_count_x : int # number of blocks tiled in the x direction
- block_count_z : int # number of blocks tiled in the z direction
- """
-
- class Block(dj.Part):
- """FOV-tiled blocks used for non-rigid motion correction.
-
- Attributes:
- NonRigidMotionCorrection (foreign key): Primary key from
- NonRigidMotionCorrection.
- block_id (int): Unique block ID.
- block_y (longblob): y_start and y_end in pixels for this block
- block_x (longblob): x_start and x_end in pixels for this block
- block_z (longblob): z_start and z_end in pixels for this block
- y_shifts (longblob): y motion correction shifts for every frame in pixels
- x_shifts (longblob): x motion correction shifts for every frame in pixels
- z_shifts (longblob, optional): z motion correction shifts for every frame
- in pixels
- y_std (float): standard deviation of y shifts across all frames in pixels
- x_std (float): standard deviation of x shifts across all frames in pixels
- z_std (float, optional): standard deviation of z shifts across all frames
- in pixels
- """
-
- definition = """# FOV-tiled blocks used for non-rigid motion correction
- -> master.NonRigidMotionCorrection
- block_id : int
- ---
- block_y : longblob # (y_start, y_end) in pixels for this block
- block_x : longblob # (x_start, x_end) in pixels for this block
- block_z : longblob # (z_start, z_end) in pixels for this block
- y_shifts : longblob # (pixels) y motion correction shifts for every frame
- x_shifts : longblob # (pixels) x motion correction shifts for every frame
- z_shifts=null : longblob # (pixels) z motion correction shifts for every frame
- y_std : float # (pixels) standard deviation of y shifts across all frames
- x_std : float # (pixels) standard deviation of x shifts across all frames
- z_std=null : float # (pixels) standard deviation of z shifts across all frames
- """
-
- class Summary(dj.Part):
- """Summary images for each field and channel after corrections.
-
- Attributes:
- MotionCorrection (foreign key): Primary key from MotionCorrection.
- scan.ScanInfo.Field (foreign key): Primary key from scan.ScanInfo.Field.
- ref_image (longblob): Image used as alignment template.
- average_image (longblob): Mean of registered frames.
- correlation_image (longblob, optional): Correlation map (computed during
- cell detection).
- max_proj_image (longblob, optional): Max of registered frames.
- """
-
- definition = """# Summary images for each field and channel after corrections
- -> master
- -> scan.ScanInfo.Field
- ---
- ref_image : longblob # image used as alignment template
- average_image : longblob # mean of registered frames
- correlation_image=null : longblob # correlation map (computed during cell detection)
- max_proj_image=null : longblob # max of registered frames
- """
-
- def make(self, key):
- """Populate MotionCorrection with results parsed from analysis outputs"""
-
- method, imaging_dataset = get_loader_result(key, Curation)
-
- field_keys, _ = (scan.ScanInfo.Field & key).fetch(
- "KEY", "field_z", order_by="field_z"
- )
-
- if method in ["suite2p", "extract"]:
- suite2p_dataset = imaging_dataset
-
- motion_correct_channel = suite2p_dataset.planes[0].alignment_channel
-
- # ---- iterate through all s2p plane outputs ----
- rigid_correction, nonrigid_correction, nonrigid_blocks = {}, {}, {}
- summary_images = []
- for idx, (plane, s2p) in enumerate(suite2p_dataset.planes.items()):
- # -- rigid motion correction --
- if idx == 0:
- rigid_correction = {
- **key,
- "y_shifts": s2p.ops["yoff"],
- "x_shifts": s2p.ops["xoff"],
- "z_shifts": np.full_like(s2p.ops["xoff"], 0),
- "y_std": np.nanstd(s2p.ops["yoff"]),
- "x_std": np.nanstd(s2p.ops["xoff"]),
- "z_std": np.nan,
- "outlier_frames": s2p.ops["badframes"],
- }
- else:
- rigid_correction["y_shifts"] = np.vstack(
- [rigid_correction["y_shifts"], s2p.ops["yoff"]]
- )
- rigid_correction["y_std"] = np.nanstd(
- rigid_correction["y_shifts"].flatten()
- )
- rigid_correction["x_shifts"] = np.vstack(
- [rigid_correction["x_shifts"], s2p.ops["xoff"]]
- )
- rigid_correction["x_std"] = np.nanstd(
- rigid_correction["x_shifts"].flatten()
- )
- rigid_correction["outlier_frames"] = np.logical_or(
- rigid_correction["outlier_frames"], s2p.ops["badframes"]
- )
- # -- non-rigid motion correction --
- if s2p.ops["nonrigid"]:
- if idx == 0:
- nonrigid_correction = {
- **key,
- "block_height": s2p.ops["block_size"][0],
- "block_width": s2p.ops["block_size"][1],
- "block_depth": 1,
- "block_count_y": s2p.ops["nblocks"][0],
- "block_count_x": s2p.ops["nblocks"][1],
- "block_count_z": len(suite2p_dataset.planes),
- "outlier_frames": s2p.ops["badframes"],
- }
- else:
- nonrigid_correction["outlier_frames"] = np.logical_or(
- nonrigid_correction["outlier_frames"],
- s2p.ops["badframes"],
- )
- for b_id, (b_y, b_x, bshift_y, bshift_x) in enumerate(
- zip(
- s2p.ops["yblock"],
- s2p.ops["xblock"],
- s2p.ops["yoff1"].T,
- s2p.ops["xoff1"].T,
- )
- ):
- if b_id in nonrigid_blocks:
- nonrigid_blocks[b_id]["y_shifts"] = np.vstack(
- [nonrigid_blocks[b_id]["y_shifts"], bshift_y]
- )
- nonrigid_blocks[b_id]["y_std"] = np.nanstd(
- nonrigid_blocks[b_id]["y_shifts"].flatten()
- )
- nonrigid_blocks[b_id]["x_shifts"] = np.vstack(
- [nonrigid_blocks[b_id]["x_shifts"], bshift_x]
- )
- nonrigid_blocks[b_id]["x_std"] = np.nanstd(
- nonrigid_blocks[b_id]["x_shifts"].flatten()
- )
- else:
- nonrigid_blocks[b_id] = {
- **key,
- "block_id": b_id,
- "block_y": b_y,
- "block_x": b_x,
- "block_z": np.full_like(b_x, plane),
- "y_shifts": bshift_y,
- "x_shifts": bshift_x,
- "z_shifts": np.full(
- (
- len(suite2p_dataset.planes),
- len(bshift_x),
- ),
- 0,
- ),
- "y_std": np.nanstd(bshift_y),
- "x_std": np.nanstd(bshift_x),
- "z_std": np.nan,
- }
-
- # -- summary images --
- motion_correction_key = (
- scan.ScanInfo.Field * Curation & key & field_keys[plane]
- ).fetch1("KEY")
- summary_images.append(
- {
- **motion_correction_key,
- "ref_image": s2p.ref_image,
- "average_image": s2p.mean_image,
- "correlation_image": s2p.correlation_map,
- "max_proj_image": s2p.max_proj_image,
- }
- )
-
- self.insert1({**key, "motion_correct_channel": motion_correct_channel})
- if rigid_correction:
- self.RigidMotionCorrection.insert1(rigid_correction)
- if nonrigid_correction:
- self.NonRigidMotionCorrection.insert1(nonrigid_correction)
- self.Block.insert(nonrigid_blocks.values())
- self.Summary.insert(summary_images)
- elif method == "caiman":
- caiman_dataset = imaging_dataset
-
- self.insert1(
- {
- **key,
- "motion_correct_channel": caiman_dataset.alignment_channel,
- }
- )
-
- is3D = caiman_dataset.params.motion["is3D"]
- if not caiman_dataset.params.motion["pw_rigid"]:
- # -- rigid motion correction --
- rigid_correction = {
- **key,
- "x_shifts": caiman_dataset.motion_correction["shifts_rig"][:, 0],
- "y_shifts": caiman_dataset.motion_correction["shifts_rig"][:, 1],
- "z_shifts": (
- caiman_dataset.motion_correction["shifts_rig"][:, 2]
- if is3D
- else np.full_like(
- caiman_dataset.motion_correction["shifts_rig"][:, 0],
- 0,
- )
- ),
- "x_std": np.nanstd(
- caiman_dataset.motion_correction["shifts_rig"][:, 0]
- ),
- "y_std": np.nanstd(
- caiman_dataset.motion_correction["shifts_rig"][:, 1]
- ),
- "z_std": (
- np.nanstd(caiman_dataset.motion_correction["shifts_rig"][:, 2])
- if is3D
- else np.nan
- ),
- "outlier_frames": None,
- }
-
- self.RigidMotionCorrection.insert1(rigid_correction)
- else:
- # -- non-rigid motion correction --
- nonrigid_correction = {
- **key,
- "block_height": (
- caiman_dataset.params.motion["strides"][0]
- + caiman_dataset.params.motion["overlaps"][0]
- ),
- "block_width": (
- caiman_dataset.params.motion["strides"][1]
- + caiman_dataset.params.motion["overlaps"][1]
- ),
- "block_depth": (
- caiman_dataset.params.motion["strides"][2]
- + caiman_dataset.params.motion["overlaps"][2]
- if is3D
- else 1
- ),
- "block_count_x": len(
- set(caiman_dataset.motion_correction["coord_shifts_els"][:, 0])
- ),
- "block_count_y": len(
- set(caiman_dataset.motion_correction["coord_shifts_els"][:, 2])
- ),
- "block_count_z": (
- len(
- set(
- caiman_dataset.motion_correction["coord_shifts_els"][
- :, 4
- ]
- )
- )
- if is3D
- else 1
- ),
- "outlier_frames": None,
- }
-
- nonrigid_blocks = []
- for b_id in range(
- len(caiman_dataset.motion_correction["x_shifts_els"][0, :])
- ):
- nonrigid_blocks.append(
- {
- **key,
- "block_id": b_id,
- "block_x": np.arange(
- *caiman_dataset.motion_correction["coord_shifts_els"][
- b_id, 0:2
- ]
- ),
- "block_y": np.arange(
- *caiman_dataset.motion_correction["coord_shifts_els"][
- b_id, 2:4
- ]
- ),
- "block_z": (
- np.arange(
- *caiman_dataset.motion_correction[
- "coord_shifts_els"
- ][b_id, 4:6]
- )
- if is3D
- else np.full_like(
- np.arange(
- *caiman_dataset.motion_correction[
- "coord_shifts_els"
- ][b_id, 0:2]
- ),
- 0,
- )
- ),
- "x_shifts": caiman_dataset.motion_correction[
- "x_shifts_els"
- ][:, b_id],
- "y_shifts": caiman_dataset.motion_correction[
- "y_shifts_els"
- ][:, b_id],
- "z_shifts": (
- caiman_dataset.motion_correction["z_shifts_els"][
- :, b_id
- ]
- if is3D
- else np.full_like(
- caiman_dataset.motion_correction["x_shifts_els"][
- :, b_id
- ],
- 0,
- )
- ),
- "x_std": np.nanstd(
- caiman_dataset.motion_correction["x_shifts_els"][
- :, b_id
- ]
- ),
- "y_std": np.nanstd(
- caiman_dataset.motion_correction["y_shifts_els"][
- :, b_id
- ]
- ),
- "z_std": (
- np.nanstd(
- caiman_dataset.motion_correction["z_shifts_els"][
- :, b_id
- ]
- )
- if is3D
- else np.nan
- ),
- }
- )
-
- self.NonRigidMotionCorrection.insert1(nonrigid_correction)
- self.Block.insert(nonrigid_blocks)
-
- # -- summary images --
- summary_images = [
- {
- **key,
- **fkey,
- "ref_image": ref_image,
- "average_image": ave_img,
- "correlation_image": corr_img,
- "max_proj_image": max_img,
- }
- for fkey, ref_image, ave_img, corr_img, max_img in zip(
- field_keys,
- (
- caiman_dataset.motion_correction["reference_image"].transpose(
- 2, 0, 1
- )
- if is3D
- else caiman_dataset.motion_correction["reference_image"][...][
- np.newaxis, ...
- ]
- ),
- (
- caiman_dataset.motion_correction["average_image"].transpose(
- 2, 0, 1
- )
- if is3D
- else caiman_dataset.motion_correction["average_image"][...][
- np.newaxis, ...
- ]
- ),
- (
- caiman_dataset.motion_correction["correlation_image"].transpose(
- 2, 0, 1
- )
- if is3D
- else caiman_dataset.motion_correction["correlation_image"][...][
- np.newaxis, ...
- ]
- ),
- (
- caiman_dataset.motion_correction["max_image"].transpose(2, 0, 1)
- if is3D
- else caiman_dataset.motion_correction["max_image"][...][
- np.newaxis, ...
- ]
- ),
- )
- ]
- self.Summary.insert(summary_images)
- else:
- raise NotImplementedError("Unknown/unimplemented method: {}".format(method))
-
-
-# -------------- Segmentation --------------
-
-
-@schema
-class Segmentation(dj.Computed):
- """Result of the Segmentation process.
-
- Attributes:
- Curation (foreign key): Primary key from Curation.
- """
-
- definition = """# Different mask segmentations.
- -> Curation
- """
-
- class Mask(dj.Part):
- """Details of the masks identified from the Segmentation procedure.
-
- Attributes:
- Segmentation (foreign key): Primary key from Segmentation.
- mask (int): Unique mask ID.
- scan.Channel.proj(segmentation_channel='channel') (foreign key): Channel
- used for segmentation.
- mask_npix (int): Number of pixels in the ROI.
- mask_center_x (int): Center x coordinate in pixel.
- mask_center_y (int): Center y coordinate in pixel.
- mask_center_z (int): Center z coordinate in pixel.
- mask_xpix (longblob): X coordinates in pixels.
- mask_ypix (longblob): Y coordinates in pixels.
- mask_zpix (longblob): Z coordinates in pixels.
- mask_weights (longblob): Weights of the mask at the indices above.
- """
-
- definition = """ # A mask produced by segmentation.
- -> master
- mask : smallint
- ---
- -> scan.Channel.proj(segmentation_channel='channel') # channel used for segmentation
- mask_npix : int # number of pixels in the ROI
- mask_center_x : int # center x coordinate in pixel
- mask_center_y : int # center y coordinate in pixel
- mask_center_z=null : int # center z coordinate in pixel
- mask_xpix : longblob # x coordinates in pixels
- mask_ypix : longblob # y coordinates in pixels
- mask_zpix=null : longblob # z coordinates in pixels
- mask_weights : longblob # weights of the mask at the indices above
- """
-
- def make(self, key):
- """Populate the Segmentation with the results parsed from analysis outputs."""
-
- method, imaging_dataset = get_loader_result(key, Curation)
-
- if method == "suite2p":
- suite2p_dataset = imaging_dataset
-
- # ---- iterate through all s2p plane outputs ----
- masks, cells = [], []
- for plane, s2p in suite2p_dataset.planes.items():
- mask_count = len(masks) # increment mask id from all "plane"
- for mask_idx, (is_cell, cell_prob, mask_stat) in enumerate(
- zip(s2p.iscell, s2p.cell_prob, s2p.stat)
- ):
- masks.append(
- {
- **key,
- "mask": mask_idx + mask_count,
- "segmentation_channel": s2p.segmentation_channel,
- "mask_npix": mask_stat["npix"],
- "mask_center_x": mask_stat["med"][1],
- "mask_center_y": mask_stat["med"][0],
- "mask_center_z": mask_stat.get("iplane", plane),
- "mask_xpix": mask_stat["xpix"],
- "mask_ypix": mask_stat["ypix"],
- "mask_zpix": np.full(
- mask_stat["npix"],
- mask_stat.get("iplane", plane),
- ),
- "mask_weights": mask_stat["lam"],
- }
- )
- if is_cell:
- cells.append(
- {
- **key,
- "mask_classification_method": "suite2p_default_classifier",
- "mask": mask_idx + mask_count,
- "mask_type": "soma",
- "confidence": cell_prob,
- }
- )
-
- self.insert1(key)
- self.Mask.insert(masks, ignore_extra_fields=True)
-
- if cells:
- MaskClassification.insert1(
- {
- **key,
- "mask_classification_method": "suite2p_default_classifier",
- },
- allow_direct_insert=True,
- )
- MaskClassification.MaskType.insert(
- cells, ignore_extra_fields=True, allow_direct_insert=True
- )
- elif method == "caiman":
- caiman_dataset = imaging_dataset
-
- # infer "segmentation_channel" - from params if available, else from caiman loader
- params = (ProcessingParamSet * ProcessingTask & key).fetch1("params")
- segmentation_channel = params.get(
- "segmentation_channel", caiman_dataset.segmentation_channel
- )
-
- masks, cells = [], []
- for mask in caiman_dataset.masks:
- masks.append(
- {
- **key,
- "segmentation_channel": segmentation_channel,
- "mask": mask["mask_id"],
- "mask_npix": mask["mask_npix"],
- "mask_center_x": mask["mask_center_x"],
- "mask_center_y": mask["mask_center_y"],
- "mask_center_z": mask["mask_center_z"],
- "mask_xpix": mask["mask_xpix"],
- "mask_ypix": mask["mask_ypix"],
- "mask_zpix": mask["mask_zpix"],
- "mask_weights": mask["mask_weights"],
- }
- )
- if caiman_dataset.cnmf.estimates.idx_components is not None:
- if mask["mask_id"] in caiman_dataset.cnmf.estimates.idx_components:
- cells.append(
- {
- **key,
- "mask_classification_method": "caiman_default_classifier",
- "mask": mask["mask_id"],
- "mask_type": "soma",
- }
- )
-
- self.insert1(key)
- self.Mask.insert(masks, ignore_extra_fields=True)
-
- if cells:
- MaskClassification.insert1(
- {
- **key,
- "mask_classification_method": "caiman_default_classifier",
- },
- allow_direct_insert=True,
- )
- MaskClassification.MaskType.insert(
- cells, ignore_extra_fields=True, allow_direct_insert=True
- )
- elif method == "extract":
- extract_dataset = imaging_dataset
- masks = [
- dict(
- **key,
- segmentation_channel=0,
- mask=mask["mask_id"],
- mask_npix=mask["mask_npix"],
- mask_center_x=mask["mask_center_x"],
- mask_center_y=mask["mask_center_y"],
- mask_center_z=mask["mask_center_z"],
- mask_xpix=mask["mask_xpix"],
- mask_ypix=mask["mask_ypix"],
- mask_zpix=mask["mask_zpix"],
- mask_weights=mask["mask_weights"],
- )
- for mask in extract_dataset.load_results()
- ]
-
- self.insert1(key)
- self.Mask.insert(masks, ignore_extra_fields=True)
- else:
- raise NotImplementedError(f"Unknown/unimplemented method: {method}")
-
-
-@schema
-class MaskClassificationMethod(dj.Lookup):
- """Available mask classification methods.
-
- Attributes:
- mask_classification_method (str): Mask classification method.
- """
-
- definition = """
- mask_classification_method: varchar(48)
- """
-
- contents = zip(["suite2p_default_classifier", "caiman_default_classifier"])
-
-
-@schema
-class MaskClassification(dj.Computed):
- """Classes assigned to each mask.
-
- Attributes:
- Segmentation (foreign key): Primary key from Segmentation.
- MaskClassificationMethod (foreign key): Primary key from
- MaskClassificationMethod.
- """
-
- definition = """
- -> Segmentation
- -> MaskClassificationMethod
- """
-
- class MaskType(dj.Part):
- """Type assigned to each mask.
-
- Attributes:
- MaskClassification (foreign key): Primary key from MaskClassification.
- Segmentation.Mask (foreign key): Primary key from Segmentation.Mask.
- MaskType (foreign key): Primary key from MaskType.
- confidence (float, optional): Confidence level of the mask classification.
- """
-
- definition = """
- -> master
- -> Segmentation.Mask
- ---
- -> MaskType
- confidence=null: float
- """
-
- def make(self, key):
- pass
-
-
-# -------------- Activity Trace --------------
-
-
-@schema
-class Fluorescence(dj.Computed):
- """Fluorescence traces.
-
- Attributes:
- Segmentation (foreign key): Primary key from Segmentation.
- """
-
- definition = """# Fluorescence traces before spike extraction or filtering
- -> Segmentation
- """
-
- class Trace(dj.Part):
- """Traces obtained from segmented region of interests.
-
- Attributes:
- Fluorescence (foreign key): Primary key from Fluorescence.
- Segmentation.Mask (foreign key): Primary key from Segmentation.Mask.
- scan.Channel.proj(fluo_channel='channel') (foreign key): The channel that
- this trace comes from.
- fluorescence (longblob): Fluorescence trace associated with this mask.
- neuropil_fluorescence (longblob, optional): Neuropil fluorescence trace.
- """
-
- definition = """
- -> master
- -> Segmentation.Mask
- -> scan.Channel.proj(fluo_channel='channel') # The channel that this trace comes from
- ---
- fluorescence : longblob # Fluorescence trace associated with this mask
- neuropil_fluorescence=null : longblob # Neuropil fluorescence trace
- """
-
- def make(self, key):
- """Populate the Fluorescence with the results parsed from analysis outputs."""
-
- method, imaging_dataset = get_loader_result(key, Curation)
-
- if method == "suite2p":
- suite2p_dataset = imaging_dataset
-
- # ---- iterate through all s2p plane outputs ----
- fluo_traces, fluo_chn2_traces = [], []
- for s2p in suite2p_dataset.planes.values():
- mask_count = len(fluo_traces) # increment mask id from all "plane"
- for mask_idx, (f, fneu) in enumerate(zip(s2p.F, s2p.Fneu)):
- fluo_traces.append(
- {
- **key,
- "mask": mask_idx + mask_count,
- "fluo_channel": 0,
- "fluorescence": f,
- "neuropil_fluorescence": fneu,
- }
- )
- if len(s2p.F_chan2):
- mask_chn2_count = len(
- fluo_chn2_traces
- ) # increment mask id from all planes
- for mask_idx, (f2, fneu2) in enumerate(
- zip(s2p.F_chan2, s2p.Fneu_chan2)
- ):
- fluo_chn2_traces.append(
- {
- **key,
- "mask": mask_idx + mask_chn2_count,
- "fluo_channel": 1,
- "fluorescence": f2,
- "neuropil_fluorescence": fneu2,
- }
- )
-
- self.insert1(key)
- self.Trace.insert(fluo_traces + fluo_chn2_traces)
- elif method == "caiman":
- caiman_dataset = imaging_dataset
-
- # infer "segmentation_channel" - from params if available, else from caiman loader
- params = (ProcessingParamSet * ProcessingTask & key).fetch1("params")
- segmentation_channel = params.get(
- "segmentation_channel", caiman_dataset.segmentation_channel
- )
-
- fluo_traces = []
- for mask in caiman_dataset.masks:
- fluo_traces.append(
- {
- **key,
- "mask": mask["mask_id"],
- "fluo_channel": segmentation_channel,
- "fluorescence": mask["inferred_trace"],
- }
- )
-
- self.insert1(key)
- self.Trace.insert(fluo_traces)
- elif method == "extract":
- extract_dataset = imaging_dataset
-
- fluo_traces = [
- {
- **key,
- "mask": mask_id,
- "fluo_channel": 0,
- "fluorescence": fluorescence,
- }
- for mask_id, fluorescence in enumerate(extract_dataset.T)
- ]
-
- self.insert1(key)
- self.Trace.insert(fluo_traces)
-
- else:
- raise NotImplementedError("Unknown/unimplemented method: {}".format(method))
-
-
-@schema
-class ActivityExtractionMethod(dj.Lookup):
- """Available activity extraction methods.
-
- Attributes:
- extraction_method (str): Extraction method.
- """
-
- definition = """# Activity extraction method
- extraction_method: varchar(32)
- """
-
- contents = zip(["suite2p_deconvolution", "caiman_deconvolution", "caiman_dff"])
-
-
-@schema
-class Activity(dj.Computed):
- """Inferred neural activity from fluorescence trace (e.g. dff, spikes, etc.).
-
- Attributes:
- Fluorescence (foreign key): Primary key from Fluorescence.
- ActivityExtractionMethod (foreign key): Primary key from
- ActivityExtractionMethod.
- """
-
- definition = """# Neural Activity
- -> Fluorescence
- -> ActivityExtractionMethod
- """
-
- class Trace(dj.Part):
- """Trace(s) for each mask.
-
- Attributes:
- Activity (foreign key): Primary key from Activity.
- Fluorescence.Trace (foreign key): Primary key from Fluorescence.Trace.
- activity_trace (longblob): Neural activity from fluorescence trace.
- """
-
- definition = """
- -> master
- -> Fluorescence.Trace
- ---
- activity_trace: longblob
- """
-
- @property
- def key_source(self):
- suite2p_key_source = (
- Fluorescence
- * ActivityExtractionMethod
- * ProcessingParamSet.proj("processing_method")
- & 'processing_method = "suite2p"'
- & 'extraction_method LIKE "suite2p%"'
- )
- caiman_key_source = (
- Fluorescence
- * ActivityExtractionMethod
- * ProcessingParamSet.proj("processing_method")
- & 'processing_method = "caiman"'
- & 'extraction_method LIKE "caiman%"'
- )
- return suite2p_key_source.proj() + caiman_key_source.proj()
-
- def make(self, key):
- """Populate the Activity with the results parsed from analysis outputs."""
-
- method, imaging_dataset = get_loader_result(key, Curation)
-
- if method == "suite2p":
- if key["extraction_method"] == "suite2p_deconvolution":
- suite2p_dataset = imaging_dataset
- # ---- iterate through all s2p plane outputs ----
- spikes = [
- dict(
- key,
- mask=mask_idx,
- fluo_channel=0,
- activity_trace=spks,
- )
- for mask_idx, spks in enumerate(
- s
- for plane in suite2p_dataset.planes.values()
- for s in plane.spks
- )
- ]
-
- self.insert1(key)
- self.Trace.insert(spikes)
- elif method == "caiman":
- caiman_dataset = imaging_dataset
-
- if key["extraction_method"] in (
- "caiman_deconvolution",
- "caiman_dff",
- ):
- attr_mapper = {
- "caiman_deconvolution": "spikes",
- "caiman_dff": "dff",
- }
-
- # infer "segmentation_channel" - from params if available, else from caiman loader
- params = (ProcessingParamSet * ProcessingTask & key).fetch1("params")
- segmentation_channel = params.get(
- "segmentation_channel", caiman_dataset.segmentation_channel
- )
-
- self.insert1(key)
- self.Trace.insert(
- dict(
- key,
- mask=mask["mask_id"],
- fluo_channel=segmentation_channel,
- activity_trace=mask[attr_mapper[key["extraction_method"]]],
- )
- for mask in caiman_dataset.masks
- )
- else:
- raise NotImplementedError("Unknown/unimplemented method: {}".format(method))
-
-
-@schema
-class ProcessingQualityMetrics(dj.Computed):
- """Quality metrics used to evaluate the results of the calcium imaging analysis pipeline.
-
- Attributes:
- Fluorescence (foreign key): Primary key from Fluorescence.
- """
-
- definition = """
- -> Fluorescence
- """
-
- class Mask(dj.Part):
- """Quality metrics used to evaluate the masks.
-
- Attributes:
- Fluorescence (foreign key): Primary key from Fluorescence.
- Segmentation.Mask (foreign key): Primary key from Segmentation.Mask.
- mask_area (float): Mask area in square micrometers.
- roundness (float): Roundness between 0 and 1. Values closer to 1 are rounder.
- """
-
- definition = """
- -> master
- -> Segmentation.Mask
- ---
- mask_area=null: float # Mask area in square micrometers.
- roundness: float # Roundness between 0 and 1. Values closer to 1 are rounder.
- """
-
- class Trace(dj.Part):
- """Quality metrics used to evaluate the fluorescence traces.
-
- Attributes:
- Fluorescence (foreign key): Primary key from Fluorescence.
- Fluorescence.Trace (foreign key): Primary key from Fluorescence.Trace.
- skewness (float): Skewness of the fluorescence trace.
- variance (float): Variance of the fluorescence trace.
- """
-
- definition = """
- -> master
- -> Fluorescence.Trace
- ---
- skewness: float # Skewness of the fluorescence trace.
- variance: float # Variance of the fluorescence trace.
- """
-
- def make(self, key):
- """Populate the ProcessingQualityMetrics table and its part tables."""
- from scipy.stats import skew
-
- (
- mask_xpixs,
- mask_ypixs,
- mask_weights,
- fluorescence,
- fluo_channels,
- mask_ids,
- mask_npix,
- px_height,
- px_width,
- um_height,
- um_width,
- ) = (Segmentation.Mask * scan.ScanInfo.Field * Fluorescence.Trace & key).fetch(
- "mask_xpix",
- "mask_ypix",
- "mask_weights",
- "fluorescence",
- "fluo_channel",
- "mask",
- "mask_npix",
- "px_height",
- "px_width",
- "um_height",
- "um_width",
- )
-
- norm_mean = lambda x: x.mean() / x.max()
- roundnesses = [
- norm_mean(np.linalg.eigvals(np.cov(x, y, aweights=w)))
- for x, y, w in zip(mask_xpixs, mask_ypixs, mask_weights)
- ]
-
- fluorescence = np.stack(fluorescence)
-
- self.insert1(key)
-
- self.Mask.insert(
- dict(key, mask=mask_id, mask_area=mask_area, roundness=roundness)
- for mask_id, mask_area, roundness in zip(
- mask_ids,
- mask_npix * (um_height / px_height) * (um_width / px_width),
- roundnesses,
- )
- )
-
- self.Trace.insert(
- dict(
- key,
- fluo_channel=fluo_channel,
- mask=mask_id,
- skewness=skewness,
- variance=variance,
- )
- for fluo_channel, mask_id, skewness, variance in zip(
- fluo_channels,
- mask_ids,
- skew(fluorescence, axis=1),
- fluorescence.std(axis=1),
- )
- )
-
-
-# ---------------- HELPER FUNCTIONS ----------------
-
-
-_table_attribute_mapper = {
- "ProcessingTask": "processing_output_dir",
- "Curation": "curation_output_dir",
-}
-
-
-def get_loader_result(key: dict, table: dj.Table) -> Callable:
- """Retrieve the processed imaging results from a suite2p or caiman loader.
-
- Args:
- key (dict): The `key` to one entry of ProcessingTask or Curation
- table (dj.Table): A datajoint table to retrieve the loaded results from (e.g.
- ProcessingTask, Curation)
-
- Raises:
- NotImplementedError: If the processing_method is different than 'suite2p' or
- 'caiman'.
-
- Returns:
- A loader object of the loaded results (e.g. suite2p.Suite2p or caiman.CaImAn,
- see element-interface for more information on the loaders.)
- """
- method, output_dir = (ProcessingParamSet * table & key).fetch1(
- "processing_method", _table_attribute_mapper[table.__name__]
- )
-
- output_path = find_full_path(get_imaging_root_data_dir(), output_dir)
-
- if method == "suite2p" or (
- method == "extract" and table.__name__ == "MotionCorrection"
- ):
- from element_interface import suite2p_loader
-
- loaded_dataset = suite2p_loader.Suite2p(output_path)
- elif method == "caiman":
- from element_interface import caiman_loader
-
- loaded_dataset = caiman_loader.CaImAn(output_path)
- elif method == "extract":
- from element_interface import extract_loader
-
- loaded_dataset = extract_loader.EXTRACT(output_path)
- else:
- raise NotImplementedError("Unknown/unimplemented method: {}".format(method))
-
- return method, loaded_dataset
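The `ProcessingQualityMetrics.make` method removed above scores each mask's roundness from the eigenvalues of the weighted covariance of its pixel coordinates. A minimal standalone sketch of that computation (NumPy only; the two blobs below are synthetic illustrations, not real segmentation output):

```python
import numpy as np

def mask_roundness(xpix, ypix, weights):
    """Ratio of mean to max eigenvalue of the weighted pixel covariance.

    Approaches 1.0 for a round mask and drops toward 0.5 (in 2D) as the
    mask becomes elongated.
    """
    eigvals = np.linalg.eigvals(np.cov(xpix, ypix, aweights=weights))
    return eigvals.mean() / eigvals.max()

# Synthetic example: an isotropic blob vs. a 5:1 elongated one.
rng = np.random.default_rng(0)
round_x, round_y = rng.normal(size=(2, 500))
long_x, long_y = rng.normal(size=(2, 500)) * [[5.0], [1.0]]
w = np.ones(500)

print(mask_roundness(round_x, round_y, w))  # close to 1
print(mask_roundness(long_x, long_y, w))    # noticeably smaller
```

The deleted code applies the same formula per mask via a comprehension over `(mask_xpix, mask_ypix, mask_weights)` fetched from `Segmentation.Mask`.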
diff --git a/element_calcium_imaging/plotting/draw_rois.py b/element_calcium_imaging/plotting/draw_rois.py
index 0b8481d4..16e7fb0f 100644
--- a/element_calcium_imaging/plotting/draw_rois.py
+++ b/element_calcium_imaging/plotting/draw_rois.py
@@ -215,12 +215,12 @@ def submit_annotations(n_clicks, annotation_list, value):
scan, imaging, yaml.safe_load(value), x_mask_li, y_mask_li
)
else:
- logger.warn(
+ logger.warning(
"Incorrect annotation list format. This is a known bug. Please draw a line anywhere on the image and click `Submit Curated Masks`. It will be ignored in the final submission but will format the list correctly."
)
return no_update
else:
- logger.warn("No annotations to submit.")
+ logger.warning("No annotations to submit.")
return no_update
else:
return no_update
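The `logger.warn` → `logger.warning` change above tracks the standard library: `Logger.warn` is a long-deprecated alias, and `warning()` is the canonical spelling. A quick self-contained check that the message still reaches the handler:

```python
import io
import logging

# Route a demo logger into a string buffer so the emitted
# warning can be inspected.
logger = logging.getLogger("draw_rois_demo")
logger.setLevel(logging.WARNING)
stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))

# Canonical spelling, as used in the updated draw_rois.py.
logger.warning("No annotations to submit.")
```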
diff --git a/element_calcium_imaging/version.py b/element_calcium_imaging/version.py
index aad7f6ef..d932c9ec 100644
--- a/element_calcium_imaging/version.py
+++ b/element_calcium_imaging/version.py
@@ -1,3 +1,3 @@
"""Package metadata."""
-__version__ = "0.10.1"
+__version__ = "0.11.0"
diff --git a/images/pipeline_imaging.svg b/images/pipeline_imaging.svg
deleted file mode 100644
index 5b010211..00000000
--- a/images/pipeline_imaging.svg
+++ /dev/null
@@ -1,583 +0,0 @@
-
\ No newline at end of file
diff --git a/images/pipeline_imaging_preprocess.svg b/images/pipeline_imaging_preprocess.svg
deleted file mode 100644
index 0deec313..00000000
--- a/images/pipeline_imaging_preprocess.svg
+++ /dev/null
@@ -1,667 +0,0 @@
-
\ No newline at end of file
diff --git a/notebooks/demo.ipynb b/notebooks/demo.ipynb
deleted file mode 100644
index 0ba3b442..00000000
--- a/notebooks/demo.ipynb
+++ /dev/null
@@ -1,773 +0,0 @@
-{
- "cells": [
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# DataJoint Element for Calcium Imaging\n",
- "\n",
- "+ This notebook briefly demonstrates using the open-source DataJoint Element for\n",
- "calcium imaging.\n",
- "+ For a detailed tutorial, please see the [tutorial notebook](./tutorial.ipynb)."
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "
\n",
- "
\n",
- "
\n",
- "
\n",
- "\n",
- "Left to right: Raw scans, Motion corrected scans, Cell segmentations, Calcium events"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Import dependencies"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {},
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "[2024-01-11 02:19:21,916][WARNING]: lab.Project and related tables will be removed in a future version of Element Lab. Please use the project schema.\n",
- "[2024-01-11 02:19:21,918][INFO]: Connecting root@fakeservices.datajoint.io:3306\n",
- "[2024-01-11 02:19:21,924][INFO]: Connected root@fakeservices.datajoint.io:3306\n"
- ]
- }
- ],
- "source": [
- "from tests.tutorial_pipeline import *"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### View workflow"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "image/svg+xml": [
- ""
- ],
- "text/plain": [
- ""
- ]
- },
- "execution_count": 2,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "(\n",
- " dj.Diagram(subject.Subject)\n",
- " + dj.Diagram(session.Session)\n",
- " + dj.Diagram(scan)\n",
- " + dj.Diagram(imaging)\n",
- ")"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Insert an entry in a manual table by calling the `insert()` method\n",
- "\n",
- "```python\n",
- "\n",
- "subject.Subject.insert1(\n",
- " dict(subject='subject1',\n",
- " subject_birth_date='2023-01-01',\n",
- " sex='U'\n",
- " )\n",
- ")\n",
- "```"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Automatically process data with the `populate()` method\n",
- "\n",
- "+ Once data is inserted into manual tables, the `populate()` function automatically runs the ingestion and processing routines. \n",
- "\n",
- "+ For example, to run Suite2p processing in the `Processing` table:\n",
- "\n",
- " ```python\n",
- " imaging.Processing.populate()\n",
- " ```"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Visualize processed data"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "application/vnd.jupyter.widget-view+json": {
- "model_id": "324b33aea44f404fb052dd4776fbb72c",
- "version_major": 2,
- "version_minor": 0
- },
- "text/plain": [
- "VBox(children=(HBox(children=(Dropdown(description='Result:', layout=Layout(display='flex', flex_flow='row', g…"
- ]
- },
- "execution_count": 3,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "from element_calcium_imaging.plotting.widget import main\n",
- "\n",
- "main(imaging)"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "For an in-depth tutorial please see the [tutorial notebook](./tutorial.ipynb)."
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.9.17"
- },
- "orig_nbformat": 4,
- "vscode": {
- "interpreter": {
- "hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1"
- }
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
diff --git a/setup.py b/setup.py
index 3249f404..6d2dce43 100644
--- a/setup.py
+++ b/setup.py
@@ -59,7 +59,7 @@ def fetch_and_parse_dependencies(url):
packages=find_packages(exclude=["contrib", "docs", "tests*"]),
scripts=[],
install_requires=[
- "datajoint>=0.13.0",
+ "datajoint>=0.14.0",
"ipykernel>=6.0.1",
"ipywidgets",
"plotly",
diff --git a/tests/tutorial_pipeline.py b/tests/tutorial_pipeline.py
index 42efc9f5..1c72c3be 100644
--- a/tests/tutorial_pipeline.py
+++ b/tests/tutorial_pipeline.py
@@ -3,7 +3,7 @@
from element_animal import subject
from element_animal.subject import Subject
from element_calcium_imaging import (
- imaging_no_curation as imaging,
+ imaging,
scan,
imaging_report,
plotting,
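The `imaging_no_curation as imaging` → `imaging` change means downstream code that must run against both the pre-0.11 layout and the unified module needs a fallback import. A hypothetical helper sketching that pattern (`first_importable` is an illustration, not part of the package):

```python
import importlib

def first_importable(package: str, candidates: tuple):
    """Import and return the first submodule of `package` that exists."""
    for name in candidates:
        try:
            return importlib.import_module(f"{package}.{name}")
        except ImportError:
            continue
    raise ImportError(f"none of {candidates} found in {package}")

# Usage against this repo (module names from the diff above):
#   imaging = first_importable("element_calcium_imaging",
#                              ("imaging", "imaging_no_curation"))
```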