DeepLearning_PrepareData
This pipeline prepares images generated by Clinica to be used with the PyTorch deep learning library [Paszke et al., 2019]. Three types of tensors are proposed: 3D images, 3D patches or 2D slices.
Currently, only outputs from the t1-linear pipeline can be processed. This pipeline was designed as a prerequisite for the deep learning classification algorithms presented in [Wen et al., 2020] and showcased in the AD-DL framework.
You need to have performed the t1-linear pipeline on your T1-weighted MRI.
If you installed the core of Clinica, this pipeline needs no further dependencies.
The pipeline can be run with the following command line:
```shell
clinica run deeplearning-prepare-data <caps_directory> <tensor_format>
```
where:
- `caps_directory` is the folder containing the results of the `t1-linear` pipeline and the output of the present command, both in a CAPS hierarchy.
- `tensor_format` is the format of the extracted tensors. You can choose between `image` to convert the whole 3D image to a PyTorch tensor, `patch` to extract 3D patches and `slice` to extract 2D slices from the image.
By default the features are extracted from the cropped image (see the documentation of the t1-linear pipeline). You can deactivate this behaviour with the --use_uncropped_image flag.
Pipeline options if you use patch extraction:
- `--patch_size`: patch size. Default value: `50`.
- `--stride_size`: stride size. Default value: `50`.
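As a minimal sketch of how these two options interact, the helper below (illustrative only, not Clinica's actual code) counts the patches produced per dimension with the same rule as `torch.Tensor.unfold`. It assumes the cropped image size of 169 x 208 x 179 voxels mentioned in the `t1-linear` documentation.

```python
# Illustrative sketch (not part of Clinica): how patch size and stride
# determine the number of 3D patches extracted from one image.
def n_patches_per_dim(dim_size: int, patch_size: int, stride: int) -> int:
    """Number of patches along one dimension (same rule as torch.Tensor.unfold)."""
    if dim_size < patch_size:
        return 0
    return (dim_size - patch_size) // stride + 1

def n_patches(shape, patch_size=50, stride=50):
    """Total number of 3D patches for a volume of the given shape."""
    total = 1
    for d in shape:
        total *= n_patches_per_dim(d, patch_size, stride)
    return total

# Assuming the cropped t1-linear image (169 x 208 x 179 voxels), the
# default patch size and stride yield 3 * 4 * 3 patches per image.
print(n_patches((169, 208, 179)))  # → 36
```

Note that with the default `stride_size` equal to `patch_size`, patches do not overlap; a smaller stride produces overlapping patches and a larger number of tensors.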
Pipeline options if you use slice extraction:
- `--slice_direction`: slice direction. You can choose between `0` (sagittal plane), `1` (coronal plane) or `2` (axial plane). Default value: `0`.
- `--slice_mode`: slice mode. You can choose between `rgb` (will save the slice in three identical channels) or `single` (will save the slice in a single channel). Default value: `rgb`.
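To make the two options concrete, here is a minimal sketch (not Clinica's actual code) of slicing a 3D volume along `--slice_direction` and applying `--slice_mode`; the volume shape is a hypothetical example.

```python
import numpy as np

# Illustrative sketch (not part of Clinica): extract one 2D slice from a
# 3D volume and format its channels according to the slice mode.
def extract_slice(volume: np.ndarray, direction: int, index: int, mode: str = "rgb") -> np.ndarray:
    # direction: 0 = sagittal, 1 = coronal, 2 = axial
    sl = np.take(volume, index, axis=direction)
    if mode == "single":
        return sl[np.newaxis]                         # shape (1, H, W)
    return np.repeat(sl[np.newaxis], 3, axis=0)       # three identical channels, shape (3, H, W)

volume = np.zeros((169, 208, 179))  # hypothetical volume shape
print(extract_slice(volume, direction=0, index=80, mode="rgb").shape)     # (3, 208, 179)
print(extract_slice(volume, direction=2, index=90, mode="single").shape)  # (1, 169, 208)
```

The `rgb` mode simply replicates the slice three times, which lets the tensors be fed to networks pretrained on three-channel natural images.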
!!! note "Regarding the default values"
    When using patch or slice extraction, default values were set according to [Wen et al., 2020].
!!! note
    The arguments common to all Clinica pipelines are described in Interacting with Clinica.
!!! tip
    Do not hesitate to type `clinica run deeplearning-prepare-data --help` to see the full list of parameters.
In the following subsections, files with the .pt extension denote tensors in PyTorch format.
The full list of output files can be found in the ClinicA Processed Structure (CAPS) Specification.
Results are stored in the following folder of the CAPS hierarchy: `subjects/<subject_id>/<session_id>/deeplearning_prepare_data/image_based/t1_linear`.
The main output files are:
- `<source_file>_space-MNI152NLin2009cSym[_desc-Crop]_res-1x1x1_T1w.pt`: tensor version of the 3D T1w image registered to the `MNI152NLin2009cSym` template and optionally cropped.
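As a quick sketch of what a `.pt` file contains, the snippet below saves and reloads a dummy tensor with PyTorch's standard serialization; the shape is a hypothetical example, not the exact shape written by the pipeline.

```python
import torch

# Minimal sketch: a .pt file is a serialized torch object, here a tensor.
# The shape below is illustrative (hypothetical cropped T1w volume with a
# leading channel dimension), not a guarantee of the pipeline's layout.
dummy = torch.zeros(1, 169, 208, 179)
torch.save(dummy, "example_T1w.pt")
reloaded = torch.load("example_T1w.pt")
print(reloaded.shape)  # torch.Size([1, 169, 208, 179])
```

Tensors loaded this way can be passed directly to a `torch.utils.data.Dataset` for training.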
Results are stored in the following folder of the CAPS hierarchy: `subjects/<subject_id>/<session_id>/deeplearning_prepare_data/patch_based/t1_linear`.
The main output files are:
- `<source_file>_space-MNI152NLin2009cSym[_desc-Crop]_res-1x1x1_patchsize-<N>_stride-<M>_patch-<i>_T1w.pt`: tensor version of the `<i>`-th 3D isotropic patch of size `<N>` with a stride of `<M>`. Each patch is extracted from the T1w image registered to the `MNI152NLin2009cSym` template and optionally cropped.
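The naming pattern above can be assembled programmatically; this hypothetical helper (not part of Clinica, with made-up subject and session labels) builds the expected path of one patch tensor from the entities listed in this section.

```python
# Hypothetical helper (not part of Clinica): build the CAPS path of one
# patch tensor from the entities described in the documentation above.
def patch_tensor_path(subject: str, session: str, source_file: str,
                      patch_size: int, stride: int, index: int,
                      cropped: bool = True) -> str:
    desc = "_desc-Crop" if cropped else ""
    filename = (f"{source_file}_space-MNI152NLin2009cSym{desc}"
                f"_res-1x1x1_patchsize-{patch_size}_stride-{stride}"
                f"_patch-{index}_T1w.pt")
    return (f"subjects/{subject}/{session}/deeplearning_prepare_data/"
            f"patch_based/t1_linear/{filename}")

# Illustrative subject/session labels:
print(patch_tensor_path("sub-CLNC01", "ses-M00", "sub-CLNC01_ses-M00", 50, 50, 0))
```

Such a helper can be handy when writing a data loader that iterates over all patches of a subject.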
Results are stored in the following folder of the CAPS hierarchy: `subjects/<subject_id>/<session_id>/deeplearning_prepare_data/slice_based/t1_linear`.
The main output files are:
- `<source_file>_space-MNI152NLin2009cSym[_desc-Crop]_res-1x1x1_axis-{sag|cor|axi}_channel-{single|rgb}_T1w.pt`: tensor version of the `<i>`-th 2D slice in the sagittal, coronal or axial plane using three identical channels (`rgb`) or one channel (`single`). Each slice is extracted from the T1w image registered to the `MNI152NLin2009cSym` template and optionally cropped.
You can now perform classification based on deep learning using the AD-DL framework presented in [Wen et al., 2020].
!!! cite "Example of paragraph"
    These results have been obtained using the `deeplearning-prepare-data` pipeline of Clinica [Routier et al.; Wen et al., 2020]. More precisely,

    - 3D images
    - 3D patches with a patch size of `<patch_size>` and a stride size of `<stride_size>`
    - 2D slices in the {sagittal | coronal | axial} plane saved in {three identical channels | a single channel}

    were extracted and converted to PyTorch tensors [[Paszke et al., 2019](https://papers.nips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library)].
!!! tip
    Easily access the papers cited on this page on Zotero.