Commit 334c738
Author: Clément POIRET
Message: chore: update docs and versions to v1.2.0
Parent: 375987a

File tree: 7 files changed (+187, -10 lines)

LAST_CHANGELOG.md (5 additions, 3 deletions)

@@ -2,10 +2,12 @@
 
 ## Software
 
-* Fixed installation on macOS,
-* Backends are now installed as "extras" (e.g. `pip install hsf[gpu]`)
+* Release of the HSF finetuning pipeline,
+* Bug fixes and optimizations,
 * Updated dependencies.
 
 ## Models
 
-* Nothing to report.
+* Models trained on hippocampal subfields from Clark et al. (2023) dataset (https://doi.org/10.1038/s41597-023-02449-9),
+* Models are now hosted on HuggingFace,
+* Bug fixes and optimizations.

README.rst (27 additions, 2 deletions)

@@ -4,7 +4,7 @@ Hippocampal Segmentation Factory (HSF)
 
 Exhaustive documentation available at: `hsf.rtfd.io <https://hsf.rtfd.io/>`_
 
-**Current Models version:** 3.0.0
+**Current Models version:** 4.0.0
 
 .. list-table::
    :header-rows: 1

@@ -220,6 +220,13 @@ Changelogs
 HSF
 ---
 
+**Version 1.2.0**
+
+* Released finetuning scripts,
+* New models trained on more data,
+* Models are now hosted on HuggingFace,
+* Bug fixes and optimizations.
+
 **Version 1.1.3**
 
 * Lower onnxruntime dependency to min 1.8.0

@@ -270,6 +277,12 @@ HSF
 Models
 ------
 
+**Version 4.0.0**
+
+* Models trained on hippocampal subfields from Clark et al. (2023) dataset (https://doi.org/10.1038/s41597-023-02449-9),
+* Models are now hosted on HuggingFace,
+* Bug fixes and optimizations.
+
 **Version 3.0.0**
 
 * More data (coming from the Human Connectome Project),

@@ -316,6 +329,18 @@ Authorship:
 
 If you use this work, please cite it as follows:
 
-``C. Poiret, et al. (2021). clementpoiret/HSF. Zenodo. https://doi.org/10.5281/zenodo.5527122``
+```
+@ARTICLE{10.3389/fninf.2023.1130845,
+  AUTHOR={Poiret, Clement and Bouyeure, Antoine and Patil, Sandesh and Grigis, Antoine and Duchesnay, Edouard and Faillot, Matthieu and Bottlaender, Michel and Lemaitre, Frederic and Noulhiane, Marion},
+  TITLE={A fast and robust hippocampal subfields segmentation: HSF revealing lifespan volumetric dynamics},
+  JOURNAL={Frontiers in Neuroinformatics},
+  VOLUME={17},
+  YEAR={2023},
+  URL={https://www.frontiersin.org/articles/10.3389/fninf.2023.1130845},
+  DOI={10.3389/fninf.2023.1130845},
+  ISSN={1662-5196},
+  ABSTRACT={The hippocampal subfields, pivotal to episodic memory, are distinct both in terms of cyto- and myeloarchitectony. Studying the structure of hippocampal subfields in vivo is crucial to understand volumetric trajectories across the lifespan, from the emergence of episodic memory during early childhood to memory impairments found in older adults. However, segmenting hippocampal subfields on conventional MRI sequences is challenging because of their small size. Furthermore, there is to date no unified segmentation protocol for the hippocampal subfields, which limits comparisons between studies. Therefore, we introduced a novel segmentation tool called HSF short for hippocampal segmentation factory, which leverages an end-to-end deep learning pipeline. First, we validated HSF against currently used tools (ASHS, HIPS, and HippUnfold). Then, we used HSF on 3,750 subjects from the HCP development, young adults, and aging datasets to study the effect of age and sex on hippocampal subfields volumes. Firstly, we showed HSF to be closer to manual segmentation than other currently used tools (p < 0.001), regarding the Dice Coefficient, Hausdorff Distance, and Volumetric Similarity. Then, we showed differential maturation and aging across subfields, with the dentate gyrus being the most affected by age. We also found faster growth and decay in men than in women for most hippocampal subfields. Thus, while we introduced a new, fast and robust end-to-end segmentation tool, our neuroanatomical results concerning the lifespan trajectories of the hippocampal subfields reconcile previous conflicting results.}
+}
+```
 
 This work licensed under MIT license was supported in part by the Fondation de France and the IDRIS/GENCI for the HPE Supercomputer Jean Zay.

docs/about/authorship.md (13 additions, 1 deletion)

@@ -10,6 +10,18 @@
 
 If you use this work, please cite it as follows:
 
-`C. Poiret, A. Bouyeure, S. Patil, C. Boniteau, A. Grigis, E. Duchesnay, & M. Noulhiane. (2021). clementpoiret/HSF: Hippocampal Segmentation Factory. Zenodo. https://doi.org/10.5281/zenodo.5527122`
+```
+@ARTICLE{10.3389/fninf.2023.1130845,
+  AUTHOR={Poiret, Clement and Bouyeure, Antoine and Patil, Sandesh and Grigis, Antoine and Duchesnay, Edouard and Faillot, Matthieu and Bottlaender, Michel and Lemaitre, Frederic and Noulhiane, Marion},
+  TITLE={A fast and robust hippocampal subfields segmentation: HSF revealing lifespan volumetric dynamics},
+  JOURNAL={Frontiers in Neuroinformatics},
+  VOLUME={17},
+  YEAR={2023},
+  URL={https://www.frontiersin.org/articles/10.3389/fninf.2023.1130845},
+  DOI={10.3389/fninf.2023.1130845},
+  ISSN={1662-5196},
+  ABSTRACT={The hippocampal subfields, pivotal to episodic memory, are distinct both in terms of cyto- and myeloarchitectony. Studying the structure of hippocampal subfields in vivo is crucial to understand volumetric trajectories across the lifespan, from the emergence of episodic memory during early childhood to memory impairments found in older adults. However, segmenting hippocampal subfields on conventional MRI sequences is challenging because of their small size. Furthermore, there is to date no unified segmentation protocol for the hippocampal subfields, which limits comparisons between studies. Therefore, we introduced a novel segmentation tool called HSF short for hippocampal segmentation factory, which leverages an end-to-end deep learning pipeline. First, we validated HSF against currently used tools (ASHS, HIPS, and HippUnfold). Then, we used HSF on 3,750 subjects from the HCP development, young adults, and aging datasets to study the effect of age and sex on hippocampal subfields volumes. Firstly, we showed HSF to be closer to manual segmentation than other currently used tools (p < 0.001), regarding the Dice Coefficient, Hausdorff Distance, and Volumetric Similarity. Then, we showed differential maturation and aging across subfields, with the dentate gyrus being the most affected by age. We also found faster growth and decay in men than in women for most hippocampal subfields. Thus, while we introduced a new, fast and robust end-to-end segmentation tool, our neuroanatomical results concerning the lifespan trajectories of the hippocampal subfields reconcile previous conflicting results.}
+}
+```
 
 This work licensed under MIT license was supported in part by the Fondation de France and the IDRIS/GENCI for the HPE Supercomputer Jean Zay.

docs/about/release-notes.md (13 additions, 0 deletions)

@@ -16,6 +16,13 @@ Current maintainers:
 
 ## HSF
 
+### Version 1.2.0 (2024-02-06)
+
+* Released finetuning scripts,
+* New models trained on more data,
+* Models are now hosted on HuggingFace,
+* Bug fixes and optimizations.
+
 ### Version 1.1.1 (2022-04-27)
 
 * Added whole-hippocampus segmentation

@@ -53,6 +60,12 @@ Current maintainers:
 
 ## Models
 
+### Version 4.0.0 (2024-02-06)
+
+* New models trained on more data,
+* Models integrate architecture improvements,
+* Models are now hosted on HuggingFace.
+
 ### Version 3.0.0 (2022-04-24)
 
 * More data (coming from the Human Connectome Project),

docs/index.md (7 additions, 3 deletions)

@@ -5,9 +5,9 @@
 <br>
 <font size="+2"><b>Hippocampal</b> <i>Segmentation</i> Factory</font>
 <br>
-<b>Current HSF version:</b> 1.1.3<br>
-<b>Built-in Models version:</b> 3.0.0<br>
-<b>Models in the Hub:</b> 6
+<b>Current HSF version:</b> 1.2.0<br>
+<b>Built-in Models version:</b> 4.0.0<br>
+<b>Models in the Hub:</b> 4
 </p>
 
 ____

@@ -82,6 +82,9 @@ which can be used in conjunction with pruned and int8 quantized models
 to deliver a much faster CPU inference speed (see [Hardware Acceleration](user-guide/configuration.md)
 section).
 
+Since v1.2.0, the complete training code is available at [hsf_train](https://github.com/clementpoiret/hsf_train).
+The `hsf_train` repository also contains easy to use scripts to train OR **finetune your own models**.
+
 ____
 
 HSF is distributed under the [MIT license](about/license.md):

@@ -93,5 +96,6 @@ HSF is distributed under the [MIT license](about/license.md):
 !!! note ""
     This work has been partly founded by the Fondation de France.
     HSF has been made possible by the IDRIS/GENCI with the HPE Jean Zay Supercomputer.
+    Latest models have been trained with the help of [Scaleway](https://www.scaleway.com/) and [Hugging Face](https://huggingface.co/).
 
 CEA Saclay | NeuroSpin | UNIACT-Inserm U1141

docs/user-guide/finetuning.md (new file, 121 additions)

# Finetuning

## Overview

Since v1.2.0, the complete training code is available at [hsf_train](https://github.com/clementpoiret/hsf_train).
The `hsf_train` repository also contains easy-to-use scripts to train or **finetune your own models**.

## Purpose & Use Cases

The goal of HSF is to provide foundational models that can be used as a starting point for further development and customization in hippocampal subfields segmentation.

We are aware that the provided models may not be perfect for every use case, which is why we provide the training code and the possibility to finetune the models.

If you want to use HSF on MRIs that are very different from the ones used for training, or if you want to segment the hippocampal subfields in a specific way, you are in the right place.

## Configuration

The `hsf_train` repository contains a `conf` directory, analogous to the `conf` directory in the `hsf` repository. It holds the configuration files for the training and finetuning scripts.

Here is the default configuration file for the finetuning script:

```yaml
mode: decoder  # encoder or decoder, defines which part of the model to finetune (see below)
depth: -1  # -1 for all layers, 0 for the first layer, 1 for the second layer, etc.
unfreeze_frequency: 4  # how often to unfreeze a layer
out_channels: 6  # number of output channels (subfields) in case you want to segment a different number of subfields
```
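The interplay of `depth` and `unfreeze_frequency` can be sketched in plain Python. This is a hypothetical illustration of a gradual-unfreezing schedule, not the actual `hsf_train` implementation; the function name and the exact unfreezing order are assumptions:

```python
def unfrozen_layers(epoch: int, depth: int, unfreeze_frequency: int,
                    n_layers: int) -> list[int]:
    """Indices of layers trainable at a given epoch (hypothetical sketch).

    depth == -1 unfreezes everything from the start; otherwise one more
    layer (starting from the last, index n_layers - 1) is unfrozen every
    `unfreeze_frequency` epochs, up to depth + 1 layers in total.
    """
    if depth == -1:
        return list(range(n_layers))
    n_unfrozen = min(depth + 1, 1 + epoch // unfreeze_frequency)
    return list(range(n_layers - n_unfrozen, n_layers))
```

For example, with `depth: 1` and `unfreeze_frequency: 4`, only the last layer trains for the first four epochs, then the last two layers train for the rest of the run.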
## Getting Started

### Installation

You will need to clone the repository, install PyTorch, and install the required packages:

```bash
git clone https://github.com/clementpoiret/hsf_train.git
cd hsf_train
conda create -n hsf_train python=3.10
conda activate hsf_train
conda install pytorch torchvision torchaudio cudatoolkit=12.1 -c pytorch -c nvidia
pip install -r requirements.txt
```

### Custom Dataset

Anyone who wants to finetune the models will need to provide their own dataset. The heavier the changes between the original training dataset and your custom dataset, the more data you will need.

You will need to adapt the example `custom_dataset.yaml` file to suit your needs, for example:

```yaml
main_path: "/mnt/data/hsf/"
output_path: "/mnt/hsf/models/"
batch_size: 1
num_workers: 16
pin_memory: True
train_ratio: .9
replace: False
k_sample: Null  # i.e. k = train_ratio * num_samples
train_val_test_idx: Null
train_on_all: False

datasets:
  clark:
    path: "hippocampus_clark_3T"
    ca_type: "1/23"
    patterns:
      right_t2:
        mri: "**/t2w_Hippocampus_right_ElasticSyN_crop.nii.gz"
        label: "t2w_Hippocampus_right_ElasticSyN_seg_crop.nii.gz"
      left_t2:
        mri: "**/t2w_Hippocampus_left_ElasticSyN_crop.nii.gz"
        label: "t2w_Hippocampus_left_ElasticSyN_seg_crop.nii.gz"
      averaged_right_t2:
        mri: "**/averaged_t2w_Hippocampus_right_ElasticSyN_crop.nii.gz"
        label: "averaged_t2w_Hippocampus_right_ElasticSyN_seg_crop.nii.gz"
      averaged_left_t2:
        mri: "**/averaged_t2w_Hippocampus_left_ElasticSyN_crop.nii.gz"
        label: "averaged_t2w_Hippocampus_left_ElasticSyN_seg_crop.nii.gz"
    labels:
      1: 1
      2: 2
      3: 3
      4: 4
      5: 5
      6: 6
      7: 7
    labels_names:
      1: "DG"
      2: "CA2/3"
      3: "CA1"
      4: "PRESUB"
      5: "UNCUS"
      6: "PARASUB"
      7: "KYST"
```
??? info "Comment on the `out_channels` parameter"
    You can see in the example above that we have 7 subfields. However, the `out_channels` parameter also includes the background (class 0), so you should set it to 8.
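The arithmetic is simple enough to spell out; a tiny hypothetical helper deriving `out_channels` from the `labels_names` mapping:

```python
# The +1 accounts for the background class (class 0), which is not
# listed among the subfields in `labels_names`.
labels_names = {1: "DG", 2: "CA2/3", 3: "CA1", 4: "PRESUB",
                5: "UNCUS", 6: "PARASUB", 7: "KYST"}

def required_out_channels(labels_names: dict) -> int:
    """out_channels = number of subfields + 1 background class."""
    return len(labels_names) + 1

out_channels = required_out_channels(labels_names)  # 7 subfields + background = 8
```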
### Finetuning

Then, you can run the finetuning script:

```bash
python finetune.py \
    datasets=custom_dataset \
    finetuning.out_channels=8 \
    models.lr=1e-3 \
    models.use_forgiving_loss=False
```
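The `key=value` arguments are Hydra-style dotted overrides: each one patches a single leaf of the nested configuration. A toy re-implementation of that idea (not Hydra itself; `apply_overrides` is purely illustrative):

```python
def apply_overrides(config: dict, overrides: list[str]) -> dict:
    """Apply `a.b=value` style overrides to a nested dict,
    in the spirit of the CLI call above (toy sketch, not Hydra)."""
    for override in overrides:
        dotted, raw = override.split("=", 1)
        *path, leaf = dotted.split(".")
        node = config
        for key in path:
            node = node.setdefault(key, {})
        # Interpret the literal forms the example uses: bool, int, float, str.
        if raw in ("True", "False"):
            value = raw == "True"
        else:
            try:
                value = int(raw)
            except ValueError:
                try:
                    value = float(raw)
                except ValueError:
                    value = raw
        node[leaf] = value
    return config

cfg = apply_overrides({}, ["finetuning.out_channels=8",
                           "models.lr=1e-3",
                           "models.use_forgiving_loss=False"])
```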
## Training Tips

Numerous factors can influence the quality of the finetuned model. Here are some tips to help you get the best results:

- **Number of manual segmentations**: The more manual segmentations you have, the better. The subtler the differences between our original dataset and your custom dataset, the less data you will need,
- **Learning rate**: We recommend starting with a learning rate of 1e-3,
- **Depth**: We recommend starting with a depth of -1, which means that all layers will be finetuned. If your changes are small, you can finetune fewer layers, or even only the final layer (depth = 0),
- **Number of epochs**: We recommend starting with 16 epochs if the depth is -1; otherwise, a rule of thumb is `epochs = (depth + 1) * unfreeze_frequency`,
- **Mode**: This depends on the type of changes you want to make. If you change the image modality (e.g. a contrast other than T1w or T2w, a magnetic field strength larger than 7T, etc.), you might want to finetune the encoder. If you want to segment a different number of subfields or use another segmentation guideline, you might want to finetune the decoder,
- **Unfreeze frequency**: We recommend starting with an unfreeze frequency of 4. Check the learning curves; you should reach a plateau before unfreezing the next layer.
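The epochs rule of thumb above is easy to encode; a small sketch (`suggested_epochs` is a hypothetical helper, with the 16-epoch default for `depth = -1` taken from the tips):

```python
def suggested_epochs(depth: int, unfreeze_frequency: int = 4) -> int:
    """Rule of thumb from the tips above: 16 epochs when all layers
    are finetuned (depth == -1), otherwise (depth + 1) * unfreeze_frequency."""
    if depth == -1:
        return 16
    return (depth + 1) * unfreeze_frequency
```

So finetuning only the final layer (`depth = 0`) with the default `unfreeze_frequency` of 4 would suggest roughly 4 epochs.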

tests/test_hsf.py (1 addition, 1 deletion)

@@ -16,7 +16,7 @@
 
 
 def test_version():
-    assert __version__ == '1.1.3'
+    assert __version__ == '1.2.0'
 
 
 # SETUP FIXTURES
