
Commit 20cbd8e

Author: Clément POIRET
Merge pull request #19 from clementpoiret/develop
v1.1.0 - Multispectrality and Better Models
2 parents 46590a7 + 8e1c30f, commit 20cbd8e

27 files changed: +537 additions, -172 deletions

.github/workflows/python-app.yml

Lines changed: 45 additions & 46 deletions

@@ -5,13 +5,12 @@ name: Python application

 on:
   push:
-    branches: [ master, develop ]
+    branches: [master, develop]
   pull_request:
-    branches: [ master, develop ]
+    branches: [master, develop]

 jobs:
   build:
-
     strategy:
       matrix:
         operating-system: ["ubuntu-latest"]

@@ -20,51 +19,51 @@ jobs:
     runs-on: ${{ matrix.operating-system }}

     steps:
-    - uses: actions/checkout@v2
+      - uses: actions/checkout@v2
+
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v2
+        with:
+          python-version: ${{ matrix.python-version }}
+
+      - name: Install Poetry
+        uses: snok/install-poetry@v1
+        with:
+          version: latest
+          virtualenvs-create: true
+          virtualenvs-in-project: false
+          virtualenvs-path: ~/.virtualenvs
+          installer-parallel: true

-    - name: Set up Python ${{ matrix.python-version }}
-      uses: actions/setup-python@v2
-      with:
-        python-version: ${{ matrix.python-version }}
+      - name: Cache Poetry virtualenv
+        uses: actions/cache@v1
+        id: cache
+        with:
+          path: ~/.virtualenvs
+          key: poetry-${{ secrets.CACHE_VERSION }}-${{ matrix.python-version }}-${{ matrix.operating-system }}-${{ hashFiles('**/poetry.lock') }}
+          restore-keys: |
+            poetry-${{ secrets.CACHE_VERSION }}-${{ matrix.python-version }}-${{ matrix.operating-system }}-${{ hashFiles('**/poetry.lock') }}

-    - name: Install Poetry
-      uses: snok/install-poetry@v1
-      with:
-        version: latest
-        virtualenvs-create: true
-        virtualenvs-in-project: false
-        virtualenvs-path: ~/.virtualenvs
-        installer-parallel: true
+      - name: Install Dependencies
+        run: poetry install
+        if: steps.cache.outputs.cache-hit != 'true'

-    - name: Cache Poetry virtualenv
-      uses: actions/cache@v1
-      id: cache
-      with:
-        path: ~/.virtualenvs
-        key: poetry-${{ secrets.CACHE_VERSION }}-${{ matrix.python-version }}-${{ matrix.operating-system }}-${{ hashFiles('**/poetry.lock') }}
-        restore-keys: |
-          poetry-${{ secrets.CACHE_VERSION }}-${{ matrix.python-version }}-${{ matrix.operating-system }}-${{ hashFiles('**/poetry.lock') }}
+      - name: Test & Coverage
+        run: poetry run coverage run --source=hsf -m pytest

-    - name: Install Dependencies
-      run: poetry install
-      if: steps.cache.outputs.cache-hit != 'true'
-
-    - name: Test & Coverage
-      run: poetry run coverage run --source=hsf -m pytest
-
-    - name: Publish Coverage on CodeClimate
-      uses: paambaati/[email protected]
-      env:
-        CC_TEST_REPORTER_ID: ${{ secrets.CC_TEST_REPORTER_ID }}
-      with:
-        coverageCommand: poetry run coverage xml
-        coverageLocations: |
-          ${{ github.workspace }}/coverage.xml:coverage.py
-        debug: true
+      - name: Publish Coverage on CodeClimate
+        uses: paambaati/[email protected]
+        env:
+          CC_TEST_REPORTER_ID: ${{ secrets.CC_TEST_REPORTER_ID }}
+        with:
+          coverageCommand: poetry run coverage xml
+          coverageLocations: |
+            ${{ github.workspace }}/coverage.xml:coverage.py
+          debug: true

-    - name: Publish Coverage on Codacy
-      uses: codacy/[email protected]
-      with:
-        project-token: ${{ secrets.CODACY_PROJECT_TOKEN }}
-        api-token: ${{ secrets.CODACY_API_TOKEN }}
-        coverage-reports: ${{ github.workspace }}/coverage.xml
+      - name: Publish Coverage on Codacy
+        uses: codacy/[email protected]
+        with:
+          project-token: ${{ secrets.CODACY_PROJECT_TOKEN }}
+          api-token: ${{ secrets.CODACY_API_TOKEN }}
+          coverage-reports: ${{ github.workspace }}/coverage.xml
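The cache key above combines a secret `CACHE_VERSION`, the Python version, the OS, and a hash of `poetry.lock`, so the virtualenv cache is invalidated whenever dependencies change. The same idea can be sketched in Python (illustrative only; GitHub's `hashFiles` helper hashes the matched files with SHA-256, but its exact digest construction may differ):

```python
import hashlib
import tempfile
from pathlib import Path
from typing import List

def cache_key(cache_version: str, python_version: str, os_name: str,
              lock_files: List[Path]) -> str:
    # Mirror the workflow's key: poetry-<cache>-<python>-<os>-<lockfile hash>.
    digest = hashlib.sha256()
    for path in sorted(lock_files):
        digest.update(path.read_bytes())
    return f"poetry-{cache_version}-{python_version}-{os_name}-{digest.hexdigest()}"

# Demo with a throwaway lock file.
with tempfile.TemporaryDirectory() as tmp:
    lock = Path(tmp) / "poetry.lock"
    lock.write_text('[[package]]\nname = "example"\n')
    key = cache_key("v1", "3.9", "ubuntu-latest", [lock])
```

Any edit to `poetry.lock` changes the digest and therefore the key, forcing a fresh `poetry install` on the next run.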

LAST_CHANGELOG.md

Lines changed: 8 additions & 2 deletions

@@ -2,5 +2,11 @@

 ## Software

-* Fixed batch_size not properly set,
-* Memory optimizations.
+* New multispectral mode to segment from both T1 and T2 images
+* Fixed ANTs overloading /tmp directory
+* Updated dependencies
+
+## Models
+
+* New single and bagging models with more data and less biases (Tails of T1w MRIs)
+* New sparse models with int8-quantized weights

README.rst

Lines changed: 43 additions & 4 deletions

@@ -4,7 +4,7 @@ Hippocampal Segmentation Factory (HSF)

 Exhaustive documentation available at: `hsf.rtfd.io <https://hsf.rtfd.io/>`_

-**Current Models version:** 2.0.0
+**Current Models version:** 3.0.0

 .. list-table::
    :header-rows: 1

@@ -115,8 +115,10 @@ To date, we propose 4 different segmentation algorithms (from the fastest to the

 - ``single_fast``: a segmentation is performed on the whole volume by only one model,
 - ``single_accurate``: a single model segments the same volume that has been augmented 20 times through TTA,
+- ``single_sq``: like ``single_accurate``, but using int8-quantized sparse models for fast and efficient inference,
 - ``bagging_fast``: a bagging ensemble of 5 models is used to segment the volume without TTA,
-- ``bagging_accurate``: a bagging ensemble of 5 models is used to segment the volume with TTA.
+- ``bagging_accurate``: a bagging ensemble of 5 models is used to segment the volume with TTA,
+- ``bagging_sq``: like ``bagging_accurate``, but using int8-quantized sparse models for fast and efficient inference.

 Finally, ``segmentation.ca_mode`` is a parameter that allows combining the CA1, CA2 and CA3 subfields.
 It is particularly useful when you want to segment low-resolution images where it makes no sense to

@@ -132,8 +134,10 @@ Compose your configuration from those groups (group=option)

 * augmentation: default
 * files: default
-* roiloc: default_t2iso
-* segmentation: bagging_accurate, bagging_fast, single_accurate, single_fast
+* hardware: deepsparse, onnxruntime
+* multispectrality: default
+* roiloc: default_corot2, default_t2iso
+* segmentation: bagging_accurate, bagging_fast, bagging_sq, single_accurate, single_fast, single_sq

 Override anything in the config (e.g. hsf roiloc.margin=[16,2,16])

@@ -189,13 +193,34 @@ Fields set with ??? are mandatory.

 * max_displacement: 4
 * locked_borders: 0

+multispectrality:
+
+* pattern: null
+* same_space: true
+* registration:
+    * type_of_transform: Affine
+
+hardware:
+
+* engine: onnxruntime
+* engine_settings:
+    * execution_providers:
+        - CUDAExecutionProvider
+        - CPUExecutionProvider
+    * batch_size: 1

 Changelogs
 ==========

 HSF
 ---

+**Version 1.1.0**
+
+* New optional multispectral mode to segment from both T1 AND T2 images
+* Bug fixes and optimizations
+
 **Version 1.0.1**

 * Fix batch size issue

@@ -226,6 +251,20 @@ HSF

 Models
 ------

+**Version 3.0.0**
+
+* More data (coming from the Human Connectome Project),
+* New sparse and int8-quantized models.
+
+**Version 2.1.1**
+
+* Fixed some tails in 3T CoroT2w images (MemoDev)
+
+**Version 2.1.0**
+
+* Corrected incorrect T1w labels used for training,
+* Trained on slightly more data (T1w @1.5T & 3T, T2w; Healthy, Epilepsy & Alzheimer)
+
 **Version 2.0.0**

 * Trained with more T1w and T2w MRIs,
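The test-time augmentation (TTA) behind the `*_accurate` methods can be pictured with a minimal sketch — not HSF's actual code — in which a volume is randomly flipped, segmented by a stand-in model, un-flipped, and averaged; the spread across predictions doubles as an uncertainty map:

```python
import numpy as np

def fake_model(volume: np.ndarray) -> np.ndarray:
    # Stand-in for a segmentation network: a simple intensity threshold.
    return (volume > 0.5).astype(np.float32)

def tta_segment(volume: np.ndarray, n_aug: int = 20, seed: int = 0):
    """Average predictions over random flips (invertible augmentations)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_aug):
        # Pick a random subset of axes to flip, then undo the flip on the prediction.
        axes = tuple(ax for ax in range(volume.ndim) if rng.random() < 0.5)
        aug = np.flip(volume, axis=axes) if axes else volume
        pred = fake_model(aug)
        preds.append(np.flip(pred, axis=axes) if axes else pred)
    stack = np.stack(preds)
    # Mean = consensus segmentation; std = voxel-wise uncertainty map.
    return stack.mean(axis=0), stack.std(axis=0)

vol = np.random.default_rng(1).random((8, 8, 8))
mean_seg, uncertainty = tta_segment(vol)
```

A real model is not flip-equivariant, so in practice the 20 augmented predictions disagree slightly and the averaged map is smoother and more robust than any single pass.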

docs/about/release-notes.md

Lines changed: 20 additions & 2 deletions

@@ -16,6 +16,11 @@ Current maintainers:

 ## HSF

+### Version 1.1.0 (N/A)
+
+* New optional multispectral mode to segment from both T1 AND T2 images
+* Bug fixes and optimizations
+
 ### Version 1.0.1 (2021-12-07)

 * Fixed batch size issue

@@ -44,13 +49,26 @@ Current maintainers:

 ## Models

+### Version 3.0.0 (2022-04-24)
+
+* More data (coming from the Human Connectome Project),
+* New sparse and int8-quantized models.
+
+### Version 2.1.1 (2022-03-03)
+
+* Fixed some tails in 3T CoroT2w images (MemoDev)
+
+### Version 2.1.0 (N/A)
+
+* Corrected incorrect T1w labels used for training,
+* Trained on slightly more data (T1w @1.5T & 3T, T2w; Healthy, Epilepsy & Alzheimer)
+
 ### Version 2.0.0 (2021-11-12)

 * Trained with more T1w and T2w MRIs,
 * Trained on more hippocampal sclerosis and Alzheimer's disease cases,
 * Updated training pipeline (hyperparameter tuning),
-* `single` models are now independant from bags,
-* `bagging` have `sparse` and `sparseqat` versions for sparsification and Quantization Aware Training.
+* `single` models are now independent from bags.

 ### Version 1.0.0 (2021-09-24)

docs/index.md

Lines changed: 2 additions & 2 deletions

@@ -5,8 +5,8 @@

 <br>
 <font size="+2"><b>Hippocampal</b> <i>Segmentation</i> Factory</font>
 <br>
-<b>Current HSF version:</b> 1.0.1<br>
-<b>Built-in Models version:</b> 2.0.0<br>
+<b>Current HSF version:</b> 1.1.0<br>
+<b>Built-in Models version:</b> 3.0.0<br>
 <b>Models in the Hub:</b> 4
 </p>

docs/model-hub.md

Lines changed: 6 additions & 6 deletions

@@ -48,10 +48,10 @@ All the models are using the same `ARUnet` architecture, which will be detailed

 Test-time augmentation also allows the computation of an uncertainty map to analyze
 the quality of the resulting segmentation.

-<!-- ??? example "`bagging_sparse` and `bagging_sparseqat` models"
+??? example "`single_sq` and `bagging_sq` models"

-    Those are the two most advanced methods. Inference is done with 5 models trained
-    on different subsets of the dataset (random sampling with replacement).
+    Those are the two most advanced methods. Inference is done with 1 or 5 models
+    trained on different subsets of the dataset (random sampling with replacement).

     This allows to have models with different learned properties, offering a better
     segmentation, but a slower inference compared to the classic `single_*` models.

@@ -60,16 +60,16 @@ All the models are using the same `ARUnet` architecture, which will be detailed

     the computational cost of the inference while retaining an optimal sub-model
     following the lottery ticket hypothesis.

-    The `sparseqat` method also includes Quantization Aware Training to improve even
+    The `sq` method also includes Quantization Aware Training to improve even
     more the efficiency of the inference.

     Those methods are appropriate for recent hardware supporting efficient computations
     on sparse vectors. Int8 Quantization is better used on hardware supporting fast
     int8 matrix computations. For example, on a CPU supporting the AVX512-VNNI vector
-    instruction set, your best bet is to use the `bagging_sparseqat` segmentation method.
+    instruction set, your best bet is to use the `bagging_sq` segmentation method.

     Test-time augmentation also allows the computation of an uncertainty map to analyze
-    the quality of the resulting segmentation. -->
+    the quality of the resulting segmentation.

 ## Third-party Segmentation Models
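The `*_sq` models above ship sparse, int8-quantized weights. A simplified stand-in for what int8 quantization means — symmetric post-training quantization of a weight tensor, as opposed to the Quantization Aware Training actually used for these models — looks like this:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: w ≈ scale * q, with q stored as int8."""
    # Map the largest magnitude to 127; guard against an all-zero tensor.
    scale = float(np.abs(w).max()) / 127.0 or 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([-1.0, -0.5, 0.0, 0.25, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)  # reconstruction error bounded by scale / 2
```

Storing `q` instead of `w` cuts weight memory by 4x versus float32, and hardware with fast int8 matrix units (e.g. AVX512-VNNI, as noted above) can run the matmuls directly on the quantized values.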

docs/user-guide/configuration.md

Lines changed: 28 additions & 3 deletions

@@ -22,15 +22,20 @@ conf

 │   │   onnxruntime.yaml
 │   │   deepsparse.yaml
 │
+└───multispectrality
+│   │   default.yaml
+│
 └───roiloc
 │   │   default_corot2.yaml
 │   │   default_t2iso.yaml
 │
 └───segmentation
 │       single_fast.yaml
 │       single_accurate.yaml
+│       single_sq.yaml
 │       bagging_fast.yaml
 │       bagging_accurate.yaml
+│       bagging_sq.yaml
 ```

 Groups can be selected with `group=option`. For example: `hsf segmentation=bagging_fast`

@@ -57,6 +62,26 @@ The following example will recursively search all `*T2w.nii.gz` files in the `~D

 hsf files.path="~/Datasets/MRI/" files.pattern="**/*T2w.nii.gz" files.mask_pattern="*T2w_bet_mask.nii.gz"
 ```

+### Multispectral mode
+
+Since v1.1.0, HSF supports a multispectral mode, where the segmentation is defined from a consensus between segmentations from both T1 and T2 images. Default parameters are defined in [`conf/multispectrality/default.yaml`](https://github.com/clementpoiret/HSF/blob/master/hsf/conf/multispectrality/default.yaml).
+
+- `pattern` defines how to find the alternative contrast of the subject.
+- `same_space` defines whether the alternative contrast is already in the same space as the main one. If not, a registration will be performed with the `registration.*` arguments.
+- `registration` holds the parameters given to [`ants.registration`](https://antspy.readthedocs.io/en/latest/registration.html), such as `type_of_transform`.
+
+You can use the multispectral mode as in the following example. For each T2w MRI, it will search for a local T1w MRI in the same folder, then register the T1 to the T2 image using an affine registration (default behavior) with the meansquares metric.
+
+```sh
+hsf files.path="~/Datasets/MRI/" files.pattern="**/*T2w.nii.gz" multispectrality.pattern="T1w_hires.nii.gz" multispectrality.same_space=False +multispectrality.registration.aff_metric="meansquares"
+```
+
+!!! warning "Multispectral mode may not always be the best choice"
+    Because it comes from a consensus between T1 and T2 images, it is highly dependent on the quality of the registration.
+    If hippocampi do not overlap well, the consensus will be biased.
+
+    A good choice might be to manually register the images, perform a quality check, then use the multispectral mode while passing `same_space=True`.
+
 ### Preprocessing pipeline

 The preprocessing pipeline is kept as minimal as possible.

@@ -237,9 +262,9 @@ hsf hardware=deepsparse

 For example, models can be pruned (e.g. weights are removed to obtain an optimal sub-model),
 or Quantized (e.g. weights, biases and activations are quantized to 8-bit).

-<!-- Since HSF v1.0.0, we provide sparsified and quantized models. Therefore, to fully benefit from
+Since HSF v1.1.0, we provide sparsified and quantized models. Therefore, to fully benefit from
 DeepSparse, you can use our sparsified bootstrapped models trained with Quantization Aware Training (QAT):

 ```sh
-hsf hardware=deepsparse segmentation=bagging_sparseqat
-``` -->
+hsf hardware=deepsparse segmentation=bagging_sq
+```
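The "consensus between T1 and T2 segmentations" described in the new multispectral docs can be illustrated with a toy sketch — this is not HSF's implementation, just one plausible consensus rule (averaging two soft segmentation maps and thresholding), which also makes clear why both maps must already be registered into the same space:

```python
import numpy as np

def consensus_mask(seg_t1: np.ndarray, seg_t2: np.ndarray,
                   thr: float = 0.5) -> np.ndarray:
    """Average two soft segmentations and threshold the result.
    Both maps must already be in the same space (i.e. registered)."""
    return ((seg_t1 + seg_t2) / 2.0 >= thr).astype(np.uint8)

# Toy 2x2 probability maps from a T1w and a T2w segmentation.
t1_seg = np.array([[0.9, 0.6], [0.2, 0.1]])
t2_seg = np.array([[0.8, 0.3], [0.4, 0.0]])
mask = consensus_mask(t1_seg, t2_seg)
```

Note how the voxel where the two contrasts disagree (0.6 vs 0.3) falls below the threshold: a misregistered pair would produce many such disagreements, which is exactly the bias the warning admonition above describes.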

docs/user-guide/installation.md

Lines changed: 2 additions & 1 deletion

@@ -42,10 +42,11 @@ You'll be able to install HSF from PyPI by following the instructions:

 ONNXRuntime 1.8 requires at least CUDA 11.0.3 and cuDNN 8.0.4. For newer versions, please
 check the [ONNXRuntime documentation](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html)

-Then run the following commands:
+Then run the following commands (please note that you may sometimes need to uninstall `onnxruntime` for the GPU to be detected correctly):

 ```shell
 pip install hsf
+pip uninstall onnxruntime  # If needed
 pip install onnxruntime-gpu
 ```
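After installing `onnxruntime-gpu`, you can check which execution providers ONNX Runtime actually sees (`get_available_providers` is part of the ONNX Runtime Python API; the snippet below degrades gracefully when the package is absent):

```python
def available_providers():
    """List ONNX Runtime execution providers, or None if it is not installed."""
    try:
        import onnxruntime as ort
    except ImportError:
        return None
    return ort.get_available_providers()

providers = available_providers()
if providers is not None and "CUDAExecutionProvider" not in providers:
    # This is the symptom the docs warn about: the CPU-only package shadows the GPU one.
    print("GPU not visible; try: pip uninstall onnxruntime, then pip install onnxruntime-gpu")
```

If `CUDAExecutionProvider` is missing even with `onnxruntime-gpu` installed, the leftover CPU-only `onnxruntime` package is the usual culprit, hence the `pip uninstall` step added in this commit.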

hsf/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-__version__ = '1.0.1'
+__version__ = '1.1.0'
