## Using contrast-agnostic weights with nnUNet

This guide explains how to train or finetune nnUNet models from the contrast-agnostic pre-trained weights. Ideal use cases include training/finetuning on any spinal-cord-related segmentation task (e.g. lesions, rootlets).

### Step 1: Download the pretrained weights

Download the pretrained weights from the [latest release](https://github.com/sct-pipeline/contrast-agnostic-softseg-spinalcord/releases) of the contrast-agnostic model, i.e. the `.zip` file named `model_contrast_agnostic_<date>_nnunet_compatible.zip`.

> [!WARNING]
> Only download the model with the `nnunet_compatible` suffix. If a release does not have this suffix, the model weights are not directly compatible with nnUNet.


### Step 2: Modify the plans file

In your `$nnUNet_preprocessed/<dataset_name_or_id>/nnUNetPlans.json`, change the following keys of the `3d_fullres` configuration (a script to apply both edits is sketched after the list):

- `strides`:

```json
[[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]]
```

- `patch_size`:

```json
[64, 192, 320]
```
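
The exact nesting of these keys inside `nnUNetPlans.json` can differ between nnUNet versions (`patch_size` sits directly under the `3d_fullres` entry, while `strides` may be nested inside the architecture settings). The sketch below is one way to apply both edits, assuming nothing beyond the Python standard library; the dataset folder name is a hypothetical placeholder, and `strides` is located by a recursive search rather than a hard-coded path.

```python
import json
import os

# Hypothetical dataset folder; replace with your own <dataset_name_or_id>.
plans_path = os.path.join(
    os.environ["nnUNet_preprocessed"], "Dataset101_MySpinalTask", "nnUNetPlans.json"
)

NEW_STRIDES = [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]]
NEW_PATCH_SIZE = [64, 192, 320]

def replace_key(node, key, value):
    """Recursively replace every occurrence of `key` in nested dicts/lists."""
    if isinstance(node, dict):
        for k in node:
            if k == key:
                node[k] = value
            else:
                replace_key(node[k], key, value)
    elif isinstance(node, list):
        for item in node:
            replace_key(item, key, value)

with open(plans_path) as f:
    plans = json.load(f)

cfg = plans["configurations"]["3d_fullres"]
cfg["patch_size"] = NEW_PATCH_SIZE
# `strides` may live under the architecture settings of the 3d_fullres entry,
# so search for it anywhere inside the configuration instead of hard-coding a path.
replace_key(cfg, "strides", NEW_STRIDES)

with open(plans_path, "w") as f:
    json.dump(plans, f, indent=4)
```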

### Step 3: Train/Finetune the nnUNet model on your task

Pass the path to the downloaded pretrained weights via the `-pretrained_weights` flag:

```bash
nnUNetv2_train <dataset_name_or_id> <configuration> <fold> -pretrained_weights <path_to_pretrained_weights> -tr <nnUNetTrainer_Xepochs>
```
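
Before launching a long training run, it can help to sanity-check that the downloaded checkpoint actually loads. This is a minimal sketch under two assumptions: the unzipped release contains an nnUNet checkpoint file (here a hypothetical `checkpoint_final.pth` path), and the checkpoint stores its parameters under the `network_weights` key, as nnUNet v2 checkpoints typically do.

```python
import torch

# Hypothetical path; point this at the checkpoint inside the unzipped release.
ckpt_path = "model_contrast_agnostic_nnunet_compatible/checkpoint_final.pth"

# weights_only=False is needed on recent PyTorch versions because nnUNet
# checkpoints contain trainer metadata in addition to raw tensors.
ckpt = torch.load(ckpt_path, map_location="cpu", weights_only=False)

print(list(ckpt.keys()))               # expected to include 'network_weights'
print(len(ckpt["network_weights"]))    # number of parameter tensors
```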

> [!IMPORTANT]
> * Training/finetuning with the contrast-agnostic weights only works for 3D nnUNet models.
> * Ensure that all images are in RPI orientation before running `nnUNetv2_plan_and_preprocess`. The updated `patch_size` refers to patches in RPI orientation; if images are in a different orientation, the patch size may be sub-optimal. A quick orientation check is sketched below.
> * Set `X` in `nnUNetTrainer_Xepochs` to a value lower than 1000 (nnUNet ships reduced-length trainer variants, e.g. `nnUNetTrainer_250epochs`). Finetuning does not need as many epochs as training from scratch: the contrast-agnostic model has already been trained on a large number of spinal cord images, so it should converge well before 1000 epochs.
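
The orientation check mentioned above can be done with `nibabel` (an assumption; it is not part of this guide's requirements). The sketch below uses `nib.aff2axcodes`, which names the anatomical direction each voxel axis points towards; under that convention an RPI image yields `('R', 'P', 'I')`. The input filename is a hypothetical placeholder.

```python
import nibabel as nib

# Hypothetical input image.
img = nib.load("sub-01_T2w.nii.gz")

# aff2axcodes reports the direction each voxel axis points towards,
# e.g. ('R', 'P', 'I') for an RPI-oriented image.
axcodes = nib.aff2axcodes(img.affine)
print(axcodes)

if axcodes != ("R", "P", "I"):
    print("Not in RPI orientation; reorient before preprocessing, "
          "e.g. with SCT: sct_image -i <image> -setorient RPI")
```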
