-
I've been using MONAI to train a segmentation model before importing it into MONAILabel. Evaluating the same model on the same image using both MONAI and MONAILabel's generic segmentation app inferrer (ensuring the same pre- and post-transforms) yields different results, ranging from minute pixel-level differences to segmentations with zero overlap.

I compared the pre- and post-transforms used within MONAILabel's inferrer against those used with the MONAI inferrer (the transform listings themselves are not reproduced here). A possible source of difference is the padding applied by the MONAILabel pre-transforms. Despite this, however, I found that significant differences appeared even on images that would not have been padded. I was wondering what further steps can be taken to reduce the differences in model output?
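For context, the kind of pre-/post-transform pairing the generic segmentation app sets up looks roughly like the sketch below. The transform choices and parameter values are illustrative assumptions, not the actual listings from this report, and the `Restored` import path may vary by MONAILabel version:

```python
from monai.transforms import (
    Activationsd,
    AsDiscreted,
    EnsureChannelFirstd,
    LoadImaged,
    ScaleIntensityRanged,
    Spacingd,
)
from monailabel.transform.post import Restored  # import path may differ across MONAILabel versions

# Pre-transforms: everything the network input goes through.
pre_transforms = [
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),
    Spacingd(keys="image", pixdim=(1.0, 1.0, 1.0)),
    ScaleIntensityRanged(keys="image", a_min=-175, a_max=250, b_min=0.0, b_max=1.0, clip=True),
]

# Post-transforms: turn logits into a label map, then map it back onto
# the original image grid with Restored.
post_transforms = [
    Activationsd(keys="pred", softmax=True),
    AsDiscreted(keys="pred", argmax=True),
    Restored(keys="pred", ref_image="image"),
]
```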
-
What is provided in the example app is not going to work for every kind of dataset/image/use-case. It is only a reference implementation for one kind of image/dataset. As the data changes, you might need something more in your pre/post transforms. For example, you don't have to use Restored; in the example app, Restored may not be enough for all kinds of images. How you restore the pred based on the original image actually depends on the pre-transforms. In some cases, inverting the pre-transforms is a simple trick to roll everything back, but for some cases this might not be enough. You can also override the inverse transforms in your own app.
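For reference, the "inverse trick" mentioned above can be expressed in plain MONAI with Invertd, which replays the recorded pre-transform operations backwards on the prediction. A minimal sketch; the transform list and pixdim values are illustrative:

```python
from monai.transforms import Compose, EnsureChannelFirstd, Invertd, LoadImaged, Spacingd

# Illustrative pre-transforms; each invertible transform records the
# metadata needed to undo itself as it runs.
pre_transforms = Compose([
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),
    Spacingd(keys="image", pixdim=(1.5, 1.5, 2.0)),
])

# Invertd replays the pre-transforms in reverse on "pred", so the
# prediction lands back on the original image grid.
post_transforms = Compose([
    Invertd(
        keys="pred",
        transform=pre_transforms,
        orig_keys="image",
        nearest_interp=True,  # nearest-neighbour resampling suits label maps
    ),
])
```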
-
Thanks for reporting this, @egrabke. This is a very good question. In addition to what @SachidanandAlle says, I also wanted to ask whether the training you're running in MONAI is patch-based, as it is in the Segmentation App in MONAI Label (https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/segmentation/lib/train.py#L102). In the infer.py file of the Segmentation App in MONAI Label you can see that the inferrer used there is a sliding window one (https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/segmentation/lib/infer.py#L65); this is used when the training is patch-based. Please let us know whether both configurations (MONAI vs MONAI Label) are the same in those aspects: type of training (patch-based or whole image) and type of inferrer.
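For concreteness, the sliding-window setup being referred to is along these lines. A self-contained sketch; the roi_size, batch, and overlap values are placeholders, and the one-layer network is a stand-in:

```python
import torch
from monai.inferers import SlidingWindowInferer

# Stand-ins so the sketch runs on its own.
model = torch.nn.Conv3d(1, 2, kernel_size=1)
image = torch.rand(1, 1, 160, 160, 64)  # (B, C, H, W, D)

# The network only ever sees roi_size patches; the patch outputs are
# stitched back together, blended where neighbouring windows overlap.
inferer = SlidingWindowInferer(roi_size=(160, 160, 32), sw_batch_size=4, overlap=0.25)

with torch.no_grad():
    pred = inferer(inputs=image, network=model)  # shape: (1, 2, 160, 160, 64)
```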
-
Thank you both for the comments.

@SachidanandAlle: Thank you for bringing the invertible transforms to my attention; I wasn't sure how to invert padding and can make note of that. Despite this, though, even when I don't have spatial padding in the MONAILabel pre_transforms, I still need the Restored transform to produce an output, and I'm not entirely sure why. Would any other resizing be done during inference beyond what's listed in apps/segmentation/lib/infer.py?

@diazandr3s: The training in MONAI is patch-based. The patch size for MONAI training and the sliding window size for both the MONAI and MONAILabel inferences were all set to the same values (full image x and y, with cropped z), and the sliding window parameters (roi_size, sw_batch_size, overlap) were set identically between the two, with the rest kept at their defaults. The only difference is that for MONAI I used sliding_window_inference from monai.inferers, whereas for MONAILabel I used SlidingWindowInferer from monai.inferers. I haven't done any training with MONAILabel, though, and have made sure to keep the 3D Slicer Auto-Update Model box unchecked.
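For what it's worth, with identical parameters the two entry points should agree: SlidingWindowInferer is essentially a pre-configured wrapper around sliding_window_inference. A quick self-contained check, with placeholder sizes and a stand-in network:

```python
import torch
from monai.inferers import SlidingWindowInferer, sliding_window_inference

model = torch.nn.Conv3d(1, 2, kernel_size=1)  # stand-in network
image = torch.rand(1, 1, 128, 128, 48)
roi_size, sw_batch_size, overlap = (128, 128, 32), 4, 0.25

with torch.no_grad():
    # Functional form, as called from the MONAI script.
    out_fn = sliding_window_inference(image, roi_size, sw_batch_size, model, overlap=overlap)
    # Class form, as configured in the MONAILabel app.
    out_cls = SlidingWindowInferer(roi_size, sw_batch_size, overlap=overlap)(image, model)

assert torch.allclose(out_fn, out_cls)  # identical parameters -> identical outputs
```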
-
So I copied the MONAI transforms directly into the MONAILabel pre-transforms and then overrode the inverse_transforms method (the override itself is not reproduced here). I still need Restored, but the model now outputs exactly as expected! Thank you both for your help!
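Since the override isn't shown above, here is a minimal sketch of what it might look like. It assumes MONAILabel's InferTask API, where (if I recall correctly) returning an empty list from inverse_transforms asks for all invertible pre-transforms to be inverted on the prediction; class names, method signatures, and transform choices are illustrative and may differ across MONAILabel versions:

```python
from monai.transforms import EnsureChannelFirstd, LoadImaged, Spacingd

class MyInfer(InferTask):  # the app's existing infer task class (assumed name)
    def pre_transforms(self, data=None):
        # The same transforms as used for MONAI training/validation.
        return [
            LoadImaged(keys="image"),
            EnsureChannelFirstd(keys="image"),
            Spacingd(keys="image", pixdim=(1.0, 1.0, 1.0)),
        ]

    def inverse_transforms(self, data=None):
        # [] = invert every invertible pre-transform on "pred"
        # (None, the default, would disable inversion entirely).
        return []
```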