Replies: 2 comments 6 replies
-
Simple question: let's say you are not using MONAILabel. With the same data you have, are you able to train any CNN model (say a UNet) for segmentation? What number of epochs and what accuracy are you getting?
-
Thanks for the summary, @MrMarkusJ. For now, considering the available GPU memory, my suggestion is to use the segmentation app with the following changes:

1. Use a better intensity normalization transform along with a Gaussian smooth. Here is an example from the vertebra segmentation app: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/trainers/localization_spine.py#L80-L82
2. Start with a patch size of 96x96x96.
3. Use these sliding-window arguments for both inference and training: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/trainers/localization_spine.py#L117-L119

In general, you could follow the localization_spine segmentation model. Just update the spacing there to 1.0, update the label names, and give it a try. Hope this helps.
-
I wonder how to decide on a good strategy to train a new model from scratch for a MULTI-LABEL task.
Currently I am training a model for segmentation of a rodent skull (3 labels: skull, left mandible, right mandible).
The image size is ~ 500 x 500 x 700, and only 8 GB of GPU RAM are available. Further, ~ 15 manually annotated
volumes are used for the training. The training is done with the 'segmentation' app.
Unfortunately the accuracy is ~ 60%, which is far too low (even after 500 epochs).
Now I would like to improve this performance, but I am not sure where to start and which parameters/settings
to adapt.
Therefore it would be great if the MONAILabel team could point out the options one could try to
improve the accuracy, given the segmentation task and available data:
Starting point: image data
- In case labels are available: how to choose the most appropriate application based on this initial information?
- In case of deepedit vs. segmentation: which of these two methods results in higher accuracy?
- In case of large image dimensions: where to find information about setting parameters like `self.spatial_size`, and about the options available in the MONAI Label plugin of 3D Slicer (like "lambda")?

Such information would help the user to "tweak" the training process according to the actual segmentation task.
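On the large-image point above, a back-of-envelope memory check shows why patch-based training is essentially mandatory here. This is rough arithmetic under stated assumptions (float32 voxels, one activation copy), not a precise estimate of UNet memory use:

```python
# Back-of-envelope memory comparison: full 500x500x700 volume vs. a 96^3 patch.
# Assumes float32 (4 bytes/voxel) and counts a single copy of the data; a UNet
# keeps many feature-map copies per layer, multiplying these numbers further.
BYTES_PER_VOXEL = 4  # float32

def voxels(shape):
    n = 1
    for d in shape:
        n *= d
    return n

full_mb = voxels((500, 500, 700)) * BYTES_PER_VOXEL / 2**20   # ~668 MiB per copy
patch_mb = voxels((96, 96, 96)) * BYTES_PER_VOXEL / 2**20     # ~3.4 MiB per copy

print(f"full volume: {full_mb:.0f} MiB per copy, patch: {patch_mb:.1f} MiB per copy")
```

Since a typical UNet holds dozens of intermediate feature maps (often with many channels), the full volume overwhelms 8 GB of GPU RAM almost immediately, while 96x96x96 patches leave room for a reasonable batch size and channel count.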
Thanks for your valuable input.
Best,
Markus