The problem with the accuracy of training from scratch #922
-
My version: Docker image projectmonai/monailabel:0.4.2; 3D Slicer version: 5.0.3. I followed the suggestions here to add my own segmentation model. Since my data consists of PET-CT images, which the current pre-trained models do not cover, I plan to train a new model myself. The modified code currently runs normally in 3D Slicer. To try out the model's performance, I prepared 5 labeled datasets and 1 unlabeled dataset: 4 for training and 1 for validation. After repeating the training 3 times, I get an accuracy of about 49%, but the results displayed at inference time look much worse than that roughly-50% figure would suggest.
Thanks in advance. Server Execution: here are the datasets, code, and training results used.
Replies: 3 comments 3 replies
-
Hi @Minxiangliu,

Thanks for the detailed discussion. I've checked the files you attached, and one of the main issues is that not all the HNC volumes have the same number of labels; some of them have only one label. In the most recent MONAI Label version, we've proposed a transform to deal with these cases: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/trainers/segmentation.py#L77

The second issue I saw is that in the segmentation model there is no need to declare a background label. It is only needed for the DeepEdit model, because DeepEdit simulates clicks for the background as well.

Another issue is that you weren't using the most recent MONAI Label version. Please fetch the latest Docker image.

Here you can find a model I've trained on the dataset you attached: https://drive.google.com/file/d/19JQNDx57RVqg9qqI6xOPw7ZzqWs-Agep/view?usp=sharing

And here is a video showing how that works: hnc_project.mp4

Hope that helps,
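For readers hitting the same "not every volume has every label" problem, the core idea of such a transform can be sketched in a few lines. This is a minimal, numpy-only illustration of the concept, not MONAI Label's actual transform; the function name and mapping dict are assumptions:

```python
import numpy as np

def remap_labels(label_volume, value_to_index):
    """Remap the raw voxel values found in one label volume onto a fixed,
    contiguous class-index space shared by all volumes.

    value_to_index: dict of {original voxel value: canonical class index},
                    e.g. {3: 1, 5: 2} with 0 reserved for background.
    A volume that contains only a subset of the classes still maps
    correctly; the absent classes simply contribute no voxels.
    """
    out = np.zeros_like(label_volume)
    for orig_val, class_idx in value_to_index.items():
        out[label_volume == orig_val] = class_idx
    return out
```

Applying a mapping like this consistently at training time keeps the channel ordering of the network's output stable even when some volumes are missing classes.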
-
Hi @diazandr3s,

Thank you for your help. First, let me clarify why I didn't use the new Docker image: the latest version raised an error a few days ago, and the error occurs after executing the … I have modified the …

Could the cause be a wrong inference setting, or is the number of training samples simply too small, leading to overfitting?

Training result: … Results file: https://drive.google.com/file/d/1__5twHLRt99bNxWBU-PGfzQywAx_hF4P/view?usp=sharing
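One quick way to separate "inference is misconfigured" from "the model overfits" is to compute per-class Dice on the held-out volume directly, outside the Slicer UI. A small numpy sketch (the function name is illustrative; MONAI's `DiceMetric` does the same job in production):

```python
import numpy as np

def dice_score(pred, gt, num_classes):
    """Per-class Dice between integer prediction and ground-truth masks.

    If Dice is high on training volumes but near zero on the validation
    volume, the problem is overfitting; if it is low everywhere despite
    a good training accuracy curve, suspect the inference pipeline.
    """
    scores = {}
    for c in range(1, num_classes):  # class 0 = background, skipped
        p = pred == c
        g = gt == c
        denom = p.sum() + g.sum()
        scores[c] = 2.0 * np.logical_and(p, g).sum() / denom if denom else float("nan")
    return scores
```

With only 4 training volumes, a large train/validation Dice gap would strongly point to overfitting rather than an inference bug.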
-
Hi @diazandr3s,

Final code in lib/infers:

Compared to the previous results, this is indeed somewhat more normal.
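The attached final code is not shown above, but for context, the usual last step of a multi-class segmentation inferer is a channel-wise softmax followed by argmax over the class channel. A numpy-only sketch of that post-processing (the shapes and function name are illustrative, not the poster's actual `lib/infers` code):

```python
import numpy as np

def logits_to_mask(logits):
    """Convert raw network output of shape (C, H, W[, D]) into an
    integer mask holding the winning class index per voxel."""
    # numerically stabilized softmax over the channel axis
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)
    return probs.argmax(axis=0)
```

In MONAI terms this corresponds to the `Activationsd(softmax=True)` plus `AsDiscreted(argmax=True)` post-transforms commonly used in MONAI Label inferers.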