Validation scores on DeepEdit inference results #937
-
Hello all, I want to first thank the team for being so helpful. It's not my first time asking a question here, and I learn something new every time I do. MONAILabel has been a pleasure to work with. I am trying to show some quantitative results for the experiments I have been running, for a comparison similar to the last 2 columns of the table on page 11 of the MONAI Label research paper (the validation Dice scores for DG 3D). I am using DeepEdit and I can't figure out the best way to measure the accuracy of the inference results numerically. How would you go about measuring the accuracy of inference results to fill out a comparison table for different scenarios? Thanks for reading!
Replies: 1 comment 1 reply
-
Hi @sabino-ramirez,
Thanks for asking.
I assume you have a pre-trained model and a validation set with ground truth labels.
One way to validate the model is to start MONAI Label and run batch inference on the validation set. Once you get the predictions (they should be in the labels/original folder), you can use an external script to compute a metric (e.g., the Dice score) between the predicted and ground-truth labels.
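As a minimal sketch of such an external script, the per-label Dice score can be computed directly with NumPy once both masks are loaded as arrays (in practice you would load the NIfTI files first, e.g. with nibabel, and you could also use `monai.metrics.DiceMetric` instead; the function name and toy arrays below are purely illustrative):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, label: int = 1) -> float:
    """Dice coefficient for one label between a predicted and a ground-truth mask."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    if denom == 0:
        return 1.0  # both masks empty for this label: treat as perfect agreement
    return 2.0 * np.logical_and(p, g).sum() / denom

# Toy 2x2 masks: one overlapping foreground voxel out of three total
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
print(round(dice_score(pred, gt), 3))  # → 0.667
```

Looping this over every prediction/ground-truth pair in the validation set and averaging per label gives numbers directly comparable to a validation Dice table.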
Here is a video showing how you could run batch inference - just make sure you write the correct model name you're validating (deepedit_seg or segmentation):
batch-inference-.mp4
Hope this helps