Replies: 1 comment 2 replies
As far as I know, MONAI Label performs evaluation on the validation set (or an automatically partitioned split) during training, and the active learning / scoring strategies such as epistemic scoring assign an uncertainty measure to each unlabeled image. Beyond these, there is no further "simulated annotator"; maybe @diazandr3s can help more here regarding DeepEdit. Thank you for the discussion.
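For context, the epistemic scoring mentioned above typically follows the Monte Carlo dropout idea: several stochastic forward passes are run and the disagreement between them is used as the uncertainty score. A rough sketch of that idea (illustrative only, not MONAI Label's actual scoring code; the model and score names are placeholders):

```python
import torch


def epistemic_score(model, image, num_samples=10):
    """Score one unlabeled image: run several dropout-enabled forward passes
    and use the mean voxel-wise variance as an epistemic uncertainty measure."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        preds = torch.stack(
            [torch.softmax(model(image), dim=1) for _ in range(num_samples)]
        )
    variance = preds.var(dim=0)    # per-voxel disagreement between passes
    return variance.mean().item()  # single score used to rank unlabeled images
```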
Hi,
thanks for this wonderful project! I was wondering whether it is possible to automatically evaluate trained models by simulating the user during the testing stage. Most works on interactive segmentation start from a random initialization of clicks and then iteratively add clicks at the center of the largest connected error component, similar to the training handlers in MONAI Label.
Is there a tutorial or a piece of code that I may have missed for doing this during evaluation? I would like to see how a trained model, e.g. DeepEdit, performs on the whole dataset, for which it would be impractical to conduct a user study with real annotators. A simulated annotator would also give a good indication of how performance increases with the number of interactions (i.e. IoU@NoC).
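Roughly, I picture a loop like the sketch below (purely illustrative: `infer_fn` stands in for whatever inference call the trained model exposes, and none of these helper names are actual MONAI Label APIs):

```python
import numpy as np
from scipy import ndimage


def next_click(pred, gt):
    """Place the next simulated click at the center of the largest
    connected component of the error map (pred XOR gt)."""
    error = pred.astype(bool) ^ gt.astype(bool)
    labels, n = ndimage.label(error)
    if n == 0:
        return None, None  # prediction already matches the ground truth
    sizes = ndimage.sum(error, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    center = tuple(int(round(c)) for c in ndimage.center_of_mass(error, labels, largest))
    # positive (foreground) click if the model missed this region, negative otherwise
    return center, bool(gt[center])


def simulate_annotation(infer_fn, image, gt, max_clicks=10):
    """Run the interactive loop and record Dice after each prediction."""
    fg, bg, dice_per_click = [], [], []
    for _ in range(max_clicks):
        pred = infer_fn(image, fg, bg)  # assumed to return a binary mask
        inter = np.logical_and(pred, gt).sum()
        dice_per_click.append(2.0 * inter / (pred.sum() + gt.sum() + 1e-8))
        click, is_positive = next_click(pred, gt)
        if click is None:
            break
        (fg if is_positive else bg).append(click)
    return dice_per_click
```

Averaging `dice_per_click` (or IoU) over the test set would then give exactly the accuracy-vs-number-of-clicks curve I am after.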
Thanks very much in advance and keep up the great work!
Best,
Zdravko