Identifying problematic frames in ground-truth user predictions #1516
Replies: 3 comments
-
The other issue with this sleap.info.metrics tool is in #1517, fwiw.
-
Hi @eringiglio,
I think this is something worth including. In our own lab we have also recently run into this after a large round of labeling by multiple different annotators. If you have a script/workflow that you already use, I'd be happy to collaborate on something to be rolled into main SLEAP. Thanks, |
Beta Was this translation helpful? Give feedback.
-
Well, it's been nine months, but I have a script running that has been very effective for me. Here's what I've got at the moment:
Please let me know if you'd still like to work together to merge it into the main SLEAP routine when you get back from Cosyne. It's been very handy!
-
Hi! My project is big enough that many of my user predictions (N = 3781 on the project I'm looking at--one of 16 total!) are generated by undergrads doing their best to assign points on a mouse from above. I have some reason to believe some of these user-predicted 'ground-truth' frames are of, shall we say, varying quality. I've been writing a script to identify the individual frames that my best-working models predict worst, which can then be turned into labeling suggestions that I can flip through quickly and double-check.
It'd be great to have an out-of-the-box way to do this, but I'm totally capable of writing my own (and am--love the commitment to command-line and Pythonic tools in this package generally!). What occurs to me is that this cannot be a problem specific only to me and my ridiculous monster project.
It's been a while since I collaborated on a GitHub project belonging to anyone else; is there interest in writing my script up so that it can be folded into the main program, or is this largely just worth it to me?
I know there is already a tool in sleap.info.metrics that should do a big piece of this: matching two sets of labels and finding the distance between each corresponding point. However, it seems to be missing a necessary function (I will submit a related issue about that).
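For anyone landing here, the core of the workflow described above (rank labeled frames by how badly the model's predictions disagree with the user labels, then re-inspect the worst ones) can be sketched with plain numpy. This is a minimal illustration, not SLEAP's actual API: the function name, the `(n_frames, n_nodes, 2)` array layout, and the assumption that ground-truth and predicted instances have already been matched frame-by-frame are all mine.

```python
import numpy as np

def rank_frames_by_error(gt, pred, top_k=10):
    """Return indices of the top_k frames with the largest mean
    per-point error, worst first, plus those error values.

    gt, pred: float arrays of shape (n_frames, n_nodes, 2) holding
    x/y coordinates; missing points should be NaN and are ignored.
    """
    # Euclidean distance per keypoint -> shape (n_frames, n_nodes)
    dists = np.linalg.norm(gt - pred, axis=-1)
    # Mean error per frame, skipping NaN (unlabeled) points
    frame_err = np.nanmean(dists, axis=-1)
    # Sort descending so the most suspect frames come first
    order = np.argsort(frame_err)[::-1]
    return order[:top_k], frame_err[order[:top_k]]
```

The returned frame indices could then be fed back in as labeling suggestions for manual review.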