This issue originated from a meeting between me, @agahkarakuzu, @matteomancini, and @stikov, which we're moving here so that anyone can get involved with the discussion.
The idea is to identify whether the data collected by the challenge lend themselves to some potentially interesting statistical analyses, and to discuss how best to implement these (in particular, using open-source tools). We could also identify statistical analyses that would be interesting but for which we don't yet have sufficient data, and leave those as an open challenge for people to collect more data.
I think we should start by describing the datasets we have at hand and the remaining corrections or post-processing steps that need to be done, and then explore statistical analysis ideas that are well suited to this dataset and don't overlap with similar studies (such as Banes et al. 2017, which used the NIST phantom across multiple sites but with much stricter protocol implementation rules than our current challenge, whose goal was instead to investigate the differences in, and robustness against, cross-site implementations). We also have some human datasets to compare with, which could also be explored (human<->human and/or NIST<->human).
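To make the discussion a bit more concrete, here is a minimal sketch of one way the cross-site variability of the phantom T1 estimates could be summarized with open-source tools. The file name (`t1_measurements.csv`) and column names (`site`, `sphere`, `t1_ms`) are hypothetical placeholders, not the actual layout of the challenge data:

```python
# Hedged sketch: summarize inter-site variability of phantom T1 estimates.
# Assumes a long-format CSV with one row per (site, sphere) measurement;
# the file and column names below are placeholders, not the challenge's format.
import pandas as pd

df = pd.read_csv("t1_measurements.csv")  # columns: site, sphere, t1_ms

# Per-sphere summary across sites: mean, standard deviation,
# and coefficient of variation (%) as a simple reproducibility metric.
summary = (
    df.groupby("sphere")["t1_ms"]
      .agg(mean="mean", std="std")
      .assign(cv_percent=lambda s: 100 * s["std"] / s["mean"])
)
print(summary)
```

A similar per-region summary could be computed for the human datasets, which would make the human<->human and NIST<->human comparisons straightforward to set up.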