Labels: Enhancement (New feature or request)
Description
Set up within- and between-subjects statistical tests, along with the correction for multiple comparisons.
Detailed Description
First, check the consistency of the effects at the trial/event (within-subject) level; second, fit the group-level (between-subject) model, followed by a permutation-based correction for multiple comparisons, to quantify the strength of the effects observed in the empirical SDI compared against the surrogate SDIs, both within and across subjects.
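A minimal sketch of the within-subject step on simulated data, using SciPy's `wilcoxon` (which, per the checklist below, might later be reimplemented natively); the data shapes and effect size here are purely hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical shapes for one subject at one brain region:
# empirical SDI per trial, and n_surr surrogate SDIs per trial.
n_trials, n_surr = 60, 100
surr_sdi = rng.normal(0.0, 1.0, size=(n_trials, n_surr))
emp_sdi = rng.normal(0.8, 1.0, size=n_trials)  # simulated effect

# Compare each trial's empirical SDI against the mean of its surrogate
# distribution, then run a one-sample Wilcoxon signed-rank test on the
# paired differences across trials to check trial-level consistency.
diffs = emp_sdi - surr_sdi.mean(axis=1)
w_stat, p_value = stats.wilcoxon(diffs)
print(f"W={w_stat:.1f}, p={p_value:.3g}")
```

With a consistent trial-level effect, the signed-rank test rejects the null, which is the precondition for carrying the subject forward to the group-level model.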
Context / Motivation
The trial/event-level SDI needs to be checked for consistency before the group-level analysis; this hierarchical statistical model handles that.
Possible Implementation
Set up the code for:
- Build a distribution from the empirical and surrogate SDIs using the signed-rank test
- Perform a one-sample t-test to test the SDI in the trials/events against the null
- Perform mass univariate tests across subjects, and subsequently correct for multiple comparisons using permutations
- Reimplement SciPy's signed-rank test to include it natively in NiGSP
- Run tests and reproduce the original results; checks out okay?
- Reimplement MNE's ttest_1samp_no_p for native inclusion
- Run tests and reproduce the original results; checks out okay?
- Figure out whether reimplementing MNE-Python's permutation_t_test is a viable option; EDIT: it has many local imports, and making it all native would make the code convoluted and hard to read and follow, so it is better to keep MNE as an optional requirement for stats
- Final tests pass?
- Moderate and quick refactoring of the code
- Polish the docstrings
- Run and test locally
- Automated tests
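A rough NumPy-only sketch of what the group-level step could look like: a p-value-free one-sample t statistic and a sign-flip max-T permutation correction. The function names are borrowed from MNE for familiarity, but this is a simplified illustration under assumed data shapes, not MNE's implementation:

```python
import numpy as np

def ttest_1samp_no_p(x, axis=0):
    # One-sample t statistic without a p-value, mirroring the role of
    # MNE's ttest_1samp_no_p in this pipeline (simplified sketch).
    n = x.shape[axis]
    return x.mean(axis) / (x.std(axis, ddof=1) / np.sqrt(n))

def permutation_t_test(x, n_perm=1000, seed=0):
    # Mass univariate one-sample t-tests with a sign-flip max-T
    # permutation correction for multiple comparisons (simplified
    # sketch, not MNE's implementation).
    rng = np.random.default_rng(seed)
    t_obs = ttest_1samp_no_p(x)
    max_t = np.empty(n_perm)
    for i in range(n_perm):
        # Randomly flip the sign of each subject's whole map.
        signs = rng.choice([-1.0, 1.0], size=(x.shape[0], 1))
        max_t[i] = np.abs(ttest_1samp_no_p(signs * x)).max()
    # Corrected p-value per region: fraction of permutations whose
    # maximum |t| reaches or exceeds the observed |t|.
    p_corr = (np.abs(t_obs)[None, :] <= max_t[:, None]).mean(axis=0)
    return t_obs, p_corr

# Hypothetical data: per-subject SDI effects (n_subjects x n_regions),
# with a real effect injected in the first region only.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(20, 50))
x[:, 0] += 1.5
t_obs, p_corr = permutation_t_test(x)
print(p_corr[0], p_corr[1:].min())
```

Taking the maximum |t| over all regions in each permutation is what gives family-wise error control, which is why keeping MNE (or a routine like this) in the loop matters for the between-subjects stage.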