Add permutation test #726
Conversation
…nt with both TTest and Wilcoxontest and add seed
Cool! Let me know when you want me or Gregor to have a look, please.
@Zethson @grst based on the tests I ran locally, this version should now pass. However, there seems to be a docs issue unrelated to my code changes (see other recent PRs): a dependency cannot be installed. So from my side, you can go ahead and take a look already, and let me know if anything needs to be adapted. Another idea I had was that it would be interesting to be able to compare any values in
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##             main     #726      +/-   ##
==========================================
- Coverage   72.74%   72.68%   -0.06%
==========================================
  Files          47       47
  Lines        5510     5517       +7
==========================================
+ Hits         4008     4010       +2
- Misses       1502     1507       +5
Zethson
left a comment
Cool!
So name-wise, there's some overlap with https://pertpy.readthedocs.io/en/stable/usage/tools/pertpy.tools.DistanceTest.html#pertpy.tools.DistanceTest, which also kind of labels itself as a permutation test, but in a different way. We also have to resolve this naming clash: the DistanceTest currently sits under the title "distances and permutation test", which I consider an issue. We should only use this label once, or be more specific.
fit_kwargs: Additional kwargs passed to the test function.
test_kwargs: Additional kwargs passed to the test function.
"""
if len(fit_kwargs):
I don't get the point of this. Is it required? Can this be fixed upstream, i.e. in the interface, by making this optional?
This is inherited from the base class and not newly introduced in this PR. MethodsBase is also used for the linear models, which require the fit_kwargs. I will change the docstring to indicate that these are not used for the simple tests.
If you merge main into this, the RTD job will work again.
@maltekuehl, could the parallelization you implemented be pushed up to the abstract base class? Then wilcoxon and ttest would also benefit from it. In that case, would it even be necessary to re-implement the interfaces specifically for the permutation test, or could you just use an extension class?
@grst good idea to push this upstream. The reason I had to recreate the
…tation arguments, passing others through kwargs
@Zethson what would you suggest naming-wise? The docs mention a
…turning statistic from tests and fix bug where the permutation_test was not applied
elif permutation_test is None and cls.__name__ == "PermutationTest":
    logger.warning("No permutation test specified. Using WilcoxonTest as default.")

comparison_indices = [_get_idx(column, group_to_compare) for group_to_compare in groups_to_compare]
If I get this right, this implements parallelism only at the level of comparisons. This means that if there's only one comparison, there would be no benefit from parallelization. I think it would be more beneficial to implement parallelism at the level of variables, i.e. in `_compare_single_group` at `for var in tqdm(self.adata.var_names):`.
The permutation_test also has a `vectorized` argument which allows passing entire matrices to the test. This likely also speeds up testing quite a bit when whole blocks of data are processed together. Maybe we even get implicit parallelism through numpy's BLAS backend when doing so.
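As a sketch of what `vectorized=True` buys, using scipy's public `stats.permutation_test` API with a hand-rolled mean-difference statistic (the data here is made up, not pertpy code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, size=50)
x1 = rng.normal(1.0, 1.0, size=50)

def mean_diff(a, b, axis=0):
    # With vectorized=True, scipy calls this on whole batches of resamples
    # along `axis` instead of once per permutation, so the inner loop runs
    # in numpy/BLAS rather than in Python.
    return np.mean(a, axis=axis) - np.mean(b, axis=axis)

res = stats.permutation_test(
    (x0, x1), mean_diff,
    permutation_type="independent",
    vectorized=True,
    n_resamples=999,
    random_state=0,
)
```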
It probably requires a bit of benchmarking to find out what's faster: per-variable parallelism, passing the entire array into `stats.permutation_test(..., vectorized=True)`, or splitting the data into chunks and then passing those to `permutation_test(..., vectorized=True)`.
def _test(x0: np.ndarray, x1: np.ndarray, paired: bool, return_attribute: str = "pvalue", **kwargs) -> float:
    if paired:
        return scipy.stats.wilcoxon(x0, x1, **kwargs).pvalue
    return scipy.stats.wilcoxon(x0, x1, **kwargs).__getattribute__(return_attribute)
I think there could be value in returning multiple attributes, not just the p-value. In particular for the permutation test, but also for the t-test, it would be nice to have the t-statistic alongside the p-value. I therefore suggest changing the signature of `_test` to return a dictionary or named tuple instead.
This can then be included in https://github.com/scverse/pertpy/pull/726/files#diff-5892917e4e62a1165dda9ac148f802a12e3a95735a367b5e1bf771cb228fcd0dR86 as
res.append({"variable": var, "log_fc": np.log2(mean_x1) - np.log2(mean_x0), **res})

"The `test` argument cannot be `PermutationTest`. Use a base test like `WilcoxonTest` or `TTest`."
)

def call_test(data_baseline, data_comparison, axis: int | None = None, **kwargs):
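The multi-attribute return suggested above could look like this sketch. It is illustrative only: the unpaired branch uses `mannwhitneyu`, and the dict shape is an assumption, not pertpy's actual `_test`:

```python
import numpy as np
from scipy import stats

def _test(x0: np.ndarray, x1: np.ndarray, paired: bool, **kwargs) -> dict:
    # Hypothetical variant: return both the p-value and the test statistic
    # so callers can merge them into per-variable result records.
    res = stats.wilcoxon(x0, x1, **kwargs) if paired else stats.mannwhitneyu(x0, x1, **kwargs)
    return {"pvalue": res.pvalue, "statistic": res.statistic}

rng = np.random.default_rng(0)
out = _test(rng.normal(size=20), rng.normal(size=20), paired=False)
```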
Do you really need another test here? Essentially, the statistic we want to test is the fold change.
The advantage of permutation tests is their generalizability with (almost) no assumptions, as long as the test statistic is related to the null hypothesis we want to test. I would thus view the ability to use any statistic, either one already implemented in pertpy or any callable that accepts two NDArrays and **kwargs, as core functionality. With the latest update, this PR supports any statistic: for example, it would be trivial to use a comparison of means, of medians, or any other function you can implement in < 5 lines of numpy code with the PermutationTest. I opted for the Wilcoxon statistic as a default because the rank sum is fairly general and already implemented in pertpy. Of course, we could also add an explicit collection of other statistics, but it could never cover all use cases, and defining the statistic should be part of the thought process when a user reaches for a permutation test. So I'm not convinced of the value or necessity of covering this as part of the library itself.
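For instance, a difference-of-medians statistic really is a couple of lines of numpy and plugs straight into scipy's `permutation_test` (a sketch with made-up sample data, not part of this PR):

```python
import numpy as np
from scipy import stats

def median_diff(x0, x1, axis=0):
    # Custom statistic: difference of medians, written vectorized so scipy
    # can evaluate whole batches of resamples at once.
    return np.median(x1, axis=axis) - np.median(x0, axis=axis)

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, size=40)
x1 = rng.normal(1.5, 1.0, size=40)
res = stats.permutation_test((x0, x1), median_diff,
                             vectorized=True, n_resamples=999, random_state=0)
```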
@grst is this resolved?
We changed this to the t-statistic after an in-person discussion, as the Wilcoxon statistic would just reproduce the Wilcoxon test. From what I understood from that discussion, the rationale for keeping this function was agreed to, but perhaps @grst can confirm again.
I still don't get why you would want to use a permutation test with a test statistic. If I'm interested in the difference in means, I would use the difference in means as the statistic.
In my understanding, the whole point of using the Wilcoxon or t-test is that one can compare against a theoretical distribution, avoiding permutations in the first place.
In any case, I would prefer passing a simple lambda over accepting another pertpy test class. This could be as simple as
test_statistic: Callable[[np.ndarray, np.ndarray], float] = lambda x, y: np.log2(np.mean(x)) - np.log2(np.mean(y))
or, if you really want to use a t-statistic,
test_statistic = lambda x, y: scipy.stats.ttest_ind(x, y).statistic
Zethson
left a comment
A few more minor requests, please.
@grst @maltekuehl what's the status of this PR now?
I still need to give it a proper review; I didn't have the time yet.
Had to see @grst in person to remember this PR; now all comments should have been addressed. I greatly simplified everything by focusing on callables only, not wrapping other simple tests. It's basically just a small wrapper around the scipy version now. I moved the extra arguments to the permutation test itself and out of the base class. The test failing on 3.13 --pre is unrelated to this PR and fails on main, too.
grst
left a comment
Thanks, LGTM now!
I added one final code suggestion.
Co-authored-by: Gregor Sturm <[email protected]>
Set default value for test_statistic parameter.
Looking this weekend. Sorry for the delay.
Zethson
left a comment
Thank you very much! I'll make a few nitpicky cosmetic (docstring) changes outside of this PR.
Generally, we could also feature this in one of our tutorials, but that could be a follow-up PR.
PR Checklist
- docs is updated

Description of changes
Adds a PermutationTest to the tools submodule, similar to TTest and WilcoxonTest.
Usage:

Technical details
- compare_groups now has the number of permutations and the test to use after permutation as explicit parameters, and parallelizes processing.
- The default test after permutation is WilcoxonTest.

Additional context
Part of the scverse x owkin hackathon.