
Conversation

maltekuehl (Contributor) commented Mar 18, 2025

PR Checklist

  • Referenced issue is linked
  • If you've fixed a bug or added code that should be tested, add tests!
  • Documentation in docs is updated

Description of changes

Adds a PermutationTest to the tools submodule, similar to TTest and WilcoxonTest.

Usage:

result = PermutationTest.compare_groups(
    pdata,
    column="group",
    baseline="A",
    groups_to_compare=["B"],
    test=pertpy.tools.WilcoxonTest, # optional, defaults to WilcoxonTest
    n_permutations=100, # optional, defaults to 100
    seed=42, # optional, defaults to 0
)

Technical details

  • There are currently no specific reference p-values for other permutation tests to compare against, and the standard Wilcoxon values from R deviate from the results of the PermutationTest. However, there is full agreement on which genes are significant, and I have adapted the test to check for this.
  • Needed to reimplement compare_groups to have the number of permutations and the test to use after permutation as explicit parameters and to parallelize processing.
  • Users need to provide the test they want to use after permutation themselves, if they don't want to use the standard WilcoxonTest.

Additional context

Part of the scverse x owkin hackathon.

Zethson (Member) commented Mar 18, 2025

Cool! Let me know when you want me or Gregor to have a look, please.

maltekuehl (Contributor, Author) commented Mar 18, 2025

@Zethson @grst based on the tests I ran locally, this version should now pass. However, there seems to be a docs issue unrelated to my code changes (see other recent PRs), because a dependency cannot be installed. So from my side, you can go ahead and take a look already and let me know if anything needs to be adapted.

Another idea I had: it would be interesting to be able to compare any values in obs and obsm as well. One use case: you have a spatial transcriptomics image for each sample within a group, for which you can calculate Moran's I at the sample level (for each gene or a single gene of interest). You may want to store this data not in its own pdata but rather in the metadata, so making compare_groups flexible enough to not be restricted to pdata.var_names for the variables being compared would be a nice addition.
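As a hedged sketch of that idea (the column names, data, and choice of Mann-Whitney U are all hypothetical, not pertpy API): a per-sample summary statistic such as Moran's I, stored in a metadata frame rather than in pdata.var_names, can already be compared between groups with plain scipy:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical per-sample metadata: one Moran's I value per sample,
# with samples split into two groups "A" and "B".
rng = np.random.default_rng(0)
obs = pd.DataFrame({
    "group": ["A"] * 10 + ["B"] * 10,
    "morans_i": np.concatenate([
        rng.normal(0.1, 0.05, 10),   # group A samples
        rng.normal(0.3, 0.05, 10),   # group B samples
    ]),
})

# Compare the metadata column between groups with a rank-based test,
# analogous to what a generalized compare_groups could do for obs columns.
baseline = obs.loc[obs["group"] == "A", "morans_i"]
comparison = obs.loc[obs["group"] == "B", "morans_i"]
res = stats.mannwhitneyu(comparison, baseline)
print(res.pvalue < 0.05)
```

A generalized compare_groups would essentially loop this over the requested obs columns instead of over pdata.var_names.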

codecov-commenter commented Mar 18, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 72.68%. Comparing base (28b8291) to head (52d2d58).

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #726      +/-   ##
==========================================
- Coverage   72.74%   72.68%   -0.06%     
==========================================
  Files          47       47              
  Lines        5510     5517       +7     
==========================================
+ Hits         4008     4010       +2     
- Misses       1502     1507       +5     
Files with missing lines Coverage Δ
pertpy/tools/__init__.py 77.77% <100.00%> (ø)
...py/tools/_differential_gene_expression/__init__.py 92.30% <100.00%> (-0.29%) ⬇️
...y/tools/_differential_gene_expression/_pydeseq2.py 91.89% <100.00%> (ø)
...ols/_differential_gene_expression/_simple_tests.py 97.77% <100.00%> (+0.21%) ⬆️

... and 2 files with indirect coverage changes


Zethson (Member) left a comment

Cool!

So name-wise, there's some overlap with https://pertpy.readthedocs.io/en/stable/usage/tools/pertpy.tools.DistanceTest.html#pertpy.tools.DistanceTest, which also kind of labels itself as a permutation test, but in a different way. We also have to resolve this naming clash: the DistanceTest is currently under the title "distances and permutation test", which I consider an issue. We should only have this label once, or be more specific.

fit_kwargs: Additional kwargs passed to the test function.
test_kwargs: Additional kwargs passed to the test function.
"""
if len(fit_kwargs):
Zethson (Member):

I don't get the point of this. Is it required? Can this be fixed upstream aka in the interface by making this optional to have?

maltekuehl (Contributor, Author):

This is inherited from the base class and not a new introduction of this PR. MethodsBase is also used for linear models which require the fit_kwargs. I will change the docstring to indicate that these are not used for the simple tests.

Zethson (Member) commented Mar 18, 2025

If you merge main into this, the RTD job will work again.

grst (Collaborator) commented Mar 18, 2025

@maltekuehl, could the parallelization you implemented be pushed up to the abstract base class? Then wilcoxon and ttest would also benefit from it. In that case, would it even be necessary to re-implement the interfaces specifically for the permutation test, or could you just use an extension class?

maltekuehl (Contributor, Author):

@grst good idea to push this upstream. The reason I had to recreate the compare_groups function was that I wanted to explicitly expose the seed, test and n_permutations parameters, as these are key to how the permutation test works and should, imo, not just be passed through test_kwargs without further documentation. We could, however, add these as unused parameters to the base class for the other classes, or we could move the functionality of compare_groups into a helper function and then just override the call to that helper with an update of test_kwargs. That would mean having essentially the same parameters for both the function and the helper, though, leading perhaps to unnecessary code duplication. What are your thoughts?

maltekuehl (Contributor, Author):

@Zethson what would you suggest naming-wise? The docs mention a pertpy.tools.PermutationTest, but this does not actually seem to be implemented in the code, hence I went with this name. The docs should be updated, but that relates to the distance functionality, which you are more familiar with. The docstring for DistanceTest also mentions permutation tests, but that could perhaps be rephrased to "Monte-Carlo simulation" or be specified in some other way. For now, however, I think that outside of the non-existent (or only now created) function in the docs, there is little potential for confusion.

@maltekuehl maltekuehl requested a review from Zethson March 18, 2025 16:34
…turning statistic from tests and fix bug where the permutation_test was not applied
elif permutation_test is None and cls.__name__ == "PermutationTest":
    logger.warning("No permutation test specified. Using WilcoxonTest as default.")

comparison_indices = [_get_idx(column, group_to_compare) for group_to_compare in groups_to_compare]
grst (Collaborator):

If I get this right, this implements parallelism only at the level of comparisons. This means that if there's only one comparison, there is no benefit from parallelization. I think it would be more beneficial to implement parallelism at the level of variables, i.e. in _compare_single_group at `for var in tqdm(self.adata.var_names):`.

grst (Collaborator):

The permutation_test also has a vectorized attribute which allows passing entire matrices to the test. This likely speeds up testing quite a bit when whole blocks of data are processed together. Maybe we even get implicit parallelism through the BLAS backend of numpy when doing so.

grst (Collaborator):

It probably requires a bit of testing to find out what's faster: per-variable parallelism, passing the entire array into stats.permutation_test(..., vectorized=True), or splitting the data into chunks and then passing them to permutation_test(..., vectorized=True).
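A minimal sketch of the vectorized option (synthetic data; with vectorized=True, scipy.stats.permutation_test requires the statistic to accept an `axis` keyword and applies it to whole blocks at once):

```python
import numpy as np
from scipy import stats

# Synthetic data: two groups, cells x genes.
rng = np.random.default_rng(42)
x0 = rng.normal(0.0, 1.0, size=(20, 5))  # baseline group
x1 = rng.normal(1.0, 1.0, size=(25, 5))  # comparison group

def mean_diff(a, b, axis):
    # Per-gene difference in means; the `axis` argument lets scipy
    # evaluate all genes (and all resamples) in one vectorized call,
    # so NumPy/BLAS does the heavy lifting.
    return np.mean(a, axis=axis) - np.mean(b, axis=axis)

res = stats.permutation_test(
    (x1, x0),
    mean_diff,
    vectorized=True,
    permutation_type="independent",
    n_resamples=199,
    axis=0,
)
print(res.pvalue.shape)  # one p-value per gene
```

Chunking would simply mean slicing the gene axis and calling this per chunk.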

def _test(x0: np.ndarray, x1: np.ndarray, paired: bool, return_attribute: str = "pvalue", **kwargs) -> float:
    if paired:
        return scipy.stats.wilcoxon(x0, x1, **kwargs).pvalue
    return scipy.stats.wilcoxon(x0, x1, **kwargs).__getattribute__(return_attribute)
grst (Collaborator):

I think there could be value in returning multiple attributes, not just the p-value. In particular for the permutation test, but also for the t-test, it would be nice to have the t-statistic alongside the p-value. I therefore suggest changing the function signature of _test to return a dictionary or named tuple instead.

This can then be included in https://github.com/scverse/pertpy/pull/726/files#diff-5892917e4e62a1165dda9ac148f802a12e3a95735a367b5e1bf771cb228fcd0dR86 as

res.append({"variable": var, "log_fc": np.log2(mean_x1) - np.log2(mean_x0), **test_result})

(with test_result being the dict returned by _test, renamed here so it doesn't shadow the res list)
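A sketch of the suggested signature change (names are illustrative, and dispatching the unpaired case to mannwhitneyu is an assumption here, not necessarily pertpy's implementation):

```python
import numpy as np
import scipy.stats

def _test(x0: np.ndarray, x1: np.ndarray, paired: bool, **kwargs) -> dict:
    # Return several attributes together instead of a single float,
    # so the test statistic travels alongside the p-value.
    test = scipy.stats.wilcoxon if paired else scipy.stats.mannwhitneyu
    res = test(x0, x1, **kwargs)
    return {"p_value": res.pvalue, "statistic": res.statistic}

rng = np.random.default_rng(0)
out = _test(rng.normal(size=30), rng.normal(1.0, size=30), paired=False)
print(sorted(out))  # ['p_value', 'statistic']
```

The caller can then splat the dict into its per-variable result row as suggested above.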

"The `test` argument cannot be `PermutationTest`. Use a base test like `WilcoxonTest` or `TTest`."
)

def call_test(data_baseline, data_comparison, axis: int | None = None, **kwargs):
grst (Collaborator):

Do you really need another test here? Essentially, the statistic we want to test is the fold change.

maltekuehl (Contributor, Author):

The advantage of permutation tests is their generalizability without (almost) any assumptions, as long as the test statistic is related to the null hypothesis we want to test. I would thus view the ability to use any statistic, either those already implemented in pertpy or any callable that accepts two NDArrays and **kwargs, as core functionality. With the latest update, this PR supports any statistic; e.g., it would be trivial to use a comparison of means, of medians, or any other function that you can implement in < 5 lines of numpy code with the PermutationTest. I opted for the Wilcoxon statistic as a default because the rank sum is fairly general and it's something that's already implemented in pertpy. Of course, we could also add an explicit collection of other statistics, but it could never cover all use cases, and defining the statistic should be part of the thought process when a user applies a permutation test, so I'm not convinced of the value and necessity of covering this as part of the library itself.

Zethson (Member):

@grst is this resolved?

maltekuehl (Contributor, Author):

We changed this to the t-statistic after an in-person discussion, as using the Wilcoxon statistic would just reproduce the Wilcoxon test. From what I understood in that discussion, the rationale for keeping this function was agreed on, but perhaps @grst can confirm again.

grst (Collaborator):

I still don't get why you would want to use a permutation test with a test statistic. If I'm interested in the difference in means, I would use the difference in means as the statistic.

In my understanding, the whole point of using the Wilcoxon or t-test is that one can compare against a theoretical distribution, avoiding permutations in the first place.

In any case, I would prefer passing a simple lambda over accepting another pertpy test class. This could be a simple

test_statistic: Callable[[np.ndarray, np.ndarray], float] = lambda x, y: np.log2(np.mean(x)) - np.log2(np.mean(y))

or, if you really want a t-statistic,

test_statistic = lambda x, y: scipy.stats.ttest_ind(x, y).statistic

Zethson (Member) left a comment

A few more minor requests, please.


@grst grst self-requested a review April 12, 2025 19:00
Zethson (Member) commented Apr 28, 2025

@grst @maltekuehl what's the status of this PR now?

grst (Collaborator) commented Apr 28, 2025

I still need to give it a proper review; I haven't had the time yet.

maltekuehl (Contributor, Author):

@Zethson from my side, all of your comments are addressed, but I'm waiting on the comments from @grst. I hadn't checked back because I'm on holiday, but I can make any necessary changes next week.


maltekuehl (Contributor, Author):

Had to see @grst in person to remember this PR; now all comments should have been addressed. I greatly simplified everything by focusing on callables only, not wrapping other simple tests. It's basically just a small wrapper around the scipy version now. I moved the extra arguments to the permutation test itself and out of the base class. The test failing on 3.13 --pre is unrelated to this PR and fails on main, too.
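A hedged sketch of what such a thin wrapper around scipy could look like (the function and parameter names here are made up for illustration; the merged PermutationTest may differ in naming and defaults):

```python
import numpy as np
from scipy import stats

def permutation_pvalue(x0, x1, statistic, n_permutations=100, seed=0):
    # Forward any two-sample statistic callable to scipy's permutation
    # test; since `statistic` takes no `axis` argument, scipy treats it
    # as non-vectorized and calls it once per resample.
    res = stats.permutation_test(
        (x1, x0),
        statistic,
        permutation_type="independent",
        n_resamples=n_permutations,
        random_state=seed,
    )
    return res.pvalue

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 50)
x1 = rng.normal(1.0, 1.0, 50)

# Difference in means as the statistic, as suggested in review.
p = permutation_pvalue(x0, x1, lambda a, b: np.mean(a) - np.mean(b))
print(p < 0.05)
```

Any callable taking two arrays works here, which is the "callables only" design the comment describes.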

@maltekuehl maltekuehl requested a review from grst September 11, 2025 18:36
grst (Collaborator) left a comment

Thanks, LGTM now!

I added one final code suggestion.

@maltekuehl maltekuehl requested a review from Zethson September 12, 2025 12:26
Zethson (Member) commented Oct 2, 2025

Looking this weekend. Sorry for the delay.

Zethson (Member) left a comment

Thank you very much! I'll make a few nitpick cosmetic (docstring) changes outside of this PR.

Generally, we could also feature this in one of our tutorials, but that could be a follow-up PR.

@Zethson Zethson merged commit ef25311 into scverse:main Oct 3, 2025
16 checks passed