Conversation
* Use default Context(). I guess there's no particular reason to use the inline job executor?
* Consistent spaces, flake8
From what I understood, users are supposed to call `from_parameters()` and not use `preprocess_geometry()` directly, right? If not, what would be a use case for calling it directly? Since the signature is very similar to `from_parameters()`, IMO it's a good idea to mark this as internal (as sketched below) to avoid confusion and keep the public API compact.
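For illustration, a minimal sketch of the usual underscore convention (the function names are from this PR; the parameters and bodies are hypothetical):

```python
# Public entry point: this is what users call.
def from_parameters(scan_step, semiconv_angle):  # hypothetical parameters
    geometry = _preprocess_geometry(scan_step, semiconv_angle)
    ...

# The leading underscore marks the helper as internal: it drops out of
# `from module import *`, and signals that the signature may change freely.
def _preprocess_geometry(scan_step, semiconv_angle):
    ...
```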
This should give coverage for the JITed functions. Performance impact seems minimal for the current tests, so just running the whole lot. If necessary, one can later work with `pytest.mark` like in LiberTEM (see the sketch after this list):
* https://github.com/LiberTEM/LiberTEM/blob/60eb353e1ae68e3b664fad41242d8bcc5f40e375/pytest.ini#L8
* https://github.com/LiberTEM/LiberTEM/blob/60eb353e1ae68e3b664fad41242d8bcc5f40e375/tox.ini#L56-L60
* e.g. https://github.com/LiberTEM/LiberTEM/blob/60eb353e1ae68e3b664fad41242d8bcc5f40e375/tests/common/test_numba.py#L18-L22
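For later reference, such a marker could look roughly like this (a sketch; the marker name `numba` is an assumption here, not necessarily what LiberTEM uses):

```python
import pytest

# Register the marker in pytest.ini so pytest doesn't warn about it:
#   [pytest]
#   markers =
#       numba: tests that exercise JIT-compiled functions
@pytest.mark.numba
def test_jitted_kernel():
    # Run only these tests: pytest -m numba tests/
    # Skip them:            pytest -m "not numba" tests/
    ...
```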
Since `from_parameters()` is the designated entry point and this is relevant for users, this should probably be documented here? FIXME: set up proper documentation at some point, probably.
Helps catch mix-ups between x and y. This introduces a test failure just barely above the threshold. Is this numerical instability or a real difference?
Now that it is described below, I guess this is no longer needed? It should also be easy enough to find when looking into the source code. This makes it easier to change the implementation without breaking references in the documentation.
Thanks @uellue! There was indeed a very subtle bug with flipped/transposed detector orientations and the implicit fftshift centering for even grids, which this expanded set of tests caught!
Indeed, yes -- I can upload this somewhere more permanent like Zenodo tomorrow. Since it's very low dose with binary counts it's actually quite small (~80 MB), so it could conceivably also live in the repository. I will update the README / add a tutorial once I do that, and then we can merge this.
Tiny dataset uploaded to Zenodo here: https://doi.org/10.5281/zenodo.18346853 Once you've gotten a chance to test, @uellue, we can merge this and deploy?
Gives a nicer user experience :-)
Happy to hear that it helped to be a bit pedantic! Also stuff like the 1/2 px offset from the implicit fftshift centering on even grids. From my side this looks good! Before release, maybe you can have a discussion with @sk1p about how to set up a release pipeline? Not sure how far you are with this already. This is super helpful for maintenance, since releases can be minted with minimal effort.
...a godsend if upstream changes and breaks the package. A point release with a quick fix or version pin is a matter of minutes with such a pipeline.
Sure, yeah -- happy to discuss. I suspect the only required change would be to add a PyPI token and point `uv publish` at PyPI instead of TestPyPI:

```diff
diff --git a/.github/workflows/deploy-to-pypi.yml b/.github/workflows/deploy-to-pypi.yml
index 8ab24da..c126980 100644
--- a/.github/workflows/deploy-to-pypi.yml
+++ b/.github/workflows/deploy-to-pypi.yml
@@ -10,7 +10,7 @@ jobs:
     name: deploy
     runs-on: ubuntu-latest
     env:
-      UV_PUBLISH_TOKEN: ${{ secrets.TEST_PYPI_TOKEN }}
+      UV_PUBLISH_TOKEN: ${{ secrets.LT_PYPI_TOKEN }}
     steps:
       - uses: actions/checkout@v6
@@ -27,4 +27,4 @@ jobs:
       run: uv build
     - name: Publish to TestPyPI
-      run: uv publish --publish-url https://test.pypi.org/legacy/
+      run: uv publish
```
Looks reasonable. The token is currently per-repo, but I've added one under that name. I would also change the PyPI workflow to only run on tags; I'll link the example from another project. These lines make sure the workflow is also running on pushes to tags (see the sketch below). I think in the case of this repo, only the tag trigger is needed. In the case of LiberTEM-rs, the "release" job is integrated into the main CI workflow, so it needs to be skipped on anything that is not a tag run; as the deploy workflow is separate here, I think we don't need that (just for completeness). Optionally, I would recommend uploading the release assets to the GitHub release as well. With this setup, the release procedure (which needs a short mention in the docs) boils down to pushing a version tag.
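Roughly like this, i.e. restricting the trigger in `deploy-to-pypi.yml` (a sketch; the `v*` tag pattern is an assumption):

```yaml
# Only run the deploy workflow when a version tag is pushed:
on:
  push:
    tags:
      - 'v*'
```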
I'll be out of office next week, but do let me know if you need any support. BTW: the docs might also need a paragraph about the dev/main branches.
Hi @gvarnavi, looking good! Just a few small things, most of which I could address myself in this PR as suggestions.
Closes #1
* Could you make `../data/apoF_4mrad_1.5um-df_3A-step_30eA2_binary.npy` available for download somehow, use another publicly available file, or simulate data on the fly e.g. with abTEM, so that the example in the README can easily be tried out? If I understood correctly, the method requires specific imaging conditions and matching parameters? We can also include such a file in https://libertem.github.io/LiberTEM/sample_datasets.html
* How do you run the tests with `uv` this way? I guess `uv run pytest tests/` is the way? I've added that to CONTRIBUTORS.md, please adjust if necessary!
* I've extended `test_utils_against_quantem.py` to catch x/y mixups (hopefully)
* Is the comparison done in `float64`, or does Torch default to `float32` (see the sketch below)? If it's a confirmed numerical stability issue, one can also just increase the threshold a bit, I guess?
* `test_suppress_nyquist_frequency()` and `test_kernel_weight_conservation()` test with an (8, 8) shape. Is it possible to test with uneven shapes like in `test_utils_against_quantem.py`?
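Regarding the float question, a quick sketch for checking: fresh `torch` tensors default to `float32`, while NumPy defaults to `float64`, which can put a comparison just over a tight threshold.

```python
import numpy as np
import torch

print(torch.get_default_dtype())  # torch.float32 by default
print(np.array([1.0]).dtype)      # float64 by default

# To rule out precision as the cause of a just-over-threshold failure,
# force double precision on the Torch side for the comparison:
torch.set_default_dtype(torch.float64)
print(torch.tensor([1.0]).dtype)  # torch.float64
```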