16 - Using a pre-trained denoiser in pytorch in the CIL FISTA algorithm #24

Merged
merged 10 commits into main
Jun 4, 2025

Conversation

MargaretDuff
Member

@MargaretDuff MargaretDuff commented Apr 10, 2025

Describe your contribution

This notebook covers:

  • How to set up an environment with CIL, PyTorch, and DeepInverse
  • How to create a CIL function to wrap a PyTorch function or operator (see the sketch after this list)
  • Examples of image reconstruction using a pre-trained denoiser in CIL
  • Timings of the data copies between PyTorch and CIL
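
As an illustration of the wrapping step, here is a minimal sketch (not the notebook's exact code) of a CIL Function whose proximal map defers to a pre-trained PyTorch denoiser; the denoiser call signature denoiser(x, tau) and the 2D image shape are assumptions:

    import numpy as np
    import torch
    from cil.optimisation.functions import Function

    class TorchDenoiserFunction(Function):
        """Plug-and-play 'regulariser' whose proximal map is a PyTorch denoiser."""

        def __init__(self, denoiser, device="cpu"):
            super().__init__()
            self.denoiser = denoiser.to(device)
            self.device = device

        def __call__(self, x):
            # The implicit prior has no closed-form value; return 0 so FISTA
            # can still report an objective.
            return 0.0

        def proximal(self, x, tau, out=None):
            # CIL DataContainer -> numpy -> torch, adding batch and channel axes
            arr = x.as_array().astype(np.float32)
            x_torch = torch.from_numpy(arr)[None, None].to(self.device)
            with torch.no_grad():
                y_torch = self.denoiser(x_torch, tau)
            # torch -> numpy -> CIL DataContainer
            result = x.clone() if out is None else out
            result.fill(y_torch.squeeze().cpu().numpy())
            return result

FISTA can then be run with something like FISTA(f=data_fidelity, g=TorchDenoiserFunction(denoiser), initial=x0), in place of a conventional regulariser.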

Checklist when you are ready to request a review

  • I have performed a self-review of my code
  • I have created a new folder, containing my contributions which are in the form of jupyter notebooks, with any necessary supporting python files, and a LICENSE file.
  • I have added a description of my contribution(s) to the top of my file(s)
  • If publicly available, I have added a link to the dataset used near the top of my file(s)
  • I have added the CIL version I ran with near the top of my file(s)
  • The content of this Pull Request (the Contribution) is intentionally submitted for inclusion in CIL-User-Showcase.
  • I confirm that the contribution does not violate any intellectual property rights of third parties.
  • I confirm that I have added license headers to all of the files I am contributing (with a license of my choice)
  • Change pull request label to 'Waiting for review'

Note: for an example of a contribution where a license header, description, data link and CIL version have been added, please
see: example_contribution

@MargaretDuff MargaretDuff marked this pull request as ready for review May 7, 2025 15:35
@jakobsj jakobsj left a comment

Looks really amazing. Here are my comments.

Just under the heading "FISTA with the proximal step replaced by a learned denoiser", the first sentence, "When we use FISTA to ...", is very long, and something seems broken or missing around "with g was a regularization".
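
Presumably the intended statement is the standard plug-and-play replacement (my sketch of the usual formulation, not the notebook's wording): in FISTA for $\min_x f(x) + g(x)$, with $f$ the data fidelity and $g$ a regulariser, the proximal step

$$x_{k+1} = \operatorname{prox}_{\tau g}\big(y_k - \tau\,\nabla f(y_k)\big)$$

is replaced by a learned denoiser $D$, giving $x_{k+1} = D\big(y_k - \tau\,\nabla f(y_k)\big)$.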

On the "pip installing" suggest "pip installing pytorch into an environment that already contains CIL."

A bit further down, the term "denoiser model" is used. I don't think "model" is the best choice, since "model" already means many different things, including in "forward model" and "linear model", and some inverse-problems communities (geoscience, for example) use "model" as the name for the "solution". How about just "denoiser" or "denoiser method"?

I think it would be illustrative to demonstrate the effect of the chosen denoiser on an image, as it comes, before applying it as part of the iterative FISTA method, to convey what the denoiser "does" on its own.

Under "too small regularization" parameter typos "postentially" and "converegent".

The part on the cost of copying is potentially really great. I do not understand though "copying 0 to 20 times in each proximal operation", same next line "0 to 100", can you elaborate/rephrase to be clearer?

Next part about the 3D, it says we apply the 2D denoiser 3 times: once in the horizontal plane, once in the vertical plane. I don't get how that makes 3 times? Should it be once horizontal and once in each of the two vertical planes?

In the code after that, I only see the torch thing done one time, between permuting, but not three times? And some code is commented, should it not be or be omitted?

On the runtime reported at the end 5min, is it possible to report also the TV runtime (and FBP perhaps) to have a better reference of whether 5min is fast or slow?

@MargaretDuff
Member Author

Thanks @jakobsj - I have made those changes

> The part on the cost of copying is potentially really great. I do not understand though "copying 0 to 20 times in each proximal operation", same next line "0 to 100", can you elaborate/rephrase to be clearer?

I have tried to rephrase/elaborate on this - does it make any more sense? I am also not sure how to comment on the time calculated for the copying - it probably doesn't make sense as an absolute value, but I am not sure what to compare it to?
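
For reference, the kind of round trip being timed can be sketched like this (not the notebook's code; the array size and the use of a plain numpy array as a stand-in for the CIL DataContainer are assumptions):

    import time
    import numpy as np
    import torch

    arr = np.random.rand(512, 512).astype(np.float32)  # stand-in for a CIL DataContainer's array

    t0 = time.perf_counter()
    for _ in range(100):
        x_torch = torch.from_numpy(arr).clone()  # numpy/CIL -> torch copy
        arr_back = x_torch.numpy().copy()        # torch -> numpy/CIL copy
    t1 = time.perf_counter()
    print(f"average round-trip copy time: {(t1 - t0) / 100 * 1e3:.3f} ms")
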
> Next part about the 3D, it says we apply the 2D denoiser 3 times: once in the horizontal plane, once in the vertical plane. I don't get how that makes 3 times? Should it be once horizontal and once in each of the two vertical planes?

Thanks for spotting this typo - I was originally doing it 3 times, once in the horizontal plane and once in each of the vertical planes, but then convinced myself I only needed to do it twice, in one vertical and one horizontal plane, and that I should get the third for free. I could be convinced otherwise! I have changed the sentence to "In the next section, we instead create a hacky solution by applying the denoiser 2 times, once in the horizontal and once in the vertical plane, for each call of the proximal."

> In the code after that, I only see the torch thing done one time, between permuting, but not three times? And some code is commented, should it not be or be omitted?

It is applied twice in the current code. The commented code was for when I applied it 3 times, but I have deleted that now:

    x_torch = self.denoiser(x_torch, tau)    # denoiser applied once
    x_torch = x_torch.permute(2, 1, 0, 3)    # permute
    x_torch = self.denoiser(x_torch, tau)    # denoiser applied a second time
    x_torch = x_torch.permute(2, 1, 0, 3)    # permute back

> On the runtime reported at the end 5min, is it possible to report also the TV runtime (and FBP perhaps) to have a better reference of whether 5min is fast or slow?

The TV run took about 20 seconds, and I have added this.

@MargaretDuff
Member Author

@leftaroundabout - I would also appreciate any thoughts or comments you have on the showcase!

@MargaretDuff MargaretDuff self-assigned this May 15, 2025
@leftaroundabout

I would say it is a good proof-of-concept display, with well-written documentation. Making the plots a bit smaller, and perhaps also removing some, would make it rather more readable as a whole, though.

I have not tried running the notebook in the current form, nor had much time to think about the technical details again.

Regarding the matter of denoising 2 or 3 times: I certainly agree that 2 is enough to ensure all directions get some regularity, but I wonder whether this might introduce some anisotropic bias. One axis is processed twice as much as the others in this scheme, and diagonal features in the YZ plane are only seen by the denoiser at $\tfrac{\sqrt2}2$ their actual frequency.
The proper thing to do would of course be to use a dedicated 3D denoiser.

@MargaretDuff
Member Author

Thanks @leftaroundabout for your comments! I have shrunk some of the plots, so they are on one line and less overwhelming! I also went back to applying the denoiser in all 3 dimensions and made it clear that the proper thing to do would be to use a 3D denoiser!
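
For anyone reading along, a hypothetical sketch of applying the 2D denoiser along all three axes, assuming the volume is held as a (slices, channels, height, width) tensor as in the earlier snippet (the exact permutation indices are illustrative, not the notebook's code):

    x_torch = self.denoiser(x_torch, tau)    # denoise slices in the (H, W) plane: shape (D, 1, H, W)
    x_torch = x_torch.permute(2, 1, 0, 3)    # move H to the slice axis: shape (H, 1, D, W)
    x_torch = self.denoiser(x_torch, tau)    # denoise slices in the (D, W) plane
    x_torch = x_torch.permute(3, 1, 2, 0)    # move W to the slice axis: shape (W, 1, D, H)
    x_torch = self.denoiser(x_torch, tau)    # denoise slices in the (D, H) plane
    x_torch = x_torch.permute(2, 1, 3, 0)    # permute back to the original (D, 1, H, W) order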

@jakobsj - if you are happy with the changes, we can merge

@MargaretDuff MargaretDuff requested a review from jakobsj May 29, 2025 10:09
@github-project-automation github-project-automation bot moved this to Todo in CIL work May 29, 2025
@MargaretDuff MargaretDuff moved this from Todo to Blocked in CIL work May 29, 2025
@jakobsj jakobsj left a comment

Very happy indeed, what a brilliant notebook! In particular great to see the effect of applying the denoiser in all three dimensions. Thanks very much @leftaroundabout and @MargaretDuff!

@MargaretDuff MargaretDuff merged commit 0b75c46 into main Jun 4, 2025
@github-project-automation github-project-automation bot moved this from Blocked to Done in CIL work Jun 4, 2025