Backend/pytorch arrays 2 #1679
Draft
leftaroundabout wants to merge 52 commits into odlgroup:master from leftaroundabout:backend/pytorch-arrays-2
Conversation
No changes made yet.
Only superficially tested so far.
Results match the NumPy version, though not exactly: this implementation uses a compact kernel with direct convolution, whereas the NumPy version uses FFT convolution.
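For illustration (sizes and kernel made up here, not taken from the tests), direct and FFT-based convolution of the same data agree only up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
k = np.array([1.0, 2.0, 1.0])

# Direct convolution:
direct = np.convolve(x, k, mode='full')

# The same linear convolution via zero-padded FFTs:
n = len(x) + len(k) - 1
fft_based = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)

# Agreement only up to floating-point error:
print(np.max(np.abs(direct - fft_based)))  # small, but not exactly 0
```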
It is debatable whether this is what `asarray` is there for. Perhaps it would be better to simply expose `.data` for this purpose and keep `.asarray` NumPy-specific. However, since `__array__` is already there, it seems to fulfill that purpose as well.
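As a minimal sketch of the distinction, assuming a simple wrapper class (illustrative, not ODL's actual implementation): `.data` exposes the backend tensor directly, while `__array__` covers the NumPy-conversion role:

```python
import numpy as np
import torch

class TorchTensorElement:
    """Illustrative wrapper, not ODL's actual class."""

    def __init__(self, data):
        self.data = torch.as_tensor(data)   # backend-native storage

    def __array__(self, dtype=None):
        # Called by np.asarray(elem), np.array(elem), etc.
        arr = self.data.cpu().numpy()
        return arr if dtype is None else arr.astype(dtype)

elem = TorchTensorElement([1.0, 2.0, 3.0])
print(np.asarray(elem))   # NumPy conversion via __array__
print(elem.data)          # direct backend access via .data
```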
Does not seem to work yet.
That is, values that can validly be multiplied with element vectors of the space in question.
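A rough sketch of what such a compatibility check could amount to; the helper below is entirely hypothetical, not code from this PR:

```python
import numpy as np

def is_compatible_multiplicand(space_shape, space_dtype, value):
    # Hypothetical helper: treat ``value`` as compatible if elementwise
    # multiplication with a space element broadcasts back to the
    # element's own shape.
    try:
        probe = np.ones(space_shape, dtype=space_dtype)
        result = probe * value
    except (TypeError, ValueError):
        return False
    return result.shape == space_shape

print(is_compatible_multiplicand((3, 4), 'float64', 2.0))         # True
print(is_compatible_multiplicand((3, 4), 'float64', np.ones(4)))  # True (broadcasts)
print(is_compatible_multiplicand((3, 4), 'float64', np.ones(5)))  # False
```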
This approach diverges from the plan, suggested by the deprecation notes, to base everything on `__array_ufunc__`; but that plan is probably not tenable if we want to properly support dissimilar backends.
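To make the trade-off concrete, a toy sketch (not ODL code) of routing everything through `__array_ufunc__`: every NumPy ufunc reaching the element must be translated to a backend function, here naively by name, which becomes brittle once backends diverge:

```python
import numpy as np
import torch

class TorchBackedElement:
    """Toy sketch, not ODL's implementation."""

    def __init__(self, tensor):
        self.tensor = torch.as_tensor(tensor)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        if method != '__call__' or kwargs:
            return NotImplemented
        # Translate the NumPy ufunc to a Torch function by name --
        # fragile as soon as the backends' vocabularies differ.
        torch_fn = getattr(torch, ufunc.__name__, None)
        if torch_fn is None:
            return NotImplemented
        args = [i.tensor if isinstance(i, TorchBackedElement) else i
                for i in inputs]
        return TorchBackedElement(torch_fn(*args))

e = TorchBackedElement([1.0, 4.0, 9.0])
print(np.sqrt(e).tensor)    # tensor([1., 2., 3.])
print(np.add(e, e).tensor)  # tensor([ 2.,  8., 18.])
```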
The NumPy-style version was very slow. These operations can be expressed nicely in terms of convolutions, which PyTorch supports well. Good performance still requires _not_ performing the update in place. Some padding modes are not supported yet.
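As an illustration of the idea (the actual operators and kernels in the PR may differ), a forward difference written as a direct 1d convolution in PyTorch, returning a fresh tensor rather than updating in place:

```python
import torch
import torch.nn.functional as F

def forward_diff(x):
    """Forward difference of a 1d signal via direct convolution.

    The kernel [-1, 1] computes x[i+1] - x[i].  conv1d expects shape
    (batch, channels, length), so the signal is reshaped in and out.
    The result is a new tensor -- no in-place update.
    """
    kernel = torch.tensor([[[-1.0, 1.0]]])          # shape (1, 1, 2)
    return F.conv1d(x.view(1, 1, -1), kernel).view(-1)

x = torch.tensor([0.0, 1.0, 4.0, 9.0])
print(forward_diff(x))   # tensor([1., 3., 5.])
```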
This solver / backend combination now runs efficiently in simple tests.
Less code duplication.
This is to make them more general with respect to backend, particularly towards PyTorch.
The implementation with a random check is an ugly hack, barely acceptable for this purpose. Arguably, hardcoding a full matrix of which dtypes are convertible would be cleaner, but it would actually be more problematic from a maintenance perspective, because Torch might change which conversions are supported. The only proper solution would be a Torch function corresponding to `np.can_cast`, but `torch.can_cast` does not fill that role; it is more restrictive.
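One possible shape of such a random check, as a hedged sketch only (the PR's actual implementation may differ): cast randomly sampled values there and back, and accept the conversion if the round trip preserves them:

```python
import torch

def roundtrip_preserves(src_dtype, dst_dtype, n=16):
    # Hedged guess at the "random check" idea, not the PR's code:
    # sample random values in src_dtype, cast to dst_dtype and back,
    # and accept the conversion if the values survive.  (Bool and
    # complex dtypes are omitted for brevity.)
    if src_dtype.is_floating_point:
        sample = torch.rand(n, dtype=src_dtype)
    else:
        sample = torch.randint(0, 100, (n,), dtype=src_dtype)
    back = sample.to(dst_dtype).to(src_dtype)
    return bool(torch.equal(sample, back))

print(roundtrip_preserves(torch.float32, torch.float64))  # True
print(roundtrip_preserves(torch.float64, torch.float32))  # almost surely False
print(roundtrip_preserves(torch.int32, torch.float64))    # True for small ints
```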
The size-filling required some nontrivial conversions.
It does not make much sense to use PyTorch storage but a NumPy-based Fourier transform.
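For comparison, a small sketch (sizes and dtypes chosen arbitrarily): a NumPy-based transform forces the data out of the Torch backend and back, while `torch.fft` keeps it there:

```python
import numpy as np
import torch

x = torch.randn(256, dtype=torch.float64)

# NumPy-based transform: convert out of Torch and back again
# (on GPU this would additionally mean device-to-host copies).
y_np = torch.from_numpy(np.fft.fft(x.numpy()))

# Torch-native transform: stays on the backend throughout.
y_torch = torch.fft.fft(x)

print(torch.allclose(y_np, y_torch))  # True up to floating-point error
```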
This is an important auxiliary function for ODL's Fourier transforms (or rather, for their pre- and post-processing).
Using the new array-manager classes.
It makes no sense to pass the `out` argument as the input parameter. The `x` argument of the `_postprocess` method was completely unused. Upon investigating why this never caused any problems, I found that `x is out` held true in all the unit tests. In that case the two are interchangeable, but this cannot be assumed in general. I can imagine no setting where it would actually be necessary to double-pass `out` this way. As for why the old version used `out` as the input argument: this originated in 4e4e928, where the call to `dft_postprocess_data` was refactored. It had previously been in the `_call_pyfftw` method, where the data was indeed stored in `out` (having come out of `pyfftw_call`). It appears that this call was copied and pasted into the then-new `_postprocess` method, and the argument was never changed to `x`.
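A toy reconstruction of the bug, with a simplified signature; `postprocess` below stands in for the real call to `dft_postprocess_data`:

```python
import numpy as np

def postprocess(x, out=None):
    # Toy stand-in: reads the *input* ``x`` and writes into ``out``.
    if out is None:
        out = np.empty_like(x)
    np.multiply(x, 2.0, out=out)
    return out

x = np.array([1.0, 2.0, 3.0])
out = np.empty_like(x)

# Old pattern: ``out`` double-passed as the input.  This reads whatever
# ``out`` happens to hold -- correct only when ``x is out``, which
# happened to be the case in all the unit tests.
postprocess(out, out=out)

# Fixed pattern: the actual input goes in.
postprocess(x, out=out)
print(out)   # [2. 4. 6.]
```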
…ady on Torch. The old `torch.tensor` converter did just the right thing: construct a new tensor when given lists or NumPy arrays, and simply clone the input if it was already a PyTorch tensor. For some reason, Torch has deprecated this behaviour, so it is now necessary to hard-code the different possibilities.
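A sketch of the kind of hard-coded dispatch this amounts to (the function name is ours, not the PR's):

```python
import numpy as np
import torch

def as_torch_tensor(x):
    # Hypothetical converter replacing the old catch-all torch.tensor(x):
    if isinstance(x, torch.Tensor):
        # torch.tensor() on an existing tensor now warns; the
        # recommended spelling for copy-construction is clone().detach().
        return x.clone().detach()
    # Lists, NumPy arrays, scalars: torch.tensor still copies these
    # into a new tensor, as before.
    return torch.tensor(x)

print(as_torch_tensor([1.0, 2.0]))
print(as_torch_tensor(np.arange(3)))
print(as_torch_tensor(torch.ones(2)))
```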
…TensorSpace objects. This removes the need for some redundant checks/conversions. Backend-specific arrays are used in several different places; TensorSpace-element construction is only one of them.
Draft of the changes needed to store PyTorch arrays internally, as an alternative to NumPy.