
Backend/pytorch arrays 2 #1679


Draft
wants to merge 52 commits into base: master

Conversation

leftaroundabout
Contributor

Draft of the changes needed to store PyTorch arrays internally, as an alternative to NumPy.

Only superficially tested so far.
Results match the NumPy version, though not exactly, because a compact
kernel is used with direct convolution whereas the NumPy version uses
FFT convolution.
It is debatable whether this is what `asarray` is there for. Perhaps it would
be better to simply expose `.data` for this purpose and keep `.asarray`
NumPy-specific. However, since `__array__` is already there, it seems to fulfill
that purpose as well.
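For illustration, a minimal sketch of the `__array__` point (the wrapper class and names are hypothetical, not ODL's actual classes): an object holding a PyTorch tensor can still be consumed by `np.asarray`, because NumPy calls `__array__` to obtain the data.

```python
import numpy as np
import torch

class TorchTensorWrapper:
    """Hypothetical wrapper around a PyTorch tensor, for illustration only."""

    def __init__(self, tensor):
        self.data = tensor

    def __array__(self, dtype=None):
        # NumPy calls this to obtain a NumPy array from the wrapped tensor.
        arr = self.data.detach().cpu().numpy()
        return arr if dtype is None else arr.astype(dtype)

w = TorchTensorWrapper(torch.arange(4, dtype=torch.float32))
print(np.asarray(w))  # uses __array__, yields a NumPy array
```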
That is, values that are compatible with being multiplied by element vectors
of the space in question.
This approach diverges from the plan to base everything on `__array_ufunc__`
as the deprecation notes suggest, but that is probably not tenable if
we want to properly support dissimilar backends.
The numpy-style version was very slow. These operations can be expressed
nicely in terms of convolutions, which PyTorch supports well.
Getting good performance still requires _not_ performing the update in place.
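Not part of the PR, but for concreteness: a minimal sketch of what "expressed in terms of convolutions" can look like, using a forward finite difference as a stand-in for the actual operations (the function name and kernel are illustrative).

```python
import torch
import torch.nn.functional as F

def forward_diff(x):
    # Forward finite difference x[i+1] - x[i], written as a 1D convolution
    # (cross-correlation) with the kernel [-1, 1]; conv1d expects input of
    # shape (batch, channels, length).
    kernel = torch.tensor([[[-1.0, 1.0]]], dtype=x.dtype, device=x.device)
    return F.conv1d(x.view(1, 1, -1), kernel).view(-1)

x = torch.linspace(0, 1, 11) ** 2
print(forward_diff(x))
```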

Some padding modes are not supported yet.
This solver / backend combination now runs efficiently in simple tests.
Less code duplication.
This is to make them more general with respect to backend, particularly towards PyTorch.
The implementation with a random check is an ugly hack, barely acceptable for
this purpose. Arguably, hard-coding a full matrix of which conversions are
allowed would be a cleaner solution, but it would actually be more problematic
from a maintenance perspective, because Torch might change which conversions
are supported.

The only proper solution would be to use a Torch function corresponding to
`np.can_cast`, but `torch.can_cast` does not do that; it is more restrictive.
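For concreteness, here is one way such a random check could look. This is an illustrative sketch, not the PR's actual implementation; the function name and sampling details are made up.

```python
import torch

def can_cast_probe(from_dtype, to_dtype, num_samples=64):
    """Heuristic stand-in for an `np.can_cast`-style query: draw random values
    of `from_dtype` and test whether they survive a round trip through
    `to_dtype`. Illustrative only, not the PR's actual code."""
    if from_dtype.is_floating_point or from_dtype.is_complex:
        sample = torch.randn(num_samples, dtype=from_dtype)
    elif from_dtype == torch.bool:
        sample = torch.randint(0, 2, (num_samples,)).to(torch.bool)
    else:
        info = torch.iinfo(from_dtype)
        sample = torch.randint(max(info.min, -2**31), min(info.max, 2**31 - 1),
                               (num_samples,), dtype=from_dtype)
    try:
        round_trip = sample.to(to_dtype).to(from_dtype)
    except RuntimeError:
        return False
    return bool(torch.equal(round_trip, sample))
```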
The size-filling required some nontrivial conversions.
It does not make much sense to use PyTorch storage but NumPy-based Fourier transform.
This is an important auxiliary function for ODL's Fourier transforms (or
rather, to their pre- and post-processing).
Using the new array-manager classes.
It makes no sense to pass the `out` argument as the input parameter.
The `x` argument of the `_postprocess` method was completely unused.
Upon investigating why this never caused any problems, I found that in
all the unit tests `x is out` held true. In that case both are interchangeable,
but this cannot in general be assumed.
I can imagine no setting where it would actually be necessary to double-pass
`out` this way.

As for why the old version used `out` as the input argument: this originated in
4e4e928, where the call to `dft_postprocess_data` was refactored.
It had previously been in the `_call_pyfftw` method, where the data was indeed
stored in `out` (having come out of `pyfftw_call`). It appears that this call
was copied and pasted into the then-new `_postprocess` method, but the argument
was never changed to `x`.
…ady on Torch.

The old `torch.tensor` converter did just the right thing: construct a new
tensor when given lists or NumPy arrays, and simply clone the input if it was
already a PyTorch tensor.
For some reason, Torch has deprecated this behaviour, so it is now necessary
to hard-code the different possibilities.
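A sketch of the resulting case split (the helper name and keyword handling are illustrative, not taken from the PR): existing tensors are cloned explicitly, while lists, NumPy arrays and scalars go through the usual constructor.

```python
import torch

def as_torch_tensor(data, dtype=None, device=None):
    # Illustrative sketch, not the PR's actual helper.
    if isinstance(data, torch.Tensor):
        # torch.tensor now warns when handed an existing tensor,
        # so clone/detach explicitly instead.
        result = data.clone().detach()
        if dtype is not None:
            result = result.to(dtype)
        if device is not None:
            result = result.to(device)
        return result
    return torch.tensor(data, dtype=dtype, device=device)
```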
…TensorSpace objects.

This removes the necessity for some redundant checks/conversions.
Backend-specific arrays are used in several different places;
TensorSpace-element construction is only one of them.