
Conversation

@crutcher

This manipulates layout to unfold a window along a dimension.

See: https://docs.pytorch.org/docs/2.8/generated/torch.Tensor.unfold.html

TODO: I need help with `backprop.rs::Tensor::backward()`
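
For context, the layout arithmetic is small: following the torch.Tensor.unfold semantics linked above, the unfolded dimension is replaced by a window-count dimension (original stride scaled by `step`), and a trailing window dimension of length `size` reuses the original stride. A minimal illustrative sketch of that calculation (not the PR's code; names are made up):

    /// Illustrative only: shape/strides of a pure-view `unfold(dim, size, step)`,
    /// following torch.Tensor.unfold semantics.
    fn unfold_layout(
        shape: &[usize],
        strides: &[usize],
        dim: usize,
        size: usize,
        step: usize,
    ) -> (Vec<usize>, Vec<usize>) {
        assert!(dim < shape.len() && step > 0 && size <= shape[dim]);
        let windows = (shape[dim] - size) / step + 1;

        // `dim` now counts windows; a trailing axis of length `size` indexes
        // within each window.
        let mut new_shape = shape.to_vec();
        new_shape[dim] = windows;
        new_shape.push(size);

        // Advancing one window jumps `step` elements along the original stride;
        // stepping inside a window reuses the original stride.
        let mut new_strides = strides.to_vec();
        new_strides[dim] = strides[dim] * step;
        new_strides.push(strides[dim]);

        (new_shape, new_strides)
    }

For example, a contiguous (2, 5) tensor with strides (5, 1) unfolded as unfold(1, 2, 1) yields shape (2, 4, 2) with strides (5, 1, 1).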
@crutcher
Author

I'm working to thread `unfold` through Rust tensor backends to support lower-memory spatial transforms, particularly those used by burn.

I'm new to candle and don't know how to write the backprop code; I could use a suggested impl or some pointers.

@crutcher crutcher changed the title [WIP] Implement 1-dim unfolded window views. [WIP] Implement Tensor::unfold(dim, size, step): 1-dim unfolded window views. Sep 22, 2025
@crutcher
Copy link
Author

A heavier-handed approach would be something like `Tensor::with_layout()`, which I understand is a scary API to provide, though it is actually pretty simple to validate: any given layout has an easy-to-compute maximum and minimum ravel offset, and the test is just "is that inside the storage size?"
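
The validation I have in mind looks roughly like this (an illustrative sketch assuming non-negative strides; not candle's actual layout code):

    /// Illustrative only: does a (shape, strides, start_offset) layout stay
    /// inside a storage of `storage_len` elements? With non-negative strides
    /// the minimum ravel offset is just `start_offset`, so only the maximum
    /// needs checking.
    fn layout_fits(
        shape: &[usize],
        strides: &[usize],
        start_offset: usize,
        storage_len: usize,
    ) -> bool {
        if shape.iter().any(|&d| d == 0) {
            return true; // an empty view touches no storage
        }
        // Maximum ravel offset: the last element of the view.
        let max_offset: usize = start_offset
            + shape
                .iter()
                .zip(strides)
                .map(|(&d, &s)| (d - 1) * s)
                .sum::<usize>();
        max_offset < storage_len
    }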

crutcher added a commit to crutcher/burn that referenced this pull request Sep 22, 2025
- switched to pytorch's return shape.
- added burn-router
- exposed unfold calculation module.

- ndarray and candle both need either upstream support or work-arounds.

candle has a PR in-flight (from me):
huggingface/candle#3091
@crutcher
Author

I have a broken impl of backprop; I'm trying to debug it, but it never runs: there's an assert in it that never triggers.

@crutcher
Author

@ivarflakstad ping?

laggui pushed a commit to tracel-ai/burn that referenced this pull request Sep 24, 2025
* [WIP] towards pytorch.unfold()

* torch

* Expand unfold impl.

- switched to pytorch's return shape.
- added burn-router
- exposed unfold calculation module.

- ndarray and candle both need either upstream support or work-arounds.

candle has a PR in-flight (from me):
huggingface/candle#3091

* docs

* Implement `unfold` operation for tensor backends.

- Added `unfold` function for `candle`, `ndarray` backends with a copy-based implementation.
- Updated function docs and tensor operation traits accordingly.
- Incorporated window shape calculation for `unfold`.

* Simplify `into_ranges` call for tensor shape calculation.

* Remove redundant field repetition in `UnfoldOpIr` initialization.

* book

* Update `slice` implementation to use step-aware slice arguments in `unfold`.

* rustfmt; fix rebase errors.

* Optimize `unfold4d` implementation for zero-padding and unit-dilation cases. Update imports.

* Refactor `unfold` implementation by introducing `calculate_unfold_shape`.

- Replaced `calculate_unfold_windows` with `calculate_unfold_shape` across tensor backends.
- Updated function documentation to reflect the new shape calculation.
- Added unit tests for `calculate_unfold_shape`.
- Simplified shape handling in tensor unfold operations.
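
The copy-based fallback mentioned in that commit can be sketched with plain candle ops. This is illustrative only, not the merged burn code, and assumes the pytorch return shape described above (window count replaces the unfolded dim, window size is appended last):

    use candle_core::{Result, Tensor};

    /// Illustrative copy-based unfold: gather each window with `narrow`, move
    /// the window axis to the end, then stack the windows along `dim`.
    fn unfold_copy(x: &Tensor, dim: usize, size: usize, step: usize) -> Result<Tensor> {
        let len = x.dim(dim)?;
        assert!(size <= len && step > 0);
        let windows = (len - size) / step + 1;

        let mut parts = Vec::with_capacity(windows);
        for w in 0..windows {
            // Same shape as `x`, except `dim` now has length `size`.
            let slice = x.narrow(dim, w * step, size)?;
            // Move the window axis to the last position.
            let mut order: Vec<usize> = (0..slice.rank()).filter(|&d| d != dim).collect();
            order.push(dim);
            parts.push(slice.permute(order)?);
        }
        // Insert a new axis of length `windows` at position `dim`.
        Tensor::stack(&parts, dim)
    }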
@greenrazer
Contributor

Hi @crutcher,

In your test, instead of:

    let data = &[[0f32, 1., 2., 3., 4.], [5f32, 6., 7., 8., 9.]];
    let x = Tensor::new(data, device)?;

you should do:

    let data = &[[0f32, 1., 2., 3., 4.], [5f32, 6., 7., 8., 9.]];
    let v = Var::new(data, device)?;
    let x = v.as_tensor();

This sets the variable field on the tensor to true, which in turn makes `x.track_op()` return true. As a result, `BackpropOp::new1(self, |arg| Op::Unfold(arg, dim, size, step))` returns `Some` instead of `None`, which allows `sorted_nodes` to return a non-empty list.

With this change, `assert!(false, "never runs");` should execute.
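
For completeness, a sketch of the adjusted test end to end (this assumes the `unfold(dim, size, step)` signature from this PR; the gradient check is only a smoke test):

    use candle_core::{Device, Result, Var};

    fn unfold_grad_smoke_test() -> Result<()> {
        let device = &Device::Cpu;
        let data = &[[0f32, 1., 2., 3., 4.], [5f32, 6., 7., 8., 9.]];

        // `Var` marks the tensor as a variable, so ops on it are tracked.
        let v = Var::new(data, device)?;
        let x = v.as_tensor();

        // unfold(dim=1, size=2, step=1): shape (2, 5) -> (2, 4, 2).
        let y = x.unfold(1, 2, 1)?;

        // Any scalar loss works; backward() should now reach Op::Unfold.
        let grads = y.sum_all()?.backward()?;
        let grad_x = grads.get(x).expect("gradient for x");
        assert_eq!(grad_x.dims(), x.dims());
        Ok(())
    }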
