@wincent8 wincent8 commented Oct 31, 2025

disable_distributed
disable_e2e
disable_build

@wincent8 wincent8 requested a review from daisyden October 31, 2025 04:07
@wincent8 wincent8 force-pushed the wliao2/enable_sparse branch from 87aecd5 to 143ea51 Compare November 3, 2025 13:31
wincent8 and others added 9 commits November 3, 2025 21:36
Resolves #2207.
This PR adds FP8 data type support for `torch.cat` and `torch.where` on the
XPU backend.
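Since `cat` and `where` move or select elements without doing arithmetic on them, FP8 support is mostly a matter of routing the new dtypes through the existing kernels. A minimal pure-Python sketch of the element-level semantics (illustrative helpers, not the ATen implementation; element values stand in for raw FP8 bytes):

```python
def cat(tensors):
    """Concatenate 1-D 'tensors' (lists of raw element values).

    Like torch.cat, this only copies elements; it never interprets
    their bits, so FP8 behaves the same as float32 here.
    """
    out = []
    for t in tensors:
        out.extend(t)
    return out

def where(cond, a, b):
    """Elementwise select: out[i] = a[i] if cond[i] else b[i].

    Again pure data movement -- no arithmetic on the element values.
    """
    return [x if c else y for c, x, y in zip(cond, a, b)]

# Element values stand in for raw FP8 bytes (0..255).
print(cat([[0x3C, 0x40], [0xBC]]))                       # [60, 64, 188]
print(where([True, False], [0x3C, 0x40], [0x00, 0x7F]))  # [60, 127]
```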

---------

Co-authored-by: Cui, Yifeng <[email protected]>
Co-authored-by: Copilot <[email protected]>
Resolves #2207.
This PR extends support for float8 data types across various XPU tensor
indexing and transformation kernels, ensuring these operations are compatible
with the new types. It also adds a regression test for flipping float8
tensors and removes the skip for the float8 indexing tests.

**Float8 type support:**

* Updated dispatch macros in `XPUScalar.cpp` and `Indexing.cpp` to
include `AT_FLOAT8_TYPES`, enabling float8 support in scalar extraction,
indexing, index_put, and deterministic index_put kernels.
* Modified `flip_kernel` in `TensorTransformationsKernels.cpp` to
support float8 and barebones unsigned types, updating the dispatch
mechanism accordingly.
* Included the new dispatch header `Dispatch_v2.h` for the updated
dispatch macros.
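The dispatch macros above expand to a switch over the runtime dtype that instantiates the kernel for each supported scalar type; adding `AT_FLOAT8_TYPES` widens that switch. A simplified standalone sketch of the pattern (dtype names and `dispatch` are illustrative, not the ATen API):

```python
# Illustrative dtype tags; real code switches on at::ScalarType values.
SUPPORTED = {"float32", "float16",
             "float8_e4m3fn", "float8_e5m2"}  # float8 newly included

def dispatch(dtype, kernel, *args):
    """Route a dtype-generic kernel, mirroring what the dispatch
    macros do: unsupported dtypes raise instead of miscomputing."""
    if dtype not in SUPPORTED:
        raise TypeError(f"unsupported dtype: {dtype}")
    return kernel(*args)

def index_select(values, indices):
    # A data-movement kernel: gathers elements by index.
    return [values[i] for i in indices]

print(dispatch("float8_e4m3fn", index_select, [5, 6, 7], [2, 0]))  # [7, 5]
```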

**Testing improvements:**

* Added a regression test for flipping float8 tensors in
`test_index_and_index_put.py` to verify correctness of the operation on
XPU.
* Removed the skip for float8 tests in `test_indexing_xpu.py`,
re-enabling these tests now that support is implemented.
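As a rough illustration of what such a regression test checks: flipping is pure index remapping, so a float8 result can be validated bit-for-bit against the reversed input. A pure-Python sketch (hypothetical, not the actual test in `test_index_and_index_put.py`):

```python
def flip1d(data):
    """Reverse a 1-D buffer by index remapping, as a flip kernel does:
    out[i] = in[n - 1 - i]; element bits are copied untouched."""
    n = len(data)
    return [data[n - 1 - i] for i in range(n)]

# Raw FP8 byte values; correctness is simply exact reversal.
buf = [0x00, 0x3C, 0x40, 0xBC]
assert flip1d(buf) == buf[::-1]
print(flip1d(buf))  # [188, 64, 60, 0]
```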

---------

Co-authored-by: Cui, Yifeng <[email protected]>