
Fix a bug in flash attention where kv_seq_len should divide block_k_m… #10997

Triggered via push: February 10, 2025, 21:42
Status: Success
Total duration: 1h 47m 23s
Artifacts: 3
get-torch-commit             3s
Build PyTorch/XLA / build    34m 20s
Matrix: CPU tests / test

Artifacts

Produced during runtime
Name               Size
cpp-test-bin       672 MB
github-pages       5.69 MB
torch-xla-wheels   225 MB
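
For context on the constraint named in the run title, below is a minimal sketch of the kind of divisibility check such a fix typically involves. The parameter name block_k_major and the exact direction of the requirement (that kv_seq_len must be a multiple of the KV block size) are assumptions here, since the title is truncated on this page; this is not taken from the PR's actual code.

```python
# Minimal sketch of the divisibility constraint hinted at by the run title.
# Assumptions (not taken from this page): the KV block-size parameter is
# called `block_k_major`, and kv_seq_len must be an exact multiple of it.
def check_kv_block_size(kv_seq_len: int, block_k_major: int) -> None:
    """Fail fast if the KV sequence length cannot be tiled by the KV block size."""
    if kv_seq_len % block_k_major != 0:
        raise ValueError(
            f"kv_seq_len ({kv_seq_len}) must be divisible by "
            f"block_k_major ({block_k_major}) in the flash attention kernel"
        )


check_kv_block_size(kv_seq_len=2048, block_k_major=512)    # passes
# check_kv_block_size(kv_seq_len=2048, block_k_major=500)  # would raise ValueError
```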