
Fix a bug in flash attention where kv_seq_len should divide block_k_major. #10989
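
The run header above names a divisibility constraint between kv_seq_len and block_k_major in the flash attention kernel wrapper. The sketch below is illustrative only and is not the actual patch from this pull request: it shows, with hypothetical names (DEFAULT_BLOCK_K_MAJOR, choose_kv_block_size), how a blocked attention entry point might clamp the major KV block size so that it tiles kv_seq_len evenly, and fail fast otherwise.

```python
# Illustrative sketch only; not the torch_xla patch from this PR.
# DEFAULT_BLOCK_K_MAJOR and choose_kv_block_size are hypothetical names.

DEFAULT_BLOCK_K_MAJOR = 512  # assumed default tile size along the KV axis


def choose_kv_block_size(kv_seq_len: int,
                         block_k_major: int = DEFAULT_BLOCK_K_MAJOR) -> int:
    """Pick a KV block size that tiles kv_seq_len evenly.

    A block larger than the sequence is clamped first, so short sequences
    do not trip the divisibility requirement named in the PR title.
    """
    block_k_major = min(block_k_major, kv_seq_len)
    if kv_seq_len % block_k_major != 0:
        raise ValueError(
            f"kv_seq_len ({kv_seq_len}) must be divisible by "
            f"block_k_major ({block_k_major})")
    return block_k_major


if __name__ == "__main__":
    # A short sequence (256 < 512) gets a clamped, valid block size.
    print(choose_kv_block_size(256))   # 256
    # A long, aligned sequence keeps the default block size.
    print(choose_kv_block_size(2048))  # 512
```
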

Re-run triggered: February 10, 2025 18:06
Status: Success
Total duration: 2h 25m 34s
Artifacts: 3

build_and_test.yml
on: pull_request

Jobs:
get-torch-commit (3s)
Build PyTorch/XLA / build (1h 11m)
Matrix: CPU tests / test

Artifacts

Produced during runtime

Name              Size
cpp-test-bin      672 MB
github-pages      5.69 MB
torch-xla-wheels  225 MB