Fix a bug in flash attention where kv_seq_len should divide block_k_major. #10988
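
For context, here is a hypothetical Python sketch of the divisibility constraint between kv_seq_len and block_k_major that the title describes, assuming a Pallas-style tiling where the key/value sequence is walked in block_k_major-sized chunks. The function name and the clamping step are assumptions for illustration, not the actual fix in this PR.

def check_kv_block_sizes(kv_seq_len: int, block_k_major: int) -> int:
    """Hypothetical check: clamp the block, then require an even tiling."""
    # If the sequence is shorter than the default block, shrink the block
    # so a single tile covers the whole sequence (assumed behavior).
    block_k_major = min(block_k_major, kv_seq_len)
    # The kernel processes the key/value sequence one block_k_major tile
    # at a time, so the sequence length must be a whole number of tiles.
    if kv_seq_len % block_k_major != 0:
        raise ValueError(
            f"kv_seq_len ({kv_seq_len}) must be a multiple of "
            f"block_k_major ({block_k_major})"
        )
    return block_k_major

For example, check_kv_block_sizes(1024, 512) passes, check_kv_block_sizes(256, 512) clamps the block to 256 and passes, while check_kv_block_sizes(1000, 512) raises.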

Re-run triggered: February 8, 2025 09:11
Status: Failure
Total duration: 1h 30m 20s
Artifacts: 3

build_and_test.yml (on: pull_request)

Jobs:
get-torch-commit: 1s
Build PyTorch/XLA / build: 1h 11m
Matrix: CPU tests / test

Annotations

1 error and 1 warning

Error (TPU tests / tpu-test): Process completed with exit code 1.
Warning (TPU tests / tpu-test): This job failure may be caused by using an out-of-date self-hosted runner. You are currently using runner version 2.321.0. Please update to the latest version, 2.322.0.

Artifacts

Produced during runtime

Name               Size
cpp-test-bin       672 MB
github-pages       5.69 MB
torch-xla-wheels   225 MB