Fix a bug in flash attention where kv_seq_len should divide block_k_major. #10980

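The title records the constraint this fix concerns in the flash attention kernel's KV blocking: kv_seq_len should divide block_k_major. As a rough illustration only, here is a minimal sketch of such a divisibility check; the helper name, the clamping assumption, and the error handling are hypothetical and not taken from the PR.

```python
# Minimal sketch, not the PR's actual code: it only encodes the constraint
# named in the title ("kv_seq_len should divide block_k_major").
def check_kv_blocking(kv_seq_len: int, block_k_major: int) -> None:
    # Assumption for illustration: this matters when the KV sequence is
    # shorter than the major K block, since the block is then effectively
    # limited by the sequence length and the tiling must still line up.
    if kv_seq_len < block_k_major and block_k_major % kv_seq_len != 0:
        raise ValueError(
            f"kv_seq_len={kv_seq_len} must divide block_k_major={block_k_major}"
        )


check_kv_blocking(kv_seq_len=128, block_k_major=512)    # OK: 512 % 128 == 0
# check_kv_blocking(kv_seq_len=100, block_k_major=512)  # would raise ValueError
```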
Re-run triggered: February 7, 2025 04:35
Status: Cancelled
Total duration: 38m 53s

Workflow: build_and_test.yml (on: pull_request)
Jobs:
get-torch-commit (1s)
Build PyTorch/XLA / build (38m 32s)
Build docs / build-docs
TPU tests / tpu-test
Matrix: CPU tests / test (waiting for pending jobs)

Annotations

2 errors

Build PyTorch/XLA / build: Canceling since a higher priority waiting request for 'Build and test-8671-false-false' exists
Build PyTorch/XLA / build: The operation was canceled.