Fix a bug in flash attention where kv_seq_len should divide block_k_major. #10980
Workflow: build_and_test.yml (on: pull_request)

Annotations — 2 errors:

- Build PyTorch/XLA / build: Canceling since a higher priority waiting request for 'Build and test-8671-false-false' exists
- Build PyTorch/XLA / build: The operation was canceled.