
[CUDA] Support FP8 (E4M3) KV Cache for Group Query Attention #10520

Triggered via pull request February 14, 2026 04:12
Status: Success
Total duration: 20m 52s

Job: build_x64_release_ep_generic_interface (18m 26s)

Annotations: 6 warnings