
[CUDA] Support FP8 (E4M3) KV Cache for Group Query Attention #50649

Annotations: 5 warnings

Python format: succeeded Feb 14, 2026 in 2m 2s