[Shape Inference] Fix GQA shape inference for present outputs #10217

Re-run triggered: February 5, 2026 08:36
Status: Success
Total duration: 2h 36m 47s
Artifacts: 1

windows_cuda.yml

on: pull_request

Windows GPU CUDA CI Pipeline: 42m 49s
Windows GPU CUDA CI Pipeline Test Job: 27m 39s

Annotations

6 warnings
Windows GPU CUDA CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1234: epilog offset from end of function exceeds 4095
Windows GPU CUDA CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1227: epilog offset from end of function exceeds 4095
Windows GPU CUDA CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1220: epilog offset from end of function exceeds 4095
Windows GPU CUDA CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1213: epilog offset from end of function exceeds 4095
Windows GPU CUDA CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1206: epilog offset from end of function exceeds 4095
Windows GPU CUDA CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1199: epilog offset from end of function exceeds 4095

Artifacts

Produced during runtime
Name: build-artifacts (Expired)
Size: 1.98 GB
Digest: sha256:b4f186d897a1c36c7d5517e31feaef1131f9517d2ff4bfb1cfbaf3b1683da12a