[Shape Inference] Fix GQA shape inference for present outputs (#27250) #10333

Triggered via push: February 10, 2026 22:36
Status: Success
Total duration: 1h 57m 46s
Artifacts: 1
Windows GPU TensorRT CI Pipeline: 38m 58s
Windows GPU TensorRT CI Pipeline Test Job: 37m 4s

Annotations

6 warnings (Windows GPU TensorRT CI Pipeline):
onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1234: epilog offset from end of function exceeds 4095
onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1227: epilog offset from end of function exceeds 4095
onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1220: epilog offset from end of function exceeds 4095
onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1213: epilog offset from end of function exceeds 4095
onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1206: epilog offset from end of function exceeds 4095
onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1199: epilog offset from end of function exceeds 4095

Artifacts

Produced during runtime
Name: build-artifacts
Size: 1.89 GB
Digest: sha256:ada204875654245bca56897afa647de88e936a5e47eb0efd720289158eec105b
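
If you download the build-artifacts archive, the sha256 digest above can be checked locally. A minimal sketch in Python, assuming the artifact was saved as build-artifacts.zip (the filename is an assumption; the expected value is the digest listed above):

import hashlib

# Expected digest for build-artifacts, copied from the artifact listing above.
EXPECTED_DIGEST = "ada204875654245bca56897afa647de88e936a5e47eb0efd720289158eec105b"

def sha256_of_file(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so a ~1.89 GB artifact is not read into memory at once.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # "build-artifacts.zip" is an assumed local filename for the downloaded artifact.
    actual = sha256_of_file("build-artifacts.zip")
    print("digest matches" if actual == EXPECTED_DIGEST else "digest mismatch: " + actual)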