Remove some dead code. #10372
Triggered via pull request February 12, 2026 00:47
Status Failure
Total duration 38m 8s
windows_tensorrt.yml
on: pull_request

Windows GPU TensorRT CI Pipeline: 35m 28s
Windows GPU TensorRT CI Pipeline Test Job: 0s
Annotations

10 errors and 6 warnings
Windows GPU TensorRT CI Pipeline: onnxruntime/core/providers/cuda/llm/attention.cc#L219
'=': cannot convert from 'const onnxruntime::cuda::Attention<float>::ComputeInternal::CudaT *' to 'const U *'
Windows GPU TensorRT CI Pipeline: onnxruntime/core/providers/cuda/llm/attention.cc#L218
'=': cannot convert from 'const onnxruntime::cuda::Attention<float>::ComputeInternal::CudaT *' to 'const U *'
Windows GPU TensorRT CI Pipeline: onnxruntime/core/providers/cuda/llm/attention.cc#L217
'=': cannot convert from 'const onnxruntime::cuda::Attention<float>::ComputeInternal::CudaT *' to 'const T *'
Windows GPU TensorRT CI Pipeline: onnxruntime/core/providers/cuda/llm/attention.cc#L216
'=': cannot convert from 'const onnxruntime::cuda::Attention<float>::ComputeInternal::CudaT *' to 'const T *'
Windows GPU TensorRT CI Pipeline: onnxruntime/core/providers/cuda/llm/attention.cc#L215
'=': cannot convert from 'const onnxruntime::cuda::Attention<float>::ComputeInternal::CudaT *' to 'const T *'
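The five conversion errors above all have the same shape: a pointer to the device-side type alias `CudaT` is being assigned to a `const T *` (or `const U *`) field, and the two types are distinct, so no implicit conversion exists. A minimal sketch of that error class, using hypothetical stand-in types (`HostHalf` for the template parameter, `CudaHalf` for the device alias) rather than the real ONNX Runtime types:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-ins: layout-compatible but unrelated types, modeling
// how a device-side alias (e.g. a CUDA half type) differs from T.
struct HostHalf { std::uint16_t bits; };
struct CudaHalf { std::uint16_t bits; };

template <typename T, typename U>
struct AttentionDataSketch {
  const T* query = nullptr;
  const U* output = nullptr;
};

// The failing pattern: `data.query = cuda_ptr;` does not compile because
// there is no implicit conversion from const CudaHalf* to const HostHalf*.
// An explicit reinterpret_cast (valid here only because the layouts match)
// is one way such assignments are typically made to compile.
inline const HostHalf* ToHostPointer(const CudaHalf* p) {
  return reinterpret_cast<const HostHalf*>(p);
}
```

This is only an illustration of the diagnostic, not the fix applied in the PR; the actual resolution depends on how `CudaT`, `T`, and `U` relate in `attention.cc`.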
Windows GPU TensorRT CI Pipeline: onnxruntime/core/providers/cuda/llm/attention.cc#L199
'onnxruntime::contrib::cuda::GroupQueryAttentionData<T,U> onnxruntime::contrib::cuda::GroupQueryAttentionData(onnxruntime::contrib::cuda::GroupQueryAttentionData<T,U>)': expects 1 arguments - 0 provided
Windows GPU TensorRT CI Pipeline: onnxruntime/core/providers/cuda/llm/attention.cc#L199
'onnxruntime::contrib::cuda::GroupQueryAttentionData<T,U> onnxruntime::contrib::cuda::GroupQueryAttentionData(void)': could not deduce template argument for 'U'
Windows GPU TensorRT CI Pipeline: onnxruntime/core/providers/cuda/llm/attention.cc#L199
'onnxruntime::contrib::cuda::GroupQueryAttentionData<T,U> onnxruntime::contrib::cuda::GroupQueryAttentionData(void)': could not deduce template argument for 'T'
Windows GPU TensorRT CI Pipeline: onnxruntime/core/providers/cuda/llm/attention.cc#L199
cannot deduce template arguments for 'onnxruntime::contrib::cuda::GroupQueryAttentionData'
Windows GPU TensorRT CI Pipeline: onnxruntime/core/providers/cuda/llm/attention.cc#L199
'onnxruntime::contrib::cuda::GroupQueryAttentionData': too few template arguments
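The five errors at attention.cc line 199 are all facets of one problem: `GroupQueryAttentionData<T, U>` is being constructed without explicit template arguments, and the compiler has nothing to deduce `T` and `U` from. A minimal sketch of that failure mode, with a hypothetical struct standing in for the real one:

```cpp
#include <cassert>

// Hypothetical model of a class template with two parameters and no
// deduction guides, like GroupQueryAttentionData<T, U>.
template <typename T, typename U>
struct GroupQueryAttentionDataSketch {
  const T* query = nullptr;
  U* workspace = nullptr;
};

// GroupQueryAttentionDataSketch data{};   // mirrors the CI errors:
//   "could not deduce template argument for 'T'" / "for 'U'",
//   "too few template arguments".
// Spelling the arguments out makes the construction well-formed:
inline GroupQueryAttentionDataSketch<float, float> MakeSketchData() {
  return GroupQueryAttentionDataSketch<float, float>{};
}
```

Again, this only reproduces the diagnostic class; the real fix is presumably to supply (or make deducible) the second template parameter `U` that the struct in `attention.cc` now requires.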
Windows GPU TensorRT CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1234
epilog offset from end of function exceeds 4095
Windows GPU TensorRT CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1227
epilog offset from end of function exceeds 4095
Windows GPU TensorRT CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1220
epilog offset from end of function exceeds 4095
Windows GPU TensorRT CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1213
epilog offset from end of function exceeds 4095
Windows GPU TensorRT CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1206
epilog offset from end of function exceeds 4095
Windows GPU TensorRT CI Pipeline: onnxruntime/core/mlas/lib/amd64/QgemmU8X8KernelAvx2.asm#L1199
epilog offset from end of function exceeds 4095