
add int8 quantization support for llm models #1183

Triggered via pull request: February 20, 2026, 00:18
Status: Failure
Total duration: 25m 49s
Artifacts: 1
generate-matrix / generate: 5s
filter-matrix: 23s
Matrix: build
Matrix: L0-dynamo-converter-tests (waiting for pending jobs)
Matrix: L0-dynamo-core-tests (waiting for pending jobs)
Matrix: L0-py-core-tests (waiting for pending jobs)
Matrix: L1-dynamo-compile-tests (waiting for pending jobs)
Matrix: L1-dynamo-core-tests (waiting for pending jobs)
Matrix: L1-torch-compile-tests (waiting for pending jobs)
Matrix: L2-dynamo-compile-tests (waiting for pending jobs)
Matrix: L2-dynamo-core-tests (waiting for pending jobs)
Matrix: L2-dynamo-plugin-tests (waiting for pending jobs)
Matrix: L2-torch-compile-tests (waiting for pending jobs)

Annotations: 1 error
Artifacts

Produced during runtime

Name: pytorch_tensorrt__3.10_cu129_x86_64
Size: 3.15 MB
Digest: sha256:c8a5d7a9cd80ca33dca8177217a670ab4d92dd7d76dc0c922c715f796f1eb286