add int8 quantization support for llm models #1167
Triggered via pull request on February 20, 2026 00:09
Status: Cancelled
Total duration: 8m 40s
Artifacts: –
build-test-windows_rtx.yml (on: pull_request)
generate-matrix / generate (8s)
Matrix: build
Matrix: L0-dynamo-converter-tests (waiting for pending jobs)
Matrix: L0-dynamo-core-tests (waiting for pending jobs)
Matrix: L0-py-core-tests (waiting for pending jobs)
Matrix: L1-dynamo-compile-tests (waiting for pending jobs)
Matrix: L1-dynamo-core-tests (waiting for pending jobs)
Matrix: L1-torch-compile-tests (waiting for pending jobs)
Matrix: L2-dynamo-compile-tests (waiting for pending jobs)
Matrix: L2-dynamo-core-tests (waiting for pending jobs)
Matrix: L2-dynamo-plugin-tests (waiting for pending jobs)
Matrix: L2-torch-compile-tests (waiting for pending jobs)
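The matrix jobs above are fanned out from the output of the generate-matrix / generate step. A minimal sketch of that common GitHub Actions pattern follows; the step id, output key, and matrix contents here are illustrative assumptions, not the actual contents of build-test-windows_rtx.yml:

    jobs:
      generate-matrix:
        runs-on: ubuntu-latest
        outputs:
          # JSON string consumed by the downstream matrix jobs.
          matrix: ${{ steps.generate.outputs.matrix }}
        steps:
          - id: generate
            # Hypothetical step: emit the build matrix as JSON.
            run: echo 'matrix={"python":["3.10"],"cuda":["13.0"]}' >> "$GITHUB_OUTPUT"

      build:
        needs: generate-matrix
        strategy:
          # Each matrix entry becomes one "Matrix: build" job in the run view;
          # downstream test jobs stay "waiting for pending jobs" until it finishes.
          matrix: ${{ fromJson(needs.generate-matrix.outputs.matrix) }}
        runs-on: windows-latest
        steps:
          - run: echo "Building for python=${{ matrix.python }} cuda=${{ matrix.cuda }}"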
Annotations
3 errors
RTX - Build Windows torch-tensorrt whl package / build-wheel-py3_10-cuda13_0
  Canceling since a higher priority waiting request for RTX - Build and test Windows wheels-4086-tensorrt-rtx--false- exists

RTX - Build and test Windows wheels
  Canceling since a higher priority waiting request for RTX - Build and test Windows wheels-4086-tensorrt-rtx--false- exists

RTX - Build and test Windows wheels
  Canceling since a higher priority waiting request for RTX - Build and test Windows wheels-4086-tensorrt-rtx--false- exists
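All three errors come from GitHub Actions concurrency control: a newer push to the same pull request queued a higher-priority run in the same concurrency group, so the in-progress jobs were cancelled rather than failed. A minimal sketch of the kind of workflow-level setting that produces this message, assuming a group key built from the run name and PR number (the real key, "RTX - Build and test Windows wheels-4086-tensorrt-rtx--false-", appears verbatim in the messages above):

    # Hypothetical concurrency block; the actual workflow derives its group
    # key from the run name, the PR number (4086), and feature flags.
    concurrency:
      group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
      cancel-in-progress: true

With cancel-in-progress set, re-running the workflow on the latest commit of the PR is expected to supersede and cancel runs like this one.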