
add int8 quantization support for llm models #1167

Triggered via pull request: February 20, 2026 00:09
Status: Cancelled
Total duration: 8m 40s

build-test-windows_rtx.yml

on: pull_request
generate-matrix / generate (8s)
filter-matrix (9s)
substitute-runner (3s)
Matrix: build
Matrix: L0-dynamo-converter-tests (waiting for pending jobs)
Matrix: L0-dynamo-core-tests (waiting for pending jobs)
Matrix: L0-py-core-tests (waiting for pending jobs)
Matrix: L1-dynamo-compile-tests (waiting for pending jobs)
Matrix: L1-dynamo-core-tests (waiting for pending jobs)
Matrix: L1-torch-compile-tests (waiting for pending jobs)
Matrix: L2-dynamo-compile-tests (waiting for pending jobs)
Matrix: L2-dynamo-core-tests (waiting for pending jobs)
Matrix: L2-dynamo-plugin-tests (waiting for pending jobs)
Matrix: L2-torch-compile-tests (waiting for pending jobs)
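
The job graph above is a generate → filter → substitute chain that feeds a runner/config matrix into the build job, with the L0/L1/L2 test jobs gated on pending builds. Below is a minimal sketch of how such a pipeline is typically wired in GitHub Actions, collapsing the filter-matrix and substitute-runner stages for brevity; the job names, matrix contents, and run steps are illustrative assumptions, not the contents of the repository's actual build-test-windows_rtx.yml:

```yaml
name: RTX - Build and test Windows wheels
on: pull_request

jobs:
  generate-matrix:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.generate.outputs.matrix }}
    steps:
      - id: generate
        # Hypothetical: emit a JSON matrix of Python/CUDA combinations
        # (the real workflow generates, filters, and rewrites runners).
        run: echo 'matrix={"python-version":["3.10"],"cuda":["13.0"]}' >> "$GITHUB_OUTPUT"

  build:
    needs: generate-matrix
    strategy:
      fail-fast: false
      # Fan out one build job per entry in the generated matrix,
      # e.g. build-wheel-py3_10-cuda13_0 as seen in the annotations.
      matrix: ${{ fromJSON(needs.generate-matrix.outputs.matrix) }}
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build torch-tensorrt wheel
        run: python -m pip wheel . -w dist/

  L0-dynamo-core-tests:
    # Test matrices stay "waiting for pending jobs" until builds finish.
    needs: build
    runs-on: windows-latest
    steps:
      # Hypothetical test invocation; the real test selection differs.
      - run: python -m pytest tests/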

Annotations

3 errors
RTX - Build Windows torch-tensorrt whl package / build-wheel-py3_10-cuda13_0:
Canceling since a higher priority waiting request for RTX - Build and test Windows wheels-4086-tensorrt-rtx--false- exists

RTX - Build and test Windows wheels:
Canceling since a higher priority waiting request for RTX - Build and test Windows wheels-4086-tensorrt-rtx--false- exists

RTX - Build and test Windows wheels:
Canceling since a higher priority waiting request for RTX - Build and test Windows wheels-4086-tensorrt-rtx--false- exists
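
All three errors are GitHub Actions concurrency cancellations: a newer queued run claimed the same concurrency group, so this run was cancelled rather than allowed to finish. A minimal sketch of the kind of concurrency block that produces this message; the group expression here is an assumption, not the workflow's actual definition:

```yaml
concurrency:
  # Hypothetical group key scoping one slot per workflow and pull request;
  # the real key for this run resolved to
  # "RTX - Build and test Windows wheels-4086-tensorrt-rtx--false-".
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}
  cancel-in-progress: true
```

With cancel-in-progress: true, pushing a new commit to the same pull request cancels the still-running build for the previous commit, which matches the Cancelled status and 8m 40s duration above.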