
add int8 quantization support for llm models #2172

Triggered via pull request on February 20, 2026 at 00:09
Status: Cancelled
Total duration: 10m 42s

Workflow: build-test-linux-aarch64.yml (on: pull_request)
Jobs:
  generate-matrix / generate: 5s
  filter-matrix: 9s
  Matrix: build

Annotations: 6 errors
Build SBSA torch-tensorrt whl package / build-wheel-3.10-cu129-cuda-aarch64-aarch64-false-false
  Canceling since a higher priority waiting request for Build and test Linux aarch64 wheels-4086--false- exists
Build SBSA torch-tensorrt whl package / build-wheel-3.10-cu130-cuda-aarch64-aarch64-false-false
  Canceling since a higher priority waiting request for Build and test Linux aarch64 wheels-4086--false- exists
Build and test Linux aarch64 wheels
  Canceling since a higher priority waiting request for Build and test Linux aarch64 wheels-4086--false- exists
Build and test Linux aarch64 wheels
  Canceling since a higher priority waiting request for Build and test Linux aarch64 wheels-4086--false- exists
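
All of these annotations carry GitHub Actions' standard concurrency-group cancellation message: a newer, higher-priority run joined the same concurrency group ("Build and test Linux aarch64 wheels-4086--false-"), most likely triggered by a fresh push to the PR, so this run was canceled rather than run to completion. Below is a minimal sketch of the kind of concurrency block that produces this behavior; it is an illustration under assumptions, not the actual contents of build-test-linux-aarch64.yml, and the group expression is an assumed key, not the repository's real one.

    # Hypothetical sketch: a concurrency block of this shape yields the
    # "Canceling since a higher priority waiting request ... exists"
    # annotations shown above.
    concurrency:
      # Assumed group key: workflow name plus an event-specific identifier,
      # loosely mirroring the observed group
      # "Build and test Linux aarch64 wheels-4086--false-".
      group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
      # Cancel the in-progress run whenever a newer request joins the group.
      cancel-in-progress: true

With cancel-in-progress: true, only the newest run per group survives, which is why every job in this run reports the same cancellation error rather than a build failure.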