Add int8 quantization support for LLM models #2172
Triggered via pull request on February 20, 2026 at 00:09
Status: Cancelled
Total duration: 10m 42s
Artifacts: –
Workflow file: build-test-linux-aarch64.yml (on: pull_request)
Annotations: 6 errors
Build SBSA torch-tensorrt whl package / build-wheel-3.10-cu129-cuda-aarch64-aarch64-false-false
Canceling since a higher priority waiting request for Build and test Linux aarch64 wheels-4086--false- exists

Build SBSA torch-tensorrt whl package / build-wheel-3.10-cu129-cuda-aarch64-aarch64-false-false
The operation was canceled.

Build SBSA torch-tensorrt whl package / build-wheel-3.10-cu130-cuda-aarch64-aarch64-false-false
Canceling since a higher priority waiting request for Build and test Linux aarch64 wheels-4086--false- exists

Build SBSA torch-tensorrt whl package / build-wheel-3.10-cu130-cuda-aarch64-aarch64-false-false
The operation was canceled.

Build and test Linux aarch64 wheels
Canceling since a higher priority waiting request for Build and test Linux aarch64 wheels-4086--false- exists

Build and test Linux aarch64 wheels
Canceling since a higher priority waiting request for Build and test Linux aarch64 wheels-4086--false- exists
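
All six errors are the same failure mode: a GitHub Actions concurrency-group cancellation. A newer run entered the group "Build and test Linux aarch64 wheels-4086--false-" (4086 is presumably the pull request number), so the queued and in-progress jobs of this run were cancelled as superseded. Below is a minimal sketch of the kind of concurrency block that produces this behavior; it is an assumption reconstructed from the group name in the log, not the actual contents of build-test-linux-aarch64.yml.

# Hypothetical sketch only; the real build-test-linux-aarch64.yml may compose
# its group key differently. The trailing "--false-" in the logged group name
# suggests additional (here empty or false) values are appended to the key.
concurrency:
  # One group per workflow and pull request: a newer push to the same PR
  # starts a new run in this group and cancels the older one.
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}--false-
  cancel-in-progress: true  # also cancel runs that have already started

With cancel-in-progress: true, GitHub cancels both queued and running jobs in the group when a newer request arrives, which matches the "higher priority waiting request" messages above.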