
add int8 quantization support for llm models #114

Triggered via pull request February 20, 2026 00:09
Status: Skipped
Total duration: 1s
Artifacts: none

release-linux-aarch64.yml

on: pull_request
generate-matrix / generate
generate-release-tarball-matrix (0s)
generate-release-wheel-matrix (0s)
Matrix: Release aarch64 torch-tensorrt cxx11 tarball artifacts (waiting for pending jobs)
Matrix: Release aarch64 torch-tensorrt wheel artifacts (waiting for pending jobs)
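The job graph above follows a common GitHub Actions pattern: one job generates a JSON build matrix, and downstream matrix jobs consume it via `fromJSON`, which is why the tarball and wheel matrix jobs wait for the generate jobs to finish. A minimal sketch of that pattern is below; the job names, steps, and matrix contents are illustrative assumptions, not the actual contents of release-linux-aarch64.yml:

```yaml
# Hypothetical sketch of the generate-matrix -> matrix-jobs pattern;
# NOT the real release-linux-aarch64.yml.
name: release-linux-aarch64

on: pull_request

jobs:
  generate-matrix:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.gen.outputs.matrix }}
    steps:
      - id: gen
        # Emit a JSON matrix; the real workflow presumably computes this
        # from the supported Python/CUDA combinations for aarch64.
        run: echo 'matrix={"python":["3.10","3.11"]}' >> "$GITHUB_OUTPUT"

  build-wheels:
    needs: generate-matrix  # waits for the generate job, as in the graph above
    runs-on: ubuntu-latest
    strategy:
      matrix: ${{ fromJSON(needs.generate-matrix.outputs.matrix) }}
    steps:
      - run: echo "Building wheel for Python ${{ matrix.python }}"
```

Because this run's status is Skipped, the generate jobs completed in 0s and the dependent matrix jobs never left the pending state.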