add int8 quantization support for llm models #2172

Annotations

4 errors

Build SBSA torch-tensorrt whl package / build-wheel-3.10-cu130-cuda-aarch64-aarch64-false-false: cancelled Feb 20, 2026 in 10m 14s