
add int8 quantization support for llm models #1167
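For context, int8 quantization typically maps float weights onto 8-bit integers using a scale factor, trading a small amount of precision for lower memory use and faster inference. The sketch below is a minimal, hypothetical illustration of symmetric per-tensor int8 quantization in Python; it is not taken from this PR, and the function names are invented for the example.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

# Example: quantize a small weight matrix and check the round-trip error.
weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
recovered = dequantize_int8(q, scale)
print("max abs error:", np.max(np.abs(weights - recovered)))
```

Real LLM quantization schemes usually refine this with per-channel or group-wise scales and calibration data, but the round-trip structure (quantize, store int8 plus scale, dequantize on use) is the same.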

Annotations

2 errors

generate-matrix / generate: succeeded Feb 20, 2026 in 8s