add int8 quantization support for llm models #11339

Triggered via pull request: February 20, 2026, 00:18
Status: Success
Total duration: 43s
Artifacts: –

linter.yml (on: pull_request)

Jobs:
- C++ Linting: 29s
- Python Linting: 37s
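
A workflow producing this run could look roughly like the sketch below. Only the filename (`linter.yml`), the `pull_request` trigger, and the two job names are taken from the run summary above; everything else (the runner, the checkout action, and the specific lint tools `clang-format` and `ruff`) is an assumption for illustration.

```yaml
# Hypothetical sketch of linter.yml; the concrete lint steps are assumptions,
# not the repository's actual configuration.
name: Linter
on: pull_request

jobs:
  cpp-lint:
    name: C++ Linting
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumed tool: fail if any tracked C++ file is not clang-format clean.
      - name: Run clang-format check
        run: clang-format --dry-run --Werror $(git ls-files '*.cc' '*.h')

  python-lint:
    name: Python Linting
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumed tool: run ruff over the whole repository.
      - name: Run ruff
        run: |
          pip install ruff
          ruff check .
```

Keeping the two lint jobs separate lets them run in parallel, which matches the short total duration (43s) against job times of 29s and 37s.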