qwen3.5-27B gguf not supported? #19766
lanyuflying started this conversation in General
Replies: 0 comments
When I use the sglang qwen3.5 branch to deploy the qwen3.5 27B 4-bit GGUF model with ktransformers, it reports that qwen3.5 GGUF is not supported.
Are there any plans to support CPU/GPU inference with the GGUF format?
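"Not supported" errors like this typically come from the loader checking the `general.architecture` string in the GGUF metadata against a list of architectures it knows. As a minimal sketch (not ktransformers code, just a reader for the documented GGUF header layout), you can inspect that field yourself to see what architecture name the file declares:

```python
import struct

def parse_gguf_header(data: bytes) -> dict:
    """Read the GGUF header and, if present as the first metadata key,
    the general.architecture string (value type 8 = string)."""
    # GGUF layout: magic "GGUF", uint32 version, uint64 tensor_count,
    # uint64 metadata_kv_count, all little-endian (24-byte header).
    if data[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", data, 4)
    info = {"version": version, "tensor_count": tensor_count, "kv_count": kv_count}
    off = 24
    # Each metadata key is a uint64 length followed by UTF-8 bytes,
    # then a uint32 value type. In practice general.architecture is
    # usually the first key, so we only look at that one here.
    key_len = struct.unpack_from("<Q", data, off)[0]
    off += 8
    key = data[off:off + key_len].decode("utf-8")
    off += key_len
    vtype = struct.unpack_from("<I", data, off)[0]
    off += 4
    if key == "general.architecture" and vtype == 8:  # 8 = string
        slen = struct.unpack_from("<Q", data, off)[0]
        off += 8
        info["architecture"] = data[off:off + slen].decode("utf-8")
    return info
```

If the architecture string the file reports is not in the loader's supported set, you get exactly this kind of "not supported" failure regardless of the quantization used.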