Sneak Peek: NexaQuant models! #418
iwr-redmond started this conversation in Show and tell
What if you could run a fast quantized LLM and get performance similar to that of an unquantized model requiring four times the resources? That's exactly what the new NexaQuant models deliver.
Distilled from the popular DeepSeek R1, the NexaQuant models offer reasoning capabilities close to, and in some cases even exceeding, those of their unquantized R1-distill sources.


Get started now with Nexa SDK:
DeepSeek R1 Distill Llama 8B:
nexa run DeepSeek-R1-Distill-Llama-8B-NexaQuant:q4_0
DeepSeek R1 Distill Qwen 1.5B:
nexa run DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant:q4_0
* Disclaimer: I'm not affiliated with Nexa AI. Information presented is unofficial and subject to change without notice.