Quant fallback to 8w per token + other quant improvements for multimodal #154
base: main
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
```diff
     fallback_linear_config_key = None
 else:
-    assert qlinear_group_size % 2 == 0, "Linear quantization group size must be a multiple of 2."
+    assert qlinear_group_size % 2 == 0, f"Linear quantization group size must be a multiple of 2, got {qlinear_group_size}."
```
Why is the group size a multiple of 2? Shouldn't it be a multiple of 32?
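For context, a minimal sketch of group-wise weight quantization, showing where divisibility constraints on the group size come from (the helper name and symmetric int4 range are illustrative assumptions, not this repo's implementation):

```python
import torch

def groupwise_int4_scales(weight: torch.Tensor, group_size: int) -> torch.Tensor:
    # Group-wise quantization splits each row (output channel) into contiguous
    # groups of `group_size` input channels and computes one scale per group,
    # so in_features must divide evenly by the group size.
    out_features, in_features = weight.shape
    assert in_features % group_size == 0, (
        f"in_features ({in_features}) must be a multiple of group_size ({group_size})"
    )
    groups = weight.reshape(out_features, in_features // group_size, group_size)
    # Symmetric int4: map each group's max-magnitude value to 7 (2**3 - 1).
    return groups.abs().amax(dim=-1).clamp(min=1e-8) / 7.0

scales = groupwise_int4_scales(torch.randn(16, 64), group_size=32)  # shape (16, 2)
```

Whether 2 or 32 is the right lower bound depends on what the quantization kernels actually require, which the assert alone doesn't settle.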
force-pushed from 9238ad0 to 3b3ae50
force-pushed from 3b3ae50 to d2f238e
force-pushed from d2f238e to a872c53
```python
quantize_lm_head_kwargs = {
    "eager_model": eager_model.lm_head,
    "qlinear_config": qlinear_config,
}
```
Can you guard this by whether `eager_model` has an `lm_head`?
Sure. Curious though, is there a model without an `lm_head`?
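For illustration, a self-contained sketch of the suggested guard; `TinyLM` and the config key are hypothetical stand-ins, and the point is only the `getattr` check:

```python
import torch.nn as nn

class TinyLM(nn.Module):
    # Hypothetical model: `with_head` toggles whether an lm_head exists,
    # mimicking e.g. an encoder-only or vision-tower component.
    def __init__(self, with_head: bool):
        super().__init__()
        self.embed = nn.Embedding(100, 8)
        if with_head:
            self.lm_head = nn.Linear(8, 100)

qlinear_config = "8da4w"  # placeholder config key, not the repo's actual value

for eager_model in (TinyLM(True), TinyLM(False)):
    # Only build lm_head quantization kwargs when the model actually has one.
    if getattr(eager_model, "lm_head", None) is not None:
        quantize_lm_head_kwargs = {
            "eager_model": eager_model.lm_head,
            "qlinear_config": qlinear_config,
        }
    else:
        quantize_lm_head_kwargs = None
    print("has lm_head:", quantize_lm_head_kwargs is not None)
```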
Big quantization improvements for Gemma3 4B vision (7.4 GB -> 3.0 GB), with the 8w per-token fallback now covering layers that group-wise quantization could not (e.g. fc2 layers).
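A minimal sketch of the fallback the PR title describes: prefer low-bit group-wise weight quantization, and fall back to 8-bit weights with per-token dynamic activation quantization when a layer's input dimension doesn't divide into groups (the config keys here are illustrative, not the repo's):

```python
import torch.nn as nn

def pick_linear_config(linear: nn.Linear, group_size: int = 32) -> str:
    # Layers whose in_features is a multiple of the group size can use
    # group-wise low-bit weights; the rest fall back to 8-bit per-token.
    if linear.in_features % group_size == 0:
        return "4w_groupwise"
    return "8w_per_token"

print(pick_linear_config(nn.Linear(4096, 4096)))  # 4w_groupwise
print(pick_linear_config(nn.Linear(4304, 1152)))  # 8w_per_token (4304 % 32 != 0)
```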