Conversation
Summary of Changes

Hello @Datta0, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the flexibility of FP8 (8-bit floating point) quantization by decoupling it from the fast inference mode. Previously, users were forced to enable fast inference when loading models with FP8. This change removes that constraint, enabling broader use of FP8 for potential memory and performance benefits across inference setups, not just those optimized for speed.
Code Review
This pull request aims to allow using `load_in_fp8` without `fast_inference`. The removal of the check in `FastLanguageModel.from_pretrained` is a correct step towards this goal. However, the change is incomplete, as similar checks elsewhere in the codebase will still prevent this functionality from working as intended.
Specifically:
- An identical check is present in `FastModel.from_pretrained` in the same file (`unsloth/models/loader.py`, lines 843-848). This should also be removed for consistency across models.
- A more critical check exists in `_get_fp8_mode_and_check_settings` within `unsloth/models/loader_utils.py` (lines 364-367), which raises a `ValueError` if `fast_inference` is not `True`. This function is called by both `FastLanguageModel.from_pretrained` and `FastModel.from_pretrained`.
To fully enable `load_in_fp8` for non-fast inference, these other checks also need to be removed. I recommend updating this pull request to include the removal of these additional checks for a complete implementation.
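For orientation, here is a minimal sketch of the kind of guard the review describes and how it might be relaxed. This is not Unsloth's actual code: only the function name `_get_fp8_mode_and_check_settings`, the `load_in_fp8` / `fast_inference` flags, and the `ValueError` behaviour come from the review above; the signature, return value, and warning are illustrative assumptions.

```python
# Hypothetical sketch only; the real _get_fp8_mode_and_check_settings in
# unsloth/models/loader_utils.py has a different signature and more logic.
import warnings


def _get_fp8_mode_and_check_settings(load_in_fp8=False, fast_inference=False):
    if not load_in_fp8:
        return None  # FP8 not requested, nothing to check.

    # Current behaviour per the review: FP8 is rejected unless fast
    # inference is enabled.
    # if not fast_inference:
    #     raise ValueError("load_in_fp8 currently requires fast_inference=True.")

    # Relaxed behaviour this PR aims for: allow FP8 without fast inference,
    # optionally informing the user that the fast path will not be used.
    if not fast_inference:
        warnings.warn(
            "Loading FP8 weights without fast_inference; the fast "
            "inference path will not be used."
        )
    return "fp8"  # Assumed return value for illustration.
```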
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 4b4c01a363
Tested on Qwen3-8B with `load_in_fp8=True` and `fast_inference=False`.
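For reference, a minimal sketch of the configuration being tested, assuming the standard `FastLanguageModel.from_pretrained` entry point; the exact model identifier and extra arguments are assumptions and may differ from the tested setup.

```python
from unsloth import FastLanguageModel

# Sketch of the reported configuration: FP8 weights without the fast
# inference path. Model name and max_seq_length are assumptions, not
# taken from the PR.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-8B",
    load_in_fp8=True,       # FP8 quantized loading
    fast_inference=False,   # no longer required to be True after this PR
    max_seq_length=2048,
)
```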