Allow fp8 for non fast inference #3904

Open

Datta0 wants to merge 3 commits into unslothai:main from Datta0:fp8_enable

Conversation

@Datta0 (Collaborator) commented Jan 19, 2026

Tested on Qwen3-8B with load_in_fp8=True and fast_inference=False
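
As a minimal sketch of the configuration exercised here, assuming Unsloth's public FastLanguageModel.from_pretrained API; the checkpoint name and sequence length are illustrative:

```python
from unsloth import FastLanguageModel

# Load Qwen3-8B with FP8 weights but without the fast (vLLM) inference path.
# Prior to this PR, combining load_in_fp8 = True with fast_inference = False
# raised NotImplementedError in the loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name     = "Qwen/Qwen3-8B",  # illustrative checkpoint
    max_seq_length = 2048,             # illustrative value
    load_in_fp8    = True,
    fast_inference = False,
)
```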

@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @Datta0, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the flexibility of FP8 (8-bit floating point) quantization by decoupling its usage from the 'fast inference' mode. Previously, users were forced to enable fast inference when loading models with FP8. This change removes that constraint, enabling broader application of FP8 for potential memory and performance benefits across various inference setups, not just those optimized for speed.

Highlights

  • FP8 Quantization Flexibility: Removed a restriction that previously required fast_inference = True when load_in_fp8 was enabled, allowing FP8 quantization to be used in non-fast inference contexts.
  • Error Handling: The NotImplementedError that enforced the fast_inference requirement for FP8 loading has been removed from the model loader; a sketch of the removed guard pattern follows this list.
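
For context, a sketch of the kind of guard this removal refers to; this illustrates the pattern only and is not the exact code deleted from unsloth/models/loader.py:

```python
# Illustrative only: the shape of the restriction lifted by this PR.
if load_in_fp8 and not fast_inference:
    raise NotImplementedError(
        "load_in_fp8 = True currently requires fast_inference = True."
    )
```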

@gemini-code-assist bot left a comment

Code Review

This pull request aims to allow using load_in_fp8 without fast_inference. The removal of the check in FastLanguageModel.from_pretrained is a correct step towards this goal. However, the change is incomplete: similar checks remain elsewhere in the codebase and will still prevent this functionality from working as intended.

Specifically:

  1. An identical check is present in FastModel.from_pretrained in the same file (unsloth/models/loader.py, lines 843-848). This should also be removed for consistency across models.
  2. A more critical check exists in _get_fp8_mode_and_check_settings within unsloth/models/loader_utils.py (lines 364-367), which raises a ValueError if fast_inference is not True. This function is called by both FastLanguageModel.from_pretrained and FastModel.from_pretrained.

To fully enable load_in_fp8 for non-fast inference, these other checks also need to be removed. I recommend updating this pull request to remove them as well for a complete implementation; a sketch of the remaining guard is shown below.
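
For illustration, a hypothetical reconstruction of the remaining guard in _get_fp8_mode_and_check_settings, based only on the description above; the actual signature and code in unsloth/models/loader_utils.py may differ:

```python
def _get_fp8_mode_and_check_settings(load_in_fp8, fast_inference):
    # Hypothetical sketch: the signature and surrounding logic are assumptions.
    if load_in_fp8 and not fast_inference:
        # This is the check the review says must also be removed (or relaxed)
        # before load_in_fp8 can work without fast_inference.
        raise ValueError(
            "load_in_fp8 = True currently requires fast_inference = True."
        )
    # ... remaining FP8 mode / settings validation elided ...
```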

@Datta0 Datta0 changed the title Allow fp8 for non fast inference [WIP] Allow fp8 for non fast inference Jan 19, 2026
@Datta0 Datta0 marked this pull request as draft January 19, 2026 10:46
@Datta0 Datta0 marked this pull request as ready for review January 19, 2026 11:04
@Datta0 Datta0 changed the title [WIP] Allow fp8 for non fast inference Allow fp8 for non fast inference Jan 19, 2026

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 4b4c01a363
