
fix return compatibility with 0.6.6 #2871

Closed
aleozlx wants to merge 2 commits into flashinfer-ai:main from aleozlx:fix_rmsnorm_quant_compat

Conversation

@aleozlx
Collaborator

@aleozlx aleozlx commented Mar 24, 2026

📌 Description

🔍 Related Issues

#2832

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Bug Fixes

    • Fixed a quantization routine so it now correctly returns the output tensor instead of returning nothing.
  • Documentation

    • Updated the function documentation to include a Returns section describing the returned tensor.

@aleozlx aleozlx requested a review from yzh119 as a code owner March 24, 2026 02:54
@aleozlx aleozlx added the run-ci label Mar 24, 2026
@aleozlx aleozlx added the v0.6.7 release blocker label for 0.6.7 label Mar 24, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a backward compatibility issue with the rmsnorm_quant function. It modifies the function's signature and implementation to explicitly return a torch.Tensor, aligning its behavior with previous versions and ensuring smooth integration for users relying on its output.

Highlights

  • rmsnorm_quant return type: The return type annotation for the rmsnorm_quant function was changed from None to torch.Tensor.
  • rmsnorm_quant implementation: An explicit return out statement was added to the rmsnorm_quant function to ensure it returns the output tensor, addressing backward compatibility with v0.6.6.
  • Documentation: The docstring for the rmsnorm_quant function was updated to include a Returns section, documenting the output tensor.
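
A minimal sketch of the shape of the fix described in the highlights above. The parameter list, defaults, and the kernel call are illustrative assumptions, not the actual FlashInfer signature:

```python
import torch

def rmsnorm_quant(
    out: torch.Tensor,     # illustrative parameter list; the real op's
    input: torch.Tensor,   # signature lives in flashinfer/norm/__init__.py
    weight: torch.Tensor,
    scale: torch.Tensor,
    eps: float = 1e-6,
) -> torch.Tensor:  # previously annotated -> None
    """Apply RMSNorm to ``input`` and quantize the result into ``out``.

    Returns
    -------
    torch.Tensor
        The quantized output tensor (the same tensor passed in as ``out``).
    """
    # ... the existing in-place kernel launch writes the result into `out` ...
    return out  # explicit return restored for compatibility with v0.6.6 callers
```

With this sketch, callers that previously wrote `y = rmsnorm_quant(out, x, w, s)` keep working, while callers that ignore the return value are unaffected.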

@aleozlx
Collaborator Author

aleozlx commented Mar 24, 2026

/bot run

@coderabbitai
Contributor

coderabbitai Bot commented Mar 24, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 70fb89e0-e5b8-4aa4-856a-48eea0db93fa

📥 Commits

Reviewing files that changed from the base of the PR and between 50a5ca1 and 72f234d.

📒 Files selected for processing (1)
  • flashinfer/norm/__init__.py

📝 Walkthrough

The rmsnorm_quant function in flashinfer/norm/__init__.py now returns the output tensor (torch.Tensor) instead of None; the annotation, implementation (added return out), and docstring Returns section were updated accordingly.

Changes

Cohort: RMSNorm Quantization Return Type Fix
File(s): flashinfer/norm/__init__.py
Summary: Changed rmsnorm_quant and _rmsnorm_quant_fake signatures from -> None to -> torch.Tensor, added return out in implementations, and updated docstring with a Returns section.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

Suggested reviewers

  • yzh119
  • yyihuang
  • yongwww
  • cyx-6
  • bkryu

Poem

🐰 A tensor once wandered, nowhere to go,
I nudged it gently, "Return!" — now it shows.
From doc to code, a small hop of delight,
The function gives back what keeps math upright. 🥕

🚥 Pre-merge checks | ❌ 3

❌ Failed checks (2 warnings, 1 inconclusive)

  • Description check (⚠️ Warning): The PR description is incomplete; the 📌 Description section contains only a template placeholder with no actual explanation of changes, and the Reviewer Notes section is also empty. Resolution: fill in the Description section with a brief explanation of why the return type was changed and how it impacts users, and add any relevant notes for reviewers.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 50.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check (❓ Inconclusive): The title "fix return compatibility with 0.6.6" is vague and does not clearly specify what return type change or compatibility issue is being fixed. Resolution: make the title more specific, e.g., "Fix rmsnorm_quant return type to torch.Tensor for 0.6.6 compatibility".

@flashinfer-bot
Collaborator

GitLab MR !453 has been created, and the CI pipeline #46838255 is currently running. I'll report back once the pipeline job completes.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment

Code Review

The rmsnorm_quant function in flashinfer/norm/__init__.py was updated to explicitly return a torch.Tensor. This change involved modifying the function's return type annotation from None to torch.Tensor, adding a corresponding 'Returns' section to its docstring, and inserting a return out statement at the end of the function body for backwards compatibility with the v0.6.6 API.

@aleozlx
Collaborator Author

aleozlx commented Mar 24, 2026

@flashinfer-bot run

@aleozlx aleozlx added run-ci and removed run-ci labels Mar 24, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `flashinfer/norm/__init__.py`:
- Line 172: _rmsnorm_quant_fake currently returns None while rmsnorm_quant returns a torch.Tensor, which breaks torch.compile/FakeTensor shape propagation. Modify _rmsnorm_quant_fake to return a torch.Tensor with the same shape, dtype, and device contract as rmsnorm_quant (e.g., construct and return an uninitialized or zeros tensor matching the input/output shape) so the fake op mirrors the real op's return type and enables shape/dtype/device propagation for torch.compile; update the function body of _rmsnorm_quant_fake to produce and return that tensor accordingly.
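
A sketch of what the suggested fix could look like, reusing the illustrative parameter list from the rmsnorm_quant sketch earlier; this is not the actual FlashInfer code:

```python
import torch

def _rmsnorm_quant_fake(
    out: torch.Tensor,
    input: torch.Tensor,
    weight: torch.Tensor,
    scale: torch.Tensor,
    eps: float = 1e-6,
) -> torch.Tensor:
    # A fake (meta) implementation must not launch the kernel; it only has to
    # describe the output's shape/dtype/device so torch.compile / FakeTensor
    # tracing can propagate metadata. Returning a tensor matching `out` keeps
    # the fake op's return type in sync with the real op.
    return torch.empty_like(out)
```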

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 09a67be5-806a-4034-ba03-b53b49f81c0a

📥 Commits

Reviewing files that changed from the base of the PR and between 1de1b97 and 50a5ca1.

📒 Files selected for processing (1)
  • flashinfer/norm/__init__.py

Comment thread: flashinfer/norm/__init__.py
@aleozlx
Collaborator Author

aleozlx commented Mar 24, 2026

This was actually a false positive: 0.6.6 merely had a type annotation error, with no actual change in the return value.
https://github.com/flashinfer-ai/flashinfer/blob/v0.6.6/flashinfer/norm.py#L130-L132
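
For illustration only (not the actual v0.6.6 source behind the link above), an annotation-only error means the body already returned the tensor and only the type hint was wrong, so callers relying on the return value never broke:

```python
import torch

def rmsnorm_quant_v066_style(out: torch.Tensor) -> None:  # hint wrongly says None
    # ... kernel writes the result into `out` in place ...
    return out  # the tensor was already returned despite the -> None annotation
```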

closing

@aleozlx aleozlx closed this Mar 24, 2026
