feat: FP32 dtype output for BF16 matmuls (CUTLASS & cuDNN) #2644
bkryu merged 1 commit into flashinfer-ai:main
Conversation
No actionable comments were generated in the recent review. 🎉 📒 Files selected for processing (6); 🚧 2 files skipped from review as similar to previous changes.
📝 Walkthrough
Adds fp32 (torch.float32) as a supported output dtype for BF16 GEMM across the CUTLASS runtime, Python validation/JIT, benchmarks, and tests; updates dispatch, dtype mappings, docs, and tests to include fp32.
Sequence Diagram(s)
sequenceDiagram
participant User
participant PythonAPI as Python API (gemm_base)
participant JIT as JIT (gen_gemm_sm100_module_cutlass_bf16)
participant Native as Native/C++ (bf16_gemm_cutlass.cu)
participant GPU
User->>PythonAPI: Call mm_bf16/bmm_bf16(out_dtype=float32)
PythonAPI->>PythonAPI: validate out_dtype (accept fp32)
PythonAPI->>JIT: request kernel variant (dtype_list includes "float")
JIT->>Native: load/compile kernel variant (fp32 variant)
Native->>Native: dispatch runGemm<float> branch
Native->>GPU: execute CUTLASS kernel (fp32 output path)
GPU-->>Native: results
Native-->>PythonAPI: return tensor (fp32)
PythonAPI-->>User: deliver output tensor
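To make the user-facing path above concrete, a call might look like the sketch below. This is a minimal usage sketch, not the project's documented example: the top-level import path and the `out_dtype` keyword are assumed from the API and test names mentioned in this PR, and the exact signature of `mm_bf16` may differ.

```python
import torch
import flashinfer  # assumed import path for mm_bf16/bmm_bf16

# BF16 inputs on the GPU; accumulation happens in fp32 inside the kernel,
# and with this PR the result can now also be returned as fp32.
a = torch.randn(128, 256, dtype=torch.bfloat16, device="cuda")
b = torch.randn(256, 512, dtype=torch.bfloat16, device="cuda")

# Assumed call pattern: out_dtype selects bfloat16, float16, or (new) float32.
# Per this PR, fp32 output is supported on the CUTLASS and cuDNN backends, not TGV.
c = flashinfer.mm_bf16(a, b, out_dtype=torch.float32)
assert c.dtype == torch.float32 and c.shape == (128, 512)
```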
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (warning)
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces support for FP32 as an output data type for BF16 matrix multiplication operations (mm_bf16 and bmm_bf16) for the CUTLASS and cuDNN backends.
Activity
Code Review
This pull request adds support for FP32 output data type for BF16 matrix multiplications (mm_bf16 and bmm_bf16) for both CUTLASS and cuDNN backends. The changes are comprehensive, touching the C++ implementation, Python bindings, JIT compilation, documentation, and tests. The implementation correctly adds template instantiations and dispatch logic for float output in the CUTLASS backend, and updates the type mapping for cuDNN. The tests are properly extended to cover the new FP32 output functionality. The changes are well-executed and appear correct.
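To make the Python-side validation change concrete, the accepted output dtypes presumably widen from {bf16, fp16} to also include fp32. The snippet below is a hypothetical sketch of that kind of check; the helper name and structure are invented for illustration and do not mirror the actual gemm_base code.

```python
import torch

# Hypothetical illustration of the widened dtype validation described above.
_SUPPORTED_BF16_GEMM_OUT_DTYPES = (torch.bfloat16, torch.float16, torch.float32)

def _check_bf16_gemm_out_dtype(out_dtype: torch.dtype) -> None:
    """Raise if out_dtype is not an allowed output dtype for BF16 GEMM."""
    if out_dtype not in _SUPPORTED_BF16_GEMM_OUT_DTYPES:
        raise ValueError(
            f"Unsupported out_dtype {out_dtype}; expected one of "
            "torch.bfloat16, torch.float16, or torch.float32."
        )
```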
/bot run
[FAILED] Pipeline #44988476: 10/20 passed
@bkryu when you get the chance, could you rerun CI?
@bkryu bumping again
/bot run
[CANCELING] Pipeline #46375530: canceled
Signed-off-by: raayandhar <raayan.dhar@gmail.com>
00c9897 to 78061e4
/bot run
@nv-yunzheq, can you help review this PR?
  if out_dtype != torch.bfloat16:
      raise ValueError(
-         "You cannot provide an output dtype to the TGV backend. Use the CUTLASS backend instead."
+         "You cannot provide an output dtype to the TGV backend. Use the CUTLASS or cuDNN backend instead."
This seems to be a fix for previously incorrect information. Is that true?
Yes, we have tests for it.
The exception is that it doesn't work for SM103...
https://github.com/flashinfer-ai/flashinfer/blob/main/tests/gemm/test_mm_bf16.py#L51
Worth mentioning here, do you think?
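For reference, a test along these lines would parametrize the output dtype and skip the unsupported combination. This is a rough sketch of the pattern rather than the actual test at the linked line; the SM103 detection helper and skip condition are assumptions.

```python
import pytest
import torch
import flashinfer  # assumed import; the real test lives in tests/gemm/test_mm_bf16.py

def _is_sm103() -> bool:
    # Hypothetical helper: SM103 corresponds to compute capability (10, 3).
    return torch.cuda.get_device_capability() == (10, 3)

@pytest.mark.parametrize("out_dtype", [torch.bfloat16, torch.float16, torch.float32])
def test_mm_bf16_out_dtype(out_dtype):
    if out_dtype != torch.bfloat16 and _is_sm103():
        pytest.skip("non-bf16 out_dtype is currently unsupported on SM103")
    a = torch.randn(64, 128, dtype=torch.bfloat16, device="cuda")
    b = torch.randn(128, 32, dtype=torch.bfloat16, device="cuda")
    out = flashinfer.mm_bf16(a, b, out_dtype=out_dtype)  # assumed signature
    ref = (a.float() @ b.float()).to(out_dtype)
    torch.testing.assert_close(out, ref, rtol=2e-2, atol=2e-2)
```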
[FAILED] Pipeline #46376023: 13/20 passed
📌 Description
Adds support for FP32 dtype output for `mm_bf16` and `bmm_bf16` for the CUTLASS and cuDNN backends. I'm not familiar enough with the TGV kernel to know if / how to support it for that backend.
🔍 Related Issues
#2624
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- [X] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [X] I have installed the hooks with `pre-commit install`.
- [X] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.
🧪 Tests
- [X] Tests have been added or updated as needed.
- [X] All tests are passing (`unittest`, etc.).
Reviewer Notes
Summary by CodeRabbit
New Features
- BF16-based matrix ops (mm_bf16, bmm_bf16) now allow float32 outputs in addition to bfloat16 and float16; supported across applicable backends.
Tests
- Tests extended to cover float32 outputs for BF16/GEMM operations.
Documentation
- User-facing docs and validation messages updated to list bf16, fp16, fp32 as valid output dtypes.
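As a quick illustration of the batched variant mentioned above, a call might look like the sketch below; the `bmm_bf16` import path and keyword are assumed to mirror `mm_bf16` and may differ from the actual API.

```python
import torch
import flashinfer  # assumed import path

# Batched BF16 matmul returning an fp32 result (assumed call pattern).
a = torch.randn(8, 64, 128, dtype=torch.bfloat16, device="cuda")
b = torch.randn(8, 128, 32, dtype=torch.bfloat16, device="cuda")
c = flashinfer.bmm_bf16(a, b, out_dtype=torch.float32)
assert c.shape == (8, 64, 32) and c.dtype == torch.float32
```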