Optimize FP8 Triton kernels for better performance #1066
Open

yurekami wants to merge 1 commit into deepseek-ai:main from
Conversation
This PR addresses Issue deepseek-ai#1052 with the following improvements:

1. act_quant_kernel:
- Added boundary masking for partial blocks to prevent out-of-bounds access
- Added n_elements parameter for proper boundary handling
- Extracted FP8_E4M3_MAX constant for clarity

2. fp8_gemm_kernel:
- Extended autotuning configs with larger block sizes (128x256)
- Added dynamic num_warps calculation based on block dimensions
- Added M to autotune key for better config selection
- Introduced explicit stride parameters for flexible memory layouts (a generic sketch of this pattern follows below)
- Improved code documentation

3. Autotuning improvements:
- Expanded block_m options: [16, 32, 64] -> [16, 32, 64, 128]
- Expanded block_n options: [32, 64, 128] -> [32, 64, 128, 256]
- Reduced num_stages options: [3, 4, 5, 6] -> [3, 4, 5] for faster tuning
- Added tile size limit (16384) to avoid excessive register pressure

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
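Since the "explicit stride parameters" item is the one structural change to the kernel interface, here is a generic sketch of that pattern in Triton. It is a plain tiled-GEMM skeleton, not this PR's diff: the kernel name, block sizes, and the omission of FP8 scale handling are all illustrative assumptions.

```python
import triton
import triton.language as tl

@triton.jit
def gemm_with_strides(  # hypothetical name; the PR modifies fp8_gemm_kernel
        a_ptr, b_ptr, c_ptr, M, N, K,
        stride_am, stride_ak,  # explicit strides let the same kernel handle
        stride_bk, stride_bn,  # row-major, column-major, or sliced inputs
        stride_cm, stride_cn,
        BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr, BLOCK_K: tl.constexpr):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    offs_k = tl.arange(0, BLOCK_K)
    # Pointer arithmetic goes through the stride parameters instead of
    # assuming a contiguous layout.
    a_ptrs = a_ptr + offs_m[:, None] * stride_am + offs_k[None, :] * stride_ak
    b_ptrs = b_ptr + offs_k[:, None] * stride_bk + offs_n[None, :] * stride_bn
    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for k in range(0, K, BLOCK_K):
        a_mask = (offs_m[:, None] < M) & (offs_k[None, :] + k < K)
        b_mask = (offs_k[:, None] + k < K) & (offs_n[None, :] < N)
        a = tl.load(a_ptrs, mask=a_mask, other=0.0)
        b = tl.load(b_ptrs, mask=b_mask, other=0.0)
        acc = tl.dot(a, b, acc)
        a_ptrs += BLOCK_K * stride_ak  # advance along K via the K-stride
        b_ptrs += BLOCK_K * stride_bk
    c_ptrs = c_ptr + offs_m[:, None] * stride_cm + offs_n[None, :] * stride_cn
    c_mask = (offs_m[:, None] < M) & (offs_n[None, :] < N)
    tl.store(c_ptrs, acc.to(c_ptr.dtype.element_ty), mask=c_mask)
```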

Summary
This PR addresses Issue #1052 by optimizing the Triton FP8 kernels for improved performance and correctness.
Changes
1. act_quant_kernel improvements:
- Added boundary masking for partial blocks to prevent out-of-bounds access
- Added n_elements parameter for proper boundary handling
- Extracted FP8_E4M3_MAX constant (448.0) for code clarity
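A minimal sketch of this masking pattern, assuming the kernel keeps its per-block absmax quantization; the signature and block handling in the actual PR may differ:

```python
import triton
import triton.language as tl

FP8_E4M3_MAX = 448.0  # largest finite float8_e4m3 value, extracted as a constant

@triton.jit
def act_quant_kernel(x_ptr, y_ptr, s_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offs = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offs < n_elements  # guard the final partial block
    x = tl.load(x_ptr + offs, mask=mask, other=0.0).to(tl.float32)
    # Per-block absmax scale; masked-off lanes load 0 and cannot win the max.
    s = tl.max(tl.abs(x)) / FP8_E4M3_MAX
    y = (x / s).to(y_ptr.dtype.element_ty)
    tl.store(y_ptr + offs, y, mask=mask)  # mask the store as well
    tl.store(s_ptr + pid, s)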
2. fp8_gemm_kernel optimizations:
- Added dynamic num_warps calculation based on block dimensions for optimal GPU occupancy
- Added M dimension to autotune key for better configuration selection
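The exact num_warps formula isn't visible in this excerpt; a heuristic of this general shape (the threshold values are assumptions) would implement the idea:

```python
def pick_num_warps(block_m: int, block_n: int) -> int:
    # Hypothetical thresholds: scale warp count with tile area so large
    # tiles expose more parallelism while small tiles avoid wasted warps.
    tile_elems = block_m * block_n
    if tile_elems >= 8192:
        return 8
    if tile_elems >= 2048:
        return 4
    return 2

# Adding M to the autotune key makes Triton re-select the best config when
# the activation row count changes, not only when N or K do:
#
#   @triton.autotune(configs=fp8_gemm_configs, key=["M", "N", "K"])
```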
3. Autotuning configuration improvements:
- Expanded block_m options: [16, 32, 64] → [16, 32, 64, 128]
- Expanded block_n options: [32, 64, 128] → [32, 64, 128, 256]
- Reduced num_stages options: [3, 4, 5, 6] → [3, 4, 5] for faster tuning
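Combining the expanded options with the 16384 tile-size cap from the commit message, the config construction would look roughly like this (the BLOCK_SIZE_* dict keys and a fixed BLOCK_SIZE_K = 128 are assumptions based on the existing kernel's style):

```python
import triton

fp8_gemm_configs = [
    triton.Config(
        {"BLOCK_SIZE_M": block_m, "BLOCK_SIZE_N": block_n, "BLOCK_SIZE_K": 128},
        num_stages=num_stages,
        # Stand-in for the PR's dynamic num_warps calculation:
        num_warps=8 if block_m * block_n >= 8192 else 4,
    )
    for block_m in [16, 32, 64, 128]
    for block_n in [32, 64, 128, 256]
    for num_stages in [3, 4, 5]
    if block_m * block_n <= 16384  # tile-size cap against register pressure
]
```

As written, the cap prunes only the largest pair (128×256), leaving 15 block combinations and 45 candidate configs for the tuner.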
Expected Benefits

Test plan
- Verified the file compiles (python3 -m py_compile inference/kernel.py)

Related Issues
Closes #1052
🤖 Generated with Claude Code