[Performance Optimization] Rewrite GPU TopK kernel with radix-select and multi-tier sorting (#78409)
Merged
zhengshengning merged 17 commits into PaddlePaddle:develop on Apr 17, 2026
Conversation
Replace the existing GPU TopK implementation with a new radix-select based algorithm and a multi-tier sorting strategy for improved performance:
- Radix-select for efficient top-k selection
- Multi-block top-k (mbtopk) for large slices
- Single-block top-k (sbtopk) for smaller slices
- Three-tier sort dispatch with a fallback: Bitonic Sort (k<=32), WarpMergeSort (k<=128), BlockRadixSort (k<=4096), ArgsortKernel fallback (k>4096)
- Rename the old TopkKernel to TopkKernelOld for reference
Your PR was submitted successfully. Thank you for your contribution to the open-source project!
On LP64 Linux, int64_t is a typedef of long, not long long, so using int64_t caused a duplicate template specialization. Restore the original long long / unsigned long long types with NOLINT to suppress cpplint, and remove the duplicate int64_t specialization.
When k comes from a tensor, InferMeta may set the output dims to -1, leaving the metadata invalid. Calling Alloc before resolving the actual k value then triggers PreconditionNotMetError. Fix: move Alloc after the FromTensor() resize, and add an empty-output guard and empty-input handling to match the old kernel's behavior.
- Bitfield: add a HIP fallback using bit shifts instead of PTX asm (bfe.u32/u64 and bfi.b32/b64 are NVIDIA PTX only)
- getLaneId/getLaneMaskLe/getLaneMaskLt: use HIP intrinsics under __HIPCC__
- CubKeyType<bfloat16>: use hip_bfloat16 instead of __nv_bfloat16
- Replace cudaStream_t with gpuStream_t (Paddle's unified type alias)
gpuStream_t is defined in the phi:: namespace (via gpu_decls.h), so the helper functions in the anonymous namespace cannot name it without qualification. Add `using phi::gpuStream_t;` at the top of the anonymous namespace.
- Guard __syncwarp() with #if !defined(__HIPCC__), since HIP/DCU does not provide this intrinsic (AMD wavefronts execute in lockstep)
- Replace cudaMemsetAsync with hipMemsetAsync under PADDLE_WITH_HIP
- Use conservative defaults for regsPerMultiprocessor (65536) and maxBlocksPerMultiProcessor on HIP, since hipDeviceProp_t lacks these members
Jiang-Jia-Jun approved these changes on Apr 17, 2026
Commit d8f60c6 merged into PaddlePaddle:develop. 144 of 151 checks passed.
sneaxiy pushed a commit that referenced this pull request on Apr 17, 2026
…radix-select and multi-tier sorting #78409 (#78659)

* [TopK] Rewrite GPU TopK kernel with radix-select and multi-tier sorting
* Fix doLdg duplicate definition: restore long long types with NOLINT
* Fix TopkKernel crash: defer Alloc until after FromTensor resize
* Fix HIP/ROCm compilation errors in top_k_cuda_kernel.cu
* Fix Windows build: bring gpuStream_t into anonymous namespace
* Fix DCU/HIP compilation errors in top_k_cuda_kernel.cu
* rename tok_cuda_kernel
* fix, fix2 (follow-up fixes)
zhengshengning added a commit to zhengshengning/Paddle that referenced this pull request on Apr 17, 2026
…and multi-tier sorting (PaddlePaddle#78409)
This was referenced Apr 19, 2026
sneaxiy pushed a commit that referenced this pull request on Apr 21, 2026
…and multi-tier sorting (#78409) (#78703)
PR Category
Performance Optimization
PR Types
Performance
Description
This PR rewrites the GPU TopK operator with a high-performance radix-select based implementation, replacing the original one to improve performance on large-scale data.
Note: the TopK index order is now aligned with Torch, using the sorting algorithms below.
Main changes
Added
top_k_cuda_kernel.cu: a brand-new GPU TopK implementation containing the core algorithms described above.
Modified
top_k_kernel.cu: the original TopkKernel is renamed to TopkKernelOld and registered as topk_old, kept for performance comparison.
The new implementation uses the radix-select algorithm to pick the top-k elements efficiently without fully sorting the data; compared with the original implementation, it delivers significant speedups across a wide range of k values and data sizes.

Benchmark results (charts attached in the PR): H800, A100.
Does this introduce precision changes?
Yes.