[Feat] Add CountAndGather and MoeReduce operations for FusedMoe#180

Open
zhen8838 wants to merge 2 commits into tile-ai:main from zhen8838:feat/fused_moe

Conversation

@zhen8838
Collaborator

Description

This PR implements the Fused MoE routing and aggregation path in TileOPs:

Fixes #179 and #178.

  • count_and_gather migrated to the TileLang kernel path (with staged logic)
  • moe_reduce migrated to the TileLang kernel path and renamed at the Op level to avoid a generic naming conflict
  • Op/function/kernel call chain cleaned up to reduce redundant wrappers
  • Tests moved/extended with reference-based validation

Type of Change

  • Bug fix
  • New operator implementation
  • Performance improvement
  • Documentation update
  • Infrastructure/CI

Checklist

  • I have run pre-commit run --all-files and fixed all linting issues.
  • I have verified that my changes pass local unit tests.
  • (For new ops) I have added the corresponding Benchmark class in benchmarks/.
  • (For new ops) I have reported benchmark results in the tracking issue.

@zhen8838 zhen8838 requested review from a team and Copilot February 24, 2026 07:43
@gemini-code-assist
Contributor

Summary of Changes

Hello @zhen8838, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces core functionalities for Fused Mixture of Experts (MoE) within TileOPs, specifically focusing on the routing and aggregation phases. It integrates highly optimized CountAndGather and MoeReduce operations, which are backed by custom TileLang kernels for improved performance. The changes also include structural improvements to the operator call hierarchy and robust testing to validate the new implementations.

Highlights

  • New Operations Implemented: Implemented CountAndGather and MoeReduce operations, crucial components for Mixture of Experts (MoE) routing and aggregation, leveraging TileLang kernels for efficient execution.
  • TileLang Kernel Migration: Migrated the count_and_gather logic to a TileLang kernel with staged processing and the moe_reduce logic to a TileLang kernel, which was also renamed at the Op level to avoid naming conflicts.
  • Codebase Refinement: Cleaned up the Op/function/kernel call chain by reducing redundant wrappers, enhancing code clarity and maintainability.
  • Comprehensive Testing: Added and extended tests for both new operations, incorporating reference-based validation to ensure correctness and reliability.
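For reviewers unfamiliar with the routing step, here is a minimal NumPy sketch of the count-and-gather semantics described above (token counts per expert, then tokens gathered into expert-contiguous order). The function name and output layout are assumptions for illustration, not the actual TileOPs API:

```python
# Hedged reference sketch of count-and-gather routing; not the repo's kernel.
import numpy as np

def count_and_gather_ref(x, topk_ids, num_expert):
    """x: (num_seq, hidden); topk_ids: (num_seq, num_topk) expert ids per token."""
    flat_ids = topk_ids.reshape(-1)                       # one entry per (token, expert) pair
    counts = np.bincount(flat_ids, minlength=num_expert)  # tokens routed to each expert
    cu_counts = np.cumsum(counts)                         # cumulative offsets ("cuseq")
    order = np.argsort(flat_ids, kind="stable")           # expert-major ordering of pairs
    token_idx = order // topk_ids.shape[1]                # source token for each gathered row
    gathered = x[token_idx]                               # (num_seq * num_topk, hidden)
    return counts, cu_counts, gathered

x = np.arange(8, dtype=np.float32).reshape(4, 2)          # 4 tokens, hidden_size=2
topk_ids = np.array([[1, 0], [0, 2], [1, 2], [0, 1]])     # top-2 routing across 3 experts
counts, cu, gathered = count_and_gather_ref(x, topk_ids, num_expert=3)
# counts per expert: [3, 3, 2]; cumulative offsets: [3, 6, 8]
```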


Changelog
  • tests/ops/test_count_and_gather.py
    • Added a new test file for CountAndGatherOp, including a reference implementation and tests for basic functionality and performance.
  • tests/ops/test_moe_reduce.py
    • Added a new test file for MoeReduceOp, including a reference implementation and tests for basic functionality, shared output, and invalid shapes.
  • top/functions/fuse_moe.py
    • Added count_and_gather function to dispatch to CountAndGatherKernel.
    • Added reduce function to dispatch to MoeReduceKernel.
    • Added a placeholder fuse_moe_pertensor_fp8 function, marked as not yet implemented.
  • top/kernels/fuse_moe/__init__.py
    • Exported CountAndGatherKernel and MoeReduceKernel.
  • top/kernels/fuse_moe/count_and_gather.py
    • Implemented CountAndGatherKernel using TileLang JIT, providing kernels for counting sequence lengths and gathering input tokens.
  • top/kernels/fuse_moe/moe_reduce.py
    • Implemented MoeReduceKernel using TileLang JIT, providing a kernel for scatter-add aggregation of expert outputs.
  • top/ops/__init__.py
    • Imported and added CountAndGatherOp, MoeReduceOp, and FuseMoePertensorFp8Op to the __all__ list.
  • top/ops/count_and_gather.py
    • Added the CountAndGatherOp class, which wraps the CountAndGatherKernel and manages its configuration and execution.
  • top/ops/fuse_moe_pertensor_fp8.py
    • Added a placeholder FuseMoePertensorFp8Op class for future implementation of fused MoE with FP8 quantization.
  • top/ops/moe_reduce.py
    • Added the MoeReduceOp class, which wraps the MoeReduceKernel and handles input validation and execution.
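The scatter-add aggregation that MoeReduceKernel performs can be sketched as a NumPy reference, assuming topk_pos[s, k] indexes the expert-output row for token s's k-th expert and topk_scale holds the routing weights; all names and layouts here are illustrative, not the actual API:

```python
# Hedged reference sketch of the MoE reduce (scatter-add) step with FP32
# accumulation, mirroring the kernel description; not the repo's kernel.
import numpy as np

def moe_reduce_ref(x, topk_pos, topk_scale, shared_output=None):
    """x: (num_seq * num_topk, hidden); topk_pos/topk_scale: (num_seq, num_topk)."""
    num_seq, num_topk = topk_pos.shape
    out = np.zeros((num_seq, x.shape[1]), dtype=np.float32)
    for s in range(num_seq):
        for k in range(num_topk):
            out[s] += topk_scale[s, k] * x[topk_pos[s, k]]  # weighted accumulate
    if shared_output is not None:
        out += shared_output                # optional shared-expert contribution
    return out

x = np.ones((4, 2), dtype=np.float32)
pos = np.array([[0, 1], [2, 3]])
scale = np.array([[0.5, 0.5], [0.25, 0.75]], dtype=np.float32)
out = moe_reduce_ref(x, pos, scale)
# out[0] = 0.5*x[0] + 0.5*x[1] = [1, 1]; out[1] = 0.25*x[2] + 0.75*x[3] = [1, 1]
```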
Activity
  • The pull request introduces new features for Fused MoE operations.
  • New CountAndGather and MoeReduce operations have been implemented using TileLang kernels.
  • Associated tests have been added to validate the functionality and performance of these new operations.
  • The author has confirmed that pre-commit hooks passed and local unit tests were verified.


@gemini-code-assist bot left a comment


Code Review

The pull request introduces CountAndGather and MoeReduce operations for FusedMoe, along with their respective TileLang kernels and tests. The new functionality is well-tested with reference implementations and basic performance checks. The overall structure follows the Op and Kernel abstraction, which is good for modularity.

However, there are critical efficiency concerns regarding the TileLang kernel compilation strategy. The current design causes recompilation of the JIT kernels whenever dynamic input shapes change, which will lead to significant performance overhead. Additionally, the high-level functions in top/functions/fuse_moe.py instantiate kernels on every call, exacerbating this recompilation issue. Addressing these recompilation issues is crucial for the performance of these operations.

Comment on lines +44 to +51
kernel = CountAndGatherKernel(
    num_seq=num_seq,
    hidden_size=hidden_size,
    num_topk=num_topk,
    num_expert=num_expert,
    config={"tile_m": tile_m},
)
return kernel.count_and_gather(x, topk_ids, rank_ep)

critical

The count_and_gather function instantiates a new CountAndGatherKernel on every call. This is highly inefficient as it leads to repeated kernel initialization and potential JIT recompilation overhead, especially if this function is called frequently. Kernels, especially JIT-compiled ones, should ideally be instantiated once and reused to avoid this overhead. Consider refactoring to accept a pre-initialized kernel or Op instance, or to manage the kernel's lifecycle more effectively.

Comment on lines +102 to +107
kernel = MoeReduceKernel(
    num_seq=num_seq,
    hidden_size=hidden_size,
    num_topk=num_topk,
)
return kernel.forward(x, topk_pos, topk_scale, shared_output)

critical

Similar to count_and_gather, the reduce function instantiates a new MoeReduceKernel on every call. This will cause repeated kernel initialization and potential JIT recompilation, which is inefficient. For optimal performance, kernels should be instantiated once and reused across multiple calls. Consider passing a pre-initialized kernel or Op instance to this function.

Comment on lines +9 to +10
def _count_seq_and_cuseq_kernel(total_num_topk: int, num_expert: int, start_expert: int,
                                end_expert: int, tile_m: int):

critical

The _count_seq_and_cuseq_kernel function takes dynamic dimensions (total_num_topk, num_expert, start_expert, end_expert, tile_m) as arguments to its outer Python function. When these values change (e.g., with different input tensor shapes), tilelang.jit will recompile the kernel. This leads to significant performance overhead for dynamic workloads. To avoid recompilation, dynamic dimensions should be passed as T.int32 scalar arguments to the T.prim_func itself, and T.Tensor declarations should use symbolic dimensions or maximum possible dimensions.

Comment on lines +55 to +56
def _gather_kernel(num_seq: int, hidden_size: int, num_topk: int, total_num_topk: int,
                   num_expert: int, start_expert: int, end_expert: int):

critical

The _gather_kernel function also takes dynamic dimensions (num_seq, hidden_size, num_topk, total_num_topk, num_expert, start_expert, end_expert) as arguments to its outer Python function. This will cause tilelang.jit to recompile the kernel whenever these values change, leading to significant performance overhead. Refactor this JIT kernel to accept dynamic dimensions as T.int32 scalar arguments to the T.prim_func itself, using symbolic or maximum dimensions for T.Tensor declarations.

from top.kernels.kernel import Kernel


def _moe_reduce_kernel(total_num_seq: int, num_seq: int, hidden_size: int, num_topk: int):

critical

The _moe_reduce_kernel function takes dynamic dimensions (total_num_seq, num_seq, hidden_size, num_topk) as arguments to its outer Python function. This design causes tilelang.jit to recompile the kernel every time these values change, which is a critical performance bottleneck for dynamic shapes. The kernel should be refactored to accept these dynamic dimensions as T.int32 scalar arguments to the T.prim_func itself, and T.Tensor declarations should use symbolic or maximum possible dimensions.

print(f"Average time per iteration: {avg_time * 1000:.2f} ms")
print(f"Throughput: {throughput:.2f} sequences/sec")

assert avg_time < 0.1

medium

The performance assertion assert avg_time < 0.1 uses a magic number 0.1. Hardcoded thresholds can make tests brittle and prone to failure across different environments or as code evolves. It's generally better to use more robust performance checks, such as relative thresholds or named constants to improve maintainability.

Suggested change:
- assert avg_time < 0.1
+ PERFORMANCE_THRESHOLD_MS = 100  # example: 100 ms
+ assert avg_time * 1000 < PERFORMANCE_THRESHOLD_MS


Copilot AI left a comment


Pull request overview

This PR implements the Fused MoE routing and aggregation operations in TileLang, adding CountAndGatherOp and MoeReduceOp to support MoE layer computations. The implementation follows the kernel -> op -> function layering pattern, with TileLang kernels handling the low-level compute, ops providing the high-level interface, and functions offering standalone convenience wrappers.

Changes:

  • Migrated count_and_gather and moe_reduce kernels to TileLang with staged logic
  • Added Op wrappers (CountAndGatherOp, MoeReduceOp) with input validation and kernel dispatch
  • Created standalone function wrappers in top/functions/fuse_moe.py
  • Added comprehensive unit tests with reference-based validation
  • Included placeholder FuseMoePertensorFp8Op for future FP8 quantization support
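The kernel -> op -> function layering described above can be sketched with stand-in classes (all names are illustrative, not the actual TileOPs implementations):

```python
# Hedged sketch of the three-layer pattern: the kernel owns the low-level
# compute, the Op wraps it with validation, and a module-level function is
# the standalone convenience entry point.

class MoeReduceKernel:                       # low-level compute (TileLang in the PR)
    def forward(self, xs, scales):
        return sum(s * x for s, x in zip(scales, xs))

class MoeReduceOp:                           # high-level interface with validation
    def __init__(self):
        self.kernel = MoeReduceKernel()

    def __call__(self, xs, scales):
        if len(xs) != len(scales):
            raise ValueError("xs and scales must have the same length")
        return self.kernel.forward(xs, scales)

def reduce(xs, scales):                      # standalone convenience wrapper
    return MoeReduceOp()(xs, scales)

print(reduce([1.0, 2.0], [0.5, 0.5]))        # 1.5
```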

Reviewed changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
top/kernels/fuse_moe/moe_reduce.py TileLang kernel implementation for MoE reduce operation with FP32 accumulation
top/kernels/fuse_moe/count_and_gather.py Two-stage TileLang kernel: count tokens per expert and gather inputs by expert
top/kernels/fuse_moe/__init__.py Kernel module exports
top/ops/moe_reduce.py Op wrapper for MoeReduceKernel with input validation and dimension management
top/ops/count_and_gather.py Op wrapper for CountAndGatherKernel with config management
top/ops/fuse_moe_pertensor_fp8.py Placeholder Op for future FP8 quantization support (not implemented)
top/ops/__init__.py Added new Op exports
top/functions/fuse_moe.py Standalone function wrappers for count_and_gather, reduce, and fuse_moe_pertensor_fp8
tests/ops/test_moe_reduce.py Unit tests for MoeReduceOp with reference implementation and edge cases
tests/ops/test_count_and_gather.py Unit tests for CountAndGatherOp with reference implementation and performance checks




Development

Successfully merging this pull request may close these issues.

[New Op Sub-task] Fused MoE - moe_reduce (L1/L2)

3 participants