
Conversation

@Clorist33 (Contributor) commented Dec 15, 2025

What this PR does / why we need it?

Replace group_diff[0] with group_list[0] in the cumsum_group_list function (moe_mlp.py).
This corrects the logic for converting a cumulative-sum (cumsum) group list into per-group counts.
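For illustration, a minimal sketch (hypothetical values, not taken from the PR) of the cumsum-to-count conversion and why prepending group_diff[0] was wrong:

    import torch

    # group_list holds cumulative sums for groups of size 2, 3 and 2.
    group_list = torch.tensor([2, 5, 7])
    group_diff = torch.diff(group_list)  # tensor([3, 2])

    # Old logic: prepend group_diff[0] -> tensor([3, 3, 2]); the first count is wrong.
    old = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)

    # Fixed logic: prepend group_list[0] -> tensor([2, 3, 2]); the correct counts.
    new = torch.cat([group_list[0].unsqueeze(0), group_diff], dim=0)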

Does this PR introduce any user-facing change?

No

Signed-off-by: tanqingshan (A) <[email protected]>
@gemini-code-assist bot left a comment

Code Review

This pull request fixes a bug in the cumsum_group_list function by changing group_diff[0] to group_list[0], making the logic for converting a cumulative-sum list into a per-group count (difference) list correct. The change is accurate and addresses the intended issue. I've also left a comment about a potential IndexError on an empty tensor; handling that edge case would improve the robustness of the function.

Comment on lines 47 to 50

     if src_list_type == 0 and dst_list_type == 1:
         group_diff = torch.diff(group_list)
-        new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
+        new_group = torch.cat([group_list[0].unsqueeze(0), group_diff], dim=0)
         return new_group
gemini-code-assist bot (severity: high)

There's a potential IndexError here if group_list is an empty tensor. Accessing group_list[0] on line 49 would cause a crash. It's good practice to handle this edge case, for example by checking if the tensor is empty before proceeding.

Suggested change

-    if src_list_type == 0 and dst_list_type == 1:
-        group_diff = torch.diff(group_list)
-        new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
-        new_group = torch.cat([group_list[0].unsqueeze(0), group_diff], dim=0)
-        return new_group
+    if src_list_type == 0 and dst_list_type == 1:
+        if not group_list.numel():
+            return group_list
+        group_diff = torch.diff(group_list)
+        new_group = torch.cat([group_list[0].unsqueeze(0), group_diff], dim=0)
+        return new_group
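
For reference, a self-contained sketch of the suggested behavior; the helper name and single-argument signature here are assumptions for illustration, since the real cumsum_group_list in moe_mlp.py also takes the src/dst list-type arguments:

    import torch

    def cumsum_to_count(group_list: torch.Tensor) -> torch.Tensor:
        # Hypothetical helper mirroring the suggested branch above.
        if not group_list.numel():          # guard against the empty-tensor IndexError
            return group_list
        group_diff = torch.diff(group_list)
        return torch.cat([group_list[0].unsqueeze(0), group_diff], dim=0)

    print(cumsum_to_count(torch.tensor([2, 5, 7])))               # tensor([2, 3, 2])
    print(cumsum_to_count(torch.tensor([], dtype=torch.int64)))   # tensor([], dtype=torch.int64)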

@Clorist33 Clorist33 changed the title [Bugfix]use group_list[0] to replace group_diff[0] in moe_mlp (vllm-ascend main) [Bugfix] Fix precision issues in moe_mlp (vllm-ascend main) Dec 15, 2025
@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@weijinqian0 added the ready (read for review) and ready-for-test (start test by label for PR) labels Dec 15, 2025
@Clorist33 force-pushed the bugfix_group_list_main branch from ea908c0 to 0528d70 on December 15, 2025 10:11
@wangxiyuan wangxiyuan merged commit d43cabc into vllm-project:main Dec 16, 2025
67 of 73 checks passed

Labels

module:ops, ready (read for review), ready-for-test (start test by label for PR)


3 participants