Conversation

@tongtong0613

Add NPU fused operator support in modeling_qwen2, including RMSNorm, SwiGLU and RoPE

PR type

  • Bug Fix
  • New Feature
  • Document Updates
  • More Models or Datasets Support

PR information

This PR adds support for NPU fused operators in ms-swift, improving training performance on Ascend NPU.

It currently adapts the following fused operators for Qwen2: RMSNorm, SwiGLU, and RoPE. In the future, more models and fused operators can be supported.
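
For illustration, a minimal sketch of how one such replacement can be wired up, assuming torch_npu.npu_rms_norm(x, weight, epsilon) exists and returns the normalized tensor as its first output; the actual patch in this PR's npu_patcher.py may differ in details:

# Minimal sketch (not the PR's exact code): route Qwen2RMSNorm through the
# fused torch_npu kernel instead of the stock variance/rsqrt/scale sequence.
import torch_npu
from transformers.models.qwen2 import modeling_qwen2


def npu_rms_norm_forward(self, hidden_states):
    # npu_rms_norm is assumed to return (normalized_output, inverse_rms);
    # the weight (gamma) scaling is fused into the kernel.
    return torch_npu.npu_rms_norm(hidden_states, self.weight, epsilon=self.variance_epsilon)[0]


modeling_qwen2.Qwen2RMSNorm.forward = npu_rms_norm_forward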

Experiment results

Paste your experiment result here (if needed).

@gemini-code-assist
Contributor

Summary of Changes

Hello @tongtong0613, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates NPU-optimized fused operators into the Qwen2 model architecture within the ms-swift framework. This enhancement leverages Ascend NPU capabilities to accelerate training by replacing standard PyTorch operations with their NPU-fused equivalents for RMSNorm, SwiGLU, and Rotary Position Embeddings. The changes are implemented conditionally, ensuring that the NPU-specific optimizations are only applied when the necessary hardware and software (torch_npu) are detected.
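
As a rough illustration of that conditional wiring, a minimal sketch follows; the guard and module path below are assumptions, not necessarily what npu_patcher.py or its caller actually do:

# Minimal sketch (assumed pattern, not the PR's exact code): apply the NPU
# patches only when the Ascend software stack is importable.
import importlib.util


def maybe_apply_npu_patches():
    if importlib.util.find_spec('torch_npu') is None:
        return  # no torch_npu installed; leave transformers untouched
    import torch_npu  # noqa: F401 -- registers the NPU backend with torch
    # Importing the patcher applies the monkey-patches as a side effect.
    import npu_patcher  # noqa: F401 -- placeholder path; the real module location in ms-swift may differ

The key point is that machines without torch_npu never execute any NPU-specific code path.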

Highlights

  • NPU Fused Operators for Qwen2: This PR introduces support for NPU fused operators (RMSNorm, SwiGLU, and RoPE) specifically for the modeling_qwen2 module, aiming to enhance training performance on Ascend NPUs.
  • Dynamic NPU Patcher Loading: A new npu_patcher.py module is added, which dynamically loads and applies NPU-specific optimizations only when torch_npu is available, ensuring compatibility and efficient resource utilization.

@gemini-code-assist (bot) left a comment

Code Review

This pull request adds support for NPU fused operators for Qwen2 models, which should improve training and inference performance on Ascend NPUs. The changes introduce a new npu_patcher.py file that monkey-patches Qwen2RMSNorm, apply_rotary_pos_emb, and Qwen2MLP.forward with NPU-optimized implementations from torch_npu. The patch is conditionally applied only when an NPU is detected, which is good practice. The implementation is clear, but I have a couple of suggestions to improve maintainability and to note a potential future performance optimization.
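
For context, a hedged sketch of what the rotary-embedding patch can look like, assuming torch_npu.npu_rotary_mul(x, cos, sin) computes the fused x * cos + rotate_half(x) * sin; the signature mirrors transformers' apply_rotary_pos_emb, and the PR's actual code may differ:

# Minimal sketch (not necessarily the PR's code): fused rotary position embedding.
import torch_npu
from transformers.models.qwen2 import modeling_qwen2


def npu_apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
    # Broadcast cos/sin over the head dimension, as the stock implementation does,
    # then let the fused kernel handle the multiply-and-rotate in one launch.
    cos = cos.unsqueeze(unsqueeze_dim)
    sin = sin.unsqueeze(unsqueeze_dim)
    return torch_npu.npu_rotary_mul(q, cos, sin), torch_npu.npu_rotary_mul(k, cos, sin)


modeling_qwen2.apply_rotary_pos_emb = npu_apply_rotary_pos_emb

Assigning to the module-level name is what lets the attention layers, which resolve apply_rotary_pos_emb from modeling_qwen2's globals at call time, pick up the fused version.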

Comment on lines +1 to +5
# Copyright (c) Alibaba, Inc. and its affiliates.
import torch
import torch_npu
from torch import nn
from transformers.models.qwen2 import modeling_qwen2

Severity: medium

This new file introduces several monkey-patches for NPU performance. It would be beneficial to add a module-level docstring explaining the purpose of this file (i.e., to replace certain transformers operations with NPU-accelerated versions from torch_npu for better performance on Ascend hardware). Also, adding brief comments to each function explaining what it replaces and why would improve long-term maintainability.
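
For example, a module-level docstring along these lines (wording is a sketch only) would capture that intent:

"""NPU fused-operator patches for transformers' Qwen2 implementation.

When torch_npu is available, the stock RMSNorm, SwiGLU (Qwen2MLP.forward)
and rotary position embedding implementations are replaced with fused
Ascend kernels to cut kernel launches and improve training throughput.
Importing this module applies the monkey-patches as a side effect.
"""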

Comment on lines +31 to +34
def npu_swiglu_forward(self, hidden_state):
    return self.down_proj(
        torch_npu.npu_swiglu(torch.cat((self.gate_proj(hidden_state), self.up_proj(hidden_state)), dim=-1), dim=-1)
    )

Severity: medium

The use of torch.cat here to combine the outputs of gate_proj and up_proj before passing them to the fused npu_swiglu kernel might introduce some performance overhead due to memory allocation and an extra kernel launch. While this is a valid approach for monkey-patching, for optimal performance, it would be better to avoid this runtime concatenation. A more advanced optimization could involve modifying the Qwen2MLP layer to use a single, larger linear layer for both projections, thus producing a pre-concatenated output. This is a more invasive change but is worth considering for future performance tuning if this part of the model proves to be a bottleneck.
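
To make that follow-up concrete, a hedged sketch of a merged gate/up projection (not part of this PR; the class and attribute names are illustrative):

# Sketch: one matmul produces [gate | up] already concatenated along the last
# dimension, so npu_swiglu can be fed directly without a runtime torch.cat.
import torch
import torch_npu
from torch import nn


class FusedQwen2MLP(nn.Module):
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.gate_up_proj = nn.Linear(hidden_size, 2 * intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
        return self.down_proj(torch_npu.npu_swiglu(self.gate_up_proj(hidden_state), dim=-1))

Existing checkpoints would need their gate_proj and up_proj weights concatenated into gate_up_proj at load time, which is why the review frames this as a more invasive, future optimization.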
