Update qwen3_vl.py#5572

Open
iceflysnow wants to merge 1 commit into verl-project:main from iceflysnow:main

Conversation

@iceflysnow

Fix issue: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, npu:0 and cpu!

What does this PR do?

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, fully_async, one_step_off
  • If this PR involves multiple modules, separate them with commas, like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching
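The title convention above can be sketched as a small validator. This is a hedged illustration, not the actual CI check: the module and type lists are copied from the checklist, while `is_valid_title` is a hypothetical helper name.

```python
import re

# Module and type names copied from the PR-title checklist above.
MODULES = {
    "fsdp", "megatron", "veomni", "sglang", "vllm", "rollout", "trainer",
    "ci", "training_utils", "recipe", "hardware", "deployment", "ray",
    "worker", "single_controller", "misc", "perf", "model", "algo", "env",
    "tool", "ckpt", "doc", "data", "cfg", "reward", "fully_async",
    "one_step_off",
}
TYPES = {"feat", "fix", "refactor", "chore", "test"}

def is_valid_title(title: str) -> bool:
    """Check `[{modules}] {type}: {description}` with an optional [BREAKING] prefix."""
    m = re.match(r"^(?:\[BREAKING\])?\[([^\]]+)\] (\w+): .+", title)
    if not m:
        return False
    modules = [s.strip() for s in m.group(1).split(",")]
    return all(mod in MODULES for mod in modules) and m.group(2) in TYPES
```

For example, `[model] fix: qwen3_vl device mismatch on NPU` and `[BREAKING][fsdp, megatron] feat: dynamic batching` both pass, while a bare `fix qwen3_vl` does not.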

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate with experiments and show results such as training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

Fix issue: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, npu:0 and cpu!
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces a monkey patch for qwen3_vl_moe to address a device mismatch error on NPU when using FSDP with parameter offloading. The patch correctly identifies that grid_thw.device should be used as the target device instead of self.pos_embed.weight.device. However, I've found a critical issue in the implementation of the patch that will cause a runtime error. Please see my comment for details.

Comment on lines +397 to +398
h_idxs = torch.linspace(0, self.num_grid_per_side - 1, h)
w_idxs = torch.linspace(0, self.num_grid_per_side - 1, w)

critical

The steps argument of torch.linspace must be an integer, but h and w are 0-dimensional tensors from iterating over grid_hs and grid_ws. This will raise a TypeError and cause the program to crash. You should use .item() to convert them to Python integers.

Additionally, performing these calculations on the CPU within the loop and then transferring to the target device can be inefficient. Consider performing the computations directly on the grid_thw.device to avoid unnecessary data transfers between CPU and NPU.

Suggested change
- h_idxs = torch.linspace(0, self.num_grid_per_side - 1, h)
- w_idxs = torch.linspace(0, self.num_grid_per_side - 1, w)
+ h_idxs = torch.linspace(0, self.num_grid_per_side - 1, h.item())
+ w_idxs = torch.linspace(0, self.num_grid_per_side - 1, w.item())
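Both points in the review can be combined into one sketch: extract Python ints before calling torch.linspace (its steps argument must be an int, not a 0-d tensor), and build the index tensors directly on grid_thw's device rather than on CPU. This is a hedged illustration under assumed names from the review (num_grid_per_side, grid_thw); make_grid_indices is a hypothetical helper, not the actual patched method.

```python
import torch

def make_grid_indices(num_grid_per_side: int, grid_thw: torch.Tensor):
    """Build per-image (h, w) interpolation indices on grid_thw's device."""
    device = grid_thw.device
    idx_pairs = []
    # .tolist() yields plain Python ints, so torch.linspace receives an
    # integer `steps` argument instead of a 0-dimensional tensor.
    for t, h, w in grid_thw.tolist():
        # device=... allocates the indices where grid_thw lives, avoiding
        # a CPU round-trip inside the loop.
        h_idxs = torch.linspace(0, num_grid_per_side - 1, h, device=device)
        w_idxs = torch.linspace(0, num_grid_per_side - 1, w, device=device)
        idx_pairs.append((h_idxs, w_idxs))
    return idx_pairs
```

The same idea transfers to the patched method: the target device comes from an input tensor (grid_thw) rather than from self.pos_embed.weight, whose parameters may sit on CPU under FSDP offloading.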

@wuxibin89
Collaborator

wuxibin89 commented Mar 13, 2026

I think we should fix NPU FSDP load/offload instead of patching a specific model. cc @ji-huazhong

@ji-huazhong
Collaborator

+1

