fix: handle device in the same way as dtype in aten.full_like decomposition
#3538
Description
This PR extends the changes introduced in PR #3535 by applying similar handling for `device`, which was previously missed.

In the original PR, the focus was on ensuring the correct propagation of `dtype` when using `torch.full_like`. However, `torch.full_like` also accepts a `device` argument, and if a `device` is explicitly passed, it may differ from the input tensor's `device`. This can result in the output tensor being created on a different device than the input, leading to device mismatch issues.

Results:

To prevent this, this PR ensures that `device` is handled in the same way as `dtype` was in the previous PR.

Type of change
Checklist:
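The intended behavior can be sketched as follows. This is a minimal illustration, not the actual decomposition code in this PR; the function name and signature are assumptions. The key point is that an explicitly passed `dtype` or `device` takes precedence, and the input tensor's own `dtype`/`device` is used only as a fallback, mirroring the semantics of `torch.full_like`:

```python
import torch

def full_like_decomposition(input_tensor, fill_value, *, dtype=None, device=None):
    """Hypothetical sketch of a full_like decomposition.

    Falls back to the input tensor's dtype/device only when the caller
    did not pass them explicitly, so an explicit device argument is
    honored instead of being silently overridden by the input's device.
    """
    dtype = dtype if dtype is not None else input_tensor.dtype
    device = device if device is not None else input_tensor.device
    return torch.full(input_tensor.shape, fill_value, dtype=dtype, device=device)
```

Without the fallback-only handling of `device`, a caller passing, say, `device="cuda"` for a CPU input could end up with the output allocated on the input's device instead, producing the device mismatch this PR addresses.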