[Runtime][MatchShape] Type error: Cannot convert from type 'DLTensor*' to 'ffi.Shape' #18546
Conversation
…* ' to ' ffi.Shape ' - A runtime error occurs when the first argument of the MatchShape function (in runtime/vm/builtin.cc) is a DLTensor*. If (auto opt_nd = args[0].as<Tensor>()) is false, args[0] (a DLTensor*) falls through to the conversion to Shape, which fails.
Summary of Changes: Hello @locnd182644, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request addresses a critical runtime type conversion error.
Code Review
This pull request addresses a runtime type error in MatchShape when it receives a DLTensor* argument. The fix correctly handles this case. However, my review identifies a critical regression where the support for tvm::runtime::Tensor arguments is unintentionally removed. I've provided a code suggestion to restore this functionality while keeping the fix, ensuring that MatchShape can correctly handle Tensor, DLTensor*, and ffi::Shape arguments as intended.
src/runtime/vm/builtin.cc
Outdated
if (auto opt_nd = args[0].try_cast<DLTensor*>()) {
  DLTensor* ptr = opt_nd.value();
  input_shape = ffi::Shape(ptr->shape, ptr->shape + ptr->ndim);
} else {
  input_shape = args[0].cast<ffi::Shape>();
}
The current change correctly adds support for DLTensor* arguments, which fixes the reported issue. However, it removes the existing support for tvm::runtime::Tensor arguments by replacing the check instead of adding to it. This will break existing code that passes a Tensor to MatchShape.
The logic should be updated to handle Tensor, DLTensor*, and ffi::Shape arguments, as indicated by the updated comment on line 125. You can chain the checks to support all three types.
if (auto opt_tensor = args[0].as<Tensor>()) {
  input_shape = opt_tensor.value().Shape();
} else if (auto opt_dltensor = args[0].try_cast<DLTensor*>()) {
  DLTensor* ptr = opt_dltensor.value();
  input_shape = ffi::Shape(ptr->shape, ptr->shape + ptr->ndim);
} else {
  input_shape = args[0].cast<ffi::Shape>();
}
I think this review makes sense. Could you update the code like this?
I replaced Tensor with DLTensor* because I assumed a Tensor can be cast to DLTensor*. Does this review make sense because the "as" operator costs less than the "cast" operator, or for some other reason? Could you let me know?
The main reason here is not related to the performance difference between as<Tensor>() and try_cast<DLTensor*>(). The issue is type coverage and backward compatibility. Tensor cannot be assumed to always cast safely to DLTensor*. Also, existing callers still pass tvm::runtime::Tensor as an argument. If we replace the Tensor branch with DLTensor*, the previous behavior may break.
I understand. I will update the code as suggested in this review.
@tlopex Thank you for explaining. I updated the code.
@tvm-bot rerun
…* ' to ' ffi.Shape ' - Edit code as suggested in the review
Hi @locnd182644, this PR is the same. Could you retrigger the CI? Thanks!
@tlopex This PR has passed the checks.