[onert] Support to load BROADCAST_TO operation#15164
hseok-oh merged 3 commits into Samsung:master from
Conversation
- The BROADCAST_TO operation is already supported in the runtime, but it was handled as a CUSTOM operation. The loader is fixed so that BROADCAST_TO can be loaded directly.
- Added test cases for the BROADCAST_TO operation.

ONE-DCO-1.0-Signed-off-by: Seockho Kim seockho.kim@samsung.com
#include "GenModelTest.h"

TEST_F(GenModelTest, OneOp_BroadcastTo_1D_to_2D)
@seockho-kim We have a requirement to have # of positive tc = # of negative tc.
Okay, I'll update.
@seockho-kim Could you please use
Okay, I got it.
- Added OperationValidator and ShapeValidator for BROADCAST_TO
- Added negative testcases for BROADCAST_TO

ONE-DCO-1.0-Signed-off-by: Seockho Kim seockho.kim@samsung.com
  SUCCEED();
}

TEST_F(GenModelTest, OneOp_BroadcastTo_InputOutputDifferentType)
Suggested change:
- TEST_F(GenModelTest, OneOp_BroadcastTo_InputOutputDifferentType)
+ TEST_F(GenModelTest, neg_OneOp_BroadcastTo_InputOutputDifferentType)
Please add the neg_ prefix to all negative test names. It is required to calculate the pos/neg test count in the internal validator.
You can find negative test naming policy here:
ONE/.ahub/tcchecker-tca/config.yaml
Lines 64 to 68 in 7b87c29
Thanks for letting me know. I'll update it.
ONE-DCO-1.0-Signed-off-by: Seockho Kim seockho.kim@samsung.com
int in_len = input_shape.size();
int tgt_len = target_shape.size();
int max_len = std::max(in_len, tgt_len);

std::vector<int32_t> in_shape_padded(max_len, 1);
std::vector<int32_t> tgt_shape_padded(max_len, 1);

for (int i = 0; i < in_len; i++)
{
  in_shape_padded[max_len - in_len + i] = input_shape[i];
}
for (int i = 0; i < tgt_len; i++)
{
  tgt_shape_padded[max_len - tgt_len + i] = target_shape[i];
}

for (int i = max_len - 1; i >= 0; --i)
{
  OP_REQUIRES((in_shape_padded[i] == tgt_shape_padded[i]) || (in_shape_padded[i] == 1));
}
}
- We don't need to make two extended copies of the shapes.
- The rank of the 2nd input `shape` must be greater than or equal to the input rank.
Refer to the corresponding implementation in TensorFlow Lite 2.18.0:
TfLiteStatus ResizeOutputTensor(TfLiteContext* context,
BroadcastToContext* op_context) {
// Ensures the shape is 1D tensor.
TF_LITE_ENSURE_EQ(context, NumDimensions(op_context->shape), 1);
// Ensure output dims is not less than input dims.
int input_num_dims = NumDimensions(op_context->input);
int output_num_dims = SizeOfDimension(op_context->shape, 0);
TF_LITE_ENSURE_MSG(context, input_num_dims <= output_num_dims,
"Output shape must be broadcastable from input shape.");
TF_LITE_ENSURE_MSG(context, output_num_dims <= kMaxDims,
"BroadcastTo only supports 1-8D tensor.");
// Check if output shape is broadcastable from input shape.
auto get_shape_data = [op_context](int i) -> int32_t {
if (op_context->shape->type == kTfLiteInt32) {
return GetTensorData<int32_t>(op_context->shape)[i];
} else {
return GetTensorData<int64_t>(op_context->shape)[i];
}
};
int extending_dims = output_num_dims - input_num_dims;
for (int idx = 0; idx < input_num_dims; ++idx) {
TF_LITE_ENSURE_MSG(context,
(SizeOfDimension(op_context->input, idx) == 1 ||
SizeOfDimension(op_context->input, idx) ==
get_shape_data(extending_dims + idx)),
"Output shape must be broadcastable from input shape.");
}
glistening left a comment:
LGTM. I will make a follow-up PR for #15164 (comment)
Okay, I'll look forward to it. :)
ONE-DCO-1.0-Signed-off-by: Seockho Kim seockho.kim@samsung.com