[Transformations][CPU] Introduce Convolution fusion with bias #29076
base: master
Conversation
Force-pushed 2c87699 → ce6fcca
```cpp
@@ -51,5 +54,8 @@ std::vector<TRShape> shape_infer(const TOp* op,
    return output_shapes;
}
}  // namespace v1

using v1::shape_infer;
```
Could `using` be avoided in the .hpp file? How complex is the alternative solution?
fyi @praasz
It should be removed, or given only some local scope: a function, a code block, etc.
Adjusted; the only drawback is a one-line change in the GPU plugin.
Are there going to be any tests added for the added functionality?
```cpp
namespace op {
namespace internal {

class TRANSFORMATIONS_API ConvolutionBiased : public ov::op::util::ConvolutionFwdPropBase {
```
TRANSFORMATIONS_API is usually used for exporting transformations, not operations.
Hm, this is strange. I can see the TRANSFORMATIONS_API macro used in this context across many files in the src/common/transformations/include/ov_ops dir. Do they also use it in an incorrect way?
I see; then we may re-do this in a separate PR. An operation is not a transformation :)
Got it
It should be TRANSFORMATIONS_API, as it regards the build target, not the class category.
Ok for core part.
@aobolensk General comment for this PR:
Item 1 is especially important to avoid a cartesian product of operation types in the bounds of item 2. So, applying it for Convolution, I would say we need to have 2 internal operations:
The GPU plugin already supports such semantics, implemented as intel_gpu::op::Convolution (src/plugins/intel_gpu/include/intel_gpu/op/convolution.hpp).
Force-pushed 447e18b → 14089a2
```cpp
#include "itt.hpp"
#include "openvino/op/util/precision_sensitive_attribute.hpp"

using namespace std;
```
Remove it
Removed
```cpp
const auto& bias_et = get_input_element_type(2);
result_et = bias_et;
```

Suggested change:

```diff
-const auto& bias_et = get_input_element_type(2);
-result_et = bias_et;
+result_et = get_input_element_type(2);
```
Done
```cpp
using namespace std;

namespace ov {
```
Suggested change:

```diff
-namespace ov {
+namespace ov::op::internal {
```

Just an optional detail to consider.
Done
```cpp
const auto output_shapes = op::shape_infer(this, input_shapes, m_pads_begin, m_pads_end);
set_output_type(0, result_et, output_shapes[0]);
set_num_spatial(num_spatial, input_shapes);
```
Should it be set if the value is undefined?
Could you please clarify which value you mean here?
In the case when num_spatial == util::num_spatial_undefined? Maybe it should be set only when it is different from num_spatial_undefined?
...ormations/src/transformations/op_conversions/convert_convolution_to_convolution_internal.cpp (outdated, resolved)
src/core/shape_inference/include/internal_convolution_shape_inference.hpp (outdated, resolved)
```cpp
template <class TOp,
          class TShape,
          class TRShape = result_shape_t<TShape>,
          typename std::enable_if<std::is_same<TOp, internal::Convolution>::value>::type* = nullptr>
```
The enable_if should not be required; just use internal::Convolution as the type.
...lugins/intel_cpu/tests/unit/shape_inference_test/convolution_biased_shape_inference_test.cpp (two outdated threads, resolved)
Force-pushed 9cbf58e → 53607e1
```cpp
@@ -0,0 +1,57 @@
// Copyright (C) 2025 Intel Corporation
```
Suggested change:

```diff
-// Copyright (C) 2025 Intel Corporation
+// Copyright (C) 2018-2025 Intel Corporation
```
Applied
Force-pushed a2bf6e8 → 97c687e
This PR will be closed in a week because of 2 weeks of no activity.
Force-pushed acfbc8d → b5bd157
```cpp
}

int64_t groups = -1;
auto weights_shape = gconv->get_input_partial_shape(1);
```
Suggested change:

```diff
-auto weights_shape = gconv->get_input_partial_shape(1);
+const auto& weights_shape = gconv->get_input_partial_shape(1);
```
```cpp
#include "ov_ops/convolution.hpp"
#include "transformations/utils/utils.hpp"

static inline std::vector<size_t> getNormalizedDimsBySize(const std::vector<size_t>& dims, size_t ndims) {
```
Suggested change:

```diff
-static inline std::vector<size_t> getNormalizedDimsBySize(const std::vector<size_t>& dims, size_t ndims) {
+static inline std::vector<size_t> getNormalizedDimsBySize(const ov::Shape& dims, size_t ndims) {
```
```cpp
ov::NodeVector new_ops;

std::shared_ptr<ov::Node> final_bias = bias;
auto add_shape = add->get_output_partial_shape(0);
```
auto add_shape = add->get_output_partial_shape(0); | |
const auto& add_shape = add->get_output_partial_shape(0); |
```cpp
ov::pass::ConvolutionBiasFusion::ConvolutionBiasFusion() {
    MATCHER_SCOPE(ConvolutionBiasFusion);
    using namespace ov::pass::pattern;
```
Can be removed?