[CPU] fix CPU FullyConnected Node zeros value output issue #30197

Status: Open. Wants to merge 1 commit into base: master.
@@ -280,7 +280,8 @@ TRANSFORMATIONS_API bool can_eliminate_eltwise_node(const std::shared_ptr<Node>&
 
 TRANSFORMATIONS_API bool is_constant_and_all_values_equal_int(const Output<Node>& output, const int64_t& v);
 
-TRANSFORMATIONS_API bool is_on_constant_path(const ov::Output<ov::Node>& output);
+TRANSFORMATIONS_API bool is_on_constant_path(const ov::Output<ov::Node>& output,
+                                             const std::unordered_set<std::type_index>& break_node_types = {});
 
 TRANSFORMATIONS_API bool process_subgraph(ov::pass::ModelPass& model_pass, const std::shared_ptr<Node>& node);
11 changes: 10 additions & 1 deletion src/common/transformations/src/transformations/utils/utils.cpp
@@ -489,14 +489,23 @@ bool is_constant_and_all_values_equal_int(const Output<Node>& output, const int6
     return false;
 }
 
-bool is_on_constant_path(const ov::Output<ov::Node>& output) {
+bool is_on_constant_path(const ov::Output<ov::Node>& output,
+                         const std::unordered_set<std::type_index>& break_node_types) {
     auto status = true;
     std::deque<ov::Node*> nodes_to_calculate = {output.get_node()};
 
     while (status && !nodes_to_calculate.empty()) {
         auto current_node = nodes_to_calculate.front();
         nodes_to_calculate.pop_front();
 
+        // Check if the current node matches any type in break_node_types
+        if (!break_node_types.empty()) {
+            std::type_index current_type(typeid(*current_node));
+            if (break_node_types.find(current_type) != break_node_types.end()) {
+                return false;
+            }
+        }
+
         if (current_node->get_input_size() == 0 && !ov::is_type<ov::op::v0::Constant>(current_node)) {
Reviewer (@luweizhou2016, Contributor):

This looks like a bug in is_on_constant_path() itself. Why not try a more general fix? The RandomUniform() node can't be constant-folded.

Suggested change:

-    if (current_node->get_input_size() == 0 && !ov::is_type<ov::op::v0::Constant>(current_node)) {
+    if ((current_node->get_input_size() == 0 && !ov::is_type<ov::op::v0::Constant>(current_node)) ||
+        current_node->can_constant_fold(current_node->input_values())) {
Author (Contributor):

Hi @luweizhou2016: do you mean || !current_node->can_constant_fold(current_node->input_values())? It seems that change would limit the condition to constant inputs only, which makes the current 'constant_path' logic miss some cases, e.g. the following one (the second 'Eltwise' below would still be a suitable pattern for the 'constant_path' logic if there were no 'RandomUniform'):

[image: example subgraph with a RandomUniform feeding an Eltwise branch]

Reviewer (@luweizhou2016, Contributor):

OK, it seems can_constant_fold() can't propagate to the graph input. What about || ov::is_type<ov::op::v8::RandomUniform>(current_node)? The key point is that this is a bug and should be fixed inside is_on_constant_path() itself, not by the API user. Once the graph path contains a RandomUniform op, it should return false.

Author (Contributor):

Yes, fixing it inside 'is_on_constant_path()' would solve our issue without changing the current API. But I think the current method is easier to extend: if other ops like 'RandomUniform' need to be excluded from the 'is_on_constant_path()' logic in the future, the modification is only needed at the call site instead of in this function.

Synced with @luweizhou2016 offline; this is currently the only open concern on this PR. @maxnick, which method do you suggest? (And could you please give this PR a final review when you are free? Thanks.)
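For readers following the trade-off discussed above (fix inside the utility vs. let the caller pass break types), here is a minimal, self-contained sketch of the break-set mechanism this PR adds. The `Node` hierarchy and helper below are hypothetical stand-ins, not OpenVINO's real classes; only the `std::type_index`-based traversal mirrors the patched `is_on_constant_path()`.

```cpp
#include <cassert>
#include <deque>
#include <memory>
#include <typeindex>
#include <unordered_set>
#include <vector>

// Toy stand-ins for graph nodes (hypothetical, not ov::Node).
struct Node {
    std::vector<Node*> inputs;
    virtual ~Node() = default;  // polymorphic, so typeid(*p) yields the dynamic type
};
struct Constant : Node {};
struct Parameter : Node {};      // zero-input, non-constant source
struct RandomUniform : Node {};  // non-deterministic op with constant inputs (shape/min/max)
struct Multiply : Node {};

// Sketch of the patched traversal: walk producers breadth-first and bail out
// as soon as a node's dynamic type appears in break_node_types.
bool is_on_constant_path(Node* output,
                         const std::unordered_set<std::type_index>& break_node_types = {}) {
    bool status = true;
    std::deque<Node*> nodes_to_calculate = {output};
    while (status && !nodes_to_calculate.empty()) {
        Node* current = nodes_to_calculate.front();
        nodes_to_calculate.pop_front();
        // The new early exit: a listed type anywhere on the path disqualifies it.
        if (!break_node_types.empty() &&
            break_node_types.count(std::type_index(typeid(*current))) != 0) {
            return false;
        }
        if (current->inputs.empty() && dynamic_cast<Constant*>(current) == nullptr) {
            status = false;  // reached a non-constant source, e.g. a Parameter
        } else {
            for (Node* in : current->inputs) {
                nodes_to_calculate.push_back(in);
            }
        }
    }
    return status;
}
```

Because RandomUniform's inputs (shape, min, max) are all constants, the original traversal reaches only constant sources and reports it as being on a constant path, which is the bug this PR fixes; passing `{std::type_index(typeid(RandomUniform))}` makes the same query return false.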

             status = false;
         } else {
convert_matmul_to_fc.cpp

@@ -10,6 +10,7 @@
 #include "openvino/core/type/element_type.hpp"
 #include "openvino/op/convert.hpp"
 #include "openvino/op/matmul.hpp"
+#include "openvino/op/random_uniform.hpp"
 #include "openvino/op/transpose.hpp"
 #include "openvino/pass/pattern/op/wrap_type.hpp"
 #include "ov_ops/fully_connected.hpp"
@@ -19,7 +20,7 @@ ov::intel_cpu::ConvertMatMulToFC::ConvertMatMulToFC() {
     MATCHER_SCOPE(ConvertMatMulToFC);
     auto activations_m = ov::pass::pattern::any_input(ov::pass::pattern::has_static_rank());
     auto weights_path = [](const ov::Output<ov::Node>& output) {
-        return ov::op::util::is_on_constant_path(output);
+        return ov::op::util::is_on_constant_path(output, {typeid(ov::op::v8::RandomUniform)});
     };
     auto weights_m = ov::pass::pattern::any_input(weights_path);
     auto matmul_m = ov::pass::pattern::wrap_type<ov::op::v0::MatMul>({activations_m, weights_m},
convert_matmul_to_fc_test.cpp

@@ -9,6 +9,7 @@
 #include "openvino/opsets/opset1_decl.hpp"
 #include "openvino/opsets/opset3_decl.hpp"
 #include "openvino/opsets/opset7_decl.hpp"
+#include "openvino/opsets/opset8_decl.hpp"
 #include <openvino/pass/manager.hpp>
 #include <ov_ops/type_relaxed.hpp>
 #include <transformations/cpu_opset/common/pass/convert_matmul_to_fc.hpp>

@@ -25,6 +26,7 @@
 #include "openvino/op/shape_of.hpp"
 #include "openvino/op/subtract.hpp"
 #include "openvino/op/transpose.hpp"
+#include "openvino/op/random_uniform.hpp"
 
 using namespace testing;
 using namespace ov::intel_cpu;

@@ -543,3 +545,38 @@ TEST_F(TransformationTestsF, ConvertMatMulToFCTest_compressed_u8_weights) {
         model_ref = std::make_shared<ov::Model>(ov::OutputVector{matmul}, ov::ParameterVector{data});
     }
 }

+TEST_F(TransformationTestsF, ConvertMatMulToFCTest_WithRandomUniform) {
+    {
+        auto input1 = std::make_shared<ov::opset1::Parameter>(ov::element::f32, ov::PartialShape{-1, -1, -1});
+
+        auto random_uniform_shape = ov::opset1::Constant::create(ov::element::i32, ov::Shape{2}, {2, 2});
+        auto random_uniform_min = ov::opset1::Constant::create(ov::element::f32, ov::Shape{1}, {0.0});
+        auto random_uniform_max = ov::opset1::Constant::create(ov::element::f32, ov::Shape{1}, {1.0});
+        auto random_uniform = std::make_shared<ov::op::v8::RandomUniform>(random_uniform_shape,
+                                                                          random_uniform_min,
+                                                                          random_uniform_max,
+                                                                          ov::element::f32);
+
+        auto matmul = std::make_shared<ov::opset1::MatMul>(input1, random_uniform, false, false);
+
+        model = std::make_shared<ov::Model>(ov::OutputVector{matmul}, ov::ParameterVector{input1});
+
+        manager.register_pass<ConvertMatMulToFC>();
+    }
+    {
+        auto input1 = std::make_shared<ov::opset1::Parameter>(ov::element::f32, ov::PartialShape{-1, -1, -1});
+
+        auto random_uniform_shape = ov::opset1::Constant::create(ov::element::i32, ov::Shape{2}, {2, 2});
+        auto random_uniform_min = ov::opset1::Constant::create(ov::element::f32, ov::Shape{1}, {0.0});
+        auto random_uniform_max = ov::opset1::Constant::create(ov::element::f32, ov::Shape{1}, {1.0});
+        auto random_uniform = std::make_shared<ov::op::v8::RandomUniform>(random_uniform_shape,
+                                                                          random_uniform_min,
+                                                                          random_uniform_max,
+                                                                          ov::element::f32);
+
+        auto matmul = std::make_shared<ov::opset1::MatMul>(input1, random_uniform, false, false);
+
+        model_ref = std::make_shared<ov::Model>(ov::OutputVector{matmul}, ov::ParameterVector{input1});
+    }
+}