【UnitTestFix No.1】fix test_activation_op.py#75553

Merged
luotao1 merged 19 commits into PaddlePaddle:develop from scyyh11:fix/test_activation_op
Oct 13, 2025

Conversation

@scyyh11
Contributor

@scyyh11 scyyh11 commented Sep 26, 2025

PR Category

Operator Mechanism

PR Types

Bug fixes

Description

Fixed the broken tests in test_activation_op.py.
Removed the SoftRelu-related tests, since SoftRelu is no longer supported after version 2.6.


@luotao1 @YqGe585

scyyh11 and others added 14 commits September 18, 2025 23:53
….gradient" is deprecated since 2.1.0, and will be removed in future versions. Reason: Please use tensor.grad, which returns the tensor value of the gradient."
…tPow_API, TestRelu6APIWarnings) with OldIrGuard plus fresh Program guards. Adjusted TestRelu_NanInput to convert the NaN-count tensor to a host scalar before asserting, sidestepping PIR’s static bool(Tensor) restriction.
- Fix shape comparison in TestSinhAPI and TestCoshAPI by converting shapes to lists
- Disable gradient check for TestRelu_NanInput class to handle NaN input cases
- Refactor TestSqrtOutAndAlias to use PIR-compatible API with positional arguments
- Simplify test execution by removing unnecessary startup program call
- Update variable naming and data feeding for better PIR support
- Enable int32 input support for sqrt, tanh, sinh, cosh ops with auto-cast to float32
- Fix shape comparison in TestTanAPI by converting shapes to lists
- Refactor TestRelu_NanInput to support both static and dygraph execution modes
- Update test comments to reflect new int32 input support capabilities
- Deleted the TestSoftRelu class to streamline activation operation tests.
- Updated test creation calls to exclude TestSoftRelu for both FP16 and BF16 classes.
- Added a test_check_output method to override the base class behavior.
- Refactored NaN count calculation to use numpy's isnan method for clarity.
- Set check_prim_pir to False in TestSigmoidBF16 and TestPow classes to improve compatibility with PIR.
- Adjusted test configurations to ensure consistent behavior across activation operation tests.
The TestPow FP16 test was failing because it incorrectly expected the pow
operation to be decomposed in PIR mode (check_prim_pir=True). However,
pow is a primitive operation and should not be decomposed. Changed the
configuration to check_prim_pir=False to match the primitive nature
of the pow operation.
- Remove unnecessary comments and clean up code.
- Adjusted assertions in TestPow_API for clarity and consistency.
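The NaN-handling change listed above (counting NaNs with numpy and converting the count to a host scalar before asserting) can be sketched in plain numpy; `count_nans` is a hypothetical helper name for illustration, not a function in the test file:

```python
import numpy as np

def count_nans(x):
    # np.isnan gives a boolean mask; .sum() counts the True entries.
    # Wrapping the result in int() yields a plain Python scalar, so the
    # assertion never calls bool() on a Tensor, which PIR's static-graph
    # mode disallows.
    return int(np.isnan(x).sum())

x = np.array([1.0, np.nan, 3.0, np.nan], dtype="float32")
assert count_nans(x) == 2
```

The same host-scalar conversion is what lets the assertion run identically in static and dygraph modes.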
@scyyh11 scyyh11 marked this pull request as ready for review September 26, 2025 10:59
Comment thread test/legacy_test/test_activation_op.py Outdated
Comment thread test/legacy_test/test_activation_op.py Outdated
check_prim=False,
check_pir=True,
check_prim_pir=True,
check_prim_pir=False,
Member

check_prim can be turned off, but check_prim_pir cannot; if something is broken under PIR, it should be fixed.

Contributor Author
@scyyh11 scyyh11 Sep 26, 2025

Hello, while fixing this part I ran into the following problem:

  • In primitive.yaml and primitive_ops.h, sigmoid and pow are defined as primitive operators.
  • But in composite_rules.py they also have explicit decomposition rules (e.g. sigmoid → 1/(1+exp(-x))).
  • In the tests, even though I invoke the decomposition pass, these two operators are never actually decomposed; the op list stays unchanged, so the assertion fails.

So my question is: for pow and sigmoid, should they be treated as primitive operators?
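For reference, the composite rule quoted in the comment can be checked numerically. This is a standalone numpy sketch, not Paddle's actual decomposition pass; it compares the rule against an independent tanh-based formulation of sigmoid:

```python
import numpy as np

def sigmoid_composite(x):
    # The decomposition rule cited from composite_rules.py:
    # sigmoid(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 11)
# Independent reference via the identity sigmoid(x) = 0.5 * (1 + tanh(x / 2))
ref = 0.5 * (1.0 + np.tanh(x / 2.0))
assert np.allclose(sigmoid_composite(x), ref)
```

Numerical equivalence of the rule is not in question here; the debate below is about whether the framework should ever apply it to a primitive op.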

Member

The composite-operator details need to be confirmed by @xiaoguoguo626807 after the holiday.

Contributor

In theory these two are composite operators, but decomposing them causes accuracy or performance problems, so in practice they are not decomposed.

Contributor

Did you hit the problem you describe under the old IR? The files you mention all belong to the old IR.

Contributor Author

Did you hit the problem you describe under the old IR? The files you mention all belong to the old IR.

I am migrating from the old IR to the new IR and fixing tests that broke due to framework updates.

@paddle-bot paddle-bot Bot added the contributor External developers label Sep 26, 2025
…ack to True for sigmoid and pow test(need fix).
- skip check_static_comp for prim operator in prim_op_test.py
- refactor the prim_op_type for TestPow and TestSigmoidBF16 in test_activation_op.py since they are both primitive operators.
@scyyh11
Contributor Author

scyyh11 commented Oct 9, 2025

@xiaoguoguo626807 Hello, could you confirm whether sigmoid and pow are currently treated as primitive operators?

Comment thread test/legacy_test/prim_op_test.py Outdated
core.set_prim_eager_enabled(False)

def check_static_comp(self):
# prim op don't need gradient decomposition
Contributor

This doesn't look right; prim ops do need their backward decomposition tested.

Contributor Author

Got it.

class TestSigmoidBF16(OpTest):
def setUp(self):
self.op_type = "sigmoid"
self.prim_op_type = "comp"
Contributor

This operator is special: you can test comp (forward decomposition correctness) and also prim (backward decomposition correctness). Strictly speaking, both should be tested.

@scyyh11 scyyh11 marked this pull request as draft October 10, 2025 08:35
@scyyh11 scyyh11 marked this pull request as ready for review October 10, 2025 09:22
@scyyh11
Contributor Author

scyyh11 commented Oct 10, 2025

@xiaoguoguo626807 Hello, I added TestSigmoidFp32_Comp for the sigmoid operator to test forward-decomposition correctness. For the pow operator I kept self.prim_op_type = "prim". Is this change acceptable?

@xiaoguoguo626807
Contributor

@xiaoguoguo626807 Hello, I added TestSigmoidFp32_Comp for the sigmoid operator to test forward-decomposition correctness. For the pow operator I kept self.prim_op_type = "prim". Is this change acceptable?

pow needs the same treatment; please add it as well.

… and gradient checks for pow operation in FP64 precision
@scyyh11
Contributor Author

scyyh11 commented Oct 11, 2025

@xiaoguoguo626807 Hello, I added TestSigmoidFp32_Comp for the sigmoid operator to test forward-decomposition correctness. For the pow operator I kept self.prim_op_type = "prim". Is this change acceptable?

pow needs the same treatment; please add it as well.

Hello, it's added now; please review.

)
create_test_act_fp16_class(TestBRelu, check_pir=True)
create_test_act_fp16_class(TestRelu6)
create_test_act_fp16_class(TestSoftRelu, check_dygraph=False)
Member

What is the reason for deleting TestSoftRelu? 3090313 says "Deleted the TestSoftRelu class to streamline activation operation tests." — does that mean TestSoftRelu duplicates other tests? It would be best to explain the reason for such changes in the PR description.

Contributor Author
@scyyh11 scyyh11 Oct 11, 2025

Hello, the SoftRelu tests were removed because SoftRelu no longer appears anywhere in the official documentation after version 2.6, and keeping them triggers a "SoftRelu operator not registered" error. I confirmed with @YqGe585 before deleting the related tests.

@scyyh11
Contributor Author

scyyh11 commented Oct 11, 2025

/re-run all-failed

@scyyh11 scyyh11 requested a review from SigureMo October 12, 2025 00:54
Contributor

@xiaoguoguo626807 xiaoguoguo626807 left a comment


LGTM for decomp test

Member

@YqGe585 YqGe585 left a comment


LGTM

@luotao1 luotao1 merged commit 169e64c into PaddlePaddle:develop Oct 13, 2025
68 of 69 checks passed
@scyyh11 scyyh11 deleted the fix/test_activation_op branch October 13, 2025 10:13
SigureMo pushed a commit to cattidea/Paddle that referenced this pull request Oct 14, 2025
* fix: eliminate warning "API "paddle.base.dygraph.tensor_patch_methods.gradient" is deprecated since 2.1.0, and will be removed in future versions. Reason: Please use tensor.grad, which returns the tensor value of the gradient."

* fix: skip unsupported integer gradient checks for ceil/floor prim tests

* fix: Added paddle.pir_utils import and wrapped legacy-only tests (TestPow_API, TestRelu6APIWarnings) with OldIrGuard plus fresh Program guards. Adjusted TestRelu_NanInput to convert the NaN-count tensor to a host scalar before asserting, sidestepping PIR’s static bool(Tensor) restriction.

* fix: improve activation tests for PIR compatibility and shape handling

- Fix shape comparison in TestSinhAPI and TestCoshAPI by converting shapes to lists
- Disable gradient check for TestRelu_NanInput class to handle NaN input cases
- Refactor TestSqrtOutAndAlias to use PIR-compatible API with positional arguments
- Simplify test execution by removing unnecessary startup program call
- Update variable naming and data feeding for better PIR support

* fix: improve activation op tests for type compatibility and PIR support

- Enable int32 input support for sqrt, tanh, sinh, cosh ops with auto-cast to float32
- Fix shape comparison in TestTanAPI by converting shapes to lists
- Refactor TestRelu_NanInput to support both static and dygraph execution modes
- Update test comments to reflect new int32 input support capabilities

* refactor: remove TestSoftRelu class from activation tests

- Deleted the TestSoftRelu class to streamline activation operation tests.
- Updated test creation calls to exclude TestSoftRelu for both FP16 and BF16 classes.

* fix: update TestRelu_NanInput to prevent base class method call

- Added a test_check_output method to override the base class behavior.
- Refactored NaN count calculation to use numpy's isnan method for clarity.

* fix: update activation op tests to disable check_prim_pir

- Set check_prim_pir to False in TestSigmoidBF16 and TestPow classes to improve compatibility with PIR.
- Adjusted test configurations to ensure consistent behavior across activation operation tests.

* fix: correct TestPow FP16 prim checker configuration

The TestPow FP16 test was failing because it incorrectly expected the pow
operation to be decomposed in PIR mode (check_prim_pir=True). However,
pow is a primitive operation and should not be decomposed. Changed the
configuration to check_prim_pir=False to match the primitive nature
of the pow operation.

* fix:
- Remove unnecessary comments and clean up code.
- Adjusted assertions in TestPow_API for clarity and consistency.

* refactor: optimize import

* fix: remove OldIr related test case. revert modified check_prim_pir back to True for sigmoid and pow test(need fix).

* fix:
- skip check_static_comp for prim operator in prim_op_test.py
- refactor the prim_op_type for TestPow and TestSigmoidBF16 in test_activation_op.py since they are both primitive operators.

* fix: reset check_static_comp

* fix: add new TestSigmoidFp32_Comp to verify forward decomposition correctness
  of sigmoid under FP32

* fix: add TestPowFp64_Comp to verify forward decomposition correctness and gradient checks for pow operation in FP64 precision

Labels

contributor External developers · HappyOpenSource (Happy Open Source activity issues and PRs)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants