【UnitTestFix No.1】fix test_activation_op.py #75553
luotao1 merged 19 commits into PaddlePaddle:develop from …
Conversation
….gradient" is deprecated since 2.1.0, and will be removed in future versions. Reason: Please use tensor.grad, which returns the tensor value of the gradient."
…tPow_API, TestRelu6APIWarnings) with OldIrGuard plus fresh Program guards. Adjusted TestRelu_NanInput to convert the NaN-count tensor to a host scalar before asserting, sidestepping PIR’s static bool(Tensor) restriction.
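A minimal sketch of the OldIrGuard-plus-Program-guard wrapping that commit describes (the test body here is illustrative, not the PR's actual code):

```python
import paddle
from paddle.pir_utils import OldIrGuard

with OldIrGuard():  # run the legacy-only test body under the old IR
    main = paddle.static.Program()
    startup = paddle.static.Program()
    with paddle.static.program_guard(main, startup):
        x = paddle.static.data(name="x", shape=[2, 3], dtype="float32")
        out = paddle.pow(x, 2.0)
        # ... legacy-only assertions would go here ...
```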
- Fix shape comparison in TestSinhAPI and TestCoshAPI by converting shapes to lists
- Disable gradient check for TestRelu_NanInput class to handle NaN input cases
- Refactor TestSqrtOutAndAlias to use PIR-compatible API with positional arguments
- Simplify test execution by removing unnecessary startup program call
- Update variable naming and data feeding for better PIR support
- Enable int32 input support for sqrt, tanh, sinh, cosh ops with auto-cast to float32 (sketched below)
- Fix shape comparison in TestTanAPI by converting shapes to lists
- Refactor TestRelu_NanInput to support both static and dygraph execution modes
- Update test comments to reflect new int32 input support capabilities
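As a hedged illustration of that int32 path (whether the cast happens inside the op or in the test harness is an assumption here):

```python
import paddle

x = paddle.to_tensor([1, 4, 9], dtype="int32")
# Integer input is handled by casting to float32 before the math op.
y = paddle.sqrt(x.astype("float32"))
print(y.numpy())  # [1. 2. 3.]
```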
- Deleted the TestSoftRelu class to streamline activation operation tests. - Updated test creation calls to exclude TestSoftRelu for both FP16 and BF16 classes.
- Added a test_check_output method to override the base class behavior.
- Refactored NaN count calculation to use numpy's isnan method for clarity (sketched below).
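A minimal sketch of that host-side NaN count, assuming a dygraph tensor (the actual test code may differ):

```python
import numpy as np
import paddle

x = paddle.to_tensor([1.0, float("nan"), 2.0])
out = paddle.nn.functional.relu(x)
# Reduce to a host-side Python int with numpy; bool(Tensor) is not
# allowed in PIR static mode, so the count must leave the graph first.
nan_count = int(np.isnan(out.numpy()).sum())
print(nan_count)
```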
- Set check_prim_pir to False in TestSigmoidBF16 and TestPow classes to improve compatibility with PIR.
- Adjusted test configurations to ensure consistent behavior across activation operation tests.
The TestPow FP16 test was failing because it incorrectly expected the pow operation to be decomposed in PIR mode (check_prim_pir=True). However, pow is a primitive operation and should not be decomposed. Changed the configuration to check_prim_pir=False to match the primitive nature of the pow operation.
  check_prim=False,
  check_pir=True,
- check_prim_pir=True,
+ check_prim_pir=False,
check_prim can be turned off, but check_prim_pir cannot; if something is broken under PIR, it should be fixed instead.
Hi, while fixing this part I ran into the following issues (see the sketch below):

- In primitive.yaml and primitive_ops.h, sigmoid and pow are defined as primitive ops.
- But in composite_rules.py they also have explicit decomposition rules (e.g. sigmoid → 1/(1+exp(-x))).
- During testing, even though I invoke the decomposition pass, these two ops are never actually decomposed; the op list stays unchanged, so the assertion fails.

So my question is: should pow and sigmoid be treated as primitive ops?
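For reference, a minimal sketch of the sigmoid decomposition rule quoted above (the exact signature in composite_rules.py is not reproduced here):

```python
import paddle

def sigmoid_composite(x):
    # The rule referenced above: sigmoid(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + paddle.exp(-x))

x = paddle.to_tensor([-1.0, 0.0, 1.0])
print(sigmoid_composite(x).numpy())
print(paddle.nn.functional.sigmoid(x).numpy())  # should match numerically
```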
In theory these two are composite ops, but decomposing them introduces precision or performance problems, so in practice they are not decomposed.
Did you hit the problem you describe under the old IR? The files you mention are all old-IR ones.
> Did you hit the problem you describe under the old IR? The files you mention are all old-IR ones.

I'm migrating tests from the old IR to the new IR, fixing the ones that broke because of framework updates.
…ack to True for sigmoid and pow test (need fix).
@xiaoguoguo626807 Hi, could you please take a look and confirm?
  core.set_prim_eager_enabled(False)

  def check_static_comp(self):
      # prim op don't need gradient decomposition
This doesn't look right: a prim_op does need its backward decomposition tested.
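For context, a hedged sketch of how these tests usually exercise the backward decomposition (flag names from the OpTest suite; the exact call in this file is an assumption):

```python
def test_check_grad(self):
    # check_prim_pir=True asks OpTest to also verify the decomposed
    # backward graph under PIR, not only the raw kernel gradient.
    self.check_grad(['X'], 'Out', check_prim=True, check_prim_pir=True)
```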
  class TestSigmoidBF16(OpTest):
      def setUp(self):
          self.op_type = "sigmoid"
          self.prim_op_type = "comp"
This op is a special case: you can test it as comp to verify the forward decomposition, and as prim to verify the backward decomposition. Strictly speaking, both should be tested.
…rectness of sigmoid under FP32
@xiaoguoguo626807 Hi, I added TestSigmoidFp32_Comp for the sigmoid op to verify forward decomposition correctness (sketched below). For the pow op I kept self.prim_op_type = "prim". Would this change be acceptable?
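A hedged sketch of what such a comp-mode test can look like (the data shapes and exact flags here are assumptions, not the PR's actual code):

```python
import numpy as np
import paddle
from op_test import OpTest  # Paddle's unit-test base class


class TestSigmoidFp32_Comp(OpTest):
    def setUp(self):
        self.op_type = "sigmoid"
        self.prim_op_type = "comp"  # exercise the forward composite rule
        self.python_api = paddle.nn.functional.sigmoid
        x = np.random.uniform(-1, 1, [11, 17]).astype("float32")
        self.inputs = {'X': x}
        self.outputs = {'Out': 1 / (1 + np.exp(-x))}

    def test_check_output(self):
        # verifies the decomposed forward against the reference output
        self.check_output(check_prim_pir=True)
```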
pow needs the same treatment; could you add it as well?
… and gradient checks for pow operation in FP64 precision
Hi, it has been added now. Could you please review it?
  )
  create_test_act_fp16_class(TestBRelu, check_pir=True)
  create_test_act_fp16_class(TestRelu6)
- create_test_act_fp16_class(TestSoftRelu, check_dygraph=False)
What is the reason for deleting TestSoftRelu? 3090313 says "Deleted the TestSoftRelu class to streamline activation operation tests." Does that mean TestSoftRelu duplicated other tests? It would be best to explain the reason for this change in the PR description.
Hi, the reason for deleting the SoftRelu tests is that SoftRelu no longer appears anywhere in the official documentation after version 2.6, and keeping the tests triggers a "SoftRelu operator not registered" error. I removed the related tests after confirming with @YqGe585.
/re-run all-failed
xiaoguoguo626807 left a comment
LGTM for decomp test
* fix: eliminate warning "API "paddle.base.dygraph.tensor_patch_methods.gradient" is deprecated since 2.1.0, and will be removed in future versions. Reason: Please use tensor.grad, which returns the tensor value of the gradient."
* fix: skip unsupported integer gradient checks for ceil/floor prim tests
* fix: Added paddle.pir_utils import and wrapped legacy-only tests (TestPow_API, TestRelu6APIWarnings) with OldIrGuard plus fresh Program guards. Adjusted TestRelu_NanInput to convert the NaN-count tensor to a host scalar before asserting, sidestepping PIR’s static bool(Tensor) restriction.
* fix: improve activation tests for PIR compatibility and shape handling
  - Fix shape comparison in TestSinhAPI and TestCoshAPI by converting shapes to lists
  - Disable gradient check for TestRelu_NanInput class to handle NaN input cases
  - Refactor TestSqrtOutAndAlias to use PIR-compatible API with positional arguments
  - Simplify test execution by removing unnecessary startup program call
  - Update variable naming and data feeding for better PIR support
* fix: improve activation op tests for type compatibility and PIR support
  - Enable int32 input support for sqrt, tanh, sinh, cosh ops with auto-cast to float32
  - Fix shape comparison in TestTanAPI by converting shapes to lists
  - Refactor TestRelu_NanInput to support both static and dygraph execution modes
  - Update test comments to reflect new int32 input support capabilities
* refactor: remove TestSoftRelu class from activation tests
  - Deleted the TestSoftRelu class to streamline activation operation tests.
  - Updated test creation calls to exclude TestSoftRelu for both FP16 and BF16 classes.
* fix: update TestRelu_NanInput to prevent base class method call
  - Added a test_check_output method to override the base class behavior.
  - Refactored NaN count calculation to use numpy's isnan method for clarity.
* fix: update activation op tests to disable check_prim_pir
  - Set check_prim_pir to False in TestSigmoidBF16 and TestPow classes to improve compatibility with PIR.
  - Adjusted test configurations to ensure consistent behavior across activation operation tests.
* fix: correct TestPow FP16 prim checker configuration
  The TestPow FP16 test was failing because it incorrectly expected the pow operation to be decomposed in PIR mode (check_prim_pir=True). However, pow is a primitive operation and should not be decomposed. Changed the configuration to check_prim_pir=False to match the primitive nature of the pow operation.
* fix:
  - Remove unnecessary comments and clean up code.
  - Adjusted assertions in TestPow_API for clarity and consistency.
* refactor: optimize import
* fix: remove OldIr related test case. revert modified check_prim_pir back to True for sigmoid and pow test (need fix).
* fix:
  - skip check_static_comp for prim operator in prim_op_test.py
  - refactor the prim_op_type for TestPow and TestSigmoidBF16 in test_activation_op.py since they are both primitive operators.
* fix: reset check_static_comp
* fix: add new TestSigmoidFp32_Comp to verify forward decomposition correctness of sigmoid under FP32
* fix: add TestPowFp64_Comp to verify forward decomposition correctness and gradient checks for pow operation in FP64 precision
PR Category
Operator Mechanism
PR Types
Bug fixes
Description
Fixed the broken tests in test_activation_op.py.
Removed the SoftRelu-related tests, since SoftRelu is no longer supported after version 2.6.
@luotao1 @YqGe585