Commit 169e64c
【UnitTestFix No.1】fix test_activation_op.py (#75553)
* fix: eliminate the deprecation warning: "API paddle.base.dygraph.tensor_patch_methods.gradient is deprecated since 2.1.0, and will be removed in future versions. Reason: Please use tensor.grad, which returns the tensor value of the gradient."
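The migration pattern (call `tensor.grad` instead of the deprecated `tensor.gradient()`) can be sketched with a toy stand-in; `ToyTensor` below is hypothetical and only mimics the two Paddle APIs involved:

```python
import warnings

class ToyTensor:
    """Hypothetical stand-in mimicking Paddle's old and new gradient APIs."""
    def __init__(self, grad_value):
        self._grad = grad_value

    def gradient(self):
        # Old API: emits the deprecation warning the commit silences.
        warnings.warn(
            'API "gradient" is deprecated since 2.1.0; use tensor.grad instead.',
            DeprecationWarning,
        )
        return self._grad

    @property
    def grad(self):
        # New API: same value, no warning.
        return self._grad

t = ToyTensor(3.0)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old = t.gradient()  # warns
    new = t.grad        # silent
print(old, new, len(caught))  # 3.0 3.0 1
```

Switching every `gradient()` call site to `.grad` leaves the test semantics unchanged while the warning disappears.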
* fix: skip unsupported integer gradient checks for ceil/floor prim tests
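A minimal sketch of the skip logic: gradient checks are only meaningful for floating dtypes, so integer-typed ceil/floor tests bail out early (the test class and dtype attribute here are hypothetical, not Paddle's actual OpTest machinery):

```python
import unittest
import numpy as np

def grad_check_supported(dtype):
    """Gradient checks only make sense for floating dtypes; ceil/floor
    over integers have no meaningful derivative to verify."""
    return np.issubdtype(np.dtype(dtype), np.floating)

class ToyCeilTest(unittest.TestCase):
    dtype = "int32"  # hypothetical parameterization of the op test

    def test_check_grad(self):
        if not grad_check_supported(self.dtype):
            self.skipTest("integer inputs: gradient check not supported")
        # ... the real gradient check would run here ...

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ToyCeilTest)
)
print(len(result.skipped))  # 1
```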
* fix: add a paddle.pir_utils import and wrap the legacy-only tests (TestPow_API, TestRelu6APIWarnings) with OldIrGuard plus fresh Program guards. Adjust TestRelu_NanInput to convert the NaN-count tensor to a host scalar before asserting, sidestepping PIR's restriction on bool(Tensor) in static mode.
* fix: improve activation tests for PIR compatibility and shape handling
- Fix shape comparison in TestSinhAPI and TestCoshAPI by converting shapes to lists
- Disable gradient check for TestRelu_NanInput class to handle NaN input cases
- Refactor TestSqrtOutAndAlias to use PIR-compatible API with positional arguments
- Simplify test execution by removing unnecessary startup program call
- Update variable naming and data feeding for better PIR support
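The shape-comparison fix comes down to tuple-vs-list inequality: a NumPy result reports its shape as a tuple, while the expected shape in the test is a list (as static-graph results may report), so one side must be normalized. A minimal NumPy sketch:

```python
import numpy as np

x = np.random.uniform(0.1, 1.0, [10, 12]).astype("float32")
out = np.sinh(x)

# A tuple and a list never compare equal, even with identical dims:
assert (10, 12) != [10, 12]

# Normalizing both sides to lists makes the check robust:
expected_shape = [10, 12]  # e.g. a shape reported as a list by the framework
assert list(out.shape) == expected_shape
```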
* fix: improve activation op tests for type compatibility and PIR support
- Enable int32 input support for sqrt, tanh, sinh, cosh ops with auto-cast to float32
- Fix shape comparison in TestTanAPI by converting shapes to lists
- Refactor TestRelu_NanInput to support both static and dygraph execution modes
- Update test comments to reflect new int32 input support capabilities
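The int32 auto-cast path can be sketched in NumPy terms (the helper name is hypothetical; the real cast happens inside Paddle's op handling):

```python
import numpy as np

def sqrt_with_autocast(x):
    """Hypothetical mirror of the new behavior: integer inputs are
    promoted to float32 before computing sqrt."""
    x = np.asarray(x)
    if np.issubdtype(x.dtype, np.integer):
        x = x.astype("float32")
    return np.sqrt(x)

out = sqrt_with_autocast(np.array([1, 4, 9], dtype="int32"))
print(out, out.dtype)  # [1. 2. 3.] float32
```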
* refactor: remove TestSoftRelu class from activation tests
- Deleted the TestSoftRelu class to streamline activation operation tests.
- Updated test creation calls to exclude TestSoftRelu for both FP16 and BF16 classes.
* fix: update TestRelu_NanInput to prevent base class method call
- Added a test_check_output method to override the base class behavior.
- Refactored NaN count calculation to use numpy's isnan method for clarity.
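The NaN-count refactor, sketched with NumPy (a relu-like `np.maximum` stands in for the op under test): the count is reduced to a plain Python int before any assertion, because calling bool() on a framework tensor is rejected in PIR static mode.

```python
import numpy as np

x = np.array([0.0, np.nan, 1.5, np.nan], dtype="float32")
out = np.maximum(x, 0.0)  # relu-like; NaN propagates through

# Convert the count to a host scalar before asserting.
nan_count = int(np.isnan(out).sum())
print(nan_count)  # 2
```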
* fix: update activation op tests to disable check_prim_pir
- Set check_prim_pir to False in TestSigmoidBF16 and TestPow classes to improve compatibility with PIR.
- Adjusted test configurations to ensure consistent behavior across activation operation tests.
* fix: correct TestPow FP16 prim checker configuration
The TestPow FP16 test was failing because it incorrectly expected the pow
operation to be decomposed in PIR mode (check_prim_pir=True). However,
pow is a primitive operation and should not be decomposed. Changed the
configuration to check_prim_pir=False to match the primitive nature
of the pow operation.
* fix:
- Remove unnecessary comments and clean up code.
- Adjusted assertions in TestPow_API for clarity and consistency.
* refactor: optimize import
* fix: remove the OldIr-related test cases; revert check_prim_pir back to True for the sigmoid and pow tests (needs a follow-up fix).
* fix:
- skip check_static_comp for prim operator in prim_op_test.py
- refactor the prim_op_type for TestPow and TestSigmoidBF16 in test_activation_op.py since they are both primitive operators.
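The reasoning behind the skip can be reduced to a guard like the following (a hypothetical helper, not the actual prim_op_test.py code): an operator marked "prim" is itself primitive, so there is no static decomposition to compare against, and only "comp" operators get that check.

```python
def should_check_static_comp(prim_op_type):
    """Hypothetical guard mirroring the prim_op_test.py change."""
    return prim_op_type == "comp"

assert should_check_static_comp("comp") is True
assert should_check_static_comp("prim") is False
```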
* fix: reset check_static_comp
* fix: add new TestSigmoidFp32_Comp to verify forward decomposition correctness of sigmoid under FP32
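What such a forward-decomposition check verifies can be sketched in NumPy: sigmoid rewritten through primitives (exp, add, divide) must agree with a higher-precision reference.

```python
import numpy as np

def sigmoid_composite(x):
    """Sigmoid expressed through primitive ops (exp, add, divide), the
    kind of decomposition the FP32 comp test is meant to verify."""
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5, 5, 101).astype("float32")
reference = 1.0 / (1.0 + np.exp(-x.astype("float64")))
close = np.allclose(sigmoid_composite(x), reference, atol=1e-6)
print(close)  # True
```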
* fix: add TestPowFp64_Comp to verify forward decomposition correctness and gradient checks for the pow operation in FP64 precision
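The gradient side of such a check can be sketched as a central finite-difference comparison in FP64, the precision that allows tight tolerances (pow with exponent 3 is an illustrative choice, not the test's actual parameters):

```python
import numpy as np

# Forward: pow(x, 3); analytic gradient: 3 * x**2.
x = np.random.uniform(1.0, 2.0, [8]).astype("float64")
analytic = 3.0 * x**2

# Central finite difference in FP64.
eps = 1e-6
numeric = ((x + eps) ** 3 - (x - eps) ** 3) / (2 * eps)

assert np.allclose(analytic, numeric, rtol=1e-7)
```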
1 parent 31f801d
1 file changed: 145 additions & 118 deletions