Labels
bug_fix_stage3, skipped (used for temp UT failure to parallel fix)
Description
🐛 Describe the bug
Cases (a minimal reproduction sketch follows this list):
op_ut,third_party.torch-xpu-ops.test.xpu.functorch.test_ops_functorch_xpu.TestOperatorsXPU,test_vmapjvpall_has_batch_rule_nn_functional_logsigmoid_xpu_float32
op_ut,third_party.torch-xpu-ops.test.xpu.functorch.test_ops_functorch_xpu.TestOperatorsXPU,test_vmapjvpall_nn_functional_logsigmoid_xpu_float32
op_ut,third_party.torch-xpu-ops.test.xpu.functorch.test_ops_functorch_xpu.TestOperatorsXPU,test_vmapvjp_has_batch_rule_nn_functional_conv3d_xpu_float32
op_ut,third_party.torch-xpu-ops.test.xpu.functorch.test_ops_functorch_xpu.TestOperatorsXPU,test_vmapjvpvjp_nn_functional_logsigmoid_xpu_float32
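All four cases fail inside the same vmap + jvp composition with the same forward-AD shape check. The sketch below is a hypothetical minimal reproduction, not code taken from the test suite: it assumes an available "xpu" device (swap in "cpu" to check the composition path itself), batches the scalar float32 SampleInput along a new size-1 dimension the way the exhaustive vmap test does, and composes torch.func.vmap over torch.func.jvp of torch.nn.functional.logsigmoid.

# Hypothetical minimal reproduction (assumption: mirrors the failing
# vmap(jvp(...)) composition for the 0-dim float32 SampleInput; requires a
# PyTorch build with an "xpu" device).
import torch
import torch.nn.functional as F
from torch.func import jvp, vmap

device = "xpu"
# One scalar sample batched along a new size-1 dimension, matching
# SampleInput(input=Tensor[size=()], ...) from the failure message.
primals = torch.randn(1, device=device, dtype=torch.float32)
tangents = torch.randn(1, device=device, dtype=torch.float32)

def jvp_logsigmoid(primal, tangent):
    # Inside vmap each slice is 0-dim, matching the failing sample input.
    return jvp(F.logsigmoid, (primal,), (tangent,))

# Expected to raise: "Trying to set a forward gradient that has a different
# size than that of the original Tensor ... size [] ... size [1]."
print(vmap(jvp_logsigmoid)(primals, tangents))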
_ TestOperatorsXPU.test_vmapjvpall_has_batch_rule_nn_functional_logsigmoid_xpu_float32 _
[gw5] linux -- Python 3.10.19 /tmp/xpu-tool/Python/3.10.19/x64/bin/python
Traceback (most recent call last):
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1151, in test_wrapper
    return test(*args, **kwargs)
  File "/__w/torch-xpu-ops/torch-xpu-ops/pytorch/third_party/torch-xpu-ops/test/xpu/functorch/test_ops_functorch_xpu.py", line 1480, in test_vmapjvpall_has_batch_rule
    check_vmap_fallback(self, test, op, dry_run=False)
  File "/__w/torch-xpu-ops/torch-xpu-ops/pytorch/third_party/torch-xpu-ops/test/xpu/functorch/common_utils.py", line 607, in check_vmap_fallback
    thunk()
  File "/__w/torch-xpu-ops/torch-xpu-ops/pytorch/third_party/torch-xpu-ops/test/xpu/functorch/test_ops_functorch_xpu.py", line 1471, in test
    for loop_out, batched_out in get_fallback_and_vmap_exhaustive(
  File "/__w/torch-xpu-ops/torch-xpu-ops/pytorch/third_party/torch-xpu-ops/test/xpu/functorch/common_utils.py", line 397, in get_fallback_and_vmap_exhaustive
    for quantities in _compute_quantities_for_vmap_test(
  File "/__w/torch-xpu-ops/torch-xpu-ops/pytorch/third_party/torch-xpu-ops/test/xpu/functorch/common_utils.py", line 317, in _compute_quantities_for_vmap_test
    batched_out = vmap(op, in_dims=in_dims, out_dims=out_dim)(
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/site-packages/torch/_functorch/apis.py", line 208, in wrapped
    return vmap_impl(
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 283, in vmap_impl
    return _flat_vmap(
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/site-packages/torch/_functorch/vmap.py", line 433, in _flat_vmap
    batched_outputs = func(*batched_inputs, **kwargs)
  File "/__w/torch-xpu-ops/torch-xpu-ops/pytorch/third_party/torch-xpu-ops/test/xpu/functorch/test_ops_functorch_xpu.py", line 333, in wrapped
    primals_out, tangents_out = jvp(fn, primals_in, tangents_in)
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py", line 1043, in jvp
    return _jvp_with_argnums(
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/site-packages/torch/_functorch/eager_transforms.py", line 1102, in _jvp_with_argnums
    result_duals = func(*duals)
  File "/__w/torch-xpu-ops/torch-xpu-ops/pytorch/third_party/torch-xpu-ops/test/xpu/functorch/test_ops_functorch_xpu.py", line 149, in wrapped
    result = f(*_args, **kwargs)
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/site-packages/torch/testing/_internal/opinfo/core.py", line 1186, in __call__
    return self.op(*args, **kwargs)
RuntimeError: Trying to set a forward gradient that has a different size than that of the original Tensor, this is not supported. Tensor is of size [] while the given forward gradient is of size [1].
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/unittest/case.py", line 59, in testPartExecutor
    yield
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/unittest/case.py", line 591, in run
    self._callTestMethod(testMethod)
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
    method()
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3300, in wrapper
    method(*args, **kwargs)
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 428, in instantiated_test
    result = test(self, **param_kwargs)
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1707, in wrapper
    fn(*args, **kwargs)
  File "/tmp/xpu-tool/Python/3.10.19/x64/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1163, in test_wrapper
    raise e_tracked from e
Exception: Trying to set a forward gradient that has a different size than that of the original Tensor, this is not supported. Tensor is of size [] while the given forward gradient is of size [1].
Caused by sample input at index 2: SampleInput(input=Tensor[size=(), device="xpu:0", dtype=torch.float32], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=2 PYTORCH_TEST_WITH_SLOW=1 python test/xpu/functorch/test_ops_functorch_xpu.py TestOperatorsXPU.test_vmapjvpall_has_batch_rule_nn_functional_logsigmoid_xpu_float32
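For context (not from the issue itself): the error text matches the shape check forward-mode AD performs when a tangent is attached to a primal of a different size. The snippet below is an assumption-laden illustration of that check in isolation, using torch.autograd.forward_ad.make_dual with a 0-dim primal and a size-[1] tangent; it is not the test's code path, only a way to surface the same message outside of vmap.

# Illustration only (assumption): pairs a 0-dim primal with a size-[1] tangent
# in forward-mode AD directly, which should trip the same size check.
import torch
import torch.autograd.forward_ad as fwAD

primal = torch.zeros((), dtype=torch.float32)   # size []
tangent = torch.ones(1, dtype=torch.float32)    # size [1] -- deliberate mismatch
with fwAD.dual_level():
    fwAD.make_dual(primal, tangent)  # raises: "Trying to set a forward gradient..."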
Versions