[release/2.7] support experimental CU carveout #2700
Merged
ROCm Repo Management API / Tests / Tests / Test Distributed / Run pytorch_distributed_2
failed
Oct 8, 2025 in 0s
TestDistBackendWithSpawn.test_ddp_apply_optim_in_backward_ignored_params failed
Details
TestDistBackendWithSpawn.test_ddp_apply_optim_in_backward_ignored_params
AssertionError: Scalars are not equal!
Expected 0 but got -6.
Absolute difference: 6
Relative difference: inf
Expected zero exit code but got -6 for pid: 550066
Stack trace
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_distributed.py", line 605, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_distributed.py", line 845, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_distributed.py", line 920, in _check_return_codes
self.assertEqual(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 4123, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 0 but got -6.
Absolute difference: 6
Relative difference: inf
Expected zero exit code but got -6 for pid: 550066
Standard error
/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/backends/cudnn/__init__.py:144: UserWarning: cuDNN Benchmark limit is not supported in MIOpen and will have no effect. (Triggered internally at /var/lib/jenkins/pytorch/torch/csrc/cuda/Module.cpp:1920.)
torch._C._cuda_set_cudnn_benchmark_limit(_benchmark_limit)