
NotImplementedError detected in test_ops.py #2254

@daisyden

Description


🐛 Describe the bug

Cases:
op_ut,third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU,test_out_cholesky_xpu_float32
op_ut,third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU,test_out_histogramdd_xpu_float32
op_ut,third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU,test_out_warning_cholesky_inverse_xpu
op_ut,third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU,test_out_warning_cholesky_xpu
op_ut,third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU,test_out_warning_geqrf_xpu
op_ut,third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU,test_out_warning_histogramdd_xpu
op_ut,third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU,test_out_warning_ormqr_xpu
op_ut,third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU,test_variant_consistency_eager_histogramdd_xpu_float32
op_ut,third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU,test_multiple_devices_histogramdd_xpu_float32
op_ut,third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU,test_noncontiguous_samples_histogramdd_xpu_float32

Every case fails with the same message, differing only in the operator name:

> NotImplementedError: The operator '&lt;op&gt;' is not currently implemented for the XPU device. Please open a feature on https://github.com/intel/torch-xpu-ops/issues. You can set the environment variable PYTORCH_ENABLE_XPU_FALLBACK=1 to use the CPU implementation as a fallback for XPU unimplemented operators. WARNING: this will bring unexpected performance compared with running natively on XPU.

| Suite | Test class | Test case | Result | Missing operator |
| --- | --- | --- | --- | --- |
| op_ut | third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU | test_out_cholesky_xpu_float32 | failed | aten::cholesky.out |
| op_ut | third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU | test_out_histogramdd_xpu_float32 | failed | aten::_histogramdd_bin_edges |
| op_ut | third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU | test_out_warning_cholesky_inverse_xpu | failed | aten::cholesky_inverse.out |
| op_ut | third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU | test_out_warning_cholesky_xpu | failed | aten::cholesky.out |
| op_ut | third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU | test_out_warning_geqrf_xpu | failed | aten::geqrf.a |
| op_ut | third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU | test_out_warning_histogramdd_xpu | failed | aten::_histogramdd_bin_edges |
| op_ut | third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU | test_out_warning_ormqr_xpu | failed | aten::ormqr.out |
| op_ut | third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU | test_variant_consistency_eager_histogramdd_xpu_float32 | failed | aten::_histogramdd_bin_edges |
| op_ut | third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU | test_multiple_devices_histogramdd_xpu_float32 | failed | aten::_histogramdd_bin_edges |
| op_ut | third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU | test_noncontiguous_samples_histogramdd_xpu_float32 | failed | aten::_histogramdd_bin_edges |
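All of the failures above share one root cause: these ATen kernels have no native XPU implementation. As an interim workaround, the error message suggests the CPU fallback. A minimal sketch of how one might re-run an affected case with the fallback enabled (the pytest path and `-k` filter are assumptions based on the case list above, adjust to the local checkout):

```shell
# Enable the CPU fallback for ATen ops not yet implemented on XPU.
# Per the error message itself, the fallback runs on CPU, so performance
# will not match a native XPU kernel; this only unblocks the tests.
export PYTORCH_ENABLE_XPU_FALLBACK=1
echo "PYTORCH_ENABLE_XPU_FALLBACK=$PYTORCH_ENABLE_XPU_FALLBACK"

# Then re-run one of the failing cases, e.g. (path assumed from the case list):
# python -m pytest third_party/torch-xpu-ops/test/xpu/test_ops_xpu.py -k cholesky
```

Note that the variable must be set in the environment before the Python process imports torch, so exporting it in the shell (rather than inside the test) is the safe option.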

Versions

main

Metadata

Labels

skipped (Used for temp UT failure to parallel fix)
