Conversation

@lchen2331
Contributor

Fixes #2703

The issue is caused by an invalid type conversion: when the input tensor holds integer values but the padding value is a double, the cast produces incorrect results. The fix clamps the padding value to the valid min/max range of the input dtype before casting.
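
For context, a minimal standalone sketch of the clamping approach described above, assuming padding values of ±infinity as used by min/max reductions; the function name and structure here are illustrative, not the actual kernel code:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <limits>

// Clamp a double padding value into the representable range of the target
// integer dtype before casting; casting an out-of-range double (e.g. -inf)
// directly to an integer type is undefined behavior.
// Note: for int64 the dtype's max is not exactly representable as a double,
// so a real implementation would treat the upper bound (+inf) specially;
// the lower-bound (-inf) path shown here is exact.
template <typename scalar_t>
scalar_t clamp_padding_value(double padding_value) {
  const double lo = static_cast<double>(std::numeric_limits<scalar_t>::lowest());
  const double hi = static_cast<double>(std::numeric_limits<scalar_t>::max());
  return static_cast<scalar_t>(std::clamp(padding_value, lo, hi));
}

int main() {
  // -inf is the natural padding value for a max reduction; for integer dtypes
  // it now maps to the dtype's minimum instead of an undefined cast result.
  const double pad = -std::numeric_limits<double>::infinity();
  std::cout << clamp_padding_value<int32_t>(pad) << "\n";  // -2147483648
  std::cout << clamp_padding_value<int64_t>(pad) << "\n";  // -9223372036854775808
}
```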

Copilot AI review requested due to automatic review settings January 23, 2026 02:27
Contributor

Copilot AI left a comment


Pull request overview

This PR fixes integer dtype support in NestedTensor amin/amax/argmin operations by addressing an invalid type conversion between the double padding value and integer tensor values. The fix clamps padding values to the valid min/max range of the target dtype.

Changes:

  • Replaced the direct static_cast of padding_value with a call to the _get_padding_value helper, which handles the conversion correctly for both floating-point and integer dtypes (see the sketch below)
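
Below is a hedged sketch of what a helper along the lines of _get_padding_value might do. The helper name comes from the summary above, but the signature and body here are assumptions for illustration, not the actual torch-xpu-ops implementation:

```cpp
#include <cmath>
#include <cstdint>
#include <iostream>
#include <limits>
#include <type_traits>

// Illustrative stand-in for the helper mentioned above (signature assumed).
template <typename scalar_t>
scalar_t get_padding_value(double padding_value) {
  if constexpr (std::is_floating_point_v<scalar_t>) {
    // Floating-point dtypes represent +/-inf directly, so a plain cast is fine.
    return static_cast<scalar_t>(padding_value);
  } else {
    // Integer dtypes: map +/-inf to the dtype's extremes instead of
    // performing an out-of-range (undefined) cast.
    if (std::isinf(padding_value)) {
      return padding_value > 0 ? std::numeric_limits<scalar_t>::max()
                               : std::numeric_limits<scalar_t>::lowest();
    }
    return static_cast<scalar_t>(padding_value);
  }
}

int main() {
  // Call-site shape of the change (illustrative):
  //   before: scalar_t v = static_cast<scalar_t>(padding_value);
  //   after:  scalar_t v = _get_padding_value<scalar_t>(padding_value);
  const double pad = -std::numeric_limits<double>::infinity();
  std::cout << get_padding_value<float>(pad) << " "
            << get_padding_value<int32_t>(pad) << "\n";  // -inf -2147483648
}
```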


@lchen2331 lchen2331 requested a review from CuiYifeng January 23, 2026 06:12
Contributor

@CuiYifeng CuiYifeng left a comment


LGTM.

@CuiYifeng
Contributor

Please update the PR title since the fix is for NestedTensor, thanks

@lchen2331 lchen2331 changed the title Support integer types in amin/amax/argmin Fix NestedTensor min/max/argmin operations for integer dtypes Jan 23, 2026
@lchen2331 lchen2331 changed the title Fix NestedTensor min/max/argmin operations for integer dtypes Fix NestedTensor amin/amax/argmin operations for integer dtypes Jan 23, 2026
Contributor

@guangyey guangyey left a comment


Nice!

@CuiYifeng
Contributor

New Passing Known Issues in #2703:

op_ut,third_party.torch-xpu-ops.test.xpu.test_nestedtensor_xpu.TestNestedTensorDeviceTypeXPU,test_jagged_amin_dtypes_xpu_int32
op_ut,third_party.torch-xpu-ops.test.xpu.test_nestedtensor_xpu.TestNestedTensorDeviceTypeXPU,test_jagged_amin_dtypes_xpu_int64
op_ut,third_party.torch-xpu-ops.test.xpu.test_nestedtensor_xpu.TestNestedTensorDeviceTypeXPU,test_jagged_argmin_dtypes_xpu_int32
op_ut,third_party.torch-xpu-ops.test.xpu.test_nestedtensor_xpu.TestNestedTensorDeviceTypeXPU,test_jagged_argmin_dtypes_xpu_int64
op_ut,third_party.torch-xpu-ops.test.xpu.test_nestedtensor_xpu.TestNestedTensorDeviceTypeXPU,test_jagged_min_dtypes_xpu_int32
op_ut,third_party.torch-xpu-ops.test.xpu.test_nestedtensor_xpu.TestNestedTensorDeviceTypeXPU,test_jagged_min_dtypes_xpu_int64

@github-actions

Performance outliers, please check!

  • 🟡 [80%, 90%), may be fluctuations
| Category | Model | Target vs. Baseline [Eager] | Target vs. Baseline [Inductor] |
| --- | --- | --- | --- |
| torchbench_bfloat16_training | resnet18 | 0.937951 | 0.83756 |

@CuiYifeng CuiYifeng added this pull request to the merge queue Jan 28, 2026
Merged via the queue into main with commit bc4a992 Jan 28, 2026
124 of 130 checks passed
@CuiYifeng CuiYifeng deleted the NestedTensor branch January 28, 2026 02:00