Conversation

pianpwk commented on Jan 16, 2026


PyTorch is adding a skip_root parameter to linalg_vector_norm that
skips the root portion of the p-norm computation. This enables better
numerics for distributed gradient norm computation.

Update the native_functions.yaml to match PyTorch's new signature.
The XPU implementation doesn't need changes - skip_root defaults to
False and the existing behavior is preserved.
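For context, here is a rough illustration of the numerics argument. This is a sketch, not the PR's code: it assumes skip_root means linalg_vector_norm returns sum(|x|^p) without the final 1/p root for finite p. The shard list and helper names are hypothetical, and the cross-rank all-reduce is simulated with a plain sum.

```python
# Sketch only (not the actual PyTorch implementation). Shows why returning
# sum(|x|^p) without the final root helps distributed grad-norm computation:
# rootless partials add exactly across shards, so the root is applied once.
import torch

def vector_norm_skip_root(x: torch.Tensor, p: float = 2.0) -> torch.Tensor:
    # Hypothetical equivalent of linalg_vector_norm with skip_root=True:
    # the p-norm with the final (1/p) root skipped, i.e. sum over |x_i|^p.
    return x.abs().pow(p).sum()

def global_grad_norm(shards: list[torch.Tensor], p: float = 2.0) -> torch.Tensor:
    # Each "rank" contributes a rootless partial norm; summing the partials
    # and taking the root once avoids the per-shard root -> pow -> root
    # round trip that the current approach requires.
    partials = torch.stack([vector_norm_skip_root(g, p) for g in shards])
    return partials.sum().pow(1.0 / p)  # in practice this sum is an all-reduce

if __name__ == "__main__":
    torch.manual_seed(0)
    shards = [torch.randn(1000) for _ in range(4)]
    reference = torch.linalg.vector_norm(torch.cat(shards), ord=2.0)
    print(global_grad_norm(shards), reference)  # should agree closely
```

Without skip_root, each rank would compute a full local norm (with the root), then re-raise it to the p-th power before the reduction, which introduces avoidable rounding error.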

See: pytorch/pytorch#172602

Authored with Claude.