
Conversation

@xiaowangintel
Contributor

Related to #2772

The ldexp implementation was changed in pytorch/pytorch#168357: the new implementation adds a stub kernel to replace the decomposition. As a result, using the ldexp operator with stock PyTorch on XPU fails with 'DispatchStub: missing kernel for xpu'.
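For context, a fix along these lines would register an XPU kernel against the new stub, mirroring the REGISTER_XPU_DISPATCH pattern quoted later in this thread. A minimal sketch only; the stub and kernel names (`ldexp_stub`, `xpu::ldexp_kernel`) are assumptions for illustration and are not taken from this PR:

```c++
// Hypothetical sketch (names assumed, not from this PR): register an XPU
// kernel for the new ldexp stub so the dispatcher finds an XPU
// implementation instead of raising "DispatchStub: missing kernel for xpu".
REGISTER_XPU_DISPATCH(ldexp_stub, &xpu::ldexp_kernel);
```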

Contributor

@CuiYifeng left a comment


@xiaowangintel Please check the UT summary. I noticed that there are still ldexp-related cases in the failure list.

REGISTER_XPU_DISPATCH(
    shifted_chebyshev_polynomial_w_stub,
    &xpu::shifted_chebyshev_polynomial_w_kernel);

Contributor


Please remove this blank line.

@xiaowangintel
Contributor Author

> @xiaowangintel Please check the UT summary. I noticed that there are still ldexp-related cases in the failure list.

Synced with @daisyden: we need to run CI first and make sure the relevant tests pass. Then we can consider removing items from the failure list.

template <typename scalar_t>
struct LdexpFunctor {
  scalar_t operator()(scalar_t x, int exp) const {
    return ::ldexp(x, exp);
  }
};
Contributor


One more question: which component provides ::ldexp?
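For reference, one way to address this question is to qualify the call explicitly instead of relying on the unqualified `::ldexp` from the global namespace. A hedged sketch, assuming the functor is compiled as SYCL device code where the SYCL 2020 built-in `sycl::ldexp` is available; this is not necessarily what the PR does:

```c++
#include <sycl/sycl.hpp>

template <typename scalar_t>
struct LdexpFunctor {
  scalar_t operator()(scalar_t x, int exp) const {
    // Qualify the call so the provider of the math function is unambiguous;
    // sycl::ldexp is the SYCL 2020 built-in (assumption: device code).
    return sycl::ldexp(x, exp);
  }
};
```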

@github-actions

Performance outliers, please check!

  • 🟡 [80%, 90%), may be fluctuations

| Category | Model | Target vs. Baseline [Eager] | Target vs. Baseline [Inductor] |
| --- | --- | --- | --- |
| torchbench_bfloat16_training | resnet18 | 0.952289 | 0.854704 |
