
[Cpp API Compatibility] Add TORCH_WARN macro and fix resize_ #78576

Open
youge325 wants to merge 1 commit into PaddlePaddle:develop from youge325:cAlign

Conversation

Contributor

@youge325 youge325 commented Apr 3, 2026

PR Category

Execute Infrastructure

PR Types

New features

Description

Add the TORCH_WARN macro

Fix the resize_ interface compilation errors encountered while building DeepEP

/home/may/miniconda3/lib/python3.13/site-packages/paddle/include/paddle/phi/api/include/compat/ATen/ops/resize.h: In member function ‘const at::Tensor& at::Tensor::resize_(at::IntArrayRef, std::optional<c10::MemoryFormat>) const’:
/home/may/miniconda3/lib/python3.13/site-packages/paddle/include/paddle/phi/api/include/compat/ATen/ops/resize.h:80:60: error: ‘using std::__shared_ptr_access<phi::DenseTensor, __gnu_cxx::_S_atomic, false, false>::element_type = class phi::DenseTensor’ {aka ‘class phi::DenseTensor’} has no member named ‘offset’
   80 |           : dense_tensor->Holder()->size() - dense_tensor->offset();
      |                                                            ^~~~~~
/home/may/miniconda3/lib/python3.13/site-packages/paddle/include/paddle/phi/api/include/compat/ATen/ops/resize.h:90:43: error: ‘using std::__shared_ptr_access<phi::DenseTensor, __gnu_cxx::_S_atomic, false, false>::element_type = class phi::DenseTensor’ {aka ‘class phi::DenseTensor’} has no member named ‘offset’
   90 |   const size_t old_offset = dense_tensor->offset();
      |                                           ^~~~~~
/home/may/miniconda3/lib/python3.13/site-packages/paddle/include/paddle/phi/api/include/compat/ATen/ops/resize.h:99:34: error: ‘using std::__shared_ptr_access<phi::DenseTensor, __gnu_cxx::_S_atomic, false, false>::element_type = class phi::DenseTensor’ {aka ‘class phi::DenseTensor’} has no member named ‘mutable_data’
   99 |   void* new_data = dense_tensor->mutable_data(place, dense_tensor->dtype());
      |                                  ^~~~~~~~~~~~
/home/may/DeepEP/csrc/deep_ep.cpp compile failed, command '/usr/bin/g++' failed with exit code 1

Whether this causes precision changes

Copilot AI review requested due to automatic review settings, April 3, 2026 06:46

paddle-bot bot commented Apr 3, 2026

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added the contributor External developers label Apr 3, 2026
Contributor

Copilot AI left a comment


Pull request overview

This PR updates the PaddlePaddle C++ API compatibility layer to better match PyTorch behavior by adding TORCH_WARN-family macros and fixing at::Tensor::resize_ compilation issues when interacting with phi::DenseTensor internals.

Changes:

  • Add a simplified warning API and TORCH_WARN / TORCH_WARN_DEPRECATION / TORCH_WARN_ONCE macros under c10/util/Exception.h.
  • Fix resize_ implementation to use DenseTensorMeta::offset and DenseTensor::ResizeAndAllocate / data() APIs.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

  • paddle/phi/api/include/compat/c10/util/Exception.h: Introduces TORCH_WARN* macros and minimal warning plumbing for PyTorch compatibility.
  • paddle/phi/api/include/compat/ATen/ops/resize.h: Fixes resize_ to compile with current phi::DenseTensor by switching from removed APIs (offset(), mutable_data) to supported ones.


Comment on lines +88 to +90
// Helper macro for generating unique variable names
#define C10_CONCATENATE_IMPL(s1, s2) s1##s2
#define C10_CONCATENATE(s1, s2) C10_CONCATENATE_IMPL(s1, s2)

Copilot AI Apr 3, 2026


C10_CONCATENATE(_IMPL) is redefined here, but it is already defined in c10/macros/Macros.h (and headers like c10/core/DispatchKeySet.h include both). This can trigger macro-redefinition diagnostics (often treated as errors under -Werror). Prefer including <c10/macros/Macros.h> and removing these duplicate macro definitions (or add #ifndef guards).

Suggested change
// Helper macro for generating unique variable names
#define C10_CONCATENATE_IMPL(s1, s2) s1##s2
#define C10_CONCATENATE(s1, s2) C10_CONCATENATE_IMPL(s1, s2)
// Helper macro for generating unique variable names
#ifndef C10_CONCATENATE_IMPL
#define C10_CONCATENATE_IMPL(s1, s2) s1##s2
#endif
#ifndef C10_CONCATENATE
#define C10_CONCATENATE(s1, s2) C10_CONCATENATE_IMPL(s1, s2)
#endif

Comment on lines +160 to +166
#define TORCH_WARN_ONCE(...) \
do { \
static bool C10_ANONYMOUS_VARIABLE(torch_warn_once_) = [] { \
TORCH_WARN(__VA_ARGS__); \
return true; \
}(); \
} while (0)

Copilot AI Apr 3, 2026


TORCH_WARN_ONCE expands C10_ANONYMOUS_VARIABLE(...), but this header doesn’t ensure that macro is defined (it’s provided by c10/macros/Macros.h). Using TORCH_WARN_ONCE from a TU that only includes <c10/util/Exception.h> will fail to compile. Include <c10/macros/Macros.h> here (preferred) or add a guarded fallback definition for C10_ANONYMOUS_VARIABLE.

Contributor Author

youge325 commented Apr 3, 2026

/re-run all-failed


Labels

contributor External developers


2 participants