[clad] Bump clad to v1.10. #18383
Conversation
As one can see from the output below, the generated gradient is silently zero. Here is an example macro that reproduces the regression:

```cpp
#include <Math/CladDerivator.h>
#include <iostream>

double g(double *variables) { return variables[0] * variables[0] * variables[1]; }

void simpleclad() {
   // Call clad to generate the gradient of g.
   auto g_grad = clad::gradient(g, "variables");

   // Execute the generated gradient function.
   double variables[]{3., 4.};
   double grad_output[]{0., 0.};
   g_grad.execute(variables, grad_output);
   std::cout << "grad_output[0]: " << grad_output[0] << std::endl;
   std::cout << "grad_output[1]: " << grad_output[1] << std::endl;

   // Dump the generated gradient code to standard output.
   g_grad.dump();
}
```

The output with ROOT compiled with Clad:

```
root [0]
Processing simpleclad.C...
grad_output[0]: 0
grad_output[1]: 0
The code is:
root [1]
```
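For reference, the analytic gradient of g is (∂g/∂x, ∂g/∂y) = (2xy, x²), so at (3, 4) the expected output is 24 and 9; the zeros above mean the generated gradient is wrong. A minimal hand-check in plain C++ (no clad involved; the helper name is made up for illustration):

```cpp
#include <iostream>

// Analytic gradient of g(x, y) = x*x*y, for cross-checking clad's output.
void expected_grad(double x, double y, double grad[2]) {
   grad[0] = 2. * x * y; // dg/dx = 2xy
   grad[1] = x * x;      // dg/dy = x^2
}

int main() {
   double grad[2];
   expected_grad(3., 4., grad);
   std::cout << "expected grad_output[0]: " << grad[0] << std::endl; // 24
   std::cout << "expected grad_output[1]: " << grad[1] << std::endl; // 9
}
```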
The regression originated from this commit, @vgvassilev:
Test Results: 18 files, 18 suites, 3d 22h 56m 23s ⏱️. Results for commit 9b1aa3b. ♻️ This comment has been updated with latest results.
Can you try this patch:

```diff
diff --git a/tools/ClangPlugin.cpp b/tools/ClangPlugin.cpp
index 8f5a2550..919ccb79 100644
--- a/tools/ClangPlugin.cpp
+++ b/tools/ClangPlugin.cpp
@@ -138,7 +138,7 @@ namespace clad {
       if (!isa<FunctionDecl>(D))
         continue;
       FunctionDecl* FD = cast<FunctionDecl>(D);
-      if (FD->isConstexpr()) {
+      if (FD->isConstexpr() || !m_Multiplexer) {
         DiffCollector collector(DGR, CladEnabledRange, m_DiffRequestGraph, S,
                                 opts);
         break;
```
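For context, reading the hunk: the plugin previously ran the DiffCollector on this path only for constexpr declarations; the added `|| !m_Multiplexer` check also takes it when no consumer multiplexer has been set up, which appears to be the situation when clad is invoked from a ROOT macro as above.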
Indeed, that fixes it! Thanks for your quick help.
Ok, in that case I can land that on the other side and release.
@guitargeek, do we somehow need a clean build, or is there something else to be fixed?
The problem will be fixed by this PR: |
This release introduces significant updates, including support for clang versions 10 through 20 and improved diagnostics for unsupported features. Forward mode now handles variadic functions, while reverse mode sees major enhancements like marking const methods as non-differentiable, enabling differentiation of global variables and static member functions, generating constructor pullbacks automatically, and reducing tape usage through optimizations. CUDA support has been expanded with better handling of device pullbacks and kernel variables. For details, please see the release notes.
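As a quick illustration of the forward-mode API those notes refer to (a minimal sketch, not code from this PR; the function `f` and the macro name are made up), `clad::differentiate` requests the derivative with respect to a named parameter:

```cpp
#include <Math/CladDerivator.h>
#include <iostream>

double f(double x, double y) { return x * x * y; }

void forward_demo() {
   // Ask clad for df/dx in forward mode.
   auto f_dx = clad::differentiate(f, "x");
   // d/dx (x*x*y) = 2*x*y, so this prints 24 at (3, 4).
   std::cout << f_dx.execute(3., 4.) << std::endl;
}
```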
@dpiparo it seems this can go forward for the 6.36 release.
Hi @vgvassilev, yes I think we can do that. Can you approve this PR so it can be merged? Then we'll see about the 6.36 backport.
LGTM.