Performance comparison MOOSE vs. Abaqus: Bottleneck in Jacobian assembly #32768
Unanswered
TheMagnificentM asked this question in Q&A (Modules: Solid mechanics)
Replies: 1 comment · 5 replies
Hi @TheMagnificentM, thanks for taking the time to build such a clear presentation and for reporting this to us! Would you be interested in providing some more fine-grained profiling? Details are given at https://mooseframework.inl.gov/application_development/profiling.html. If not, we may be able to do some investigation ourselves, but we're generally pretty busy. @loganharbour this could be a performance-data exploration opportunity, even though it is outside of NEAMS. This kind of report is typical within NEAMS, so I think we could still very defensibly register this profiling as NEAMS work.
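As a starting point before the heavier profiling described on that page, MOOSE's built-in PerfGraph already gives a per-section timing breakdown (Jacobian assembly, residual evaluation, solve, etc.). A minimal sketch, assuming a standard input file:

```
[Outputs]
  # Print the PerfGraph timing table at the end of the run,
  # broken down by code section (Jacobian/residual assembly, solve, output).
  perf_graph = true
[]
```

The resulting table should make it easy to confirm whether `computeJacobian` really dominates the wall time before digging into lower-level tools like gperftools.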
-
Dear MOOSE community,
As I have recently started working with MOOSE, my initial goal was to set up a simple baseline calculation with a linear elastic material but including contact, for comparison with Abaqus. The setup investigates calculation times for different element types (Abaqus C3D8, C3D20, C3D20R) combined with a surface-to-surface penalty contact formulation.
During this process, I noticed a significant difference in computational time. While both solvers yield identical physical results and require a similar number of time steps and nonlinear iterations to converge, the total computational time in MOOSE is substantially higher. Based on the performance logs, the bottleneck appears to be the assembly of the Jacobian matrix. I have also already tested the setup without automatic differentiation (AD), but this did not lead to a significant performance improvement.
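For context, a sketch of how the AD/non-AD switch looks in my inputs (assuming the QuasiStatic solid mechanics action is used; the exact block layout in the repository may differ):

```
[Physics/SolidMechanics/QuasiStatic]
  [all]
    strain = SMALL
    add_variables = true
    # Toggled between the two benchmark variants:
    # true  -> AD kernels/materials (AD Jacobian)
    # false -> hand-coded Jacobian objects
    use_automatic_differentiation = false
  []
[]
```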
I have set up a public repository containing the specific calculation cases used for this benchmark, including all necessary input files and relevant results. A direct evaluation and comparison of the performance metrics can be found in the short PDF presentation included in the repository.
Link to the repository: https://github.com/TheMagnificentM/moose-vs-abaqus_first-benchmark
Since I am new to the framework: are there any recommended MOOSE-specific performance optimizations, AD-related settings, or alternative configurations to accelerate the Jacobian evaluation for this specific setup?
Thank you for your time and feedback.