Benchmarking of simulation software #13318
joergfunger
started this conversation in
General
Replies: 1 comment
FYI @KratosMultiphysics/technical-committee
Dear colleagues,
I’m Jörg F. Unger from the Federal Institute for Materials Research and Testing (BAM) in Berlin, Germany.
In a project on research software, we are interested in identifying quality criteria for simulation software, currently with a focus on FEM and CFD, so that when such tools are used for safety-critical applications, at least some basic tests are performed (e.g. patch tests, convergence order, etc.). Most software projects probably already run such tests as part of their continuous integration pipelines, but the results are very difficult to compare and are not accessible for comparison. We believe that making these results comparable could significantly improve the quality of simulation software, in particular at the research level.
To understand the needs of the community and the current state of the art (i.e. what tests you actually perform, which metrics you use for comparison, and how and in what format you publish the results), we are organizing an online workshop (90 minutes) in the context of a larger community meeting of the NFDI4ING project.
We would be very happy if some of the developers of Kratos were interested in participating in this workshop (please feel free to forward this message).
The agenda is published here, as well as the option to register; the meeting link will be sent after registration. If you have further questions, please let me know.
Best regards,
Jörg F. Unger