Performance issues on HPC #26585
Replies: 5 comments 19 replies
- Hello,
  Which installation path did you follow? Was the MPI distribution you used compiled by the cluster administrators?
  Guillaume
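One quick way to answer the MPI question on the cluster itself (a hypothetical diagnostic, not something posted in this thread) is to inspect the compiler wrappers on a login node:

```shell
# Hypothetical diagnostic: check which MPI stack MOOSE would build against.
# Building against a generic MPI instead of the administrators' tuned,
# interconnect-aware build is a common cause of poor HPC performance.
which mpicc mpiexec    # paths show whether this is the cluster module or a user/conda install
mpiexec --version      # MPI implementation and version
# Show the underlying compiler and link flags (MPICH uses -show,
# Open MPI uses --showme):
mpicc -show 2>/dev/null || mpicc --showme
```

If the paths point at a personal conda environment rather than the cluster's MPI module, rebuilding against the site MPI is usually the first thing to try.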
- I have followed this:
- I could not find the script where to specify the optimizer options.
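For reference, MOOSE selects its build mode through the `METHOD` environment variable rather than a dedicated optimizer script. The following is a minimal sketch of an optimized from-source build; the checkout path and job count are assumptions, not taken from this thread:

```shell
# Minimal sketch of an optimized MOOSE build; adjust paths for your cluster.
# METHOD=opt selects the optimized build mode (no debug assertions).
cd $HOME/projects/moose        # assumed checkout location
export METHOD=opt
export MOOSE_JOBS=16           # assumed core count on the build node
cd test
make -j $MOOSE_JOBS
./run_tests -j $MOOSE_JOBS     # sanity-check the optimized build
```

Compiler-level flags for the underlying libMesh/PETSc stack are set when those dependencies are configured, so a debug-mode dependency build can also explain a slowdown.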
- SMP
  Regards

  On Thu, Feb 22, 2024, at 15:39, Guillaume Giudicelli wrote:

  > This all seems reasonable. 10k DOFs per core is a little under what we
  > prescribe, but it's not a hard rule.
  > What preconditioning are you using? If it's one that does not scale, I
  > could see the performance loss there.
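Regarding the preconditioning question: in a MOOSE input file the PETSc preconditioner is chosen in the `[Executioner]` block. As an illustrative sketch (the parameter names are standard MOOSE/PETSc options, but the choice of BoomerAMG is only an example of a preconditioner that tends to scale well in parallel, not a recommendation from this thread):

```
[Executioner]
  type = Steady
  solve_type = NEWTON
  # Hypre's BoomerAMG is an example of an algebraic multigrid
  # preconditioner that typically scales well across many ranks,
  # unlike e.g. a single-level ILU/block-Jacobi default.
  petsc_options_iname = '-pc_type -pc_hypre_type'
  petsc_options_value = 'hypre boomeramg'
[]
```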
- Dear MOOSE developers,
  I have installed MOOSE on a national HPC system. When testing, I noticed that MOOSE runs faster on my old Dell workstation than on the HPC cluster! The cluster has the newest and most performant nodes available to me. The engineers told me that MOOSE was not compiled with optimization options that take advantage of the cluster's capabilities. Here is my (offline) compilation Slurm script:

  To run a job I use this:
`
`
Thank you for any suggestion.
Regards,
Saber