Singularity install? #847
Replies: 8 comments 1 reply
-
|
I also use Singularity. I found the major issue is that $HOME inside Singularity is different from what DAFoam expects. Before export DAFOAM_ROOT_PATH=$HOME/dafoam, if you add a line export HOME=/home/dafoamuser, you may solve most of the problem. |
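A minimal sketch of the ordering described above (the paths are taken from this thread and assumed to match the DAFoam image; run this inside the container before sourcing loadDAFoam.sh):

```shell
# Override HOME first: Singularity keeps the host user's HOME by default,
# while the DAFoam image expects /home/dafoamuser (path assumed from this thread).
export HOME=/home/dafoamuser

# Only then derive DAFOAM_ROOT_PATH, so it picks up the overridden HOME.
export DAFOAM_ROOT_PATH=$HOME/dafoam

echo "$DAFOAM_ROOT_PATH"
```

The order matters: if DAFOAM_ROOT_PATH is set before HOME is overridden, it silently points into the host user's home directory.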
-
|
It looks like an MPI error. I allocated a node with 4 cores and then started the container with the env settings for HOME and DAFOAM_ROOT_PATH. As you’re using singularity, how are you starting your dafoam container?
This is the error:
[p-sc-2145:2100881] OPAL ERROR: Unreachable in file ext3x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:
version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.
Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.
Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[p-sc-2145:2100881] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
/*---------------------------------------------------------------------------*\
| ========= | |
| \\ / F ield | OpenFOAM: The Open Source CFD Toolbox |
| \\ / O peration | Version: v1812 |
| \\ / A nd | Web: www.OpenFOAM.com |
| \\/ M anipulation | |
\*---------------------------------------------------------------------------*/
Build : v1812 OPENFOAM=1812
Arch : "LSB;label=32;scalar=64"
Exec : plot3dToFoam -noBlank volumeMesh.xyz
Date : Jul 09 2025
Time : 18:39:50
Host : p-sc-2145
PID : 2100887
I/O : uncollated
Case : /scratch/nucci/tutorials-main/NACA0012_Airfoil/incompressible
nProcs : 1
trapFpe: Floating point exception trapping enabled (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster (fileModificationSkew 10)
allowSystemOperations : Allowing user-supplied system call operations
|
-
|
You need to load the singularity module and do:
singularity build dafoam_latest.sif docker://dafoam/opt-packages:latest
This will build the Singularity container and save it as the dafoam_latest.sif file. Then, you can put these lines into your Slurm script:
module load singularity
singularity exec your_path_to_dafoam_latest.sif /bin/bash -l -c '. /home/dafoamuser/dafoam/loadDAFoam.sh && ./preProcessing.sh && mpirun -np 4 python runScript.py'
Note that you can use only one node on the HPC. The current DAFoam Docker image is not set up for running across multiple nodes on HPCs. |
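Assembling the lines above into one file, a sketch of a complete submission script might look like the following. The #SBATCH resource lines and the .sif path are hypothetical and site-specific; the script is written to a file here so it can be inspected before submitting with `sbatch submit.sh`:

```shell
# Write a sketch of the Slurm submission script described above.
# Resource lines (--nodes, --ntasks) are hypothetical; adjust for your cluster.
cat > submit.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=dafoam-naca0012
#SBATCH --nodes=1
#SBATCH --ntasks=4

module load singularity
singularity exec dafoam_latest.sif /bin/bash -l -c \
  '. /home/dafoamuser/dafoam/loadDAFoam.sh && ./preProcessing.sh && mpirun -np 4 python runScript.py'
EOF

cat submit.sh
```

Note --nodes=1 matches the single-node restriction mentioned above; the current DAFoam image is not set up for multi-node runs.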
-
|
Thanks for the information.
Unfortunately, I am still seeing errors. MPI still appears to be an issue, even in the preprocessing step. I am trying the tutorial ./tutorials-main/NACA0012_Airfoil/incompressible
These are the errors; I only tried to run preProcessing.sh:
$ singularity exec dafoam_latest.sif /bin/bash -l -c 'export HOME=/home/dafoamuser && . /home/dafoamuser/dafoam/loadDAFoam.sh && ./preProcessing.sh'
Generating mesh..
--> FOAM FATAL IO ERROR:
Attempt to get back from bad stream
file: volumeMesh.xyz at line 1.
From function bool Foam::Istream::getBack(Foam::token&)
in file db/IOstreams/IOstreams/Istream.C at line 56.
FOAM exiting
--> FOAM FATAL ERROR:
Cannot find file "points" in directory "polyMesh" in times "0" down to constant
From function virtual Foam::IOobject Foam::fileOperation::findInstance(const Foam::IOobject&, Foam::scalar, const Foam::word&) const
in file global/fileOperations/fileOperation/fileOperation.C at line 879.
FOAM exiting
--> FOAM FATAL ERROR:
Cannot find file "points" in directory "polyMesh" in times "0" down to constant
From function virtual Foam::IOobject Foam::fileOperation::findInstance(const Foam::IOobject&, Foam::scalar, const Foam::word&) const
in file global/fileOperations/fileOperation/fileOperation.C at line 879.
FOAM exiting
--> FOAM FATAL ERROR:
Cannot find file "points" in directory "polyMesh" in times "0" down to constant
From function virtual Foam::IOobject Foam::fileOperation::findInstance(const Foam::IOobject&, Foam::scalar, const Foam::word&) const
in file global/fileOperations/fileOperation/fileOperation.C at line 879.
FOAM exiting
Generating mesh.. Done!
It appears that MPI is still an issue. Here's the log from mesh generation:
[p-sc-2147:1082192] OPAL ERROR: Unreachable in file ext3x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:
version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.
Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.
Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[p-sc-2147:1082192] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
|
-
|
Can you open logMeshGeneration.txt and see what error you got? |
-
|
I managed to get it to work. I had to disable SLURM passing the environment; apparently there was a setting in there that upset MPI.
I had to add export HOME=/home/dafoamuser && export MPLCONFIGDIR=/tmp
SLURM_EXPORT_ENV=NONE is required for salloc sessions, otherwise MPI gets very unhappy. We don't need this when running straight from sbatch.
--Jeff
|
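For interactive salloc sessions, a sketch of the workaround described above. The salloc line is commented out and shown only for illustration, since the allocation options are site-specific:

```shell
# Stop Slurm from exporting the submission environment into the job step;
# per this thread, an inherited variable was breaking MPI inside the container.
export SLURM_EXPORT_ENV=NONE

# Then allocate interactively as usual, e.g. (site-specific, illustration only):
# salloc --nodes=1 --ntasks=4
```

As noted above, this is only needed for salloc sessions; jobs submitted via sbatch did not need it.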
-
|
I encountered exactly the same problem as you did, and I successfully got it running using the method you described. Thank you very much! I had been trying to enter the image using the shell, but kept encountering environment variable issues. It worked fine after switching to sbatch. Here's my sbatch script:
#SBATCH --job-name=NACA0012
singularity exec dafoam_v4.0.2.sif /bin/bash -l -c 'export HOME=/home/dafoamuser && export MPLCONFIGDIR=/tmp && source /home/dafoamuser/dafoam/loadDAFoam.sh && ./preProcessing.sh && mpirun -np 4 python runScript.py' |
-
|
@nucci6 We are conducting a campaign to collect user feedback and improve DAFoam. Check #883. Are you available for a 30-minute Zoom meeting with our DAFoam team? If yes, please email me at [email protected]. Thanks! |
-
I am trying to install the most recent version of DAFoam on my system. We have Singularity, not Docker. I was able to build a .sif image without issue. However, container launch might be a problem: Docker supports starting the container as a different user (in this case, dafoamuser), while Singularity does not take this approach, generally running as the user who launched the container.
As dafoamuser has environment variables that are automatically set upon login, starting as my own user and manually sourcing those commands may not be an ideal solution, since the setup may be incomplete. For example, even though I can manually source the dafoamuser .bashrc file, I still have to manually run the .sh file that .bashrc would run (as the default path in the .bashrc is not correct when running as my user).
I also find I cannot successfully run the tutorials, and I suspect that might have to do with the environment not quite being correct. I'll worry about that step once I am sure running this as a Singularity container is correct. Can anyone please advise?
Thank you.