MPI Compatibility Issue with Firedrake Container on HPC System #4279
Unanswered
TahaniSaleem
asked this question in Firedrake support
Replies: 1 comment
-
Our Docker containers are now built using OpenMPI, so I expect that the Singularity containers need to be rebuilt. This process is detailed here. Note that the section below "Create a Docker container from scratch" is outdated and will not work.
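A minimal sketch of rebuilding the container image from the current Docker image with Apptainer (the image name and tag below are assumptions; check the Firedrake documentation for the exact image to pull):

```shell
# Build a fresh .sif from the OpenMPI-based Docker image.
# "firedrakeproject/firedrake:latest" is an assumed image name/tag.
apptainer build firedrake_openmpi.sif docker://firedrakeproject/firedrake:latest
```

With the rebuilt image, the container's MPI should match an OpenMPI host, so the host's mpirun/mpiexec can launch it without the MPICH mismatch warning.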
-
Hi,
I'm encountering an MPI compatibility issue while running a script inside the firedrake_latest.sif Apptainer container on the Aire HPC system.
The system is configured with OpenMPI, but the container appears to be built with MPICH. As a result, I receive the following repeated warning during execution:
Your environment has OMPI_COMM_WORLD_SIZE=16 set, but mpi4py was built with MPICH. You may be using mpiexec or mpirun from a different MPI implementation.

Could you please advise on how to properly run Firedrake containers in this situation? Is there a recommended MPI setup, or a preferred method to avoid these conflicts when using Firedrake with Apptainer on systems where OpenMPI is the default?
Thanks!
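For context, the warning above arises because Open MPI's launchers export OMPI_* environment variables, which an MPICH-built mpi4py does not expect. A small illustrative check (the function name and logic here are hypothetical, not part of the mpi4py API) of how such a mismatch can be detected from the environment:

```python
import os

def detect_launcher_mismatch(built_with="MPICH", env=os.environ):
    """Illustrative sketch: Open MPI's mpirun/mpiexec export OMPI_* variables,
    so seeing OMPI_COMM_WORLD_SIZE under an MPICH-built mpi4py suggests the
    launcher and the MPI library come from different implementations."""
    if built_with == "MPICH" and "OMPI_COMM_WORLD_SIZE" in env:
        return ("Your environment has OMPI_COMM_WORLD_SIZE=%s set, "
                "but mpi4py was built with MPICH."
                % env["OMPI_COMM_WORLD_SIZE"])
    return None  # no mismatch detected
```

This is only a diagnostic sketch; the actual fix is to make the launcher and the container's MPI implementation match, e.g. by rebuilding the container against OpenMPI as suggested in the reply above.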