Conversation

gabsillis

Based on information provided in the INSTALL file and CMake documentation.
Followed the directory structure in the Makefile-based tutorial.

Closes issue #290

```sh
cmake .. -DMFEM_USE_CUDA=YES
```
Note that this requires CMake 3.8 or newer.


Suggested change
To specify which CUDA architecture to target:
```sh
cmake .. -DCUDA_ARCH="sm_70"
```
The CUDA architecture is formatted as `sm_{CC}`, where `CC` is the compute capability of the target GPU without the decimal point. A list of Nvidia GPU compute capabilities can be found in [the Nvidia developer documentation](https://developer.nvidia.com/cuda-gpus).
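For instance, a GPU with compute capability 7.0 (e.g., a V100) would be targeted as `sm_70` (a sketch combining the flags above):
```sh
# Enable CUDA and target compute capability 7.0
cmake .. -DMFEM_USE_CUDA=YES -DCUDA_ARCH="sm_70"
```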

TODO: currently I think MFEM's CMake setup doesn't allow targeting multiple CUDA architectures at once. I think we should fix this?

gabsillis (Author)


There is a CMake 3.18 target property, `CUDA_ARCHITECTURES`, that allows semicolon-separated lists: https://cmake.org/cmake/help/latest/prop_tgt/CUDA_ARCHITECTURES.html

Right now it looks like mfem manually sets the `-arch` flag. From the nvcc compiler documentation, it seems like it might take comma-separated lists if the format is the same as the `--gpu-code` flag. I don't have access to CUDA-enabled machines to try this out right now.

The `all`, `all-major`, and `native` options for the architecture may be worth mentioning.
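For reference, a sketch of what the CMake 3.18+ route might look like from the command line, untested as noted above (`CMAKE_CUDA_ARCHITECTURES` is the cache variable that initializes that target property):
```sh
# Sketch (untested): semicolon-separated compute capabilities, CMake >= 3.18
cmake .. -DMFEM_USE_CUDA=YES -DCMAKE_CUDA_ARCHITECTURES="70;80"
```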


I expanded support for `all`, `all-major`, and `native`, as well as multiple specific CUDA architectures, in PR #4561.

Supported formats now include:
- `-DCUDA_ARCH="all"`
- `-DCUDA_ARCH="all-major"`
- `-DCUDA_ARCH="native"`
- `-DCUDA_ARCH="{ARCH1},{ARCH2},..."`
- `-DCUDA_ARCH="{ARCH1};{ARCH2};..."`

where each `ARCH` can be either just the CC number (70, 86, etc.) or optionally prefixed with `sm_` (`sm_70`, `sm_86`, etc.).

This should work even for CMake versions older than 3.18 (the current CMakeLists minimum version is 3.8).
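For example, to target compute capabilities 7.0 and 8.6 in a single build using the list syntax above:
```sh
# One build covering two architectures (the sm_ prefix is optional per the formats above)
cmake .. -DMFEM_USE_CUDA=YES -DCUDA_ARCH="sm_70;sm_86"
```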

gabsillis (Author)


Looks great, added some documentation for this!

src/building.md Outdated
## Building MFEM with CMake
To build a serial form of MFEM with CMake first create a build directory. For example, using a build directory named `build`:
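A minimal sketch of those steps, assuming the usual out-of-source CMake flow:
```sh
# Sketch: create and enter the build directory, configure, then build
mkdir build && cd build
cmake ..
cmake --build . -j 4
```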
Member


s/form/version

src/building.md Outdated
```sh
cmake --build . -j 4
```
### Parallel build using CMake
To build a parallel form of MFEM with CMake first build METIS and Hypre as described above.
Member


s/form/version

To build a parallel form of MFEM with CMake first build METIS and Hypre as described above.
From the MFEM source directory, create a build directory. For example, using a build directory named `build`:
```sh
cd mfem-4.5
```
Member


don't add versions to general instructions

gabsillis (Author)


This was something I had a question about while writing this: the Makefile section of these instructions assumes mfem-4.5. Should I stick to that convention or use `MFEM_DIR`? And should the Makefile instructions do the same?

Run the CMake configuration on the MFEM source directory using the `MFEM_USE_MPI` CMake variable to enable MPI.
This will automatically search for the system MPI implementation, the METIS installation (in `<mfem-source-dir>/../metis-4.0`), and the Hypre installation (in `<mfem-source-dir>/../hypre`).
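A sketch of the corresponding configure-and-build commands, run from the build directory and assuming the layout above:
```sh
# Enable MPI; METIS and Hypre are searched for automatically as noted above
cmake .. -DMFEM_USE_MPI=YES
cmake --build . -j 4
```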
Member


same

src/building.md Outdated
```sh
cmake --build . -j 4
```

### Alternate configuration steps
Member


s/Alternate/Alternative

or "Advanced"?

@v-dobrev mentioned this pull request Dec 5, 2024
Left version numbers for now as this mirrors the GNU makefile portion of the instructions
tzanio (Member) commented Jan 4, 2025

@jandrej and @v-dobrev, can you take another look when you get a chance?

@tzanio changed the title from 'Add documentation on the "Building MFEM" page for cmake install' to 'Add documentation on the "Building MFEM" page for CMake install' on Jan 4, 2025
tzanio (Member) commented Jan 18, 2025

ping: @jandrej and @v-dobrev

@tzanio mentioned this pull request Feb 3, 2025
tzanio (Member) commented Sep 30, 2025

@helloworld922 and @cjvogl, what do you think about this PR?

helloworld922 commented Oct 1, 2025

The special `CUDA_ARCH` specification support (lists of multiple architectures, `all`, `native`, etc.) ended up getting removed from #4561 due to other incompatible changes in how we build with CUDA/HIP.

I've been using `CMAKE_CUDA_ARCHITECTURES` and `CMAKE_HIP_ARCHITECTURES`, and omitting `CUDA_ARCH` and `HIP_ARCH` entirely, to use CMake's native multi-GPU-arch support.
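For example (a sketch of that workflow; the HIP flag is assumed here to mirror the CUDA one):
```sh
# CUDA: let CMake's native variable carry the architecture list
cmake .. -DMFEM_USE_CUDA=YES -DCMAKE_CUDA_ARCHITECTURES="70;80"
# HIP: same idea with the HIP variable (MFEM_USE_HIP assumed)
cmake .. -DMFEM_USE_HIP=YES -DCMAKE_HIP_ARCHITECTURES="gfx90a"
```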

Do we want to add support for this to `CUDA_ARCH`? I can revive those changes so `CUDA_ARCH` is interchangeable with `CMAKE_CUDA_ARCHITECTURES`. I think `HIP_ARCH` should work as expected.
