Solver Strategy Adjustment for Efficient MultiApp CH-Stokes Coupling in 3D Simulations #30602
Unanswered
bo-qian asked this question in Q&A · Modules: Navier-Stokes
Replies: 2 comments · 10 replies
Hello. For 3D solves, our most efficient fluid-flow solver is the linear finite-volume discretization of the incompressible Navier-Stokes equations, which uses the SIMPLE/PIMPLE algorithm instead of a Newton solve. There has also been some success using a field split to precondition the pressure-velocity equations, and this should scale better than LU to large simulations.
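As a rough illustration of the field-split idea mentioned above, a Schur-complement preconditioner can be set up through MOOSE's FSP preconditioning block. This is only a sketch, not a setting taken from this thread: the split names (`up`, `u`, `p`), the variable names (`vel_x`, `vel_y`, `vel_z`, `p`), and the inner solver choices are assumptions that should be checked against the MOOSE FSP documentation and adapted to the actual problem.

```
[Preconditioning]
  [FSP]
    type = FSP
    # Split the saddle-point system into velocity (u) and pressure (p) blocks
    topsplit = 'up'
    [up]
      splitting = 'u p'
      splitting_type = schur
      # Full Schur factorization, preconditioned with the 'selfp' approximation
      petsc_options_iname = '-pc_fieldsplit_schur_fact_type -pc_fieldsplit_schur_precondition'
      petsc_options_value = 'full selfp'
    []
    [u]
      # Algebraic multigrid is usually a good choice for the velocity block
      vars = 'vel_x vel_y vel_z'
      petsc_options_iname = '-pc_type -ksp_type'
      petsc_options_value = 'hypre gmres'
    []
    [p]
      vars = 'p'
      petsc_options_iname = '-pc_type -ksp_type'
      petsc_options_value = 'jacobi gmres'
    []
  []
[]
```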
Question
Background Information
The governing equations are the Cahn–Hilliard (CH) phase-field equation and the incompressible Stokes equations, solved in a one-way coupled manner using the MultiApp capability. Since the coupling is one-way, the phase-field variable transferred from the master app to the sub-application at each time step is treated as a constant parameter. All kernels are custom implementations.
Current Status
In the 2D case, the simulations work well using the following solver configuration:
This setup yields very fast and robust convergence for 2D simulations.
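The configuration block itself is not shown above. For orientation only, a representative MOOSE setup for this kind of direct solve (LU via MUMPS, consistent with the attached lu.txt) might look like the following; this is a hedged sketch, not the author's actual input file:

```
[Executioner]
  type = Transient
  solve_type = NEWTON
  # Direct factorization with MUMPS; robust but memory-hungry in 3D
  petsc_options_iname = '-pc_type -pc_factor_mat_solver_type'
  petsc_options_value = 'lu mumps'
[]
```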
However, in 3D, where the mesh can contain tens of millions of elements, direct solvers such as LU with MUMPS or SuperLU run out of memory and cannot complete.
To address this, I tested the following alternative solver settings (still on a 2D problem but with modified solver options):
This configuration is adapted from the example:
modules/navier_stokes/test/tests/finite_element/ins/lid_driven/lid_driven.i
Although it converges, convergence is slow, and the simulation requires a long time to meet the specified error tolerance.
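The modified solver options are likewise not shown above. Based on the attached asm.txt and the cited lid_driven example, they were presumably an iterative Krylov solve with an additive-Schwarz preconditioner along these lines (the specific option values here are assumptions, not the author's file):

```
[Executioner]
  type = Transient
  solve_type = NEWTON
  # GMRES with additive Schwarz / ILU subdomain solves;
  # lower memory than LU, but often slow without a good field split
  petsc_options_iname = '-pc_type -sub_pc_type -ksp_gmres_restart'
  petsc_options_value = 'asm ilu 300'
[]
```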
Problem Statement
Given that the current direct solvers (e.g., LU) are not feasible for large-scale 3D simulations, is there a more efficient solver configuration that can be applied directly to the upcoming 3D coupled CH-Stokes simulations?
Two attached output files provide single-core performance logs for the two solver strategies described above.
asm.txt
lu.txt