Issue: Stokes Solver Residual Behavior #31590
-
Problem Description: I am solving a Stokes equation with a variable viscosity field. During the 2D simulation, I am facing convergence issues in two scenarios.

1. Using the LU direct solver

Solver configuration:

```
[Preconditioning]
  [CH_Stokes]
    type = SMP
    full = true
    petsc_options = '-ksp_converged_reason'
    petsc_options_iname = '-pc_type -pc_factor_mat_solver_type -ksp_gmres_restart -pc_factor_shift_type'
    petsc_options_value = 'lu mumps 2500 NONZERO'
  []
[]
```

Output:

Question:

2. Using the FSP preconditioner

Solver configuration:

```
[Preconditioning]
  [FSP]
    type = FSP
    topsplit = 'velocitypressure'
    [velocitypressure]
      splitting = 'velocity pressure'
      splitting_type = schur
      petsc_options = '-ksp_converged_reason'
      petsc_options_iname = '-pc_fieldsplit_schur_fact_type -pc_fieldsplit_schur_precondition -ksp_gmres_restart -ksp_type -ksp_pc_side -ksp_rtol'
      petsc_options_value = 'full self 300 fgmres right 1e-7'
    []
    [velocity]
      vars = 'u v'
      petsc_options = '-ksp_converged_reason'
      petsc_options_iname = '-pc_type -pc_hypre_type -ksp_type -ksp_rtol -ksp_gmres_restart -ksp_pc_side -pc_hypre_boomeramg_strong_threshold'
      petsc_options_value = 'hypre boomeramg gmres 1e-2 300 right 0.8'
    []
    [pressure]
      vars = 'p'
      petsc_options = '-pc_lsc_scale_diag -ksp_converged_reason'
      petsc_options_iname = '-ksp_type -ksp_gmres_restart -ksp_rtol -pc_type -ksp_pc_side -lsc_pc_type -lsc_pc_hypre_type -lsc_ksp_type -lsc_ksp_rtol -lsc_ksp_pc_side -lsc_ksp_gmres_restart -lsc_pc_hypre_boomeramg_strong_threshold'
      petsc_options_value = 'fgmres 300 1e-2 lsc right hypre boomeramg gmres 1e-1 right 300 0.8'
    []
  []
[]
```

Output:

Question:

Request:
Additional information: Mesh size and type: 2D mesh.
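For reference, the Schur split configured above acts on the usual saddle-point block form of the discrete Stokes system; `-pc_fieldsplit_schur_fact_type full` corresponds to the standard block factorization (a sketch, writing A for the velocity block and B for the discrete divergence):

$$
\begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}
=
\begin{pmatrix} I & 0 \\ B A^{-1} & I \end{pmatrix}
\begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix}
\begin{pmatrix} I & A^{-1} B^T \\ 0 & I \end{pmatrix},
\qquad S = -B A^{-1} B^T,
$$

so the `[pressure]` sub-solve (here preconditioned with LSC) is effectively approximating the action of $S^{-1}$.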
-
Hello. The OUTMEMORY error often does not mean you are genuinely out of memory; more often it indicates an ill-posed problem.
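One way to tell the two apart, for what it's worth, is to give MUMPS extra workspace and more verbose diagnostics: if the failure persists with generous workspace, an ill-posed system (for example, an unconstrained pressure null space) is the more likely cause. A minimal sketch against the LU configuration above (the ICNTL values are illustrative, not tuned):

```
[Preconditioning]
  [CH_Stokes]
    type = SMP
    full = true
    petsc_options_iname = '-pc_type -pc_factor_mat_solver_type -mat_mumps_icntl_14 -mat_mumps_icntl_4'
    # -mat_mumps_icntl_14: percentage increase of MUMPS's estimated working space
    # -mat_mumps_icntl_4:  MUMPS print level, so INFOG error codes appear in the log
    petsc_options_value = 'lu mumps 400 2'
  []
[]
```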
-
We would also need to see your
It does sound reasonable. The callback from PETSc does not work as I expected: it is always called before the solve, so moving the two calls within the FEProblemSolve.C routine did not change the execution order.
I'll take a quick look to see whether there are other options. You could hack in a post-nonlinear-solve step with the NONLINEAR_CONVERGENCE execute_on flag, but you would need PETSc to declare the solve converged (with loose tolerances) and a MOOSE Convergence object to keep iterating. So it would be pretty hacky.
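A rough sketch of what that hack could look like, assuming a recent MOOSE with the Convergence system (the object names are illustrative and the tolerance would need tuning):

```
[Postprocessors]
  [nl_residual]
    type = Residual
    # Executes when the nonlinear convergence check runs, i.e. after each
    # nonlinear iteration, which is where the post-solve work could hook in
    execute_on = NONLINEAR_CONVERGENCE
  []
[]

[Convergence]
  [keep_iterating]
    # MOOSE-side check that keeps Newton iterating on the true residual even
    # though PETSc's own (deliberately loosened) tolerances are satisfied
    type = PostprocessorConvergence
    postprocessor = nl_residual
    tolerance = 1e-8
  []
[]

[Executioner]
  type = Steady
  solve_type = NEWTON
  nonlinear_convergence = keep_iterating
[]
```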