Isotropic constant thermal conduction #1806
for more information, see https://pre-commit.ci
Code Review
This pull request introduces an explicit flux-limited electron thermal conduction module to the Quokka simulation framework, including the core implementation, integration into the main simulation loop, and new test problems. The review feedback identifies several critical issues that need to be addressed: compilation errors in the new test problems caused by incorrect physics traits and copy-paste errors, and logic errors in the initial condition setup. Additionally, the feedback highlights physical and numerical deficiencies in the conduction implementation, such as the omission of magnetic energy in the total energy update for MHD simulations, an incomplete timestep calculation that ignores density and heat capacity, and the failure to apply the calculated flux limiter. Improvements to the temperature floor logic were also suggested to ensure consistency with user-defined parameters.
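To make the magnetic-energy point concrete, here is a minimal standalone sketch (hypothetical names, not Quokka's actual state layout): in MHD the conserved total energy must include the magnetic term, so after a conduction update to the internal energy, the total energy has to be rebuilt with all three contributions.

```cpp
#include <cassert>
#include <cmath>

// Hedged sketch: reconstructing total energy after an internal-energy update.
// In MHD (code units with mu0 = 1), E_tot = e_int + 0.5*rho*v^2 + 0.5*B^2;
// omitting the magnetic term silently corrupts e_int on the next decode.
double total_energy(double e_int, double rho, double vx, double vy, double vz,
                    double bx, double by, double bz) {
	const double kinetic = 0.5 * rho * (vx * vx + vy * vy + vz * vz);
	const double magnetic = 0.5 * (bx * bx + by * by + bz * bz);
	return e_int + kinetic + magnetic;
}
```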
```cpp
const amrex::Real q_sat_face = 0.5 * (qsat[bx](i, j, k) + qsat[bx](i - 1, j, k));
const amrex::Real limiter = 1.0 + std::abs(q_classical) / amrex::max(q_sat_face, small);
```
The flux limiter is calculated but not applied to the heat flux. While the PR notes mention a preference for classical flux for now, calculating the limiter without using it is inefficient. If the limiter is intended for future use, it should be applied to q_classical to ensure stability in high-gradient regions, or the calculation should be removed to avoid redundant operations.
```cpp
const amrex::Real limiter = 1.0 + std::abs(q_classical) / amrex::max(q_sat_face, small);
flux_x[bx](i, j, k) = q_classical / limiter;
```
I've re-added the limiter and set the saturation value to a large number in the input file, so the limiter is effectively inactive.
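For reference, a standalone sketch of what applying this style of saturation limiter looks like (plain doubles in place of `amrex::Real`, with an assumed small-number floor): the classical flux is divided by `1 + |q_classical| / q_sat`, so the limited flux never exceeds the saturated flux and reduces to the classical flux when gradients are mild.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hedged sketch, not Quokka's exact code: harmonic-style saturation limiter.
// |limited_flux| <= q_sat always, and limited_flux -> q_classical
// when |q_classical| << q_sat.
double limited_flux(double q_classical, double q_sat, double small = 1e-300) {
	const double limiter = 1.0 + std::abs(q_classical) / std::max(q_sat, small);
	return q_classical / limiter;
}
```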
```cpp
template <typename problem_t> auto AMRSimulation<problem_t>::computeTimestepAtLevel(int lev) -> amrex::ValLocPair<amrex::Real, amrex::IntVect>
```
Because the diffusion timestep scales as Δt ∝ (Δx)², it shrinks faster with resolution than the hydro timestep, so the per-level timestep logic would need rework for subcycled AMR.
However, the current form is fine if you only want to do single-level simulations without AMR.
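A small sketch of the explicit-diffusion stability limit illustrates the scaling (the constant and variable names here are illustrative, not Quokka's): halving Δx, as one refinement level with ratio 2 does, cuts the allowed diffusion timestep by 4×, not 2×.

```cpp
#include <cassert>
#include <cmath>

// Hedged sketch: explicit-diffusion stability limit
//   dt <= cfl * dx^2 / (2 * ndim * chi),
// with chi the (assumed constant) thermal diffusivity. Note dt ~ dx^2,
// so fixed hydro-style subcycling ratios break down under refinement.
double diffusion_dt(double cfl, double dx, double chi, int ndim) {
	return cfl * dx * dx / (2.0 * ndim * chi);
}
```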
I will keep this in mind. We would definitely need to work with AMR in the future. @evan1022
@evan1022 For reference, this is the exact algorithm we currently use to set the timesteps on each level:
```
function computeTimestepAtLevel(lev):
    # Problem-specific signal speed calculation happens in subclass code.
    computeMaxSignalLocal(lev)
    domain_signal_max = norm_inf(max_signal_speed_[lev])
    domain_signal_maxloc = index_of_max(max_signal_speed_[lev])
    if signalSpeedAbort_ > 0 and domain_signal_max > signalSpeedAbort_:
        print cell properties at domain_signal_maxloc
        abort
    dx = geom[lev].CellSizeArray()
    dx_min = min(cell widths on this level)
    hydro_dt = cflNumber_ * dx_min / domain_signal_max
    particle_dt = +infinity
    if AMREX_SPACEDIM == 3 and particleRegister_.HasMassiveParticles():
        max_particle_speed = particleRegister_.computeMaxParticleSpeed(lev)
        assert max_particle_speed is finite and not NaN
        if particleSpeedAbort_ > 0 and max_particle_speed > particleSpeedAbort_:
            abort
        # Avoid division by ~0 / useless particle limit when particles are very slow.
        if max_particle_speed > 1e-5 * (dx_min / hydro_dt):
            particle_dt = particleCflNumber_ * dx_min / max_particle_speed
    return min(hydro_dt, particle_dt)

function computeTimestep():
    # 1. Compute a local candidate dt on each active level.
    dt_tmp = array(size = finest_level + 1)
    for level = 0 to finest_level:
        dt_tmp[level] = computeTimestepAtLevel(level)
    # 2. Limit how much dt can increase relative to the previous step.
    change_max = 1.1
    for level = 0 to finest_level:
        dt_tmp[level] = min(dt_tmp[level], change_max * dt_[level])
    # 3. Set default subcycling counts from refinement ratios.
    if do_subcycle == 1:
        for lev = 1 to max_level:
            nsubsteps[lev] = MaxRefRatio(lev - 1)
    # 4. Choose the coarse-level timestep dt_0 so that every level can subcycle into it.
    dt_0 = dt_tmp[0]
    level_that_sets_dt_0 = 0
    n_factor = 1
    for level = 0 to finest_level:
        n_factor = n_factor * nsubsteps[level]  # nsubsteps[0] is 1
        old_dt_0 = dt_0
        dt_0 = min(dt_0, n_factor * dt_tmp[level])
        if dt_0 < old_dt_0:
            level_that_sets_dt_0 = level
    dt_0 = min(dt_0, maxDt_)
    if tNew_[level] == 0.0:
        dt_0 = min(dt_0, initDt_)
    if constantDt_ > 0.0:
        dt_0 = constantDt_
    # 5. Optionally shrink the very first coarse step.
    if tNew_[0] == 0.0:
        dt_0 = dt_0 * initShrink_
    # 6. Prevent overshooting stopTime_.
    eps = 1e-3 * dt_0
    if tNew_[0] + dt_0 > stopTime_ - eps:
        dt_0 = stopTime_ - tNew_[0]
    # 7. Assign dt to all active levels.
    dt_[0] = dt_0
    for level = 1 to finest_level:
        dt_[level] = dt_[level - 1] / nsubsteps[level]
```
Regridding has one extra timestep-setting path for newly created fine levels:
```
after regrid:
    for k = old_finest + 1 to finest_level:
        if do_subcycle != 0:
            dt_[k] = dt_[k - 1] / nsubsteps[k]
        else:
            dt_[k] = dt_[k - 1]
```
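Step 4 of the pseudocode above can be translated into a small runnable sketch (plain `std::vector` standing in for the per-level members of the real class; this is an illustration, not the actual implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Hedged sketch of step 4: choose the level-0 timestep dt_0 so that every
// finer level, taking nsubsteps[level] substeps per coarse step, still
// respects its own candidate dt_tmp[level].
double choose_coarse_dt(const std::vector<double>& dt_tmp,
                        const std::vector<int>& nsubsteps) {
	double dt_0 = dt_tmp[0];
	long n_factor = 1;
	for (std::size_t level = 0; level < dt_tmp.size(); ++level) {
		n_factor *= nsubsteps[level]; // nsubsteps[0] is 1
		dt_0 = std::min(dt_0, n_factor * dt_tmp[level]);
	}
	return dt_0;
}
```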
/azp run

Azure Pipelines successfully started running 2 pipeline(s).
The GPU tests are failing because of an Out of Memory error. @BenWibking, any suggestions?
Since it runs for many steps without running out of memory, this suggests that memory is secularly increasing with time, so something may not be getting deallocated properly. Can you print out the memory usage, say, every 100 timesteps, and see if it's increasing? You can do that using the method described here: AMReX-Codes/amrex#4467
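As an aside, a library-independent way to spot-check resident memory on Linux (this is not the AMReX method from the linked issue, just a generic alternative; `parse_vmrss_kb` is a hypothetical helper name) is to parse `VmRSS` out of `/proc/self/status` every N steps:

```cpp
#include <cassert>
#include <fstream>
#include <sstream>
#include <string>

// Hedged sketch: extract the resident-set size (VmRSS, in kB) from the
// contents of /proc/self/status. Returns -1 if the field is absent.
// In practice you would pass std::ifstream("/proc/self/status").
long parse_vmrss_kb(std::istream& status) {
	std::string line;
	while (std::getline(status, line)) {
		if (line.rfind("VmRSS:", 0) == 0) { // line starts with "VmRSS:"
			std::istringstream fields(line.substr(6));
			long kb = -1;
			fields >> kb; // e.g. "VmRSS:   123456 kB" -> 123456
			return kb;
		}
	}
	return -1;
}
```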
auggie review |
🤖 Augment PR Summary

Summary: This PR adds an explicit isotropic (constant-κ) electron thermal conduction operator to Quokka, along with runtime controls, timestep limiting, and a regression/convergence test.

Changes:

Technical notes: The implementation currently restricts conduction to single-level runs (no AMR) and supports selecting temperature/sound-speed evaluation via either the analytic EOS or resampled cooling tables (via …).
```cpp
auto conductivity_arr = conductivity.arrays();
auto saturated_flux_arr = saturated_flux.arrays();
amrex::IntVect ng = amrex::IntVect(AMREX_D_DECL(2, 2, 2));
std::optional<decltype(tables.const_tables())> tables_dev;
```
src/conduction/ElectronConduction.hpp:81: a std::optional is captured (and dereferenced) inside an AMREX_GPU_DEVICE lambda; that's often not device-compatible, and this repo even has src/util/Optional.hpp for GPU-safe optionals. Consider restructuring so the device lambda captures a plain tables object (or a GPU-safe optional) rather than a std::optional.
Severity: high
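One way to restructure, sketched with hypothetical types rather than the real Quokka ones: branch on `has_value()` on the host, and let each lambda capture a plain, trivially copyable tables object (or nothing at all on the analytic-EOS path), so no `std::optional` crosses the host/device boundary.

```cpp
#include <cassert>

// Hedged sketch; Tables stands in for the real (trivially copyable) table view.
struct Tables { double scale; };

// apply_kernel stands in for launching a device kernel over cells.
template <typename F>
double apply_kernel(F&& f, double x) { return f(x); }

double run(bool use_tables, Tables t, double x) {
	if (use_tables) {
		// the "device" lambda captures the plain Tables object by value
		return apply_kernel([t](double v) { return v * t.scale; }, x);
	}
	// analytic-EOS path: no tables captured at all
	return apply_kernel([](double v) { return v; }, x);
}
```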
It runs successfully for 128^3 and 256^3 and crashes only for the 512^3 test, so I am not sure it's a deallocation issue.
Ah, I see. 512^3 is simply too large a simulation to fit on a single GPU. I would recommend starting at a smaller grid size and stopping at 256^3.
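A back-of-the-envelope estimate makes this plausible (the component count here is an assumption for illustration, not Quokka's actual number): double-precision state data for an N^3 grid scales as N^3 × ncomp × 8 bytes, before ghost cells and temporaries, which can multiply it several-fold.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hedged sketch: GiB of double-precision state memory for an n^3 grid with
// ncomp components, ignoring ghost cells and scratch arrays.
double state_gib(std::int64_t n, int ncomp) {
	const double bytes = static_cast<double>(n) * n * n * ncomp * sizeof(double);
	return bytes / (1024.0 * 1024.0 * 1024.0);
}
```

With an assumed 10 components, 512^3 already needs ~10 GiB for one copy of the state, while 256^3 needs ~1.25 GiB, consistent with only the 512^3 run exhausting a single GPU.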
/azp run

Azure Pipelines successfully started running 2 pipeline(s).





Description
This pull request adds thermal conduction capability to the code.
This PR makes the following changes:

- Adds a new directory `src/conduction` comprising `ElectronConduction.hpp`, which essentially modifies the gas internal energy of the cell. The heat fluxes are calculated at the faces using a constant isotropic conductivity.
- Modifies `QuokkaSimulation.hpp` to include an initialiser for the conduction routine.
- Modifies `simulation.hpp` to estimate the conduction time step and compare it against the hydro and particle timesteps. This requires setting up a conduction CFL, which I have set to 0.2 for the test problem.

Note:
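The face-flux scheme described above can be sketched in 1D (variable names are illustrative; the real code works on AMReX multifabs in up to three dimensions): compute the heat flux at each face from the temperature difference, then update the internal energy from the flux divergence.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hedged 1D sketch of a constant-kappa conduction update:
//   F_{i-1/2} = -kappa * (T_i - T_{i-1}) / dx   at each interior face,
//   e_int_i  -= dt/dx * (F_{i+1/2} - F_{i-1/2}).
// Zero-flux (insulating) boundaries, so total energy is conserved.
void conduction_step(std::vector<double>& e_int, const std::vector<double>& T,
                     double kappa, double dx, double dt) {
	const std::size_t n = T.size();
	std::vector<double> flux(n + 1, 0.0); // faces; boundaries stay zero
	for (std::size_t f = 1; f < n; ++f) {
		flux[f] = -kappa * (T[f] - T[f - 1]) / dx;
	}
	for (std::size_t i = 0; i < n; ++i) {
		e_int[i] -= dt / dx * (flux[i + 1] - flux[i]);
	}
}
```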
Related issues
Are there any GitHub issues that are fixed by this pull request? Add a link to them here.
Checklist

Before this pull request can be reviewed, all of these tasks should be completed. Denote completed tasks with an `x` inside the square brackets `[ ]` in the Markdown source below:

/azp run.