
Isotropic constant thermal conduction #1806

Open

aditivijayan wants to merge 125 commits into development from av/thermalconduction-clean

Conversation

@aditivijayan (Contributor)

Description

The changes implemented in this pull request add thermal conduction capability to the code.

This PR makes the following changes:

  1. Adds a new folder src/conduction containing ElectronConduction.hpp, which updates the gas internal energy of each cell. The heat fluxes are calculated at cell faces using a constant isotropic conductivity.
  2. Modifies QuokkaSimulation.hpp to include an initialiser for the conduction routine.
  3. Modifies simulation.hpp to estimate the conduction time step and compare it against the hydro and particle timesteps. This requires setting a conduction CFL number, which I have set to 0.2 for the test problem.
  4. Includes a test problem that evolves a 1D Gaussian. I have tested both the Gaussian and a step function in 3D. The test also compares the numerical result of the 1D problem against the analytical solution.

Notes:

  1. This PR implements only constant conductivity. Future PRs will add Spitzer and anisotropic conductivities.
  2. The thermal conduction module calculates the saturated flux but applies only the classical flux. I would prefer to keep it this way, since the flux limiter will be required for future problems.
  3. For the 1D and 3D Gaussian problems, I have verified that the absolute error decreases with increasing resolution.

Related issues

Are there any GitHub issues that are fixed by this pull request? Add a link to them here.

Checklist

Before this pull request can be reviewed, all of these tasks should be completed. Denote completed tasks with an x inside the square brackets [ ] in the Markdown source below:

  • [x] I have added a description (see above).
  • [ ] I have added a link to any related issues (if applicable; see above).
  • [x] I have read the Contributing Guide.
  • [x] I have added tests for any new physics that this PR adds to the code.
  • [ ] (For quokka-astro org members) I have manually triggered the GPU tests with the magic comment /azp run.

@dosubot dosubot Bot added the size:XL This PR changes 500-999 lines, ignoring generated files. label Apr 16, 2026
@dosubot dosubot Bot added the enhancement New feature or request label Apr 16, 2026
@gemini-code-assist (Bot) left a comment

Code Review

This pull request introduces an explicit flux-limited electron thermal conduction module to the Quokka simulation framework, including the core implementation, integration into the main simulation loop, and new test problems. The review feedback identifies several critical issues that need to be addressed: compilation errors in the new test problems caused by incorrect physics traits and copy-paste errors, and logic errors in the initial condition setup. Additionally, the feedback highlights physical and numerical deficiencies in the conduction implementation, such as the omission of magnetic energy in the total energy update for MHD simulations, an incomplete timestep calculation that ignores density and heat capacity, and the failure to apply the calculated flux limiter. Improvements to the temperature floor logic were also suggested to ensure consistency with user-defined parameters.

Comment thread src/problems/ThermalConduction/testConduction.cpp Outdated
Comment thread src/conduction/ElectronConduction.hpp Outdated
Comment thread src/problems/ThermalConduction/testConduction.cpp Outdated
Comment thread src/problems/ThermalConduction/testConduction.cpp Outdated
Comment thread src/simulation.hpp Outdated
Comment on lines +135 to +136
const amrex::Real q_sat_face = 0.5 * (qsat[bx](i, j, k) + qsat[bx](i - 1, j, k));
const amrex::Real limiter = 1.0 + std::abs(q_classical) / amrex::max(q_sat_face, small);

Severity: medium

The flux limiter is calculated but not applied to the heat flux. While the PR notes mention a preference for classical flux for now, calculating the limiter without using it is inefficient. If the limiter is intended for future use, it should be applied to q_classical to ensure stability in high-gradient regions, or the calculation should be removed to avoid redundant operations.

const amrex::Real limiter = 1.0 + std::abs(q_classical) / amrex::max(q_sat_face, small);
flux_x[bx](i, j, k) = q_classical / limiter;

@aditivijayan (Contributor, Author):

I've re-added the limiter and set it to a large value in the input file so that, in practice, it has no effect.

Comment thread src/conduction/ElectronConduction.hpp Outdated
@aditivijayan (Contributor, Author)

Plots of 1D tests:

  • Solutions at different times for an initial Gaussian [image]
  • Comparison of the 1D result with the analytical solution [image]
  • Error vs. number of cells [image]

@aditivijayan (Contributor, Author)

Some plots from 3D runs. Error = |numerical − analytical| / numerical.

  • Temperature slice with error, nx = 256 [image]
  • Temperature slice with error, nx = 512 [image]

Comment thread src/simulation.hpp
}
}

template <typename problem_t> auto AMRSimulation<problem_t>::computeTimestepAtLevel(int lev) -> amrex::ValLocPair<amrex::Real, amrex::IntVect>
Collaborator:

Because the diffusion timestep scales as $\Delta x^2$ (instead of $\Delta x$), getting this to work with multiple AMR levels will require a substantial rewrite of this function.

However, the current form is fine if you only want to do single-level simulations without AMR.

@aditivijayan (Contributor, Author):

I will keep this in mind. We would definitely need to work with AMR in the future. @evan1022

Collaborator:

@evan1022 For reference, this is the exact algorithm we currently use to set the timesteps on each level:

  function computeTimestepAtLevel(lev):
      # Problem-specific signal speed calculation happens in subclass code.
      computeMaxSignalLocal(lev)

      domain_signal_max    = norm_inf(max_signal_speed_[lev])
      domain_signal_maxloc = index_of_max(max_signal_speed_[lev])

      if signalSpeedAbort_ > 0 and domain_signal_max > signalSpeedAbort_:
          print cell properties at domain_signal_maxloc
          abort

      dx     = geom[lev].CellSizeArray()
      dx_min = min(cell widths on this level)

      hydro_dt = cflNumber_ * dx_min / domain_signal_max

      particle_dt = +infinity

      if AMREX_SPACEDIM == 3 and particleRegister_.HasMassiveParticles():
          max_particle_speed = particleRegister_.computeMaxParticleSpeed(lev)
          assert max_particle_speed is finite and not NaN

          if particleSpeedAbort_ > 0 and max_particle_speed > particleSpeedAbort_:
              abort

          # Avoid division by ~0 / useless particle limit when particles are very slow.
          if max_particle_speed > 1e-5 * (dx_min / hydro_dt):
              particle_dt = particleCflNumber_ * dx_min / max_particle_speed

      return min(hydro_dt, particle_dt)


  function computeTimestep():
      # 1. Compute a local candidate dt on each active level.
      dt_tmp = array(size = finest_level + 1)
      for level = 0 to finest_level:
          dt_tmp[level] = computeTimestepAtLevel(level)

      # 2. Limit how much dt can increase relative to the previous step.
      change_max = 1.1
      for level = 0 to finest_level:
          dt_tmp[level] = min(dt_tmp[level], change_max * dt_[level])

      # 3. Set default subcycling counts from refinement ratios.
      if do_subcycle == 1:
          for lev = 1 to max_level:
              nsubsteps[lev] = MaxRefRatio(lev - 1)

      # 4. Choose the coarse-level timestep dt_0 so that every level can subcycle into it.
      dt_0 = dt_tmp[0]
      level_that_sets_dt_0 = 0
      n_factor = 1

      for level = 0 to finest_level:
          n_factor = n_factor * nsubsteps[level]   # nsubsteps[0] is 1

          old_dt_0 = dt_0
          dt_0 = min(dt_0, n_factor * dt_tmp[level])
          if dt_0 < old_dt_0:
              level_that_sets_dt_0 = level

          dt_0 = min(dt_0, maxDt_)

          if tNew_[level] == 0.0:
              dt_0 = min(dt_0, initDt_)

          if constantDt_ > 0.0:
              dt_0 = constantDt_

      # 5. Optionally shrink the very first coarse step.
      if tNew_[0] == 0.0:
          dt_0 = dt_0 * initShrink_

      # 6. Prevent overshooting stopTime_.
      eps = 1e-3 * dt_0
      if tNew_[0] + dt_0 > stopTime_ - eps:
          dt_0 = stopTime_ - tNew_[0]

      # 7. Assign dt to all active levels.
      dt_[0] = dt_0
      for level = 1 to finest_level:
          dt_[level] = dt_[level - 1] / nsubsteps[level]

Regridding has one extra timestep-setting path for newly created fine levels:

  after regrid:
      for k = old_finest + 1 to finest_level:
          if do_subcycle != 0:
              dt_[k] = dt_[k - 1] / nsubsteps[k]
          else:
              dt_[k] = dt_[k - 1]

@azure-pipelines: Azure Pipelines successfully started running 2 pipeline(s).

@aditivijayan (Contributor, Author): /azp run

@azure-pipelines: Azure Pipelines successfully started running 2 pipeline(s).

@aditivijayan (Contributor, Author): /azp run

@azure-pipelines: Azure Pipelines successfully started running 2 pipeline(s).

@aditivijayan (Contributor, Author): /azp run

@azure-pipelines: Azure Pipelines successfully started running 2 pipeline(s).

@aditivijayan (Contributor, Author)
@BenWibking (Collaborator)

Since it runs for many steps without running out of memory, this suggests that memory is secularly increasing with time, so something may not be getting deallocated properly. Can you print out the memory usage, say, every 100 timesteps, and see if it's increasing?

You can do that using the method described here: AMReX-Codes/amrex#4467

@BenWibking (Collaborator)

auggie review

@augmentcode

augmentcode Bot commented May 12, 2026

🤖 Augment PR Summary

Summary: This PR adds an explicit isotropic (constant-$\kappa$) electron thermal conduction operator to Quokka, along with runtime controls, timestep limiting, and a regression/convergence test.

Changes:

  • Introduces src/conduction/ElectronConduction.hpp implementing an explicit, flux-limited conduction update (classical flux with a saturation-based limiter).
  • Wires conduction into the Strang-split source update path in src/QuokkaSimulation.hpp, including parameter parsing under the [conduction] namespace.
  • Adds a conduction timestep estimate to AMRSimulation::computeTimestepAtLevel and includes it in the global timestep minimum.
  • Adjusts timestep scaling logic in computeTimestep() when conduction is enabled (reflecting $\Delta t \propto \Delta x^2$ behavior).
  • Adds a new test problem src/problems/ThermalConduction/testThermalConduction.cpp that evolves a 1D Gaussian and checks convergence via Richardson extrapolation.
  • Adds a corresponding example input file inputs/ThermalConduction.toml for running the conduction setup.

Technical notes: The implementation currently restricts conduction to single-level runs (no AMR) and supports selecting temperature/sound-speed evaluation via either the analytic EOS or resampled cooling tables (via conduction.eos_flag).


@augmentcode (Bot) left a comment

Review completed. 4 suggestions posted.


auto conductivity_arr = conductivity.arrays();
auto saturated_flux_arr = saturated_flux.arrays();
amrex::IntVect ng = amrex::IntVect(AMREX_D_DECL(2, 2, 2));
std::optional<decltype(tables.const_tables())> tables_dev;
@augmentcode (Bot), May 12, 2026:

src/conduction/ElectronConduction.hpp:81: std::optional (and dereferencing it) is captured/used inside an AMREX_GPU_DEVICE lambda; that’s often not device-compatible and this repo even has src/util/Optional.hpp for GPU-safe optionals. Consider restructuring so the device lambda captures a plain tables object (or a GPU-safe optional) rather than std::optional.

Severity: high


Comment thread src/conduction/ElectronConduction.hpp
Comment thread src/simulation.hpp Outdated
Comment thread src/problems/ThermalConduction/testThermalConduction.cpp Outdated
@aditivijayan (Contributor, Author)

> Since it runs for many steps without running out of memory, this suggests that memory is secularly increasing with time, so something may not be getting deallocated properly. Can you print out the memory usage, say, every 100 timesteps, and see if it's increasing?
>
> You can do that using the method described here: AMReX-Codes/amrex#4467

It runs successfully for 128^3 and 256^3 and crashes only for the 512^3 test. So I am not sure it's an issue with deallocation.

@BenWibking (Collaborator)

> > Since it runs for many steps without running out of memory, this suggests that memory is secularly increasing with time, so something may not be getting deallocated properly. Can you print out the memory usage, say, every 100 timesteps, and see if it's increasing?
> > You can do that using the method described here: AMReX-Codes/amrex#4467
>
> It runs successfully for 128^3 and 256^3 and crashes only for the 512^3 test. So I am not sure it's an issue with deallocation.

Ah, I see. 512^3 is just too big of a simulation to fit on a single GPU. I would recommend starting at a smaller grid size, and then stopping at 256^3.

@aditivijayan (Contributor, Author): /azp run

@azure-pipelines: Azure Pipelines successfully started running 2 pipeline(s).

@aditivijayan (Contributor, Author): /azp run

@azure-pipelines: Azure Pipelines successfully started running 2 pipeline(s).


Labels

enhancement New feature or request size:XL This PR changes 500-999 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants