
Microphysics scheduler #664

Draft
kaiyuan-cheng wants to merge 12 commits into main from kyc/mp_scheduler

Conversation

@kaiyuan-cheng
Collaborator

Motivation

Microphysics can be expensive, yet the right coupling frequency with the dynamics is still an open question (substep vs. superstep, and at what cadence). This PR exposes that knob via a microphysics_schedule keyword. The default (nothing) is a no-op, bit-identical to before; setting a schedule lets you super-step microphysics and study the trade-off without touching the dycore call site.

Summary

Adds opt-in scheduled microphysics to AtmosphereModel via a new microphysics_schedule keyword (e.g., IterationInterval(N), TimeInterval(Δt)).

When set, microphysics tendencies are cached to per-prognostic CenterFields and refilled only when the schedule fires. In between firings, the dycore reads the held cache, allowing the dynamics to super-step microphysics. The operator-split entry point microphysics_model_update! is plumbed with an explicit Δt_eff = clock.time - last_fire_time, so schemes (e.g. DCMIP2016KesslerMicrophysics) integrate over the actual elapsed window rather than a single dycore step.

With microphysics_schedule = nothing (the default), the path is bit-identical to the previous code: cache fields are nothing, dispatch resolves to the existing inline grid_microphysical_tendency, and the operator-split call reduces to microphysics_model_update!(μ, model, clock.last_Δt).
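As a standalone sketch of the dispatch described above (not Breeze's actual constructor; `materialize_tendency_cache` and the array shapes are invented for illustration, with plain arrays standing in for CenterFields):

```julia
# Mock of the schedule-dependent cache construction. In the PR, a set
# schedule allocates per-prognostic tendency fields; `nothing` keeps the
# cache `nothing`, so dispatch resolves to the existing inline path.
struct IterationInterval
    interval::Int
end

# Default: no schedule, no cache — bit-identical to the old code path.
materialize_tendency_cache(::Nothing, names) = nothing

# Scheduled: one tendency array per prognostic name.
materialize_tendency_cache(::IterationInterval, names) =
    NamedTuple{names}(ntuple(_ -> zeros(2, 2, 2), length(names)))

@assert materialize_tendency_cache(nothing, (:ρq, :ρe)) === nothing
cache = materialize_tendency_cache(IterationInterval(5), (:ρq, :ρe))
@assert keys(cache) == (:ρq, :ρe)
```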

Changes

  • New microphysics_schedule keyword on AtmosphereModel + cached tendency fields + MicrophysicsScheduleState
  • microphysics_model_update! refactored to a 3-arg form (microphysics, model, Δt_eff) with a backward-compatible 2-arg shim; DCMIP2016KM (grid + parcel) now reads Δt_eff instead of the clock
  • Cache-aware grid_microphysical_tendency overload (compile-time haskey via Val{N} dispatch)
  • New compute_microphysics_tendencies! kernel that builds 𝒰 and ℳ once per grid point and writes the tendency for every cached name via static Val iteration
  • New update_microphysics!(model) driver replaces the bare microphysics_model_update! call in update_state!
  • Three dycore tendency kernels (scalar_tendency, static_energy_tendency, potential_temperature_tendency) thread the cache through common_args
  • Base.show(io, ::AtmosphereModel) displays the schedule when set; constructor docstring updated
  • New example examples/splitting_supercell_scheduled_microphysics.jl (DCMIP2016 Kessler + TimeInterval(20) at fixed Δt = 4 s → fires every 5 dycore steps)
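The Δt_eff bookkeeping implied by the schedule can be sketched as follows (a minimal mock: `ScheduleState` and `maybe_fire!` are invented stand-ins for the PR's MicrophysicsScheduleState and update_microphysics! gating):

```julia
# Mock of schedule gating: between firings the cached tendencies are held;
# on a firing, Δt_eff spans the whole elapsed window since the last fire.
mutable struct ScheduleState
    last_fire_time::Float64
end

function maybe_fire!(state::ScheduleState, interval, t)
    t - state.last_fire_time >= interval || return nothing  # hold the cache
    Δt_eff = t - state.last_fire_time   # elapsed window, not one dycore step
    state.last_fire_time = t
    return Δt_eff
end

state = ScheduleState(0.0)
# Δt = 4 s dycore steps with a 20 s interval: fires on every 5th step.
results = [maybe_fire!(state, 20.0, t) for t in 4.0:4.0:40.0]
@assert results[5] == 20.0 && results[10] == 20.0
@assert count(!isnothing, results) == 2
```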

kaiyuan-cheng and others added 12 commits April 29, 2026 19:14
Inert plumbing only: new fields default to nothing, behavior unchanged.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Three-arg form is the substantive method. Two-arg shim forwards clock.last_Δt
so existing call sites are unchanged. DCMIP2016KM now reads Δt from Δt_eff
instead of model.clock.last_Δt.
…est comment

The 2-arg shim is now typed model::AtmosphereModel so it cannot accidentally
intercept calls on unrelated types. Because AtmosphereModel is defined after
microphysics_interface.jl loads, the typed shim is placed in
update_atmosphere_model_state.jl (which loads after atmosphere_model.jl);
microphysics_interface.jl retains the docstring and declares the generic
function stub. The docstring gains a warning that scheme implementations must
extend the 3-arg form.
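The shim pattern described above can be sketched with mock types (the real shim is typed against AtmosphereModel; `MockClock`/`MockModel` here are invented for a self-contained illustration):

```julia
# Mock model carrying a clock, standing in for AtmosphereModel.
struct MockClock
    last_Δt::Float64
end
struct MockModel
    clock::MockClock
end

# 3-arg form: the substantive method that scheme implementations extend.
microphysics_model_update!(μ, model, Δt_eff) = Δt_eff

# Typed 2-arg shim: forwards clock.last_Δt, and because it is typed on the
# model it cannot accidentally intercept calls on unrelated types.
microphysics_model_update!(μ, model::MockModel) =
    microphysics_model_update!(μ, model, model.clock.last_Δt)

m = MockModel(MockClock(4.0))
@assert microphysics_model_update!(nothing, m) == 4.0
```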

The "DCMIP2016KM consumes Δt_eff" test comment no longer asserts a specific temperature; the hardcoded 287 and 1003 constants are replaced with dry_air_gas_constant(constants) and constants.dry_air.heat_capacity so the setup tracks project constants if they ever change.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Existing inline path is now the cache=nothing method. New cache::NamedTuple method reads precomputed tendency fields; compile-time haskey via Val{N} makes cache misses a branch-free zero. Transitional forwarders preserve existing call sites (no cache arg); they will be removed in Task 6 once the three dycore tendency kernels migrate.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…round the function call (where the scalar GPU read happens), not the assertion. The original placement was a no-op on GPU. Also use FT = eltype(grid) and one(FT) for the rho argument so the test is forward-compatible with Float32 runs.
…ernel builds 𝒰 / ℳ once per grid point and writes every cached tendency name via static Val iteration. GPU type-stable.
…el_update! call in update_state!. Without a schedule, behaves identically to before. With a schedule, gates both the operator-split update and the cache refill on schedule firing, with Δt_eff computed from the last fire time.

Also updates the construction test: set!(model, theta) in initialize_model_thermodynamics! calls update_state!, which now fires update_microphysics! at iteration 0, so last_fire_iteration is 0 (not -1) immediately after construction with a scheduled microphysics.
…kernel signatures gain a cache argument right after microphysical_fields. common_args in compute_tendencies! now threads model.microphysics_tendencies. Without a schedule, cache=nothing falls through to the inline path.
Task 6 updated all three dycore tendency kernels to pass an explicit cache
argument, so the no-cache 11-arg forwarders introduced in Task 3 are now
unreachable from production code.
Adds a conditional ├── microphysics_schedule: line to Base.show when the
keyword is set; documents it in the AtmosphereModel(grid; ...) docstring.
Tests: cache-freezing across non-firing iterations (verified via last_fire_iteration, since SaturationAdjustment produces zero tendencies); mass conservation under super-stepping with a SaturationAdjustment scheme (no precipitation flux, so moisture mass is conserved to roundoff).
Mirrors the splitting_supercell example but exercises microphysics_schedule =
TimeInterval(20) on AtmosphereModel, with fixed Δt = 4 s so microphysics fires
every 5 dycore steps. Demonstrates Δt_eff plumbing through
DCMIP2016KesslerMicrophysics's operator-split update and the cached tendency
fields.
Copilot AI review requested due to automatic review settings April 30, 2026 16:22
@kaiyuan-cheng kaiyuan-cheng marked this pull request as draft April 30, 2026 16:22
Contributor

Copilot AI left a comment


Pull request overview

This PR adds an opt-in microphysics scheduler to AtmosphereModel so microphysics can be run less frequently than the dycore (super-stepping), while holding cached microphysics tendencies constant between firings and plumbing an effective timestep Δt_eff into operator-split microphysics updates.

Changes:

  • Add microphysics_schedule to AtmosphereModel, plus cached tendency fields and schedule state tracking.
  • Refactor microphysics_model_update! to a 3-arg form (μ, model, Δt_eff) and update built-in microphysics schemes accordingly.
  • Thread cached microphysics tendencies through dycore tendency kernels; add tests and a new scheduled-microphysics supercell example.

Reviewed changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 3 comments.

Summary per file:

  • src/AtmosphereModels/atmosphere_model.jl: Adds the schedule keyword + cached tendency fields + schedule state to the model struct and constructor.
  • src/AtmosphereModels/microphysics_interface.jl: Implements cache-aware grid_microphysical_tendency, cache materialization/fill kernels, and the update_microphysics! driver.
  • src/AtmosphereModels/update_atmosphere_model_state.jl: Switches update_state! to call update_microphysics! and threads microphysics_tendencies through common tendency args; adds the 2-arg shim.
  • src/AtmosphereModels/dynamics_kernel_functions.jl: Threads microphysics_tendencies into the scalar tendency kernel and uses cache-aware tendency reads.
  • src/StaticEnergyFormulations/static_energy_tendency.jl: Updates signature and call to read the microphysics tendency from the cache when present.
  • src/PotentialTemperatureFormulations/potential_temperature_tendency.jl: Updates signature and call to read the microphysics tendency from the cache when present.
  • src/Microphysics/dcmip2016_kessler.jl: Updates the operator-split microphysics update to consume Δt_eff (grid + parcel).
  • src/Microphysics/saturation_adjustment.jl: Updates the microphysics update signature to the 3-arg form (no-op implementation).
  • src/Microphysics/bulk_microphysics.jl: Updates bulk microphysics update forwarding to the 3-arg form.
  • src/AtmosphereModels/AtmosphereModels.jl: Exports update_microphysics!.
  • test/scheduled_microphysics.jl: Adds tests covering construction, shims, cache behavior, schedule honoring, and Δt_eff plumbing.
  • examples/splitting_supercell_scheduled_microphysics.jl: Adds an example demonstrating scheduled microphysics in a supercell setup.


Comment on lines +237 to +239
# Default (no cache): build microphysical state and dispatch to microphysical_tendency.
@inline function grid_microphysical_tendency(i, j, k, grid, microphysics, name, ::Nothing,
                                             ρ, fields, 𝒰, constants, velocities)
Comment on lines +251 to +256
# Nothing microphysics — always zero, regardless of cache type.
@inline grid_microphysical_tendency(i, j, k, grid, ::Nothing, name, ::Nothing,
                                    ρ, μ, 𝒰, constants, velocities) = zero(eltype(grid))

@inline grid_microphysical_tendency(i, j, k, grid, ::Nothing, ::Val{N}, cache::NamedTuple,
                                    ρ, μ, 𝒰, constants, velocities) where N =
    haskey(cache, N) ? @inbounds(cache[N][i, j, k]) : zero(eltype(grid))
Comment on lines +736 to +757
Fill the cached microphysics tendency `cache` for `microphysics` on `model`.
Builds `𝒰` and `ℳ` once per grid point and writes the tendency for every
prognostic name in `keys(cache)` via static iteration over `Val(name)`.

`Δt_eff` is forwarded for diagnostic / forward-Euler-style schemes that use it;
the standard inline path ignores it.
"""
function compute_microphysics_tendencies!(cache, microphysics, model, Δt_eff)
cache === nothing && return nothing
grid = model.grid
arch = grid.architecture
fields = model.microphysical_fields
velocities = model.velocities
constants = model.thermodynamic_constants
formulation = model.formulation
dynamics = model.dynamics
moisture = specific_prognostic_moisture(model)
names = Val(keys(cache))

    launch!(arch, grid, :xyz,
            _compute_microphysics_tendencies!,
            cache, names, grid, microphysics, fields, formulation, dynamics, moisture, constants, velocities)
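The static iteration over `keys(cache)` that the docstring describes can be sketched in isolation (a mock, not the actual kernel: plain arrays replace fields, and `tendency` is a stand-in closure for the scheme's per-name tendency):

```julia
# Write every cached tendency name at one grid point. Because the keys of
# a NamedTuple live in the type domain, the loop body specializes per name
# and is effectively static for small caches.
function fill_point!(cache::NamedTuple, i, j, k, tendency)
    ntuple(length(cache)) do n
        name = keys(cache)[n]
        cache[name][i, j, k] = tendency(Val(name))
        nothing
    end
    return nothing
end

cache = (ρq = zeros(2, 2, 2), ρe = zeros(2, 2, 2))
fill_point!(cache, 1, 1, 1, _ -> 1.5)
@assert cache.ρq[1, 1, 1] == 1.5 && cache.ρe[1, 1, 1] == 1.5
```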
@glwagner
Member

glwagner commented Apr 30, 2026

Microphysics isn't expensive for us (it's a tiny fraction of the cost of WENO advection). I propose not adding something like this until we need it. I suspect we won't ever need it --- the main issue for microphysics is that it adds tracers, which need to be advected. It would be great if we can avoid these kinds of hacks too!

@glwagner
Member

I do think it will be interesting to investigate evaluating microphysics on the acoustic substep though. But we need to wait for #622 for that.

I think it may be reasonable to evaluate microphysics on the acoustic substep if/when it is relatively cheap, which seems to be the case for most of the schemes right now.

@kaiyuan-cheng
Collaborator Author

kaiyuan-cheng commented Apr 30, 2026

Microphysics isn't expensive for us (it's a tiny fraction of the cost of WENO advection). I propose not adding something like this until we need it. I suspect we won't ever need it --- the main issue for microphysics is that it adds tracers, which need to be advected. It would be great if we can avoid these kinds of hacks too!

I understand the concern. While this approach may appear to be a step backward, I argue that it is actually a necessary functionality. One motivation is to align our splitting supercell simulation with the literature. Our supercell splits and becomes unorganized more quickly than reported by Zarzycki et al. (2019). Furthermore, the supercell develops too vigorously. I suspect this occurs because we update the microphysics at every timestep, which may not be suitable for moment schemes that are all designed for larger spatial and temporal scales. Indeed, updating the Kessler microphysics at a lower frequency yields much better results.

Kessler called every 20 seconds:

[image: supercell result with Kessler updated every 20 s]

vs. every 4 seconds:

[image: supercell result with Kessler updated every 4 s]

@glwagner
Member

Are you sure that the Kessler parameters shouldn't just be re-tuned? I don't think it's possible that calling the scheme less frequently can make it more accurate. I think it's more likely that you have simply revealed a deficiency in the parameterization.

As a thought experiment, wouldn't you simply recover the same problem if you used a smaller time-step? (This could be a further experiment you might try). It's very important that all numerical schemes are "consistent", in the sense that they converge as the time-step or grid spacing is reduced. When a model does not have this property, it is very hard to make sense of its results.

@glwagner
Member

glwagner commented May 1, 2026

@kaiyuan-cheng one major difference about our case is that we don't have any explicit diffusion, whereas the test case suggests a Laplacian diffusion:

[image: excerpt from the test-case specification describing the recommended Laplacian diffusion]

I suspect diffusion will tend to spread out the convective cores and make them weaker. Note the variety of schemes that many models used (in addition to the recommended one, which is a second order diffusion applied to momentum and tracers separately)

@glwagner
Member

glwagner commented May 1, 2026

This is interesting:

[image: vertical velocity vs. resolution reported by other models in the test case]

we should perform a resolution study as well (perhaps the example can add one?)

here's the same for us, but the example only has one resolution right now:

[image: Breeze vertical velocity at the example's single resolution]

I think it is a point of pride to have stronger vertical velocities! This is what we want --- note that vertical velocity increases with finer resolution. Having stronger vertical velocities is a sign of higher effective resolution. However, we should also add the Laplacian diffusion specified by the test, because that may reduce our vertical velocity somewhat.

@glwagner
Member

glwagner commented May 1, 2026

Seems like the closure is WAY overkill. I don't understand the other model results?

[image: comparison of model results with explicit closures]

(there could also be a problem with how closures are implemented in Breeze)

