This tutorial is about the efficient solution of production-destruction systems (PDS) with a large number of differential equations. We will explore several ways to represent such large systems and assess their efficiency.
One example of the occurrence of a PDS with a large number of equations is the space discretization of a partial differential equation. In this tutorial we want to solve the linear advection equation

```math
\partial_t u(t,x) + a \partial_x u(t,x) = 0
```

with a > 0, t ≥ 0, x ∈ [0,1] and periodic boundary conditions.
To keep things as simple as possible, we discretize the space domain as 0 = x_0 < x_1 < \dots < x_{N-1} < x_N = 1 with x_i = i Δx for i = 0, \dots, N and Δx = 1/N. An upwind discretization of the spatial derivative yields the ODE system

```math
\partial_t u_i(t) = -a \frac{u_i(t) - u_{i-1}(t)}{\Delta x} = \frac{a}{\Delta x} u_{i-1}(t) - \frac{a}{\Delta x} u_i(t)
```

for i = 1, \dots, N, where u_i(t) is an approximation of u(t, x_i) and u_0(t) := u_N(t) due to the periodic boundary conditions.
This system can also be written as \partial_t \mathbf u(t) = \mathbf A \mathbf u(t) with \mathbf u(t) = (u_1(t), \dots, u_N(t))^T and

```math
\mathbf A = \frac{a}{\Delta x} \begin{pmatrix}
-1 & 0 & \cdots & 0 & 1 \\
1 & -1 & \ddots & & 0 \\
0 & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & -1 & 0 \\
0 & \cdots & 0 & 1 & -1
\end{pmatrix}.
```

In particular, the matrix \mathbf A shows that there is a single production term and a single destruction term per equation. Furthermore, the system is conservative, as \mathbf A has zero column sums.
To be precise, the production matrix \mathbf P = (p_{i,j}) of this conservative PDS is given by

```math
p_{1,N}(t, \mathbf u(t)) = \frac{a}{\Delta x} u_N(t), \qquad
p_{i,i-1}(t, \mathbf u(t)) = \frac{a}{\Delta x} u_{i-1}(t), \quad i = 2, \dots, N.
```

All production and destruction terms not listed here are zero.
Since the PDS is conservative, we have d_{i,j}=p_{j,i} and the system is
fully determined by the production matrix \mathbf P.
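To make this structure concrete, here is a small self-contained sketch (plain Julia with standard libraries, using a hypothetical example with N = 4 and a = 1) that builds the production matrix from the formulas above, assembles the PDS right-hand side f_i = \sum_j p_{i,j} - \sum_j d_{i,j} with d_{i,j} = p_{j,i}, and checks that it agrees with \mathbf A \mathbf u and sums to zero:

```julia
using LinearAlgebra, SparseArrays

# small example: N = 4, a = 1, Δx = 1/N (values chosen for illustration)
N = 4
a = 1.0
dx = 1 / N
u = [0.1, 0.2, 0.3, 0.4]

# production matrix P(u) as given above
P = zeros(N, N)
P[1, N] = a * u[N] / dx
for i in 2:N
    P[i, i - 1] = a * u[i - 1] / dx
end

# PDS right-hand side: f_i = sum_j p_{i,j} - sum_j d_{i,j} with d_{i,j} = p_{j,i}
f = vec(sum(P, dims = 2)) - vec(sum(P, dims = 1))

# the same right-hand side via the matrix A
A = a / dx * spdiagm(0 => -ones(N), -1 => ones(N - 1), N - 1 => ones(1))
@assert f ≈ A * u
@assert abs(sum(f)) < 1e-12  # conservation: the total mass is unchanged
```

The telescoping column sums of \mathbf P are what make the total sum of the right-hand side vanish.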
Now we are ready to define a ConservativePDSProblem and to solve
this problem with a method of PositiveIntegrators.jl
or OrdinaryDiffEq.jl.
In the following we use a = 1, N = 1000 and the time domain t ∈ [0,1].
Moreover, we choose the step function

```math
u_0(x) = \begin{cases} 1, & 0.4 \le x \le 0.6, \\ 0, & \text{otherwise} \end{cases}
```

as initial condition. Due to the periodic boundary conditions and the transport velocity a = 1, the solution at time t = 1 is identical to the initial distribution, i.e. u(1, x) = u_0(x).
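This follows from the general solution of the advection equation on a periodic domain, which simply transports the initial profile with speed a:

```math
u(t, x) = u_0\bigl((x - a t) \bmod 1\bigr),
```

so for a = 1 exactly one full period is completed at t = 1.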
N = 1000 # number of subintervals
dx = 1/N # mesh width
x = LinRange(dx, 1.0, N) # discretization points x_1,...,x_N = x_0
u0 = @. 0.0 + (0.4 ≤ x ≤ 0.6) * 1.0 # initial solution
tspan = (0.0, 1.0) # time domain
nothing #hide
As mentioned above, we will try different approaches to solve this PDS and compare their efficiency. These are
- an in-place implementation with a dense matrix,
- an in-place implementation with a sparse matrix.
By default, we will use dense matrices to store the production terms and to set up and solve the linear systems arising in MPRK methods. Of course, this is not efficient for large and sparse systems such as the one considered here.
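To get a feeling for the difference, here is a quick back-of-the-envelope check (plain Julia, standard libraries only, using the structure of this particular PDS): a dense 1000×1000 Float64 matrix stores one million entries, while the sparse production matrix has only one nonzero per equation.

```julia
using SparseArrays

N = 1000
P_dense = zeros(N, N)
# subdiagonal plus the single periodic entry in the top-right corner
P_sparse = spdiagm(-1 => ones(N - 1), N - 1 => ones(1))

@assert length(P_dense) == N^2       # dense: N^2 stored entries
@assert nnz(P_sparse) == N           # sparse: one entry per equation
@assert sizeof(P_dense) == 8 * N^2   # roughly 8 MB of Float64 data
```

Every MPRK stage touches these matrices, so the storage gap translates directly into runtime.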
using PositiveIntegrators # load ConservativePDSProblem
function lin_adv_P!(P, u, p, t)
    # production terms of the conservative PDS (transport velocity a = 1)
    fill!(P, 0.0) # reset all entries, including previous production terms
    N = length(u)
    dx = 1 / N
    P[1, N] = u[N] / dx # periodic boundary: u_1 is produced from u_N
    for i in 2:N
        P[i, i - 1] = u[i - 1] / dx # u_i is produced from u_{i-1}
    end
    return nothing
end
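Before plugging lin_adv_P! into a problem type, it can be checked in isolation on a tiny example; the function is repeated here so that the snippet is self-contained, and the test values are chosen purely for illustration.

```julia
function lin_adv_P!(P, u, p, t)
    fill!(P, 0.0)
    N = length(u)
    dx = 1 / N
    P[1, N] = u[N] / dx
    for i in 2:N
        P[i, i - 1] = u[i - 1] / dx
    end
    return nothing
end

u = [1.0, 2.0, 3.0, 4.0]        # N = 4, so dx = 1/4
P = zeros(4, 4)
lin_adv_P!(P, u, nothing, 0.0)

@assert P[1, 4] == u[4] / 0.25  # production of u_1 from u_N
@assert P[2, 1] == u[1] / 0.25  # production of u_2 from u_1
@assert sum(P .!= 0) == 4       # exactly one production term per equation
```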
prob = ConservativePDSProblem(lin_adv_P!, u0, tspan) # create the PDS
sol = solve(prob, MPRK43I(1.0, 0.5); save_everystep = false)
nothing #hide
using Plots
plot(x, u0; label = "u0", xguide = "x", yguide = "u")
plot!(x, last(sol.u); label = "u")
We can use isnonnegative to check that the computed solution
is nonnegative, as expected from an MPRK scheme.
isnonnegative(sol)
To use different matrix types for the production terms and linear systems,
we can use the keyword argument p_prototype of
ConservativePDSProblem and PDSProblem.
using SparseArrays
p_prototype = spdiagm(-1 => ones(eltype(u0), N - 1),
N - 1 => ones(eltype(u0), 1))
prob_sparse = ConservativePDSProblem(lin_adv_P!, u0, tspan; p_prototype=p_prototype)
sol_sparse = solve(prob_sparse, MPRK43I(1.0, 0.5); save_everystep = false)
nothing #hide
plot(x, u0; label = "u0", xguide = "x", yguide = "u")
plot!(x, last(sol_sparse.u); label = "u")
This solution is also nonnegative.
isnonnegative(sol_sparse)
Finally, we use BenchmarkTools.jl to compare the performance of the different implementations.
using BenchmarkTools
@benchmark solve(prob, MPRK43I(1.0, 0.5); save_everystep = false)
@benchmark solve(prob_sparse, MPRK43I(1.0, 0.5); save_everystep = false)
By default, we use an LU factorization for the linear systems. At the time of writing, Julia uses SparseArrays.jl, which defaults to UMFPACK from SuiteSparse in this case. However, the linear systems do not necessarily have the structure UMFPACK is optimized for. Thus, it is often possible to gain performance by switching to KLU instead.
using LinearSolve
@benchmark solve(prob_sparse, MPRK43I(1.0, 0.5; linsolve = KLUFactorization()); save_everystep = false)
These results were obtained using the following versions.
using InteractiveUtils
versioninfo()
println()
using Pkg
Pkg.status(["PositiveIntegrators", "SparseArrays", "KLU", "LinearSolve", "OrdinaryDiffEq"],
mode=PKGMODE_MANIFEST)
nothing # hide