# # [Linear Time Discrete System](@id linear_discrete)
#
# We start by estimating the underlying dynamical system of a time-discrete process from measurements via [Dynamic Mode Decomposition](https://arxiv.org/abs/1312.0041), applied to a simple linear system of the form ``u(k+1) = A u(k)``.
#
# First, we simulate the corresponding system using `OrdinaryDiffEq.jl` and generate a [`DiscreteDataDrivenProblem`](@ref DataDrivenProblem) from the simulated data.

using DataDrivenDiffEq
using ModelingToolkit
using LinearAlgebra
using OrdinaryDiffEq
#md using Plots

A = [0.9 -0.2; 0.0 0.2]
u0 = [10.0; -10.0]
tspan = (0.0, 11.0)

f(u, p, t) = A * u

sys = DiscreteProblem(f, u0, tspan)
sol = solve(sys, FunctionMap());
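
# Before using the estimation routines, it helps to recall the core idea of DMD: collect the
# snapshots into matrices ``X = [u(1) \, \dots \, u(k-1)]`` and ``Y = [u(2) \, \dots \, u(k)]`` and solve the
# least-squares problem ``\min_{\hat{A}} \| Y - \hat{A} X \|_F``, whose solution is ``\hat{A} = Y X^{+}``.
# The following is only a minimal, hand-rolled sketch of this idea, not the package's implementation:

X = Array(sol)[:, 1:end-1] # all but the last snapshot
Y = Array(sol)[:, 2:end]   # all but the first snapshot (one step ahead)
Â = Y * pinv(X)            # least-squares estimate; should be close to the true A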

# Next, we transform our simulated solution into a [`DataDrivenProblem`](@ref). Since the solution knows that it is a discrete solution, we can simply write

prob = DataDrivenProblem(sol)

# And plot the solution and the problem

#md plot(sol, label = string.([:x₁ :x₂]))
#md scatter!(prob)

# To estimate the underlying operator in the states ``x_1, x_2``, we `solve` the estimation problem using the [`DMDSVD`](@ref) algorithm to approximate the operator. First, we have a look at the [`DataDrivenSolution`](@ref)

res = solve(prob, DMDSVD(), digits = 1)
#md println(res) # hide
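
# For intuition: `DMDSVD` realizes this estimate through a singular value decomposition of the
# snapshot matrix. A rough, hand-rolled sketch (not the package's exact implementation, and
# without truncation), reusing `X` and `Y` from above:

F = svd(X)                                  # thin SVD of the snapshot matrix, X = U Σ Vᵀ
Â_svd = Y * F.V * Diagonal(1 ./ F.S) * F.U' # equals Y X⁺ whenever X has full row rank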

# We see that the system has been recovered correctly, indicated by the small error and high AIC score of the result. We can confirm this by looking at the resulting [`Basis`](@ref)

system = result(res)
using Symbolics

#md println(system) # hide

# And also plot the prediction of the recovered dynamics

#md plot(res)

# Or have a look at the metrics of the result

#md metrics(res)

# To inspect the representation of the operator as a `Matrix`, we can simply call

#md Matrix(system)

# to see that the operator is indeed our initial `A`. Since we have a linear representation, we can gain further insight into the stability of the dynamics via its eigenvalues

#md eigvals(system)
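
# For a discrete-time linear system, the dynamics are asymptotically stable if all eigenvalues of
# the operator lie strictly inside the unit circle. A quick check on the recovered operator
# (a sketch, based on the `Matrix` representation shown above) could look like

#md λ = eigvals(Matrix(system))
#md all(abs.(λ) .< 1) # true if the estimated dynamics are asymptotically stable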

# And plot the stability margin of the discrete system

#md φ = 0:0.01π:2π
#md plot(sin.(φ), cos.(φ), xlabel = "Real", ylabel = "Im", label = "Stability margin", color = :red, linestyle = :dash)
#md scatter!(real(eigvals(system)), imag(eigvals(system)), label = "Eigenvalues", color = :black, marker = :cross)

# Similarly, we could use a sparse regression to derive our system from our data. We start by defining a [`Basis`](@ref)

using ModelingToolkit

@parameters t
@variables x[1:2](t)

basis = Basis(x, x, independent_variable = t, name = :LinearBasis)
#md print(basis) #hide

# Afterwards, we simply `solve` the already defined problem with our `Basis` and the sparse optimizer `STLSQ`

sparse_res = solve(prob, basis, STLSQ())
#md println(sparse_res) #hide

# The result holds the same equations
sparse_system = result(sparse_res)
#md println(sparse_system) #hide
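
# For reference, the idea behind `STLSQ` (sequentially thresholded least squares) is easy to
# sketch by hand. The following is only an illustrative implementation, not the one used by the
# package: `Θ` collects the candidate-basis evaluations (one row per sample) and `Yt` the
# corresponding targets, so that `Yt ≈ Θ * Ξ`.

function stlsq_sketch(Θ, Yt; λ = 0.1, iters = 10)
    Ξ = Θ \ Yt                     # initial least-squares fit
    for _ in 1:iters
        small = abs.(Ξ) .< λ       # prune coefficients below the threshold
        Ξ[small] .= 0
        for k in axes(Ξ, 2)        # refit the surviving coefficients per target
            big = .!small[:, k]
            any(big) || continue
            Ξ[big, k] = Θ[:, big] \ Yt[:, k]
        end
    end
    return Ξ
end

stlsq_sketch(Matrix(X'), Matrix(Y')) # recovers transpose(A) for this linear example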

# Again, we can have a look at the result

#md plot(
#md     plot(prob), plot(sparse_res), layout = (1, 2)
#md )

# Both results can be converted into a `DiscreteSystem` and, from there, a `DiscreteProblem`. Here, we use the sparse result

@named sys = DiscreteSystem(equations(sparse_system), get_iv(sparse_system), states(sparse_system), parameters(sparse_system))
#md println(sys) #hide

# And simulate the resulting `DiscreteProblem` with `OrdinaryDiffEq.jl`, using the (known) initial conditions and the parameter mapping of the estimation.

x0 = [x[1] => u0[1], x[2] => u0[2]]
ps = parameter_map(sparse_res)

discrete_prob = DiscreteProblem(sys, x0, tspan, ps)
estimate = solve(discrete_prob, FunctionMap());

# And look at the result

#md plot(sol, color = :black)
#md plot!(estimate, color = :red, linestyle = :dash)
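
# The estimated trajectory should match the original simulation almost exactly. A quick numerical
# check (assuming both solutions are stored on the same time grid, which holds here since both use
# `FunctionMap` over the same `tspan`) is the maximum pointwise deviation:

maximum(abs, Array(sol) .- Array(estimate))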

#md # ## [Copy-Pasteable Code](@id linear_discrete_copy_paste)
#md #
#md # ```julia
#md # @__CODE__
#md # ```

## Test the result #src
using Test #src
for r_ in [res, sparse_res] #src
    @test all(l2error(r_) .<= 1e-10) #src
    @test all(aic(r_) .>= 1e10) #src
    @test all(determination(r_) .≈ 1.0) #src
end #src
@test Array(sol) ≈ Array(estimate) #src