docs/src/prob_and_solve.md (12 additions & 0 deletions)
@@ -67,6 +67,15 @@ ps = parameter_map(res)
 ## [Optional Arguments](@id optional_arguments)

+!!! info
+    The keyword argument `eval_expression` controls the function creation
+    behavior. `eval_expression=true` means that `eval` is used, so normal
+    world-age behavior applies (i.e. the functions cannot be called from
+    the function that generates them). If `eval_expression=false`,
+    construction via GeneralizedGenerated.jl is used to allow
+    same-world-age evaluation. However, this can cause Julia to segfault
+    on sufficiently large basis functions. By default, `eval_expression=false`.
+
 Koopman based algorithms can be called without a [`Basis`](@ref), resulting in dynamic mode decomposition-like methods, or with a basis for extended dynamic mode decomposition:
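The info block added above is the substantive change in this hunk. As a rough illustration (not part of the diff), a sketch of passing the keyword is given below; the `DiscreteDataDrivenProblem` constructor, the `DMDSVD` algorithm, and the three-argument `solve` form are assumptions drawn from the surrounding documentation, not from this change.

```julia
using DataDrivenDiffEq, ModelingToolkit

# Toy snapshot data, made up for illustration only.
X = [4.0 2.0 1.0 0.5  0.25;
     7.0 1.0 0.1 0.01 0.001]
prob = DiscreteDataDrivenProblem(X)      # assumed problem constructor

@variables u[1:2]
basis = Basis([u[1]; u[2]; u[1]^2], u)   # lifting basis for extended DMD

# Default (eval_expression = false): functions are built via
# GeneralizedGenerated.jl and are callable in the same world age.
res = solve(prob, basis, DMDSVD())

# eval_expression = true: `eval` is used instead, so the usual world-age
# rules apply and the generated functions are only callable after control
# returns to the top level.
res_eval = solve(prob, basis, DMDSVD(); eval_expression = true)
```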
@@ -81,6 +90,9 @@ Possible keyworded arguments include
 + `digits` controls the digits / rounding used for deriving the system equations (`digits = 1` would round `10.02` to `10.0`)
 + `operator_only` returns a `NamedTuple` containing the operator, input and output mapping and matrices used for updating the operator as described [here](https://arxiv.org/pdf/1406.7187.pdf)

+!!! info
+    If `eval_expression` is set to `true`, the returned result of the Koopman based inference will not contain a parametrized equation, but rather use the numeric values of the operator/generator.
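A hedged sketch for the two keyword arguments listed above, reusing the hypothetical `prob` and `DMDSVD` from the previous sketch; the exact fields of the returned `NamedTuple` are not specified in this diff.

```julia
# `digits = 1` rounds derived coefficients (e.g. 10.02 -> 10.0) and
# `operator_only = true` skips the equation construction entirely.
op = solve(prob, DMDSVD(); digits = 1, operator_only = true)

# `op` should be a NamedTuple with the operator, the input/output mappings and
# the update matrices; the exact field names are not given in this diff.
keys(op)
```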
src/optimizers/sparseregression.jl (9 additions & 3 deletions)
@@ -3,11 +3,17 @@
 $(SIGNATURES)

 Implements a sparse regression, given an `AbstractOptimizer` or `AbstractSubspaceOptimizer`.
-`maxiter` indicate the maximum iterations for each call of the optimizer, `abstol` the absolute tolerance of
+`X` denotes the coefficient matrix, `A` the design matrix and `Y` the matrix of observed or target values.
+`X` can be derived via `init(opt, A, Y)`.
+`maxiter` indicates the maximum iterations for each call of the optimizer, `abstol` the absolute tolerance of
 the difference between iterations in the 2 norm. If the optimizer is called with a `Vector` of thresholds, each `maxiter` indicates
 the maximum iterations for each threshold.

-If `progress` is set to `true`, a progressbar will be available.
+If `progress` is set to `true`, a progressbar will be available. `progress_outer` and `progress_offset` are used to compute the initial offset of the
+progressbar.
+
+If used with a `Vector` of thresholds, a function `f` with signature `f(X, A, Y)` and a function `g` with signature `g(x, threshold) = G(f(X, A, Y))`, with the arguments as described above, can be passed in. These are
+used for finding the Pareto-optimal solution to the sparse regression.
 """
 function sparse_regression!(X, A, Y, opt::AbstractOptimizer{T};
     maxiter::Int=maximum(size(A)),
@@ -83,7 +89,7 @@ function sparse_regression!(X, A, Y, opt::AbstractOptimizer{T};
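To make the expanded docstring concrete, here is a minimal sketch of a direct call. `STLSQ` as the concrete optimizer, its threshold argument, and the orientation `Y ≈ X * A` are assumptions for illustration; `init(opt, A, Y)`, `sparse_regression!`, and the keyword names come from the docstring and signature above, and whether these need to be module-qualified is not shown in this diff.

```julia
using DataDrivenDiffEq

# Made-up data: 5 candidate features over 200 samples, assuming the
# convention Y ≈ X * A (this orientation is an assumption, not stated here).
t = 0.1 .* (1:200)'
A = [ones(1, 200); t; t .^ 2; sin.(t); cos.(t)]   # design matrix
Y = ones(1, 200) .- 0.2 .* t .^ 2                 # targets, sparse in the candidate features

opt = STLSQ(0.1)      # assumed concrete AbstractOptimizer with threshold 0.1
X = init(opt, A, Y)   # initial coefficient matrix, as stated in the docstring

# In-place sparse regression; the keyword names follow the docstring and the
# signature shown in the hunk above.
sparse_regression!(X, A, Y, opt; maxiter = 1000, abstol = 1e-6, progress = false)

X   # sparse coefficient matrix after optimization
```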