
Commit baec142

Merge pull request #141 from control-toolbox/update-doc

docs and CTDirect update

2 parents: 33dbf2e + 9107365
File tree: 9 files changed, +203 −83 lines

Project.toml (+2 −2)

@@ -11,6 +11,6 @@ DocStringExtensions = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae"
 [compat]
 CTBase = "0.7"
-CTDirect = "0.3"
+CTDirect = "0.4"
 CTFlows = "0.3"
-julia = "1.8"
+julia = "1.9"

docs/Project.toml (+1)

@@ -1,2 +1,3 @@
 [deps]
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
+OptimalControl = "5f98b655-cc9a-415a-b60e-744165666948"

docs/make.jl (+3 −1)

@@ -6,7 +6,9 @@ makedocs(
     format = Documenter.HTML(prettyurls = false),
     pages = [
         "Introduction" => "index.md",
-        "API" => "api.md"
+        "Tutorials" => ["basic-example.md", "goddard.md"],
+        "API" => "api.md",
+        "Developpers" => "dev-api.md"
     ]
 )

docs/src/basic-example.md (new file, +60)

# Basic example

Suppose we want to minimise the cost functional

```math
\frac{1}{2}\int_{0}^{1} u^2(t) \, \mathrm{d}t
```

subject to the dynamical constraints for $t \in [0, 1]$

```math
\dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t) \in \mathbb{R},
```

and the boundary conditions

```math
x(0) = (-1, 0), \quad x(1) = (0, 0).
```

First, we need to import the `OptimalControl.jl` package:

```@example main
using OptimalControl
```

Then, we can define the problem:

```@example main
t0 = 0
tf = 1
A = [ 0 1
      0 0 ]
B = [ 0
      1 ]

@def ocp_di begin
    t ∈ [ t0, tf ], time               # time interval
    x ∈ R², state                      # state
    u ∈ R, control                     # control
    x(t0) == [-1, 0], (initial_con)    # initial condition
    x(tf) == [0, 0],  (final_con)      # final condition
    ẋ(t) == A * x(t) + B * u(t)        # dynamics
    ∫( 0.5u(t)^2 ) → min               # objective
end
nothing # hide
```

Solve it:

```@example main
sol_di = solve(ocp_di)
nothing # hide
```

and plot the solution:

```@example main
plot(sol_di, size=(700, 700))
```
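This double-integrator problem has a well-known closed-form solution: the costate equations force the optimal control to be affine in time, and the boundary conditions pin it down to $u(t) = 6 - 12t$. A quick standalone Python check of that claim (a sketch for verification only, not part of the documentation or the package):

```python
# Closed-form candidate for the double-integrator energy-minimisation problem:
# u(t) = a + b*t with a = 6, b = -12, obtained by imposing
# x(0) = (-1, 0) and x(1) = (0, 0) on the integrated trajectory.
a, b = 6.0, -12.0
u  = lambda t: a + b * t
x2 = lambda t: a * t + b * t**2 / 2               # integral of u,  x2(0) = 0
x1 = lambda t: -1 + a * t**2 / 2 + b * t**3 / 6   # integral of x2, x1(0) = -1

print(x1(1.0), x2(1.0))  # both final conditions are met: 0.0 0.0
```

Running the direct solver above should reproduce this affine control up to discretisation error.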

docs/src/dev-api.md (new file, +11)

# Internal functions

```@meta
CurrentModule = OptimalControl
```

```@autodocs
Modules = [OptimalControl]
Order = [:module, :type, :function, :macro]
Public = false
```

docs/src/goddard.md (new file, +105)

# Advanced example

This well-known problem[^1] [^2] models the ascent of a rocket through the atmosphere; we restrict ourselves here to vertical (one-dimensional) trajectories. The state variables are the altitude $r$, speed $v$ and mass $m$ of the rocket during the flight, for a total dimension of 3. The rocket is subject to gravity $g$, thrust $u$ and a drag force $D$ (a function of speed and altitude). The final time $T$ is free, and the objective is to reach a maximal altitude with a bounded fuel consumption.

We thus want to solve the optimal control problem in Mayer form

```math
\max\, r(T)
```

subject to the control dynamics

```math
\dot{r} = v, \quad
\dot{v} = \frac{T_{\max}\,u - D(r,v)}{m} - g, \quad
\dot{m} = -u,
```

and subject to the control constraint $u(t) \in [0,1]$ and the state constraint $v(t) \leq v_{\max}$. The initial state is fixed, while only the final mass is prescribed.

!!! note

    The Hamiltonian is affine with respect to the control, so singular arcs may occur,
    as well as constrained arcs due to the path constraint on the velocity (see below).

We import the `OptimalControl.jl` package:

```@example main
using OptimalControl
```

We define the problem:

```@example main
# Parameters
const Cd = 310
const Tmax = 3.5
const β = 500
const b = 2

t0 = 0
r0 = 1
v0 = 0
vmax = 0.1
m0 = 1
mf = 0.6

# Initial state
x0 = [ r0, v0, m0 ]

# Abstract model
@def ocp_goddard begin

    tf, variable
    t ∈ [ t0, tf ], time
    x ∈ R³, state
    u ∈ R, control

    r = x₁
    v = x₂
    m = x₃

    x(t0) == [ r0, v0, m0 ]
    0 ≤ u(t) ≤ 1
    r(t) ≥ r0,          (1)
    0 ≤ v(t) ≤ vmax,    (2)
    mf ≤ m(t) ≤ m0,     (3)

    ẋ(t) == F0(x(t)) + u(t) * F1(x(t))

    r(tf) → max

end;

F0(x) = begin
    r, v, m = x
    D = Cd * v^2 * exp(-β*(r - 1))
    return [ v, -D/m - 1/r^2, 0 ]
end

F1(x) = begin
    r, v, m = x
    return [ 0, Tmax/m, -b*Tmax ]
end
nothing # hide
```

Solve it:

```@example main
N = 50
direct_sol_goddard = solve(ocp_goddard, grid_size=N)
nothing # hide
```

and plot the solution:

```@example main
plot(direct_sol_goddard, size=(700, 700))
```

[^1]: R.H. Goddard. *A Method of Reaching Extreme Altitudes*, volume 71(2) of Smithsonian Miscellaneous Collections. Smithsonian Institution, City of Washington, 1919.

[^2]: H. Seywald and E.M. Cliff. Goddard problem in presence of a dynamic pressure limit. *Journal of Guidance, Control, and Dynamics*, 16(4):776–781, 1993.

docs/src/index.md (+13 −74)

@@ -1,6 +1,17 @@
-# Introduction to the OptimalControl.jl package
+# OptimalControl.jl

-The `OptimalControl.jl` package is part of the [control-toolbox ecosystem](https://github.com/control-toolbox). It aims to provide tools to solve optimal control problems by direct and indirect methods. An optimal control problem can be described as minimising the cost functional
+```@meta
+CurrentModule = OptimalControl
+```
+
+The `OptimalControl.jl` package is part of the [control-toolbox ecosystem](https://github.com/control-toolbox).
+
+!!! note "Install"
+
+    To install a package from the control-toolbox ecosystem,
+    please visit the [installation page](https://github.com/control-toolbox#installation).
+
+This package aims to provide tools to solve optimal control problems by direct and indirect methods. An optimal control problem can be described as minimising the cost functional

 ```math
 g(t_0, x(t_0), t_f, x(t_f)) + \int_{t_0}^{t_f} f^{0}(t, x(t), u(t))~\mathrm{d}t
@@ -23,75 +34,3 @@ and other constraints such as
 \phi_l &\le& \phi(t_0, x(t_0), t_f, x(t_f)) &\le& \phi_u.
 \end{array}
 ```
-
-**Contents.**
-
-```@contents
-Pages = ["index.md", "api.md"]
-Depth = 2
-```
-
-## Installation
-
-To install a package from the control-toolbox ecosystem, please visit the [installation page](https://github.com/control-toolbox#installation).
-
-## Basic usage
-
-Consider we want to minimise the cost functional
-
-```math
-\frac{1}{2}\int_{0}^{1} u^2(t) \, \mathrm{d}t
-```
-
-subject to the dynamical constraints for $t \in [0, 1]$
-
-```math
-\dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t) \in \mathbb{R},
-```
-
-and the limit conditions
-
-```math
-x(0) = (-1, 0), \quad x(1) = (0, 0).
-```
-
-First, we need to import the `OptimalControl.jl` package:
-
-```@example main
-using OptimalControl
-```
-
-Then, we can define the problem
-
-```@example main
-ocp = Model()                  # the model for the problem definition
-
-state!(ocp, 2)                 # dimension of the state
-control!(ocp, 1)               # dimension of the control
-time!(ocp, [0, 1])             # time interval
-
-objective!(ocp, :lagrange, (x, u) -> 0.5u^2)   # objective
-
-A = [ 0 1
-      0 0 ]
-B = [ 0
-      1 ]
-dynamics!(ocp, (x, u) -> A*x + B*u)            # dynamics
-
-constraint!(ocp, :initial, [-1, 0])            # initial condition
-constraint!(ocp, :final, [0, 0])               # final condition
-nothing # hide
-```
-
-Solve it
-
-```@example main
-sol = solve(ocp)
-nothing # hide
-```
-
-and plot the solution
-
-```@example main
-plot(sol, size=(700, 700))
-```

src/solve.jl (+5 −3)

@@ -22,15 +22,15 @@ function solve(ocp::OptimalControlModel, description::Symbol...;
     method = getFullDescription(description, algorithmes)

     # todo: OptimalControlInit must be in CTBase
-    #=
+
     if isnothing(init)
         init = OptimalControlInit()
     elseif init isa CTBase.OptimalControlSolution
         init = OptimalControlInit(init)
     else
-        OptimalControlInit(init...)
+        init = OptimalControlInit(x_init = init[rg(1, ocp.state_dimension)], u_init = init[rg(ocp.state_dimension + 1, ocp.state_dimension + ocp.control_dimension)], v_init = init[rg(ocp.state_dimension + ocp.control_dimension + 1, lastindex(init))])
     end
-    =#
+

     # print chosen method
     display ? println("\nMethod = ", method) : nothing
@@ -41,6 +41,8 @@ function solve(ocp::OptimalControlModel, description::Symbol...;
     end
 end

+rg(i, j) = i == j ? i : i:j
+
 function clean(d::Description)
     return d\(:direct, )
 end
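The new `else` branch splits a flat initial-guess vector into its state, control, and variable parts, and the `rg` helper collapses a length-one range to a single index so one-dimensional components are extracted as scalars. A rough Python mirror of that splitting logic (names and dimensions are illustrative, not part of the package; Python lists used in place of Julia vectors):

```python
def rg(i, j):
    # Mirror of the Julia helper rg(i, j) = i == j ? i : i:j, using
    # 1-based inclusive indices; a length-one range becomes one index.
    return [i] if i == j else list(range(i, j + 1))

def split_init(init, n_state, n_control):
    # Split a flat initial guess [x..., u..., v...] into its three parts.
    x = [init[k - 1] for k in rg(1, n_state)]
    u = [init[k - 1] for k in rg(n_state + 1, n_state + n_control)]
    v = [init[k - 1] for k in rg(n_state + n_control + 1, len(init))]
    return x, u, v

# Example: 3 state components, 1 control, 1 extra variable (e.g. a free tf)
x, u, v = split_init([1.0, 0.0, 1.0, 0.5, 0.1], 3, 1)
print(x, u, v)  # [1.0, 0.0, 1.0] [0.5] [0.1]
```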

test/test_goddard_indirect.jl (+3 −3)

@@ -39,9 +39,9 @@ u1 = 1
 # singular control
 H0 = Lift(F0)
 H1 = Lift(F1)
-H01 = @Poisson {H0, H1}
-H001 = @Poisson {H0, H01}
-H101 = @Poisson {H1, H01}
+H01 = @Lie {H0, H1}
+H001 = @Lie {H0, H01}
+H101 = @Lie {H1, H01}
 us(x, p) = -H001(x, p) / H101(x, p)

 # boundary control
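The renamed `@Lie` brackets feed the standard singular-control computation: along a singular arc the switching function $H_1$ vanishes identically, so its time derivatives vanish too. With the pseudo-Hamiltonian $H = H_0 + u\,H_1$, the usual derivation sketches as follows (a standard Poisson-bracket computation, not taken from the test file):

```latex
% Along a singular arc, H_1 \equiv 0, hence its time derivatives vanish:
\dot H_1 = \{H_0, H_1\} = H_{01} = 0,
\qquad
\ddot H_1 = \{H_0, H_{01}\} + u\,\{H_1, H_{01}\} = H_{001} + u\,H_{101} = 0.
```

Solving the second equation for $u$ yields the singular control `us(x, p) = -H001(x, p) / H101(x, p)` used in the test.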
