
Commit 9107365

foo
1 parent 3c1f663 commit 9107365

File tree

5 files changed: +81, -77 lines


docs/Project.toml (+2, -1)

@@ -1,2 +1,3 @@
 [deps]
-Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
+Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
+OptimalControl = "5f98b655-cc9a-415a-b60e-744165666948"
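Not part of the commit, but for context: with `OptimalControl` now listed in `docs/Project.toml`, the documentation environment can be set up locally before running `make.jl`. A minimal sketch, assuming the usual layout where the docs environment lives in `docs/` at the repository root and the package under development is added to it via `Pkg.develop`:

```julia
# Sketch: prepare the docs environment locally (run from the repository root).
using Pkg
Pkg.activate("docs")                   # use docs/Project.toml
Pkg.develop(PackageSpec(path=pwd()))   # make the local OptimalControl available
Pkg.instantiate()                      # install Documenter and the other deps
```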

docs/make.jl (+1, -1)

@@ -6,7 +6,7 @@ makedocs(
     format = Documenter.HTML(prettyurls = false),
     pages = [
         "Introduction" => "index.md",
-        "Tutorials" => "tutorials.md",
+        "Tutorials" => ["basic-example.md", "goddard.md"],
        "API" => "api.md",
        "Developpers" => "dev-api.md"
    ]
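For context (not part of the diff): giving "Tutorials" a vector of files turns it into a submenu with two pages in the Documenter sidebar instead of a single page. A sketch of the surrounding `makedocs` call under that assumption; `sitename` and `modules` are guesses, only the lines shown in the hunk come from the commit:

```julia
using Documenter, OptimalControl

makedocs(
    sitename = "OptimalControl.jl",      # assumed, not shown in the hunk
    modules  = [OptimalControl],         # assumed, not shown in the hunk
    format   = Documenter.HTML(prettyurls = false),
    pages    = [
        "Introduction" => "index.md",
        "Tutorials"    => ["basic-example.md", "goddard.md"],  # submenu of two pages
        "API"          => "api.md",
        "Developpers"  => "dev-api.md",
    ],
)
```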

docs/src/basic-example.md (new file, +60)

@@ -0,0 +1,60 @@
+# Basic example
+
+Consider we want to minimise the cost functional
+
+```math
+\frac{1}{2}\int_{0}^{1} u^2(t) \, \mathrm{d}t
+```
+
+subject to the dynamical constraints for $t \in [0, 1]$
+
+```math
+\dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t) \in \mathbb{R},
+```
+
+and the limit conditions
+
+```math
+x(0) = (-1, 0), \quad x(1) = (0, 0).
+```
+
+First, we need to import the `OptimalControl.jl` package:
+
+```@example main
+using OptimalControl
+```
+
+Then, we can define the problem
+
+```@example main
+t0 = 0
+tf = 1
+A = [ 0 1
+      0 0 ]
+B = [ 0
+      1 ]
+
+@def ocp_di begin
+    t ∈ [ t0, tf ], time               # time interval
+    x ∈ R², state                      # state
+    u ∈ R, control                     # control
+    x(t0) == [-1, 0], (initial_con)    # initial condition
+    x(tf) == [0, 0], (final_con)       # final condition
+    ẋ(t) == A * x(t) + B * u(t)        # dynamics
+    ∫( 0.5u(t)^2 ) → min               # objective
+end
+nothing # hide
+```
+
+Solve it
+
+```@example main
+sol_di = solve(ocp_di)
+nothing # hide
+```
+
+and plot the solution
+
+```@example main
+plot(sol_di, size=(700, 700))
+```
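As a side note (not part of the committed page): this double integrator problem has a simple closed-form solution, u(t) = 6 - 12t with optimal cost 6, which can serve as a sanity check for the numerical solver. A minimal sketch in plain Julia, independent of OptimalControl.jl:

```julia
# Closed-form solution of the double integrator problem above.
# From Pontryagin's maximum principle the adjoint p2 is affine in t and the
# optimal control is u = p2, so u is affine in t; the boundary conditions
# x(0) = (-1, 0) and x(1) = (0, 0) then give u(t) = 6 - 12t.
u(t)  = 6 - 12t                  # optimal control
x2(t) = 6t - 6t^2                # integral of u, with x2(0) = 0
x1(t) = -1 + 3t^2 - 2t^3         # integral of x2, with x1(0) = -1

@show x1(1), x2(1)               # both ≈ 0, matching x(1) = (0, 0)

# Optimal cost 0.5 * ∫ u(t)^2 dt over [0, 1] equals 6 (Riemann approximation):
cost = 0.5 * sum(t -> u(t)^2, 0:1e-4:1) * 1e-4
@show cost                       # ≈ 6
```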

docs/src/tutorials.md renamed to docs/src/goddard.md (+5, -69)

@@ -1,71 +1,6 @@
-# Tutorials
+# Advanced example
 
-## Basic example : double integrator
-
-Consider we want to minimise the cost functional
-
-```math
-\frac{1}{2}\int_{0}^{1} u^2(t) \, \mathrm{d}t
-```
-
-subject to the dynamical constraints for $t \in [0, 1]$
-
-```math
-\dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t) \in \mathbb{R},
-```
-
-and the limit conditions
-
-```math
-x(0) = (-1, 0), \quad x(1) = (0, 0).
-```
-
-First, we need to import the `OptimalControl.jl` package:
-
-```@example main
-using OptimalControl
-```
-
-Then, we can define the problem
-
-```@example main
-t0 = 0
-tf = 1
-A = [ 0 1
-      0 0 ]
-B = [ 0
-      1 ]
-
-@def ocp_di begin
-    t ∈ [ t0, tf ], time               # time interval
-    x ∈ R², state                      # state
-    u ∈ R, control                     # control
-    x(t0) == [-1, 0], (initial_con)    # initial condition
-    x(tf) == [0, 0], (final_con)       # final condition
-    ẋ(t) == A * x(t) + B * u(t)        # dynamics
-    ∫( 0.5u(t)^2 ) → min               # objective
-end
-nothing # hide
-```
-
-Solve it
-
-```@example main
-sol_di = solve(ocp_di)
-nothing # hide
-```
-
-and plot the solution
-
-```@example main
-plot(sol_di, size=(700, 700))
-```
-
-## Advanced example : Goddard
-
-This well-known problem[^1] [^2] models the ascent of a rocket through the atmosphere, and we restrict here ourselves to vertical (one dimensional) trajectories.
-The state variables are the altitude $r$, speed $v$ and mass $m$ of the rocket during the flight, for a total dimension of 3.
-The rocket is subject to gravity $g$, thrust $u$ and drag force $D$ (function of speed and altitude). The final time $T$ is free, and the objective is to reach a maximal altitude with a bounded fuel consumption.
+This well-known problem[^1] [^2] models the ascent of a rocket through the atmosphere, and we restrict here ourselves to vertical (one dimensional) trajectories. The state variables are the altitude $r$, speed $v$ and mass $m$ of the rocket during the flight, for a total dimension of 3. The rocket is subject to gravity $g$, thrust $u$ and drag force $D$ (function of speed and altitude). The final time $T$ is free, and the objective is to reach a maximal altitude with a bounded fuel consumption.
 
 We thus want to solve the optimal control problem in Mayer form
 
@@ -86,7 +21,8 @@ $v(t) \leq v_{\max}$. The initial state is fixed while only the final mass is pr
 
 !!! note
 
-    The Hamiltonian is affine with respect to the control, so singular arcs may occur, as well as constrained arcs due to the path constraint on the velocity (see below).
+    The Hamiltonian is affine with respect to the control, so singular arcs may occur,
+    as well as constrained arcs due to the path constraint on the velocity (see below).
 
 We import the `OptimalControl.jl` package:
 
@@ -166,4 +102,4 @@ plot(direct_sol_goddard, size=(700, 700))
 
 [^1]: R.H. Goddard. A Method of Reaching Extreme Altitudes, volume 71(2) of Smithsonian Miscellaneous Collections. Smithsonian institution, City of Washington, 1919.
 
-[^2]: H. Seywald and E.M. Cliff. Goddard problem in presence of a dynamic pressure limit. Journal of Guidance, Control, and Dynamics, 16(4):776–781, 1993.
+[^2]: H. Seywald and E.M. Cliff. Goddard problem in presence of a dynamic pressure limit. Journal of Guidance, Control, and Dynamics, 16(4):776–781, 1993.
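Context for the note reflowed in the hunk above (not part of the commit): for a single-input system whose dynamics are affine in the control, say ẋ = F₀(x) + u F₁(x), the pseudo-Hamiltonian is itself affine in u, which is why the control structure is governed by a switching function. A generic sketch of the two quantities involved; the notation is illustrative and not necessarily the one used in goddard.md:

```math
H(x, p, u) = p \cdot F_0(x) + u \, p \cdot F_1(x),
\qquad
\Phi(t) = p(t) \cdot F_1(x(t)).
```

With a bounded control, maximizing H gives a bang-bang control wherever Φ ≠ 0, and a singular arc may occur on any subinterval where Φ vanishes identically, which is the situation the note warns about.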

docs/src/index.md (+13, -6)

@@ -1,6 +1,17 @@
-# Introduction to the OptimalControl.jl package
+# OptimalControl.jl
 
-The `OptimalControl.jl` package is part of the [control-toolbox ecosystem](https://github.com/control-toolbox). It aims to provide tools to solve optimal control problems by direct and indirect methods. An optimal control problem can be described as minimising the cost functional
+```@meta
+CurrentModule = OptimalControl
+```
+
+The `OptimalControl.jl` package is part of the [control-toolbox ecosystem](https://github.com/control-toolbox).
+
+!!! note "Install"
+
+    To install a package from the control-toolbox ecosystem,
+    please visit the [installation page](https://github.com/control-toolbox#installation).
+
+This package aims to provide tools to solve optimal control problems by direct and indirect methods. An optimal control problem can be described as minimising the cost functional
 
 ```math
 g(t_0, x(t_0), t_f, x(t_f)) + \int_{t_0}^{t_f} f^{0}(t, x(t), u(t))~\mathrm{d}t
@@ -23,7 +34,3 @@ and other constraints such as
 \phi_l &\le& \phi(t_0, x(t_0), t_f, x(t_f)) &\le& \phi_u.
 \end{array}
 ```
-
-## Installation
-
-To install a package from the control-toolbox ecosystem, please visit the [installation page](https://github.com/control-toolbox#installation).
