README.md: 2 additions, 3 deletions
@@ -25,8 +25,7 @@ it empowers users to directly populate the linear system with gradients and Hessians
This manual filling significantly curtails overhead and memory usage,
requiring storage only for the more compact gradient (and optionally, the Hessian).

-However, these advantages are realized specifically when leveraging the Accumulation function;
-they are not applicable when employing automatic or numerical differentiation techniques.
+Note: even though Tinyopt supports sparse systems, optimizing large ones is not yet fast; we're still missing some clever tricks to speed it up. They will come, so stay (fine) tuned!

## Table of Contents
[Installation](#installation-)
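The removed caveat referred to Tinyopt's Accumulation function, i.e. a cost callback that writes the gradient and (optionally) the Hessian into the linear system directly. As a minimal sketch of that manual-filling idea (plain Eigen with made-up names, not Tinyopt's actual API), a Gauss-Newton loop then only ever stores the compact `g` and `H`:

```cpp
#include <Eigen/Dense>
#include <iostream>

// Hypothetical accumulation callback for a toy problem r(x) = x - target,
// so J = I. It fills g += J^T r and H += J^T J in place and returns the
// cost 0.5*|r|^2; no full Jacobian or residual vector is ever stored.
double Accumulate(const Eigen::Vector2d& x, Eigen::Vector2d& g,
                  Eigen::Matrix2d& H) {
  const Eigen::Vector2d target(1.0, 2.0);
  const Eigen::Vector2d r = x - target;  // residual
  g += r;                                // J^T r, with J = I
  H += Eigen::Matrix2d::Identity();      // J^T J, with J = I
  return 0.5 * r.squaredNorm();
}

int main() {
  Eigen::Vector2d x = Eigen::Vector2d::Zero();
  for (int it = 0; it < 5; ++it) {  // plain Gauss-Newton iterations
    Eigen::Vector2d g = Eigen::Vector2d::Zero();
    Eigen::Matrix2d H = Eigen::Matrix2d::Zero();
    const double cost = Accumulate(x, g, H);
    x -= H.ldlt().solve(g);  // solve H * dx = g, then step x -= dx
    std::cout << "iter " << it << ": cost = " << cost << "\n";
  }
}
```

This is where the memory claim above comes from: for n parameters, only the n-vector `g` and (optionally) the n×n `H` are kept, independent of how many residuals are accumulated.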
@@ -106,7 +105,7 @@ the full doc at [ReadTheDocs](https://tinyopt.readthedocs.io/en/latest).
## Setup
We're currently evaluating small dense problems (<50 dimensions) with one cost function on an
Ubuntu GNU/Linux 24.04 64-bit machine.
-We're showing without Automatic Differentiation as there's some time increase with it, but not that much.
+We show results without Automatic Differentiation: it adds some runtime overhead, though not much for small systems.
The script `benchmarks/scripts/run.sh` was run after making sure the CPU power modes were all set to 'performance'.
Plotting is done using the notebook `benchmarks/scripts/results.ipynb`.
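The benchmark harness itself isn't shown in this diff; as a rough sketch of the methodology described above (a hypothetical helper, not the actual contents of `benchmarks/scripts/run.sh`), timing problems this small means repeating the solve many times and averaging:

```cpp
#include <chrono>
#include <iostream>

// Hypothetical timing harness: repeat a small solve many times and report
// the mean wall-clock time in milliseconds. Pin CPU frequency first, e.g.
// on Linux:
//   echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
template <typename F>
double MeanRuntimeMs(F&& solve_once, int runs = 1000) {
  const auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < runs; ++i) solve_once();
  const std::chrono::duration<double, std::milli> dt =
      std::chrono::steady_clock::now() - start;
  return dt.count() / runs;
}

int main() {
  volatile double sink = 0;  // placeholder for an actual optimizer call
  std::cout << MeanRuntimeMs([&] { sink = sink + 1.0; }) << " ms/run\n";
}
```

Averaging over many runs matters here because a single small dense solve can finish below the timer's useful resolution.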