add performance tips to docs #464
base: main
Conversation
It's a good start, but a lot of the information here is either factually wrong or just hallucinated. It's perfectly fine to use ChatGPT to improve English (as I always do myself nowadays); however, it is very dangerous to let it just generate everything. This text isn't helpful for an outsider because, in its current shape, it's either incorrect or misleading.
On a separate note, I would also like to see some `@example`s (a builtin Documenter.jl feature; you can read the documentation of Documenter.jl or see other files for reference) as well as `@ref`erences to other pages in the documentation. By implementing `@example` blocks you can show the proposed solutions in action and see if they actually work or not.
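For reference, a sketch of what such a Documenter.jl `@example` block looks like in the markdown source (the block label `performance-tips` and the linked page title are illustrative, not from this PR):

````markdown
```@example performance-tips
using RxInfer # hide
# Code in an @example block is executed during the docs build,
# so an example that errors will fail the build.
1 + 1
```

See also the [Getting started](@ref) page.
````

Blocks sharing the same label run in the same sandboxed module, so later blocks can reuse variables defined in earlier ones.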
@@ -33,7 +33,6 @@ makedocs(;
        "User guide" => [
            "Getting started" => "manuals/getting-started.md",
            "RxInfer.jl vs. Others" => "manuals/comparison.md",
-           "Using RxInfer from Python" => "manuals/how-to-use-rxinfer-from-python.md",
Why is this line removed?
**Why?**
Julia uses **Just-In-Time (JIT)** compilation.
The **first time** you run a model (or even parts of it like `infer!`), Julia compiles the specialized machine code.
We don't have the `infer!` function, the extra `!` is wrong.
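The JIT point itself can be demonstrated with any plain Julia function, not just `infer` (an illustrative sketch, independent of RxInfer):

```julia
# `@time` reports execution time including any compilation.
f(x) = sum(abs2, x)

@time f(rand(1000))   # first call: includes JIT compilation of f
@time f(rand(1000))   # second call: runs the already-compiled code
```

The second timing is typically orders of magnitude smaller, which is the behavior the draft's "first run is slow" tip is describing.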
## Non-Linear Nodes

### Prefer Linearization Over Unscented or CUI Methods |
What is a CUI method? Perhaps you meant CVI?
Thus, **Linearization is much faster** and usually good enough unless you need extremely high precision.

**Further Reading:**
See the “Deterministic Nodes” section of the RxInfer documentation for more on non-linear nodes.
Use `@ref` from Documenter.jl here.
**Why?**
When you use `free_energy = true`, RxInfer needs to store more detailed internal states for diagnostic purposes.
When you specify `free_energy = Float64`, it reduces tracking to **minimal scalar computations**, making the process **leaner and faster**.
The reason `free_energy = Float64` is faster is that, by default, it stores the type of the computation as `Real` in order to enable auto-differentiation.
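A minimal sketch of how the keyword is passed to `infer`; the model here is the standard coin-toss example from RxInfer's getting-started material, used purely for illustration:

```julia
using RxInfer

@model function coin_model(y)
    θ ~ Beta(1.0, 1.0)
    y .~ Bernoulli(θ)
end

# Requesting free energy with a concrete element type avoids the
# Real-typed bookkeeping that `free_energy = true` keeps around
# for auto-differentiation:
result = infer(
    model       = coin_model(),
    data        = (y = [1.0, 0.0, 1.0],),
    free_energy = Float64,   # instead of `free_energy = true`
)
```

Treat this as a sketch: the exact data and model are placeholders, but the `free_energy` keyword contrast is the point being reviewed above.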
### SoftDot Node

**Why?**
Efficient for representing weighted sums with uncertainty — useful in regression-like structures.
An example with the @example
block would be nice
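As a starting point for such an example, a hypothetical regression-style model using the SoftDot node; the node name and signature (`softdot(θ, x, γ)`, i.e. a noisy dot product of coefficients `θ` and input `x` with precision `γ`) are an assumption based on ReactiveMP.jl and should be checked against the current docs:

```julia
using RxInfer

# Hypothetical sketch: y modelled as a noisy dot product θ'x
# with unknown precision γ (check node name against ReactiveMP docs).
@model function softdot_regression(y, x)
    θ ~ MvNormalMeanCovariance(zeros(2), diageye(2))
    γ ~ GammaShapeRate(1.0, 1.0)
    y ~ softdot(θ, x, γ)
end
```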
- **Forces premature message updates**
- **May break inference stability**

Use it carefully; it’s more for **debugging** deep models than regular optimization.
this is hallucinated
- **Cuts off legitimate computations**
- **Forces premature message updates**
- **May break inference stability**
these bullets are entirely hallucinated
### How to Convert

- **Smoothing models** use future observations to improve past estimates (i.e., two-pass inference).
- **Filtering models** only use past and present data, updating beliefs in a **causal** fashion.
what is a causal fashion?
**To convert:**
- Remove or disable factors that depend on future observations.
- Restrict message passing to only move forward in time.
There are no capabilities in RxInfer to do that...
Hey @Thijsie2 do you need help with it?
First draft of performance tips page for the documentation (in reply to issue #456)