
add performance tips to docs #464


Open

Thijsie2 wants to merge 1 commit into main

Conversation

Thijsie2

First draft of a performance tips page for the documentation (in reply to issue #456)

bvdmitri (Member) left a comment:

It's a good start, but a lot of the information here is either factually wrong or just hallucinated. It's perfectly fine to use ChatGPT to improve English (as I always do myself nowadays), however, it is very dangerous to let it just generate everything. This text isn't helpful for an outsider because in its current shape it's either incorrect or misleading.

On a separate note, I would also like to see some @example blocks (a built-in Documenter.jl feature; you can read the Documenter.jl documentation or look at other files for reference) as well as @ref references to other pages in the documentation.

By implementing @example blocks you can show the proposed solutions in action and see if they actually work or not.
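
For instance, an @example block on the new page would run a small model end-to-end every time the documentation is built, so the snippet is checked automatically. A minimal sketch, assuming a toy coin-toss model as a placeholder (not part of this PR):

```@example performance-tips
using RxInfer

# Toy Beta-Bernoulli model used only to illustrate the @example mechanism
@model function coin_model(y)
    θ ~ Beta(1, 1)
    y .~ Bernoulli(θ)
end

result = infer(model = coin_model(), data = (y = [1.0, 0.0, 1.0, 1.0],))
mean(result.posteriors[:θ])
```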

@@ -33,7 +33,6 @@ makedocs(;
"User guide" => [
"Getting started" => "manuals/getting-started.md",
"RxInfer.jl vs. Others" => "manuals/comparison.md",
"Using RxInfer from Python" => "manuals/how-to-use-rxinfer-from-python.md",
bvdmitri (Member):

Why is this line removed?


**Why?**
Julia uses **Just-In-Time (JIT)** compilation.
The **first time** you run a model (or even parts of it like `infer!`), Julia compiles the specialized machine code.
bvdmitri (Member):

We don't have an `infer!` function, the extra `!` is wrong.
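
A hedged sketch of how the warm-up cost could be demonstrated with the correct `infer` call (toy model as a placeholder; exact timings will vary):

```julia
using RxInfer

@model function coin_model(y)
    θ ~ Beta(1, 1)
    y .~ Bernoulli(θ)
end

data = (y = float.(rand(Bool, 1000)),)

@time infer(model = coin_model(), data = data)  # first call: includes JIT compilation
@time infer(model = coin_model(), data = data)  # second call: reuses the compiled code
```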


## Non-Linear Nodes

### Prefer Linearization Over Unscented or CUI Methods
bvdmitri (Member):

What is a CUI method? Perhaps you meant CVI?
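
For reference, the approximation method of a non-linear (delta) node is selected through the @meta block. A minimal sketch, assuming a hypothetical non-linear function `f`; CVI is the sampling-based alternative, whose constructor takes extra arguments and is omitted here:

```julia
using RxInfer

f(x) = sin(x)  # hypothetical non-linear transformation

@model function nonlinear_model(y)
    x ~ Normal(mean = 0.0, variance = 1.0)
    z := f(x)                            # deterministic (delta) node
    y ~ Normal(mean = z, variance = 0.1)
end

# First-order Taylor approximation: usually the cheapest option
meta_linearized = @meta begin
    f() -> DeltaMeta(method = Linearization())
end

# Unscented transform: more accurate, but more expensive per message
meta_unscented = @meta begin
    f() -> DeltaMeta(method = Unscented())
end

result = infer(model = nonlinear_model(), data = (y = 0.5,), meta = meta_linearized)
```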

Thus, **Linearization is much faster** and usually good enough unless you need extremely high precision.

**Further Reading:**
See the “Deterministic Nodes” section of the RxInfer documentation for more on non-linear nodes.
bvdmitri (Member):

Use @ref from Documenter.jl
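
For example, the "Further Reading" paragraph could use a Documenter cross-reference instead of plain text; the exact header name is an assumption and has to match the heading on the target page:

```markdown
See the [Deterministic nodes](@ref) section of the documentation for more on non-linear nodes.
```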

Comment on lines +61 to +63
**Why?**
When you use `free_energy = true`, RxInfer needs to store more detailed internal states for diagnostic purposes.
When you specify `free_energy = Float64`, it reduces tracking to **minimal scalar computations**, making the process **leaner and faster**.
bvdmitri (Member):

The reason `free_energy = Float64` is faster is that, by default, the free energy computation is stored with the abstract element type `Real` in order to enable auto-differentiation.
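
A minimal sketch of the two options (toy model as a placeholder; the only point is the `free_energy` keyword of `infer`):

```julia
using RxInfer

@model function coin_model(y)
    θ ~ Beta(1, 1)
    y .~ Bernoulli(θ)
end

data = (y = float.(rand(Bool, 1000)),)

# Default: free energy values are tracked with the abstract element type `Real`,
# which keeps automatic differentiation possible at some runtime cost
result_generic = infer(model = coin_model(), data = data, iterations = 5, free_energy = true)

# Fixing the element type avoids the abstract container when autodiff is not needed
result_float   = infer(model = coin_model(), data = data, iterations = 5, free_energy = Float64)

result_float.free_energy
```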

Comment on lines +81 to +84
### SoftDot Node

**Why?**
Efficient for representing weighted sums with uncertainty — useful in regression-like structures.
bvdmitri (Member):

An example with the @example block would be nice
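
A possible starting point for such an @example block: a hypothetical Bayesian linear-regression model, assuming the `softdot` node from ReactiveMP.jl with interfaces `(θ, x, γ)`; suitable constraints and initialization would still be needed before calling `infer`:

```julia
using RxInfer

# y[i] ~ Normal(dot(θ, x[i]), 1/γ) expressed through the SoftDot node,
# avoiding an explicit intermediate dot-product variable
@model function softdot_regression(y, x)
    γ ~ Gamma(shape = 1.0, rate = 1.0)
    θ ~ MvNormal(mean = zeros(2), covariance = diageye(2))
    for i in eachindex(y)
        y[i] ~ softdot(θ, x[i], γ)
    end
end
```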

- **Forces premature message updates**
- **May break inference stability**

Use it carefully; it’s more for **debugging** deep models than regular optimization.
bvdmitri (Member):

this is hallucinated

Comment on lines +105 to +107
- **Cuts off legitimate computations**
- **Forces premature message updates**
- **May break inference stability**
bvdmitri (Member):

these bullets are entirely hallucinated

### How to Convert

- **Smoothing models** use future observations to improve past estimates (i.e., two-pass inference).
- **Filtering models** only use past and present data, updating beliefs in a **causal** fashion.
bvdmitri (Member) commented on Apr 28, 2025:

what is a causal fashion?

Comment on lines +120 to +122
**To convert:**
- Remove or disable factors that depend on future observations.
- Restrict message passing to only move forward in time.
bvdmitri (Member):

There are no capabilities in RxInfer to do that...

bvdmitri (Member) commented on May 9, 2025:

Hey @Thijsie2, do you need help with it?
