
Commit d66b44e

committed: some cleanup

1 parent 3ec9713 commit d66b44e

1 file changed: +6 -5 lines changed

content/posts/message_tracking/index.md

Lines changed: 6 additions & 5 deletions
@@ -3,7 +3,7 @@ title = "Automatic Message Tracking and Timing"
 date = 2025-01-01
 description = "How Mantra automatically tracks and times each message."
 [taxonomies]
-tags = ["mantra", "in-situ telemetry"]
+tags = ["mantra", "telemetry"]
 [extra]
 comment = true
 +++
@@ -20,8 +20,8 @@ While the main system will therefore have to perform a bit more work, the real-w
 In fact, after having implemented the design below, I found that the overhead was so minimal that I forewent the planned feature flag disabling of the tracking.
 
 Moving on, the main telemetry metrics I was interested in are:
-- message propagation latency: "how long does it take for downstream messages to arrive at different parts of the system based on an ingested message"
-- message processing time: "how long does it take for message of type `T` to be processed by system `X`"
+- message propagation latency: how long does it take for downstream messages to arrive at different parts of the system based on an ingested message
+- message processing time: how long does it take for message of type `T` to be processed by system `X`
 - what are the downstream message produced by a given ingested message
 
 This post will detail the message tracking design in **Mantra** to handle all of this as seemlessly as possible.
@@ -52,7 +52,7 @@ pub struct QueueMessage<T> {
 ```
 
 # `Actor`, `Spine` and `SpineAdapters`
-Now, it becomes extremely tedious and ugly if each of the `Producers` and `Consumers` has to take care of unpacking the `data`, process it, and then produce a new `QueueMessage` with the correct `origin_t` and `publish_t`, while also publishing the timing telemetry to the right timing queues.
+Now, it becomes extremely tedious and ugly if each of the `Producers` and `Consumers` have to take care of unpacking the `data`, process it, and then produce a new `QueueMessage` with the correct `origin_t` and `publish_t`, while also publishing the timing telemetry to the right timing queues.
 Instead, I designed **Mantra** in such a way that all of this is handled behind the scenes, and sub-systems can just take care of their business logic.
 
 We start by defining an `Actor` trait which is implemented by each sub-system. An `Actor` has a `name` which is used to create timing queues, a `loop_body` implementing the business logic, and potentially the `on_init` and `on_exit` functions which are called before the main `Actor` loop starts and after it finishes, respectively.
@@ -91,7 +91,8 @@ This looks a bit convoluted, but it is this combined `SpineAdapter` structure th
 the `timestamp` of that message is set on the `SpineProducers`, which is then attached to whatever message that the `Actor` produces based on the consumed one.
 It completely solves the first issue of manually having to unpack and repack each message.
 
-The second part is the automatic latency and processing time tracking of the messages. To enable this, we define a slightly augmented `Consumer` that holds a `Timer`:
+The second part is the automatic latency and processing time tracking of the messages. To enable this, we define a slightly augmented `Consumer` that holds a [`Timer`](@/posts/icc_1_seqlock/index.md#timing-101):
+
 ```rust
 #[derive(Clone, Copy, Debug)]
 pub struct Consumer<T: 'static + Copy + Default> {