`content/posts/message_tracking/index.md` (6 additions, 5 deletions)
````diff
@@ -3,7 +3,7 @@ title = "Automatic Message Tracking and Timing"
 date = 2025-01-01
 description = "How Mantra automatically tracks and times each message."
 [taxonomies]
-tags = ["mantra", "in-situ telemetry"]
+tags = ["mantra", "telemetry"]
 [extra]
 comment = true
 +++
````
````diff
@@ -20,8 +20,8 @@ While the main system will therefore have to perform a bit more work, the real-w
 In fact, after having implemented the design below, I found that the overhead was so minimal that I forewent the planned feature flag disabling of the tracking.
 
 Moving on, the main telemetry metrics I was interested in are:
-- message propagation latency: "how long does it take for downstream messages to arrive at different parts of the system based on an ingested message"
-- message processing time: "how long does it take for message of type `T` to be processed by system `X`"
+- message propagation latency: how long does it take for downstream messages to arrive at different parts of the system based on an ingested message
+- message processing time: how long does it take for a message of type `T` to be processed by system `X`
 - what are the downstream messages produced by a given ingested message
 
 This post will detail the message tracking design in **Mantra** to handle all of this as seamlessly as possible.
````
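The next hunk shows only the closing fence of the post's `QueueMessage<T>` definition, so as a point of reference, here is a rough sketch of what such a wrapper could look like, inferred from the fields the post names (`data`, `origin_t`, `publish_t`). The `u64` nanosecond timestamps and the helper method are assumptions for illustration, not Mantra's actual code:

```rust
/// Sketch of a tracked message wrapper; the field names follow the
/// post, the u64-nanosecond timestamp representation is an assumption.
pub struct QueueMessage<T> {
    /// Timestamp of the originally ingested message this one derives from.
    pub origin_t: u64,
    /// Timestamp at which this message was published to its queue.
    pub publish_t: u64,
    /// The payload the sub-systems actually care about.
    pub data: T,
}

impl<T> QueueMessage<T> {
    /// First metric above: time from ingestion of the originating
    /// message to publication of this downstream one.
    pub fn propagation_latency(&self) -> u64 {
        self.publish_t.saturating_sub(self.origin_t)
    }
}
```

Carrying `origin_t` on every derived message is what makes the first metric answerable at any point downstream, without consulting a central registry.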
````diff
@@ -52,7 +52,7 @@ pub struct QueueMessage<T> {
 ```
 
 # `Actor`, `Spine` and `SpineAdapters`
-Now, it becomes extremely tedious and ugly if each of the `Producers` and `Consumers` has to take care of unpacking the `data`, processing it, and then producing a new `QueueMessage` with the correct `origin_t` and `publish_t`, while also publishing the timing telemetry to the right timing queues.
+Now, it becomes extremely tedious and ugly if each of the `Producers` and `Consumers` have to take care of unpacking the `data`, processing it, and then producing a new `QueueMessage` with the correct `origin_t` and `publish_t`, while also publishing the timing telemetry to the right timing queues.
 Instead, I designed **Mantra** in such a way that all of this is handled behind the scenes, and sub-systems can just take care of their business logic.
 
 We start by defining an `Actor` trait which is implemented by each sub-system. An `Actor` has a `name` which is used to create timing queues, a `loop_body` implementing the business logic, and potentially the `on_init` and `on_exit` functions which are called before the main `Actor` loop starts and after it finishes, respectively.
````
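Going purely by that description, the `Actor` trait presumably looks something like the sketch below; the method names come from the post, but the exact signatures (what `loop_body` receives, how the adapters are threaded through) are not visible in this diff and are guesses:

```rust
/// Hedged sketch of the `Actor` trait described above; only the
/// method names are from the post, the signatures are assumptions.
pub trait Actor {
    /// Used to name this sub-system's timing telemetry queues.
    fn name(&self) -> &str;

    /// The business logic, called on every iteration of the main loop.
    fn loop_body(&mut self);

    /// Optional hook, called once before the main loop starts.
    fn on_init(&mut self) {}

    /// Optional hook, called once after the main loop finishes.
    fn on_exit(&mut self) {}
}
```

Default empty bodies for `on_init` and `on_exit` would match the "potentially" in the prose: sub-systems override them only when needed.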
````diff
@@ -91,7 +91,8 @@ This looks a bit convoluted, but it is this combined `SpineAdapter` structure th
 the `timestamp` of that message is set on the `SpineProducers`, which is then attached to whatever message the `Actor` produces based on the consumed one.
 It completely solves the first issue of manually having to unpack and repack each message.
 
-The second part is the automatic latency and processing time tracking of the messages. To enable this, we define a slightly augmented `Consumer` that holds a `Timer`:
+The second part is the automatic latency and processing time tracking of the messages. To enable this, we define a slightly augmented `Consumer` that holds a [`Timer`](@/posts/icc_1_seqlock/index.md#timing-101):
````
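The diff ends mid-hunk, before the augmented `Consumer` itself appears. As a purely illustrative stand-in, a timing-aware consumer could be structured as below; the `Timer` here is a plain `std::time::Instant` stopwatch rather than the one from the linked Timing 101 section, and the `TimedConsumer` name and API are invented for this sketch:

```rust
use std::time::Instant;

/// Stand-in stopwatch for the post's `Timer`: start it before the
/// business logic runs, stop it after, and read elapsed nanoseconds.
#[derive(Default)]
pub struct Timer {
    start: Option<Instant>,
}

impl Timer {
    pub fn start(&mut self) {
        self.start = Some(Instant::now());
    }

    /// Elapsed nanoseconds since `start`, or 0 if never started.
    pub fn stop(&mut self) -> u64 {
        self.start
            .take()
            .map(|t| t.elapsed().as_nanos() as u64)
            .unwrap_or(0)
    }
}

/// Sketch of a "slightly augmented" consumer: every consumed message
/// is handled inside a timed section, so the processing time can be
/// published to the actor's timing queue without any involvement
/// from the sub-system's business logic.
pub struct TimedConsumer {
    timer: Timer,
}

impl TimedConsumer {
    pub fn consume<T>(&mut self, msg: T, mut handler: impl FnMut(T)) -> u64 {
        self.timer.start();
        handler(msg); // the Actor's business logic
        self.timer.stop() // processing time in ns, ready to publish
    }
}
```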
0 commit comments