
Conversation

@arielb1 (Contributor) commented Mar 4, 2025

This PR integrates tokio-metrics with metrics.rs

@arielb1 (Contributor, Author) commented Apr 13, 2025

Any progress here?

@Darksonn (Contributor) commented:

Sorry about the delay here. I think this code is a better fit for a separate crate that provides the integration. I don't see any reason it has to be in the same crate as the core tokio-metrics logic.

@arielb1 force-pushed the metrics-integration branch from 054745a to 83f4535 on April 16, 2025 20:59
@arielb1 (Contributor, Author) commented Apr 16, 2025

r? @jlizen

@carllerche (Member) commented:

I have spoken with @Darksonn and she is OK with us moving forward with this PR.

@jlizen (Member) left a comment

Core implementation looked good, but lack of docs and feature naming are blocking concerns.

Support for customizing metric prefix is not blocking, but I strongly encourage it and would like to see a follow-up GH issue at least.

The concern about maintaining the hardcoded list of field names is not blocking. I think it's a good idea, especially as we go on to do the same for TaskMetrics, but I'd defer to you on whether it's worth the trouble for now.
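As a sketch of what the prefix customization mentioned above could look like, a builder could thread a user-supplied prefix through metric-name construction. All names here (ReporterBuilder, metric_prefix, the "tokio_" default) are hypothetical stand-ins, not the PR's actual API:

```rust
// Hypothetical builder sketch for customizing the metric prefix; none of
// these names come from the PR itself.
struct ReporterBuilder {
    prefix: String,
}

impl ReporterBuilder {
    fn new() -> Self {
        // Assumed default prefix; purely illustrative.
        Self { prefix: "tokio_".to_string() }
    }

    // Override the prefix prepended to every published metric name.
    fn metric_prefix(mut self, prefix: &str) -> Self {
        self.prefix = prefix.to_string();
        self
    }

    // Build the full metric name for one RuntimeMetrics field.
    fn metric_name(&self, field: &str) -> String {
        format!("{}{}", self.prefix, field)
    }
}

fn main() {
    let default = ReporterBuilder::new();
    assert_eq!(default.metric_name("workers_count"), "tokio_workers_count");

    let custom = ReporterBuilder::new().metric_prefix("myapp_runtime_");
    assert_eq!(
        custom.metric_name("workers_count"),
        "myapp_runtime_workers_count"
    );
    println!("prefixing works");
}
```

A follow-up issue could then discuss whether the prefix should apply per-reporter or per-metric.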

/// ##### See also
/// - [`TaskMonitor::intervals`]:
/// produces [`TaskMetrics`] for user-defined sampling intervals, instead of cumulatively
@jlizen (Member) commented Apr 16, 2025

(Future scope, non-blocking)

Do you see any reason not to expose an API to also allow flushing TaskMonitor metrics to metrics.rs when used with its intervals() API? If not, would you be willing to cut a GH issue to track that follow-up work?

@arielb1 (Contributor, Author) replied:

If someone has a concrete enough want.

@jlizen (Member) replied:

One specific use case I've seen in the wild: profiling tasks spawned by the hyper server executor, separately from general future polling. This is particularly useful for capturing things like time-to-first-poll delay, which the runtime metrics don't cover.
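To make the "time to first poll" idea above concrete, here is a self-contained, std-only sketch of a wrapper future that records how long a task waited between creation and its first poll. tokio-metrics' TaskMonitor tracks this (and much more) for real; everything here (FirstPollDelay, the no-op waker) is illustrative, not the crate's code:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};
use std::time::{Duration, Instant};

// Illustrative wrapper: records the delay between "spawn" and first poll.
struct FirstPollDelay<F> {
    inner: Pin<Box<F>>,
    spawned_at: Instant,
    first_poll_delay: Option<Duration>,
}

impl<F: Future> FirstPollDelay<F> {
    fn new(inner: F) -> Self {
        Self {
            inner: Box::pin(inner),
            spawned_at: Instant::now(),
            first_poll_delay: None,
        }
    }
}

impl<F: Future> Future for FirstPollDelay<F> {
    type Output = F::Output;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<F::Output> {
        if self.first_poll_delay.is_none() {
            // First poll: record the scheduling delay. A real reporter would
            // publish this to a histogram instead of storing it on the task.
            self.first_poll_delay = Some(self.spawned_at.elapsed());
        }
        self.inner.as_mut().poll(cx)
    }
}

// Minimal no-op waker so we can poll without a full executor.
fn noop_raw_waker() -> RawWaker {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(
        |_| noop_raw_waker(), // clone
        |_| {},               // wake
        |_| {},               // wake_by_ref
        |_| {},               // drop
    );
    RawWaker::new(std::ptr::null(), &VTABLE)
}

fn noop_waker() -> Waker {
    unsafe { Waker::from_raw(noop_raw_waker()) }
}

fn main() {
    let mut task = FirstPollDelay::new(async { 42 });
    std::thread::sleep(Duration::from_millis(5)); // simulate executor backlog
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match Pin::new(&mut task).poll(&mut cx) {
        Poll::Ready(v) => assert_eq!(v, 42),
        Poll::Pending => unreachable!("the async block completes immediately"),
    }
    assert!(task.first_poll_delay.unwrap() >= Duration::from_millis(5));
    println!("first poll delay: {:?}", task.first_poll_delay.unwrap());
}
```

An executor like hyper's would wrap each spawned future this way and flush the recorded delay into a metrics.rs histogram on each sampling interval.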

}
}

metric_refs! {
@jlizen (Member) commented:

The maintenance burden of this seems high; is there not a good way to derive this from the RuntimeMetrics fields, via a proc macro or otherwise?

@arielb1 (Contributor, Author) commented Apr 17, 2025

It does not look that high to me; you just have to copy fields over. I'd rather maintain the independence.

We can always turn struct RuntimeMetrics into a macro, but I think the juice is not worth the squeeze.

@jlizen (Member) replied:

How about a test case that does something hacky, like checking that the field count in the Debug impl of RuntimeMetrics matches the number of metrics published by this reporter?

Just so that we at least have a guardrail that will make CI fail if we forget.
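A std-only sketch of that guardrail, with a stand-in struct and metric list (not the crate's real fields): count the top-level fields in the pretty-printed Debug output and compare against the reporter's published names.

```rust
// Sketch of the suggested drift guardrail. Struct and names are stand-ins.
#[derive(Debug, Default)]
struct RuntimeMetrics {
    workers_count: usize,
    total_park_count: u64,
    total_busy_duration_ns: u64,
}

// Count top-level "name: value" lines in `{:#?}` output. Hacky, as noted,
// but enough to make CI fail when a field is added without a metric.
fn count_debug_fields(debug: &str) -> usize {
    debug
        .lines()
        .filter(|line| {
            let t = line.trim();
            t.contains(": ") && !t.ends_with('{')
        })
        .count()
}

fn main() {
    // The metric names the (hypothetical) reporter publishes.
    let published = ["workers_count", "total_park_count", "total_busy_duration_ns"];
    let debug = format!("{:#?}", RuntimeMetrics::default());
    assert_eq!(count_debug_fields(&debug), published.len());
    println!("reporter covers all {} fields", published.len());
}
```

In the real crate this would live in a `#[test]` and fail whenever RuntimeMetrics grows a field the reporter does not publish.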

@arielb1 force-pushed the metrics-integration branch 3 times, most recently from 0c986a4 to 3d95cbd on April 17, 2025 14:29
@jlizen (Member) left a comment

Big improvements on the docs, thanks for that!

Only remaining things for me are:

  • a unit test guarding against drift between RuntimeMetrics and the fields hardcoded into our reporter
  • the same concern about this being set as a default feature; could you share a bit more about the rationale?
  • a fix for the failing doctest

@arielb1 force-pushed the metrics-integration branch from 3d95cbd to 8adac45 on April 17, 2025 15:28
@arielb1 force-pushed the metrics-integration branch from 8adac45 to e3bdc10 on April 17, 2025 15:28
@arielb1 force-pushed the metrics-integration branch from 7b550fb to 0097f2d on April 17, 2025 15:39
@jlizen (Member) left a comment

Thanks for adding that test covering all fields! Looks like it was already useful; it turned up a few missing fields.

I'm good with this, will cut a separate issue for the TaskMetrics.

@arielb1 merged commit a893585 into tokio-rs:main on Apr 17, 2025
5 checks passed
@arielb1 mentioned this pull request on Apr 20, 2025