metrics.rs integration #68
Conversation
Any progress here?

Sorry about the delay here. I think this code is a better fit for a separate crate that provides the integration. I don't see any reason it has to be in the same crate as the core tokio-metrics logic.
Force-pushed 054745a to 83f4535
r? @jlizen

I have spoken with @Darksonn and she is OK with us moving forward with this PR.
jlizen
left a comment
The core implementation looks good, but the lack of docs and the feature naming are blocking concerns.
Support for customizing the metric prefix is not blocking, but I strongly encourage it and would at least like to see a follow-up GH issue.
The concern about maintaining the hardcoded list of field names is not blocking. I think it's a good idea, especially as we go on to do the same for TaskMetrics, but I'll defer to you on whether it's worth the trouble for now.
/// ##### See also
/// - [`TaskMonitor::intervals`]:
///   produces [`TaskMetrics`] for user-defined sampling intervals, instead of cumulatively
(Future scope, non-blocking)
Do you see any reason not to expose an API that also allows flushing TaskMonitor metrics to metrics.rs when used with its intervals() API? If not, would you be willing to cut a GH issue to track that follow-up work?
If someone has a concrete enough want
One specific use case I've seen in the wild: profiling tasks spawned specifically by the hyper server executor, separately from general future polling. This is particularly useful for capturing things like time-to-first-poll delay, which the runtime metrics don't cover.
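That time-to-first-poll measurement is what `TaskMonitor` tracks per instrumented task; as a std-only illustration of the idea (the `FirstPollDelay` wrapper and the manual no-op-waker poll below are assumptions for this sketch, not tokio-metrics internals):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};
use std::time::{Duration, Instant};

// Hypothetical wrapper future: records the delay between creation ("spawn"
// time) and the first time an executor polls it.
struct FirstPollDelay<F> {
    created: Instant,
    first_poll: Option<Instant>,
    inner: Pin<Box<F>>,
}

impl<F: Future> FirstPollDelay<F> {
    fn new(inner: F) -> Self {
        Self { created: Instant::now(), first_poll: None, inner: Box::pin(inner) }
    }
}

impl<F: Future> Future for FirstPollDelay<F> {
    type Output = F::Output;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<F::Output> {
        // First poll: stamp the time so the delay can be reported later.
        if self.first_poll.is_none() {
            self.first_poll = Some(Instant::now());
        }
        self.inner.as_mut().poll(cx)
    }
}

// Minimal no-op waker so we can poll the future by hand, without a runtime.
fn noop_waker() -> Waker {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(|_| RAW, |_| {}, |_| {}, |_| {});
    const RAW: RawWaker = RawWaker::new(std::ptr::null(), &VTABLE);
    unsafe { Waker::from_raw(RAW) }
}

fn main() {
    let mut task = FirstPollDelay::new(async { 41 + 1 });
    // Simulate scheduling delay before the "executor" first polls the task.
    std::thread::sleep(Duration::from_millis(10));

    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let result = Pin::new(&mut task).poll(&mut cx);

    let delay = task.first_poll.expect("polled at least once") - task.created;
    assert!(matches!(result, Poll::Ready(42)));
    assert!(delay >= Duration::from_millis(10));
    println!("time to first poll: {delay:?}");
}
```

In the real crate this bookkeeping lives inside the task instrumentation rather than a user-visible wrapper; the point is only that first-poll delay is a per-task property an executor-level monitor can capture and runtime-wide metrics cannot.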
}
}

metric_refs! {
The maintenance burden of this seems high; is there not a good way to derive this from the RuntimeMetrics fields via a proc macro or otherwise?
It does not look that high to me; you just have to copy fields over. I'd rather maintain the independence.
We could always turn `struct RuntimeMetrics` into a macro, but I think the juice is not worth the squeeze.
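The body of `metric_refs!` isn't shown in this excerpt; as a rough sketch of the pattern under discussion (the metric names below are placeholders, not the PR's actual list), a declarative macro can at least keep the published names in a single list:

```rust
// Hypothetical sketch of the metric_refs! pattern: each field name listed
// once, expanded into the string names published to the metrics backend.
macro_rules! metric_refs {
    ($($field:ident),* $(,)?) => {
        const METRIC_NAMES: &[&str] = &[$(stringify!($field)),*];
    };
}

// Placeholder field names, standing in for the RuntimeMetrics fields.
metric_refs! {
    workers_count,
    total_park_count,
    total_busy_duration,
}

fn main() {
    // Each listed field becomes exactly one published metric name.
    assert_eq!(METRIC_NAMES.len(), 3);
    assert_eq!(METRIC_NAMES[0], "workers_count");
    println!("{METRIC_NAMES:?}");
}
```

This keeps name generation mechanical, but it does not remove the duplication the reviewer is worried about: the list must still be kept in sync with the `RuntimeMetrics` struct by hand, which is what motivates the guardrail test suggested below in the thread.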
How about a test case that does something hacky like checking that the field count in the Debug impl of RuntimeMetrics matches the number of metrics published by this reporter?
Just so that we at least have a guardrail that will make CI fail if we forget.
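A std-only sketch of that guardrail idea (the struct and field names here are stand-ins, not the real `RuntimeMetrics` or the PR's list): pretty `Debug` output prints one `name: value` line per field, so counting those lines approximates the field count and can be compared against the reporter's hardcoded list.

```rust
// Stand-ins for the real RuntimeMetrics struct and the reporter's hardcoded
// metric list; the field names are assumptions for this sketch.
#[derive(Debug, Default)]
struct RuntimeMetricsStub {
    workers_count: usize,
    total_park_count: u64,
    total_busy_duration_ns: u64,
}

const PUBLISHED_METRICS: &[&str] = &[
    "workers_count",
    "total_park_count",
    "total_busy_duration_ns",
];

// `{:#?}` prints one `name: value,` line per field; counting the lines that
// contain a colon gives the struct's field count.
fn debug_field_count<T: std::fmt::Debug>(value: &T) -> usize {
    format!("{value:#?}")
        .lines()
        .filter(|line| line.contains(':'))
        .count()
}

fn main() {
    // The guardrail: this fails in CI if a field is added to the struct but
    // not to the published list (or vice versa).
    assert_eq!(
        debug_field_count(&RuntimeMetricsStub::default()),
        PUBLISHED_METRICS.len(),
    );
    println!("guardrail holds");
}
```

It is hacky, as the reviewer says: it only catches a count mismatch, not a renamed field, and it would miscount if a field's `Debug` output spans multiple lines containing colons. But it costs one test and catches the most likely drift.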
Force-pushed 0c986a4 to 3d95cbd
Big improvements on the docs, thanks for that!
Only remaining things for me are:
- how about a unit test guarding against drift between RuntimeMetrics and the fields hardcoded into our reporter?
- same concern about this being set as a default feature; could you share a bit more about the rationale?
- a fix for the failing doctest
Force-pushed 3d95cbd to 8adac45
Force-pushed 8adac45 to e3bdc10
Force-pushed 7b550fb to 0097f2d
jlizen
left a comment
Thanks for adding that test covering all fields! Looks like it was already useful; it turned up a few missing fields.
I'm good with this; I will cut a separate issue for TaskMetrics.
This PR provides an integration between tokio-metrics and metrics.rs.