MET-1593 DataDog compatible abstract metrics #906
Conversation
Important: Review skipped. Auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Configuration used: Repository: lokalise/coderabbit/.coderabbit.yaml. Review profile: CHILL. Plan: Pro.
📝 Walkthrough

This pull request introduces two new abstract metric classes, AbstractDimensionalCounterMetric and AbstractDimensionalHistogramMetric.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ADC as AbstractDimensionalCounterMetric
    participant Register as prom-client Register
    participant Counter as prom-client Counter
    Client->>ADC: new AbstractDimensionalCounterMetric(config, client)
    ADC->>ADC: Iterate each dimension
    loop For each dimension
        ADC->>Register: getSingleMetric('{prefix}_{dim}:{suffix}')
        alt Metric exists
            Register-->>ADC: Counter instance
        else Metric does not exist
            ADC->>Counter: new Counter({name, help, labelNames: []})
            Counter-->>ADC: Counter instance
            Register-->>Register: register new metric
        end
        ADC->>Counter: inc(0)
        ADC->>ADC: Store in map[dimension] = Counter
    end
    Client->>ADC: registerMeasurement({dim1: val1, dim2: val2})
    ADC->>ADC: Iterate measurement entries
    loop For each dimension in measurement
        ADC->>Counter: inc(value)
    end
```
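The counter flow in the diagram above can be sketched as a small self-contained TypeScript snippet. This is a hypothetical illustration, not the package's actual code: `CounterStub` stands in for prom-client's `Counter`, the registry is a plain `Map`, and the config field names follow the `namePrefix`/`nameSuffix`/`helpDescription` parameters discussed in this PR.

```typescript
// Stand-in for prom-client's Counter (hypothetical; real Counter has more API).
class CounterStub {
  value = 0
  inc(amount: number) {
    this.value += amount
  }
}

type CounterConfig = {
  namePrefix: string
  nameSuffix: string
  helpDescription: string
  dimensions: readonly string[]
}

class DimensionalCounterSketch {
  private counters = new Map<string, CounterStub>()

  constructor(config: CounterConfig, registry: Map<string, CounterStub> = new Map()) {
    for (const dim of config.dimensions) {
      const name = `${config.namePrefix}_${dim}:${config.nameSuffix}`
      // Reuse an already-registered metric with the same name, else create one
      let counter = registry.get(name)
      if (!counter) {
        counter = new CounterStub()
        registry.set(name, counter)
      }
      counter.inc(0) // expose the series at zero right away, as in the diagram
      this.counters.set(dim, counter)
    }
  }

  registerMeasurement(measurement: Record<string, number>) {
    for (const [dim, value] of Object.entries(measurement)) {
      this.counters.get(dim)?.inc(value)
    }
  }

  currentValue(dim: string): number | undefined {
    return this.counters.get(dim)?.value
  }
}
```

A caller would then record, say, `registerMeasurement({ successful: 20, failed: 10 })`, and each dimension increments its own standalone counter rather than a label on a shared one.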
```mermaid
sequenceDiagram
    participant Client
    participant ADH as AbstractDimensionalHistogramMetric
    participant Register as prom-client Register
    participant Histogram as prom-client Histogram
    Client->>ADH: new AbstractDimensionalHistogramMetric(config, client)
    ADH->>ADH: Iterate each dimension
    loop For each dimension
        ADH->>Register: getSingleMetric('{prefix}_{dim}:{suffix}')
        alt Metric exists
            Register-->>ADH: Histogram instance
        else Metric does not exist
            ADH->>Histogram: new Histogram({name, help, buckets, labelNames: []})
            Histogram-->>ADH: Histogram instance
            Register-->>Register: register new metric
        end
        ADH->>ADH: Store in map[dimension] = Histogram
    end
    Client->>ADH: registerMeasurement(dimension, {time} or {startTime, endTime})
    ADH->>ADH: Compute duration
    alt Duration computed
        ADH->>Histogram: observe({}, duration)
    else Dimension not found
        ADH->>ADH: no-op return
    end
```
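The histogram flow above, in particular the duration handling for the `{time}` vs `{startTime, endTime}` measurement shapes and the no-op on an unknown dimension, can be sketched the same way. Again a hypothetical illustration: `HistogramStub` stands in for prom-client's `Histogram`.

```typescript
// Stand-in for prom-client's Histogram (hypothetical; real Histogram has buckets etc.).
class HistogramStub {
  observations: number[] = []
  observe(_labels: Record<string, string>, value: number) {
    this.observations.push(value)
  }
}

type TimeMeasurement = { time: number } | { startTime: number; endTime: number }

class DimensionalHistogramSketch {
  private histograms = new Map<string, HistogramStub>()

  constructor(dimensions: readonly string[]) {
    for (const dim of dimensions) this.histograms.set(dim, new HistogramStub())
  }

  registerMeasurement(dimension: string, measurement: TimeMeasurement) {
    const histogram = this.histograms.get(dimension)
    if (!histogram) return // unknown dimension: no-op, as in the diagram
    // Either an explicit duration, or one derived from start/end timestamps
    const duration =
      'time' in measurement ? measurement.time : measurement.endTime - measurement.startTime
    histogram.observe({}, duration)
  }

  observationsOf(dimension: string): number[] {
    return this.histograms.get(dimension)?.observations ?? []
  }
}
```

Both measurement shapes land in the same per-dimension histogram; measurements against a dimension that was never configured are silently dropped.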
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
🧹 Nitpick comments (6)
packages/app/metrics-utils/src/AbstractDimensionalHistogramMetric.ts (2)
62-64: Consider adding parentheses for clarity on operator precedence.

The expression `time ?? endTime - startTime` relies on `??` having lower precedence than `-`, which is correct but not immediately obvious to readers.

✨ Suggested improvement for readability

```diff
  const { time, startTime, endTime } = measurement
- const duration = time ?? endTime - startTime
+ const duration = time ?? (endTime - startTime)
  histogram.observe({}, duration)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/app/metrics-utils/src/AbstractDimensionalHistogramMetric.ts` around lines 62 - 64, The expression computing duration in AbstractDimensionalHistogramMetric uses `time ?? endTime - startTime` which is correct but unclear; update the computation (referencing measurement, time, startTime, endTime and the histogram.observe call) to add parentheses for clarity, e.g. make the right-hand side explicitly `time ?? (endTime - startTime)` so readers immediately see the nullish-coalescing applies to the entire subtraction.
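The precedence claim in the comment above can be verified with a small self-contained snippet. The `duration` helper below is illustrative only, not the package's actual code.

```typescript
// `??` binds looser than `-`, so `time ?? endTime - startTime` already parses
// as `time ?? (endTime - startTime)`; the suggested parentheses only aid readers.
function duration(time: number | undefined, startTime: number, endTime: number): number {
  return time ?? endTime - startTime
}

// duration(undefined, 50, 150) === 100  (falls back to endTime - startTime)
// duration(30, 50, 150) === 30          (explicit time wins)
```

Note that mixing `??` with `||` or `&&` without parentheses is a syntax error in JavaScript, but mixing it with arithmetic operators like `-` is legal, which is why this compiles without the parentheses.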
40-42: Unsafe cast when reusing existing metric.

If a metric with the same name exists but is of a different type (e.g., a Counter), the cast `(existing as Histogram)` will succeed but the object won't have the expected `observe` method, causing a runtime error. Consider adding a type check or at minimum a defensive assertion.

🛡️ Proposed defensive check

```diff
  const existing = client.register.getSingleMetric(name)
- const histogram = existing
-   ? (existing as Histogram)
-   : new client.Histogram({
+ let histogram: Histogram
+ if (existing) {
+   if (typeof (existing as Histogram).observe !== 'function') {
+     throw new Error(`Metric "${name}" already exists but is not a Histogram`)
+   }
+   histogram = existing as Histogram
+ } else {
+   histogram = new client.Histogram({
      name,
      help: metricConfig.helpDescription,
      buckets: metricConfig.buckets,
      labelNames: [],
    })
+ }
  this.histograms.set(dimension, histogram)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/app/metrics-utils/src/AbstractDimensionalHistogramMetric.ts` around lines 40 - 42, The code unsafely casts an existing metric to Histogram using (existing as Histogram) which can hide type mismatches; update the reuse logic around client.register.getSingleMetric(name) in AbstractDimensionalHistogramMetric to perform a defensive type guard (e.g., check existing instanceof Histogram or inspect existing.type/name) before treating it as a Histogram, and if the existing metric is not a Histogram either throw or log a clear error and create a new Histogram (or avoid reusing) so you never call observe on an incompatible metric; reference the variables/functions client.register.getSingleMetric, existing, histogram, and the observe call when making this change.

packages/app/metrics-utils/src/AbstractDimensionalCounterMetric.spec.ts (1)
40-51: Tests for disabled metrics appear duplicated.

Similar to the histogram spec, "gracefully aborts if metrics are not enabled" and "should ignore measurements if client is not provided" test the same scenario. Consider consolidating.
Also applies to: 122-131
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/app/metrics-utils/src/AbstractDimensionalCounterMetric.spec.ts` around lines 40 - 51, The two tests in AbstractDimensionalCounterMetric.spec.ts that both check disabled metrics ("gracefully aborts if metrics are not enabled" and "should ignore measurements if client is not provided") are duplicative; consolidate them by removing one and expanding the remaining test to cover both conditions: construct ConcreteDimensionalCounterMetric with undefined client and with metrics disabled (if there’s a flag or constructor option), call registerMeasurement({ successful: 20, failed: 10 }) and assert counterMock and incMock were not called; update or keep the test name to clearly reflect both scenarios and remove the redundant test block referenced around registerMeasurement and the other duplicate at the later location.packages/app/metrics-utils/src/AbstractDimensionalHistogramMetric.spec.ts (1)
41-50: Tests for disabled metrics appear duplicated.

The tests "gracefully aborts if metrics are not enabled" (lines 41-50) and "should ignore measurements if client is not provided" (lines 106-115) cover the same scenario with slightly different wording. Consider consolidating or differentiating them.
Also applies to: 106-115
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/app/metrics-utils/src/AbstractDimensionalHistogramMetric.spec.ts` around lines 41 - 50, The two tests duplicate behavior: both "gracefully aborts if metrics are not enabled" and "should ignore measurements if client is not provided" verify that ConcreteDimensionalHistogramMetric.registerMeasurement does nothing when no client/metrics are provided; consolidate by removing one test and keeping a single, clearly named spec (e.g., "ignores measurements when no metrics client is provided") that constructs ConcreteDimensionalHistogramMetric(undefined), calls registerMeasurement('successful', { time: 100 }), and asserts observeMock was not called; alternatively, if you want both tests to remain, change one to cover a distinct case (e.g., metrics disabled via a flag or null client vs undefined) and reference ConcreteDimensionalHistogramMetric and registerMeasurement in the updated assertion to make the difference explicit.packages/app/metrics-utils/src/AbstractDimensionalCounterMetric.ts (2)
39-42: Unsafe cast when reusing existing metric.

Same issue as in AbstractDimensionalHistogramMetric: if a metric with the same name exists but is a different type (e.g., a Histogram), the cast `(existing as Counter)` will succeed but the object won't have the expected `inc` method.

🛡️ Proposed defensive check

```diff
  const existing = client.register.getSingleMetric(name)
- const counter = existing
-   ? (existing as Counter)
-   : new client.Counter({ name, help: metricConfig.helpDescription, labelNames: [] })
+ let counter: Counter
+ if (existing) {
+   if (typeof (existing as Counter).inc !== 'function') {
+     throw new Error(`Metric "${name}" already exists but is not a Counter`)
+   }
+   counter = existing as Counter
+ } else {
+   counter = new client.Counter({ name, help: metricConfig.helpDescription, labelNames: [] })
+ }
  counter.inc(0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/app/metrics-utils/src/AbstractDimensionalCounterMetric.ts` around lines 39 - 42, The current reuse in AbstractDimensionalCounterMetric unsafely casts an existing metric returned by client.register.getSingleMetric(name) to a Counter; instead, add a defensive check before reusing: inspect the retrieved existing object (variable existing) to ensure it is actually a Counter (e.g., verify typeof existing.inc === "function" or check constructor/type against client.Counter) and if it is not, raise a clear error or avoid reusing it (throw or log and create a new uniquely named metric) rather than casting; update the logic around the counter variable so only a validated Counter is reused and otherwise handle the name/type conflict explicitly.
15-21: `buildDimensionalMetricName` is duplicated across files.

This helper function is identical in both AbstractDimensionalCounterMetric.ts and AbstractDimensionalHistogramMetric.ts. Consider extracting it to a shared utility if more dimensional metric types are expected.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/app/metrics-utils/src/AbstractDimensionalCounterMetric.ts` around lines 15 - 21, The helper function buildDimensionalMetricName is duplicated; extract it into a shared utility module (e.g., create a new exported function buildDimensionalMetricName in a metrics-utils/helpers or utils file), replace the local buildDimensionalMetricName definitions in AbstractDimensionalCounterMetric and AbstractDimensionalHistogramMetric with imports from that new module, and update any references in methods like the ones inside AbstractDimensionalCounterMetric (and the histogram counterpart) to use the imported function; remove the duplicated function definitions from both files and ensure the new utility is exported for reuse.
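If the helper does get extracted as suggested, a minimal sketch might look like the following. The module location and exact signature are assumptions; the name pattern follows the `{prefix}_{dim}:{suffix}` format shown in the sequence diagrams.

```typescript
// Hypothetical shared helper for dimensional metric names; the package's real
// implementation may differ in signature and location.
function buildDimensionalMetricName(
  namePrefix: string,
  dimension: string,
  nameSuffix: string,
): string {
  return `${namePrefix}_${dimension}:${nameSuffix}`
}
```

For example, `buildDimensionalMetricName('request_duration', 'successful', 'histogram')` yields `request_duration_successful:histogram`, matching the sample output quoted from the README elsewhere in this review.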
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/app/metrics-utils/README.md`:
- Around line 182-185: The fenced code block containing the two metric lines
("request_duration_successful:histogram" and
"request_duration_failed:histogram") lacks a language specifier; update the
opening backticks to include a language (for example change ``` to ```text or
```yaml) so the block becomes ```text (or another appropriate language) followed
by the two lines and a closing ``` to enable proper syntax highlighting and
consistency with other blocks.
- Around line 8-9: The table of contents links use mixed-case fragments that
don't match markdown-generated anchors; update the TOC entries for
AbstractDimensionalCounterMetric and AbstractDimensionalHistogramMetric to use
lowercase fragments (e.g., change
"[AbstractDimensionalCounterMetric](`#AbstractDimensionalCounterMetric`)" to
"[AbstractDimensionalCounterMetric](`#abstractdimensionalcountermetric`)" and
similarly for AbstractDimensionalHistogramMetric) so the links resolve correctly
to the README headings.
- Around line 130-134: The fenced code block showing metric output (the block
containing the lines starting with "pizza_delivery_delivered_to_customer",
"pizza_delivery_delivered_to_pickup_point", and "pizza_delivery_not_delivered")
lacks a language specifier; update that triple-backtick fence to include a
language like "text" or "prometheus" (e.g., ```text) so markdown linters
recognize the block type and maintain consistency.
---
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: lokalise/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5769c991-4336-43a6-9a12-8c2e26174006
📒 Files selected for processing (6)
- packages/app/metrics-utils/README.md
- packages/app/metrics-utils/src/AbstractDimensionalCounterMetric.spec.ts
- packages/app/metrics-utils/src/AbstractDimensionalCounterMetric.ts
- packages/app/metrics-utils/src/AbstractDimensionalHistogramMetric.spec.ts
- packages/app/metrics-utils/src/AbstractDimensionalHistogramMetric.ts
- packages/app/metrics-utils/src/index.ts
```typescript
namePrefix: string
nameSuffix: string
helpDescription: string
```
🟢 Could probably have some base dimensional params in AbstractMetric, similar to BaseMetricParams
Changes
We want new SLOs and alerts to be defined in DataDog, but that is incompatible with the way we currently add metrics.
Previously we had one metric registering multiple outcomes such as "successful" | "failed". These are labels. The problem is that DataDog is not able to read labels, so the successful and failed counters are both considered a single "event".
This PR adds dimensional abstract metrics, which are capable of registering multiple metrics based on the dimensions provided.
Existing metric result:
New metric result:
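The result screenshots are not reproduced here; as an illustrative sketch only (metric names borrowed from the package README's pizza_delivery example, the label name and values assumed), the before/after difference in exposition format looks roughly like:

```text
# Before: one metric, outcomes distinguished only by labels
pizza_delivery{result="delivered_to_customer"} 5
pizza_delivery{result="not_delivered"} 1

# After: one standalone metric per dimension, readable by DataDog
pizza_delivery_delivered_to_customer 5
pizza_delivery_not_delivered 1
```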
Checklist
major, minor, patch or skip-release

AI Assistance Tracking
We're running a metric to understand where AI assists our engineering work. Please select exactly one of the options below:
Mark "Yes" if AI helped in any part of this work, for example: generating code, refactoring, debugging support,
explaining something, reviewing an idea, or suggesting an approach.