Commit 6df12d2

docs: Working with histogram metrics (#1601)

1 parent 17a2d4e commit 6df12d2

1 file changed: docs/how-to-guides/write-dashboard-queries.md (+71 lines, -0 lines)
## Working with histogram metrics

The `metrics` table is currently more difficult to work with, especially for histogram instruments.
If possible, just use the aggregated `histogram_*` columns, something like this:

```sql
SELECT
    time_bucket($resolution, recorded_timestamp) AS x,
    sum(histogram_count) AS total_count,
    sum(histogram_sum) AS total_sum,
    sum(histogram_sum) / sum(histogram_count) AS average,
    min(histogram_min) AS min,
    max(histogram_max) AS max
FROM metrics
WHERE metric_name = '<fill in>'
GROUP BY x
```

If you need more detailed data, here's how.
The Logfire SDK typically uses exponential histograms rather than explicit buckets.
A single database row contains a list of counts for buckets with mathematically defined boundaries.
Here's how to unpack this data into a more usable form. Copy the following into the start of a query:

```sql
with indices as (
    -- one row per (metrics row, bucket position) pair
    select *, unnest(generate_series(1, array_length(exp_histogram_positive_bucket_counts, 1)::integer)) as bucket_index
    from metrics
),
counts as (
    -- the count recorded in each bucket
    select *, exp_histogram_positive_bucket_counts[bucket_index] as bucket_count
    from indices
),
bounds as (
    -- bucket boundaries, clamped to the actual min/max of the histogram
    select
        counts.*,
        greatest(
            histogram_min,
            power(2.0, (exp_histogram_positive_bucket_counts_offset + bucket_index - 1) / power(2.0, exp_histogram_scale))
        ) as lower_bound,
        least(
            histogram_max,
            power(2.0, (exp_histogram_positive_bucket_counts_offset + bucket_index) / power(2.0, exp_histogram_scale))
        ) as upper_bound
    from counts
    where bucket_count > 0
),
```
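The boundary arithmetic in `bounds` follows the exponential histogram convention: with scale `s`, the bucket base is `2^(2^-s)`, and (after adding the offset) bucket `i` covers the range from `base^(i-1)` to `base^i`. As a sanity check, here is a small Python sketch of the same formula (an illustration of the SQL above, not part of the SDK; the function name is made up):

```python
def bucket_bounds(scale: int, offset: int, bucket_index: int) -> tuple[float, float]:
    """Lower/upper bound for 1-based bucket_index, mirroring the SQL:
    power(2.0, (offset + bucket_index - 1) / power(2.0, scale))."""
    lower = 2.0 ** ((offset + bucket_index - 1) / 2.0 ** scale)
    upper = 2.0 ** ((offset + bucket_index) / 2.0 ** scale)
    return lower, upper

# At scale 0 the base is 2, so the buckets are (1, 2], (2, 4], (4, 8], ...
print(bucket_bounds(scale=0, offset=0, bucket_index=1))  # (1.0, 2.0)
print(bucket_bounds(scale=0, offset=0, bucket_index=2))  # (2.0, 4.0)
```

Higher scales subdivide each power of two: at scale 1 the base is `2^0.5`, so there are two buckets between 1 and 2, and so on.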

Now you can query `bounds`. It will have the same columns as `metrics`, plus `bucket_index`, `bucket_count`, `lower_bound`, and `upper_bound`.
Each row in `metrics` will be unpacked into multiple rows in `bounds`, one per non-empty bucket.

You can unpack this further to get approximate individual items within each bucket:

```sql
bounds_with_index as (
    -- one row per item in each bucket
    select unnest(generate_series(0, bucket_count - 1)) as i, *
    from bounds
),
bucket_item_approx as (
    select
        case
            -- a lone item is placed at the bucket midpoint;
            -- otherwise items are spread evenly from lower_bound to upper_bound
            when bucket_count = 1 then (lower_bound + upper_bound) / 2
            else lower_bound + i * (upper_bound - lower_bound) / (bucket_count - 1)
        end as approx_item, *
    from bounds_with_index
)
```
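The interpolation in `bucket_item_approx` can be checked outside SQL as well. This Python sketch (an illustration, with a made-up function name) applies the same rule: the midpoint when a bucket holds a single item, otherwise values spaced evenly from the lower to the upper bound:

```python
def approx_items(lower: float, upper: float, count: int) -> list[float]:
    """Approximate the individual values in one bucket, mirroring the SQL CASE:
    midpoint for a single item, otherwise evenly spaced across the bucket."""
    if count == 1:
        return [(lower + upper) / 2]
    return [lower + i * (upper - lower) / (count - 1) for i in range(count)]

print(approx_items(2.0, 4.0, 1))  # [3.0]
print(approx_items(2.0, 4.0, 3))  # [2.0, 3.0, 4.0]
```

Note that this is only an approximation: the true values could lie anywhere within the bucket's bounds.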

Then `bucket_item_approx.approx_item` can be fed into e.g. percentile calculations (see the Web Server Metrics standard dashboard)
or the histogram recipe above.
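
As a rough illustration of that last step, here is a Python sketch of a nearest-rank percentile over some hypothetical `approx_item` values (the data and the `percentile` helper are made up for this example; the standard dashboard may compute percentiles differently):

```python
# Hypothetical approx_item values, as bucket_item_approx might yield them.
approx_values = [1.5, 2.0, 3.0, 4.0, 4.0, 8.0]

def percentile(values: list[float], p: float) -> float:
    """Nearest-rank percentile: rough, but fine for dashboard summaries."""
    ordered = sorted(values)
    rank = round(p * (len(ordered) - 1))
    return ordered[rank]

print(percentile(approx_values, 0.5))   # 3.0
print(percentile(approx_values, 0.95))  # 8.0
```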
