doc/source/rest.j2
different portions according to their participation in the timeframe being
consulted. Therefore, if one wants to use the re-aggregation process, one
needs to set the `use_history` option to `False`.

Resampling
~~~~~~~~~~

The Gnocchi aggregates API also supports resampling aggregated data. For
instance, suppose we have a metric `metric_one` to which we push data every
10 minutes, and an archive policy generating hourly data points for the
aggregation methods `max`, `min`, and `mean`; now we want to re-aggregate
those points to obtain the `max` or `min` of each day. One alternative is to
retrieve all of the data points for the time interval and process the output
on the client side to find the daily `max` and `min`; that is neither
efficient nor scalable. The other alternative is to let Gnocchi resample the
already aggregated data and generate the new data points at runtime. For
instance, we can use the following operation with the aggregates API to get
the daily `max` data points for the metric in a given interval; the value
`86400` (seconds) expresses the daily granularity of the re-aggregation.
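To illustrate what the daily re-aggregation computes, the sketch below buckets hourly `(timestamp, value)` pairs into 86400-second windows and takes the maximum of each bucket. This is a client-side simplification for explanation only; Gnocchi performs the equivalent work server-side, and the helper name `resample_max` is ours, not part of the API.

```python
from collections import defaultdict


def resample_max(points, granularity=86400):
    """Simplified view of a daily "resample max" over finer-grained points.

    points: iterable of (unix_timestamp, value) aggregates (e.g. hourly).
    Returns {bucket_start_timestamp: max_value} at the new granularity.
    """
    buckets = defaultdict(list)
    for ts, value in points:
        # Align each point to the start of its 86400-second (daily) window.
        buckets[ts - ts % granularity].append(value)
    return {start: max(vals) for start, vals in sorted(buckets.items())}


# Two days of synthetic hourly points: the value is the hour of the day,
# so each day's maximum is 23.0.
hourly = [(3600 * h, float(h % 24)) for h in range(48)]
daily = resample_max(hourly)  # one bucket per day
```

This local computation is exactly what we would like to avoid doing client-side at scale, which is where the server-side `resample` operation comes in.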

.. note::

   (resample max 86400 (aggregate max (metric metric_one max)))

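A minimal sketch of how such a query could be submitted to the aggregates API. The host name and token are placeholders, and the exact request shape (the `operations` body key and the `start`/`stop` query parameters) should be checked against your Gnocchi version's API reference; the snippet only constructs the request, it does not send it.

```python
import json
from urllib.parse import urlencode

# Placeholders: adapt the base URL and authentication to your deployment.
base = "http://gnocchi.example.com/v1/aggregates"
window = {"start": "2023-01-01T00:00:00", "stop": "2023-01-08T00:00:00"}

# The resample operation from the note above, sent as the request body.
body = json.dumps(
    {"operations": "(resample max 86400 (aggregate max (metric metric_one max)))"}
)
url = f"{base}?{urlencode(window)}"

# POST `body` to `url` with Content-Type: application/json and your auth
# token, e.g. via urllib.request.Request(url, data=body.encode(),
# method="POST"), or any HTTP client of your choice.
```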
The same can be done for `min`, and so on. Other operations, such as `sum`,
`first`, and `last`, can also be used. The possible operations for resampling
are the following: