Describe the bug
The max_query_length value in the limits config does not seem to work as expected when query sharding is enabled.
We store our Loki logs for 90 days, but only want our users to be able to search 30 days at once (anywhere within the 90 day window). Following the documentation we expected max_query_length to enforce this, as its default value is already 30d1h. But even with this value we can still query the entire retention period.
After checking the Loki query-frontend logs, it appears that query sharding splits longer queries into 24h sub-queries without considering the overall query length. This lets every individual sub-query run, as none of them ever reaches max_query_length.
I could also verify this behaviour by testing different values for max_query_length. With any value below 24h, a query longer than 24h is cancelled, but as soon as max_query_length is set to 24h or higher, the limit is never hit and all queries return logs.
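For reference, a minimal sketch of the relevant parts of our configuration (the concrete values are illustrative, not our full config):

```yaml
# Sketch of the settings involved; values are examples, not the exact production config.
query_range:
  parallelise_shardable_queries: true   # query sharding enabled

limits_config:
  retention_period: 90d                 # logs are kept for 90 days
  max_query_length: 30d1h               # expected to cap a single query at ~30 days
  split_queries_by_interval: 24h        # queries are split into 24h sub-queries
```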
To Reproduce
Steps to reproduce the behavior:
- Started Loki with query sharding enabled (tested in version 3.5.7)
- Started Grafana and connected the Loki datasource
- Searched Loki for a time period >30d1h (as that's the default for max_query_length)
- The query still returns a response and logs, although it exceeds the limit.
Expected behavior
I would expect the Loki query-frontend to recognise that the query exceeds max_query_length and to cancel it. This should happen independently of query sharding, since the overall query is still longer than the limit even if the individual sub-queries are shorter.
Environment:
- Infrastructure: Kubernetes
- Deployment tool: Helm chart v6.46.0
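For completeness, a sketch of how the limit is set in our Helm values (assuming the grafana/loki chart's loki.limits_config key; the exact values structure may differ between chart versions):

```yaml
# values.yaml (sketch; key path assumed for the grafana/loki chart)
loki:
  limits_config:
    max_query_length: 30d1h   # the limit we expect to be enforced per query
```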