Short description
The decision log plugin uses an adaptive uncompressed limit (softLimit) to make an educated guess at how many events can fit into a single upload (referred to as a chunk). The benefit is that the plugin doesn't have to compress each event upfront to figure out whether the limit has been reached, because if a new event pushed a chunk over the limit, the plugin would have to decompress and re-compress potentially thousands of events.
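As a rough sketch of that idea (the names below are illustrative and simplified, not OPA's actual encoder code), the plugin effectively does something like:

```go
package logs

// Illustrative sketch only; field and method names do not match
// OPA's real implementation.
type chunkEncoder struct {
	uploadLimit int64 // configured upload_size_limit_bytes (compressed)
	softLimit   int64 // adaptive guess for the uncompressed chunk size
	written     int64 // uncompressed bytes in the current chunk
}

// An event is accepted without any compression work as long as it fits
// under the current soft limit.
func (e *chunkEncoder) fits(eventSize int64) bool {
	return e.written+eventSize <= e.softLimit
}

// When a finished chunk compresses to well under the upload limit, the
// soft limit grows so the next chunk can hold more events.
func (e *chunkEncoder) scaleUp() {
	e.softLimit *= 2
}

// The bug described below: after each upload the soft limit is reset to
// its initial value, discarding what was learned about compressibility.
func (e *chunkEncoder) reset() {
	e.softLimit = e.uploadLimit
	e.written = 0
}
```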
Unfortunately there is a bug: this adaptive uncompressed limit gets reset after each upload, so the plugin has to re-learn it from scratch on every upload cycle.
Steps To Reproduce
Using the latest OPA version (v1.4.2) with the following policy:
```rego
package example

allow if {
	true
}
```
and the following config:
```yaml
services:
  fakeservice:
    url: http://localhost:8080

decision_logs:
  service: fakeservice
  reporting:
    upload_size_limit_bytes: 1000
    min_delay_seconds: 5
    max_delay_seconds: 5
```
And a simple Go server that prints the number of events received per upload (a minimal sketch follows).
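Here is a minimal sketch of such a server, assuming the uploads arrive as gzip-compressed JSON arrays POSTed to /logs (the port matches the fakeservice URL above):

```go
// main.go -- counts decision log events per upload.
package main

import (
	"compress/gzip"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	http.HandleFunc("/logs", func(w http.ResponseWriter, r *http.Request) {
		// Decision log uploads are gzip-compressed JSON arrays of events.
		gz, err := gzip.NewReader(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		defer gz.Close()

		var events []json.RawMessage
		if err := json.NewDecoder(gz).Decode(&events); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		fmt.Printf("%s -- Number of events: %d\n",
			time.Now().Format("15:04:05"), len(events))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```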
Start the OPA server and the Go server:

```shell
> opa run -c opa-conf.yaml --server ./example.rego
> go run main.go
```
Send 100 events:
```shell
> for i in {1..100}; do curl -X POST http://localhost:8181/v1/data/example/allow; done
```
Check output from the Go server:
```
15:36:09 -- Number of events: 2
15:36:09 -- Number of events: 5
15:36:09 -- Number of events: 10
15:36:09 -- Number of events: 10
15:36:09 -- Number of events: 10
15:36:09 -- Number of events: 10
15:36:09 -- Number of events: 10
15:36:09 -- Number of events: 10
15:36:09 -- Number of events: 10
15:36:09 -- Number of events: 10
15:36:09 -- Number of events: 13
15:36:19 -- Number of events: 2
15:36:19 -- Number of events: 5
15:36:19 -- Number of events: 10
15:36:19 -- Number of events: 10
15:36:19 -- Number of events: 10
15:36:19 -- Number of events: 10
15:36:19 -- Number of events: 10
15:36:19 -- Number of events: 10
15:36:19 -- Number of events: 10
15:36:19 -- Number of events: 10
15:36:19 -- Number of events: 13
```
Here you can see the problem: the first chunk contains only 2 events, but then the soft limit grows, allowing more events into each subsequent chunk. After the uploads are done, sending another 100 events shows the same pattern, because the soft limit was reset in between.
Expected behavior
I would expect the chunk size to stabilize so that each chunk contains as many events as possible, i.e. on the second batch of 100 events the first chunk should hold more than 2 events.