Description
The Cloudflare Logpush integration is incompatible with Elastic Agent v8.19.12+. The aws-s3 input fails at startup with a validation error because the region field is not set when using non_aws_bucket_name.
Root Cause
Elastic Agent v8.19.12 added a new validation in config.go that requires the region field (not default_region) to be set when non_aws_bucket_name is configured.
However, the Cloudflare Logpush integration's Handlebars template (aws-s3.yml.hbs) only maps the Fleet UI's "Default AWS Region" setting to default_region — it never sets region.
This means no matter what value a user enters in the Kibana Fleet UI, the region config key is never populated, and the agent-side validation always fails.
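For illustration only, the agent-side check behaves roughly like the sketch below. The type and field names here are invented for clarity; they are not copied from the actual config.go in Beats:

```go
package main

import (
	"errors"
	"fmt"
)

// config mirrors the relevant aws-s3 input settings (illustrative names).
type config struct {
	NonAWSBucketName string
	Region           string
	DefaultRegion    string
}

// validate sketches the kind of check introduced in v8.19.12: when a
// non-AWS bucket is configured, region (not default_region) must be set.
func (c config) validate() error {
	if c.NonAWSBucketName != "" && c.Region == "" {
		return errors.New("region must be set when non_aws_bucket_name is configured")
	}
	return nil
}

func main() {
	// default_region alone no longer satisfies the check.
	broken := config{NonAWSBucketName: "example-bucket", DefaultRegion: "auto"}
	fmt.Println(broken.validate())

	// Setting region makes validation pass.
	fixed := config{NonAWSBucketName: "example-bucket", Region: "auto"}
	fmt.Println(fixed.validate())
}
```

Because the integration's template only ever emits default_region, every rendered config takes the failing branch of a check like this one.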
Steps to Reproduce
- Install Elastic Agent v8.19.12 (or later)
- Add the Cloudflare Logpush integration via Kibana Fleet
- Configure an aws-s3 input using non_aws_bucket_name (e.g., for R2)
- Set "Default AWS Region" to any value in the Fleet UI
- Observe that the agent fails to start the input with a validation error about the missing region field
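For reference, the rendered input configuration in this scenario looks roughly like the following (the bucket name and region values are placeholders, not taken from a real deployment):

```yaml
- type: aws-s3
  non_aws_bucket_name: example-logpush-bucket  # placeholder R2 bucket
  default_region: auto                         # set from "Default AWS Region" in Fleet
  # region: is never rendered by aws-s3.yml.hbs, so the startup
  # validation in v8.19.12+ fails regardless of the UI value.
```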
Expected Behavior
The integration should work with Elastic Agent v8.19.12+. The rendered input config should include region so that the new validation passes.
Proposed Fix
The aws-s3.yml.hbs template in the Cloudflare Logpush integration package should be updated to also render:

```yaml
region: {{default_region}}
```

(or to introduce a new template variable) so that the region key is populated alongside default_region.
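As a sketch, the relevant portion of aws-s3.yml.hbs could be changed along these lines (the surrounding conditional shown here is assumed for illustration, not copied from the actual package):

```handlebars
{{#if default_region}}
default_region: {{default_region}}
region: {{default_region}}
{{/if}}
```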
Workarounds
Downgrade to an Elastic Agent release earlier than 8.19.12 (before the validation was added) — those versions still work with default_region alone.
Environment
- Elastic Agent version: 8.19.12
- Integration: Cloudflare Logpush (aws-s3 input)
- Input type: aws-s3 with non_aws_bucket_name