Adjusting the Frequency of Syslog to AWS S3 #23719

Answered by thomasqueirozb

Asobigwatermelon asked this question in Q&A
**Question**

When configuring syslog to AWS S3 with compression enabled (gzip), how can I modify the upload frequency (currently set to every 5 minutes)?

**Vector Config**

```yaml
aws_s3:
  type: aws_s3
  inputs:
    - rawlog
  bucket: xxxx
  compression: gzip
  content_encoding: gzip
  encoding:
    codec: json
  content_type: application/gzip
  filename_append_uuid: true
  region: xxxx
  storage_class: STANDARD
  key_prefix: "service=xxxx/year=%Y/month=%m/day=%d/"
  filename_time_format: "%Y%m%dT%H%M%SZ"
  auth:
    imds:
      connect_timeout_seconds: 3
      read_timeout_seconds: 5
      max_attempts: 3
  buffer:
    type: disk
    max_size: 107374182400
    when_full: drop_newest
  timezone: local
  batch:
    timeout_sec: 0.1
```

**Vector Logs**

No response
Answered by thomasqueirozb on Oct 1, 2025

Replies: 1 comment
-
You should be able to configure this by setting `batch.timeout_secs` to a lower value. However, if you are seeing a write only once every 5 minutes, this seems like a bug: the buffer should be flushed before the default value of `timeout_secs` is reached. There may be some weird behavior when the s3 sink is used with gzip.
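As a sketch of what that change could look like, here is a trimmed version of the sink config from the question with only the batch settings adjusted. The 30-second value is illustrative, not a recommendation, and note that the option name is `timeout_secs` (plural); the question's config uses `timeout_sec: 0.1`, which is not the documented spelling:

```yaml
# Hypothetical excerpt of the aws_s3 sink from the question,
# showing only the fields relevant to flush frequency.
aws_s3:
  type: aws_s3
  inputs:
    - rawlog
  bucket: xxxx
  compression: gzip
  batch:
    timeout_secs: 30   # flush at most every 30 s instead of the sink default
    # max_bytes / max_events, if set, can also trigger an earlier flush
```

Whichever of `timeout_secs`, `max_bytes`, or `max_events` is hit first triggers the upload, so lowering the timeout alone is enough to cap the interval between writes.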
Answer selected by pront