
Duplicate records when stream throughput limit exceeded #249

@adrian-skybaker

Description


I'm seeing significant numbers of duplicate records whenever I hit throttling on the Kinesis stream.

Obviously I realise I want to avoid throttling in the first place, but I'm wondering whether this is expected behaviour. For example, I would expect that even when batching, the plugin would only retry the failed parts of the batch.
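To make that concrete, here's a minimal sketch of the partial-batch retry I had in mind, written against the aws-sdk-go v1 PutRecords API. The helper name, retry cap, and structure are illustrative only, not the plugin's actual code:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/kinesis"
)

// putWithPartialRetry sends a batch and, on partial failure, re-sends only
// the entries whose result carries an error code (for example a throughput
// exceeded error), instead of the whole batch.
func putWithPartialRetry(client *kinesis.Kinesis, stream string, entries []*kinesis.PutRecordsRequestEntry) error {
	for attempt := 0; attempt < 3 && len(entries) > 0; attempt++ {
		out, err := client.PutRecords(&kinesis.PutRecordsInput{
			StreamName: aws.String(stream),
			Records:    entries,
		})
		if err != nil {
			return err
		}
		if aws.Int64Value(out.FailedRecordCount) == 0 {
			return nil
		}
		// out.Records is index-aligned with the request entries, so keep only
		// the ones that failed; successful entries are never re-sent and thus
		// cannot be duplicated by the retry.
		var failed []*kinesis.PutRecordsRequestEntry
		for i, res := range out.Records {
			if aws.StringValue(res.ErrorCode) != "" {
				failed = append(failed, entries[i])
			}
		}
		entries = failed
	}
	if len(entries) > 0 {
		return fmt.Errorf("%d records still failing after retries", len(entries))
	}
	return nil
}
```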

If this is not expected behaviour, I'm happy to provide more logging if that's helpful (the sample below is warning level and above).

This is using amazon/aws-for-fluent-bit:init-2.28.1.

Log sample:

2022-09-15T17:47:17.678+12:00 | time="2022-09-15T05:47:17Z" level=warning msg="[kinesis 0] 1/2 records failed to be delivered. Will retry.\n"
2022-09-15T17:47:17.678+12:00 | time="2022-09-15T05:47:17Z" level=warning msg="[kinesis 0] Throughput limits for the stream may have been exceeded."
2022-09-15T17:47:19.103+12:00 | [2022/09/15 05:47:19] [ warn] [engine] failed to flush chunk '1-1663220835.534380470.flb', retry in 11 seconds: task_id=1, input=forward.1 > output=kinesis.1 (out_id=1)
Output configuration:

[OUTPUT]
    Name kinesis
    Match service-firelens*
    region ${AWS_REGION}
    stream my-stream-name
    aggregation true
    partition_key container_id
    compression gzip
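For context on where I suspect the duplicates come from, here's a hedged sketch of how a whole-chunk retry could interact with a partially failed PutRecords call. This is my reading of the Fluent Bit output retry contract, not the plugin's actual source; the status type stands in for FLB_OK / FLB_RETRY from fluent-bit-go's output package:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/kinesis"
)

// flushStatus stands in for the status codes a Fluent Bit Go output plugin
// returns to the engine (FLB_OK / FLB_RETRY in fluent-bit-go's output package).
type flushStatus int

const (
	flushOK flushStatus = iota
	flushRetry
)

// flushChunk sends one Fluent Bit chunk as a single PutRecords batch.
// Reporting flushRetry hands the *whole* chunk back to the engine, which would
// match the "failed to flush chunk ... retry in 11 seconds" line above.
func flushChunk(client *kinesis.Kinesis, stream string, entries []*kinesis.PutRecordsRequestEntry) flushStatus {
	out, err := client.PutRecords(&kinesis.PutRecordsInput{
		StreamName: aws.String(stream),
		Records:    entries,
	})
	if err != nil || aws.Int64Value(out.FailedRecordCount) > 0 {
		// Only 1 of 2 records failed in the log above, but a chunk-level retry
		// re-sends both, which is one way duplicates could appear.
		return flushRetry
	}
	return flushOK
}
```

If the retry really is at the chunk level rather than the record level, that would explain why throttling on one record ends up duplicating the records that already succeeded.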
