
Retried ack requests are re-queued without respecting batch limits #2026

Closed
@1kilobit

Description


When a Pub/Sub backend error causes acks to be automatically re-queued for retry, and the ack queue is large enough, the retry occasionally fails with INVALID_ARGUMENT: Request payload size exceeds the limit: 524288 bytes.

This path appears to have been missed when batch size checks were added to the regular ack paths in PR #1963.

What the bug is, and what you expected to happen

The automatic message retry path doesn't appear to flush the queue before re-queueing messages, even when the maximum batch size would be exceeded. As a result, more ack IDs can end up batched into a single request than expected, which can produce the error above.

I'd expect the number of ack IDs in a single acknowledge request to never exceed the value specified in the maxMessages batching option.
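For illustration, here is a rough sketch of the kind of guard I'd expect on the retry path. None of these names (AckQueue, pending, flush, requeueForRetry) come from the library's source; they are hypothetical and only illustrate the expected behavior.

```ts
// Hypothetical sketch only; not the library's actual internals.
interface AckQueue {
  pending: string[];      // ack IDs waiting to be sent
  maxMessages: number;    // from the user's BatchOptions
  flush(): Promise<void>; // sends the current batch as one request
}

async function requeueForRetry(queue: AckQueue, retriedAckIds: string[]): Promise<void> {
  for (const ackId of retriedAckIds) {
    // Flush before the batch would exceed maxMessages, so a single
    // acknowledge request never carries more ack IDs than configured.
    if (queue.pending.length >= queue.maxMessages) {
      await queue.flush();
    }
    queue.pending.push(ackId);
  }
}
```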

Why you expect this behavior (e.g., a recent change, documentation that points to this behavior, etc.)

Documentation describes maxMessages in BatchOptions as controlling the maximum number of messages batched together in requests: https://googleapis.dev/nodejs/pubsub/latest/global.html#BatchOptions

Calling Message.ack(), Message.ackWithResponse(), Message.nack(), or Message.nackWithResponse() on the normal (non-retry) path batches messages according to the specified maxMessages, as documented.
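For reference, this is roughly how the batching limit is configured on the normal path, which behaves as documented. The subscription name and the numeric limits below are placeholders, not values from my setup.

```ts
import {PubSub, Message} from '@google-cloud/pubsub';

const pubsub = new PubSub();

// 'my-subscription' and the limits below are placeholder values.
const subscription = pubsub.subscription('my-subscription', {
  batching: {
    maxMessages: 1000,    // documented cap on messages batched per request
    maxMilliseconds: 100, // how long to wait before sending a partial batch
  },
});

subscription.on('message', (message: Message) => {
  // On this path, acks are correctly grouped into requests of at most
  // maxMessages ack IDs.
  message.ack();
});
```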

Metadata


    Labels

api: pubsub - Issues related to the googleapis/nodejs-pubsub API.
