KafkaProducer doesn’t flush #403

@Glutexo

Description

The Kafka producer in the MQ service writes event messages using the send method. Sending is asynchronous, though, and unless the blocking flush method is called, messages may be lost if the process exits. See the docs for an example.
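For illustration, here is a minimal sketch of the behavior using the kafka-python client; the bootstrap server and topic name below are placeholders:

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# send() is asynchronous: it only enqueues the message in an in-memory
# buffer and returns a future; a background I/O thread delivers it later.
producer.send("platform.inventory.events", b'{"type": "created"}')

# Without this blocking call, the process can exit before the background
# thread gets to deliver the buffered message, and it is silently lost.
producer.flush()
```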

When everything goes right, this doesn’t cause problems: the event loop keeps running, waiting for new messages, so there is always time for a sent message to actually be produced.

If something goes wrong, though (an error occurs, an exception is raised, or the consumer times out), the event loop ends and so does the whole process. It doesn’t wait for the producer thread; it kills it, and any buffered messages are lost. This can be prevented by calling the KafkaProducer.flush method after the loop ends, and possibly also when an exception occurs.

This needs to be implemented in the KafkaEventProducer wrapper.
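For instance, the wrapper could expose a blocking close method that delegates to flush. This is only a sketch; the attribute and method names below (_kafka_producer, write, close) are assumptions for illustration, not the repository’s actual interface:

```python
from kafka import KafkaProducer


class KafkaEventProducer:
    def __init__(self, bootstrap_servers, topic):
        self._kafka_producer = KafkaProducer(bootstrap_servers=bootstrap_servers)
        self._topic = topic

    def write(self, event):
        # Asynchronous: the serialized event only lands in a buffer here.
        self._kafka_producer.send(self._topic, event)

    def close(self):
        # Block until every buffered message is delivered (or fails),
        # so a process exit cannot drop in-flight events.
        self._kafka_producer.flush()
```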

Thinking out loud:

  • Maybe this is a case where catching ’em all is actually a good idea? (See the sketch after this list.)
  • Maybe run the consumer in a separate thread from the producer, so an exception doesn’t take down the whole process?
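Here is a sketch of how the catch-’em-all idea could combine with the flush fix, using the close method sketched above; the run, consumer, event_producer, and handler names are illustrative stand-ins for the service’s actual objects:

```python
import logging

logger = logging.getLogger(__name__)


def run(consumer, event_producer, handler):
    try:
        for message in consumer:  # blocks, waiting for new messages
            handler(message)
    except Exception:
        # Catching 'em all, so a stray exception can't skip the flush.
        logger.exception("Event loop terminated unexpectedly")
    finally:
        # Runs on clean exits, consumer timeouts, and exceptions alike;
        # deliver any buffered events before the process exits.
        event_producer.close()
```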
