It may still be possible that there are issues with

- nats, if jetstream support is disabled
- TLS integration, as this also hasn't been tested a lot and is usually non-trivial to set up

Due to the large code base, some endpoints may still show issues in production; they should therefore be tested locally first. They all worked locally for me and didn't show data loss during simple in-flight broker restarts. Kafka, MongoDB, IBM-MQ, Files, and Memory are considered production-ready.

### When to use mq-bridge

* **Hybrid Messaging**: Connect systems speaking different protocols (e.g., MQTT to Kafka) without writing custom adapters.
* **Infrastructure Abstraction**: Write business logic that consumes `CanonicalMessage`s, allowing you to swap the underlying transport (e.g., switching from RabbitMQ to NATS) via configuration.
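
To illustrate the hybrid-messaging idea, a route could be declared entirely in configuration. The sketch below is hypothetical: the key names (`routes`, `input`, `output`, `brokers`, etc.) are illustrative and not confirmed by this document, which only specifies endpoint options such as `response: {}` elsewhere.

```yaml
# Hypothetical mq-bridge route: consume from MQTT, publish to Kafka.
# All key names here are illustrative, not the library's confirmed schema.
routes:
  - name: mqtt-to-kafka
    input:
      mqtt:
        url: "tcp://localhost:1883"   # assumed broker address
        topic: "sensors/#"
    output:
      kafka:
        brokers: ["localhost:9092"]
        topic: "sensor-events"
```

Swapping the transport would then mean editing only this file, not the business logic that consumes `CanonicalMessage`s.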
* **Middleware**: Components that intercept and process messages (e.g., for error handling).
* **Handler**: A programmatic component for business logic, such as transforming/consuming messages (`CommandHandler`) or subscribing to them (`EventHandler`).

## Backend Features & Configuration

`mq-bridge` endpoints generally default to a **Consumer** pattern (Queue), where messages are persisted and distributed among workers. To achieve **Subscriber** (Pub/Sub) behavior, where messages are broadcast to all active instances, the specific backend must be configured accordingly; there is no global "subscriber mode" toggle.

The table below summarizes the capabilities and configuration for each backend:

| Backend | Subscriber Config (Pub/Sub) | Request-Reply | Nack Support |
|---|---|---|---|
| **ZeroMQ** | Set `socket_type: "sub"` (default is an ephemeral PULL socket) | No | No |
| **AMQP** | Set `subscribe_mode: true` | Emulated (Property) | **Yes** (Basic.nack) |
| **AWS** | N/A (Use SNS) | No | **Yes** (Visibility Timeout) |
| **File** | Set `mode: subscribe` | No | Simulated (In-Memory) |

* **Emulated**: Publishes a new message to a reply destination (specified by the `reply_to` metadata field) carrying a `correlation_id` metadata field.
* **Nack Support**: If "Yes", the backend supports explicit negative acknowledgement triggering redelivery. "Eventual" means redelivery depends on a timeout or connection drop. "Simulated" is handled in-memory by the bridge.
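
As a concrete example of switching a backend into Subscriber mode, an AMQP input might look like the sketch below. Only `subscribe_mode: true` comes from the table above; the surrounding keys (`input`, `amqp`, `url`, `exchange`) are assumed placeholders, not the library's confirmed schema.

```yaml
# Hypothetical AMQP endpoint switched from the default queue (consumer)
# pattern to pub/sub. Only `subscribe_mode: true` is documented here;
# the other keys are illustrative.
input:
  amqp:
    url: "amqp://localhost:5672"   # assumed broker address
    exchange: "events"
    subscribe_mode: true           # broadcast to every bridge instance
```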
### Response Endpoint

The `response` output endpoint allows sending a reply back to the requester. This is useful for synchronous request-reply patterns (e.g., HTTP-to-NATS-to-HTTP). It is only available if the **input** endpoint supports request-reply (HTTP, NATS, Memory, MongoDB). Use `response: {}` as the output endpoint configuration.

**Caveats**:

* If the input does not support responses (e.g., File, SQLx), the message sent to `response` will be dropped.
* Ensure timeouts are configured correctly on the requester side, as the bridge processing time adds latency.
* Middleware that drops metadata (like `correlation_id`) may break the response chain.
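
A request-reply route using the `response` endpoint might be configured as sketched below. `response: {}` is taken from this document; the route structure and HTTP keys (`address`, `path`) are assumptions for illustration only.

```yaml
# Hypothetical request-reply route: accept an HTTP request and reply
# to the original caller. `response: {}` is documented; other keys
# are illustrative.
routes:
  - name: http-echo
    input:
      http:
        address: "0.0.0.0:8080"   # assumed listener config
        path: "/api/requests"
    output:
      response: {}                # send the result back to the requester
```

Because the reply is correlated via metadata, any middleware in this route must preserve fields like `correlation_id`.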
## AI Disclaimer

This library has been widely written with AI assistance.

Some of the code (the core, for example) was originally written by myself, but most of the rest was generated by AI. I mostly used Gemini for planning and writing, CodeRabbit for reviews, and Claude for bugfixing and more complicated tasks that Gemini couldn't solve properly.

While some of the AI output was great, some other output wasn't. I am aware that in the year 2026, AI is still not generating perfect code; it sometimes breaks simple stuff or forgets important lines during refactorings, which then causes severe issues. I reviewed all the output code, cleaned it up manually, and re-specified or refactored it when it was insufficient. **I trust the current code as much as if it were completely written by myself.**

I didn't change the AI code's appearance, so you will sometimes still see code that looks as if it came straight from AI, and most of this readme was actually written by AI. I don't think it is bad practice to keep the original code and text appearance. I'm not a native English speaker, so the AI output for English text is just way better than my own writing. For AI code, the readability is usually good, even if it is more verbose than what I would write.

However, especially for the different endpoints, there is already a lot of existing code, and the AI could assist a lot there. That's mostly the reason why there are so many available endpoints in this library; they just could be added