Description
I know of more than one project that avoids publishing through the event store directly:
```ruby
event_store.publish(OrderPublished, stream_name: "Order$#{order.id}")
```
The reason is the requirement to always pass a stream name. That requirement seems very reasonable when you read the docs, but in practice it turns out tedious: you end up copy-pasting the stream name everywhere. So people build facades over the event store to relieve that, letting them publish events without specifying the stream name each time and instead "fetching" it from another place. This is less of a problem in greenfield, event-sourced, or very neatly bounded systems, but you pretty much have to face the issue in legacy environments.
Examples I've seen:

- `event_store.publish(OrderPublished.new(...))`, where `event_store` here is a wrapper over RES which internally has a mapping like `OrderPublished => ->(event) { "Order$#{event.data.fetch(:order_id)}" }`.
- `Event.publish!(OrderPublished.new(...), order)`, which internally has a mapping from an object (`order`) to the stream to which this event is relevant.
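The first kind of facade can be sketched roughly like this (all names here are hypothetical, and the underlying store is reduced to a plain `publish(event, stream_name:)` call for illustration):

```ruby
# A minimal sketch of a facade over the event store that resolves the
# stream name from a per-event-type mapping before delegating.
# PublishingFacade and STREAM_FOR are illustrative names, not RES API.
class PublishingFacade
  STREAM_FOR = {
    "OrderPublished" => ->(event) { "Order$#{event.data.fetch(:order_id)}" }
  }

  def initialize(event_store)
    @event_store = event_store
  end

  def publish(event)
    # Look up the stream resolver for this event type and apply it,
    # so callers never pass stream_name: themselves.
    stream = STREAM_FOR.fetch(event.event_type).call(event)
    @event_store.publish(event, stream_name: stream)
  end
end
```

The obvious downside is that every project reinvents this wrapper, which is exactly what the proposal below tries to avoid.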
I think the problem is common enough that it would make sense to provide the ability to set up some kind of `StreamsNameProvider` (name is WIP), which would be able to answer questions about the streams for a given event. We would preserve the current behaviour by allowing these defaults to be overridden with the `stream_name:` keyword argument, as it is now.
An example implementation of such a provider within a project:

```ruby
module EventStore
  class StreamsNameProvider
    MAPPING = {
      "OrderPublished" => ->(event) { ["Order$#{event.data.fetch(:order_id)}"] }
    }

    def call(event, *, **)
      # Resolve the resolver lambda for this event type (if any) and
      # apply it; events without a mapping belong to no extra streams.
      MAPPING.fetch(event.event_type, nil)&.call(event) || []
    end
  end
end
```
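To make the intended semantics concrete, here is a hypothetical sketch of how a store could consult such a provider, with the explicit `stream_name:` keyword still taking precedence. The class and keyword names are mine for illustration, not existing RES API:

```ruby
# Illustrative only: a store wrapper that asks the provider for the
# target streams when no explicit stream_name: is given. The provider
# is anything responding to call(event) and returning an array of
# stream names (possibly empty), as in the example implementation above.
class EventStoreWithProvider
  def initialize(store:, streams_name_provider:)
    @store = store
    @provider = streams_name_provider
  end

  def publish(event, stream_name: nil)
    # Explicit stream_name: preserves today's behaviour; otherwise
    # fall back to the provider's answer for this event.
    streams = stream_name ? [stream_name] : @provider.call(event)
    streams.each { |s| @store.publish(event, stream_name: s) }
  end
end
```

Returning an array from the provider also leaves room for an event to be linked to several streams at once.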
This seems flexible enough to cover most production cases, and should be relatively easy to extend. To be clear, I am willing to implement it, but I wanted to open a discussion first.
WDYT?