Are message brokers necessary at all when using EventStoreDB?

Hi,

I realized that EventStoreDB can fully act as a message broker using catch-up subscriptions. This raises the question of whether there is any need for a message broker like Apache Kafka or RabbitMQ at all. I am asking because in the literature and in many articles, people use message brokers to integrate event-driven services.

Please correct me if I am wrong: in an event-driven microservices architecture, two services can be integrated using EventStoreDB’s subscription mechanisms. There is no need to publish events to a message broker and have the “subscriber service” read from a message queue; the same events can be consumed by reading directly from the database.
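
For illustration, here is a minimal sketch of what I mean by that direct approach, assuming the official Node.js client (@eventstore/db-client); the connection string and stream name are placeholders:

```ts
// Minimal catch-up subscription with the official Node.js client.
// The subscription first catches up on historical events, then stays
// live for new ones -- no broker in between.
import { EventStoreDBClient, START } from "@eventstore/db-client";

const client = EventStoreDBClient.connectionString(
  "esdb://localhost:2113?tls=false"
);

async function consumeDirectly() {
  const subscription = client.subscribeToStream("Order-123", {
    fromRevision: START,
  });

  for await (const resolvedEvent of subscription) {
    // The "subscriber service" handles events straight from the database.
    console.log(resolvedEvent.event?.type, resolvedEvent.event?.data);
  }
}

consumeDirectly().catch(console.error);
```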

So, are there any use cases where you would want to use a message broker for integration instead of EventStoreDB?

Thanks a lot in advance for your answers 🙂

Domenic


It depends on your needs. But I can tell you I used to have a thin app that subscribed to domain events and mapped them to integration events, which were then broadcast using a message broker. I decided to get rid of it and use “special streams” within EventStoreDB for integration.
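
A minimal sketch of that mapping, assuming the Node.js client; the event types, payloads, and stream name are hypothetical:

```ts
// Map a domain event to an integration event and append it to a
// dedicated "special" integration stream, all inside EventStoreDB.
import { EventStoreDBClient, jsonEvent } from "@eventstore/db-client";

const client = EventStoreDBClient.connectionString(
  "esdb://localhost:2113?tls=false"
);

// Hypothetical domain event payload.
interface OrderPaid {
  orderId: string;
  amount: number;
  customerId: string;
  internalLedgerRef: string; // internal detail, not for other services
}

async function publishIntegrationEvent(domainEvent: OrderPaid) {
  // The integration event exposes only what other services need.
  const integrationEvent = jsonEvent({
    type: "PaymentReceived",
    data: {
      orderId: domainEvent.orderId,
      amount: domainEvent.amount,
    },
  });

  await client.appendToStream("PaymentIntegration", integrationEvent);
}
```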

This approach also has the benefit of being able to replay events.

For my scenario that works well. But if you need different capabilities, such as sagas (distributed state machines that send commands in response to events), delayed/scheduled delivery, dead-letter queues, or competing consumers, it may be worth adding a message broker.

Although with persistent subscriptions you can now get competing consumers in EventStoreDB as well.
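
A sketch of competing consumers via a persistent subscription, assuming a recent version of the Node.js client; the stream and group names are placeholders, and the settings use the client's defaults:

```ts
import {
  EventStoreDBClient,
  persistentSubscriptionToStreamSettingsFromDefaults,
} from "@eventstore/db-client";

const client = EventStoreDBClient.connectionString(
  "esdb://localhost:2113?tls=false"
);

async function run() {
  // Create the subscription group once (this throws if it already exists).
  await client.createPersistentSubscriptionToStream(
    "PaymentIntegration",
    "payment-handlers",
    persistentSubscriptionToStreamSettingsFromDefaults()
  );

  // Any number of these consumers can connect to the same group; the
  // server load-balances events between them and tracks acknowledgements.
  const subscription = client.subscribeToPersistentSubscriptionToStream(
    "PaymentIntegration",
    "payment-handlers"
  );

  for await (const event of subscription) {
    try {
      // handle the event...
      await subscription.ack(event);
    } catch (err) {
      // On failure, nack with a strategy: park, retry, or skip.
      await subscription.nack("retry", String(err), event);
    }
  }
}

run().catch(console.error);
```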


You can use ESDB as a broker, but you’d need to ensure your integration streams (like the ones Diego describes) have some TTL defined, and you need to scavenge your database regularly.

It very much depends on the volumes and the overall architecture. For example, it’s easier to run a single shovel workload from ESDB to a broker, where messages published to the broker trigger some serverless workloads. You’d only need to maintain a single always-on service, and the rest can be serverless. It makes sense when you use cloud-native brokers like SNS/SQS, PubSub, Event Hub, etc.
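
A sketch of such a single always-on shovel, assuming the Node.js client and the AWS SDK v3 SNS client; the topic ARN and stream name are placeholders, and a production shovel would also checkpoint its position:

```ts
// Catch-up subscription on the integration stream that republishes
// each event to a cloud-native broker (SNS here). Messages on the
// topic can then fan out to serverless subscribers.
import { EventStoreDBClient, START } from "@eventstore/db-client";
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const esdb = EventStoreDBClient.connectionString(
  "esdb://localhost:2113?tls=false"
);
const sns = new SNSClient({});

async function shovel() {
  const subscription = esdb.subscribeToStream("PaymentIntegration", {
    fromRevision: START,
  });

  for await (const resolvedEvent of subscription) {
    if (!resolvedEvent.event) continue;

    await sns.send(
      new PublishCommand({
        // Placeholder ARN for illustration only.
        TopicArn: "arn:aws:sns:eu-west-1:123456789012:payment-integration",
        Message: JSON.stringify(resolvedEvent.event.data),
        MessageAttributes: {
          eventType: {
            DataType: "String",
            StringValue: resolvedEvent.event.type,
          },
        },
      })
    );
  }
}

shovel().catch(console.error);
```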


Excuse me, I am quite new to this concept. May I kindly ask you to explain what exactly you mean by integration streams and their TTL? At which point do we talk about integration streams, and what are the other kinds of streams?

Let’s say you have an entity called Order, and you have a lot of them. You’d have entity streams like Order-123, Order-234, etc., where 123 and 234 are order ids. These are normal streams, the primary use case.

You can have a stream named, for example, PaymentIntegration, and shovel all the integration events there, as you would with a broker. You’d then use a catch-up or persistent subscription to deliver these events to integration event handlers.
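
A sketch of what filling that stream could look like, assuming the Node.js client and that the system projections are enabled (the $ce-Order category stream comes from the $by_category projection); event names and payloads are hypothetical:

```ts
// Subscribe to the Order category stream and append a mapped
// integration event to PaymentIntegration for each relevant
// domain event.
import { EventStoreDBClient, START, jsonEvent } from "@eventstore/db-client";

const client = EventStoreDBClient.connectionString(
  "esdb://localhost:2113?tls=false"
);

async function shovelIntoIntegrationStream() {
  const subscription = client.subscribeToStream("$ce-Order", {
    fromRevision: START,
    resolveLinkTos: true, // category streams contain link events
  });

  for await (const resolvedEvent of subscription) {
    const event = resolvedEvent.event;
    if (event?.type !== "OrderPaid") continue; // only shovel what others need

    await client.appendToStream(
      "PaymentIntegration",
      jsonEvent({
        type: "PaymentReceived",
        data: { orderId: (event.data as { orderId: string }).orderId },
      })
    );
  }
}

shovelIntoIntegrationStream().catch(console.error);
```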

As integration events are transient, you can set $maxAge in the stream metadata, so these events will get scavenged. You don’t want them to be there forever.
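
Setting that metadata with the Node.js client could look like this; the 48-hour value is an arbitrary example, and note that expired events are only physically removed when a scavenge runs:

```ts
import { EventStoreDBClient } from "@eventstore/db-client";

const client = EventStoreDBClient.connectionString(
  "esdb://localhost:2113?tls=false"
);

// maxAge is in seconds; events older than this become eligible
// for removal on the next scavenge.
await client.setStreamMetadata("PaymentIntegration", {
  maxAge: 48 * 60 * 60,
});
```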


Ahh! Thanks a lot for clarifying. It is much clearer now! 🙂

So, to achieve the same with messaging, we could use a simple process that publishes the Order domain events to a message broker like Kafka, in which case the events would be published to a topic called PaymentIntegration. Did I get this right?
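
If so, a minimal sketch of that publisher with kafkajs might look like this; the broker address, key, and payload are placeholders:

```ts
// Small process that publishes Order integration events to a Kafka
// topic. In a real shovel this would be fed by a catch-up
// subscription on the Order streams, as in the sketches above.
import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "order-shovel",
  brokers: ["localhost:9092"],
});
const producer = kafka.producer();

async function run() {
  await producer.connect();
  await producer.send({
    topic: "PaymentIntegration",
    messages: [
      {
        key: "order-123",
        value: JSON.stringify({ type: "PaymentReceived", orderId: "123" }),
      },
    ],
  });
  await producer.disconnect();
}

run().catch(console.error);
```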

Sorry for the beginner question: What would the mapping between domain events and integration events look like? How do they differ from each other? Currently, I am using the same events for both domain and integration events.