How would one go about using Event Store in separate bounded contexts? I am not talking about using Event Store to communicate between bounded contexts, but about the fact that multiple BCs sometimes want to use an event store while still remaining independent of each other.
When using a database server, whether NoSQL or relational, there is a concept of databases, so I can host multiple databases on the same instance.
Are there any recommendations for solving this? For example, I think most of our BCs will use catch-up subscriptions via SubscribeToAll(), but we are not interested in events from other BCs. Maybe it can be solved entirely with namespaces for the events, but I would appreciate any thoughts on this.
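To make the problem concrete, here is a rough sketch (plain Python, not a real Event Store client; the stream names and feed shape are made up) of the client-side filtering I would rather avoid when consuming a SubscribeToAll()-style feed:

```python
# Simulated "all events" feed: each entry is (stream_name, event).
# In a real catch-up subscription these would arrive from the server;
# here we just filter them by a bounded-context prefix ourselves.

def bc_events(all_events, bc_prefix):
    """Yield only events whose stream name belongs to the given BC."""
    for stream, event in all_events:
        if stream.startswith(bc_prefix + "-"):
            yield stream, event

# A tiny feed mixing two bounded contexts (hypothetical names):
feed = [
    ("sales-1022", {"type": "OrderPlaced"}),
    ("billing-77", {"type": "InvoiceRaised"}),
    ("sales-1022", {"type": "OrderShipped"}),
]

sales_only = list(bc_events(feed, "sales"))
```

The downside of doing this client-side is that every BC's subscriber still receives (and discards) every other BC's events.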
I used category to distinguish between ‘databases’. I would subscribe to events of one category in order to process events from a BC. This way I was able to use the same ES instance in multiple BCs.
Yes, this is quite easy to do. As an example, you could just use a name prefix on the streams. Then you could write a simple projection to bundle all the events from a given prefix into a single stream.
Now if your stream names were, say, sales-whatever-1022, you would have a stream /sales/ with only the sales events in it, which you could use a catch-up subscription on.
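In effect, such a projection partitions the event log by prefix on the server side. Here is a small simulation of that idea in plain Python (this is not Event Store's projection API, just the bundling logic it performs):

```python
# Simulate a projection that links every event into one bundled stream
# per name prefix, e.g. 'sales-whatever-1022' -> bundle 'sales'.

def bundle_by_prefix(events):
    """Group (stream_name, event) pairs into one bundle per prefix."""
    bundled = {}
    for stream, event in events:
        prefix = stream.split("-", 1)[0]  # text before the first dash
        bundled.setdefault(prefix, []).append(event)
    return bundled

feed = [
    ("sales-whatever-1022", {"type": "OrderPlaced"}),
    ("billing-42", {"type": "InvoiceRaised"}),
    ("sales-whatever-1022", {"type": "OrderShipped"}),
]

streams = bundle_by_prefix(feed)
# streams["sales"] now holds only the sales events, which is what a
# catch-up subscription on the bundled stream would see.
```

Each BC then subscribes only to its own bundled stream instead of filtering the whole log itself.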
“Category” is a projections concept. The docs for projections are currently being written, as projections will be released soon. There are some works in progress already posted in the documentation but not linked from the main page (go to Pages); they will be completed over the next week or so.
The only problem I see with this approach is that there is no (easy) concept of individual backup and restore, since all events live in a single “database”. Have you ever considered introducing something similar to multiple “databases” (i.e. separate physical storage) per node?
The other thing you can do, of course (in OSS at least), is run more than one node on the same box (and use something like nginx to route requests to the correct instance based on hostname or similar, if you don’t want to open a ton of firewall ports…).
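For illustration, the hostname routing could look roughly like this (the hostnames and ports are hypothetical, and this is only a minimal sketch, not a production config):

```nginx
# Two Event Store nodes on one box, e.g. listening on 2113 and 2114,
# fronted by nginx so only port 80 needs to be open.
server {
    listen 80;
    server_name sales-es.example.local;
    location / {
        proxy_pass http://127.0.0.1:2113;
    }
}

server {
    listen 80;
    server_name billing-es.example.local;
    location / {
        proxy_pass http://127.0.0.1:2114;
    }
}
```

Each bounded context then talks to "its" hostname and gets its own isolated instance, including independent backup and restore.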
We’ll come back to that in the not-too-distant future, since we have to provide failover for certain customers. It’s in our queue. But thanks for mentioning it.
Jokes aside, do you expect that native support for physical multitenancy will greatly improve memory/CPU usage compared to running multiple instances on the same node?
Btw, much of the work for sharding/multitenancy overlaps (tb sharding is just putting multiple tenant instances into one set … management etc. is all the same).