Hello,
I’m looking to run Event Store in a highly available setup across multiple datacenters.
The requirement is that all events should be published to all datacenters.
Here are some approaches I’ve considered:
- Create an Event Store cluster that spans datacenters.
Con: it’s too easy to lose half the cluster, and the quorum with it (e.g. with nodes split evenly across two datacenters, losing either datacenter loses quorum).
- Create one cluster with multiple replication groups, one per datacenter.
Con: I can’t find anything in the documentation saying whether Event Store supports this.
- Put the initial event through a globally distributed queue. Then, in each datacenter,
pull from the queue, publish to the local Event Store cluster, and do all other processing locally.
(A rough sketch of this follows the list.)
Cons:
- more technologies involved
- (aesthetic?) the command processor that generates the original events has to read from
the local Event Store but write to the queue.
- Write an event listener that copies missing events between datacenters. Either:
a. listen for all events in the local datacenter and publish them to the others, or
b. listen for all events in the other datacenters and publish them locally.
(A rough sketch of (a) follows the list.)
Cons:
- have to be very careful not to get into infinite loops
- may have to worry about message ordering
- Anything else?
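
To make option 3 concrete, here is a rough sketch of the per-datacenter worker I have in mind.
I’m assuming the @eventstore/db-client TypeScript package for the local append; the
QueueMessage/QueueConsumer types are hypothetical stand-ins for whichever globally
distributed queue ends up being used.

import { EventStoreDBClient, jsonEvent } from "@eventstore/db-client";

// Hypothetical wrapper around whichever globally replicated queue is chosen
// (Kafka, Pub/Sub, ...); the shapes below are only for illustration.
interface QueueMessage {
  id: string;                    // globally unique event id
  stream: string;                // target stream in the local Event Store
  type: string;                  // event type
  data: Record<string, unknown>; // event body
}
interface QueueConsumer {
  poll(): Promise<QueueMessage[]>;
  commit(batch: QueueMessage[]): Promise<void>;
}

// One of these runs in each datacenter, appending queue messages to that
// datacenter's local Event Store cluster; all further processing stays local.
async function pumpQueueIntoLocalStore(queue: QueueConsumer, esdb: EventStoreDBClient) {
  for (;;) {
    const batch = await queue.poll();
    for (const msg of batch) {
      await esdb.appendToStream(
        msg.stream,
        jsonEvent({ id: msg.id, type: msg.type, data: msg.data }),
      );
    }
    // Acknowledge the queue only after the local append succeeds, so a crash
    // re-delivers the batch instead of losing it.
    await queue.commit(batch);
  }
}

Reusing the queue message id as the event id is meant to make redeliveries detectable;
whether that is enough for idempotent appends is part of what I’m unsure about.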
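
And for option 4a, a sketch of the outward-copying listener, again assuming
@eventstore/db-client. The originDc metadata field is something I’d add myself to break the
infinite loop: an event is only forwarded from the datacenter it was originally written in.

import {
  EventStoreDBClient,
  jsonEvent,
  excludeSystemEvents,
  START,
} from "@eventstore/db-client";

const LOCAL_DC = "dc-east"; // hypothetical name for this datacenter

// Subscribe to everything in the local cluster and copy each event to the
// remote clusters, tagging it with its datacenter of origin so the same
// replicator running remotely does not copy it back.
async function replicateOutward(local: EventStoreDBClient, remotes: EventStoreDBClient[]) {
  const subscription = local.subscribeToAll({
    fromPosition: START,            // in practice, a persisted checkpoint
    filter: excludeSystemEvents(),
  });

  for await (const resolved of subscription) {
    const event = resolved.event;
    if (!event) continue;

    // Events that were themselves copied in carry an originDc marker;
    // only events born in this datacenter get forwarded.
    const metadata = (event.metadata ?? {}) as { originDc?: string };
    if (metadata.originDc && metadata.originDc !== LOCAL_DC) continue;

    const copy = jsonEvent({
      id: event.id,                 // keep the original event id
      type: event.type,
      data: event.data as Record<string, unknown>,
      metadata: { ...metadata, originDc: LOCAL_DC },
    });
    for (const remote of remotes) {
      await remote.appendToStream(event.streamId, copy);
    }
  }
}

Even with the loop broken, ordering across datacenters is still an open question, since
events born in different datacenters interleave arbitrarily at each site.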
Any recommendations, or documentation I could read on this?
Thank you,
Igor.