I’m working on a project that requires me to distribute subsets of a centralized database to N devices based on user ‘subscriptions’. In Oil & Gas, users on offshore oil platforms never have reliable internet connections; they must be able to work offline for periods of time, and also to pass uncommitted data to another device while totally offline. Think: the helicopter comes to end someone’s rotation before he’s had a chance to sync up his data…

I’ve seen the CQRS-not-just-for-server video and believe those methods plus Event Store could satisfy my requirements, but I just cannot wrap my head around how to architect it. Each user must be able to subscribe/unsubscribe himself to aggregates/streams. Upon subscribing, he must receive all data from the beginning of the stream. A subscription can also be created by an external process while he’s offline, such that when he reconnects, the data is waiting for him.
The best I could come up with was a catch-up subscriber that is always running, keeping the state of everyone’s subscriptions to aggregates and also listening for $all events. When an event comes through, its metadata is checked for an AggregateRootId, which is looked up against the subscription state to determine all users who should “get” that event. From there… umm… a message bus to each device? Or perhaps copy the event into a special per-user stream (<- I like this, because the user can then run a very simple catch-up subscription against that stream). But this seems like bastardizing the Event Store into a message broker (which Greg said never to do in the “Event Store as a Read Model” video).
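For concreteness, here is roughly the fan-out idea modeled in-memory. All names here (`Event`, `SubscriptionRouter`, the per-user stream lists) are made up for illustration; this is not the real Event Store client API, just the routing logic I have in mind:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Event:
    aggregate_root_id: str  # carried in the event metadata
    data: dict


class SubscriptionRouter:
    """Sketch of the always-running catch-up subscriber's routing state."""

    def __init__(self):
        # subscription state: user -> set of aggregate ids he follows
        self.subscriptions = defaultdict(set)
        # stand-in for the per-user streams a device would catch up on
        self.user_streams = defaultdict(list)

    def subscribe(self, user: str, aggregate_id: str, history: list):
        self.subscriptions[user].add(aggregate_id)
        # on subscribe, replay the aggregate's stream from the beginning
        self.user_streams[user].extend(history)

    def unsubscribe(self, user: str, aggregate_id: str):
        self.subscriptions[user].discard(aggregate_id)

    def on_event(self, event: Event):
        # called for each event seen on the $all catch-up subscription:
        # copy it into the stream of every user subscribed to its aggregate
        for user, aggs in self.subscriptions.items():
            if event.aggregate_root_id in aggs:
                self.user_streams[user].append(event)
```

Even while a device is offline, events keep landing in that user’s stream, so reconnecting is just a catch-up subscription from his last checkpoint.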
Any advice would be greatly appreciated. I’d __love__ to be able to set up a fully working demonstration of the occasionally-connected functionality with Event Store for my superiors, after less than 3 days of “development”.
Thanks,
Chris