After watching Greg's Polyglot Data video
http://www.ustream.tv/recorded/46673907
I now understand why using a message bus isn't a great idea for integrating my bounded contexts. It's better that I have multiple client-driven subscriptions (let's call them synchronisers), each polling streams in the Event Store to know what actions they need to take (populating a read model or calling another bounded context via an API).
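To check I have the right idea, here's a rough sketch of what one synchroniser might look like (TypeScript, with in-memory stand-ins for the store and the checkpoint; none of these names are a real client API):

interface EventRecord {
  streamId: string;
  eventNumber: number;
  eventType: string;
  data: unknown;
}

// Stand-in for a paged read against the Event Store (HTTP or TCP in practice).
const store: EventRecord[] = [];
async function readPage(stream: string, from: number, count: number): Promise<EventRecord[]> {
  return store.filter(e => e.streamId === stream && e.eventNumber >= from).slice(0, count);
}

// Stand-in checkpoint store; in reality a file, a DB row, or its own stream.
const checkpoints = new Map<string, number>();

async function runSynchroniser(
  name: string,
  stream: string,
  handle: (e: EventRecord) => Promise<void>,
  pollDelayMs = 1000,
): Promise<void> {
  let position = checkpoints.get(name) ?? 0;
  for (;;) {
    const page = await readPage(stream, position, 100);
    for (const event of page) {
      await handle(event);             // populate a read model, call an API, ...
      position = event.eventNumber + 1;
      checkpoints.set(name, position); // advance only after the handler succeeds
    }
    if (page.length === 0) {
      await new Promise<void>(r => setTimeout(r, pollDelayMs)); // idle until new events arrive
    }
  }
}

Each bounded context would then own its own set of these, each tracking its own checkpoint.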
The question is this: how are people managing their synchronisers (one set per bounded context)? If I have 5 bounded contexts to integrate, do I have a little management console for each one? Or do you have one application/management console where you deploy the various event handlers from each bounded context? One aspect I was particularly interested in is that each bounded context and its synchronisers would be independently deployable, without having to test the others.
I did work on a system like this before, using a custom message queue/publish-subscribe approach, so I'm wondering how others in the community are going about this. How do you manage and monitor the case where a call to another bounded context fails while processing an event (downstream)? In the previous system, we would move that message/event to a dead-letter queue, and further messages sharing its correlation ID had to be left unprocessed. Since messages needed to be consumed sequentially, that meant the entire stream could not be processed.
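For context, the behaviour I'm describing went roughly like this (hypothetical names, heavily simplified):

interface Message {
  correlationId: string;
  payload: unknown;
}

const deadLetters: Message[] = [];
const blockedCorrelations = new Set<string>();

// Process messages strictly in order; returns how many succeeded before stalling.
async function drain(
  messages: Message[],
  handle: (m: Message) => Promise<void>,
): Promise<number> {
  let processed = 0;
  for (const msg of messages) {
    if (blockedCorrelations.has(msg.correlationId)) {
      deadLetters.push(msg); // successors of a failed message are parked unprocessed
      break;                 // sequential consumption: nothing after this is safe to handle
    }
    try {
      await handle(msg);
      processed++;
    } catch {
      deadLetters.push(msg);                      // park the failed message...
      blockedCorrelations.add(msg.correlationId); // ...block its correlation...
      break;                                      // ...and the whole stream is stuck
    }
  }
  return processed;
}

That stalling behaviour is exactly what I'd like to avoid repeating, hence the question about how you monitor and recover.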