Just wanted a bit of validation on some thinking around EventStore. It looks really neat but I’d like to make sure I understand it properly.
In a microservices setup I'd have a web gateway that takes a JSON payload and converts it to domain commands. These commands are then run against an aggregate, and the resulting events are applied.
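To make the flow concrete, here is a rough sketch of the gateway-to-aggregate step. All names here (RegisterUser, decide, evolve) are my own illustration, nothing EventStore-specific:

```javascript
// The web gateway translates an incoming JSON payload into a domain command.
function toCommand(json) {
  const payload = JSON.parse(json);
  return { type: "RegisterUser", userId: payload.userId, email: payload.email };
}

// The aggregate decides: given current state and a command, which events result.
function decide(state, command) {
  switch (command.type) {
    case "RegisterUser":
      if (state.registered) return []; // already registered: nothing new happens
      return [{ type: "UserRegistered", userId: command.userId, email: command.email }];
    default:
      return [];
  }
}

// Events are applied to evolve state (the same fold rehydrates the aggregate).
function evolve(state, event) {
  switch (event.type) {
    case "UserRegistered":
      return { ...state, registered: true, email: event.email };
    default:
      return state;
  }
}

const cmd = toCommand('{"userId":"u1","email":"a@b.c"}');
const events = decide({ registered: false }, cmd);
const next = events.reduce(evolve, { registered: false });
```

The gateway only translates; the aggregate is the sole place that decides which events come out, and those events are what get written to the stream.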
Events are written in a batch to a single stream in EventStore. Is this transactional, in the sense that all events in a batch will appear in the stream, and in order? (Events are small, under 1,000 characters of JSON, with fewer than 50 per batch.)
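For what it's worth, here is how I picture the batch-append guarantee, as an in-memory sketch (my own model, not the actual EventStore client API): an expected-version check makes the batch all-or-nothing, and the events land contiguously in the order given.

```javascript
// In-memory model of a single-stream batch append with optimistic
// concurrency. Illustrative only, not the real EventStore client.
const store = new Map(); // streamId -> array of events

function appendToStream(streamId, expectedVersion, events) {
  const stream = store.get(streamId) || [];
  const currentVersion = stream.length - 1; // -1 means "no stream yet"
  if (expectedVersion !== currentVersion) {
    // Nothing is written on a conflict: the whole batch is rejected.
    throw new Error("WrongExpectedVersion");
  }
  // The whole batch lands contiguously, in the order given.
  store.set(streamId, stream.concat(events));
  return currentVersion + events.length; // new stream version
}

// First writer wins; a concurrent writer with a stale version fails cleanly.
const v1 = appendToStream("orders-1", -1, [{ type: "A" }, { type: "B" }]);
let conflicted = false;
try {
  appendToStream("orders-1", -1, [{ type: "C" }]); // stale expected version
} catch (e) {
  conflicted = true; // stream still holds exactly [A, B]
}
```

Under this model there is no state where only part of a batch is visible, which is the property I'm hoping the real thing has.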
Projections (5-10 at most) on these streams will each write to an output stream. Is there any chance that events can be missed by projections? In terms of reliability (e.g. write/commit guarantees), is it better to have 2 projections looking for the same characteristics, each writing to its own single stream, or 1 projection writing to 2 streams? These output streams act as command "queues" for other services; distributed subscribers will read from them and forward commands on to those services. (I understand that order cannot be guaranteed at this point; are there any other caveats?)
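My mental model for why events shouldn't be missed is a checkpointed catch-up loop: the checkpoint is only persisted after the handler succeeds, so a crash means reprocessing (at-least-once), never skipping. Again this is an illustrative sketch, not the EventStore projections API, and it assumes handlers are idempotent:

```javascript
// Checkpointed projection/subscriber loop (illustrative sketch).
// handle() writes to the output stream(s); the checkpoint is saved
// only afterwards, so a crash can duplicate an event but not lose one.
function runProjection(sourceEvents, loadCheckpoint, saveCheckpoint, handle) {
  let pos = loadCheckpoint(); // -1 means "start from the beginning"
  while (pos + 1 < sourceEvents.length) {
    const event = sourceEvents[pos + 1];
    handle(event);       // write to the output stream(s) first
    pos += 1;
    saveCheckpoint(pos); // then persist progress
  }
}

const source = [{ id: 1 }, { id: 2 }, { id: 3 }];
const out = [];
let saved = -1;
let crashed = false;
try {
  runProjection(source, () => saved, (p) => { saved = p; }, (e) => {
    if (e.id === 3 && !crashed) { crashed = true; throw new Error("crash"); }
    out.push(e.id);
  });
} catch (e) {
  // simulated crash: events 1 and 2 were handled and checkpointed
}
// Restart resumes from the saved checkpoint: event 3 is retried, not missed.
runProjection(source, () => saved, (p) => { saved = p; }, (e) => out.push(e.id));
```

If the crash instead landed between handle() and saveCheckpoint(), the restart would re-handle that event, which is exactly why I'd want idempotent downstream writes. Whether the real projection engine behaves like this is part of what I'm asking.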
Is this a fairly common pattern in use? Each service could have its own EventStore to deal with, so EventStore would effectively be acting as the messaging system. I like the idea of projections, but I need to know how they can fail when they need to write to multiple streams.
Finally, does anyone have a good blog or book that goes into a bit more depth than the docs on setting up clusters etc.?