Transactions and Commanding

Hey all,

Just wanted a bit of validation on some thinking around EventStore. It looks really neat but I’d like to make sure I understand it properly.

In a micro-services setup I’d have a web gateway that takes a JSON payload and converts it to domain commands. These commands are then run against an aggregate and the resulting events applied.

Events are written in a batch to a single stream in EventStore. (Is this transactional, i.e. will all events in a batch appear in the stream, and in order? Events are small (<1000 characters of JSON) and there are fewer than 50 per batch.)
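
To make that write path concrete, here’s roughly what I mean as a sketch in TypeScript using the @eventstore/db-client package (the aggregate output, stream name and event types are made-up examples, and the API calls are from memory, so double-check them against the client docs):

    // Sketch of the write path: command -> aggregate -> batch of events -> one stream.
    import { EventStoreDBClient, jsonEvent, NO_STREAM } from "@eventstore/db-client";

    const client = EventStoreDBClient.connectionString("esdb://localhost:2113?tls=false");

    async function handlePlaceOrder(): Promise<void> {
      // Pretend the gateway has already turned the JSON payload into a command
      // and the aggregate has produced these domain events from it.
      const events = [
        jsonEvent({ type: "OrderPlaced", data: { orderId: "order-123", total: 42 } }),
        jsonEvent({ type: "OrderPriced", data: { orderId: "order-123", currency: "GBP" } }),
      ];

      // A single appendToStream call writes the whole batch atomically and in order:
      // either every event lands in the stream, in this sequence, or none do.
      // expectedRevision adds optimistic concurrency on top of that.
      await client.appendToStream("order-123", events, { expectedRevision: NO_STREAM });
    }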

Projections (5-10 at most) on these streams will each write to output streams. (Is there any chance that events can be missed by projections? In terms of reliability, e.g. write/commit guarantees, is it better to have 2 projections looking for the same characteristics and each writing to a single stream, or 1 projection writing to 2 streams?) These output streams are command “queues” for other services. Distributed subscribers will read from them and forward on to other services (I understand that order cannot be guaranteed here; are there any other caveats?)

Is this a fairly common pattern in use? Each service could have its own EventStore to deal with, so EventStore is effectively acting as the messaging system as well. I like the idea of projections but need to know how a projection can fail when it needs to write to multiple streams.
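
For reference, the sort of projection I have in mind would look roughly like this (written in EventStore’s projection JavaScript; the category, output streams and event types are invented for illustration, and the API is from memory):

    // Runs inside EventStore's projections subsystem, not in your own process.
    // fromCategory/when/linkTo are provided by the projection runtime.
    fromCategory("order")
      .when({
        OrderPlaced: function (state, event) {
          // Re-publish the event as a "command" on an output stream that
          // another service treats as its queue.
          linkTo("billing-commands", event);
        },
        OrderCancelled: function (state, event) {
          // One projection writing to two streams.
          linkTo("billing-commands", event);
          linkTo("shipping-commands", event);
        },
      });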

Finally, does anyone have a good blog or book that goes into a bit more depth than the docs on setting up clusters etc.?

Thanks

Hi Dylan,

My two pennies:

. The web gateway can become a problem for resiliency. Instead, I publish a command (or just a JSON message/event) to an input stream and let an interested micro-service adapter take it from there.

. I don’t rely on projections in order to communicate with other MicroServices. When a MicroService does something relevant for others, its adapter publishes an “external” event in an output stream.

Hope this helps

Hi Riccardo,
Thanks for the input. The web gateway for me is purely a way for the client to call the service to get data in, as opposed to being used for inter-service communication.

With regards to your second point, by publishing the external event in the adapter you’ve increased the “transaction” count on that call, meaning that you now have to deal with the possibility that that call fails partway through. By using the projections feature of Event Store you shift that burden elsewhere and maintain a transaction count of 1 across the system.

i.e.

  1. Service A writes an event to Event Store (1 transaction; if it fails the client can retry, e.g. from a queue-based source)

  2. An Event Store projection picks up the event and writes it as a command to another stream (1 transaction; hopefully Event Store projections keep their place in the source streams)

  3. Service B reads new command

vs

  1. Service A writes an event to Event Store and Service A’s adapter writes a command to another stream (2 transactions; you now have 3 code paths to handle: op 1 succeeds and op 2 succeeds, op 1 fails and op 2 succeeds, op 1 succeeds and op 2 fails)

  2. Service B reads new command

Wherever possible I want to keep to a single transaction, as it makes systems so much easier to reason about. You could of course write your own ‘projection’ that reads from Service A’s event stream and produces commands for Service B, but that’s just duplicating functionality provided by Event Store.
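
On the Service B side of that first flow, I picture something like the sketch below: a catch-up subscription over the projected command stream with its own durable checkpoint, so a crash means at-least-once handling rather than lost commands. (TypeScript with @eventstore/db-client; the stream name, checkpoint store and command handler are hypothetical, and the subscription API is from memory.)

    // Sketch: Service B consumes the projected command stream with a catch-up
    // subscription and a durable checkpoint of its own. Stream name, checkpoint
    // store and handler are hypothetical.
    import { EventStoreDBClient, START } from "@eventstore/db-client";

    const client = EventStoreDBClient.connectionString("esdb://localhost:2113?tls=false");

    // Hypothetical durable checkpoint store (file, table, whatever) and handler.
    declare function loadCheckpoint(): Promise<bigint | null>;
    declare function saveCheckpoint(revision: bigint): Promise<void>;
    declare function handleCommand(type: string, data: unknown): Promise<void>;

    async function runServiceB(): Promise<void> {
      const checkpoint = await loadCheckpoint();
      const subscription = client.subscribeToStream("billing-commands", {
        // Resume near the saved checkpoint; verify inclusive/exclusive semantics
        // of fromRevision against the client version you use.
        fromRevision: checkpoint ?? START,
        resolveLinkTos: true, // the projection wrote links, so resolve them to the original events
      });

      for await (const resolved of subscription) {
        if (!resolved.event) continue;
        await handleCommand(resolved.event.type, resolved.event.data);
        // Handlers must be idempotent: a crash between handling and saving the
        // checkpoint means the same command is seen again on restart.
        await saveCheckpoint(resolved.link?.revision ?? resolved.event.revision);
      }
    }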

Why? This seems convoluted.

Hi Greg,
Thanks for responding.
Which part are you referring to when you say it seems convoluted?

Given that Service A produces events on multiple streams with the same prefix, and that some of those events can produce commands for other parts of the system, it seems to me that projections solve this problem without me having to do anything AND give me a single transaction on the write side. The alternative is to have Service A write to EventStore and then send its commands out to other services itself. This leads to multiple transactions, service discovery requirements, API contract knowledge, etc.
I’d argue that using EventStore as a command source as well reduces complexity. If you’ve got any advice on why this is a bad idea I’d really appreciate hearing it, hence the original post.

@dylan, I’m not sure I understand your issue, but one thing you have to consider, regardless of how you implement your system, is that commands can be rejected.
If you put a command into a stream, it means you are recording an event saying that that command was issued. If that is what you need, good. But in my mind, you would then need a process manager (aka a Saga elsewhere) that subscribes to those streams and, in turn, issues commands to aggregates. The aggregates, after accepting or rejecting the commands, would write their own events.
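
Roughly this, as a sketch (TypeScript with @eventstore/db-client; the decide function, stream names and event types are invented, and the client calls are from memory):

    // Sketch of a process manager: it subscribes to a stream of recorded commands,
    // dispatches each one to an aggregate, and the aggregate either accepts it
    // (producing its own events) or rejects it.
    import { EventStoreDBClient, jsonEvent, START } from "@eventstore/db-client";

    const client = EventStoreDBClient.connectionString("esdb://localhost:2113?tls=false");

    // Hypothetical aggregate decision function: returns the resulting events,
    // or throws if the command has to be rejected.
    declare function decide(command: { type: string; data: unknown }):
      Array<{ type: string; data: Record<string, unknown> }>;

    async function runProcessManager(): Promise<void> {
      const subscription = client.subscribeToStream("billing-commands", {
        fromRevision: START,
        resolveLinkTos: true,
      });

      for await (const resolved of subscription) {
        if (!resolved.event) continue;
        const command = { type: resolved.event.type, data: resolved.event.data };
        try {
          const newEvents = decide(command).map((e) => jsonEvent(e));
          await client.appendToStream("billing-order-123", newEvents);
        } catch (err) {
          // Rejected command: record the rejection (or alert, compensate, ...)
          await client.appendToStream("billing-order-123", [
            jsonEvent({ type: "CommandRejected", data: { command: command.type, reason: String(err) } }),
          ]);
        }
      }
    }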

I read somewhere that this was done (commands were placed in queues), but I personally don’t do it (I have no business requirement for tracking commands).

Why not have the thing that originally places commands into a queue/stream call the command endpoint directly? A command should either succeed or fail, and it’s up to you to handle the outcome (e.g. keep going, halt, or try again with a different command).
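
Something along these lines (all of this is hypothetical TypeScript, just to show the branching on the outcome):

    // Sketch of calling a command endpoint directly and handling the outcome.
    // placeOrder and its result shape are hypothetical.
    declare function placeOrder(command: { orderId: string; total: number }):
      Promise<{ ok: true } | { ok: false; transient: boolean; reason: string }>;

    async function submit(command: { orderId: string; total: number }): Promise<void> {
      const result = await placeOrder(command);
      if (result.ok) return;            // keep going
      if (result.transient) {
        await submit(command);          // try again (add a retry limit/backoff in real life)
      } else {
        throw new Error(result.reason); // halt, or issue a different command instead
      }
    }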

But I may be talking nonsense, feel free to ignore me!