Aggregate Consistency

I’ve been working with Kafka on a few smaller projects. One of the features I’ve come to rely on is consumer groups. I can have multiple nodes subscribe to a particular topic, and messages on that topic are partitioned by a key of my choosing; in my case I typically use my aggregate’s id. As nodes join or leave the consumer group for planned or unplanned reasons, each node maintains a subscription to a subset of the topic’s partitions. Kafka guarantees that each partition is owned by exactly one consumer in the group, so one and only one node will be processing a particular aggregate id at any one time. This lets me ensure consistency at the aggregate level when validating and processing incoming events. I suppose this could be considered an implementation of the “Actor” model.
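As a rough sketch of that setup (the topic name, group id, and String serde below are illustrative, not from a real project):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AggregateConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "aggregate-processors"); // every node shares this group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Kafka assigns each partition of the topic to exactly one consumer
            // in the group, so messages keyed by aggregate id are never handled
            // by two nodes at once.
            consumer.subscribe(Collections.singletonList("commands"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // record.key() is the aggregate id, record.value() the payload
                    process(record.key(), record.value());
                }
            }
        }
    }

    static void process(String aggregateId, String payload) { /* validate / apply here */ }
}
```

On the producing side, it’s the record key that does the routing: `new ProducerRecord<>("commands", aggregateId, payload)` with the default partitioner hashes the key, so every message for a given aggregate lands on the same partition.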

I’m not sure how EventStore handles this use case, or whether it’s even provided out of the box.

Perhaps there are other ways of architecting solutions that obviate this need?

My current use case arose from an incomplete understanding of practical CQRS implementation.

I considered that when a command came into the system, it would be validated using some logic and then committed to an event log. But if multiple nodes were processing the same stream, they could validate two separate commands concurrently and both commit, with one command potentially invalidating the other. Reading further, I found documentation which suggested committing the command to the event log atomically, then processing and validating it as a side effect of its being committed. I wasn’t fond of this method, as it seems invalid commands would be saved in the event log, potentially in abundance. It also seemed to conflate commands with the actual events of the domain.
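To make the race concrete, here is a minimal sketch of that check-then-act flow with no concurrency control; `Command`, `AggregateState`, and `EventStore` are hypothetical stand-ins, not a real API:

```java
import java.util.List;

// Hypothetical store and aggregate interfaces, for illustration only.
interface EventStore {
    AggregateState loadState(String aggregateId);
    void append(String aggregateId, List<Object> newEvents);
}

interface AggregateState {
    boolean isValid(Command cmd);
    List<Object> apply(Command cmd);
}

record Command(String aggregateId, String payload) {}

class NaiveHandler {
    private final EventStore store;
    NaiveHandler(EventStore store) { this.store = store; }

    void handle(Command cmd) {
        AggregateState state = store.loadState(cmd.aggregateId()); // 1. read
        if (!state.isValid(cmd))                                   // 2. validate
            throw new IllegalStateException("command rejected");
        // Race window: another node can append to the same stream right here,
        // invalidating the state we just checked.
        store.append(cmd.aggregateId(), state.apply(cmd));         // 3. write
    }
}
```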

My biggest struggles have been: where do I validate incoming commands? Is it recommended to persist commands like events? And how are multiple conflicting updates handled, in particular when related commands/events (for the same aggregate) are being processed on separate nodes?

EventStore actually has a stronger model based on ExpectedVersion. When you read the events off a stream, you just remember the number of the last event you read. When you write, you set ExpectedVersion on the write; if the stream has changed in the meantime, your write will fail with a version conflict.
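Here is a minimal, self-contained sketch of those semantics; the in-memory store below mimics the ExpectedVersion behaviour for illustration and is not the actual EventStore client API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class WrongExpectedVersionException extends RuntimeException {
    WrongExpectedVersionException(long expected, long actual) {
        super("expected version " + expected + " but stream is at " + actual);
    }
}

class InMemoryEventStore {
    private final Map<String, List<Object>> streams = new HashMap<>();

    // Append fails unless the caller's expectedVersion matches the number of
    // the last event currently in the stream (-1 for an empty stream).
    synchronized void append(String stream, long expectedVersion, List<Object> events) {
        List<Object> existing = streams.computeIfAbsent(stream, s -> new ArrayList<>());
        long actualVersion = existing.size() - 1;
        if (actualVersion != expectedVersion)
            throw new WrongExpectedVersionException(expectedVersion, actualVersion);
        existing.addAll(events);
    }

    synchronized List<Object> read(String stream) {
        return List.copyOf(streams.getOrDefault(stream, List.of()));
    }
}

class OptimisticHandler {
    private final InMemoryEventStore store;
    OptimisticHandler(InMemoryEventStore store) { this.store = store; }

    void handle(String stream, Object event) {
        while (true) {
            List<Object> history = store.read(stream);
            long expectedVersion = history.size() - 1; // last event number we saw
            // ... fold `history` into aggregate state and validate the command here ...
            try {
                store.append(stream, expectedVersion, List.of(event));
                return;
            } catch (WrongExpectedVersionException e) {
                // Someone else wrote first; re-read and re-validate.
            }
        }
    }
}
```

With this in place, two nodes validating against the same stream can no longer both win: whichever append lands second fails the version check, and that handler re-reads the stream and re-validates (or rejects the command) against the new state.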