Validation against aggregate current state

  • Noob question

I’m experimenting with EventStore at the moment.

I was wondering what the validation strategy should be.

If my app receives an event, I need to validate it against the current state of the aggregate.

If that state is retrieved by an EventStore projection, which, from what I understand, is only eventually consistent, then it’s not enough for a write operation.

What other solution could be available?

You don’t load aggregate state from a projection. You load all the events for the aggregate (or, if there are too many events, a snapshot plus the events after the snapshot). Then you apply the events to the aggregate in order to get the current state.
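A minimal in-memory sketch of that rehydration step: fold the events, in stream order, into a fresh aggregate. The `Account` class and the event shapes are illustrative assumptions, not part of any EventStore client API.

```python
# Illustrative sketch: rebuild an aggregate's state by applying its events
# in order. No validation happens here - events are facts that already occurred.

class Account:
    def __init__(self):
        self.balance = 0

    def apply(self, event):
        # Each event type mutates state in its own way (assumed shapes).
        if event["type"] == "Deposited":
            self.balance += event["amount"]
        elif event["type"] == "Withdrawn":
            self.balance -= event["amount"]

def rehydrate(events):
    account = Account()
    for event in events:  # stream order matters
        account.apply(event)
    return account

events = [
    {"type": "Deposited", "amount": 100},
    {"type": "Withdrawn", "amount": 30},
]
print(rehydrate(events).balance)  # 70
```

With a snapshot, you would start from the snapshotted state instead of a fresh `Account` and apply only the events recorded after it.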

You don’t really validate events. Take some time to read this.

Thanks for the reply.

I understand that I don’t validate events, but I do want to validate a command before I publish it as an event to EventStore (for example: a user might ask to withdraw money from his account, but he is over his limit, and the system needs to deny his request in the synchronous response).

The main question here is how I get the current state for the aggregate.

What you suggested here makes sense - read the events from EventStore, and apply them to the aggregate.

From what I checked, when reading events from a stream, I’m limited to 4096 items (makes sense).

So what is the snapshot mechanism that EventStore provides?

If it doesn’t provide one, what is the best practice for implementing this?

You are limited to pages of 4096, but you can certainly read multiple pages.

Our approach to this is: in our aggregate repository abstraction, read events from the stream of the aggregate being rebuilt, up to a user-defined or system-imposed max batch size (whichever is lower). If the first read did not reach the end of the stream, build the state of the aggregate up to the point where all events in the batch have been applied, throw away the events in that batch, and loop around (effectively paging).
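The paging loop described above can be sketched like this. `read_stream` is a hypothetical stand-in for a real paged read call, not an actual client API; the point is the loop shape: apply a batch, discard it, advance, repeat until a short batch signals the end of the stream.

```python
# Sketch of paging through a long stream in fixed-size batches.
# read_stream() here is a stand-in: the "stream" is just a Python list.

BATCH_SIZE = 4096  # page size mentioned in the thread

def read_stream(stream, start, count):
    return stream[start:start + count]

def rebuild(stream, apply, state):
    position = 0
    while True:
        batch = read_stream(stream, position, BATCH_SIZE)
        for event in batch:
            state = apply(state, event)  # fold the batch into the state
        position += len(batch)           # the batch itself can be discarded
        if len(batch) < BATCH_SIZE:      # short batch => end of stream
            return state

balance = rebuild([10, -3, 5], lambda s, e: s + e, 0)
print(balance)  # 12
```

Only one batch of events is held in memory at a time, which is what makes this work for streams far longer than a single page.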

There’s also a warning logged after a certain number of events have been read, so that we can see when streams are becoming excessively large and we should consider snapshotting.



In case the stream does become excessively large, or I just don’t want to replay it from point zero each time, what is the snapshotting solution that you are using? Isn’t this a common use case when using EventStore?

Snapshotting is as simple as writing a serialized version of your state to another stream. Say you have cart-1234; you could have cart-1234-domain-snapshot. Then when reading, read the last value of the snapshot stream, then use that to read forward in the cart stream.

Thank you both again for the tips.

Another thing -

After rebuilding the current state of the aggregate (using a snapshot), I need to subscribe to real-time events. When doing so, I might miss a few events between getting the last existing event and subscribing to future ones.

I thought about subscribing first, pushing incoming events into a buffer, then rebuilding the state, then applying the buffered events, and then continuing to listen.

It seems a bit awkward to do so. Is there any other solution that guarantees I won’t miss any event?

Why would you need to subscribe to events? Read the snapshot, append any events after it currently in the event store - that gives you the current state of the aggregate.

If you are worried about ensuring that updates are saved atomically, use expected version on append (optimistic concurrency).
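The expected-version append can be sketched as follows. This is an in-memory stand-in for the concurrency check, not a client API: the append succeeds only if the stream is still at the version the writer saw when it validated the command, so two concurrent writers cannot both succeed against stale state.

```python
# Sketch of optimistic concurrency via expected version on append.

class WrongExpectedVersion(Exception):
    pass

def append(stream, expected_version, event):
    current = len(stream) - 1  # version of the last event in the stream
    if current != expected_version:
        raise WrongExpectedVersion(f"expected {expected_version}, stream at {current}")
    stream.append(event)

stream = ["AccountOpened"]           # stream is at version 0
append(stream, 0, "MoneyDeposited")  # ok: writer saw version 0

try:
    # A second writer that also validated against version 0 now fails,
    # because the stream has moved on to version 1.
    append(stream, 0, "MoneyWithdrawn")
except WrongExpectedVersion:
    print("conflict - reload the events and re-validate the command")
```

On a conflict the writer reloads the aggregate (snapshot plus new events), re-runs the command validation against the fresh state, and retries the append with the new expected version.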


I thought that it might be better to continuously hold the current state instead of fetching it from EventStore and processing it upon request.

So the flow I thought about to begin with is the following -

  1. The app receives a command - UpdateAccount.

  2. The app reads the account aggregate’s current state (snapshot + events) and validates the request.

If it is valid, it simply publishes the event AccountUpdated to EventStore.

  3. Only the next time the app needs to read the account aggregate’s current state (for a write validation, or simply for a query) will it apply that event (AccountUpdated) to the aggregate.

Is this a good practice when working with EventStore?

Sure, just use an LRU cache, assuming you have a single node.

Are you talking about a single app node?

It won’t be a single node… this is why I need a “stateless” flow.

And even if it were a single node, I want to separate the reader and the writer, so I can’t cache the aggregate on the writer side, because it isn’t supposed to apply the AccountUpdated event… only the reader is.

A solution I thought about is that the reader will subscribe to the stream and always keep the latest state in Redis, so that all writers can read from it.

The other solution is what I described… on every read request to the reader (for write validation or a query), it will fetch everything from EventStore.

Is there a better solution?