With competing consumers on the query host (denormalizers), how do you process events in the correct order?

Assuming I have n instances of my query host, I will have n instances of the denormalizers listening to EventStore. If the aggregate on the command side accumulates changes before saving/publishing, then when the publish finally happens, a stream of events goes out to the feed.

For example, let’s say the following 3 events are published when the Account aggregate is saved:

  1. AccountCreated
  2. AccountActivated
  3. AccountDeactivated
If there are 2 (or more) instances of the query host all listening to the same Account stream, it stands to reason that the one that picks up second could potentially process faster than the one that picks up first. That could mean that events 2 and 3 get processed in reverse order, leaving the denormalized view of the account in the wrong state: active instead of inactive.

Is there a practice to solve this race condition?

For example… can EventStore maybe deliver a complete stream (the commit) instead of individual events?

Use a catch-up subscription.

This is supported, but ordering will still not be perfect, just usually correct.

I’m currently using a catch-up subscription, but it seems to do things on a per-event basis. Is there something going on under the hood that captures the stream of events atomically and then just calls my handler per event?

Catch-up subscriptions have perfect ordering.

I guess what I’m asking is: if the commit contains 5 events, is it delivered as a whole to a single subscriber? If not, then it’s not the catch-up subscription ordering that I’m worried about. If I have several instances of my query host, then that is several subscriptions asking for the same data at different times. I’m worried that if events are delivered to subscribers one at a time, they’ll potentially get processed out of order.

For example, from the same stream:

instance 1 receives event 1A (activate account)

instance 2 receives event 1B (deactivate account)

instance 2 finishes 1B and asks for the next event, 1C (the account is now deactivated)

instance 1 finishes 1A and asks for the next event, 1D (the account is now activated)

We’re out of whack. Versus if it worked something like this:

instance 1 receives commit from stream containing 1A, 1B, 1C, and 1D

instance 2 receives commit from stream containing 2A and 2B

instance 1 finishes processing the commit in order

instance 2 finishes processing the commit in order

That’s not a catch-up subscription; that is competing consumers. You can use the bucket strategy to get closer to what you want with that.

And no, you don’t ever get events delivered the way you describe, because you could, for example, be listening only to Activated events. What order should they be delivered in at that point – should you still get all 4 events?

"That’s not a catch-up subscription; that is competing consumers."

Yes, that was my original question, but you brought up the catch-up subscription. I don’t know what a bucket strategy is, and I’m not finding anything on it. Any references would be helpful.

I’m also not clear on what you’re saying there; the words seem a bit jumbled. I have a listener per stream/category that listens to a stream by category and then dispatches each event to one or more denormalizers, which process the events they care about.

"I’m also not clear on what you’re saying there; the words seem a bit jumbled. I have a listener per stream/category that listens to a stream by category and then dispatches each event to one or more denormalizers, which process the events they care about."

This is not the only way of having a subscription. The concept of a "commit", where you read n events together, works great in some places and makes no sense in others.
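For reference, that "read n events together" style maps onto reading stream slices in the .NET client. A minimal sketch, assuming a 4.x EventStore.ClientAPI (the stream name and Handle method are illustrative; 3.x clients use int positions instead of long):

    using System;
    using System.Threading.Tasks;
    using EventStore.ClientAPI;

    class BatchReader
    {
        // Page through one stream in batches, applying events in order.
        public static async Task ReadAll(IEventStoreConnection conn)
        {
            long next = 0;               // reads are 0-based, unlike checkpoints
            StreamEventsSlice slice;
            do
            {
                // Pull up to 500 events in a single call.
                slice = await conn.ReadStreamEventsForwardAsync(
                    "Account-123", next, 500, resolveLinkTos: false);
                foreach (var e in slice.Events)
                    Handle(e);           // illustrative handler
                next = slice.NextEventNumber;
            } while (!slice.IsEndOfStream);
        }

        static void Handle(ResolvedEvent e) =>
            Console.WriteLine($"{e.OriginalEventNumber}: {e.Event.EventType}");
    }

Within one stream the slice is ordered, so a single reader paging like this never sees events out of order.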

Re the strategy, see here:
http://docs.geteventstore.com/dotnet-api/4.0.0/competing-consumers/
("Pinned strategy" under Strategies.)

Great! Thanks, Greg.

So I looked into the pinned strategy a little, and it looks like everything is stored on the server side, which makes sense in order to keep a single publish point for the consumers. But is this a replacement for catch-up subscriptions? How would one introduce a new projection and have it build/catch up? A group per denormalizer?
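For concreteness, creating and consuming a pinned group looks roughly like this. A sketch against a 4.x EventStore.ClientAPI, where the stream, group name, and credentials are placeholders:

    using System;
    using System.Threading.Tasks;
    using EventStore.ClientAPI;
    using EventStore.ClientAPI.SystemData;

    class PinnedConsumer
    {
        public static async Task Run(IEventStoreConnection conn)
        {
            var admin = new UserCredentials("admin", "changeit");

            // Pinned routes events for the same source stream to the
            // same consumer, keeping per-stream ordering in the common case.
            PersistentSubscriptionSettings settings = PersistentSubscriptionSettings
                .Create()
                .ResolveLinkTos()
                .StartFromBeginning()
                .WithNamedConsumerStrategy(SystemConsumerStrategies.Pinned);

            // One-time, server-side group creation (throws if it exists).
            await conn.CreatePersistentSubscriptionAsync(
                "$ce-Account", "account-denormalizers", settings, admin);

            // Each query-host instance connects as one competing consumer.
            conn.ConnectToPersistentSubscription(
                "$ce-Account", "account-denormalizers",
                (sub, e) => Console.WriteLine(
                    $"{e.OriginalEventNumber}: {e.Event.EventType}"));
        }
    }

Because the group and its checkpoint live on the server, all instances share one position in the stream rather than each keeping their own.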

You could do a group per denormalizer. But remember there are cases where you may still get duplicates or out-of-order messages with it.
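A common guard for those cases is to make each denormalizer version-aware, so duplicates and stale redeliveries are skipped instead of corrupting the view. A sketch, with hypothetical read-model types:

    using EventStore.ClientAPI;

    // Hypothetical read-model types, just enough to show the guard.
    class AccountView
    {
        public long LastAppliedEventNumber = -1;
        public bool Active;
    }

    interface IViewStore
    {
        AccountView Load(string streamId);            // null if no view yet
        void Save(string streamId, AccountView view);
    }

    class AccountDenormalizer
    {
        private readonly IViewStore _views;
        public AccountDenormalizer(IViewStore views) { _views = views; }

        public void Handle(ResolvedEvent e)
        {
            var id = e.Event.EventStreamId;
            var view = _views.Load(id) ?? new AccountView();
            long n = e.OriginalEventNumber;

            // A duplicate or stale redelivery carries an event number we
            // have already applied, so drop it and the view never regresses.
            if (n <= view.LastAppliedEventNumber)
                return;

            switch (e.Event.EventType)
            {
                case "AccountActivated":   view.Active = true;  break;
                case "AccountDeactivated": view.Active = false; break;
            }
            view.LastAppliedEventNumber = n;
            _views.Save(id, view);
        }
    }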

So I just set one up as a catch-up subscription and ran it from index 0, and the first event that came in was already out of order:

I have the EventAppeared callback output the first received event before it does anything else. I should have received AccountCreated here. I don’t think I did anything weird in the setup:
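For illustration, a minimal sketch of a setup along those lines (the stream name is a placeholder; the overload shown is the 4.x form):

    using System;
    using EventStore.ClientAPI;

    class Repro
    {
        public static void Subscribe(IEventStoreConnection conn)
        {
            // Passing StreamPosition.Start (0) as the lastCheckpoint
            // argument looks right, but event 0 never shows up.
            conn.SubscribeToStreamFrom(
                "Account-123",
                StreamPosition.Start,
                CatchUpSubscriptionSettings.Default,
                (sub, e) => Console.WriteLine(
                    $"received {e.OriginalEventNumber}: {e.Event.EventType}"));
        }
    }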

I didn’t see a setting to play with the order – I thought maybe it was going in reverse. Thoughts?

BTW, this is a single consumer.

Have a look at StreamPosition.Start; it’s -1, IIRC.

/Peter

It’s 0; -1 is StreamPosition.End.

I just blew away the DB and tried again, and it’s still happening. I’m actually using StreamPosition.Start now, and it appears to be skipping the first event of each stream.

Noticed that the argument is int? … trying a null…

Yep, that was it. /facepalm…
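For the record: the lastCheckpoint parameter means "the last event number already processed", not "the position to start from". StreamPosition.Start is 0, which tells the subscription that event 0 is already handled, so delivery begins at event 1; null means "no checkpoint yet", i.e. deliver everything. Against the sketch above, the fix is one argument:

    // Same subscription as the sketch above, with the checkpoint fixed.
    conn.SubscribeToStreamFrom(
        "Account-123",
        null,                                // "nothing processed yet"
        CatchUpSubscriptionSettings.Default,
        (sub, e) => Console.WriteLine(
            $"received {e.OriginalEventNumber}: {e.Event.EventType}"));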