Synchronous subscriptions possible?

Hello,

Is it possible to handle events synchronously?

(i.e., in the same process that writes the events; if possible, even in the same transaction)

If this is not recommended or not even possible, how would you implement synchronously updated Read Models with the GetEventStore framework?

(e.g. a user triggers a command in the UI and the next view should already display the new data, i.e. the read model should already be updated)

Furthermore, I recently read that it’s good practice to use synchronous event handling and read models only as long as the performance is alright, and only afterwards to go for asynchronous read models. Therefore this is kind of the default, and it should be possible somehow with GetEventStore, I guess?!

Hoping for some answers, thank you!

Best regards,

Dominik

Is it possible to handle events synchronously?

(i.e., in the same process that writes the events; if possible, even in the same transaction)

Yes, it is possible; however, this would be handled outside of the Event Store, in your code. Doing it in the same transaction is also possible, but highly not recommended.

If this is not recommended or not even possible, how would you implement synchronously updated Read Models with the GetEventStore framework?

(e.g. a user triggers a command in the UI and the next view should already display the new data, i.e. the read model should already be updated)

Sure, just write to the Event Store, then update your read model synchronously.
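
In code, that could look roughly like this. This is only a minimal sketch, assuming the EventStore.ClientAPI .NET client (3.x-era signatures); the stream name, event type, and the UpdateReadModel helper are illustrative placeholders, not part of the API:

```csharp
// Sketch only - assumes the EventStore.ClientAPI .NET client (3.x-era
// signatures); stream name, event type and UpdateReadModel are made up.
using System;
using System.Net;
using System.Text;
using EventStore.ClientAPI;

class SynchronousWriteExample
{
    static void Main()
    {
        var es = EventStoreConnection.Create(new IPEndPoint(IPAddress.Loopback, 1113));
        es.ConnectAsync().Wait();

        var data = Encoding.UTF8.GetBytes("{\"userId\":42,\"name\":\"Dominik\"}");
        var evt = new EventData(Guid.NewGuid(), "UserRenamed", true, data, null);

        // 1. Append to the Event Store - this is the source of truth.
        es.AppendToStreamAsync("user-42", ExpectedVersion.Any, evt).Wait();

        // 2. Update the read model in the same request, before answering the UI.
        UpdateReadModel("UserRenamed", data);
    }

    // Placeholder for your own projection code (e.g. an UPDATE on a SQL table).
    static void UpdateReadModel(string eventType, byte[] data) { }
}
```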

Furthermore, I recently read that it’s good practice to use synchronous event handling and read models only as long as the performance is alright, and only afterwards to go for asynchronous read models. Therefore this is kind of the default, and it should be possible somehow with GetEventStore, I guess?!

Don’t know where you heard this as a recommendation. I would say it’s an option, but definitely not the best way of going about things. There are other ways of doing similar things, and of not showing the user eventual consistency.

Hi!

Sure, just write to the Event Store, then update your read model synchronously.

You said it is highly not recommended to do that within the same transaction. However, without a TX, the “just write to ES, then update RM” approach can fail: if the writer process dies after the ES write and before the RM update, what then? I would need some additional, asynchronously running process which subscribes to the ES and handles all the events that have not been written to the RM synchronously. This sounds very cumbersome and not like an enterprise-ready solution.

What’s the deal with the “highly not recommended within a TX”? Why?

Don’t know where you heard this as a recommendation. I would say it’s an option, but definitely not the best way of going about things. There are other ways of doing similar things, and of not showing the user eventual consistency.

What other ways do I have of guaranteeing that my Read Models are up to date before using them to create the view?

Best regards,

Dominik

Sure, just write to the Event Store, then update your read model synchronously.

You said it is highly not recommended to do that within the same transaction. However, without a TX, the “just write to ES, then update RM” approach can fail: if the writer process dies after the ES write and before the RM update, what then? I would need some additional, asynchronously running process which subscribes to the ES and handles all the events that have not been written to the RM synchronously. This sounds very cumbersome and not like an enterprise-ready solution.

All you would do is process any unprocessed events on startup. This should be roughly 5 lines of code. Not sure what’s not “enterprisey” about this. If done right, it wouldn’t even add any additional code to your system (use a catch-up subscription and just block on its processing when waiting on the UI).
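
As a rough sketch of that idea (EventStore.ClientAPI assumed; the stream name and projection code are placeholders, and simple polling is used to keep it short):

```csharp
// Sketch of the "catch-up subscription + block" idea (EventStore.ClientAPI
// assumed; single stream and polling used to keep it short).
using System.Threading;
using EventStore.ClientAPI;

class BlockingCatchUpExample
{
    static long _lastProcessed = -1;

    public static void Start(IEventStoreConnection es)
    {
        // A null checkpoint means "replay the stream from the start", so any
        // events that were not projected before a crash get processed on startup.
        es.SubscribeToStreamFrom("user-42", null, false, (sub, e) =>
        {
            UpdateReadModel(e);  // your projection code
            Interlocked.Exchange(ref _lastProcessed, e.OriginalEventNumber);
        });
    }

    // After a write, the request handler blocks here until the subscription
    // has processed the event it just appended.
    public static void WaitFor(long eventNumber)
    {
        while (Interlocked.Read(ref _lastProcessed) < eventNumber)
            Thread.Sleep(10);
    }

    static void UpdateReadModel(ResolvedEvent e) { /* update the read store */ }
}
```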

What’s the deal with the “highly not recommended within a TX”? Why?

Because it turns into a nightmare. What happens when a view is replaying? What happens when you have two models you write to?

Don’t know where you heard this as a recommendation. I would say it’s an option, but definitely not the best way of going about things. There are other ways of doing similar things, and of not showing the user eventual consistency.

What other ways do I have of guaranteeing that my Read Models are up to date before using them to create the view?

Ideally you don’t, and you are likely DoingItWrong™. This is primarily an issue with CRUD-based UIs, more so than task-based UIs. There are, however, loads of tricks to keep users from seeing eventual consistency.

As for how to make 100% sure that the read model is updated: just dispatch the event and have the request wait until it has been dispatched. This solution, however, does not scale.
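
One possible shape for the “have the request wait until dispatched” part, sketched with a completion source keyed by event id. The class and method names here are illustrative, not from any framework, and (as said above) this does not scale:

```csharp
// Sketch only: a per-event completion signal the request can await.
// All names are made up for illustration.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class DispatchWaiter
{
    static readonly ConcurrentDictionary<Guid, TaskCompletionSource<bool>> Pending =
        new ConcurrentDictionary<Guid, TaskCompletionSource<bool>>();

    // Request side: register interest BEFORE appending the event (so the
    // signal cannot be missed), then await this task before rendering.
    public static Task WaitForDispatch(Guid eventId)
    {
        return Pending.GetOrAdd(eventId, _ => new TaskCompletionSource<bool>()).Task;
    }

    // Dispatcher side: called by whatever updates the read model, once the
    // event with this id has been applied.
    public static void MarkDispatched(Guid eventId)
    {
        TaskCompletionSource<bool> tcs;
        if (Pending.TryRemove(eventId, out tcs))
            tcs.SetResult(true);
    }
}
```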

Cheers,

Greg

All you would do is process any unprocessed events on startup. This should be roughly 5 lines of code. Not sure what’s not “enterprisey” about this. If done right, it wouldn’t even add any additional code to your system (use a catch-up subscription and just block on its processing when waiting on the UI).

I guess I’m not seeing it because I just started looking into GetEventStore after using NEventStore for quite a while (which allowed synchronous handling in the same TX), and I haven’t yet adapted my thinking process.

If my process dies and then restarts: how do I retrieve all “unprocessed” events on startup? They do not have a flag, do they? What does “unprocessed” even mean in terms of GetEventStore? As far as I can see, there is no dispatching process, and the system does not know about “has this event been dispatched”. I must be missing something really simple if it should be roughly 5 lines of code to do that.

The blocking catch-up subscription is an alternative; I’m going to look into that as well during my evaluation.

Ideally you don’t, and you are likely DoingItWrong™. This is primarily an issue with CRUD-based UIs, more so than task-based UIs.

I know, this is what we read everywhere. However, even in task-based UIs it is sometimes required to show the effects of an operation immediately afterwards on the next page, and not “eventually” (without using hacks like fake result data), but I guess this is a discussion for another topic :wink:

Best regards,

Dominik

If my process dies and then restarts: how do I retrieve all “unprocessed” events on startup? They do not have a flag, do they? What does “unprocessed” even mean in terms of GetEventStore? As far as I can see, there is no dispatching process, and the system does not know about “has this event been dispatched”. I must be missing something really simple if it should be roughly 5 lines of code to do that.

Write a checkpoint into your read store in the same transaction as your update, then subscribe from there on restart (there’s a SubscribeToStreamFrom or SubscribeToAllFrom on the client API, or you can do it over HTTP as well).
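
A sketch of how that could fit together, assuming a SQL read store (the table and column names are made up; EventStore.ClientAPI 3.x-era signatures assumed):

```csharp
// Sketch of the checkpoint idea, assuming a SQL read store; table and
// column names are illustrative.
using System.Data.SqlClient;
using EventStore.ClientAPI;

class CheckpointedProjectionExample
{
    const string ReadStore = "...";  // your read store connection string

    public static void Resume(IEventStoreConnection es)
    {
        // On restart, read the checkpoint that was committed together with
        // the last read model update, and subscribe from there: everything
        // after it is replayed first, then the subscription goes live.
        int? checkpoint = ReadCheckpoint();
        es.SubscribeToStreamFrom("user-42", checkpoint, false, Handle);
    }

    static void Handle(EventStoreCatchUpSubscription sub, ResolvedEvent e)
    {
        using (var conn = new SqlConnection(ReadStore))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                // ... update the read model tables from e.Event.Data here ...

                // The checkpoint is written in the SAME transaction as the
                // read model update, so the two can never drift apart.
                var cmd = new SqlCommand(
                    "UPDATE Checkpoints SET Position = @p WHERE Name = 'user-42'", conn, tx);
                cmd.Parameters.AddWithValue("@p", e.OriginalEventNumber);
                cmd.ExecuteNonQuery();
                tx.Commit();
            }
        }
    }

    static int? ReadCheckpoint() { return null; /* SELECT Position FROM Checkpoints ... */ }
}
```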

James

You can do the whole process synchronously:

  1. Save to the event store

  2. Start a transaction against the read store

  3. Update the read store from the event data

  4. Update something along the lines of a “last_event_processed” flag in the read store with the event number

  5. Commit the transaction

When the system restarts, you can then check whether the event store and read store are out of sync by checking that “last_event_processed” matches the last event in ES (see the sketch below)…

You can do it async by having steps 2-5 run in an event subscriber which listens to events being saved in ES (there is built-in support for this).
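
The restart check could be sketched like this (EventStore.ClientAPI assumed; paging and error handling are omitted, and the read store queries are placeholders):

```csharp
// Sketch of the restart check (EventStore.ClientAPI assumed; paging and
// error handling omitted, read store queries are placeholders).
using EventStore.ClientAPI;

class StartupSyncCheckExample
{
    public static void CheckAndCatchUp(IEventStoreConnection es)
    {
        // The last event actually in the stream...
        var tail = es.ReadStreamEventsBackwardAsync("user-42", StreamPosition.End, 1, false).Result;
        int lastInStore = tail.Events.Length > 0 ? tail.Events[0].OriginalEventNumber : -1;

        // ...versus the "last_event_processed" value stored in the read store.
        int lastProcessed = ReadLastEventProcessed();

        if (lastProcessed < lastInStore)
        {
            // Out of sync: replay the missed events (running steps 2-5 for
            // each one) before the system starts serving requests.
            var missed = es.ReadStreamEventsForwardAsync(
                "user-42", lastProcessed + 1, lastInStore - lastProcessed, false).Result;
            foreach (var e in missed.Events)
                ApplyToReadStore(e);
        }
    }

    static int ReadLastEventProcessed() { return -1; /* SELECT last_event_processed ... */ }
    static void ApplyToReadStore(ResolvedEvent e) { /* steps 2-5 from above */ }
}
```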

Tom

NEventStore does the same thing I mentioned, with replay on startup. I’ll explain more when I’m at a computer.

Furthermore, I recently read that it’s good practice to use synchronous event handling and read models only as long as the performance is alright, and only afterwards to go for asynchronous read models. Therefore this is kind of the default, and it should be possible somehow with GetEventStore, I guess?!

Don’t know where you heard this as a recommendation. I would say it’s an option, but definitely not the best way of going about things. There are other ways of doing similar things, and of not showing the user eventual consistency.

Jimmy Bogard has written a series of blog posts about CQRS and User Experience; I think this one is quite good: http://lostechies.com/jimmybogard/2012/08/23/cqrs-and-user-experience/. A quote:

What I have found though is that if we build asynchronous denormalization in the back-end, but try to mimic synchronous results in the front end, we’re really just choosing async where it’s not needed. […] That’s why I start with a synchronous denormalizer in CQRS systems – where users already expect to see their results immediately.

He ties it to user experience and user expectations; of course, he doesn’t suggest starting synchronously on principle. However, he has a good point: “When the user expects immediate results, jumping through hoops to mimic this expectation is just swimming upstream.”

The key point is the difference between situations where consistency has to be given up in order to achieve scalability, and other situations where consistency is actually more important. For example, we have a geographically distributed application where after editing an aggregate, a user expects to immediately see the results in the UI. On the other hand, if a person situated in a different geographic region makes a change to the aggregate, it’s no problem at all if the changes come in asynchronously, maybe even a day or two later.

We solved this by having multiple installations of the software, including the event store, in the different geographic locations. The different locations subscribe to each other’s events and asynchronously update read models accordingly. Locally, however, read models are always updated synchronously.

Another situation (with another software) we’ve been in is where we have a distinction between sync read models, which are cheap to update and which the user expects/needs to reflect changes immediately, and async read models, which are expensive. In that case, we update the sync read models synchronously, but the async read models asynchronously.

These two situations have different reasons for applying CQRS with eventual consistency; the first one to achieve availability and partition tolerance between geographic regions, the second one to remove the bottleneck of updating expensive read models. But both have user experience and expectation requirements that require either a solution to the synchronous read model update problem or a workaround.

With NEventStore and, e.g., a SQL Server backend, transactional updates over the event store and (some) read models can be achieved (if you’re willing to pay the usual cost of tying the two data stores together); with Event Store, you have to go another way (optimistic synchronous update with catch-up on restart), but it still seems to be possible. Which is good, I think, because otherwise the technology would just be forcing us to jump through hoops :slight_smile:

Best regards,

Fabian

  1. Save to the event store
  2. Start a transaction against the read store
  3. Update the read store from the event data
  4. Update something along the lines of a “last_event_processed” flag in the read store with the event number
  5. Commit the transaction

When implementing this naively, there is a race condition if two threads do this at the same time, isn’t there? In that case, thread B could execute steps 1-5 while thread A is between steps 1 and 2.

Yeah, that example does have a race condition; thanks for pointing it out. I’ve read Jimmy Bogard’s posts too; I think a lot of people presume you have to do things async when they first start looking into CQRS/ES. Another thing I’ve had success with on my current project is to make the read model event processing idempotent, so the ordering doesn’t actually matter. This obviously won’t be possible all the time, but it does simplify things when it can be done.
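
A sketch of what such an idempotent handler could look like, assuming a SQL read store with a per-row version column (the schema and SQL are illustrative):

```csharp
// Sketch of the idempotent-handler idea: a per-row version guard makes
// reprocessing (or racing duplicates) a no-op. Schema and SQL are made up.
using System.Data.SqlClient;

class IdempotentProjectionExample
{
    // Applies a "name changed" event; the WHERE clause rejects the update if
    // this event (or a later one) was already applied, so running the handler
    // twice, or from two threads, cannot corrupt the row.
    static void ApplyNameChanged(SqlConnection conn, string userId, string name, int eventNumber)
    {
        var cmd = new SqlCommand(
            @"UPDATE Users
                 SET Name = @name, LastEventNumber = @n
               WHERE Id = @id AND LastEventNumber < @n", conn);
        cmd.Parameters.AddWithValue("@name", name);
        cmd.Parameters.AddWithValue("@n", eventNumber);
        cmd.Parameters.AddWithValue("@id", userId);
        cmd.ExecuteNonQuery();  // affects 0 rows if the event was already seen
    }
}
```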

My guess is most would be doing it single-threaded = no race condition. Systems that are synchronous like this are probably doing < 10 operations/second.