Furthermore, I recently read that it's considered good practice to use synchronous event handling and read models as long as performance is acceptable, and only afterwards move to asynchronous read models. So this is kind of the default, and it should be possible somehow with GetEventStore, I guess?!
I don't know where you heard this as a recommendation. I would say it's an option, but definitely not the best way of going about things. There are other ways of achieving similar results without showing the user eventual consistency.
Jimmy Bogard has written a series of blog posts about CQRS and User Experience; I think this one is quite good: http://lostechies.com/jimmybogard/2012/08/23/cqrs-and-user-experience/. A quote:
What I have found though is that if we build asynchronous denormalization in the back-end, but try to mimic synchronous results in the front end, we’re really just choosing async where it’s not needed. […] That’s why I start with a synchronous denormalizer in CQRS systems – where users already expect to see their results immediately.
He ties it to user experience and user expectations; of course, he doesn’t suggest starting synchronously on principle. However, he has a good point: “When the user expects immediate results, jumping through hoops to mimic this expectation is just swimming upstream.”
The key point is the difference between situations where consistency has to be given up in order to achieve scalability, and other situations where consistency is actually more important. For example, we have a geographically distributed application where after editing an aggregate, a user expects to immediately see the results in the UI. On the other hand, if a person situated in a different geographic region makes a change to the aggregate, it’s no problem at all if the changes come in asynchronously, maybe even a day or two later.
We solved this by having multiple installations of the software, including the event store, in the different geographic locations. The different locations subscribe to each other’s events and asynchronously update read models accordingly. Locally, however, read models are always updated synchronously.
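To make the setup concrete, here is a minimal in-memory sketch of that topology, with hypothetical names (`Region`, `commit`, `pull_from`); the real system uses actual event store subscriptions, but the flow of events is the same: local writes project synchronously, remote events arrive via an asynchronous pull.

```python
class Region:
    """One installation: a local event store plus a local read model."""

    def __init__(self, name):
        self.name = name
        self.events = []               # the region's own event log
        self.read_model = {}           # id -> name, kept locally
        self.remote_checkpoints = {}   # how far we've pulled from each peer

    def commit(self, event):
        # Local write path: append the event and update the read model
        # synchronously, so a local user sees the change immediately.
        self.events.append(event)
        self._project(event)

    def _project(self, event):
        self.read_model[event["id"]] = event["name"]

    def pull_from(self, peer):
        # Cross-region path: an asynchronous catch-up subscription that
        # pulls the peer's events from the last known position onward.
        start = self.remote_checkpoints.get(peer.name, 0)
        for event in peer.events[start:]:
            self._project(event)
        self.remote_checkpoints[peer.name] = len(peer.events)
```

A write committed in one region is visible there at once, while the other region only sees it after its next pull; that is exactly the consistency trade-off described above.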
Another situation (with another software) we’ve been in is where we have a distinction between sync read models, which are cheap to update and which the user expects/needs to reflect changes immediately, and async read models, which are expensive. In that case, we update the sync read models synchronously, but the async read models asynchronously.
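That split can be expressed as a small dispatcher that runs the cheap projections inline on the write path and hands the expensive ones to a background worker. This is only an illustrative sketch (the class and method names are invented, and real code would need error handling and a persistent queue):

```python
import queue
import threading

class ReadModelDispatcher:
    """Updates cheap read models inline, defers expensive ones to a worker."""

    def __init__(self, sync_projections, async_projections):
        self.sync_projections = sync_projections    # cheap: updated in the write path
        self.async_projections = async_projections  # expensive: background thread
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def publish(self, event):
        # Synchronous path: the user sees these updates immediately.
        for project in self.sync_projections:
            project(event)
        # Asynchronous path: picked up later by the worker thread.
        self._queue.put(event)

    def _drain(self):
        while True:
            event = self._queue.get()
            for project in self.async_projections:
                project(event)
            self._queue.task_done()

    def wait_for_async(self):
        """Block until all queued events are projected (useful for shutdown/tests)."""
        self._queue.join()
```

The write path only pays for the cheap projections, so the expensive read models stop being a bottleneck without sacrificing the immediate feedback the user expects.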
These two situations have different reasons for applying CQRS with eventual consistency; the first one to achieve availability and partition tolerance between geographic regions, the second one to remove the bottleneck of updating expensive read models. But both have user experience and expectation requirements that require either a solution to the synchronous read model update problem or a workaround.
With NEventStore and, e.g., a SQL Server backend, transactional updates over the event store and (some) read models can be achieved (if you’re willing to pay the usual cost of tying the two data stores together); with Event Store, you have to go another way (optimistic synchronous update with catch-up on restart), but it still seems to be possible. Which is good, I think, because otherwise the technology would just be forcing us to jump through hoops.
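The optimistic-update-with-catch-up idea can be sketched roughly like this: the write path applies each event to the read model synchronously right after appending it, the read model records the position it has applied up to, and on restart anything written after that checkpoint is replayed. All names here (`EventStore`, `ReadModel`, `catch_up`) are illustrative, not the actual Event Store client API:

```python
class EventStore:
    """Toy append-only log; positions stand in for the store's stream positions."""

    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)
        return len(self.events) - 1  # position of the appended event

    def read_from(self, position):
        return [(i, e) for i, e in enumerate(self.events) if i >= position]

class ReadModel:
    """Checkpointed projection: optimistic inline updates plus restart catch-up."""

    def __init__(self):
        self.state = {}
        self.checkpoint = -1  # last applied position

    def apply(self, position, event):
        if position <= self.checkpoint:
            return  # already applied; makes replay idempotent
        self.state[event["id"]] = event["name"]
        self.checkpoint = position

def catch_up(store, model):
    """On restart, replay anything written after the last checkpoint."""
    for position, event in store.read_from(model.checkpoint + 1):
        model.apply(position, event)
```

Because `apply` is idempotent over the checkpoint, the optimistic synchronous update and the restart replay can overlap without double-applying events, which is what makes this workable without a distributed transaction.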