CQRS/EventStore/Websockets?/SPA

I am working on a new project with the above technologies. I am deep in the weeds at the moment and have some questions that some in the community might have already grappled with.

Note: The application we are building is similar to a standard CRM. We are also using Docker Swarm, so we cannot guarantee the same server will handle any given request, except for the microservice handling the web sockets.

Updating the UI

How are you guys keeping your single page application screens
updated after commands are completed? How
are you handling commands that fail?

Our pattern currently looks like this: https://i-msdn.sec.s-msft.com/dynimg/IC709550.png

I am currently focusing on the lead screen, and the UI is designed to keep the same screen up when the user issues a command. I cannot close the screen after most commands. I have seen many examples where developers close the screen afterwards and then refresh a listing; that is not an option for this screen. Once a command is sent, the UI, in most cases, needs to lock and then wait for the outcome.

Example Commands and Events

  • Update Status -> Event: Status Changed
  • Update Document -> Event: Document Updated
  • Resume Lead -> Event: Lead Resumed

Do any of you use web sockets to inform the SPA screen that a new read model was updated after a successful command, and send another message when the command fails? Is this reliable? (Note: this is our current thinking, but we are nervous to rely entirely on web sockets.)
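For what it's worth, the lock-and-wait part of this can be sketched in the SPA without committing to any particular socket library. Everything below (`CommandTracker`, `CommandResult`, the message shape) is hypothetical, not an established API:

```typescript
// Hypothetical sketch: track in-flight commands by correlation id and
// settle a promise when the socket delivers a success or failure message.

interface CommandResult {
  correlationId: string;
  ok: boolean;      // true: read model updated; false: command failed
  error?: string;
}

class CommandTracker {
  private pending = new Map<string, {
    resolve: (r: CommandResult) => void;
    timer: ReturnType<typeof setTimeout>;
  }>();

  // Called right after the command is sent; the screen stays locked
  // until the returned promise settles.
  track(correlationId: string, timeoutMs = 5000): Promise<CommandResult> {
    return new Promise((resolve, reject) => {
      const timer = setTimeout(() => {
        this.pending.delete(correlationId);
        // Missed socket message: unlock and fall back to a manual refresh.
        reject(new Error("no outcome received for " + correlationId));
      }, timeoutMs);
      this.pending.set(correlationId, { resolve, timer });
    });
  }

  // Wire to the socket, e.g.:
  // socket.onmessage = e => tracker.handleMessage(JSON.parse(e.data));
  handleMessage(msg: CommandResult): void {
    const entry = this.pending.get(msg.correlationId);
    if (!entry) return; // an update meant for another screen or tab
    clearTimeout(entry.timer);
    this.pending.delete(msg.correlationId);
    entry.resolve(msg); // msg.ok === false means the command failed
  }
}
```

The timeout is the hedge against relying entirely on web sockets: a lost message degrades into a manual refresh rather than a permanently locked screen.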

Do any of you use a correlation id and periodically query the event store to see if the command was successful? This seems wrong, because you are unduly taxing the event store, when I think you should be querying a separate read model.

Bryan

"Do any of you use web sockets to inform the SPA screen that a new
read model was updated after a successful command?:"

Or just an atom feed

Hey Bryan,

So Greg's answer is probably the simplest way to go about it, but I'm
curious why you would go this route rather than a traditional
request/response REST API (send command, persist event, update read
model, refresh view)? Any particular reason?

/Rickard

traditional request/response REST API (send command, persist event, update read model, refresh view)? Any particular reason?

This doesn’t work well in this framework because, from the interface’s point of view, after a command has been accepted there is no guarantee the read model has been updated yet. So you’ll end up doing “sendCommand(); wait(2000); refreshView()”.

As for OP: I update my SPA this way and have found it reliable for basic use. I wouldn’t build a trading platform on it, and every page has a refresh button so a user can manually update views, but it works well, especially leveraging ETags to know right off the bat whether there has been a change.

My workflow is

Send Command (Frontend) -> Accept/Deny Command (Backend) -> Events Written -> Read Model Updated -> Send Model Update -> Process Read Model Update (Frontend)

And because some browsers do weird things with background processes when the user switches tabs, when my app’s tab is re-entered I refresh all displayed models.

traditional request/response REST API (send command, persist event, update read model, refresh view)? Any particular reason?

This doesn’t work well in this framework because, from the interface’s point of view, after a command has been accepted there is no guarantee the read model has been updated yet. So you’ll end up doing “sendCommand(); wait(2000); refreshView()”.

That is easily fixable; here’s what we do. On POST we process the command, and on receiving confirmation of the write to Event Store we get the commit log position, do a redirect to the GET view with that as a query parameter, and then a request filter waits until at least that position has been written to the read model before processing the read. So there is still a wait, but instead of a time it is a log position. With that you get read-your-own-writes, consistently.
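A minimal sketch of that flow, with in-memory stand-ins and illustrative names (real code would live in the POST handler and a GET request filter): the command side returns the commit position, and the query side refuses to serve until the projection has caught up to it.

```typescript
// Sketch of read-your-own-writes via log position (all names illustrative).

type LoggedEvent = { position: number; type: string; data: unknown };

class EventLog {
  private events: LoggedEvent[] = [];

  // Returns the commit position, which the POST handler puts in the
  // redirect uri as a query parameter.
  append(type: string, data: unknown): number {
    const position = this.events.length;
    this.events.push({ position, type, data });
    return position;
  }

  readFrom(position: number): LoggedEvent[] {
    return this.events.slice(position);
  }
}

class ReadModel {
  position = -1; // last applied log position
  state = new Map<string, unknown>();

  apply(e: LoggedEvent): void {
    this.state.set(e.type, e.data);
    this.position = e.position;
  }
}

// The GET-side filter: only serve the view once the projection has
// reached the requested position.
async function queryAtLeast(
  model: ReadModel,
  position: number,
  timeoutMs = 2000,
): Promise<Map<string, unknown>> {
  const deadline = Date.now() + timeoutMs;
  while (model.position < position) {
    if (Date.now() > deadline) throw new Error("read model lagging; retry later");
    await new Promise(r => setTimeout(r, 10)); // a real filter would await a signal
  }
  return model.state;
}
```

The wait is bounded by actual projection lag rather than a guessed sleep, and most of the time the loop body never runs.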

So given that this problem is easily fixable, are there any other reasons to make a more complex solution?

/Rickard

The problem with an atom feed in our case is that we are not using the event store for the read model, and we cannot guarantee the read model has been written yet.

Also, this would not address failures.

For the read model, see @Rickard's comments. Basically you just return the position of the event (or put it in the redirect uri). On a query to the read model, it can return a Retry-After header, since it knows how far along the read model is, if the information is not there yet.
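A sketch of that query-side decision (the handler and response shapes are hypothetical, not any real framework API): compare the projection's position with the position the client asked for, and answer 503 with Retry-After when the model is behind.

```typescript
// Sketch: serve the read if the projection has caught up, otherwise tell
// the client when to come back. Names and response shape are illustrative.

interface QueryResponse {
  status: number;                   // 200 or 503
  headers?: Record<string, string>; // Retry-After on 503
}

function handleQuery(modelPosition: number, requestedPosition: number): QueryResponse {
  if (modelPosition < requestedPosition) {
    // The projection knows how far along it is, so it can give an
    // honest hint instead of making the client guess a sleep.
    return { status: 503, headers: { "Retry-After": "1" } };
  }
  return { status: 200 };
}
```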

You have an interesting fix for the consistency dilemma, but that's a lot of hard coupling for my taste, and it seems just as hard as sending a server-sent event or web socket event with an updated read model ¯\_(ツ)_/¯

You’ll have to specify in your frontend which read models are affected by which commands, or have your command return the event store position plus the GET uris of affected read models…

It seems easier to have a change API where, when a read model is updated, all subscribers of that read model are made aware of the change asynchronously. That decouples commands from read models, and if you change your read model to be affected by a new event, you won’t have to update your front end to requery the model whenever a command saves that event.
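A sketch of that change API idea (`ChangeApi` and the listener shape are made up for illustration): subscriptions are keyed by read model, so the command side never has to know who is watching what.

```typescript
// Sketch: per-read-model subscriptions; the projection host calls notify()
// after it writes, and every watcher of that model is told.

type ChangeListener = (modelName: string, id: string) => void;

class ChangeApi {
  private subscribers = new Map<string, Set<ChangeListener>>();

  subscribe(modelName: string, listener: ChangeListener): () => void {
    let set = this.subscribers.get(modelName);
    if (!set) {
      set = new Set();
      this.subscribers.set(modelName, set);
    }
    set.add(listener);
    return () => { set!.delete(listener); }; // unsubscribe handle
  }

  // Called by whatever updates the read model; each listener could, for
  // example, forward the change over a web socket to its SPA.
  notify(modelName: string, id: string): void {
    for (const listener of this.subscribers.get(modelName) ?? []) {
      listener(modelName, id);
    }
  }
}
```

If a new event starts affecting a read model, only the projection changes; subscribers and commands are untouched.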

How is sending a redirect uri and then supporting Retry-After hard coupling? This is pretty basic HTTP.

I am new to the CQRS pattern, so my interpretation could be wrong, but I thought the request/response pattern was not compatible.

With that said, we were looking at something similar. This is currently what we have developed.

Send Command -> Accept/Deny Command -> If accepted, write event -> Response to SPA -> Catch-up subscription to read model (separate application and process) -> Send web socket update to the SPA

Currently our API is request/response in the sense that it accepts or rejects the command and waits for the event to write before it responds to the SPA. So, we do get error information. I was not sure that was the correct way to go.

We are using a separate application for the read model that uses a catch-up subscription. The reason for this was scalability and resiliency. We will eventually move it to a competing consumer, I think, but I am not there yet.

Sure - if I send a command like “DeactivateCustomer”, the side effects of this command might affect many read models.

I would consider it hard coupling because the front end has to be aware of all the read models that are changed by this one command, either by knowing ahead of time or by the command handler returning a list of GET uris. Both of those violate the separation of read/write, imo.

See HATEOAS


Yeahhh - I had one client who was doing that a few years ago. Never really sold me.

Especially now, with my current project, my web api doesn’t build read models, so it has no idea what is affected by each command it sends along to the domain app. Doing something like this would make me merge my read models and command handlers into one app so I could keep track of what events do what and what uris to send back to the client. At least, that would be the easiest way to achieve this effect.

Currently our API is request/response in the sense that it accepts or rejects the command and waits for the event to write before it responds to the SPA. So, we do get error information. I was not sure that was the correct way to go.

That’s absolutely the right way to go; have your front end wait until the command is accepted, otherwise it might be rejected and your client will be split-brained.

We are using a separate application for the read model that uses a catch-up subscription. The reason for this was scalability and resiliency. We will eventually move it to a competing consumer, I think, but I am not there yet.

Sending updates from the read app up to the web api, or to the client directly, is an acceptable way to do this.

And doing the GET Retry-After thing is also acceptable, they both have tradeoffs.

"

Yeahhh - I had one client who was doing that a few years ago. Never
really sold me.
Especially now with my current project my web api doesn't build read
models so it has no idea what is affected by each command it sends
along to the domain app. Doing something like this would make me
merge my read models and command handlers into 1 app so I could keep
track of what events do what and what uris to send back to the client.
At least that would be the easiest way to achieve this effect."

Not at all. It is the server side that controls the state of the client, giving it uris to follow, as opposed to the client maintaining state and building uris. Versioning is also a win here.

Well, that’s actually my big problem with this pattern: now my web api is my front end. I have to model all the front end workflows in the web api. IMO my front end workflows should live in the front end, for the one reason that there can be multiple front ends. If I have a web site, a desktop app, and a mobile app, I might not want three different web apis as well.

As for versioning, I’m a fan of the “X-Version” header rather than a version in the uri, so like I said, I see the goal of HATEOAS but have never been sold on it.

"
As for versioning, Im a fan of the "X-Version" header rather than a
version in the uri so like I said I see the goal of HATEOS but never
been sold on it."

I don't think you have understood it? There is nothing about putting versions in the uri in HATEOAS, though this discussion is way off topic.

Sending updates from the read app to up to the web api or the client directly is an acceptable way to do this.

What do you use to accomplish this? We are currently using ASP.NET Core Web API and web sockets.

And doing the GET Retry-After thing is also acceptable, they both have tradeoffs.

So, we have not tried “GET Retry-After”. Is this just, on a successful command, calling the read-model API over and over until it comes back with an updated model with the correct version? My problem with this is that we will have a ton of unnecessary queries on the read model system. Maybe I am attempting perfection and I should just relax! :)

"
So, we have not tried “GET Retry-After”, is this just, on a successful
command, call the read-model API over and over until it comes back
with an updated model with the correct version? My problem with this
is that we will have a ton of unnecessary queries on the read model
system. Maybe I am attempting perfection and I should just relax!
:)"

Consider that the ES is just a log. Return the position in the log the event was written to. When querying, pass this value as only-return-if-you-are-this-far. Most queries should already be there. This simulates read-your-own-writes consistency.

Consider that the ES is just a log. Return the position in the log the event was written to. When querying, pass this value as only-return-if-you-are-this-far. Most queries should already be there. This simulates read-your-own-writes consistency.

We did put the event version in the read model database, just because! :) So we could easily do that.

So it is a retry > wait > retry pattern, and you could (granted, not most of the time) waste resources querying again and again until the model is updated.

Sorry for asking the same question over again; I just wanted to make sure that was the suggestion.

Under normal circumstances, the data should already be there when the first query is executed.