Where do you process your sagas?

Wondering where people are handling the processing of their sagas. Are you doing it in the same process that handles read model synchronization or in a separate process dedicated to processing sagas?

Also, the state of my sagas will be event sourced, i.e. reconstructed each time the saga is loaded from the previous events/commands that the saga has intercepted/sent. Is anyone else doing this and, if so, what issues have you run into, or is there anything I need to be careful about when reconstructing saga state in this manner?

Thanks,
Jordan

I’ve tinkered with doing it this way. You can see an implementation here: https://github.com/damianh/Obsolete-Cedar/blob/master/src/Cedar/ProcessManagers/ObservableProcessManager.cs and a sample with tests here: https://github.com/damianh/Obsolete-Cedar/blob/master/src/Cedar.Example.Tests/starbucks_should.cs#L112

What it does is handle an event, dispatch any commands ‘synchronously’, and only then store a checkpoint (as an event in another stream, in the GES implementation). So it assumes that the side receiving the commands is idempotent.

WARNING: never actually used in production.
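For anyone who does not want to dig through the repo, the flow described above is roughly this (a minimal sketch with made-up names such as ICommandDispatcher and ICheckpointStore, not the actual Cedar types):

using System.Collections.Generic;
using System.Threading.Tasks;

public interface ICommandDispatcher { Task Dispatch(object command); }
public interface ICheckpointStore { Task Save(long position); }

public class ObservableProcessManagerSketch
{
    private readonly ICommandDispatcher _dispatcher;   // sends commands to the receiving side
    private readonly ICheckpointStore _checkpoints;    // e.g. appends a checkpoint event to another stream

    public ObservableProcessManagerSketch(ICommandDispatcher dispatcher, ICheckpointStore checkpoints)
    {
        _dispatcher = dispatcher;
        _checkpoints = checkpoints;
    }

    public async Task Handle(object @event, long eventPosition)
    {
        // 1. Decide which commands this event should produce.
        IEnumerable<object> commands = DecideCommands(@event);

        // 2. Dispatch them "synchronously": wait for the dispatch to complete.
        foreach (var command in commands)
        {
            await _dispatcher.Dispatch(command);
        }

        // 3. Only then record the checkpoint. If the process dies between
        //    steps 2 and 3, the event is handled again on restart, so the
        //    side receiving the commands must be idempotent.
        await _checkpoints.Save(eventPosition);
    }

    private IEnumerable<object> DecideCommands(object @event)
    {
        // process-specific decision logic would go here
        yield break;
    }
}

The important property is the ordering: commands go out before the checkpoint is written, so a crash replays the event rather than losing it.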

I have very lightweight sagas in my application. I just save a byte with the current state.
event -> send command -> transition state

or

event -> transition state.

This has worked fine in production for years. I have had a couple of commands go missing, but I can manually resend commands.
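The shape of that kind of lightweight saga is roughly the following (a sketch built on my own assumptions; OrderEmailSaga, ISagaStateStore and the message types are invented names, not taken from the poster's code):

using System.Threading.Tasks;

// One byte of state per saga instance, keyed by aggregate root id + saga name.
public enum OrderEmailState : byte { NotSent = 0, Sent = 1 }

public interface ISagaStateStore { Task<byte> Load(string key); Task Save(string key, byte state); }
public interface ICommandSender { Task Send(object command); }

public class OrderPlaced { public string OrderId; }
public class SendConfirmationEmail { public string OrderId; public SendConfirmationEmail(string orderId) { OrderId = orderId; } }

public class OrderEmailSaga
{
    private readonly ISagaStateStore _store;      // persists a single byte per saga instance
    private readonly ICommandSender _commands;

    public OrderEmailSaga(ISagaStateStore store, ICommandSender commands)
    {
        _store = store;
        _commands = commands;
    }

    public async Task Handle(OrderPlaced e)
    {
        // The state's id is aggregate root id + saga name, which is what
        // implicitly correlates the messages.
        var key = e.OrderId + ":OrderEmailSaga";

        var state = await _store.Load(key);
        if (state == (byte)OrderEmailState.Sent)
            return;   // already transitioned, so the email is not sent again

        // event -> send command -> transition state
        await _commands.Send(new SendConfirmationEmail(e.OrderId));
        await _store.Save(key, (byte)OrderEmailState.Sent);
    }
}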

"I have very lightweight sagas in my application. Then just save a
byte with the current state.
event -> send command -> transisiton state
or
event -> transistion state."

Don't you also at least need a correlation id? :)

For what? If the state has changed, the email has been sent, so I don’t want to send it again :)
The byte has an id: aggregate root id + saga name.

1) This isn't really a process manager.
2) It looks like you are using the "aggregate root id" as the correlation id :)

Probably not. Do you have an example of how to implement it the Right way?

The main difference is that process managers normally need to listen to more than one message, which you don't have a good way of doing now. This is why the correlation id normally comes into play: the process manager essentially subscribes to the correlation id to get the related messages delivered to it.
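In code, that usually means the infrastructure keys process manager instances by correlation id and hands every message carrying that id to the same instance. A minimal sketch (IMessage, IProcessManager and ProcessManagerRouter are invented names, not any particular framework's API):

using System;
using System.Collections.Generic;

public interface IMessage
{
    Guid CorrelationId { get; }
}

public interface IProcessManager
{
    // A process manager may handle several different message types.
    void Handle(IMessage message);
}

public class ProcessManagerRouter
{
    private readonly Dictionary<Guid, IProcessManager> _instances =
        new Dictionary<Guid, IProcessManager>();
    private readonly Func<Guid, IProcessManager> _factory;

    public ProcessManagerRouter(Func<Guid, IProcessManager> factory)
    {
        _factory = factory;
    }

    public void Dispatch(IMessage message)
    {
        // "Subscribing to the correlation id": every message with the same
        // correlation id is delivered to the same process manager instance.
        if (!_instances.TryGetValue(message.CorrelationId, out var pm))
        {
            pm = _factory(message.CorrelationId);
            _instances[message.CorrelationId] = pm;
        }

        pm.Handle(message);
    }
}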

Ah, I see. My sagas never involve more than one aggregate root id. That's why I never had that problem.

Yep! Imagine however I am dealing with 4 services coordinating a process (fulfilling an order is the textbook example). I need some way of correlating my messages together. You are still doing this, just implicitly via the "aggregate id".

Cheers,

Greg
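To make the order example concrete, the process manager might look roughly like this (a sketch; the event and command types are invented, and the order id is doing double duty as the correlation id):

using System;

// Coordinates several services for one order. Every message carries the same
// correlation id (here just the order id), which is how replies from the
// payment, inventory and shipping services find their way back to this instance.
public interface ICommandSender { void Send(object command); }

public class OrderPlaced   { public Guid OrderId; }   // from the ordering service
public class PaymentTaken  { public Guid OrderId; }   // from the payment service
public class StockReserved { public Guid OrderId; }   // from the inventory service

public class TakePayment  { public Guid OrderId; public TakePayment(Guid id)  { OrderId = id; } }
public class ReserveStock { public Guid OrderId; public ReserveStock(Guid id) { OrderId = id; } }
public class ShipOrder    { public Guid OrderId; public ShipOrder(Guid id)    { OrderId = id; } }

public class OrderFulfillmentProcess
{
    private readonly ICommandSender _commands;
    private bool _paid;
    private bool _stockReserved;

    public OrderFulfillmentProcess(ICommandSender commands) { _commands = commands; }

    public void Handle(OrderPlaced e)
    {
        _commands.Send(new TakePayment(e.OrderId));
        _commands.Send(new ReserveStock(e.OrderId));
    }

    public void Handle(PaymentTaken e)
    {
        _paid = true;
        ShipIfReady(e.OrderId);
    }

    public void Handle(StockReserved e)
    {
        _stockReserved = true;
        ShipIfReady(e.OrderId);
    }

    private void ShipIfReady(Guid orderId)
    {
        // Only once both services have replied does the process move on.
        if (_paid && _stockReserved)
            _commands.Send(new ShipOrder(orderId));
    }
}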

Thanks for the examples. It got me thinking about my architecture again, and I think I found a way to simplify things for myself. I was planning on having the command handlers run in my MVC controllers, which means that command logic would be executed in IIS and then the events generated by that logic would be sent to EventStore.

Now I’m thinking of changing to first saving my commands in EventStore, in separate streams based on the CLR type of the command. The MVC/WebApi controllers will just send the commands over the “bus”, which simply stores them in ES; that’s the end of the web portion, with no potentially heavy processing in IIS at all. I’ll then have a command processor with checkpointing that pulls commands out of ES using a subscribe-to-all subscription and takes care of executing the handler logic. The events it generates will be stored in ES, and there will be an event processor with checkpointing that handles just the events for sagas and other non-read-model processes. The command processor and event processor will most likely live in the same process, but I suppose they could be separated as well. I’ll then have a separate read-model synchronization process that just handles updating my read model in SQL Server.

Thanks for getting the wheels turning,
Jordan
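A rough sketch of the command-processor half of that design, assuming a subscribe-to-all style catch-up subscription and a checkpoint store; every interface below is a hypothetical placeholder rather than the actual EventStore client API:

using System;
using System.Threading.Tasks;

public interface IAllStreamSubscription
{
    // Starts delivering every stored message after the given position.
    void Start(long? fromPosition, Func<StoredMessage, Task> onMessage);
}

public class StoredMessage
{
    public long Position;        // global position, used for checkpointing
    public string ClrTypeName;   // commands are stored in streams keyed by CLR type
    public byte[] Data;
}

public interface ICheckpointStore
{
    Task<long?> Load(string name);
    Task Save(string name, long position);
}

public class CommandProcessor
{
    private readonly IAllStreamSubscription _subscription;
    private readonly ICheckpointStore _checkpoints;
    private readonly Func<StoredMessage, Task> _executeHandler;  // deserialize the command and invoke its handler

    public CommandProcessor(IAllStreamSubscription subscription,
                            ICheckpointStore checkpoints,
                            Func<StoredMessage, Task> executeHandler)
    {
        _subscription = subscription;
        _checkpoints = checkpoints;
        _executeHandler = executeHandler;
    }

    public async Task Start()
    {
        // Resume from the last processed position, or from the beginning.
        var from = await _checkpoints.Load("command-processor");

        _subscription.Start(from, async message =>
        {
            await _executeHandler(message);   // the handler appends the resulting events back to ES
            await _checkpoints.Save("command-processor", message.Position);
        });
    }
}

Because the checkpoint is saved after the handler runs, a crash replays the last command on restart, so the handlers need to be idempotent, the same caveat as with the process manager checkpointing earlier in the thread.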

…and add all the UI compensation for the extra latency…

Don’t know how much latency there will actually be yet, but when working with events, there will always be some latency when propagating to the read model. Are you speaking from experience with this sort of architecture?

measure measure measure

Hi Greg,
Sorry for a newbie question. If my client crashes and restarts, how can I know which correlationId to subscribe again? Does that mean I have to store a list of current correlationId somewhere else, but then I have to deal with distributed transaction problem…?

Isn’t this (sagas) more of a service bus concern than an event sourcing one?

What about the causationId? Can a command have a “causationId”, since it is generated because of an event? (Assuming that the saga/process manager is the typical artifact that handles events and sends commands.)
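A common convention, though nothing the store enforces, is that every message carries a messageId, a correlationId and a causationId in its metadata; when a process manager sends a command because of an event, the command copies the event's correlationId and uses the event's messageId as its causationId. A sketch with invented type names:

using System;

// Nothing here is enforced by the store; it just lives in message metadata.
public class MessageMetadata
{
    public Guid MessageId;       // unique per message
    public Guid CorrelationId;   // shared by every message in one logical process
    public Guid CausationId;     // the MessageId of the message that directly caused this one
}

public static class Causation
{
    // Metadata for a command that a process manager sends because of an event.
    public static MessageMetadata CommandCausedBy(MessageMetadata eventMetadata)
    {
        return new MessageMetadata
        {
            MessageId = Guid.NewGuid(),
            CorrelationId = eventMetadata.CorrelationId,  // carried through unchanged
            CausationId = eventMetadata.MessageId         // the event caused this command
        };
    }
}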