Right tool for the job? (long-running processes and event sourcing)

I have many scenarios where processes orchestrate physical machinery or people.

Each process is a single application, and its state is persisted in a SQL database.

I think some of those would benefit a lot from an event-sourced architecture, both for statistical analysis (maybe auditing would be enough) and for debugging. If I design a process this way it would just be a stream of events, and since the system has been active for years I would produce something like 10 msg/s continuously, so speed is not the problem here.

Let’s round that up to a maximum of 5×10^8 msg/year.
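For reference, here is the rough arithmetic behind that figure, just multiplying the stated rate out over a year (a quick sketch, nothing more):

```python
# Back-of-the-envelope yearly volume from a sustained 10 msg/s rate.
msgs_per_second = 10
seconds_per_year = 60 * 60 * 24 * 365           # ~3.15e7 seconds

msgs_per_year = msgs_per_second * seconds_per_year
print(f"{msgs_per_year:.2e} msg/year")          # ~3.15e8, rounded up to 5e8 for headroom
```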

I still have to spike an implementation, so I may be talking nonsense here :smiley: Would event sourcing, and therefore Event Store, be the right tool for the job? Of course I’ll need to keep the working set in memory and snapshot every night, or am I on the wrong path?
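Roughly what I’m picturing, as a minimal sketch (the `snapshot_store` interface and all names here are hypothetical, just to illustrate the working-set-in-memory plus nightly-snapshot idea):

```python
import json
from datetime import datetime, timezone

class Process:
    """In-memory working set for one long-running process, rebuilt from its event stream."""

    def __init__(self, process_id):
        self.process_id = process_id
        self.version = -1     # sequence number of the last event applied
        self.state = {}       # whatever the process needs to orchestrate machinery or people

    def apply(self, event):
        # Mutate state from a single event; used both for live events and during replay.
        self.state.update(event["data"])
        self.version += 1

    def snapshot(self):
        # Point-in-time copy of the state, tagged with the version it was taken at.
        return {
            "process_id": self.process_id,
            "version": self.version,
            "taken_at": datetime.now(timezone.utc).isoformat(),
            "state": json.loads(json.dumps(self.state)),   # cheap deep copy
        }

def nightly_snapshot(processes, snapshot_store):
    # Run once a night from a scheduler; snapshot_store is a hypothetical store with a save() method.
    for process in processes:
        snapshot_store.save(process.process_id, process.snapshot())
```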

Thanks, Valerio

In terms of snapshotting, it can be done asynchronously whenever you want. It depends on how many aggregates you have and how many messages each one has. In terms of messages, it sounds like you are at around 500M; this is feasible on a single node. How big are they?

Good! They are not that big, let’s say 10 KB each, maybe more if I save commands in commit metadata.
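If those numbers hold, the raw payload volume works out to something like this (ignoring indexes, metadata, and any command data saved in commit metadata):

```python
msgs_per_year = 5 * 10**8     # upper-bound estimate from above
bytes_per_msg = 10 * 1024     # ~10 KB per message

total_tb = msgs_per_year * bytes_per_msg / 1024**4
print(f"~{total_tb:.1f} TB/year of raw payload")   # ~4.7 TB/year
```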

I’ve been watching the dev branch since I posted this question, and you’ve developed all the features that make this possible (it was just exposing existing APIs to the client, by the way), like ReadAllEvents (for bus publishing) and ReadEventsBackward (to get the last snapshot instance efficiently, maybe not that useful with the maxcount parameter). Great work!
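The recovery pattern I’m picturing, as a rough sketch reusing the Process class from my earlier post (the `store` wrapper here is hypothetical, not the real client API; it just shows the idea of reading the snapshot stream backwards with a count of 1 and then replaying forward):

```python
def load_process(store, process_id):
    # `store` is a hypothetical wrapper assumed to expose:
    #   read_stream_backward(stream, count)        -> list of events, newest first
    #   read_stream_forward(stream, from_version)  -> iterable of events in order
    process = Process(process_id)

    # Newest snapshot only: read the snapshot stream backwards with count=1.
    snapshots = store.read_stream_backward(f"snapshot-{process_id}", count=1)
    if snapshots:
        snap = snapshots[0]
        process.state = snap["state"]
        process.version = snap["version"]

    # Replay only the events written after that snapshot was taken.
    for event in store.read_stream_forward(f"process-{process_id}",
                                           from_version=process.version + 1):
        process.apply(event)

    return process
```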
By the way, could you explain how you got the 50% TPS performance boost? :smiley:

Best, Valerio

Oh, it was a spike; it’s not in dev yet and it’s Windows-only. It has to do with the storage writer. TPS should be fine for most people out of the box now. By the way, it makes a big difference if you compile in release vs. debug, as there are some sanity checks in debug builds.