Competing consumers and $all

Right off the bat, I did find a thread from last year stating that competing consumers won’t work on $all and would require a fair amount of work to implement. I’d just like to add my use case into the mix for future consideration.

In my app I’m not using Event Store’s internal projections — I’m just using the Event Store to store events.

I have multiple console apps, each doing a catch-up subscription to $all and handling only the events they care about. So I have an “Elastic” service which listens for certain events and writes data to Elasticsearch when something interesting happens, plus a “Mongo” one, a “SQL” one, and so on. In essence I built my own projection functionality — not because I didn’t want to use Event Store’s, but because I wanted to handle events the same way NSB handles commands and not worry about constantly tweaking JavaScript.

The problem I’m having as I transition into a higher-density deployment is that individual $all consumers just can’t keep up with the event stream and end up disconnected from the store after a few hours. If I could compete on $all, I could spin up 3–4 “Elastic” consumers against $all and events would be fed to each in turn.

I hope I explained my case well enough, and I’ll be watching for any development regarding this feature.

In the meantime I am thinking of trying out the linkTo trick mentioned in the previous thread, or even just hashing the OriginalPosition to determine whether a consumer should pass on or process a specific event.
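As a rough illustration of that hashing workaround (this is not real Event Store client code — the function names and the simple modulo “hash” are just assumptions for the sketch; a real consumer would apply the check inside its subscription callback):

```python
# Hypothetical sketch: sharding a $all subscription across N identical
# consumer instances by bucketing on the event's commit position.
# Every instance receives the full stream but only processes its bucket.

def owns_event(commit_position: int, consumer_index: int, consumer_count: int) -> bool:
    """Return True if this consumer instance should process the event."""
    return commit_position % consumer_count == consumer_index

# Example: with 3 "Elastic" consumers, instance 1 skips ~2/3 of the stream.
events = [(pos, f"event-{pos}") for pos in range(10)]
mine = [name for pos, name in events
        if owns_event(pos, consumer_index=1, consumer_count=3)]
```

The obvious downside, as noted below, is that each instance still has to *receive* and deserialize every event — only the handling work is divided.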


It is quite common to use external projections. From my understanding, your handlers are not fast enough to keep up with your event flow (a simple in-memory handler can keep up with more events/second than ES can handle writing).

$all with competing consumers is probably *not* what you want here. By using $all with competing consumers you would have to give up any concept of ordering. That makes projection code much more complex (e.g. you can never just update something, because the create may arrive later). This code is also quite fun to test (lots of edge conditions).
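To make that concrete, here is a minimal sketch (plain Python, with an in-memory dict standing in for a real read store) of the kind of order-insensitive handler you would be forced to write — the create handler must not clobber data an update has already written, and further edge cases (two updates out of order, deletes, etc.) still remain:

```python
# Hypothetical sketch: projection handlers that tolerate "updated"
# arriving before "created". The dict stands in for a real read store.
read_store = {}

def on_created(evt):
    # "created" may arrive *after* an update, so only fill in fields
    # that are still missing instead of blindly writing the document.
    doc = read_store.setdefault(evt["id"], {})
    for field, value in evt["data"].items():
        doc.setdefault(field, value)

def on_updated(evt):
    # Upsert: the document may not exist yet if "created" hasn't arrived.
    doc = read_store.setdefault(evt["id"], {})
    doc.update(evt["data"])

# "updated" delivered before "created" -- the end state is still correct.
on_updated({"id": "c1", "data": {"name": "Bob"}})
on_created({"id": "c1", "data": {"name": "Alice", "tier": "free"}})
```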

Normally the place to start in such a situation is to try to make the projections faster. Most projections talking to things like Elasticsearch spend a large amount of their time waiting on IO. Batching can reduce the amount of IO wait time (doing multiple concurrent async operations is another way of reducing it).



That all makes sense, but queuing events into a memory store will only last until I run out of memory :stuck_out_tongue:

If I have 100 event producers each spitting out 50 events a second, it’s just not possible to expect even a memory store to keep up. It just doesn’t make sense that my domain models can scale but the read projections can’t.

For the elastic handler, it actually doesn’t care about order at all — each event it writes acts as additional data, with no “creating” vs “updating” — so not being able to use competing consumers just forces me into a weird workaround.

I guess the best remaining option for scaling read models would be to partition events via ES projections and have clients read from a specific stream? “CustomersStartingWithA”, “CustomersStartingWithB”…
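A sketch of that partitioning scheme — here just a plain routing function, where in practice the routing would live in a custom ES projection that links each event into its partition stream:

```python
# Hypothetical sketch: route each customer event to a smaller per-letter
# stream, so each reader subscribes to one partition instead of $all.

def partition_stream(customer_id: str) -> str:
    """Derive the partition stream name from the customer id."""
    return f"CustomersStartingWith{customer_id[0].upper()}"
```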

I don’t understand your response; nowhere did I suggest a memory store.

And at 5,000 events per second a memory store would have no trouble — a catch-up subscription delivers roughly 50–100k events/second.

Memory store is maybe too strong a word, but you were suggesting queueing events in memory while waiting for processing / bulking them.
My point was that if I’m receiving 50k events per second and Elasticsearch can write only 40k per second even with bulk processing, then the app’s memory is going to run out before it finally catches up.

"Memory store is maybe too strong a word, but you were suggesting
queueing events in memory while waiting for processing / bulking them

I suggested batching as opposed to doing an operation/event. Much of
your time is being spent on IO waiting especially at per event
grnularity of writes. I never suggested "let's just put everything in