Event Store stops soon after Competing Consumers is used

We are using the “top” command on Ubuntu to monitor memory usage.

We have also noticed something else a bit strange.

  1. We browse to /web/index.html#/streams

  2. There are a lot of $projections-$<guid> being written (every minute)

  3. When we drill into one of these we have:

  4. Now, when we drill into $create-and-prepare, we have:
{ "id": "4e083d9d90bc435c8e961ba40092d305", "config": { "runAs": "admin", "runAsRoles": [ "$admins", "admin" ], "pendingEventsThreshold": 5000, "maxWriteBatchLength": 500, "stopOnEof": true }, "version": { "id": -2, "epoch": -1, "version": 1 }, "handlerType": "JS", "query": "fromStream('AllSales_425095d57e044b1ebc08735873c176fd_20170306')\n .when({\n $init: function (s, e) {\n return {\n total: 0\n }\n },\n 'Vme.Eposity.SalesTransactions.DomainEvents.SalesTransactionSummaryAddedEvent': function (s, e) {\n\n s.total += e.data.SalesTransactionLineTotal;\n }\n });\n", "name": "2e4d089e-5315-428a-a287-34d17abad0e1" }

This has our custom projection in it and seems to be getting processed every minute. The projection has been stopped, so we can’t work out why it still appears to be running.

Hi Steven,

I’d like to just note something about some of the errors you are seeing in your logs. This might not be the sole cause of your troubles, but it is most likely related to them.

The error in your log looks like an issue we saw in 3.9.3, which was caused by using multiple event type filters when reading from $all (the sketch below shows the general shape of such a projection).
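Something along these lines, where the event type names are purely illustrative:

    fromAll()
      .when({
        // Two event type filters over $all - the pattern affected in 3.9.3.
        'Example.EventTypeA': function (s, e) {
          // handle EventTypeA
        },
        'Example.EventTypeB': function (s, e) {
          // handle EventTypeB
        }
      });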

In this scenario, system streams and linkTo events weren’t properly filtered out when getting events for the projection to process.

This allowed messages in parked message queues to be processed by the projection, which resulted in an exception like the one you are seeing being thrown:

[PID:04187:031 2017.03.06 09:24:20.718 ERROR QueuedHandlerAutoRes] Error while processing message EventStore.Projections.Core.Messages.ReaderSubscriptionMessage+CommittedEventDistributed in queued $
System.ArgumentException: complete TF position required
Parameter name: committedEvent

This issue was fixed in 4.0.0 by this PR, which prevents those events from being passed through the filter and causing trouble.

A workaround for now, and perhaps a way to check whether this is the cause of what you are seeing, would be to create a similar projection that uses $any as the filter and an if statement inside the handler to decide whether or not to process each event, as in the sketch below.
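A minimal sketch of that workaround, assuming it keeps the same source stream and event type as the projection shown above, might look like this:

    fromStream('AllSales_425095d57e044b1ebc08735873c176fd_20170306')
      .when({
        $init: function () {
          return { total: 0 };
        },
        $any: function (s, e) {
          // Decide inside the handler whether to process the event,
          // instead of relying on an event type filter.
          if (e.eventType === 'Vme.Eposity.SalesTransactions.DomainEvents.SalesTransactionSummaryAddedEvent') {
            s.total += e.data.SalesTransactionLineTotal;
          }
        }
      });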

Hayley-Jean,

We have updated our projection to match your suggestion and have just restarted our Event Store.

Will report back.

This is the longest our ES has been running without falling over in a while, so promising signs so far!

I am, however, curious how this could be related to a memory problem.
These would seem to be two distinct issues.

System still running, but RAM usage is at 51%.
There is almost no traffic on the system during the night, so we are a little concerned that the RAM issue still exists (though nowhere near as severe as before).

Thanks for reporting this back, Steven.
As Greg mentioned, these appear to be two distinct issues; we will, however, investigate.

It died again :frowning:
We are looking at upgrading to v4 to see if that sorts the problem.

ES is still running, but RAM usage has climbed to 41%.
We are keeping a close eye on it and hoping it doesn’t die again.

Again, the system has barely broken a sweat since it started (in terms of events processed).

In our db folder, there are 7 x 256 MB chunk files (I am sure I read somewhere that these files are loaded into memory), so that might tally up with what we are seeing.
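(Back-of-the-envelope: 7 x 256 MB is about 1.75 GB, so if all of the chunk files were held in memory at once, that alone could account for a sizeable portion of the RAM usage we are seeing; this assumes every chunk stays cached, which may not be the case.)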