Read batches and sharing

We have the recommended setup for projections, using a catch-up subscription and storing the last read position for each.

The projections run against the same event store client, and are generally within 50-100 positions of each other. As the default read batch size is 500, there is a lot of overlap in the data being read.

If I have, say, 20 projections, with 20 catch-up subscriptions, will each of them read their own copies of the events? I don’t even know if this is something to concern myself with. What about 200 projections?

Thanks for any insight!

  • Bryan


Yes, each subscription will read its own copy of the events - it runs over TCP and does not (currently) do any caching. If you have 200 projections hosted in one process, you might be better off having a single catch-up subscription to $all and dispatching internally.
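The internal-dispatch idea can be sketched roughly as follows. This is not any real EventStore client API - the `Projection`/`Dispatcher` names and the in-memory state are illustrative only - but it shows the pattern: one subscription started from the lowest checkpoint, fanned out to every projection, with each projection skipping positions it has already processed.

```python
# Sketch: one catch-up subscription's events fanned out to many projections.
# All names here are hypothetical; real clients (TCP or gRPC) differ, but
# the fan-out and per-projection checkpointing pattern is the same.

class Projection:
    """A projection with its own checkpoint, fed from a shared event stream."""
    def __init__(self, name, start_position):
        self.name = name
        self.position = start_position  # last processed $all position
        self.state = {}                 # toy read model: counts by event type

    def handle(self, position, event):
        # Skip events this projection has already seen (its checkpoint may
        # be further along than the subscription's shared start position).
        if position <= self.position:
            return
        self.state[event["type"]] = self.state.get(event["type"], 0) + 1
        self.position = position  # persist this atomically in real code

class Dispatcher:
    """Feeds a single subscription's events to every registered projection."""
    def __init__(self, projections):
        self.projections = projections

    def subscribe_from(self):
        # Start the one subscription at the *lowest* checkpoint so that no
        # projection misses events; duplicates are filtered per-projection.
        return min(p.position for p in self.projections)

    def on_event(self, position, event):
        for p in self.projections:
            p.handle(position, event)
```

With 200 projections this turns 200 overlapping reads into a single read stream, at the cost of the subscription re-reading from the oldest checkpoint after a restart.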

You could achieve the same thing (albeit likely with slightly higher latency) by using the HTTP client that JustGiving put together, which has something similar to a catch-up subscription but should (I'd imagine) take advantage of local HTTP caching. I've not personally looked it over, but I believe they are using it in production - perhaps Jon can confirm its current status?



Looking through the code that is here… one suggestion for the JustGiving guys: support timing out the HTTP connections. (This needs to be done outside HttpClient, IIRC, as there are some hokey scenarios where it can go off to lunch for 60 seconds.)