Catch Up Subscriptions

Hi all,

What’s the best practice for subscriptions? I’ve currently set up an observer mechanism that reads handler classes out of a cache, processes the events from the $all subscription, and dishes them out appropriately. The reason for this is that my aggregates have a stream per instance; to subscribe to every instance’s stream individually I’d need to store the fact that each stream exists somewhere and recreate a subscription every time a new one appears.

BUT

Using the $all subscription means your checkpoint is the same one for every aggregate and every observer. If I add a new observer, it will begin from wherever the subscription currently is, which is not the point it actually needs, right? How does one deal with this? Always start from StreamPosition.Start in the $all subscriber, or is storing a list of per-stream subscriptions actually the right thing to do here?

Many thanks!

I am not clear on what you are trying to achieve. Can you clarify a bit?

I usually would not use a subscription at all for handling aggregates (loading them on the fly is often preferred).

Sounds like I’ve missed something then. What I’m trying to do is have a system of observers listening to the events the system produces in order to build read models and store them in a database somewhere. Each of my domain objects (what I’ve called aggregates) has its own stream, so for example User-12345 and User-54321.

I have hooked up SubscribeToAllFrom on the EventStore connection; on each event it checks a cache of observers, and if one matches the event it fires it off with the payload. This keeps my read side in sync with my write side.

This seems to work great, but if I add a new observer, it only starts from the last known position of the SubscribeToAllFrom subscription. The only option right now is to restart my app, in which case the subscription starts from 0 again and runs through everything, but that has the potential to process things multiple times.
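Roughly what I have looks like this. A minimal sketch, written against the Node @eventstore/db-client rather than the .NET SubscribeToAllFrom I’m actually using; the Observer interface and the shared checkpoint store are made up for illustration:

import { EventStoreDBClient, START } from "@eventstore/db-client";

// Hypothetical observer shape: each read-model builder declares which event
// types it handles and how to apply them.
interface Observer {
  handles(eventType: string): boolean;
  apply(eventType: string, data: unknown): Promise<void>;
}

// Stand-in for a real checkpoint store (in practice a row in the read-model DB).
type Checkpoint = { commit: bigint; prepare: bigint };
let sharedCheckpoint: Checkpoint | undefined;
async function loadSharedCheckpoint() { return sharedCheckpoint; }
async function saveSharedCheckpoint(p: Checkpoint) { sharedCheckpoint = p; }

const client = EventStoreDBClient.connectionString("esdb://localhost:2113?tls=false");

async function runDispatcher(observers: Observer[]) {
  // One checkpoint shared by every observer. This is the root of the problem:
  // a brand-new observer can only join from wherever this position currently is.
  const from = (await loadSharedCheckpoint()) ?? START;
  const subscription = client.subscribeToAll({ fromPosition: from });

  for await (const resolved of subscription) {
    const event = resolved.event;
    if (!event || event.type.startsWith("$")) continue; // skip system events

    for (const observer of observers) {
      if (observer.handles(event.type)) {
        await observer.apply(event.type, event.data);
      }
    }

    // Persist the single shared position (the exact position field may vary by client version).
    await saveSharedCheckpoint(event.position);
  }
}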

Just to clarify: when I’m running events on the aggregate itself, I am loading on the fly. That is all write side and therefore runs through all the stored events and builds itself up that way. What I’m specifically talking about here are read models that may be an amalgamation of different aggregates.

If this type of paradigm is not the way to go, what’s the best fit for keeping the read side in sync with the write side?

Thanks!

Why not fire up a secondary bit for catching up? E.g. catch up with just a loop of reads... You could even use a second subscription for this and just toss it when it hits live. Much of what is in the subscription isn't needed (or even wanted!) on a replay. There are some other ways you can optimize this further!
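Something like this, sketched again with the Node client and the same made-up Observer shape as above: catch the new observer up with a plain loop of reads over $all, then hand it to the live dispatcher once the reads run out.

import { EventStoreDBClient, START, FORWARDS } from "@eventstore/db-client";

interface Observer {
  handles(eventType: string): boolean;
  apply(eventType: string, data: unknown): Promise<void>;
}

const client = EventStoreDBClient.connectionString("esdb://localhost:2113?tls=false");

// Catch a single new observer up on its own, without touching the shared
// live subscription or its checkpoint.
async function catchUp(observer: Observer) {
  const events = client.readAll({ fromPosition: START, direction: FORWARDS });

  for await (const resolved of events) {
    const event = resolved.event;
    if (!event || event.type.startsWith("$")) continue;
    if (observer.handles(event.type)) {
      await observer.apply(event.type, event.data);
    }
  }

  // readAll stops at the current end of $all, so the observer is now (near
  // enough) live and can be added to the main dispatcher. There may be a small
  // overlap with what the live subscription delivers next, so keep apply()
  // idempotent, or track the last position applied here and skip duplicates.
}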

There is a common pattern with read models of adding an entire new read
model when you have changes (e.g. every release = rebuild the entire read
model from scratch; hey, at least you know it's safe :-D). This works quite
well in most cases, though it can also delay things a bit.
Projections building read models in general should not have external
interactions (they are building state, not interacting with other
things, e.g. sending emails; isolate those into something else instead).
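One way to read the "entire new read model" idea, as a hedged sketch: build version N of the read model next to version N-1 by replaying from the start, then switch reads over and retire the old one. The storage helpers (createTables, pointReadsAt, dropTables) and the table naming are made up; the client and Observer shape are as in the sketches above.

import { EventStoreDBClient, START, FORWARDS } from "@eventstore/db-client";

interface Observer {
  handles(eventType: string): boolean;
  apply(eventType: string, data: unknown): Promise<void>;
}

const client = EventStoreDBClient.connectionString("esdb://localhost:2113?tls=false");

// Made-up storage helpers; back these with your read-model database.
async function createTables(prefix: string) { /* create empty tables/keys under prefix */ }
async function pointReadsAt(prefix: string) { /* switch the UI/API queries to prefix */ }
async function dropTables(prefix: string) { /* delete the previous version's storage */ }

// Build read-model version N from scratch next to version N-1, then switch over.
async function rebuildReadModel(version: number, buildObserver: (tablePrefix: string) => Observer) {
  const target = `user_summary_v${version}`; // hypothetical naming scheme
  await createTables(target);
  const observer = buildObserver(target);    // observer writes into the new version's tables

  for await (const resolved of client.readAll({ fromPosition: START, direction: FORWARDS })) {
    const event = resolved.event;
    if (!event || event.type.startsWith("$")) continue;
    if (observer.handles(event.type)) {
      await observer.apply(event.type, event.data);
    }
  }

  await pointReadsAt(target);
  await dropTables(`user_summary_v${version - 1}`);
}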

Yeah, my read models won’t have interactions at all. It’s just for UI and API reasons that they are there. It will be a lot easier to query a quick flattened DB table or Redis cache than to go through the event store and rehydrate every time. So really what we’re saying is: on each release, “drop” the read model storage elements and re-create them? My worry there would be downtime, but I do see the appeal of it always being true to the ES!

I’ll keep playing, but thanks for showing me I’m not missing the point entirely so to speak!

It really sounds like you need to be using the built-in $by_category projection, so that you have a stream per aggregate type (not per aggregate instance).

In your case the stream would be called $ce-User, which contains all the events for every User aggregate.

You can have multiple subscriptions to this stream each with their own checkpoint so you can create multiple read models if required.

You do not need to know about any individual aggregate this way.
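A sketch of that with the Node client. The $by_category system projection must be enabled on the server, and the per-read-model checkpoint store here is a made-up stand-in for wherever each read model persists its own position:

import { EventStoreDBClient, START } from "@eventstore/db-client";

const client = EventStoreDBClient.connectionString("esdb://localhost:2113?tls=false");

// Made-up per-read-model checkpoint store (in practice a row per read model).
const checkpoints = new Map<string, bigint>();
async function loadCheckpoint(readModel: string) { return checkpoints.get(readModel); }
async function saveCheckpoint(readModel: string, revision: bigint) { checkpoints.set(readModel, revision); }

// Each read model subscribes to the category stream with its own checkpoint,
// so a brand-new read model simply starts from START without affecting the others.
async function projectFromCategory(
  readModel: string,
  apply: (eventType: string, data: unknown) => Promise<void>,
) {
  const from = (await loadCheckpoint(readModel)) ?? START;

  const subscription = client.subscribeToStream("$ce-User", {
    fromRevision: from,
    resolveLinkTos: true, // $ce-User holds link events; resolve them to the real User-* events
  });

  for await (const resolved of subscription) {
    if (!resolved.event) continue;
    await apply(resolved.event.type, resolved.event.data);

    // The link's revision is this event's position within $ce-User itself.
    const revision = resolved.link?.revision ?? resolved.event.revision;
    await saveCheckpoint(readModel, revision);
  }
}

For example, projectFromCategory("user-summary", applyUserSummary) and projectFromCategory("user-audit", applyUserAudit) could run side by side, each catching up at its own pace.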