Dispatching Events (NEventStore vs GetEventStore)

Hello,

I’ve been using NEventStore for a prototype and I’m now moving to GES to compare them. One difference I noticed is that NEventStore stores everything in the same collection (I’m using MongoDB), keyed by a StreamId. In GES, we are creating a new stream for each aggregate saved, even if it’s the same type. Is that correct?

Now I’m trying to dispatch all the events, and I noticed that with this approach I have to subscribe to all the events in the event store… will that affect performance?

Another difference is that NEventStore keeps track of all the events already dispatched. I think I can reproduce the same behavior using metadata; is there any other way? I can also keep a record of the latest position in the database in case the application restarts…

Thanks.

Yes, a stream per aggregate is the correct idea.

NEventStore’s ‘dispatch’ puts the solution in the wrong place. These subscriptions should be subscriber-driven (store the position with your view model) rather than publisher-driven, as otherwise you can’t, for example, rebuild a single projection.

You can also subscribe to all ...

"I can also keep the record of the latest position in the database in case the application restarts..."

Yes, basically just remember the last position for a given subscriber.
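To make that concrete, here is a minimal sketch of subscriber-driven position tracking (the names `CheckpointStore` and `catch_up` are illustrative, not the GES client API): each subscriber persists the last position it has processed, ideally alongside its own view model, and resumes from there on restart.

```python
class CheckpointStore:
    """Stores the last processed position per subscriber (in memory here;
    in practice this would live in the same database as the view model)."""
    def __init__(self):
        self._positions = {}

    def load(self, subscriber_id):
        return self._positions.get(subscriber_id, -1)

    def save(self, subscriber_id, position):
        self._positions[subscriber_id] = position


def catch_up(events, subscriber_id, checkpoints, handle):
    """Replay every event after the stored checkpoint, advancing it as we go."""
    last = checkpoints.load(subscriber_id)
    for position, event in enumerate(events):
        if position <= last:
            continue  # already processed before the restart
        handle(event)
        checkpoints.save(subscriber_id, position)


# Usage: after a restart the subscriber only sees events it has not handled.
log = ["created", "renamed", "archived"]
seen = []
checkpoints = CheckpointStore()
checkpoints.save("search-view", 0)   # pretend event 0 was handled earlier
catch_up(log, "search-view", checkpoints, seen.append)
print(seen)                          # ['renamed', 'archived']
```

Because each subscriber owns its own checkpoint, you can rebuild a single projection by resetting just that subscriber’s position, which is the advantage over publisher-driven dispatch.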

OK, this eliminates the need for MassTransit between the read model and the write model, which is valuable, but I will still need to publish the events to other services.

With this approach,

  1. I would have a system with multiple services using the same event store; each service has its own domain and saves its aggregates to the event store.
  2. I would have multiple services for the read model, or one service; I’m not sure yet. I can have one service using MSSQL, and I need at least one with ElasticSearch for the search.
  3. I must have only one read-model backend synchronizing all the read models, or else I would need some logic to prevent publishing the same event multiple times.
  4. How do I scale this synchronizer?

I’m not feeling very confident about this approach, because I either need some logic to prevent publishing the same event multiple times from different stream subscribers, or I end up with a system that is hard to scale. I bet I’m missing something here that you can help me with.

On Monday, 29 June 2015 at 18:01:12 UTC+1, Bruno Costa wrote:

See competing consumers.

For projections you rarely want competing consumers; instead you want a client-driven subscription (a catch-up subscription). Each read model would have its own catch-up subscription. For more on this see
https://www.youtube.com/watch?v=GbM1ghLeweU

Exactly-once messaging is basically a myth; you’ll need that deduping code anyway at some point or another. This just makes it explicit.
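A sketch of the explicit deduping Greg is describing (names are illustrative, not from any library): since at-least-once delivery can redeliver an event, the consumer records the IDs it has already handled and silently drops repeats.

```python
def make_idempotent(handle):
    """Wrap a handler so redelivered events (same event_id) are ignored."""
    seen = set()

    def wrapper(event_id, payload):
        if event_id in seen:
            return False      # duplicate delivery: already applied
        handle(payload)
        seen.add(event_id)
        return True

    return wrapper


applied = []
handler = make_idempotent(applied.append)
handler("evt-1", "order placed")
handler("evt-1", "order placed")   # redelivery, dropped
handler("evt-2", "order shipped")
print(applied)                     # ['order placed', 'order shipped']
```

In a real system the `seen` set would be persisted in the same transaction as the read-model update, which is essentially what the NServiceBus tables mentioned below do for you.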

Myth? No, it never actually existed.

Well, no, NSB does it. They inject the deduping code for you, given that they have tables in your SQL DB.

I know I will have to deal with it.

@Greg, I’m not authorized to use competing consumers at the moment, so I will have to live with that and will need MassTransit (or something similar) to send the events to other services.

But with this mechanism, I need a single component that is responsible for reading the stream and sending the events to MassTransit, right?

My question is: how do I scale this out easily?

On Tuesday, 30 June 2015 at 22:20:56 UTC+1, Greg Young wrote:

So you will read from a queue and put the events in another queue (say Rabbit)? Is that what you are saying?

The second part, yes.

Yes, I will read from GES and put them in another queue.
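That forwarding component can be sketched as a simple pump (illustrative names, standing in for the GES client and the MassTransit publisher): a single reader pulls events from the store in order, publishes each one to the outgoing queue, and only then advances its checkpoint, so a crash between publish and checkpoint causes a redelivery rather than a loss.

```python
import queue


def forward(events, out_queue, start_after=-1):
    """Publish every event after `start_after` to the queue, in order.
    Returns the new checkpoint position."""
    checkpoint = start_after
    for position, event in enumerate(events):
        if position <= checkpoint:
            continue              # already forwarded on a previous run
        out_queue.put(event)      # in real code: bus.Publish(event)
        checkpoint = position     # in real code: persist the checkpoint
    return checkpoint


store = ["a", "b", "c"]           # stand-in for the GES all-events stream
rabbit = queue.Queue()            # stand-in for MassTransit / RabbitMQ
cp = forward(store, rabbit, start_after=0)
print(cp)                         # 2
print(rabbit.qsize())             # 2 ("b" and "c" were forwarded)
```

Because the pump must preserve order, it is typically a single writer; one common way to scale it is to partition the streams across several pumps rather than run identical competing copies, which would otherwise reintroduce the duplicate-publishing problem raised earlier in the thread.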

Reasons for this to happen:

  1. GES Competing Consumers is still in pre-release
  2. Other teams don’t need to be aware of GES
  3. My boss says so…

Thanks.

On Wednesday, 1 July 2015 at 14:40:56 UTC+1, Greg Young wrote:

Will they be running projections off these? Also, they will be released in about a week.

I’m trying to keep all the services that will be running projections on GES, but in the end we might have some services creating their own projections (read models) based on the events we send. It’s a big company and we cannot ensure how they will use or react to those events.

On Wednesday, 1 July 2015 at 15:50:30 UTC+1, Greg Young wrote: