my take on an Event Dispatcher

Yes, just hook it up to a catch-up subscription :slight_smile: it handles all the work (SubscribeToStreamFrom / SubscribeToAllFrom, etc.).
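For anyone following along, here is a minimal sketch of what hooking a denormalizer up to a catch-up subscription can look like with the .NET ClientAPI. The stream name and handler body are placeholders, and the exact method signatures vary a little between client versions:

```csharp
using System;
using System.Net;
using EventStore.ClientAPI;

class CatchUpExample
{
    static void Main()
    {
        var connection = EventStoreConnection.Create(new IPEndPoint(IPAddress.Loopback, 1113));
        connection.ConnectAsync().Wait();

        connection.SubscribeToStreamFrom(
            "my-stream",        // placeholder stream name
            null,               // last checkpoint; null = start from the beginning
            true,               // resolve link events
            (subscription, resolvedEvent) =>
            {
                // Dispatch to the denormalizer here.
                Console.WriteLine("Handled event #{0} ({1})",
                    resolvedEvent.OriginalEventNumber,
                    resolvedEvent.Event.EventType);
            },
            subscription => Console.WriteLine("Caught up, now processing live"));

        Console.ReadLine();
    }
}
```

The subscription reads the history first, then switches over to live events on its own; the eventAppeared callback is the only place your code has to plug in.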

My initial thought was to go directly to ES for catch-up and switch over to RabbitMQ for load balancing during live processing (using control messages to stop/start processing), but then I realized I was over-complicating things for my scenario. I’m now thinking a simple active/passive duplication of services might fit best in my environment, which, as I mentioned at the beginning of the thread, is constrained to the few servers I’m allowed to use. “NO CLOUD FOR YOU.”

And when ES eventually supports competing consumers, I can switch from active/passive to competing if I want.

Brian, have you done any benchmarking yet on whether you need the competing consumers? Can one server/worker handle the load? -Josh

No. I’m still in the early stages of introducing events and messaging into my apps, which is one of the reasons I realized competing consumers was probably overkill. I just wanted the option of redundant services without having them fight over messages or worrying too much about concurrency and idempotency. I think active/passive is fine, at least for now; message load will be low for the foreseeable future.

To clarify, we’re working on competing consumers on streams in the Event Store at the moment.

Cheers,

James

Hi Greg, I’m not sure how I’d use that to push events to denormalizers that are interested in them. Am I misunderstanding the process?

That would just be a dictionary, no?
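To make the dictionary idea concrete, the dispatch part really can be little more than a map from event type names to handler delegates. The class and method names below are invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using EventStore.ClientAPI;

// A trivial in-memory dispatcher: event type name -> list of handlers.
public class EventDispatcher
{
    private readonly Dictionary<string, List<Action<ResolvedEvent>>> _handlers =
        new Dictionary<string, List<Action<ResolvedEvent>>>();

    public void Register(string eventType, Action<ResolvedEvent> handler)
    {
        List<Action<ResolvedEvent>> handlers;
        if (!_handlers.TryGetValue(eventType, out handlers))
        {
            handlers = new List<Action<ResolvedEvent>>();
            _handlers[eventType] = handlers;
        }
        handlers.Add(handler);
    }

    public void Dispatch(ResolvedEvent resolvedEvent)
    {
        List<Action<ResolvedEvent>> handlers;
        if (_handlers.TryGetValue(resolvedEvent.Event.EventType, out handlers))
        {
            foreach (var handler in handlers)
                handler(resolvedEvent);
        }
        // Event types with no registered denormalizer are simply ignored.
    }
}
```

The eventAppeared callback from a catch-up subscription would then just call Dispatch for every event it receives.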

I’ve put together a really simple application that has an API, a command service and a denormalizer service. I’m blogging a comparison of the CQRS Journey code and Get Event Store for use in a new feature we’re developing.

I’ve yet to write up the GES version, but the CQRS Journey part is here:

http://iwayneo.blogspot.co.uk/2014/07/sonatribe-conversations-platform-cqrs.html

I ended up refactoring a bit to better suit my needs and preferences (getting rid of Unity being one change! Plugging in RavenDB as the read model being another, and adding some auto-configuration of event and command handlers being a third).

The GES version is much simpler. Here is the code for a ridiculously simple (and unfinished) event dispatcher that works for me:

https://github.com/wayne-o/ges-tryout/blob/master/src/Ges.ReadModel/Program.cs

It obviously needs caching and more work overall, but as a spike it works well.

I’ll be continuing my experimentation and will write up the GES part as well as a summary post to compare the two in terms of developing features after the infrastructure is complete.

If anyone has any pointers for the above I’d really appreciate it :slight_smile:

I also thought it might be handy to have some example implementations of GES (plus an alternative) for the community.

w://

https://github.com/wayne-o/ges-tryout/blob/master/src/Ges.ReadModel/Program.cs#L2

This works until you restart. If you want to support restarting etc., use SubscribeToAllFrom and save your checkpoints (then give back the same checkpoint you last processed when you resubscribe). This will also handle all of the other situations that can happen, like the writer writing too fast for your denormalizer so you lose your subscription due to back pressure, or losing your socket to the Event Store due to a node dying, network problems, etc.

Cheers,

Greg
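A minimal sketch of that suggestion, assuming the .NET ClientAPI: LoadCheckpoint and SaveCheckpoint are hypothetical placeholders for whatever persistence you choose (the atomicity question is discussed just below), and the subscriptionDropped callback simply resubscribes from the last saved checkpoint:

```csharp
using System;
using EventStore.ClientAPI;

public class AllStreamDispatcher
{
    private readonly IEventStoreConnection _connection;

    public AllStreamDispatcher(IEventStoreConnection connection)
    {
        _connection = connection;
    }

    public void Start()
    {
        Position? lastCheckpoint = LoadCheckpoint(); // null on first run = replay $all from the start

        _connection.SubscribeToAllFrom(
            lastCheckpoint,
            false, // resolve link events?
            (subscription, resolvedEvent) =>
            {
                Handle(resolvedEvent); // update the read model

                if (resolvedEvent.OriginalPosition.HasValue)
                    SaveCheckpoint(resolvedEvent.OriginalPosition.Value);
            },
            subscription => Console.WriteLine("Caught up; now live"),
            (subscription, reason, exception) =>
            {
                // Dropped due to back pressure, a dying node, network problems, etc.
                // (In real code you would inspect the drop reason before retrying.)
                Console.WriteLine("Subscription dropped ({0}); resubscribing", reason);
                Start(); // resubscribe from the last saved checkpoint
            });
    }

    private void Handle(ResolvedEvent resolvedEvent) { /* denormalize here */ }
    private Position? LoadCheckpoint() { return null; /* read from wherever you store it */ }
    private void SaveCheckpoint(Position position) { /* write to wherever you store it */ }
}
```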

Aha, cool. What would be the best way to store that? Dump it in a DB, or is there a better way?

If you can store it atomically with your read model, then you will have exactly-once messaging (no need for idempotency).

If you store it in, say, a file, then you will have at-least-once messaging and need to think about idempotency.

Which is best depends entirely on the cost of atomicity with your read model.
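As one concrete illustration of the atomic option, here is a hedged sketch using a relational read model where the checkpoint row is updated in the same transaction as the projection tables. The table and column names are invented; the same shape applies to any store that gives you a transaction across the projection data and the checkpoint:

```csharp
using System.Data.SqlClient;
using EventStore.ClientAPI;

public class AtomicCheckpointStore
{
    private readonly string _connectionString;

    public AtomicCheckpointStore(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Apply the read-model update and record the checkpoint in one transaction:
    // either both happen or neither does, so handlers do not need to be idempotent.
    public void Project(ResolvedEvent resolvedEvent, Position position)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                UpdateReadModel(connection, transaction, resolvedEvent);

                using (var command = connection.CreateCommand())
                {
                    command.Transaction = transaction;
                    command.CommandText =
                        "UPDATE Checkpoints SET CommitPosition = @commit, PreparePosition = @prepare " +
                        "WHERE Name = 'read-model'";
                    command.Parameters.AddWithValue("@commit", position.CommitPosition);
                    command.Parameters.AddWithValue("@prepare", position.PreparePosition);
                    command.ExecuteNonQuery();
                }

                transaction.Commit();
            }
        }
    }

    private void UpdateReadModel(SqlConnection connection, SqlTransaction transaction, ResolvedEvent resolvedEvent)
    {
        // Placeholder: write whatever projection rows this event implies,
        // using the same connection and transaction.
    }
}
```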

Obviously a lot of the code for dealing with the raw event in Program.cs was lifted from EventBunny; I thought I should mention that :slight_smile:

So, to get this straight in my head: this should be a global number that is stored in the read model and updated each time I read an event?

Sorry for the noob questions.

Yes. If you look at the API there is a checkpoint value (it’s actually two longs); you would save this value (and give it back) assuming you were subscribed to $all. For a specific stream it’s just an int.

Basically, the client API will handle switching between reading through queries (e.g. history) and having a live subscription as things are changing. It will automatically switch back and forth between them for all the edge conditions that can occur (network losses, cluster rebalances, slow consumers, message bursts, etc.).
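To make the “two longs” concrete: for a subscription to $all the checkpoint is a Position, which can be flattened into its commit and prepare positions for storage and rebuilt when resubscribing; for a single stream it is just the event number. A small sketch (the Checkpoint class here is invented):

```csharp
using EventStore.ClientAPI;

// For a subscription to $all the checkpoint is a Position (two longs);
// for a single stream it is just the event number.
public class Checkpoint
{
    public long CommitPosition { get; set; }
    public long PreparePosition { get; set; }

    public static Checkpoint From(Position position)
    {
        return new Checkpoint
        {
            CommitPosition = position.CommitPosition,
            PreparePosition = position.PreparePosition
        };
    }

    public Position ToPosition()
    {
        return new Position(CommitPosition, PreparePosition);
    }
}
```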

Perfect thank you :slight_smile:

Greg, is the checkpoint for an individual stream the ResolvedEvent.OriginalEventNumber?

Cheers

Tom

Yes. This behaves as follows:

If the event is written directly to a stream, it will be the number of the event in that stream.

If the event is a link pointer written to a stream, it will be the number of the link rather than the number of the event pointed to.

Cheers,

James
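A small sketch of how that plays out inside an eventAppeared handler, assuming resolveLinkTos is true: OriginalEventNumber is the number in the stream you subscribed to (the link’s number when the stream contains link pointers), while Event is the resolved event the link points to:

```csharp
using System;
using EventStore.ClientAPI;

public static class LinkCheckpointExample
{
    public static void EventAppeared(EventStoreCatchUpSubscription subscription, ResolvedEvent resolvedEvent)
    {
        // OriginalEventNumber is the number in the stream you subscribed to:
        // the event's own number if it was written there directly,
        // or the link's number if the stream contains link pointers.
        int checkpoint = resolvedEvent.OriginalEventNumber;

        // With link resolution on, Event is the event the link points to,
        // while OriginalEvent is the link itself.
        Console.WriteLine("Checkpoint {0}: {1} from {2}",
            checkpoint,
            resolvedEvent.Event.EventType,
            resolvedEvent.Event.EventStreamId);
    }
}
```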

Thanks James. I was writing a catch-up subscription this morning and was wondering if I’d used the correct property, but it looks like I have.

cheers

tom