Basic setup questions

I’m considering adding some CQRS/ES to my app.

I used to mess about a lot with JOliver's event store etc…

I’m now looking at GetEventStore as well as the MS P&P CQRS Journey code

I can get the P&P code up and running without issue and it resembles the way I have done things in the past quite closely.

How’s everyone laying out their architecture with GES?

Previously I have had a command bus I push my commands onto; they get read by the domain service and processed, and any resulting events are pushed onto the event bus to be read by the denormalizer service.

How does all this fit into GES? If I wanted to use Azure Service Bus or RabbitMQ - is that possible? If so, where is the configuration point? If not, how does my denormalizer get the events?

I see there are projections coming along the way; these are not the same as a denormalized read model, right? Why would I use them? What's the use case? In my limited understanding they are so I can generate streams from streams - are they meant to provide a new Atom feed for those streams? Is this where my denormalizer should be looking for data? That doesn't feel right, but maybe I am missing the trick somewhere?

I like the simplicity of some of the P&P code but there are a few things that bug me about it. There seems to be a lot of configuration in both XML and C# - previously I have configured by convention and I worry that the dependency on Unity and the manual configuration is too deeply engrained in the P&P code. This is subjective obviously and I am sure there is a reason for it. It’s just overkill for my use case. I just don’t like Unity and XML!

Are there any examples of running GES in a small .NET project with RabbitMQ or something? Perhaps that would be the best place for me to get started.

So EventStore doesn’t make most of these decisions for you. It supports basic operations like append/subscribe/durable subscription etc. It is NOT a framework. Roughly the difference is ODBC vs an ORM.

We have been discussing creating 3-5 opinionated frameworks on top of it similar to commondomain.

In terms of how to get set up: a durable subscription is basically what you need for a projection writing to, say, a SQL database (it gives you a live feed as events happen and lets you checkpoint where you are, so on failure your projection can continue). There are reading/writing methods for doing such. E.g. if you wanted to rehydrate an aggregate in a repository the code might look something like: http://geteventstore.com/blog/20130220/getting-started-part-2-implementing-the-commondomain-repository-interface/
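
For concreteness, here is a rough sketch of what that checkpointing subscriber could look like with the .NET ClientAPI. Exact names and delegate signatures vary between client versions, and LoadCheckpoint / SaveCheckpoint / ApplyToReadModel are placeholder helpers you would implement against your own read model:

```csharp
// Hedged sketch: a catch-up subscription that updates a read model and checkpoints
// its position so it can resume after a crash. API details differ across
// EventStore.ClientAPI versions; the helper methods below are placeholders.
using System;
using System.Net;
using System.Text;
using System.Threading.Tasks;
using EventStore.ClientAPI;

class ReadModelSubscriber
{
    static async Task Main()
    {
        var conn = EventStoreConnection.Create(new IPEndPoint(IPAddress.Loopback, 1113));
        await conn.ConnectAsync();

        long? checkpoint = LoadCheckpoint(); // last event number processed, or null to start from the beginning

        conn.SubscribeToStreamFrom(
            "orders",                        // stream name is just an example
            checkpoint,
            CatchUpSubscriptionSettings.Default,
            eventAppeared: (sub, resolved) =>
            {
                var json = Encoding.UTF8.GetString(resolved.Event.Data);
                ApplyToReadModel(resolved.Event.EventType, json);   // e.g. UPDATE a SQL table
                SaveCheckpoint(resolved.OriginalEventNumber);       // so the projection can continue after failure
            },
            liveProcessingStarted: sub => Console.WriteLine("Caught up, now live"));

        await Task.Delay(-1); // keep the process alive
    }

    static long? LoadCheckpoint() => null;                      // placeholder
    static void SaveCheckpoint(long position) { }               // placeholder
    static void ApplyToReadModel(string type, string json) { }  // placeholder
}
```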

As for the internal projections, they are probably suited for a category of problems you are not interested in, e.g. replicated state machines and ad-hoc querying of an existing event store (similar to SQL querying). Much of what they support is niche: event-based systems (replicated state machines, and making them simple), or having the ability to query your existing data in new and interesting ways (ad-hoc querying).

As for RabbitMQ etc., it is probably not needed on a project running with ES, as your event store can also be used as a queue (write to / listen to a stream!).
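
To make that concrete, here is a minimal sketch of the "stream as a queue" idea, assuming the .NET ClientAPI; the "commands" stream name, the PlaceOrder payload and the handler dispatch are made up for illustration, and delegate signatures differ between client versions:

```csharp
// Hedged sketch: one process appends command messages to a stream,
// another process subscribes to the same stream and handles them.
// No RabbitMQ in between - the stream itself plays the role of the queue.
using System;
using System.Net;
using System.Text;
using System.Threading.Tasks;
using EventStore.ClientAPI;

class CommandsOverAStream
{
    static async Task Main()
    {
        var conn = EventStoreConnection.Create(new IPEndPoint(IPAddress.Loopback, 1113));
        await conn.ConnectAsync();

        // Producer side (e.g. the API): append a command to the stream.
        var payload = Encoding.UTF8.GetBytes("{\"orderId\":\"42\",\"qty\":3}");
        await conn.AppendToStreamAsync(
            "commands",
            ExpectedVersion.Any,
            new EventData(Guid.NewGuid(), "PlaceOrder", true, payload, null));

        // Consumer side (e.g. the domain service): listen to the same stream.
        await conn.SubscribeToStreamAsync(
            "commands",
            resolveLinkTos: false,
            eventAppeared: (sub, resolved) =>
            {
                Console.WriteLine($"Handling {resolved.Event.EventType}");
                // dispatch to the appropriate command handler here
            });

        await Task.Delay(-1); // keep the consumer alive
    }
}
```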

Cheers,

Greg

Aha - OK

So that definitely answers why I was looking for stuff and couldn’t find it!

GES is a lower-level thing than I was expecting.

I guess I would be interested in hearing how it is used in the real world then…

I can POST events to a stream in GES

Considering a (simplified) standard (for me perhaps) CQRS model:

API - receives PUT/POST requests, turns them into commands and pushes them onto a command bus

Domain Service - reads the commands from the command bus, does the domain handling, saves events to a stream and publishes events onto the event bus

Command Service - reads events from the event bus, maintains a denormalized read model

Which component(s) are we using GES for? Is it “saves events to stream” or is there more?

You mention there is no need to use command/event buses in a GES setup - how is that?

  • API - receives PUT/POST requests, turns them into commands and pushes them onto a command bus
  • Domain Service - reads the commands from the command bus, does the domain handling, saves events to a stream and publishes events onto the event bus
  • Command Service - reads events from the event bus, maintains a denormalized read model

By command bus you mean just a dictionary?

As for saving the events: if using an event store, saving and publishing are equivalent concepts. The API also contains things like "Subscribe!", whether over HTTP (Atom) or over TCP (protobufs); you can run projections etc. just through a subscribe operation.
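
For the HTTP side, a small sketch of what reading a stream as an Atom feed might look like, assuming EventStore's default HTTP port (2113) and a made-up "orders" stream; media types and long-polling behaviour depend on the server version:

```csharp
// Hedged sketch: poll the head of a stream's Atom feed over HTTP.
// Port, path and media type are the EventStore defaults as I understand them;
// check your server version's docs before relying on them.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class AtomPoller
{
    static async Task Main()
    {
        var http = new HttpClient { BaseAddress = new Uri("http://127.0.0.1:2113") };
        http.DefaultRequestHeaders.Add("Accept", "application/vnd.eventstore.atom+json");

        while (true)
        {
            // Head of the stream; page through the feed's rel links to read forward,
            // or look at the ES-LongPoll header to avoid busy-polling.
            var feed = await http.GetStringAsync("/streams/orders");
            Console.WriteLine(feed);
            await Task.Delay(TimeSpan.FromSeconds(1));
        }
    }
}
```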

  • By command bus you mean just a dictionary?

In my workings the API and the domain service live in separate processes connected by a command bus. Is this wrong? I thought one benefit of this would be scalability.

Am I correct in thinking the command service should receive updates through the Atom or protobufs output? In the case of the Atom interface - would that be through polling? I have zero experience with protobufs but what little I do know about it is that it is ideal for long running chatter between endpoints. Is there any example of a system that connects all of these bits up?

Sorry if these questions are wide of the mark - I’m just struggling to fit it into my understanding.

w://

“Scalability?”

How many requests/second are you looking at?

OK - so I guess you’re saying that if I’m not at the point where high scalability is a requirement, then I should forget about it.

For one project it will be minimal, but for the other project I am trying to fit this into, the SLA is ~2000 reqs/second.

So, in the main - I just POST/PUT to the event store directly from the domain service and it will just eat up my requests.

And in the read model side I can just use connection.SubscribeToStream to listen for published events.

I think I can see how this fits in now. I think!

Is that about right?

At 2000 you might prefer writing through the connection rather than posting via HTTP (e.g. appendToStream).
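
A minimal sketch of that, assuming the .NET ClientAPI and a made-up "order-42" stream; the expected version is the version the aggregate was loaded at, which gives you optimistic concurrency on the append, and signatures vary by client version:

```csharp
// Hedged sketch: the domain service appending over the TCP connection
// (rather than POSTing via HTTP), with an expected version for optimistic concurrency.
using System;
using System.Net;
using System.Text;
using System.Threading.Tasks;
using EventStore.ClientAPI;

class DomainServiceAppend
{
    static async Task Main()
    {
        var conn = EventStoreConnection.Create(new IPEndPoint(IPAddress.Loopback, 1113));
        await conn.ConnectAsync();

        long expectedVersion = 4; // version the aggregate was loaded at; a stale value fails the append
        var data = Encoding.UTF8.GetBytes("{\"orderId\":\"42\",\"status\":\"Shipped\"}");

        await conn.AppendToStreamAsync(
            "order-42",
            expectedVersion,
            new EventData(Guid.NewGuid(), "OrderShipped", true, data, null));
    }
}
```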