I did search existing topics, but did not see answers to my questions.
I am a little confused on this topic. We are using Amazon as our host, and I was looking at using SNS, SQS, DynamoDB, and RDS to set up event sourcing with projections. I am trying to figure out whether, if I were to use Event Store, I would still use some of these AWS services. Is there somewhere in the docs that explains what Event Store uses to store events and projections, and to queue and notify? I tried to find this in my first pass at the docs. I would much rather use something proven and tried, but I'm not sure where to go from here.
Also, does Event Store offer simplified snapshotting and replaying of events to rebuild a projection? I am trying to find a list of the features that Event Store offers over rolling my own on AWS. I was also reading about a new AWS feature called DynamoDB Streams (I think) that sends notifications of changes.
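To make concrete what I mean by snapshotting and replay, this is roughly the behaviour I'd otherwise have to build myself - a minimal, self-contained Python sketch where a snapshot is just saved state plus the version it covers, and rebuilding is a fold over the events recorded after it (none of this is Event Store's API; the event shapes are invented):

```python
def apply(state, event):
    """Fold a single event into the projection state."""
    if event["type"] == "ItemAdded":
        state[event["sku"]] = state.get(event["sku"], 0) + event["qty"]
    return state

def rebuild(events, snapshot=None):
    """Rebuild a projection, optionally starting from a (state, version) snapshot."""
    state, version = (dict(snapshot[0]), snapshot[1]) if snapshot else ({}, 0)
    for event in events:
        if event["version"] > version:   # skip events the snapshot already covers
            state = apply(state, event)
            version = event["version"]
    return state, version

events = [
    {"version": 1, "type": "ItemAdded", "sku": "widget", "qty": 2},
    {"version": 2, "type": "ItemAdded", "sku": "widget", "qty": 3},
]
print(rebuild(events))                               # ({'widget': 5}, 2)
print(rebuild(events, snapshot=({"widget": 2}, 1)))  # same result, fewer events replayed
```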
I’m still designing this, but my aggregate root, which receives commands, would drop the event into the event store (RDS or DynamoDB) and then send the event to an SNS topic, which would publish it to each subscriber’s respective SQS queue. I would then use a process to build the read-only projections by pulling from the queue, and store them in DynamoDB or ElastiCache (or both). I’m still figuring this all out, but I also don’t want to reinvent the wheel. Our current mode of operation is to leverage PaaS/IaaS where it makes sense. However, with this approach I would still have to implement the core event sourcing functionality myself, which I am assuming is already included in Event Store.
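To make the moving parts concrete, here is roughly what that write path would look like with boto3. The table name, topic ARN, and event shape are all invented for illustration; the conditional write gives basic optimistic concurrency on the stream:

```python
import json
import boto3

dynamodb = boto3.client("dynamodb")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:cart-events"  # hypothetical

def append_and_publish(stream_id, version, event_type, payload):
    # 1. Append the event; the condition fails if this version already
    #    exists for the stream, which guards against concurrent writers.
    dynamodb.put_item(
        TableName="events",  # hypothetical table: stream_id (hash) + version (range)
        Item={
            "stream_id": {"S": stream_id},
            "version": {"N": str(version)},
            "type": {"S": event_type},
            "payload": {"S": json.dumps(payload)},
        },
        ConditionExpression="attribute_not_exists(version)",
    )
    # 2. Fan out via SNS so each subscriber's SQS queue gets a copy.
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({
        "stream_id": stream_id, "version": version,
        "type": event_type, "payload": payload,
    }))
```

One caveat with this shape: the append and the publish are two separate calls, so a crash between them can drop a notification, and something would need to reconcile.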
Good question Greg. We are just getting started (we're a startup), and of course, like every other startup, we expect to grow substantially over time, want to be able to scale as needed, and want to provide an optimal experience for our customers. The easiest explanation would be that of a shopping cart. New items are added to a product catalog, so we need to build up the projections that stores and products will be queried against. Say we had 5k stores to start with, 50k products, and 30k registered shoppers using the site, all of it built on event sourcing / CQRS. Over time, we would hope to have several hundred thousand sellers, millions of products, and many more customers. Eventual consistency will work fine for most everything, aside from inventory. Obviously we can then scale in a few different ways x/y/z. Whatever we implement, we need to make sure we can scale it over time without over-architecting it in the beginning. Hopefully this provides what you were looking for.
The design you describe is easy enough to implement without all the AWS services.
I work for a medium-sized ecommerce business, and that’s more or less the route we’re taking: independent services that communicate through asynchronous events, with Event Store as a central hub. For CQRS, we can just play all the events through a consumer and build a view-model that gets stored in Redis, though DynamoDB would work just as well.
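Schematically, that consumer is just a fold into Redis. A minimal sketch assuming redis-py, with invented event shapes and key names:

```python
import redis

r = redis.Redis()  # assumes a local Redis instance

def project(event):
    """Apply one event to the read model kept in Redis."""
    if event["type"] == "ProductAdded":
        r.hset(f"product:{event['sku']}",
               mapping={"name": event["name"], "price": event["price"]})
    elif event["type"] == "ProductPriceChanged":
        r.hset(f"product:{event['sku']}", "price", event["price"])

def rebuild(all_events):
    """Replaying the full history from the store rebuilds the view from scratch."""
    for event in all_events:
        project(event)
```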
I’d point out that you can be eventually consistent on inventory, too. In fact, if you’re selling physical goods, you don’t have much choice - somewhere out there in the real world is a real warehouse with real boxes that get lost, and damaged, and stolen. If a customer purchases an item, but the last one has already been allocated, you just send them a voucher and an apology.
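In code, the compensating-action idea is just a branch at fulfilment time rather than a blocking check at purchase time. A toy sketch with invented event names:

```python
def fulfil(order, stock):
    """Ship if the warehouse really has the item; otherwise compensate."""
    if stock.get(order["sku"], 0) > 0:
        stock[order["sku"]] -= 1
        return {"type": "OrderShipped", "order_id": order["id"]}
    # Oversold: the real world disagreed with the read model.
    return {"type": "OrderCompensated", "order_id": order["id"],
            "action": "voucher_and_apology"}
```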
On a less relevant note, you will never be able to “scale as needed” as a startup - if you’re successful, all your plans will need to be thrown out. Just keep things simple and well separated so that you can re-engineer components when you need to.
You know, I had exactly the same set of questions. Mine were driven by cost. By using a set of composed AWS services (also including AWS Lambda and API Gateway), it seems you could architect and host an entire event-sourced application for next to nothing and not have to worry about scale, since AWS services auto-scale, and I’m sure latency between AWS services would be quite reasonable. Implementing your own Event Store would require at least a couple of instances to give you the required redundancy, and then of course you have to manage the servers, the load balancing, the scaling, etc. If you know how to compose the AWS stacks you can do a lot for not a lot (it seems). But I wondered: functionally, what do you find missing compared to a dedicated solution such as Event Store? I would be interested in a response to this question. I’m sure the moving-parts argument is valid, but I guess if you started to hit those sorts of limitations you could make the switch?
Your app posts events to DynamoDB - your source of truth. Your consumers are implemented as Lambda functions that place the event details onto an SNS topic / SQS queue. Your microservices / RDS maintain consistency by consuming these messages. Of course, at any time you could re-project etc. from DynamoDB, although there is a question: a full re-read of an entire DynamoDB table could be costly? Anyway, does that answer your question? BTW I am just learning / trying to understand all of this, so excuse any naivety…
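For what it's worth, here is a sketch of one of those Lambda consumers, assuming the events table's stream is configured to emit NEW_IMAGE and is wired up as the function's trigger; the topic ARN and attribute names are invented:

```python
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:cart-events"  # hypothetical

def handler(event, context):
    # Each invocation delivers a batch of DynamoDB stream records.
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue  # only newly appended events matter for an append-only table
        item = record["dynamodb"]["NewImage"]
        sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({
            "stream_id": item["stream_id"]["S"],
            "version": item["version"]["N"],
            "type": item["type"]["S"],
            "payload": item["payload"]["S"],
        }))
```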
OK, understood. So reading the AWS docs: you can request a strongly consistent read, although they go on to say that with the default eventually consistent reads, consistency across all copies of the data is usually reached within one second - therefore I can see at least one flaw in this proposed architecture!
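For reference, this is the knob the docs are describing - per-request strong consistency on a read (the table and key here are invented):

```python
import boto3

dynamodb = boto3.client("dynamodb")

item = dynamodb.get_item(
    TableName="events",
    Key={"stream_id": {"S": "cart-42"}, "version": {"N": "7"}},
    ConsistentRead=True,  # the default, False, is an eventually consistent read
)
```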
We're actually in the middle of doing something similar: using DynamoDB as an event store with DynamoDB Streams as the event bus. It's nice having the ability to do CQRS without having to train the team to manage any additional servers. So far it seems easy and stable. If you do go this route as well, I'd be interested in hearing about your experiences.