This question has come up in our team as part of our attempt to build a microservices architecture around EventStore.
I am currently working on a ServiceStack plugin that will act as a wrapper around the EventStore client. A colleague has suggested that we try, as much as possible, to hide the details of persisting and rehydrating an aggregate from the developers of each microservice, keeping all of this inside the plugin so that a developer does not need to know that the state of an aggregate has been built up by replaying events (and snapshots).
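To make the suggestion concrete, the surface the plugin would expose to a service developer might look something like the following sketch (the names here are hypothetical, not our actual interfaces): the developer loads and saves an aggregate by id, and the event streams and snapshots are entirely the plugin's concern.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical repository-style surface the plugin could expose.
// From the caller's point of view this looks like ordinary state persistence.
public interface IAggregateRepository
{
    // The plugin decides whether "get" means replaying a stream,
    // loading a snapshot, or both.
    Task<TAggregate> GetByIdAsync<TAggregate>(Guid id)
        where TAggregate : class, new();

    // The plugin works out which events to append from the mutated aggregate.
    Task SaveAsync<TAggregate>(TAggregate aggregate)
        where TAggregate : class;
}
```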
Now, I can agree with this aspiration with regard to stateful data storage: whether an aggregate is persisted to a relational database (possibly by means of an ORM) or to a document database is, to some extent, an implementation detail. State is persisted, and the mechanism for storing it - PostgreSQL, RavenDB, etc. - is arguably something that should be abstracted away as much as possible.
When it comes to event sourcing, however, I do feel that we’re not in Kansas anymore. Event sourcing isn’t just a storage mechanism but, rather, a paradigm that possibly reaches into the heart of our domain logic. An event is not just an implementation detail but actually a unit of business behaviour.
The examples of event-sourced aggregates that are on the internet (such as here, here, or here) make events the centre point of the aggregate's (internal) behaviour. For us to attempt to abstract away event sourcing, by hiding how state is built up through applying events, would seem to me to be an attempt to impose a state-based model on top of a much richer model.
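A condensed version of the pattern those examples follow looks roughly like this (the aggregate and event names are purely illustrative): public methods express behaviour by raising events, and state only ever changes inside an Apply handler, so the same code path rebuilds the aggregate when its stream is replayed.

```csharp
using System;
using System.Collections.Generic;

// Illustrative event-sourced aggregate: behaviour raises events,
// state transitions happen only in Apply.
public class ShoppingCart
{
    private readonly List<object> _uncommittedEvents = new List<object>();
    private readonly List<Guid> _items = new List<Guid>();

    // Behaviour: make a decision, then record it as an event.
    public void AddItem(Guid productId)
    {
        RaiseEvent(new ItemAddedToCart(productId));
    }

    // The only place state is mutated.
    private void Apply(object e)
    {
        var added = e as ItemAddedToCart;
        if (added != null)
        {
            _items.Add(added.ProductId);
        }
    }

    private void RaiseEvent(object e)
    {
        Apply(e);
        _uncommittedEvents.Add(e);
    }

    // Rehydration: the repository replays history through the same Apply method.
    public void LoadFromHistory(IEnumerable<object> history)
    {
        foreach (var e in history)
        {
            Apply(e);
        }
    }

    public IEnumerable<object> GetUncommittedEvents()
    {
        return _uncommittedEvents;
    }
}

public class ItemAddedToCart
{
    public ItemAddedToCart(Guid productId) { ProductId = productId; }
    public Guid ProductId { get; private set; }
}
```

The point, as I see it, is that the events in this style are not plumbing the repository can hide; they are the vocabulary the aggregate's behaviour is written in.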
Anyway, I’d be interested in what your thoughts are. Does it make sense to keep your domain logic (and application services layer) as free as possible from the event paradigm?