(Possibly bad) idea: Using projections to denormalize core/critical read-models

We are currently moving away from a strictly MSSQL-based event store and read-model persistence (thank god), towards Event Store and primarily Elasticsearch for read-model persistence.

One of the arguments that won over the rest of the team was that rebuilding the read-model was extremely slow and brittle with the MSSQL solution, and the mapping overhead (from JSON to C# to Entity Framework to SQL) was ridiculous. And then I got to thinking: what if we could have a category of streams for holding the read-models, populated by projections inside the event store, so we only have to read backwards and ignore the products already committed to the read-model when rebuilding…

Am I barking up the wrong tree? What do other people do? Any good examples of denormalizer implementations, along with some performance metrics (time to rebuild / number of relevant events)?


Given we have a lot of products to show on a website,

And the products change state often during their lifecycle

When I rebuild my read-model from scratch (e.g. due to deployed software changes that do not affect the interpretation of the events regarding the products)

Then I have to wait while all denormalized versions (redundant or otherwise) of all products are persisted synchronously


Given the core information about a product is “snapshotted” and maintained by a projection

And that projection emits a “ProductReadModelProjectionChanged” event per changed product to the stream “ProductReadModelProjections”

Then I should be able to rebuild my read-model with the fewest possible writes, since I can read “ProductReadModelProjections” backwards and discard “ProductReadModelProjectionChanged” events for products that I have already written to the read-model persistence.

… And this is a good thing, because it is not a horrible idea, and no one has a better solution to the problem.

That last line is the one I am uncertain about.

Any thoughts?
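The backwards-read dedup idea above can be sketched in a few lines. This is a hypothetical illustration, not the Event Store client API: `events_newest_first` stands in for reading the “ProductReadModelProjections” stream backwards, and the dict write stands in for a write to Elasticsearch.

```python
def rebuild_read_model(events_newest_first):
    """Rebuild with the fewest possible writes: reading the projection
    stream backwards, only the first (i.e. latest) event seen for each
    product is persisted; older versions are discarded."""
    seen = set()
    read_model = {}
    for event in events_newest_first:
        pid = event["product_id"]
        if pid in seen:
            continue  # product already written; discard the older version
        seen.add(pid)
        read_model[pid] = event["state"]  # stand-in for an Elasticsearch write
    return read_model

# Example: two versions of product "a"; only the newest (read first) wins.
events = [
    {"product_id": "a", "state": {"price": 20}},  # newest
    {"product_id": "b", "state": {"price": 5}},
    {"product_id": "a", "state": {"price": 10}},  # older, discarded
]
print(rebuild_read_model(events))
# {'a': {'price': 20}, 'b': {'price': 5}}
```

Note that this only saves writes; every event in the stream is still read once, so the win depends on how often products change relative to how many there are.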

Never mind; I ended up just building the objects in memory until the subscriptions are caught up, and then persisting to the read-model.
The catch-up subscription model is awesome…
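The catch-up approach described above can be sketched as follows. All names here are illustrative, not the Event Store client API: the assumption is a subscription that replays history, signals when it reaches the live position, and then keeps delivering new events.

```python
class CatchUpRebuilder:
    """Apply events to in-memory objects while history replays; persist
    once on catch-up, then write through for live events."""

    def __init__(self, persist):
        self.persist = persist   # e.g. a bulk write to Elasticsearch
        self.products = {}       # read-model built in memory
        self.live = False        # flips once history has been replayed

    def on_event(self, event):
        # Applying in memory is cheap compared to one write per event.
        self.products[event["product_id"]] = event["state"]
        if self.live:
            # After catch-up, write through so the read-model stays current.
            self.persist({event["product_id"]: event["state"]})

    def on_caught_up(self):
        # One bulk persist instead of one write per historical event.
        self.live = True
        self.persist(self.products)

# Usage: replay two historical versions of a product, then go live.
writes = []
r = CatchUpRebuilder(persist=writes.append)
r.on_event({"product_id": "a", "state": {"price": 10}})
r.on_event({"product_id": "a", "state": {"price": 20}})
r.on_caught_up()
print(writes)  # [{'a': {'price': 20}}] — one bulk write, latest state only
```

The trade-off is memory: the whole read-model must fit in memory during the rebuild, which is fine for a product catalogue but may not be for larger models.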

/over and out

Sorry we missed your original email; we are normally pretty good at
hitting all of them. It must have fallen through the cracks.

There is another model being released in 3.2.0 that's worth looking at
as well, which is competing consumers (e.g. you can have many consumers
compete on a stream to get things like high availability for some
types of consumers).

The pros and cons of the varying models are discussed here:

Thanks, looking forward to it!
Also, I think I heard you say something about a Lucene index for projection states? I would much prefer that over depending on enterprisey “ISagaFinder” and “ISagaPersistence” implementations in general, especially with respect to testing.

Did I hear right, or am I imagining things (again)?

This is work that's shelved for now, but it may be brought back (at
which point we would become the most configurable docdb in the world).