Groups of events

The event store supports transactional writes of events; is the set of
events in a transaction available to me when reading the events back
out? That is, if one transaction creates events A, B, and C, and another
creates D, a read model may want to group those events into two
transactions as well. I assume the built-in projections follow this rule,
that is, no projection visible to outside consumers would show the
effect of events A and B alone. But if we are publishing events to other
sorts of consumers, e.g. database read models, how do we know where to
put the transactional boundaries?

Transactions describe how things are written, not read. This is the same as in SQL. The transaction assures that you will never, for instance, read A and B from a stream but not find C there. They do not affect projections or anything else reading the stream, aside from that all-or-none assurance.

If you really want something like this on the reading side, use a correlation ID in the headers of each event (the repository example actually does this, using a commit ID header). However, there is no direct built-in support, and it would be quite unusual to need it.
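To make the correlation-ID idea concrete, here is a minimal Python sketch. It is not the real client API; it just assumes events are dicts with a metadata map, and stamps every event written in one transaction with the same commit ID:

```python
import uuid

def stamp_commit_id(events):
    """Attach a shared commit ID to the metadata of every event
    in one transactional write (hypothetical event shape: dicts
    with 'data' and optional 'metadata' keys)."""
    commit_id = str(uuid.uuid4())
    for event in events:
        event.setdefault("metadata", {})["commitId"] = commit_id
    return events

# Every event from the same transaction carries the same commitId:
batch = stamp_commit_id([{"data": "A"}, {"data": "B"}, {"data": "C"}])
```

A consumer can then group events back into their original transactions by grouping on `commitId` in the metadata.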

What’s your use case? Perhaps there’s a better way of dealing with it.



Yes, the read transactions don’t have to match the write transactions. But other than ensuring a read model projector has read to the “end” of the stream (because events come in batches), I don’t know how to tell whether I have a consistent set of events. If we can’t process all the events at once, we may want to commit a transaction to the read model after reading only a fraction of the events still to process. Some of our read models are slower than our event sources; we’re saved only because the sources are bursty and we can eventually catch up. If we can’t read all the events in a stream, how can we read a fraction of them and still be sure we’re not cutting a transaction into pieces?


You can read all the events in a stream - check the IsEndOfStream on the slice.
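For illustration, the paged read loop looks roughly like this Python sketch. `read_slice` is a stand-in for the client’s forward stream read (the slice it returns carries the end-of-stream flag and the next position); the shape of the slice here is an assumption, not the real API:

```python
def read_to_end(read_slice, batch_size=500):
    """Drain a stream in pages until the slice reports end-of-stream.
    `read_slice(start, count)` is a hypothetical stand-in for the
    client's forward read; each slice says where the next page starts
    and whether we have caught up to the end of the stream."""
    events, start = [], 0
    while True:
        slice_ = read_slice(start, batch_size)
        events.extend(slice_["events"])
        if slice_["is_end_of_stream"]:
            return events
        start = slice_["next_event_number"]

def make_fake_reader(all_events):
    """An in-memory stream reader for demonstration."""
    def read_slice(start, count):
        chunk = all_events[start:start + count]
        return {
            "events": chunk,
            "next_event_number": start + len(chunk),
            "is_end_of_stream": start + len(chunk) >= len(all_events),
        }
    return read_slice

reader = make_fake_reader(list(range(7)))
drained = read_to_end(reader, batch_size=3)  # pages of 3, 3, 1
```

The point is simply that “read the whole stream” is a loop over slices, and the slice itself tells you when you have reached the end.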



Yes, but what if we want to read only a portion of them? It sounds like the only option is to always read the whole stream to the end, and dispatch what you’ve read to a transactional subscriber.

It sounds like you want to be able to know what was committed together? Why not just store a commit ID in the headers of each event (maybe even with a total number written)?

What exactly do you mean by a consistent set of events?

Do you mean that your read model doesn’t make sense until you have processed a whole batch of events that were written together?

If that is the case, you could tag some metadata onto the last event in the batch (or write an extra marker event to the stream). This would let you commit your read model only once you have seen the whole batch.
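Combining the two suggestions (a batch ID plus a total count in each event’s metadata), a projector can buffer events and release a batch only when it is complete. A minimal sketch, with a hypothetical event shape:

```python
from collections import defaultdict

def group_complete_batches(events):
    """Buffer events by batchId and yield a batch only once all
    batchSize events have arrived, so the read model never commits
    a partial transaction. Events are assumed to be dicts whose
    metadata carries 'batchId' and 'batchSize'."""
    pending = defaultdict(list)
    for event in events:
        meta = event["metadata"]
        pending[meta["batchId"]].append(event)
        if len(pending[meta["batchId"]]) == meta["batchSize"]:
            yield pending.pop(meta["batchId"])
```

Because batches are released as soon as their last event arrives, the read model can start working before the projector has drained the whole stream, without ever seeing half a transaction.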

[Edit: bah, beaten by James!]

Your questions have helped me form my problem more carefully. I’d like to read events from the event stream, without reading all the way to the end, but be sure I don’t process a set of events that are inconsistent. For instance, if one transaction commits both event “CreditBalance(5)” and “DebitBalance(5)”, I’d like to avoid reading only the “CreditBalance(5)” event without having to confirm I’ve drained the stream entirely.

The short answer may just be that this isn’t possible: either read all the way to the end of the stream, or attach headers as has been recommended. But I’m concerned about what may be a perfectly common case of wanting to start some work in the read model before I’ve reached the end of the stream. One important case for us is actually a rendering system. Even (and especially) if we are generating lots of events, we want to render a frame that is transactionally consistent with its inputs, but we want to start doing so as soon as possible.



BatchId : Guid

Then just put the same batch ID on them.

Sounds good. We generate a “CommitSequence” internally anyway in our current naive implementation. It would be easy to insert into the event stream as metadata.