Stream guidance

So, I’m trying to figure out how to split up the streams for my system. Here’s what I had in mind so far:

Clients (trusted, internal)

append to command-{aggregate id} to send commands; split by id for tracing / regression testing, I guess?

subscribe to result-{client id} to handle notifications like CommandFailed, ViewUpdated

Command Handler (active/passive currently)

subscribes to $ce-command to handle commands

reads from aggregate-{id} to load aggregate events

appends to aggregate-{id} to save events resulting from command

appends to result-{client id} to notify client


subscribe to $ce-aggregate to handle events

append to result-{client id} to notify client

System Monitor

listens to $ce-command and $ce-result to monitor for problems

So, does this seem reasonable? Is there a better way to split these up?

P.s. client id will be on command and event metadata, as will location in case client needs to know when read model is updated.
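For what it's worth, the layout above boils down to a naming scheme plus metadata. Here's a minimal sketch (illustrative only; the helper names are mine, not the EventStore client API, and the real append/subscribe calls are omitted):

```python
# Sketch of the stream naming and metadata scheme described above.
# build_command, command_stream, result_stream are hypothetical helpers.

def command_stream(aggregate_id: str) -> str:
    # e.g. "command-customer/1"; the "command-" prefix is what feeds $ce-command
    return f"command-{aggregate_id}"

def result_stream(client_id: str) -> str:
    # pub/sub stream the client subscribes to for CommandFailed / ViewUpdated
    return f"result-{client_id}"

def build_command(aggregate_id: str, client_id: str, payload: dict) -> dict:
    # client id rides along in metadata so handlers know where to send results;
    # a "location" field would later tell the client when the read model caught up
    return {
        "stream": command_stream(aggregate_id),
        "data": payload,
        "metadata": {"clientId": client_id},
    }

cmd = build_command("customer/1", "client-42", {"type": "RenameCustomer"})
print(cmd["stream"])                                # command-customer/1
print(result_stream(cmd["metadata"]["clientId"]))   # result-client-42
```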


Only one projection can write to a given stream -- will that limitation cause problems for you? (Depends on how you implement projections and idempotency, I guess.)

What do you mean by the first statement? I’ll generally be writing to specific streams and reading from specific streams and category projections.

It looks like you have multiple projections writing to the result-{client id} stream. Have you tried it?

You might need to have each projection write to a different result stream (commandResult-{client id}, someProcessResult-{client id}, etc.) and have a single projection accumulate the results into the single result-{client id} stream.
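If each writer does end up needing its own stream, the accumulation step could look roughly like this. This is an in-memory simulation only -- a real EventStore projection would be written in its JavaScript projection language -- and all the names are illustrative:

```python
from collections import defaultdict

# In-memory stand-in for streams: name -> list of events.
streams = defaultdict(list)

def append(stream: str, event: dict) -> None:
    streams[stream].append(event)

def accumulate_results(client_id: str, source_prefixes: list[str]) -> None:
    # One "projection" merges the per-writer result streams into the single
    # result-{client id} stream the client actually subscribes to.
    target = f"result-{client_id}"
    merged = []
    for prefix in source_prefixes:
        merged.extend(streams[f"{prefix}-{client_id}"])
    # order by a sequence number stamped at append time (illustrative)
    merged.sort(key=lambda e: e["seq"])
    streams[target] = merged

append("commandResult-client-42", {"seq": 1, "type": "CommandSucceeded"})
append("someProcessResult-client-42", {"seq": 2, "type": "ViewUpdated"})
accumulate_results("client-42", ["commandResult", "someProcessResult"])
print([e["type"] for e in streams["result-client-42"]])
# ['CommandSucceeded', 'ViewUpdated']
```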

I would have multiple services writing to the same result stream. Why would they need to write to different streams? These are pub/sub streams mainly for the client, so concurrency checking is not needed, and they will be limited by a max age. I threw in a system monitoring projection from the result streams as an afterthought, but I suppose a projection from that stream would have a limited replay window.
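To make the concurrency point concrete: multiple writers are only a problem when appends carry an expected version. A toy sketch of the difference (not the real client API; the ANY sentinel stands in for something like ExpectedVersion.Any):

```python
class WrongExpectedVersion(Exception):
    pass

ANY = -1  # sentinel meaning "skip the concurrency check"

def append_with_check(stream: list, expected_version: int, event: dict) -> None:
    # With a concrete expected version, a concurrent writer causes a conflict;
    # with ANY, appends from multiple services interleave freely.
    if expected_version != ANY and expected_version != len(stream) - 1:
        raise WrongExpectedVersion(f"expected {expected_version}, at {len(stream) - 1}")
    stream.append(event)

results = []
append_with_check(results, ANY, {"type": "CommandSucceeded"})
append_with_check(results, ANY, {"type": "ViewUpdated"})  # second writer, no conflict

try:
    append_with_check(results, 0, {"type": "Late"})  # stale expected version
except WrongExpectedVersion:
    print("conflict detected")
```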

Hi Kasey,

Projections enforce that only one projection can write to a given stream. Without this, retries, idempotency, and resetting projections would be very difficult. What are you trying to achieve, exactly? Maybe we can give you some better guidance.



Perhaps my naming system was confusing, but the only Event Store projections I will be using are category projections ($ce-whatever). The rest of the names in my original post are meant to be specific streams. E.g. command-{aggregate id} would be something like “command-customer/1”.

Sorry, I forgot to clarify that I would only be reading from the Event Store projections and writing to specific streams. I won't be trying to write directly to projections.

"…subscribes to $ce-command to handle commands

…subscribe to $ce-aggregate to handle events…"

OK in that case you’re good.


My thought was: instead of sending the command to a service and having the service log the command for regression testing, why not have the trusted client write the command directly to the event store, have the service listen on that stream, and send success/failure back through another stream as pub/sub? Since it's purely async, I used a state machine and a TaskCompletionSource to represent the command-send workflow on the client side.

What do you do if you get a timeout?

When a command is sent, it starts a delayed timeout transition of the state machine. When the command completes normally, it cancels the timeout.
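Roughly, the workflow looks like this -- sketched here with Python's asyncio as an analog for TaskCompletionSource, with illustrative state names:

```python
import asyncio

async def send_command(result_future: asyncio.Future, timeout: float) -> str:
    # Sending starts a delayed timeout; a normal completion cancels it.
    try:
        await asyncio.wait_for(asyncio.shield(result_future), timeout)
        return "completed"
    except asyncio.TimeoutError:
        return "faulted(timeout)"  # infrastructure problem, not business logic
    except Exception:
        return "faulted(error)"

async def demo() -> tuple[str, str]:
    loop = asyncio.get_running_loop()

    # Case 1: the result-{client id} subscription delivers a result in time.
    ok = loop.create_future()
    loop.call_later(0.01, ok.set_result, {"type": "CommandSucceeded"})
    completed = await send_command(ok, timeout=1.0)

    # Case 2: no result ever arrives; the delayed timeout transition fires.
    silent = loop.create_future()
    timed_out = await send_command(silent, timeout=0.01)
    return completed, timed_out

print(asyncio.run(demo()))  # ('completed', 'faulted(timeout)')
```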

And if a client gets a timeout/connection closed etc?

Exceptions cause the state machine to go into a faulted state. The command result is faulted so the client knows it was an infrastructure problem, not a business logic problem.

Timeouts also transition to a faulted state.

I mean a timeout etc. on posting :)

Timeout is for the whole command post/response, but with different user messages depending on what state it timed out from.

So if I timeout posting it’s considered a failure? It may still have worked.