Events per stream limit

I thought about using one of the event store streams as a global log for all commands (successful and not).
To avoid creating thousands of streams, and because I will never need to read the whole stream at once, I thought it would be a good idea to create a single stream for all commands in the system.

I thought about creating a JSON payload - the serialized command - and attaching some metadata, like the commit ID and maybe the JWT token.
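Something like this minimal sketch is what I have in mind, assuming the EventStore.ClientAPI .NET client and Json.NET; the stream name, helper class, and metadata fields are just placeholders:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using EventStore.ClientAPI;
using Newtonsoft.Json;

public static class CommandLog
{
    // Appends one serialized command plus metadata to a "commands" stream.
    public static Task AppendAsync(IEventStoreConnection connection,
                                   object command, Guid commitId, string jwtToken)
    {
        var data = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(command));
        var metadata = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new
        {
            CommitId = commitId,           // ties the command to the commit it caused
            Token = jwtToken,              // who submitted it
            SubmittedAt = DateTime.UtcNow  // when it was submitted
        }));

        var eventData = new EventData(
            Guid.NewGuid(),
            command.GetType().Name, // use the command type as the event type
            true,                   // payload is JSON
            data,
            metadata);

        return connection.AppendToStreamAsync("commands", ExpectedVersion.Any, eventData);
    }
}
```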

But what confuses me is that the version in EventStore is Int32.

So there can be more commands than int.MaxValue (about 2.1 billion).

Just curious - why did the EventStore team not use Int64 for the version instead?

"To not create thousands of streams, and because I will never need to
read whole stream at once, - I thought it is good idea to create
single stream for all commands in the system"

If you will never read the whole stream at once, why put it all in one stream?

Thanks for your reply

I want to treat ES as the only authoritative business log storage.

That's why I don't want to store commands elsewhere (although I could store them in MongoDB, for example).

I want the read model to be rebuilt on demand; when this happens, all read DBs are deleted and rebuilt based on the read model projections in my C# code,

and the command log will also be rebuilt.

So I decided to store commands in ES

But then the next question was - should I store each command in its own stream, or should I store them all in one stream?

I think I will go with a stream per command type for now.
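Roughly like this - the naming scheme is only an illustration:

```csharp
// One stream per command type, e.g. "commands-PlaceOrder", "commands-CancelOrder".
public static class CommandStreams
{
    public static string ForType(object command) =>
        $"commands-{command.GetType().Name}";
}
```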

"I want read model to be rebuilt on demand, when this happens - all
read DBs are deleted and rebuilt based on read model projections in my
C# code
and commands log will also be rebuilt"

I don't understand this statement. Read DBs etc. are all based on
events? How will you "rebuild your command log"?

"I think I will go with stream per command type for now"

How many commands do you have? e.g. how many reqs/second are you
doing? It's also quite easy to split based on time, e.g. commands for 2015.

Sorry for not being clear about the command log.

I want to have the command log in ES, while building different projections from it for different kinds of analysis.

I mean the command log in ES is the log of commands being executed,

and in the read model I build different aggregated views based on that log.

so I have:

  1. command log in ES

  2. projections based on the above - in the read model

There could be a lot of use cases for that

for example, I want to see which users do which operations most of the time, etc.
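As a sketch of what one such projection might look like (all names here are hypothetical) - an aggregated view of which operations each user performs, folded from the logged command entries as they are replayed:

```csharp
using System.Collections.Generic;

public sealed class UserOperationCounts
{
    // user -> (command type -> count)
    private readonly Dictionary<string, Dictionary<string, int>> _counts = new();

    // Called once per logged command entry during replay.
    public void Apply(string userId, string commandType)
    {
        if (!_counts.TryGetValue(userId, out var perUser))
            _counts[userId] = perUser = new Dictionary<string, int>();

        perUser.TryGetValue(commandType, out var n);
        perUser[commandType] = n + 1;
    }

    public IReadOnlyDictionary<string, int> For(string userId) =>
        _counts.TryGetValue(userId, out var perUser)
            ? perUser
            : new Dictionary<string, int>();
}
```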

About the second question - how many commands we have:

The system is not live yet, but hitting 2 billion commands is not that hard within a few years.

Thanks - the idea about splitting streams based on time is great!

Will consider that!

I can also use a 'category' in the metadata of commands to be able to get all command streams on the read side.
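As a rough illustration of the time-based split (the names are just examples; the 'category' would go into each entry's metadata, as in the append sketch above):

```csharp
using System;

public static class CommandStreams
{
    // Month-partitioned: "commands-2015-01", "commands-2015-02", ... so no
    // single stream ever grows toward the Int32 version limit.
    public static string ForMonth(DateTime submittedUtc) =>
        $"commands-{submittedUtc:yyyy-MM}";
}
```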

If you’re willing to use projections, you can also have one stream per command. Streams are cheap :slight_smile:

So there is no difference whether I have billions of streams, each with one event, or a single stream with billions of events?

I'm very confused by your terminology. Why do you have commands to your read model? What happens if the read model rejects your commands?

Actually I have a very standard approach:

commands are synchronously submitted to handlers, which use a repository and aggregates to submit many events in one commit.

So one command can result in many events written to a single stream in one commit.
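In rough C# (all types here are illustrative, not our real ones) the flow looks like this - one command, one handler, one aggregate, many events in a single commit:

```csharp
using System;
using System.Threading.Tasks;

public sealed record ShipOrder(Guid OrderId, Guid CommitId, DateTime ShippedAt);

public class Order
{
    public void Ship(DateTime at) { /* raises one or more domain events */ }
}

public interface IRepository
{
    Task<TAggregate> GetById<TAggregate>(Guid id) where TAggregate : class;
    Task Save(object aggregate, Guid commitId); // writes all uncommitted events in one commit
}

public sealed class ShipOrderHandler
{
    private readonly IRepository _repository;
    public ShipOrderHandler(IRepository repository) => _repository = repository;

    public async Task Handle(ShipOrder command)
    {
        var order = await _repository.GetById<Order>(command.OrderId);
        order.Ship(command.ShippedAt);                   // may raise several events
        await _repository.Save(order, command.CommitId); // all written in one commit
    }
}
```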

This is how it works now - currently no commands are logged at all.

Recently I started to think about how to log user intention.

I thought I needed to store the command even before execution,

so I know that user X submitted command Y at time Z.

(I will log the token with claims along with the command.)

Later, if aggregate validation passes and the concurrency check lets the events into the event store, I will be able to track back from an event via its commit ID to the command, where I can see the token.

It is also possible to log the command result - an exception if it fails, or OK on success.
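Something like this wrapper is what I picture - just a sketch, with a placeholder log interface (not a real API):

```csharp
using System;
using System.Threading.Tasks;

public interface ICommandLog
{
    Task Submitted(object command, string token, Guid commitId);
    Task Succeeded(Guid commitId);
    Task Failed(Guid commitId, Exception error);
}

public sealed class LoggingCommandDispatcher
{
    private readonly Func<object, Task> _handle; // the real command handler pipeline
    private readonly ICommandLog _log;

    public LoggingCommandDispatcher(Func<object, Task> handle, ICommandLog log)
    {
        _handle = handle;
        _log = log;
    }

    public async Task Dispatch(object command, string token, Guid commitId)
    {
        // Log the intent first, so even rejected commands leave a trace.
        await _log.Submitted(command, token, commitId);
        try
        {
            await _handle(command);
            await _log.Succeeded(commitId);
        }
        catch (Exception ex)
        {
            await _log.Failed(commitId, ex);
            throw; // logging is best-effort; the failure still propagates
        }
    }
}
```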

Honestly, I am not sure I am doing the right thing, because there will be overhead in logging commands (and possibly the command execution result).

But I feel that I should log unsuccessful commands -

for example, I could build reports saying this was tried so many times and did not succeed.

I understand an error log could do this, but I thought I would separate out commands as a log of user-intended actions.

Also, I started thinking about this because one commit can have many events, and saving the token once in the "command" log is better (it is also an additional write, which is yet another commit).

If the logger for a command fails, that's not a problem, because it is just a log.

Am I doing the right thing? :slight_smile:

If I understand correctly, the commands are being stored for reference and reporting. Effectively, they are events - this command was submitted.

That's what I would understand in a sane environment as well.

"Recently I started to think how to log user intention
I thought I need to store command even before execution
So I know that user X submitted command Y on time Z.
(I will log token with claims along with command)
Later if aggregate validation passes OK, and if concurrency check will
pass the events to event store, I will be able to track from event by
it's commitID back to the command, where I can see the token
It is also possible to log command result - like exception in case it
does not pass, or OK in case of success."

Makes sense... It is very normal; it's just logging commands.

"so I have:
1) commands log in ES
2) projections based on above - in read model"

Does not.

Nor does

"I want read model to be rebuilt on demand, when this happens - all
read DBs are deleted and rebuilt based on read model projections in my
C# code
and commands log will also be rebuilt"

I am just trying to clarify whether something very odd is being done here.

Yes - that's what I am trying to do.

Basically - if we forget about "commands" and call them "log events" containing the command data, who submitted them, and the commit ID they triggered (if any) -

what I tried to do is:

I tried to use EventStore as storage for log entries.

Basically I am trying to declare some interface IActionLog,

and then in the infrastructure layer implement an EventStoreActionLog.

And then I realized ES does have a limit per stream, so I need many streams.
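Roughly this shape - a sketch assuming the EventStore.ClientAPI .NET client; the methods on IActionLog and the stream naming are just how I imagine it:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using EventStore.ClientAPI;
using Newtonsoft.Json;

public interface IActionLog
{
    Task CommandSubmitted(object command, string token, Guid commitId);
    Task CommandSucceeded(Guid commitId);
    Task CommandFailed(Guid commitId, Exception error);
}

// Infrastructure implementation: writes into month-partitioned streams so
// no single stream approaches the Int32 version limit.
public sealed class EventStoreActionLog : IActionLog
{
    private readonly IEventStoreConnection _connection;

    public EventStoreActionLog(IEventStoreConnection connection) =>
        _connection = connection;

    public Task CommandSubmitted(object command, string token, Guid commitId) =>
        Append("CommandSubmitted",
               new { Command = command, Token = token, CommitId = commitId });

    public Task CommandSucceeded(Guid commitId) =>
        Append("CommandSucceeded", new { CommitId = commitId });

    public Task CommandFailed(Guid commitId, Exception error) =>
        Append("CommandFailed", new { CommitId = commitId, Error = error.Message });

    private Task Append(string type, object entry)
    {
        // Many streams, split by time: "commands-2015-01", "commands-2015-02", ...
        var stream = $"commands-{DateTime.UtcNow:yyyy-MM}";
        var data = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(entry));
        return _connection.AppendToStreamAsync(stream, ExpectedVersion.Any,
            new EventData(Guid.NewGuid(), type, true, data, new byte[0]));
    }
}
```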

What I mean by this:
"so I have:

  1. command log in ES
  2. projections based on the above - in the read model"

I see the ES db in our software as the true log of events as they were happening.

ES will contain all events that happened after modifying aggregates, but also the events preceding aggregate modification - "tries" - which are the command data + token.

I understand a read model as something I can delete any time I want and rebuild from the true log.

So in 1) I mean the command data which I log in the "true" log,

and in 2) I mean different projections of the "true" log into aggregated reports.

Normally you don't replay commands (it turns out badly in many cases).

Yes, I understand that, thanks.