Overhead of storing an event

Hi

We’re trying to do some back-of-the-envelope calculations on how much storage space we need. How much overhead is there in storing an event in eventstore?

Thanks.

The index adds 20 bytes of overhead per event; beyond that it mostly depends on:

the length of the stream name, the size of the data, and the size of the metadata.

In total there are about 60 bytes of overhead we use internally for an event, including the internal timestamps etc. that we store. https://github.com/EventStore/EventStore/blob/dev/src/EventStore.Core/TransactionLog/LogRecords/PrepareLogRecord.cs shows what is stored.
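
A rough sketch of the arithmetic this implies (the ~60-byte record overhead, ~20-byte index entry and stream-name term come from the figures above; the payload sizes and stream name are made-up examples):

```python
# Back-of-the-envelope estimate of on-disk size per event, using the
# figures from this thread: ~60 bytes of internal record overhead plus
# ~20 bytes for the index entry. Payload sizes are made-up examples.

RECORD_OVERHEAD = 60   # internal prepare-record fields (timestamps, ids, ...)
INDEX_OVERHEAD = 20    # per-event index entry

def event_size(stream_name: str, data_bytes: int, metadata_bytes: int) -> int:
    """Approximate bytes on disk for a single event."""
    return (RECORD_OVERHEAD
            + INDEX_OVERHEAD
            + len(stream_name.encode("utf-8"))
            + data_bytes
            + metadata_bytes)

# Example: 1 KB of data, 100 bytes of metadata, a ~30-character stream name.
per_event = event_size("customer-0000000000000000000001", 1024, 100)
events_per_day = 100 * 60 * 60 * 24          # 100 events/second, all day
print(per_event, "bytes per event")
print(per_event * events_per_day / 2**30, "GiB per day")
```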

Greg

Thank you Greg.

The one place to be careful of is what encoding you use for your events/metadata; a wrong choice here can double your size.
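
To put a rough number on that, here is an illustrative comparison of the same event as JSON versus a packed binary encoding (Python's struct standing in for something like protobuf; the event fields are made up):

```python
import json
import struct

# Illustrative only: the same event as JSON versus a packed binary
# stand-in. JSON repeats the field names in every event; a binary
# encoding with an external schema stores only the values.
event = {"orderId": 123456789, "amount": 42.5, "currency": "EUR", "quantity": 3}

as_json = json.dumps(event).encode("utf-8")
as_binary = struct.pack("<qdI3s", event["orderId"], event["amount"],
                        event["quantity"], event["currency"].encode("ascii"))

print(len(as_json), "bytes as JSON")
print(len(as_binary), "bytes packed")
```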

At the moment we are using protobuf for fast streams and JSON for slow streams…

Should be fine.

What is fast vs slow in terms of messages? Also how long do you hold the messages?

Fast is at most a hundred events per second for one stream. Time to live is a few hours… When sharding becomes possible we would like to increase this, since the longer data stays around in Event Store, the more useful it is :slight_smile: It would greatly simplify our setup if Event Store were used not only for distribution of these streams but also for permanent storage… But that would probably also require some sort of log compaction to make it feasible, either by us or implemented in the Event Store…

Slow data is maybe one event per minute, with unlimited time to live.

Log compaction is already implemented in the Event Store - if you delete streams or expire events via either max age or max count and then run the scavenge process, you end up with a smaller log (maybe - there are caveats to that).
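
For reference, expiry is configured per stream through the stream's metadata; a minimal sketch of what those metadata documents look like (the values are examples, and how you actually write the metadata depends on the client library you use):

```python
import json

# Stream metadata documents controlling expiry. $maxAge is in seconds,
# $maxCount caps the number of retained events. These would be written
# as the stream's metadata via your client's set-stream-metadata call.
fast_stream_metadata = {
    "$maxAge": 4 * 60 * 60,   # keep events for ~4 hours
}
slow_stream_metadata = {
    "$maxCount": 1_000_000,   # or cap by event count instead
}

print(json.dumps(fast_stream_metadata))
print(json.dumps(slow_stream_metadata))
```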

By the way, we are not quite in production yet, but very close…

Sorry, I’m not expressing myself clearly then. What I mean is compression of stored events.

This is actually a pretty simple feature to add. I think we would be open to some discussions, especially if your company was willing to help fund it :slight_smile: e.g. compress during scavenge.

That said, you can basically do this now by running on a compressed file system. As you do not need high write performance, this would work out of the box. To support higher write performance, the first option would be needed.
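
A rough way to gauge what a compressed file system would save you is to compress a representative sample of your own serialized events (gzip here standing in for whatever compression the file system applies; the sample payload is made up):

```python
import gzip
import json

# Gauge compressibility of a representative batch of events. The actual
# ratio depends entirely on your own payloads and encoding.
sample_events = [
    json.dumps({"orderId": i, "status": "shipped", "amount": 42.5})
    for i in range(1000)
]
raw = "\n".join(sample_events).encode("utf-8")
compressed = gzip.compress(raw)

print(len(raw), "bytes raw")
print(len(compressed), "bytes compressed")
print(f"{len(raw) / len(compressed):.1f}x reduction on this sample")
```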

Cheers,

Greg

That's good to know. :slight_smile: