EventStore storage compression/zipping to save disk space

Is there a plan for a feature to compress events to save disk space?
We currently have over 100 GB of events on disk in production, and we expect 1 TB or more within the next year.


We use ZFS and get compression that way. What kind of filesystem do you use?

he he - we use Windows

On Thursday, 7 June 2018 at 20:45:39 UTC+2, ahjohannessen wrote:

We used NTFS compression with some success and got about 50% compression with no noticeable performance impact.
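For reference, NTFS compression can be applied retrospectively to existing files with the built-in `compact.exe` tool. A minimal sketch, assuming a hypothetical data directory `D:\EventStoreData` and the `chunk-*` naming pattern EventStore uses for chunk files (adjust both to your setup; the active chunk is still being written, so ideally compress only completed chunks):

```shell
:: Compress matching files recursively, continuing past any errors (/i).
compact /c /s:D:\EventStoreData /i chunk-*

:: Report the resulting compression ratio for the directory.
compact /s:D:\EventStoreData
```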

One MAJOR warning though: because we were only compressing completed chunks retrospectively (so there was no write impact), the NTFS partition ended up hugely fragmented (99.9% according to the tools). We didn't really notice, as we run on SSDs. Eventually EventStore performed an index merge and tried to allocate a large file, which failed even though there was plenty of space available (10x the file size). This turned out to be because the number of fragments required to create the file exceeded a hard limit in NTFS.

So if you do decide to try it:

  • Probably only on SSDs. (Performance would likely become rubbish on spinning disks.)

  • Monitor the Split IO/Sec performance counter for the partition.

  • Try to defragment regularly. (It would have taken months to complete for us, so we abandoned it.)

  • Format the partition with the /L flag to raise the file-system fragment limit. Ideally you would never come close to hitting it, but better to have the headroom than to crash.
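The last two points above can be sketched as commands. This assumes drive letter `D:` as a hypothetical example; `/L` formats the volume with large file record segments, which raises the per-file fragment limit mentioned above, and the `Split IO/sec` counter shows how often I/O requests are being split across fragments:

```shell
:: Format with large file record segments (destructive -- wipes the volume).
format D: /FS:NTFS /L /Q

:: Sample the Split IO/sec counter for the volume via PowerShell,
:: e.g. twelve samples at five-second intervals.
powershell -Command "Get-Counter '\LogicalDisk(D:)\Split IO/sec' -SampleInterval 5 -MaxSamples 12"
```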

Why? Disk space is so cheap, and compression adds latency. The only thing I can think of is that some cloud providers only offer 100 GB / 200 GB local SSDs.

One thing I’ve seen is that disk compression can improve read performance if disk I/O is your bottleneck.