We are using EventStore for a project in its early days. We have around 20,000 events of about 500 bytes each, which should be roughly 10 MB of data.
Since we started EventStore in February, the chunk files have grown to a total of 25 GB (a factor of 2500), which we only discovered by chance! What on earth is the point of filling the disk with all that useless information? Is it just to drive people into needing consultancy assistance when the system crashes? If you search, you discover that you can decrease the stats collection frequency and retention, but you still have to manually initiate a scavenge to clean up. Why? It seems such a waste of bandwidth, hardware and energy - and ultimately CO2 emissions.
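For reference, this is roughly what we ended up doing - a sketch only, assuming the HTTP admin API on the default port 2113 and the default admin credentials; the exact option name and config-file equivalent may differ between EventStore versions, so check the docs for yours:

```shell
# Write stats events less often (default is every 30 s; here every 10 min).
# Assumed command-line flag; there should be a matching config-file setting.
EventStore.ClusterNode.exe --stats-period-sec=600

# Manually trigger a scavenge via the HTTP admin API
# (requires admin credentials; the default account is admin/changeit):
curl -i -X POST -u admin:changeit http://127.0.0.1:2113/admin/scavenge
```

Setting a max age on the stats streams also lets the scavenge actually reclaim the space, since scavenging only removes events that are eligible for deletion.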
The only information in the docs is:
"Stats and debug information
Event Store has a lot of debug and statistics information available about a cluster you can find with the following request:"
There is not a word saying it will fill your servers with useless information if you don't proactively clean up all the time (another example of the lack of good documentation for EventStore).
/hoegge