We now have a small GES instance running on an Azure VM, and the little thing has grown to 40 GB in a month.
We see a few optimisation leads to reduce the size of the database:
put the GES db in a compressed folder (saving around half the space on Windows)
reduce the $maxAge metadata of our temporary streams (obviously)
zip big events (say, bigger than 3 KB) and copy some properties into headers so projections can still run
(but will that save anything if the DB is already compressed at the file-system level??)
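To make the last two leads concrete, here is a rough Python sketch of what we have in mind. The `$maxAge` stream-metadata key is Event Store's; everything else (function names, the event shape, the 3 KB threshold, the property names copied for projections) is our own hypothetical illustration, not a real client API:

```python
import gzip
import json

MAX_PLAIN_SIZE = 3 * 1024  # events above ~3 KB get compressed

def max_age_metadata(seconds):
    """Build the stream-metadata body that lets Event Store
    expire events older than `seconds` on a temporary stream."""
    return json.dumps({"$maxAge": seconds})

def prepare_event(event):
    """Gzip a big event body, first copying a few selected properties
    into the event metadata so projections can keep reading them
    without unzipping the payload.

    `event` is a dict with a 'data' dict -- a hypothetical shape.
    Returns (body_bytes, metadata_dict)."""
    raw = json.dumps(event["data"]).encode("utf-8")
    metadata = {"compressed": False}
    if len(raw) > MAX_PLAIN_SIZE:
        # copy the properties our projections rely on (hypothetical names)
        for key in ("aggregateId", "eventType"):
            if key in event["data"]:
                metadata[key] = event["data"][key]
        metadata["compressed"] = True
        raw = gzip.compress(raw)
    return raw, metadata
```

A temporary stream would then get `max_age_metadata(24 * 3600)` posted to its metadata, and writers would call `prepare_event` before appending.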
With all these solutions we should reduce this 40 GB to … 5. But here is the problem: that is only 1/1000 of our total volume, so how will we handle 5 TB or more of data?
And since business should go very nicely with the new version we are planning to deliver, how will we deal with 10 times that, or 50 times?
We then plan to implement a ‘flatten’ functional event that shrinks an aggregate to its final state (and uses $tb to truncate the past events).
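A minimal sketch of what we mean by ‘flatten’, assuming a trivial merge-style reducer (the event shapes and the fold are hypothetical; `$tb`, i.e. truncate-before, is Event Store's stream-metadata key): replay the events into the final state, append that as one snapshot event, then set `$tb` so the scavenger can reclaim everything before it.

```python
import json
from functools import reduce

def apply_event(state, event):
    """Hypothetical reducer: each event just merges its fields into the state."""
    return {**state, **event}

def flatten(events):
    """Fold past events into one 'flattened' snapshot event, and build the
    stream metadata that marks the old events as truncatable.
    Returns (snapshot_event, metadata_json)."""
    final_state = reduce(apply_event, events, {})
    snapshot = {"eventType": "Flattened", "data": final_state}
    # $tb: the event number from which the stream is kept; everything
    # before it becomes eligible for scavenging.
    metadata = json.dumps({"$tb": len(events)})
    return snapshot, metadata
```

The snapshot would be appended to the stream and the metadata posted right after, so a later scavenge can actually reclaim the disk space.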
Does all that sound like good practice to you? Or would you have some brighter ideas?