Impact of reducing --max-memtable-size

Hi,

I’m currently setting up an EventStore cluster on Google Cloud. As previously mentioned, I’m using persistent disks for ease of maintainability and reliability, and as our case does not yet demand low latency or high bandwidth.

However, this has a clear impact on recovery time when restarting a node (for whatever reason), as the index takes some time to rebuild.

I’ve seen --max-memtable-size which, as I understand it, should cause indexes to be flushed to disk more often and so result in a quicker recovery time on startup. But what is the downside of using a smaller value than the default for this flag?

Cheers,

Kristian

The downside is that you will write index files to disk and go through
compactions more often. What is your expected db size?
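For reference, a rough sketch of what tuning this down looks like when starting a node. This is not a drop-in command: the binary name and paths vary by version and platform, and the value shown is illustrative, not a recommendation (the default MaxMemTableSize is, I believe, on the order of 1,000,000 index entries; check the docs for your version):

```shell
# Sketch only: binary name varies by version/platform
# (e.g. EventStore.ClusterNode.exe on Windows).
# A smaller memtable flushes index entries to PTable files on disk more
# often, so there is less in-memory index to rebuild after a restart.
# 250000 is an illustrative value, not a recommendation.
./EventStore.ClusterNode --max-memtable-size=250000 --db /var/lib/eventstore
```

The trade-off mentioned above applies: more frequent flushes mean more on-disk index files and more frequent merge/compaction work during normal operation.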

In the near future, we’ll grow by no more than 1000 messages a day, so still very small; hence the focus on ease of maintenance and reliability over raw performance. As our business grows, this number will grow with it, and I suspect we’ll look at moving to something closer to bare metal (at least local disks over networked ones) within the next 12 months.

Then it's likely fine. Are you seeing issues with index rebuild load
times in the Google environment?

In our test environment, rebuilding the index with the default configuration took 10+ minutes for roughly 300k total events. The way Google persistent disks work, performance improves as disk size increases, so there’s some room for improvement there (we currently use 200GB disks, sized for performance rather than space).
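For what it's worth, those figures imply a fairly low rebuild rate (assuming the ~300k event count and ~10 minutes are both roughly right):

```python
# Back-of-envelope index rebuild rate from the figures above
# (assumptions: ~300,000 events rebuilt in ~10 minutes of wall time).
events = 300_000
seconds = 10 * 60
rate = events / seconds
print(f"~{rate:.0f} events/s rebuilt")  # prints: ~500 events/s rebuilt
```

Around 500 events/s indexed is what prompted the "sounds a bit extreme" reaction below, given how small the events presumably are.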

What are the sizes of events? This sounds a bit extreme.

The test environment has been used for various things, so the numbers are noisy. I will set up another cluster some time this week, populate it with events of known size, and get back to you with something more meaningful.