We are running multiple instances of the event store (as a Windows service) on a single server (VM) and are observing that each node consumes a large amount of RAM even when idle. Is there a way to configure the amount of RAM each node can claim?
+1. We have a 6 GB db and a single node in production; within a few minutes of starting the node, it takes up 6 GB of RAM and doesn’t release it. We’re running V18.104.22.168
What’s your --cached-chunks set to?
We have a ~8GB db and our memory used is usually under 300MB and I’ve never seen it over 400MB.
We had it set to 1. I see the default is 0, so I tried that, but it’s still eating my RAM.
This is 1.0 from reading above (upgrading to 1.1 or 2.0 when the binaries come out might be a good idea). Also, how are you measuring the memory usage?
This is quite normal right now. The HA nodes, which have > 130 GB databases, are running in roughly 500 MB of memory; they are running with -c 0 (don’t cache chunks).
I should add that those boxes show about 16 GB of memory in use (file caches in Windows).
I was pretty sure it was the cached-chunks setting, but there’s no documentation available that defines which settings to use in which situation.
We have an 8 GB VM and every node (with default settings regarding chunks) takes up about 1.3 to 1.5 GB. DB size is currently in the low GB range per node.
We also run MongoDB on the same VM, and in that configuration the two are competing for memory…
I should add that we are on version 1.1
Are you sure it’s the Event Store process that’s using the memory and not (say) Windows file caching?
Setting --cached-chunks to 0 will stop us caching anything.
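For anyone following along, here is a sketch of how that flag could be passed on startup. The executable name and db path below are assumptions for illustration, not taken from this thread; adjust them to match your install:

```shell
# Sketch: start a single-node Event Store with chunk caching disabled.
# Executable name and --db path are assumptions; substitute your own.
EventStore.SingleNode.exe --db C:\eventstore-data --cached-chunks 0

# Equivalent short form, as used for the HA nodes mentioned above:
EventStore.SingleNode.exe --db C:\eventstore-data -c 0
```

If you run it as a Windows service, the same arguments would go in the service’s startup parameters.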
I’ll definitely try that. Thanks
The Sysinternals tools can help with monitoring.
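To separate the process’s own working set from memory the OS file cache is using, something like the following PowerShell sketch could help (the process name is an assumption; use whatever name Task Manager shows for your node):

```shell
# Sketch (PowerShell): report the Event Store process working set in MB,
# as opposed to RAM consumed by the Windows file cache.
# "EventStore.SingleNode" is an assumed process name.
Get-Process EventStore.SingleNode |
    Select-Object Name, @{n='WorkingSetMB'; e={[math]::Round($_.WorkingSet64 / 1MB)}}
```

Sysinternals RAMMap can then break total RAM down into in-use versus standby (file cache) memory, which is often where the “missing” gigabytes turn out to be.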