What are memory requirements for running EventStore?

I am running Event Store (OSS, single node, 3.0.5) in a development environment under Windows Server 2008 R2 on a PC with 1 GiB of RAM and no pagefile. Once every few days, Event Store fails with a System.OutOfMemoryException (I have the logs from the last occurrence). The load is small, only two users, and the data size is mostly small too.

What are the actual memory requirements to run Event Store in a stable way? Are there any settings I could use to improve the situation?

– Sergei

I’d recommend a minimum of 8 GB, especially on Windows, where the file cache handling isn’t optimal for Event Store’s use cases.

What if we don't expect high performance and only want it to be stable for now? Are there any switches or tuning options?

Not for 1 GB; you can look at the caching options, though. Alternatively, run on Linux; the requirements there are a lot lower.
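
For example, on a 3.x single node you can cap the chunk cache when starting the node. This is a sketch from memory of the 3.x option set, so treat the option names (--cached-chunks, --chunks-cache-size) as assumptions and confirm against EventStore.ClusterNode.exe --help for your build:

    EventStore.ClusterNode.exe --db ./db --cached-chunks=1 --chunks-cache-size=268435456

That limits how many chunk files are held in unmanaged memory (the size is in bytes as I recall, 256 MB here) rather than letting the cache grow to its default.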

And what are the requirements for a Linux installation?

Event Store itself uses about 200 MB of memory at the lowest level.

I set up a loop today that sends 10,000 commands. Each command leads to between 10 and 65,000 events.
While the three-node cluster was up, I debugged the test with this loop a couple of times: I would run until it hit a breakpoint, then stop the test, leaving the ES cluster up.
I did this 5-6 times. Eventually, every node was consuming almost 2.5 GB of memory.
Now, this is of course extreme. But what I’m wondering is: will they just keep consuming more and more memory? Or will they “clean up” after themselves after a while if not pounded so heavily?
How can I remedy this?

How are you measuring memory usage, and with what OS settings etc.? Windows, for example, shows you file cache used by the OS as if it were Event Store's memory. But this memory can be reclaimed by the OS.
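
You can see this at the machine level with a quick check. This is just a sketch using the psutil Python package (which you'd need to install); the point is the gap between the two bottom numbers:

import psutil

vm = psutil.virtual_memory()
# 'available' includes memory the OS can reclaim on demand (e.g. the
# standby/file cache), so the gap between 'available' and 'free' is
# roughly the reclaimable cache that inflates the "used" figure.
print(f"total:     {vm.total / 2**20:.0f} MiB")
print(f"free:      {vm.free / 2**20:.0f} MiB")
print(f"available: {vm.available / 2**20:.0f} MiB")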

I’m just looking at the processes in Task Manager. Yes, Windows 7. As for the OS settings, I wouldn’t actually know which ones you’re referring to, which, I guess, means they are the default ones? I haven’t changed anything. This is running in a VMware VM.

My thinking goes like this: what if a DB cluster is left unattended for a while (running on a very small server) and this memory usage keeps increasing? What kind of things do you do to prevent the server from crashing due to low memory?

You need to separate out the reporting here: most of that is the Windows file cache, which is managed by the OS, so the number is misleading for the most part. Process Explorer, I believe, breaks things out better.

OK, so as long as I don’t overwhelm the lazy writer, this shouldn’t be a problem?

What lazy writer are you referring to?

The Windows file cache, accumulating writes. I guess it couldn't flush at the same pace as writes were accumulating.

It's not accumulating writes.

What Greg and the others are trying to say is:

Windows reports memory misleadingly, and you were trying to measure how much memory Event Store is using.

James is saying that you need to look at Process Explorer; it reports things better than Task Manager.

Sometimes Windows reports app memory as appmem = (appmem + OS caches), which tells you nothing useful when you're testing memory usage. OS caches can be reclaimed at any time (they're disk read/write cache) and are considered almost-free memory.

Process Explorer is a better choice for seeing memory usage, according to James.
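
If you'd rather script it, here's a sketch of the same idea using the psutil Python package. The process name match is an assumption (on Windows the node usually runs as EventStore.ClusterNode.exe), so adjust it for your setup:

import psutil

# Find Event Store processes by name (assumed pattern; adjust as needed).
for p in psutil.process_iter(['name']):
    if 'eventstore' not in (p.info['name'] or '').lower():
        continue
    try:
        mem = p.memory_full_info()  # uss may require admin rights on Windows
    except psutil.AccessDenied:
        continue
    # rss is the working set that Task Manager shows, which includes
    # shareable pages such as memory-mapped file cache; uss is memory
    # only this process owns, i.e. what the OS could not simply reclaim.
    print(f"{p.pid}: working set {mem.rss / 2**20:.0f} MiB, "
          f"unique {mem.uss / 2**20:.0f} MiB")

The unique (private) number is the one that matters when you're asking whether the nodes are really leaking.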

Also, the Event Store guys keep telling people that running on Linux is much less painful.

Idar, great, thanks!

Hm, OK Greg. I had the understanding that the Windows file cache was write requests accumulating while waiting to be flushed. Without being flushed, memory would be depleted. So if I write to disk at a higher pace than flushing occurs, then memory will be depleted and programs will shut down. That was what I thought, at least.

"
Hm OK Greg. I had the understanding that the windows file cache was
write requests being accumulated in wait of flushing. Without being
flushed, the memory would be depleted. So if I write to disk at a
higher pace than flushing occurs, then memory will get depleted and
programs will shut down. That was what I thought at least."

This is incorrect. The cache isn't a backlog of unflushed writes that can exhaust memory: cached pages are reclaimable, and Windows throttles writers if dirty data builds up faster than it can be flushed.