Memory Management in Docker

Hi, I am running EventStoreDB in Docker on Ubuntu 22.04 with 128 GB of RAM. EventStoreDB is consuming almost 50% of RAM and swap. Can someone please guide me on how to optimize this? The compose file is below:

docker-compose.yaml

version: "3.5"

services:
  setup:
    image: eventstore/es-gencert-cli:latest
    entrypoint: bash
    user: "1000:1000"
    command: >
      -c "mkdir -p ./certs && cd /certs
      && es-gencert-cli create-ca
      && es-gencert-cli create-node -out ./node1 -ip-addresses 127.0.0.1,172.30.240.11 -dns-names localhost
      && es-gencert-cli create-node -out ./node2 -ip-addresses 127.0.0.1,172.30.240.12 -dns-names localhost
      && es-gencert-cli create-node -out ./node3 -ip-addresses 127.0.0.1,172.30.240.13 -dns-names localhost
      && find . -type f -print0 | xargs -0 chmod 666"
    container_name: setup
    volumes:
      - ./certs:/certs

  node1.eventstore: &template
    image: eventstore/eventstore:latest
    container_name: node1.eventstore
    env_file:
      - vars.env
    environment:
      - EVENTSTORE_INT_IP=172.30.240.11
      - EVENTSTORE_ADVERTISE_HTTP_PORT_TO_CLIENT_AS=2111
      - EVENTSTORE_ADVERTISE_TCP_PORT_TO_CLIENT_AS=1111
      - EVENTSTORE_GOSSIP_SEED=172.30.240.12:2113,172.30.240.13:2113
      - EVENTSTORE_TRUSTED_ROOT_CERTIFICATES_PATH=/certs/ca
      - EVENTSTORE_CERTIFICATE_FILE=/certs/node1/node.crt
      - EVENTSTORE_CERTIFICATE_PRIVATE_KEY_FILE=/certs/node1/node.key
      - EVENTSTORE_INSECURE=true
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl --fail --insecure http://node1.eventstore:2113/health/live || exit 1",
        ]
      interval: 5s
      timeout: 5s
      retries: 24
    ports:
      - 1111:1113
      - 2111:2113
    volumes:
      - ./certs:/certs
    depends_on:
      - setup
    networks:
      clusternetwork:
        ipv4_address: 172.30.240.11
    restart: unless-stopped

  node2.eventstore:
    <<: *template
    container_name: node2.eventstore
    env_file:
      - vars.env
    environment:
      - EVENTSTORE_INT_IP=172.30.240.12
      - EVENTSTORE_ADVERTISE_HTTP_PORT_TO_CLIENT_AS=2112
      - EVENTSTORE_ADVERTISE_TCP_PORT_TO_CLIENT_AS=1112
      - EVENTSTORE_GOSSIP_SEED=172.30.240.11:2113,172.30.240.13:2113
      - EVENTSTORE_TRUSTED_ROOT_CERTIFICATES_PATH=/certs/ca
      - EVENTSTORE_CERTIFICATE_FILE=/certs/node2/node.crt
      - EVENTSTORE_CERTIFICATE_PRIVATE_KEY_FILE=/certs/node2/node.key
      - EVENTSTORE_INSECURE=true
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl --fail --insecure http://node2.eventstore:2113/health/live || exit 1",
        ]
      interval: 5s
      timeout: 5s
      retries: 24
    ports:
      - 1112:1113
      - 2112:2113
    networks:
      clusternetwork:
        ipv4_address: 172.30.240.12
    restart: unless-stopped

  node3.eventstore:
    <<: *template
    container_name: node3.eventstore
    environment:
      - EVENTSTORE_INT_IP=172.30.240.13
      - EVENTSTORE_ADVERTISE_HTTP_PORT_TO_CLIENT_AS=2113
      - EVENTSTORE_ADVERTISE_TCP_PORT_TO_CLIENT_AS=1113
      - EVENTSTORE_GOSSIP_SEED=172.30.240.11:2113,172.30.240.12:2113
      - EVENTSTORE_TRUSTED_ROOT_CERTIFICATES_PATH=/certs/ca
      - EVENTSTORE_CERTIFICATE_FILE=/certs/node3/node.crt
      - EVENTSTORE_CERTIFICATE_PRIVATE_KEY_FILE=/certs/node3/node.key
      - EVENTSTORE_INSECURE=true
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl --fail --insecure http://node3.eventstore:2113/health/live || exit 1",
        ]
      interval: 5s
      timeout: 5s
      retries: 24
    ports:
      - 1113:1113
      - 2113:2113
    networks:
      clusternetwork:
        ipv4_address: 172.30.240.13
    restart: unless-stopped

networks:
  clusternetwork:
    name: eventstoredb.local
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.240.0/24

vars.env

EVENTSTORE_CLUSTER_SIZE=3
EVENTSTORE_RUN_PROJECTIONS=All
EVENTSTORE_DISCOVER_VIA_DNS=false
EVENTSTORE_ENABLE_EXTERNAL_TCP=true
EVENTSTORE_ENABLE_ATOM_PUB_OVER_HTTP=true
EVENTSTORE_ADVERTISE_HOST_TO_CLIENT_AS=127.0.0.1
EVENTSTORE_MEMDB=true

You're using:

EVENTSTORE_MEMDB=true

That means that everything is kept in memory, not on disk (which is of course not recommended for production scenarios).
=> What's the approximate amount of data you're putting into EventStoreDB?

Also, EventStoreDB being a database, it allocates as much memory as it can. There are some settings you can tweak:
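For example, here is a minimal sketch (not from the thread) of capping each node's memory at the Docker level; the 16g figure is an assumption to tune against your own workload, and mem_limit is honoured by the Compose Specification (docker compose v2) for non-swarm deployments:

  node1.eventstore: &template
    image: eventstore/eventstore:latest
    mem_limit: 16g   # hard memory cap enforced by Docker (assumed value)
    # ... rest of the node definition as in the compose file above ...

On top of that, dropping EVENTSTORE_MEMDB=true from vars.env keeps the database on disk (the image's default data directory is /var/lib/eventstore) instead of holding everything in RAM.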

Thanks for your suggestions, we will make the changes and let you know. However, we have a use case where we need to retain the data for 24 hours only: after syncing to the central server, we need to clean up the local data store. Can you also suggest how to set a retention policy that auto-purges events which are no longer required after the sync?

Can you tell us a bit more?
How are you handling that now?
How many events are you talking about per 24 hours, and is the load spread over those 24 hours or is there a peak time?
What is the increase in database size over those 24 hours?

I guess this is related to Data Cleanup and Deletion - #3 by pravin.y?

OK, so the scenario is as follows:
- We have an on-premise deployment for site-level tasks.
- There is one central server deployment that collects data and critical information only.
Once the lifecycle of an event is completed, we want to clean up or purge the data from the local site deployment to free up space.

There are two types of events:
- Data Event: recorded when we are using the machine to perform a task.
- Cyclic Event: for machine health, alarms and status.

The maximum database growth in 24 hours is about 1 GB of cyclic events, 1 GB of data events, and 40-50 GB of images, of which only part (roughly 20-25%) goes into EventStoreDB.

Yes, it's related to Data Cleanup and Deletion.

The site-level one is the 24-hour one and the central one is not, correct?

Are both the site-level and the central server running in memory?

Depending on how you set up your stream naming strategy, you can truncate, delete, or set a max age on the streams you create.

You can truncate/delete and set that metadata at the stream level using the clients. This is a logical operation though; to free up the disk space you need to scavenge:
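For instance, here is a hedged sketch of setting a 24-hour $maxAge on a stream through the HTTP API (AtomPub over HTTP is enabled in your vars.env). The stream name machine-123 is made up, the eventId is just a fresh UUID, and 86400 seconds is 24 hours; no credentials are passed because the cluster currently runs with EVENTSTORE_INSECURE=true:

# set $maxAge=24h on stream "machine-123" via node1's host-mapped HTTP port (2111)
curl -i -X POST "http://127.0.0.1:2111/streams/machine-123/metadata" \
  -H "Content-Type: application/vnd.eventstore.events+json" \
  -d '[
        {
          "eventId": "7c8f60f6-8b6f-4f4e-9c66-8c0a1f1d2e3a",
          "eventType": "$metadata",
          "data": { "$maxAge": 86400 }
        }
      ]'

Once that metadata is in place, events older than 24 hours are no longer readable, but the disk space only comes back after a scavenge has run.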

Yes, correct: the site level is 24 hours and the central server is in the cloud. Can we configure it for auto-purging, or do we need to do it manually?

You need to schedule the scavenge yourself: this is the call you need to make on each node (see the sketch below the link).

Operations-scavenging
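Here is a minimal sketch of what that scheduled call could look like against the compose file above, using the /admin/scavenge endpoint on each node's host-mapped HTTP port; the script name and the 02:00 schedule are assumptions:

#!/bin/sh
# run-scavenge.sh - scavenges run per node, so the call is issued against each one.
# Ports 2111-2113 are the host-mapped HTTP ports from the compose file above.
# -u admin:changeit are the default credentials; they are ignored while
# EVENTSTORE_INSECURE=true but required once the cluster is secured.
for port in 2111 2112 2113; do
  curl -i -X POST "http://127.0.0.1:${port}/admin/scavenge" -u "admin:changeit"
done

# Example cron entry (assumed schedule: daily at 02:00, after the 24h sync window):
# 0 2 * * * /usr/local/bin/run-scavenge.sh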