I’ve looked through the posts in this group and reviewed the documentation, but I’m still unclear about how clustering behaves when the cluster itself changes.
In my project I’m looking to deploy an EventStore cluster via a container orchestration platform (OpenShift / Kubernetes). I can specify the cluster size and the cluster DNS in the deployment configuration files, so that part seems clear. But what about the following situations:
- The initial storage I’ve allocated is insufficient and I need to allocate more.
- The initial number of ES nodes (i.e. “replicas”) is insufficient to handle the processing/requests and I need to add more.
- I’ve over-provisioned the ES cluster and want to decommission a node to reclaim volume space, etc.
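For context, here’s a trimmed sketch of the kind of deployment configuration I mean (a Kubernetes StatefulSet; names, image, and storage size are illustrative, but `EVENTSTORE_CLUSTER_SIZE` and `EVENTSTORE_CLUSTER_DNS` are EventStore’s documented environment variables):

```yaml
# Illustrative only: a 3-node EventStore cluster as a StatefulSet.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eventstore
spec:
  serviceName: eventstore        # headless Service that provides the DNS records
  replicas: 3
  selector:
    matchLabels:
      app: eventstore
  template:
    metadata:
      labels:
        app: eventstore
    spec:
      containers:
        - name: eventstore
          image: eventstore/eventstore   # image name assumed
          env:
            - name: EVENTSTORE_CLUSTER_SIZE
              value: "3"
            - name: EVENTSTORE_CLUSTER_DNS
              value: eventstore.default.svc.cluster.local
  volumeClaimTemplates:          # one persistent volume per node
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

My questions are about what happens when I change `replicas`, the `storage` request, or `EVENTSTORE_CLUSTER_SIZE` after the cluster is already running.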
Can anyone point me at documentation that explains what’s actually happening inside the cluster? Should each ES node point at the same file location for its database? Or, if I provision a separate file location for each node, will each node end up with a full copy of the data? Or is there some sharding going on?
Guidance or documentation on how to scale processing and/or data persistence in some common use cases would be particularly helpful.