scaling a cluster

I have a question regarding the scaling of GES.

Suppose I have a cluster of 3 machines and I detect that they have too much load.

I would like to add 2 more machines to the cluster.

My questions are:

  1. Can I add new machines on the fly, or do I need to take down the cluster & reconfigure it?

  2. Will the load automatically spread out to the new machines, or is there some manual process involved?



I have similar questions. I would like to use something like AWS Auto Scaling or use CoreOS to manage how many instances can be loaded at any one time. The docs on the cluster mention a command line switch to indicate the number of nodes there are in the cluster. Does this mean the cluster has a set size when it's started, or can it be set up to be dynamic in some way?
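For concreteness, here is a minimal sketch of what starting one node of a fixed-size cluster might look like. The binary name, flag names, IPs, and ports below are assumptions based on Event Store's documented cluster options (`--cluster-size`, `--gossip-seed`), so check the docs for your version before relying on them:

```shell
# Hypothetical sketch of starting one node of a 3-node cluster.
# --cluster-size fixes the expected quorum size at startup;
# --gossip-seed points this node at the other members.
# All addresses/ports are placeholder values.
eventstored \
  --cluster-size=3 \
  --int-ip=10.0.0.1 \
  --ext-ip=10.0.0.1 \
  --gossip-seed=10.0.0.2:2112,10.0.0.3:2112
```

The relevant point for the question above is that `--cluster-size` is passed at startup, which is what makes the cluster size feel static.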


That setting specifies the number of replicas in the replica set.

Do you need scaling for amount of data or for number of requests?

To be honest it's just an exploratory question for the moment. The scaling would be more for HA and/or number of requests.

For example, if I am upgrading an AWS AMI, then autoscaling would start a new instance (based on CPU load, number of requests, or some other metric) which would then be expected to join the cluster, and then when the load comes down, kill off one of the older machines.

The other use case I would imagine, even if autoscaling was out of the question, would be if I started a cluster with a node count of 3, and the number of requests got to be too many for my instances, then would I be able to re-configure for a 5-node cluster, and then up to a 7-node cluster if required.

Just thinking out loud.

Thank you


“To be honest it's just an exploratory question for the moment. The scaling would be more for HA and/or number of requests.”

So I would benchmark first :slight_smile:

The node count will very shortly be live upgradable though. You can also use a node type known as a clone, which is a replica of the cluster but not part of it; these nodes can be elastically scaled.
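As a rough illustration of the clone idea: a clone is an extra node that replicates the cluster's data but does not count toward the quorum, so it can be added and removed freely. The sketch below assumes that a node joining a cluster already at its configured quorum size takes the clone role; the binary name, flags, and addresses are placeholders, not a confirmed recipe:

```shell
# Hypothetical sketch of adding a clone to an existing 3-node cluster.
# The quorum is already full (3 voting nodes), so this fourth node
# is assumed to join as a clone: it replicates data but does not vote,
# which is what makes it safe to scale elastically.
eventstored \
  --cluster-size=3 \
  --int-ip=10.0.0.4 \
  --ext-ip=10.0.0.4 \
  --gossip-seed=10.0.0.1:2112,10.0.0.2:2112,10.0.0.3:2112
```

This is why clones fit the autoscaling scenario above: killing one never threatens the quorum.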

There is also some work happening in Azure to allow for elastic scaling of read requests (up to hundreds of nodes) off the same replica set.

Excellent. Thanks Greg. Basically it's the same as our MySQL server. We set the cluster offset and have to live with it almost forever, but we can bring slaves up at will.

I can live with that.

It also sounds easier to update Event Store; it just needs a little bit of downtime first. I shall run some experiments.

Thanks again


I think you misunderstand. I just meant you can't change quorum size at this time without a restart, but you can add clones etc. Soon you will be able to change quorum size while running.

Even better. Thanks again Greg. I hope to finally get some time to play with Event Store properly (it's only been about 6 months since discovering the project; time hasn't been on my side lately).


I know that feeling :slight_smile: