I’m currently in the process of doing a mass update of data from Azure Table Storage to EventStore (5.0.8). I’m throttling how hard I hammer ES, since this is our prod system and I still need plenty of capacity for "business as usual" traffic. My main concern is the mainQueue size: I can easily send it on a skyrocketing trajectory, so I’ve limited the write rate a bit and it now sits consistently at around 3k-4k during the import.
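For what it's worth, the throttling I'm doing is essentially a token bucket in front of the writes. This is just a sketch of the idea, not our actual import code; the `write_batch` call is a stand-in for the real EventStore append:

```python
import time

class TokenBucket:
    """Allow at most `rate` operations per second, with short bursts up to `capacity`."""

    def __init__(self, rate, capacity=None):
        self.rate = float(rate)
        self.capacity = float(capacity if capacity is not None else rate)
        self.tokens = self.capacity
        self.last = time.monotonic()

    def acquire(self, n=1):
        """Block until n tokens are available, then consume them."""
        while True:
            now = time.monotonic()
            # refill tokens based on elapsed time, capped at capacity
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= n:
                self.tokens -= n
                return
            # sleep roughly until enough tokens have accrued
            time.sleep((n - self.tokens) / self.rate)

def write_batch(batch):
    # placeholder for the real EventStore append call (hypothetical)
    pass

# e.g. cap the import at ~200 batches/sec; tune until mainQueue stays bounded
bucket = TokenBucket(rate=200)
for batch in range(5):
    bucket.acquire()
    write_batch(batch)
```

The rate is just a knob I adjust while watching the mainQueue stat; there's nothing adaptive about it.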
The cluster I’m working with isn’t big (3 x Azure DS1_v2 machines, so only 3.5G RAM, 1 core, etc.). For our regular load this cluster is WAY more than enough (it sits at about 5% CPU utilisation).
My concern is: is having the mainQueue large (well, large for us) for possibly 2-3 days straight going to be an issue? Or will it just keep queuing things up and eventually process them?
Any advice appreciated. If need be, I can scale the machines up a bit for this import, but I wanted to avoid "tinkering" with them if possible.