Hi,
We are really trying to use EventStore on a new application, but clustering ES is beating us down. Here’s what I’d love to do: create an AWS EC2 Container Service (ECS) cluster with my three nodes, auto-scaled by ECS. However, I can’t do this because ES needs to know the IP addresses of the other seeds before it’ll let any seed start. This makes me sad. It’d be nice if it acted something like Consul and let me say “Don’t try to do anything until all the nodes check in.” This is complicated by Docker’s rudimentary networking, but if I could just link containers, this would be very, very nice.
I have tried to use discover-via-dns, but I presume that means I have to set up my own local DNS server (I couldn’t, for example, point it at Google’s DNS server), which seems like a lot of work just to get a cluster up. I get why someone might do it, but it feels like overkill for us.
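For anyone comparing notes: this is roughly my understanding of what DNS discovery wants, based on the flag names in the server docs. The hostname and IPs are made up, and I may have some flags wrong:

```shell
# Sketch (assumed flags, placeholder addresses): each node resolves one
# DNS name to find its peers, so the A record must return the internal
# IPs of all three nodes -- which means a DNS server you control
# (a private zone, Consul, etc.):
#
#   escluster.internal.example.com -> 10.0.0.11, 10.0.0.12, 10.0.0.13
#
# Then every node starts with the same discovery settings, e.g. on
# the node whose internal IP is 10.0.0.11:
clusternode --db /var/lib/eventstore \
            --int-ip 10.0.0.11 \
            --cluster-size 3 \
            --discover-via-dns true \
            --cluster-dns escluster.internal.example.com \
            --cluster-gossip-port 2112
```

So the “extra work” is really keeping that A record in sync with the live nodes, which is the part I was hoping to avoid.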
As such, my only real option (on the open source side) seems to be to provision three separate EC2 instances, get their IPs, then run the clusternode command on each one. This means that ES would sit outside our Docker-based infrastructure, which makes me sad. It also introduces issues if one of the instances goes down and its replacement comes up with a different IP…
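For the record, this is roughly what I mean by “run the clusternode command on each one” (placeholder IPs; flag names as I understand them from the docs, so treat this as a sketch rather than a verified recipe):

```shell
# Sketch (assumed flags): static three-node cluster with explicit gossip
# seeds. Every node lists the OTHER two nodes' gossip endpoints, which is
# exactly why all the IPs have to be known before anything can start.

# On node 1 (10.0.0.11):
clusternode --int-ip 10.0.0.11 --cluster-size 3 \
            --gossip-seed 10.0.0.12:2112,10.0.0.13:2112

# On node 2 (10.0.0.12):
clusternode --int-ip 10.0.0.12 --cluster-size 3 \
            --gossip-seed 10.0.0.11:2112,10.0.0.13:2112

# On node 3 (10.0.0.13):
clusternode --int-ip 10.0.0.13 --cluster-size 3 \
            --gossip-seed 10.0.0.11:2112,10.0.0.12:2112
```

And if a node is replaced and its IP changes, every other node’s --gossip-seed list is stale, which is the fragility I’m worried about.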
You should know that all of this is compounded by my lack of DevOps knowledge. However, I have been able to deploy almost every other piece of software in our stack on ECS, except ES. So while I’m sure a more experienced DevOps person could get farther than I have, I don’t think my inexperience is the only issue. Although I’d be really happy if it turned out the problem is just me.
Adding to my suspicion that an OSS-based ES cluster is not a viable production option is the near-total lack of tutorials, posts in this group, etc. about it. There’s a bit out there, but not much. My guess is most people are paying for support and the better clustering tools.
So, is anyone running an ES cluster in production (preferably on AWS) who is willing to share knowledge? My client is going to balk at $2,000/year.
Any help is appreciated.
Thanks,
Glenn