Hi, I’m trying to set up a 3-node cluster in AWS in a private VPC behind ELBs (one for HTTP, another for TCP). I ran into a few issues and wondered whether there’s a better way for me to configure things.
My setup is:
HTTPS(443) Application ELB -> HTTP(2113) on the 3 nodes;
TCP(1113) TCP ELB -> TCP(1113) on the 3 nodes.
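For reference, here's the listener mapping above as I'd express it for boto3's classic-ELB API; the certificate ARN and names are placeholders, and I'm assuming the nodes' external TCP listener is on 1113 (EventStore's default TCP port, since 2113 is the HTTP port):

```python
# Sketch of the two listener mappings, in the shape boto3's classic ELB
# client expects for client("elb").create_load_balancer(..., Listeners=...).
# The SSL certificate ARN is a placeholder.
HTTP_ELB_LISTENERS = [
    {
        "Protocol": "HTTPS",         # workstation -> ELB
        "LoadBalancerPort": 443,
        "InstanceProtocol": "HTTP",  # ELB -> node's external HTTP listener
        "InstancePort": 2113,
        "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/example",
    }
]

TCP_ELB_LISTENERS = [
    {
        "Protocol": "TCP",           # workstation -> ELB
        "LoadBalancerPort": 1113,
        "InstanceProtocol": "TCP",   # ELB -> node's external TCP listener
        "InstancePort": 1113,
    }
]
```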
The 3 nodes have private IPs that aren’t reachable except by the ELBs; I can only hit the ELBs from my workstation. Note that I haven’t tried to enable HTTPS on the nodes themselves, but they report that they’re clustering/gossiping OK.
My questions are:
If I POST to https://&lt;elb-host&gt;:443/streams/somestream, I get a 307 redirect pointing me to http://&lt;elb-host&gt;:2113/streams/somestream/incoming/, keeping the ELB hostname but replacing the protocol and port with those of the node's external HTTP listener on 2113, so the URL can't be followed either from in front of or from behind the ELB. Is there a way to specify the full protocol+host+port for redirects in the configuration?
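One client-side workaround I'm considering is rewriting the Location header back through the ELB before following the redirect. A minimal sketch (the hostname is a placeholder, and it assumes the ELB terminates TLS on 443):

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_location(location: str, elb_host: str) -> str:
    """Rewrite a redirect target such as http://<elb-host>:2113/streams/...
    so it goes back through the HTTPS ELB on 443 instead of hitting the
    node's external HTTP port directly."""
    parts = urlsplit(location)
    # Keep the path/query/fragment, but force the scheme and host
    # to the ELB's HTTPS front end (default port 443).
    return urlunsplit(("https", elb_host, parts.path, parts.query, parts.fragment))

# Example with the 307 Location from the question (placeholder hostname):
fixed = rewrite_location(
    "http://my-elb.example.com:2113/streams/somestream/incoming/",
    "my-elb.example.com",
)
# fixed == "https://my-elb.example.com/streams/somestream/incoming/"
```

Note that a client following the rewritten 307 must re-send the POST body, since 307 preserves the request method.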
In the Admin UI (also working through the HTTPS/443 ELB), if I go to the Stream Browser tab and repeatedly refresh https://&lt;elb-host&gt;:443/web/index.html#/streams/$all, then sometimes I see a list of streams; other times there are no results and an error “stream does not exist.” The failure looks like a timeout on a request from the browser to https://&lt;node-ip&gt;/streams/$all?embed=tryharder instead of https://&lt;elb-host&gt;/streams/$all?embed=tryharder, so my guess is that when the HTTPS ELB sends the UI request to a slave node, the streams/$all page is rendered with instructions to query the master directly by its IP. Unfortunately that can't work with my network setup. Is there a way to prevent the UI from contacting hosts other than the one it was served from?
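For what it's worth, I wondered whether the “advertise as” options would help here, so nodes gossip the ELB address rather than their private IPs. I'm not certain these option names exist in my server version, so treat this config fragment as an assumption to verify:

```yaml
# Hypothetical sketch: make each node advertise the ELB address/ports
# instead of its private IP (option names unverified; the hostname is
# a placeholder for the ELB's DNS name).
ExtIpAdvertiseAs: my-elb.example.com
ExtHttpPortAdvertiseAs: 443
ExtTcpPortAdvertiseAs: 1113
```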
So far my TCP clients haven’t had any issues working through the TCP ELB, but does redirection ever occur when using the TCP API? (i.e. a client gets routed to a slave node, the slave says “nope, talk to the master at this IP…”, and the client then tries to open a socket directly to the master)?