Cannot connect to 1113/tcp...

Howdy All,

I can’t seem to get ES to accept connections on port 1113, although it does accept them on port 2113. My setup:

Running Event Store 3.0.3

Running within a Docker container on a Google Compute Engine instance

Dockerfile has:

```dockerfile
EXPOSE 1113
EXPOSE 2113

CMD ./run-node.sh \
    --ext-ip=$ES_HOST \
    --ext-tcp-port=1113 \
    --ext-http-port=2113 \
    --http-prefixes=http://*:2113/ \
    --db /data/db \
    --log /data/logs \
    --run-projections=all
```

And I run it with:

```shell
docker run \
    -d \
    -e ES_HOST="$ES_HOST" \
    -p ${tcp_port}:1113 \
    -p ${http_port}:2113 \
    -v /XXX/eventstore/db:/data/db \
    -v /XXX/eventstore/logs:/data/logs \
    XXX/eventstore:${dockerfile_version}
```

with tcp_port=1113 and http_port=2113 and dockerfile_version=latest

And I have tried setting $ES_HOST to:

- 0.0.0.0
- the Google Compute Engine instance's internal IP (it is accessed internally only)
- the Docker network IP (subnet on the GCE instance)

I have also tried running with --net=host (i.e. using the host's network instead of a Docker network).

What confuses me is that I can telnet to the GCE instance's internal IP on 2113 and it connects, but not on 1113.

If I am on the instance itself, I can telnet to 127.0.0.1 1113 and it connects.
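Concretely, these are the checks I mean (the internal IP here is just a placeholder for my instance's internal IP):

```shell
# From another machine on the internal network:
telnet 10.240.0.2 2113   # connects (HTTP port)
telnet 10.240.0.2 1113   # hangs / times out (TCP port)

# From the GCE instance itself:
telnet 127.0.0.1 1113    # connects
```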

When I tried to set ES_HOST to anything other than 0.0.0.0, I got binding failures and ES exited.

I assume I don’t have to worry about the internal network settings because I am not running a cluster.

Do I have to use --net=host? I’d prefer not to…

I am confused…

Any pointers most appreciated.

Thanks,

Ashley.

I have solved the problem.

Unfortunately, I had made the incorrect assumption that all Google Compute Engine networks have no firewall between instances. It turns out that is not the case: traffic between instances requires explicit rules, and only the “default” network has such a rule out of the box.

After adding such a rule to my “staging” network, I was able to connect on 1113. I already had specific rules for the gateway instance that allowed connections on 2113 but not 1113.
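For anyone else who hits this, the fix is a firewall rule allowing internal traffic on Event Store's ports. A sketch of the kind of rule I mean (the rule name, network name, and source range are placeholders from my setup):

```shell
# Allow Event Store's external TCP and HTTP ports between instances
# on the "staging" network (name and CIDR are placeholders).
gcloud compute firewall-rules create allow-eventstore-internal \
    --network staging \
    --source-ranges 10.240.0.0/16 \
    --allow tcp:1113,tcp:2113
```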

Sorry for the distraction.

:+1: Thanks for posting the solution

Ah, that sounds about right. For future reference AWS also needs a security group to allow traffic between instances.

How are you finding performance etc in App Engine?

To be honest we haven’t got that far yet. We are using Google Compute Engine rather than App Engine or Container Engine, and only regular instances. If needed we can beef up the RAM and processor on the instances to improve performance. We are only a startup, so performance issues are something we would be lucky to have :wink:

Seems reasonable - I’d be interested to hear how you get on.

BTW you get nothing from running Event Store in Docker. I’d strongly recommend removing that bit from your setup and running it directly on the instances.

James, is running Event Store in Docker not a good thing, considering prod == staging == test with respect to setup? I know what you mean in a sense, but having repeatability and reproducibility across different environments is great, IMHO.

In general I find many levels of virtualization to not be a good thing when talking about any database system.

Yes I follow you, but having that kind of alignment is great from a developer’s perspective, no?

developer != ops :slight_smile:

Hehe, I guess you are not that big a fan of CoreOS + Docker/rkt then :slight_smile: I think it is great being able to be on an always-updated OS like CoreOS with respect to security fixes, without being involved :slight_smile: Event Store is kind of orthogonal to that stuff, I know, but you have to admit that VMs are on their way out and “containers” are on their way in, and thus it would be worth you guys focusing on a great container story :wink:

We have been discussing this internally, to the point of providing a supported container (with a whole lot of weasel words around it for performance etc). In general I have no issues with containers where you are not concerned with throughput/latency (e.g. most business systems); the place where it's a mess is when you need performance.

Containers are one of the things that falls under “just because you can, doesn’t mean you should”. Hipsters using containers everywhere does not mean we’ll be focusing on it until there is a proven benefit to running as an operational system.

The only ones which actually provide the claimed benefits are Solaris containers, but we don’t test on Solaris at all. Can you explain what actual benefit (that wouldn’t be solved by operating system packages) containers provide for running a database? Or is it a case of “works on my machine, so we’ll just ship my machine”? How do you manage storage in a sensible manner?

Not sure about that, just stating the fact that Docker and rkt are a thing happening right now, not a fad IMHO, and you guys would do yourselves a favour by being as container-friendly as possible. Manage storage in a sensible way? Just like on a VM, but with volume mapping.

Bless Solaris, and may it rest in its grave :wink: Yes, you can align dev with staging and dev, that is a big thing from a developer perspective. Yes, Greg, I know that developer != ops.

align setup of production with stage and dev, that is. 12 factor ftw.

FreeBSD jails are also quite nice!