Dashboard, connections, trustworthy?

We have started testing a read-only replica to offload the master node. From the client logs it looks like it’s working (the best candidate is the replica), but the connection is still listed on the master node’s dashboard in the web UI. Only 2 of the roughly 10-15 connections are displayed on the replica node.

Is this not displaying what I think it should? Are there any workarounds to actually see the clients connected to a specific node?

Are you using the legacy TCP-based client?
Connections from the new clients do not currently show up in the UI.

Tip: when using a read-only replica, have the connection point to the read-only replica only.
i.e. instead of
esdb://node1,node2,node3 or esdb+discover://clusterDns
do
esdb://readonly.replica
Upside:

  • You are 100% sure that that specific client will only use the replica.

Downside:

  • If the read-only replica is down for some reason, the set of processes using that connection will have no connectivity to ESDB.
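A minimal sketch of the two connection-string styles contrasted above. The esdb:// and esdb+discover:// schemes are the real ESDB formats; the helper function names and host names here are purely illustrative, not part of any client library.

```python
# Hypothetical helpers contrasting the two connection-string styles.
# Function names and hosts are illustrative only.

def gossip_connection_string(nodes):
    """Cluster-aware string: the client discovers nodes via gossip
    and can fail over to another node if one goes down."""
    return "esdb://" + ",".join(nodes)

def pinned_replica_string(replica_host):
    """Pinned string: the client talks only to the read-only replica;
    if that replica is down, there is no fallback."""
    return f"esdb://{replica_host}"

print(gossip_connection_string(["node1:2113", "node2:2113", "node3:2113"]))
# esdb://node1:2113,node2:2113,node3:2113
print(pinned_replica_string("readonly.replica:2113"))
# esdb://readonly.replica:2113
```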

It’s the TCP client (we still have 5.0.x on some production installations, so we cannot move to gRPC yet). We use gossip to connect to the cluster.

I think we want the fallback of connecting to a follower if the replica is down…

Started experimenting with connecting to replicas, and noticed the probable cause: the client connects to the replica, but then closes that connection and connects to the master instead.

Assuming it’s because a persistent subscription is started on the connection? Will refactor to use a different connection for them…

So what you did is:

  • create a connection instance (to a follower)
  • do some operations
  • start a persistent subscription

Is this correct?

Yeah, that could be the cause of the connection moving to the Leader node; persistent subscriptions need that.

A single client is indeed connected to one node: it’s one channel to that node.
The client object does not handle multiple nodes at once.

So yes, having 2 connections with 2 different connection strings will help.
Mind that persistent subscriptions require a connection to the Leader, since the client needs to talk back to it for acking/nacking events.

Yes, correct. Started a new thread about when connections are moved… So no issue with the dashboard and TCP.

link to thread