Let’s say that I have a stream of command messages to make HTTP requests against a specific endpoint. This endpoint can only support a given volume before it is overloaded and starts rejecting requests.
If I use competing consumers directly against the command stream, I may flood the connection with too many requests.
If I use a catch-up subscription, I may fall too far behind the head of the stream and induce too much latency in the processing of new commands.
I think what I want is some form of lossy back-pressure, so that I can fully utilize the endpoint while also not falling too far behind the command stream. My thought was to implement something like RxJava’s onBackpressureLatest() using a projection (in EventStore, or another process if necessary) and a set of streams.
Before getting into how I might implement this projection (I have some ideas), does anyone have any suggestions on how to approach this problem?
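To make the idea concrete, here is a minimal sketch (not EventStore-specific, just the semantics) of the onBackpressureLatest() behavior described above: a single-slot mailbox where a fast producer overwrites the pending item, so a slow consumer always sees the most recent command and older, unconsumed ones are dropped. The class name and shape are my own, purely illustrative:

```python
import threading

class LatestOnlyMailbox:
    """Single-slot buffer: publishing overwrites any pending item, so the
    consumer always takes the most recent value (onBackpressureLatest
    semantics). Dropped items are counted, not delivered."""

    def __init__(self):
        self._lock = threading.Lock()
        self._item = None
        self._has_item = False
        self._dropped = 0

    def publish(self, item):
        with self._lock:
            if self._has_item:
                self._dropped += 1  # previous pending item is discarded
            self._item = item
            self._has_item = True

    def take_latest(self):
        """Return the most recent item, or None if nothing is pending."""
        with self._lock:
            if not self._has_item:
                return None
            self._has_item = False
            return self._item

    @property
    def dropped(self):
        return self._dropped
```

If the producer publishes 1 through 5 before the consumer drains, `take_latest()` returns 5 and four items are recorded as dropped; the consumer never processes a backlog of stale commands.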
btw: we use this strategy internally in Event Store in some cases (namely around the storage writer, which is subject to the same kinds of limitations as your external HTTP calls).
So this would be a combination of using the right number of consumers (dependent on the connection) and enforcing a TTL. I guess that would produce nearly the same results, and is also very simple.
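A rough sketch of that combination, assuming each event carries a server-assigned `created` epoch timestamp (the event shape, `send_request`, and the tuning constants are all assumptions for illustration): a fixed-size pool bounds concurrency against the endpoint, and each consumer drops any command older than the TTL instead of sending it.

```python
import time
from concurrent.futures import ThreadPoolExecutor

MAX_CONSUMERS = 4   # tuned to what the endpoint can sustain (assumption)
TTL_SECONDS = 30.0  # commands older than this are considered stale

def handle(event, send_request, now=time.time):
    """Drop the command if it has outlived its TTL; otherwise issue the
    HTTP call. `event` is assumed to be a dict with a 'created' epoch
    timestamp and a 'data' payload (hypothetical shape)."""
    age = now() - event["created"]
    if age > TTL_SECONDS:
        return "dropped"          # stale: skip rather than flood the endpoint
    send_request(event["data"])   # hypothetical HTTP call
    return "sent"

def run_consumers(events, send_request):
    # The fixed worker count is the "right number of consumers" knob.
    with ThreadPoolExecutor(max_workers=MAX_CONSUMERS) as pool:
        return list(pool.map(lambda e: handle(e, send_request), events))
```

The `now` parameter is injectable only to make the age check testable; in production it would just be the consumer's clock.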
Does EventStore provide a “sent” time on events? I’m thinking that, along with a relative time-to-live duration, would be useful so that I don’t have to ensure that consumer processes have synchronized clocks.
No questions about JSON :). I was thinking maxAge could be used for this too, but I’ll probably have my consuming code handle it instead, as I may want to emit/link another event when I drop messages.
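That consumer-side variant might look something like this: instead of relying on the stream’s maxAge, the consumer checks the command’s age itself, so it can record a drop-notification event pointing back at the original when it skips one. The `"CommandDropped"` event type, the `append_event` callback, and the event shape are all hypothetical:

```python
import time

TTL_SECONDS = 30.0  # assumed relative time-to-live for commands

def process_or_drop(event, send_request, append_event, now=time.time):
    """Consumer-side TTL check. When a command is too old, append a
    'CommandDropped' event (e.g. to be linked back to the original)
    instead of issuing the HTTP call."""
    age = now() - event["created"]
    if age > TTL_SECONDS:
        append_event({
            "type": "CommandDropped",        # hypothetical event type
            "originalEventId": event["id"],  # reference to the dropped command
            "ageSeconds": age,
        })
        return False
    send_request(event["data"])  # hypothetical HTTP call
    return True
```

Handling it in the consumer rather than via maxAge keeps an explicit audit trail of what was dropped and why, at the cost of the consumer doing the age bookkeeping itself.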