We have an occasional message that can grow to 500 KB in fairly rare but supported cases. We can't easily move this data into blobs, because it syncs down to occasionally-offline event stores, and for consistency reasons it's important that the data is one message and moves offsite as one message. The message is currently protobuf-encoded.
The problem we hit is the 64 MB TCP framing limit: https://github.com/EventStore/ClientAPI.NetCore/blob/master/src/EventStore.ClientAPI.NetCore/Transport.Tcp/LengthPrefixFramer.cs
When an integration test pumps out these messages continually, we get errors. By my calculation the maximum page size would then be 64,000 KB / 500 KB = 128. We have dropped it to 40, but when pulling a lot of data to move offsite, that's a lot of round trips.
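For reference, here is the back-of-envelope arithmetic behind that maximum page size. This is just my own sketch (it assumes a full page of worst-case messages must fit in one 64 MB frame, and ignores per-message framing overhead):

```python
FRAME_LIMIT_KB = 64 * 1000   # 64 MB TCP framing limit, in KB
MAX_MESSAGE_KB = 500         # worst-case message size

# largest page size where a full page of worst-case messages still fits in one frame
max_page_size = FRAME_LIMIT_KB // MAX_MESSAGE_KB
print(max_page_size)  # 128
```

In practice you'd want some headroom below 128, which is why we dropped to 40.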
So, my questions:
Would a page size of 100 hurt performance compared to 1000? My guess is that for 100,000 messages you pay a latency tax of roughly 10 seconds instead of 1 second.
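To make that latency-tax guess concrete, here is the arithmetic, assuming one round trip per page and a nominal 10 ms round-trip time (both numbers are my own assumptions, not measured figures):

```python
TOTAL_MESSAGES = 100_000
RTT_MS = 10  # assumed round-trip time per page fetch, in milliseconds

for page_size in (1000, 100):
    trips = TOTAL_MESSAGES // page_size
    latency_s = trips * RTT_MS / 1000
    print(f"page size {page_size}: {trips} round trips, ~{latency_s} s of latency")
# page size 1000: 100 round trips, ~1.0 s of latency
# page size 100: 1000 round trips, ~10.0 s of latency
```

So the tax scales linearly with the number of round trips, which is why dropping the page size by 10x costs roughly 10x the latency.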
Can we increase this limit to at least 128 MB? I don't see it in the docs, and the code seems to say no, if the Framer is the right place to look.
Is there a nice way to drop the page size and retry? The current connection architecture seems to make this hard, as the error comes from the connection:
using (var connection = new Connection()) {
    try {
        data = connection.ReadData(LargePage);
    } catch (FramingException ex) {
        // fall back to a smaller page size and retry
        data = connection.ReadData(SmallPage);
    }
}