Event position 18446744073709551615

Hello, I’m experimenting with EventStoreDB. I’m using the gRPC client with C# and I’m able to write and read events, but when I read, every event in every stream gives me Event.Position {C:18446744073709551615/P:18446744073709551615}.
It looks very suspicious to me that 18446744073709551615 is exactly the max ulong (besides, a field that always has the same value isn’t of much use).

Anyway, when I write events, each write returns a meaningful position (much, much lower than this one, and always different).

Is this a problem only I have? Did I misinterpret the meaning of the “Position” field? Or is there an issue in the gRPC client?

What can I do to diagnose where the problem is, and how can I get the real position of an event when I read a stream?
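
For reference, here’s a minimal sketch of what I’m doing (stream name, event type and the local --insecure node are just my test setup):

using System;
using System.Text;
using EventStore.Client;

// Minimal sketch, assuming a local node started with --insecure.
var client = new EventStoreClient(EventStoreClientSettings.Create("esdb://localhost:2113?tls=false"));

// Writing: the write result carries a meaningful, varying log position.
var data = new EventData(Uuid.NewUuid(), "something-happened", Encoding.UTF8.GetBytes("{}"));
var writeResult = await client.AppendToStreamAsync("test-stream", StreamState.Any, new[] { data });
Console.WriteLine(writeResult.LogPosition);

// Reading the same stream back: every event reports the max-ulong position.
await foreach (var resolved in client.ReadStreamAsync(Direction.Forwards, "test-stream", StreamPosition.Start))
    Console.WriteLine(resolved.Event.Position); // prints C:18446744073709551615/P:18446744073709551615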


It looks like Event.Position carries a meaningful value only if the read happens on the $all stream, which in my opinion makes it rather useless, because it prevents several optimizations that could be applied when reading from specific streams while maintaining the absolute ordering of events.
Also, the server-side filtering of events/streams is available only on the “SubscribeToAllAsync” method, not on “ReadAllAsync”, which makes it extremely difficult to process them.
I really can’t understand the reason for this kind of design. I’m sure there is a good explanation and I would like to hear it, as I suspect it will be very interesting. Can someone shed some light on this point?

I am experiencing the same issue and having similar questions.

I’ve just noticed that the position for each event in a stream is 18446744073709551615
when using

var stream = eventStoreClient.ReadStreamAsync(Direction.Forwards, _streamName, StreamPosition.Start);
var resolvedEvents = await stream.ToListAsync();

My stream has 6 events and all of them have the same commit and prepare position.

I am using

<PackageReference Include="EventStore.Client.Grpc.Streams" Version="21.2.0" />

The server is EventStore 20.10.0-bionic running as a Docker container

docker run --name esdb-node -it -p 2113:2113 -p 1113:1113 eventstore/eventstore:20.10.0-bionic --insecure --enable-atom-pub-over-http --mem-db=True

When using

eventStoreClient.SubscribeToAllAsync(startPosition, handler)

I can see in the resolvedEvent within the callback that the position seems to be a valid number this time, and each event has a different one, as expected.

Also after appending events, I can get the last position as expected.

I don’t have any particular issue in my scenarios, but I’m also curious why reading a stream returns events claiming to be at the same (and last possible) position.
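
For reference, this is roughly the shape of the two calls above (client setup, stream name and event payload are placeholders; the signatures are the ones I see in the 21.2 client):

using System;
using System.Text;
using System.Threading.Tasks;
using EventStore.Client;

var eventStoreClient = new EventStoreClient(EventStoreClientSettings.Create("esdb://localhost:2113?tls=false"));

// Subscribing to $all: resolvedEvent.Event.Position is a real, varying value here.
var subscription = await eventStoreClient.SubscribeToAllAsync(
    Position.Start,
    (s, resolvedEvent, ct) =>
    {
        Console.WriteLine(resolvedEvent.Event.Position);
        return Task.CompletedTask;
    });

// Appending: the returned write result also carries the real log position.
var eventData = new EventData(Uuid.NewUuid(), "sample-event", Encoding.UTF8.GetBytes("{}"));
var writeResult = await eventStoreClient.AppendToStreamAsync("sample-stream", StreamState.Any, new[] { eventData });
Console.WriteLine(writeResult.LogPosition);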

I’ve talked to a developer from EventStore: he told me that it is a known issue and that they are working on it; hopefully it will be fixed in the next version.

Hey EventStoreDB team ( @yves.lorphelin ), any news on this? This is unfortunately very limiting. The requested feature is “to have a way to retrieve the position (AllCommit, AllPrepare) for each resolved event”. At the moment this sometimes returns {C:18446744073709551615/P:18446744073709551615} when reading from certain streams, at least in the C# gRPC client, which is the one I’m using.
Is there a way to work around this, or should I wait for the new version? Is it still planned to be fixed in the next release?

I found out that when subscribing to a regular stream (not $all), you always get -1 as the event position (I cast it to long), which confirms the observation. I opened an issue in the .NET client repo, but I’m not sure it’s a client issue, as it just gives us what the server returns.

IMO, if the ESDB server is not capable of returning a proper commit position in such cases, the field should not be included in the struct, and we should have different structs for stream subscriptions and subscriptions to $all.
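
For what it’s worth, the -1 and the 18446744073709551615 are the same missing value, just seen through a signed cast:

// 18446744073709551615 is ulong.MaxValue; reinterpreting those 64 bits as a
// signed long yields -1, which is why the cast mentioned above shows -1.
ulong missing = ulong.MaxValue;           // 18446744073709551615
long asSigned = unchecked((long)missing); // -1
Console.WriteLine($"{missing} -> {asSigned}");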

Thanks @alexey.zimarev! It’s very nice to have your reply on this post (I’ve seen a lot of your videos: they are very informative and well done).

I agree that if the server is unable to return a meaningful value, then the field should not exist on the reply, but I would strongly encourage a solution where the server always returns a meaningful and correct value, because it would simplify a lot of scenarios and probably enable other very useful ones.

I can give examples of what these scenarios are, if required, and of how complicated and inefficient it is to work around this limitation in a robust way. I recognize that most of these are not very common use cases, and explaining why I need these edge cases would probably result in a very long thread with lots of diverging opinions… Anyway, the simplest example could be: determining the absolute ordering of events in different streams in an efficient way (beware: I’m not trying to merge two streams, I’m well aware of the last-event problem O_O ). Sometimes you just need to know whether something happened before or after something else, and millisecond timestamps may not be enough…
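
To make that last example concrete, here’s the kind of comparison I have in mind, assuming the positions come from a read where they are actually populated (today that means $all):

using EventStore.Client;

// Hypothetical helper (not part of the client API): decide which of two resolved
// events was committed first, by comparing their global (transaction log) positions.
static class EventOrdering
{
    public static ResolvedEvent EarlierOf(ResolvedEvent a, ResolvedEvent b) =>
        a.Event.Position.CommitPosition <= b.Event.Position.CommitPosition ? a : b;
}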

I talked to @yves.lorphelin some weeks ago and he told me he is aware of the issue and that apparently someone is working on a fix.

I think it would be amazingly useful and in the spirit of the system’s design; in fact the field is already in place, so maybe it’s only an oversight…

Yeah, I will bring it up again. Still, my usual approach personally is to subscribe to $all, where this issue doesn’t exist.

Yep, I’ve ended up doing it that way, but it feels like wasting some system functionality.
For example, is there any performance penalty in enumerating all the events from a specific stream vs. enumerating the same events on $all with a filter? Are the events in the $all stream somehow indexed, or does every filter scan all the events in order, matching them one by one against the condition?

thanks again for your help!

Since you’re asking this, I would assume you use something like category projection, or custom JS projections, to emit links to particular events, and then subscribe to that stream. It can be quite the opposite when it comes to performance. Have you read the Performance impact note on projections? When the system has lots of writes, the penalty of emitting link events can create a severe write performance impact.

Reading from $all is also easier for the server, as it just reads from the log. Reading from a stream, which contains links, forces the server to resolve each event from the link, which requires using the index. That is the reasoning behind server-side filtering, which allows you to reduce network traffic and CPU load on subscriptions, moving the filtering to the server. Server-side filters are also available for TCP clients, although it’s not documented. The server must be 20+ for it to work though.
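
For illustration, a sketch of such a server-side filtered subscription in the C# gRPC client, assuming the 21.x overload that takes a start Position and SubscriptionFilterOptions (the “myevent” prefix is just an example):

using System;
using System.Threading.Tasks;
using EventStore.Client;

var client = new EventStoreClient(EventStoreClientSettings.Create("esdb://localhost:2113?tls=false"));

// Catch-up subscription to $all with a server-side event type filter:
// only events whose type starts with "myevent" are sent over the wire.
var subscription = await client.SubscribeToAllAsync(
    Position.Start,
    (s, resolvedEvent, ct) =>
    {
        Console.WriteLine($"{resolvedEvent.Event.EventType} @ {resolvedEvent.Event.Position}");
        return Task.CompletedTask;
    },
    filterOptions: new SubscriptionFilterOptions(EventTypeFilter.Prefix("myevent")));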


Thank you, I’ve read the link you posted. My question was a little different.

I’m using the out-of-the-box “$et-” projection to enumerate all the events of a particular kind, no matter which streams they were created in.

So the question really was:
is it faster to enumerate $et-myevent (resolving the link events) or to enumerate $all with an event type filter matching the regex ^myevent$?

Is it the same thing, or is $all faster anyway? Or does it depend on the number of events in $all and the distribution of “myevent” occurrences within it?

thank you!

@duke, for the position on read,
this is the PR you want to watch

Thank you Yves! I’ll follow the progress on the PR.

It is faster to read/subscribe to $all with a filter. However, everything has two sides.

  • If you are always real-time, reading from $all with a server-side filter is faster
  • If you need to start from the beginning of time on a terabyte-sized database, reading from $et is faster.

It’s because reading with a filter is, well, reading everything with a filter. $et materialises the event type filter, so re-reads are faster, but it comes at the cost of two appends per event (the original event and the link), the CPU load of running the event type projection, disk space, more things to scavenge, link resolution when reading, and reading stream metadata when resolving. So producing links and reading links is not free.
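
In code, the two options being compared look roughly like this sketch (“myevent” and the console output are placeholders; on this client version the filter on a plain $all read has to be applied client-side, since server-side filters are only available on subscriptions):

using System;
using EventStore.Client;

var client = new EventStoreClient(EventStoreClientSettings.Create("esdb://localhost:2113?tls=false"));

// Option A: read the materialised $et-myevent stream; each link has to be
// resolved back to the original event, which costs an index lookup per event.
await foreach (var e in client.ReadStreamAsync(
    Direction.Forwards, "$et-myevent", StreamPosition.Start, resolveLinkTos: true))
    Console.WriteLine(e.Event.EventNumber);

// Option B: read the whole $all log and keep only the matching event type.
await foreach (var e in client.ReadAllAsync(Direction.Forwards, Position.Start))
    if (e.Event.EventType == "myevent")
        Console.WriteLine(e.Event.EventNumber);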

OK, as I imagined… $et is a sort of pre-filtered “copy”.
In my current situation I would like to know the Position (the global position, i.e. the AllCommit/AllPrepare indexes) of the last event of a given type.

I suppose that resolving the last event of the $et stream (i.e. reading the first one from the end) is faster than reading from $all anyway, also because in the gRPC client I don’t see a way to read $all in reverse with a server-side filter :frowning: Am I missing something?

Is there any other method to resolve the “global” position of the last event of a given kind?
Or is the “correct” way to always keep an open live subscription to $all and observe the new events, taking note of the “global” position of each kind of event as it arrives?

It would be nice if the DB could do this by itself; it would also spare me a separate persistent store to remember that “last” position across system shutdowns. I don’t like the idea of reprocessing thousands of events just to find the position of the last one every time the system boots…

If you don’t want to host an always-on subscription for $all with a filter on event type, reading the last event from $et is a better solution, as you can always read it from the back.
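
A sketch of that backwards read (“myevent” is a placeholder type; FirstAsync comes from System.Linq.Async, the same package that provides ToListAsync earlier in the thread):

using System;
using System.Linq;
using EventStore.Client;

var client = new EventStoreClient(EventStoreClientSettings.Create("esdb://localhost:2113?tls=false"));

// Fetch only the most recent event of a given type via its $et stream
// (assumes the by-event-type projection is enabled so $et-myevent exists).
var last = await client.ReadStreamAsync(
        Direction.Backwards, "$et-myevent", StreamPosition.End,
        maxCount: 1, resolveLinkTos: true)
    .FirstAsync();

Console.WriteLine($"{last.Event.EventType} #{last.Event.EventNumber}");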

However, those links use disk space. You can also define a custom stateful projection, which will keep those things you need in its state. The state, naturally, is also an event in the $projections-{projection-name}-result stream, but as this stream has maxCount set to one, all the previous state events will be scavenged.

The write amplification for the byEventType system projection and for a custom projection is roughly the same; the only overhead you get is for the server to run JS code. But in 21.6 it’s greatly simplified as we don’t use the V8 engine anymore, so that shouldn’t be an issue.

Still, remember that in both cases by writing one event you get two produced. However, if you use a custom projection, you can only enumerate the events you need, whilst the byEventType projection will produce links to $et for all the events in the database.

Hello, I’m unsure about the progress status of this feature and I would like to ask for clarification.

The feature appears to have finally been merged into main, so I thought it would be included in the latest Docker image. I’ve tested it, but at first glance it doesn’t seem to work :frowning:

I’ve looked at the code, and at least to me the server-side code appears to “do something” about providing the required data, but I was unable to tell whether it now works with the current version of the client.

My guess is that the gRPC client has not been updated yet and doesn’t receive the new info provided.
I’ve checked that the latest version of the client package is from 3/2/2021 (quite outdated), while the server appears to have been updated around mid-August 2021, so the client surely doesn’t have these updates.

I’ve even downloaded the latest source of the client and compiled it, but it still doesn’t seem to support the new field: the position always reads as 18446744073709551615 :frowning:

I really need this feature. After a year of waiting, with the major part of the work (seemingly) done, what is the roadmap for getting the client updated to support it?
Can I help in some way?

maybe @yves.lorphelin or @alexey.zimarev can help with this?

I know that there’s some work on the client to support the latest server features, but it is not ready for release yet. I know that @oskar.dudycz is involved in reviewing the code changes for the new client version, maybe he has some information.

Sadly, the new version of the gRPC client, 22.0.0, together with database version 21.10.2.0 (which should contain the fix for this bug), still presents the same issue.
The event Position from the client.ReadStreamAsync method always contains the useless value 18446744073709551615.
@yves.lorphelin @alexey.zimarev @oskar.dudycz
Can someone please provide an estimate of when this will be fixed?
Is there a way for me to help with the resolution of this problem?

I’m very discouraged :frowning:

@duke, unfortunately this is still by design, so you can get the global position only when reading from the $all stream. It may change in future versions, but there is no set deadline for that.