Idempotency Confirmation

Hey ESDB team,

I’ve read the documentation and run some of my own tests, and I’m finding the idempotency behavior confusing. I have a few questions.

Right now, we pull messages off of PubSub and run a number of operations afterwards, one of which is saving the event to ESDB. If any of these operations fails, the message is reprocessed from the beginning.

We’d like to enforce idempotency for our events, so I create deterministic UUIDs from a seed provided in the message’s metadata.
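For reference, this is roughly how I derive the IDs: a name-based (version 5 style) UUID computed from a fixed namespace plus the seed. This is a standard-library-only sketch; the namespace and seed values are made up for illustration.

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// uuidV5 derives an RFC 4122 version 5 (SHA-1, name-based) UUID from a
// namespace UUID and a name. The same inputs always yield the same UUID,
// which is what makes retried appends candidates for idempotent handling.
func uuidV5(namespace [16]byte, name string) [16]byte {
	h := sha1.New()
	h.Write(namespace[:])
	h.Write([]byte(name))
	sum := h.Sum(nil)

	var u [16]byte
	copy(u[:], sum[:16])
	u[6] = (u[6] & 0x0f) | 0x50 // set version field to 5
	u[8] = (u[8] & 0x3f) | 0x80 // set RFC 4122 variant
	return u
}

func format(u [16]byte) string {
	return fmt.Sprintf("%x-%x-%x-%x-%x", u[0:4], u[4:6], u[6:8], u[8:10], u[10:16])
}

func main() {
	// RFC 4122 DNS namespace, used here as an example; in practice pick one
	// fixed namespace UUID for your application.
	ns := [16]byte{0x6b, 0xa7, 0xb8, 0x10, 0x9d, 0xad, 0x11, 0xd1,
		0x80, 0xb4, 0x00, 0xc0, 0x4f, 0xd4, 0x30, 0xc8}
	seed := "message-1234" // e.g. the seed from the message metadata

	a := uuidV5(ns, seed)
	b := uuidV5(ns, seed)
	fmt.Println(format(a))
	fmt.Println(format(a) == format(b)) // same seed, same UUID: prints true
}
```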

Working with the Go client, if I do not set the ExpectedRevision, appending an event with the same UUID works as expected: the event is only appended once, the call succeeds, and future events are processed normally. I would be fine moving forward with this, but according to the docs, leaving this blank “does not 100% guarantee idempotency.” So I’m wondering: what can go wrong here on the ESDB side?

If I set the ExpectedRevision to the previous event’s version, the event is appended twice. The UUID is ignored entirely, and the next event I try to append, even to a different stream, eventually times out. I would expect that, even with ExpectedRevision set, the UUIDs for the stream would still be verified. Is that not the case? And why would this cause future events to fail?

I’m using eventstore:22.10.4-alpha-arm64v8 for my local testing and the latest Go client.

Thank you in advance for any insights!

  • Idempotent writes are best effort in ESDB
    • the check is based on a cache of recently appended event IDs
    • so nothing can really “go wrong”; it’s just that the event IDs may no longer be in the cache
    • there is no way to control the cache size (I don’t remember the exact size off the top of my head)
    • the cache can be emptied for various reasons
    • the reason for the limited cache is that we can’t provide an idempotency check over the full log
      • there are sometimes billions of events in the database
      • some streams might contain hundreds of thousands of events

The idempotency feature is mainly meant for client processes that send the exact same append request multiple times, within a relatively short period of time
(analogy: think of the old double-click problem in browsers)

When you set an expected version and that expected version == the current version of the stream, the append will happen,
because what you would have happening in code is:

  • Append ( expected version = 1, event Id=A, stream 1)
  • Append ( expected version = 2, event Id=A, stream 1)
    these are not the same append request

This is different from:

  • Append ( expected version = 1, event Id=A, stream 1)
  • Append ( expected version = 2, event Id=B, stream 1)
  • Append ( expected version = 1, event Id=A, stream 1)

The first and last are the same append request, and the result for the last one will vary between an idempotent write and WrongExpectedVersion (in this case it should be idempotent).
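To make the comparison concrete, here is a toy model of that decision. This is my own sketch, not the server’s actual implementation: an append with a stale expected version is treated as idempotent only when the events at the corresponding positions already carry the same event IDs; anything else is a WrongExpectedVersion.

```go
package main

import "fmt"

// Result of a toy append attempt.
type Result string

const (
	Appended             Result = "Appended"
	IdempotentSuccess    Result = "IdempotentSuccess"
	WrongExpectedVersion Result = "WrongExpectedVersion"
)

// stream is just an ordered list of event IDs; index == revision.
type stream []string

// tryAppend models (very roughly) how an expected-version append is judged.
// expected is the revision the writer believes the stream is currently at.
func (s *stream) tryAppend(expected int, ids ...string) Result {
	current := len(*s) - 1
	if expected == current {
		*s = append(*s, ids...) // normal case: versions match, the write happens
		return Appended
	}
	// Stale expected version: check whether this exact write already
	// happened, i.e. the same IDs sit right after `expected`.
	if expected < current && expected+len(ids) <= current {
		for i, id := range ids {
			if (*s)[expected+1+i] != id {
				return WrongExpectedVersion
			}
		}
		return IdempotentSuccess
	}
	return WrongExpectedVersion
}

func main() {
	s := stream{"e0", "e1"} // revisions 0 and 1 already exist

	fmt.Println(s.tryAppend(1, "A")) // Appended             (A lands at revision 2)
	fmt.Println(s.tryAppend(2, "B")) // Appended             (B lands at revision 3)
	fmt.Println(s.tryAppend(1, "A")) // IdempotentSuccess    (same request, retried)
	fmt.Println(s.tryAppend(1, "X")) // WrongExpectedVersion (different event ID)
}
```

Note how the second sequence from the thread plays out: after `(expected 1, A)`, a following `(expected 2, A)` matches the current version and is simply appended again — the event ID alone is never what the check keys on.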

Thank you for the detailed reply!

Is there a cache per individual stream?

It’s a global cache of recently appended events, independent of the stream they’ve been appended to.

Thank you. One more question:

I’ve previously seen in the forums that AppendEvents is atomic. However, when I look at the Go client, it looks like it is just making sequential gRPC calls.

Is the AppendEvents function in the go client atomic?

Is the AppendEvents function in the go client atomic?

It should be; it’s a gRPC streaming operation and will fail or succeed on completion of the send:
response, err := appendOperation.CloseAndRecv()

Though let’s ask a specialist here, @yorick.laupa, because my Go know-how is limited.

@sean.payne, what do you mean by atomic here? Appending events involves network I/O, so it can’t be atomic from a CPU standpoint.

It occurred to me that you might have been referring to “atomicity” in the context of ACID properties. If that is the case, then yes, appending a batch of events is considered atomic from the database’s perspective. The Go client streams the events that the user wishes to append, but the database will not persist any data until it has received and processed the entire batch of events.
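A sketch of that all-or-nothing behavior (my own illustration, not the server’s code): the whole batch is validated first, and only then appended, so a bad event anywhere in the batch means nothing is persisted.

```go
package main

import (
	"errors"
	"fmt"
)

type event struct {
	ID   string
	Data []byte
}

// appendBatch persists either every event in the batch or none of them.
// Validation (here just a non-empty ID) runs over the full batch before
// anything is written to the log.
func appendBatch(log *[]event, batch []event) error {
	for _, e := range batch {
		if e.ID == "" {
			return errors.New("invalid event in batch; nothing persisted")
		}
	}
	*log = append(*log, batch...) // only reached when the whole batch is valid
	return nil
}

func main() {
	var log []event

	ok := []event{{ID: "a"}, {ID: "b"}}
	bad := []event{{ID: "c"}, {ID: ""}} // second event is invalid

	fmt.Println(appendBatch(&log, ok), len(log))  // <nil> 2
	fmt.Println(appendBatch(&log, bad), len(log)) // error, log still holds 2
}
```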


That’s correct. I was referring to DB atomicity. That’s helpful thank you!