Command deduplication

Hi,

Suppose I’m using a message broker with an at-least-once delivery guarantee, meaning there is a chance that the same command will be processed twice by my service.
The usual solution is command deduplication, so every command carries an id.

Now, reading the article https://developers.eventstore.com/clients/dotnet/5.0/appending.html#append-events-to-a-stream and going through multiple forums, I understand that EventStore ensures idempotency only at its API level. This means that if CommandA, which produces EventA with id ‘id1’, is executed twice, I end up with two events appended to my stream:

  1. EventA (‘id1’)
  2. EventA (‘id1’)
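
To make concrete what I mean by “at the API level”: as I read the linked page, retrying the exact same append (same expectedVersion and the same EventIds) is handled idempotently by the server. A minimal sketch with the 5.x TCP client (connection string, stream name and payload are placeholders):

using System;
using System.Text;
using System.Threading.Tasks;
using EventStore.ClientAPI;

class IdempotentAppendSketch
{
    static async Task Main()
    {
        var conn = EventStoreConnection.Create(new Uri("tcp://admin:changeit@localhost:1113"));
        await conn.ConnectAsync();

        var eventA = new EventData(
            Guid.NewGuid(),                     // same EventId on both attempts
            "EventA",
            isJson: true,
            data: Encoding.UTF8.GetBytes("{}"),
            metadata: new byte[0]);

        // First append: the stream does not exist yet, the write succeeds.
        await conn.AppendToStreamAsync("newstream", ExpectedVersion.NoStream, eventA);

        // Retrying the exact same call (same expectedVersion, same EventId)
        // is treated idempotently by the server: no second copy is written.
        await conn.AppendToStreamAsync("newstream", ExpectedVersion.NoStream, eventA);
    }
}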

The reason I end up with the duplicate is that every time a command is processed, the following workflow runs:

var aggregate = aggRepo.Load(cmd.AggregateId);  // rehydrate the aggregate from its event stream
aggregate.MethodA();                            // produces EventA
aggRepo.Save(aggregate, cmd.Id);                // appends the new events, tagged with the command id

Inside the aggregate repository, a call to EventStore is issued:

conn.AppendToStreamAsync("newstream", expectedVersion: agg.InitialVersion, ToEvents(agg.Events, command.Id));

The ToEvents() method creates EventStore event objects with predictable GUIDs: every EventId is derived from the command's id.
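
Roughly along these lines (the exact hashing scheme is simplified here):

using System;
using System.Security.Cryptography;
using System.Text;

static class EventIds
{
    // Derives a deterministic EventId from the command id plus the event's
    // position in the uncommitted batch, so reprocessing the same command
    // yields byte-for-byte identical ids.
    public static Guid For(Guid commandId, int eventIndex)
    {
        using (var md5 = MD5.Create())
        {
            var input = Encoding.UTF8.GetBytes($"{commandId:N}:{eventIndex}");
            return new Guid(md5.ComputeHash(input)); // MD5 output is 16 bytes, same size as a Guid
        }
    }
}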

Given the above logic, if CommandA is processed a second time, the expectedVersion is no longer the one in effect when CommandA was first processed. Consequently, the second EventA is appended successfully even though it has the same id.

Could you suggest a standard way of achieving command deduplication and optimistic concurrency with EventStore?

It seems like you are looking for a technical solution for idempotent command processing, which is not always required. If you have an entity backed by an event stream, the entity state itself can tell you whether the command was already executed. For example, the command wants to change the customer address and the entity state already has the new address; you can then avoid making any changes, as nothing needs to be done.
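
Sketched out (the types and names here are illustrative, not taken from your code):

using System.Collections.Generic;

public class AddressChanged
{
    public AddressChanged(string newAddress) { NewAddress = newAddress; }
    public string NewAddress { get; }
}

public class Customer
{
    private readonly List<object> _uncommittedEvents = new List<object>();

    public string Address { get; private set; }
    public IReadOnlyList<object> UncommittedEvents => _uncommittedEvents;

    // Natural idempotency: if the state already reflects the command,
    // emit nothing, so a redelivered command produces no duplicate event.
    public void ChangeAddress(string newAddress)
    {
        if (Address == newAddress)
            return;

        Address = newAddress;
        _uncommittedEvents.Add(new AddressChanged(newAddress));
    }
}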

You can of course also store command ids in a distributed cache with a short expiration time, since a duplicate command will usually arrive very shortly after the original was first received.
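
A rough sketch of such a guard, assuming the IDistributedCache abstraction from Microsoft.Extensions.Caching.Distributed backed by Redis or similar (key format and TTL are just an example; note the get-then-set is not atomic, so it narrows the window rather than closing it completely):

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class CommandDeduplicator
{
    private readonly IDistributedCache _cache;

    public CommandDeduplicator(IDistributedCache cache)
    {
        _cache = cache;
    }

    // Returns true if this command id was seen within the expiration window.
    public async Task<bool> SeenRecentlyAsync(Guid commandId)
    {
        var key = "processed-commands:" + commandId.ToString("N");

        if (await _cache.GetStringAsync(key) != null)
            return true; // duplicate, skip processing

        await _cache.SetStringAsync(key, "1", new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
        });
        return false;
    }
}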