Cross aggregate events

I’m puzzled by a part of DDD. I suspect the question isn’t about EventStore specifically, but I’ll use it to demonstrate my point since, as far as I can tell, an event stream is meant to represent the state of a single aggregate.

I’ll use the classic banking analogy. Let’s say I’ve got an ‘Account’ aggregate, and you’re able to issue commands:

  • ‘Deposit’ to add an amount to the Account.
  • ‘Withdraw’ to subtract an amount from the Account.

The ‘Withdraw’ command will have rules saying that you cannot withdraw more than is in the Account.

The corresponding events would be:

AccountDeposited { AccountId: AccountID, Amount: number }
AccountWithdrawn { AccountId: AccountID, Amount: number }

To implement this, I’d have an Account stream consisting of a sequence of these events, from which my aggregate can be loaded. For example:

AccountDeposited { AccountId: 1, Amount: 30 }
AccountDeposited { AccountId: 1, Amount: 20 }
AccountWithdrawn { AccountId: 1, Amount: 15 }

Based on the above Account events, we know that the Account has an amount of 35, so if someone tries to withdraw 40 then we can prevent this.
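Loading the aggregate is just a fold over its stream. A minimal sketch in TypeScript (the type and function names here are illustrative, not an EventStore API):

```typescript
// Hypothetical event types mirroring the examples above.
type AccountEvent =
  | { type: "AccountDeposited"; accountId: number; amount: number }
  | { type: "AccountWithdrawn"; accountId: number; amount: number };

// Rebuild the balance by folding over the account's event stream.
function loadBalance(stream: AccountEvent[]): number {
  return stream.reduce(
    (balance, e) =>
      e.type === "AccountDeposited" ? balance + e.amount : balance - e.amount,
    0
  );
}

// The 'Withdraw' command checks the rule before emitting an event.
function withdraw(stream: AccountEvent[], amount: number): AccountEvent {
  if (amount > loadBalance(stream)) {
    throw new Error("Insufficient funds");
  }
  return { type: "AccountWithdrawn", accountId: 1, amount };
}

const stream: AccountEvent[] = [
  { type: "AccountDeposited", accountId: 1, amount: 30 },
  { type: "AccountDeposited", accountId: 1, amount: 20 },
  { type: "AccountWithdrawn", accountId: 1, amount: 15 },
];
```

The guard works precisely because every event affecting the balance lives in this one stream, which is the property the rest of the question is about.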

So far, so simple.

But a common more complex scenario is transferring between accounts, and modelling this as a single transaction. I’ve seen this modelled as a separate ‘Transfer’ aggregate, with an event such as:

TransferRequested { FromAccount: AccountID, ToAccount: AccountID, Amount: number }

But if this is a separate aggregate, and therefore a separate stream, then I won’t be able to fully load my Account aggregate any more right? For example, the following events:

AccountDeposited { AccountId: 1, Amount: 30 }
TransferRequested { FromAccount: 1, ToAccount: 2, Amount: 25 }

As far as the overall system is concerned, Account 1 now has 5, Account 2 now has 25. But the individual event streams for these aggregates don’t allow either to be loaded sufficiently to enforce my original rule of ‘don’t allow any more to be withdrawn than is in the account’.

Hopefully this makes sense.

Hi,
One of the ways I’ve dealt with this is using a ‘write ahead’ pattern of events.

If you are using a system with global ordering (like EventStore does :slight_smile: ), then you know the events that go into each stream are guaranteed to be committed to the system and to have a deterministic order.

So we can model the transfer using just the two streams.

We start with a reservation or hold request on the account to be debited.
When that succeeds and raises a FundsReserved (or ‘transaction started’) event, we can then place an apply-funds request on the target account.
When that succeeds and raises a FundsReceived event on the credit account, we can then request the debit account to finalize the transaction.
When we get the FundsTransferCompleted event, we know we are done.

We can also have a monitoring service periodically check for open, uncompleted transactions.
To handle power cuts and failovers, we would run the monitoring service on startup.

The key here is to break the process into steps and record the intended actions for each step.
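One way to see why the reservation step preserves the original invariant: the debit account’s own stream records the hold, so its available balance can still be computed from that stream alone. A sketch, with hypothetical event and function names (the timeout/release event is an assumption about how an abandoned transfer would be unwound):

```typescript
// Hypothetical events in the debit account's stream for the write-ahead
// transfer. The reservation itself lives in the account's stream, so the
// "don't overdraw" rule can still be enforced locally.
type DebitEvent =
  | { type: "AccountDeposited"; amount: number }
  | { type: "FundsReserved"; transferId: string; amount: number }
  | { type: "FundsTransferCompleted"; transferId: string }
  | { type: "FundsReservationReleased"; transferId: string }; // timeout path

// Available balance = settled funds minus reservations that have neither
// completed nor been released.
function availableBalance(events: DebitEvent[]): number {
  let settled = 0;
  const open = new Map<string, number>(); // transferId -> reserved amount
  for (const e of events) {
    switch (e.type) {
      case "AccountDeposited":
        settled += e.amount;
        break;
      case "FundsReserved":
        open.set(e.transferId, e.amount);
        break;
      case "FundsTransferCompleted":
        settled -= open.get(e.transferId) ?? 0;
        open.delete(e.transferId);
        break;
      case "FundsReservationReleased":
        open.delete(e.transferId);
        break;
    }
  }
  let held = 0;
  for (const amount of open.values()) held += amount;
  return settled - held;
}
```

A withdraw command would check `availableBalance`, so a deposit of 30 followed by a reservation of 25 leaves only 5 withdrawable even before the transfer completes.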

Note: The same approach will work for streams or actors in different stores (i.e. ones that do not share a common ordering): create a dedicated stream to manage the transaction instance and record the steps and actions between the systems there. (This is called a process manager.)
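A process manager of this kind can be sketched as a small state machine over the transaction stream: each recorded event determines the next command to dispatch, so after a crash it can resume from the last event. All names here are hypothetical:

```typescript
// States of the transfer, derived from the transaction's own stream.
type TransferState =
  | "Requested"      // TransferRequested recorded
  | "FundsReserved"  // hold placed on the debit account
  | "FundsReceived"  // credit applied to the target account
  | "Completed";     // FundsTransferCompleted recorded

// The process manager's job: map current state to the next command.
function nextCommand(state: TransferState): string | null {
  switch (state) {
    case "Requested":     return "ReserveFunds";     // send to debit account
    case "FundsReserved": return "ApplyFunds";       // send to credit account
    case "FundsReceived": return "FinalizeTransfer"; // back to debit account
    case "Completed":     return null;               // nothing left to do
  }
}
```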

In this model, which is closely related to the transfer-stream approach, credit and debit events are also added to the two account streams, so each stream can build its state independently of the transfer (or process-manager) stream.

Thanks,
Chris

Chris,

That’s almost identical to how we coded this as well.
The immediate write is to the aggregate we are transferring money from.
Eventually, the ‘To’ account picks up this request and the funds are transferred.
If for whatever reason the transfer doesn’t happen, the funds are simply made available again after an allotted time.


Steven,
:+1:

and the use of a “Later” service here to delay-send timeout messages gets really powerful.
-Chris
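For illustration, a toy “Later” service might look like the following. This is a deterministic in-memory sketch with hypothetical names; a real one would persist its schedule so timeouts survive restarts:

```typescript
// Toy "Later" service: hold messages until their due time has passed.
type Scheduled = { dueAt: number; message: string };

class LaterService {
  private queue: Scheduled[] = [];

  // Ask for `message` to be delivered at (or after) `dueAt`.
  schedule(dueAt: number, message: string): void {
    this.queue.push({ dueAt, message });
  }

  // Cancel a pending message, e.g. when the transfer completes in time.
  cancel(message: string): void {
    this.queue = this.queue.filter((s) => s.message !== message);
  }

  // Called periodically (and on startup); returns every message now due.
  poll(now: number): string[] {
    const due = this.queue.filter((s) => s.dueAt <= now).map((s) => s.message);
    this.queue = this.queue.filter((s) => s.dueAt > now);
    return due;
  }
}
```

On a FundsReserved event you would schedule a release-reservation timeout; on FundsTransferCompleted you would cancel it.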


Thanks for all of your responses.

I think I can just about see how this could work when there are two aggregates in play, or at least a fixed number, as in the account-transfer situation. But I can’t quite wrap my head around how this works with a large or variable number of aggregates.

For example, purely as a hypothetical, what would the pattern look like for an atomic process of ‘send amount “x” from account “a” to accounts [“b”, “c”, “d”, …]’? i.e. it should either succeed or fail as a whole.

Are there any good examples out there for how this should be approached?