Code sharing or generally wrong approach?

Let’s say I have a dotnet project that is structured like the following


This loosely resembles what @alexey.zimarev describes in his book and the ESDB webcasts. It builds the context for my customer domain.

So far so good. Now I have the “requirement” that every finalized customer registration needs to be exported to legacy systems. These legacy systems should not be changed for now and rely on a file-based export.

As this is a somewhat temporary requirement, my idea is to put the export-related functionality into a separate process/service. Separate it because, if it is no longer needed, I can just kill the process/service and call it a day.

But this leads me to the following questions:

If this export process is externalized and I still want to store the fact/event that the user entity export was done, I will ultimately end up with a two-phase commit, as I have to store the event and create the export file. Then again, I would have the same issue no matter which process creates the file, I guess. So how do I make the file creation and the storing of the fact atomic?
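(For illustration, one commonly suggested way around a true two-phase commit, not necessarily the right one here, is to make the file write idempotent and let a subscriber react to the stored event; a rough C# sketch, where `BuildExportPayload`, `ToEventData`, `_exportDir` and `_client` are placeholder names:)

```csharp
// Sketch only: instead of a distributed transaction, a subscriber handles
// the finalized-registration event and creates the file idempotently.
async Task HandleRegistrationFinalized(CustomerRegistrationFinalized evt)
{
    // Deterministic file name: redelivery of the same event rewrites
    // the same file, so at-least-once delivery is harmless.
    var path = Path.Combine(_exportDir, $"{evt.CustomerId}.export");
    var tmp  = path + ".tmp";
    await File.WriteAllTextAsync(tmp, BuildExportPayload(evt));
    File.Move(tmp, path, overwrite: true); // rename is atomic on most file systems

    // Record the fact only after the file exists; if we crash in between,
    // the subscription redelivers and we simply overwrite the file.
    await _client.AppendToStreamAsync(
        $"customerExport-{evt.CustomerId}",
        StreamState.Any,
        new[] { ToEventData(new CustomerExported(evt.CustomerId)) });
}
```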

The second question is directly related to the idea of separating the export. If I want to export the user, I can either

  • listen to all events related and build my export view on the fly
  • or just act on the final exportable status and build the export view on demand.
    Basically, go to ESDB and assemble the exportable “view” then. As not too many events will be involved before this final status is reached, it seems feasible.
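(The second option would look roughly like this; a sketch using the EventStoreDB gRPC client, with method names from memory and `CustomerExportView`/`Deserialize` as hypothetical helpers:)

```csharp
// Sketch of option 2: on demand, read the customer's stream from ESDB
// and fold it into a flat export view.
var events = client.ReadStreamAsync(
    Direction.Forwards, $"Customer-{customerId}", StreamPosition.Start);

var view = new CustomerExportView(); // hypothetical flat export model
await foreach (var resolved in events)
    view.Apply(Deserialize(resolved)); // the same fold as aggregate rehydration
```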

No matter how I build the export, I will need to share my code with the other implementation/service.

Is this something you should avoid in general? If not, it adds complexity of its own. A NuGet package? A git submodule? And so on.

It seems a lot easier to just keep this inside the same project. As it belongs to the domain in general, that might be the correct fit. But then the temporary nature triggers a “move it out” reflex in me.

Any input appreciated. A lot to learn for me. :slight_smile:

Tbh, I am not sure what you are asking. Is the question about two-phase commit or about sharing some code? The question title is about code sharing, but I don’t really understand what exactly you want to share and why.

Ok. Too many thoughts at once, I guess. :see_no_evil: Sorry. I will try to stick with the code sharing problem.

In my Customer.Domain assembly I have the AggregateRoot implementation that holds the current state after I read the events off the stream.

If I want to reuse this code in a different process, I have to share it somehow in order to re-hydrate the aggregate from the stream, e.g. for exporting. That is because I want to do this file export in another service by re-hydrating from the stream directly.
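(The part that would need sharing is essentially the event fold; a minimal sketch of such a base class, my wording rather than the book’s exact code:)

```csharp
// Minimal sketch of the kind of base class both services would need:
// re-hydration is just a left fold of the stream's events over state.
public abstract class AggregateRoot
{
    public long Version { get; private set; } = -1;

    public void Load(IEnumerable<object> history)
    {
        foreach (var @event in history)
        {
            When(@event);   // mutate state according to the event type
            Version++;
        }
    }

    protected abstract void When(object @event);
}
```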

But this need for code reuse feels somehow “bad”. The question basically is:

Is this idea of re-hydrating directly from the stream, which requires code sharing since it will be a different implementation, a bad idea in general, or just something regular that needs to be dealt with?


So when you say “reuse this code”, do you mean just rehydrating the aggregate in another part of the system / another service?

So you might have

Command -> Service#1 -> Rehydrate Customer
Command -> Service#2 -> Rehydrate Customer

If that’s what you mean, I certainly do that a lot (whether that’s right or wrong!).
The way I see it, the service is only an entry point. The aggregate is still the same whether it’s hiding behind another service or inside the same service.

What do you refer to as a “service” in your example? An “application service” inside the same API (solution), for example? There I wouldn’t see the sharing as an issue.

I’m talking about different implementations: one API serving the handling of the domain (the command side), and, let’s say, a completely isolated Windows service that deals with the export. Here I would need to share the code across these completely separated implementations. That’s where it feels kind of bulky.

I’m probably on the completely wrong track here and should build a proper read side that serves the needed data for the decentralized export.

Would the aggregate be changing at all in this isolated service?
If not, I would have thought this could be managed by a read model / projection, rather than the clone of the aggregate itself.
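(A rough sketch of that alternative, using the EventStoreDB gRPC client; method names are from memory, so double-check the signatures, and `SaveExportRow`/`Deserialize` are placeholders. A catch-up subscription maintains a flat export row instead of cloning the aggregate:)

```csharp
// Sketch: maintain a denormalized "export" read model via a subscription,
// so the isolated service never needs the aggregate code at all.
await client.SubscribeToStreamAsync(
    "$ce-Customer",                     // category stream (needs system projections enabled)
    FromStream.Start,
    eventAppeared: async (subscription, resolved, ct) =>
    {
        if (resolved.Event.EventType == "CustomerRegistrationFinalized")
        {
            var evt = Deserialize(resolved);
            await SaveExportRow(evt.CustomerId, evt); // idempotent upsert
        }
    },
    resolveLinkTos: true);
```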