Schema migrations, etc.

I thought I’d share a solution I’ve come to, fitting our business.

We’re a rapidly growing company, with more features envisioned than time available. Information about the business is not always at hand when developing a feature, and requirements change constantly.

There’s constant pressure to release long before it feels comfortable to do so.

Nothing special there.

For many, that kind of situation can lead to bugs. Now of course none of you write any bugs :slight_smile: but it happens here.

In our case, this has resulted in events not being all that well designed from the beginning, and worse: bugs sometimes producing a burst of totally meaningless or even “damaging” events, with bad data.

I used to tackle this by coming up with “compensatory events”, adding code for them, and having the aggregates come to a good state again. By the book, no?

In the end I had a bunch of extra events and commands, used only to clean up the mess after bugs. It wasn’t even easy to fix, and when querying afterwards I had to be very careful to know which events were “bad” and which were OK, otherwise I’d get completely useless results.

What I really wanted was to plainly remove the bad events. They had no value other than saying “You fucked up”.

And yeah, maybe there’s some really intriguing correlation in the data of those events that I could have analysed years later to find some amazing business insights. But I trusted my ability to distinguish crap from potential value.

And so I designed a little piece of code to filter and transform every event in the db.

Now I could add some filters saying, “events with these characteristics, don’t keep them. Events with those characteristics, transform into a v2 with these extra properties” and so on.

I’d use filters to select the events needed to populate the new versions with the data I wished they’d had from the beginning.
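To sketch what this filter-and-transform pass looks like (a minimal sketch only — the event shapes, type names, and the `migrate` helper are hypothetical, not our real code):

```python
# Sketch of the read -> filter -> transform -> write pass over an
# event db. Event shapes and names here are invented for illustration.

def migrate(events):
    """Yield the events for the new store: drop known-bad events,
    upgrade others to a v2 shape with the extra properties we
    wished they'd had from the beginning."""
    for event in events:
        # Filter: don't keep events from the known-bad burst
        # (here: orders written with no total).
        if event["type"] == "OrderPlaced" and event["data"].get("total") is None:
            continue
        # Transform: upgrade a v1 event into a v2 with an extra property.
        if event["type"] == "CustomerRegistered":
            event = {
                "type": "CustomerRegisteredV2",
                "data": {**event["data"], "source": "migration-backfill"},
            }
        yield event

old_events = [
    {"type": "OrderPlaced", "data": {"total": None}},        # bad burst
    {"type": "OrderPlaced", "data": {"total": 42}},          # fine, keep
    {"type": "CustomerRegistered", "data": {"name": "Ada"}}, # upgrade to v2
]
new_events = list(migrate(old_events))
print([e["type"] for e in new_events])  # ['OrderPlaced', 'CustomerRegisteredV2']
```

In practice you’d read `old_events` from the existing store and write `new_events` to the fresh one, but the filtering and transforming logic is the whole trick.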

Drawbacks:

  1. When browsing the new db in the Event Store UI, all events seem to have the same timestamp. My event base class has its own timestamp, though, and many events carry their own time properties anyway.

  2. Immutable eventstore working as an audit log? Not anymore.

But that’s okay. My life just got so much better :smiley:

Maybe this is quite similar to how projections are used by some. Anyway, it would be interesting to know how others do this.

This is a perfectly valid thing to do (read -> transform -> write) and I have mentioned it many times.

The timestamp you are seeing is when the event was accepted. There is no way to back-date these. As you have found, just use your own timestamp if you want to.

Projections could, as you point out, do the same thing.

That’s good to hear Greg, I really feel this is a great way to work with the db.
Maybe I came across as antagonistic in some way (I probably thought this approach was “discouraged”); that wasn’t my intention, though.

Now I’ll strive to make this transformation easier to do, constantly fine-tuning the data.

It came to my mind that maybe the UI could make the timestamp field optional somehow, so that we can decide for ourselves what underlying data to use for it?

Not sure, though, if it’s of high enough interest to request as a new feature.

For me, the timestamp in the UI representing the order received makes more sense, given that things are linearized. I would recommend doing something on top of that to support your custom dates.