Haha I love causing a stir! Sorry for the delayed reply; I'm realizing it was unwise to post a question right before traveling (I'm still traveling through October, so replies will continue to be spotty). This is all incredibly useful! I really appreciate you squaring off some of my internal definitions.
To answer some of these questions, though: we make a variety of tools that help patients stay adherent to their medication and track how well they're doing. There are a ton of apps that do this kind of thing, but we're fairly unique in the space for using physical devices. Event sourcing came up as a possible way to ease the transition away from a monolithic architecture towards microservices. The transition is still in the experimental stages, but so far we're really impressed by the kinds of things it seems to enable!
So in rough order:
- A device will only have 1 patient, but a patient may have multiple devices (rare, but some folks do it)
- Most of our patients have long-term treatment plans, so they use the devices for a while (months to years) and then send them back for decommissioning, generally when the drug is no longer indicated. From a data perspective, the device is not reused.
- The device collects adherence data, which needs to be associated back to that patient.
For the follow-up questions: because of the logistics of certifying devices and getting them shipped to the patient, the physical process enforces a strict order: device data populated -> patient data populated -> mapping -> shipping / activation. On the data side, we operate with the assumption that that's always true, but I have considered weird scenarios where it breaks down (e.g. I've been lobbying for us to work with safe injection sites, where patients would necessarily be anonymous). Building for those cases isn't getting dev time right now, though.
The solution I've got right now, which seems to work well, is that there are three separate streams:
- `patient-$PATIENT_ID`
- `device-$DEVICE_ID`
- `patient_device_mapping-$DEVICE_ID`
I've then built a projection (in Go, connected to a persistent subscription to the `patient_device_mapping` event type stream) that pushes a LinkTo of each mapping event into the patient and device streams. The source of truth is still the `patient_device_mapping` stream, but from the perspective of a projection reading the streams, it's obvious what's going on. With the exception of the LinkTo events, these are each instance streams.
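In case it's useful to anyone following along, the linker is roughly this shape. This is a trimmed sketch rather than our production code, and a few things in it are assumptions: the v3 Go client (`github.com/EventStore/EventStore-Client-Go/v3/esdb`), a pre-created subscription group (`device-linker` is a made-up name), the `$et-patient_device_mapping` event type stream, and `patientId` / `deviceId` fields in the mapping event body. Double-check the method names against whatever client version you're on.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/EventStore/EventStore-Client-Go/v3/esdb"
)

// mappingData is the shape I'm assuming for the mapping event body.
type mappingData struct {
	PatientID string `json:"patientId"`
	DeviceID  string `json:"deviceId"`
}

// linkTo appends a "$>" link event to targetStream that points at
// eventNumber@sourceStream: the on-disk representation of LinkTo.
func linkTo(ctx context.Context, client *esdb.Client, targetStream, sourceStream string, eventNumber uint64) error {
	_, err := client.AppendToStream(ctx, targetStream, esdb.AppendToStreamOptions{}, esdb.EventData{
		EventType:   "$>",
		ContentType: esdb.ContentTypeBinary,
		Data:        []byte(fmt.Sprintf("%d@%s", eventNumber, sourceStream)),
	})
	return err
}

func main() {
	settings, err := esdb.ParseConnectionString("esdb://localhost:2113?tls=false")
	if err != nil {
		log.Fatal(err)
	}
	client, err := esdb.NewClient(settings)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := context.Background()

	// The group must already exist, created with link resolution enabled,
	// because $et- streams contain links to the real mapping events.
	sub, err := client.SubscribeToPersistentSubscription(ctx,
		"$et-patient_device_mapping", "device-linker",
		esdb.SubscribeToPersistentSubscriptionOptions{})
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Close()

	for {
		msg := sub.Recv()
		if msg.SubscriptionDropped != nil {
			log.Fatal(msg.SubscriptionDropped.Error)
		}
		if msg.EventAppeared == nil {
			continue
		}
		ev := msg.EventAppeared.Event

		var m mappingData
		if ev.Event == nil || json.Unmarshal(ev.Event.Data, &m) != nil {
			sub.Nack("unreadable mapping event", esdb.NackActionPark, ev)
			continue
		}

		// Push a link to the mapping event into both instance streams.
		src, num := ev.Event.StreamID, ev.Event.EventNumber
		if linkTo(ctx, client, "patient-"+m.PatientID, src, num) != nil ||
			linkTo(ctx, client, "device-"+m.DeviceID, src, num) != nil {
			sub.Nack("link append failed", esdb.NackActionRetry, ev)
			continue
		}
		sub.Ack(ev)
	}
}
```

The `$>` event type with `eventNumber@streamName` as the body is just how LinkTo is represented on disk, so anything reading the patient or device stream with link resolution turned on sees the original mapping event inline.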
Post-mapping, our patient events actually include the device ID in the metadata (this predates the event sourcing experiments), so I've also experimented with a version that skips the LinkTo and just watches for that metadata change while reading the patient stream. In the lifecycle of a patient / device, mapping events don't happen often (for a device, it's literally twice: mapping and unmapping), so writing 2 extra events once in a while doesn't feel like a high cost, and I prefer the explicitness of "this thing happened" showing up as an event in the stream, rather than promoting metadata to be part of the data model.
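For comparison, the metadata-watching variant looks something like this. Same caveats as above, plus it assumes the patient events carry a `deviceId` key in their user metadata once mapped, which is roughly how ours look:

```go
package main

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"log"
	"math"

	"github.com/EventStore/EventStore-Client-Go/v3/esdb"
)

// deviceIDFor scans a patient stream and returns the device ID the first
// time it shows up in an event's user metadata, i.e. post-mapping.
func deviceIDFor(ctx context.Context, client *esdb.Client, patientID string) (string, error) {
	stream, err := client.ReadStream(ctx, "patient-"+patientID, esdb.ReadStreamOptions{}, math.MaxUint64)
	if err != nil {
		return "", err
	}
	defer stream.Close()

	for {
		ev, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			return "", err
		}

		// Assumed metadata shape once mapped: {"deviceId": "..."}.
		var meta struct {
			DeviceID string `json:"deviceId"`
		}
		if json.Unmarshal(ev.Event.UserMetadata, &meta) != nil {
			continue // pre-mapping events may carry no (or other) metadata
		}
		if meta.DeviceID != "" {
			return meta.DeviceID, nil
		}
	}
	return "", fmt.Errorf("patient %s has no mapped device", patientID)
}

func main() {
	settings, err := esdb.ParseConnectionString("esdb://localhost:2113?tls=false")
	if err != nil {
		log.Fatal(err)
	}
	client, err := esdb.NewClient(settings)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	id, err := deviceIDFor(context.Background(), client, "12345")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("patient 12345 is mapped to device %s", id)
}
```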
After talking this out, I suspect the "most correct" version of this is to leave the patient and device streams alone, and have `patient_device_mapping` events trigger the generation of a new derived stream, `patient_device`, that includes everything in the patient, device and mapping streams. I'll play with that, probably this weekend, and see how I like it.
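I haven't built that yet, so this last sketch is only what I have in mind: on a mapping event, backfill links for everything already in the three source streams into the derived stream (new events after that would be linked by the same subscription loop as the linker above). The `patient_device-$DEVICE_ID` key is a guess, not a decision, and the naive per-stream backfill order would need replacing with a real merge by timestamp or position:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"io"
	"log"
	"math"

	"github.com/EventStore/EventStore-Client-Go/v3/esdb"
)

// backfillDerived links every event already sitting in the patient, device
// and mapping streams into one derived patient_device stream. Linking
// rather than copying keeps the originals as the single copy of the data;
// the derived stream is just an index over them. Note: this appends one
// source stream at a time, so cross-stream ordering is wrong; a real
// version would merge the three streams by timestamp or position.
func backfillDerived(ctx context.Context, client *esdb.Client, patientID, deviceID string) error {
	derived := "patient_device-" + deviceID // hypothetical key; TBD
	sources := []string{
		"patient-" + patientID,
		"device-" + deviceID,
		"patient_device_mapping-" + deviceID,
	}
	for _, src := range sources {
		stream, err := client.ReadStream(ctx, src, esdb.ReadStreamOptions{}, math.MaxUint64)
		if err != nil {
			return err
		}
		for {
			ev, err := stream.Recv()
			if errors.Is(err, io.EOF) {
				break
			}
			if err != nil {
				stream.Close()
				return err
			}
			// Append a "$>" link pointing back at the source event.
			_, err = client.AppendToStream(ctx, derived, esdb.AppendToStreamOptions{}, esdb.EventData{
				EventType:   "$>",
				ContentType: esdb.ContentTypeBinary,
				Data:        []byte(fmt.Sprintf("%d@%s", ev.Event.EventNumber, src)),
			})
			if err != nil {
				stream.Close()
				return err
			}
		}
		stream.Close()
	}
	return nil
}

func main() {
	settings, err := esdb.ParseConnectionString("esdb://localhost:2113?tls=false")
	if err != nil {
		log.Fatal(err)
	}
	client, err := esdb.NewClient(settings)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	if err := backfillDerived(context.Background(), client, "12345", "67890"); err != nil {
		log.Fatal(err)
	}
}
```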
Thank you so much for all the help!