Hard deletes + 410 GONE, unbounded dataset?

Hi,

Just exploring the delete-stream semantics and considering what approach to take in my SQL rip-off… 🙂

In order to support the 410 GONE semantics, I assume some state must be retained for the hard-deleted stream.

So what happens over time? Is this state kept forever? Are there any limits or bounds to be concerned with?
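To make my assumption concrete, here's roughly the read path I have in mind (Python + SQLite; the schema and names are mine, purely hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE streams (
        name       TEXT PRIMARY KEY,
        tombstoned INTEGER NOT NULL DEFAULT 0   -- 1 = hard deleted
    )
""")

def read_status(name: str) -> int:
    """Map stream state to the HTTP status a read would return."""
    row = conn.execute(
        "SELECT tombstoned FROM streams WHERE name = ?", (name,)
    ).fetchone()
    if row is None:
        return 404  # never existed (or state fully purged)
    if row[0]:
        return 410  # GONE: only possible if the tombstone row is retained
    return 200

conn.execute("INSERT INTO streams (name, tombstoned) VALUES ('inv-1', 1)")
print(read_status("inv-1"))  # 410
print(read_status("inv-2"))  # 404
```

The 410 branch is the crux: without that retained row, a deleted stream is indistinguishable from one that never existed.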

Hard delete in other systems (RDBMS) is a bit different.

Correct me if my assumption is wrong of course.

Cheers

Hard delete leaves a tombstone in the index (it costs ~20 bytes)
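For illustration only (a made-up layout, not the actual index format), a fixed-size entry in that ballpark might pack a stream-name hash, a log position, and a flag:

```python
import struct

# Hypothetical tombstone index entry: 8-byte stream-name hash,
# 8-byte log position, 4-byte flags. Just to show why the
# per-stream cost lands on the order of 20 bytes.
TOMBSTONE_ENTRY = struct.Struct("<QQI")
print(TOMBSTONE_ENTRY.size)  # 20 bytes per deleted stream
```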

Ok, cheers. Will chew on this a bit. At this point I think I want a ‘really-truly-hard-delete’. May make a store specific implementation concern.

ES supports a "really truly hard delete". Basically, there is a special index value that represents that the stream has been deleted. This value is known in parlance as a tombstone. All other data can be deleted, but you need to leave this value to prevent the stream from being re-created in the future. Trickier, actually, is the soft delete and dealing with cacheability of future writes without causing issues.
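A minimal sketch of that write-path guard (my own hypothetical schema again, not ES's API): the append checks for a tombstone first and refuses to re-create the stream even after all its event data is gone.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE streams (name TEXT PRIMARY KEY,
                          tombstoned INTEGER NOT NULL DEFAULT 0);
    CREATE TABLE events  (stream TEXT, payload TEXT);
""")

def append(name: str, payload: str) -> None:
    # The tombstone is the one record that must survive: its presence
    # is what lets us refuse to re-create a hard-deleted stream.
    row = conn.execute(
        "SELECT tombstoned FROM streams WHERE name = ?", (name,)
    ).fetchone()
    if row and row[0]:
        raise PermissionError(f"stream {name!r} is gone (hard deleted)")
    conn.execute("INSERT OR IGNORE INTO streams (name) VALUES (?)", (name,))
    conn.execute("INSERT INTO events (stream, payload) VALUES (?, ?)",
                 (name, payload))

append("orders-1", "created")
# Hard delete: drop the event data, keep only the tombstone flag.
conn.execute("DELETE FROM events WHERE stream = 'orders-1'")
conn.execute("UPDATE streams SET tombstoned = 1 WHERE name = 'orders-1'")
try:
    append("orders-1", "resurrected?")
except PermissionError as e:
    print(e)  # re-creation refused: the tombstone survives
```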

Yes, I’m considering whether I want to retain tombstones or not (or make it optional). I can see some scenarios where I could be deleting a lot. Further analysis and measurement needed…

So we did this with ES. Remember, it's per stream. As I mentioned, it's an index entry, so it is 24 bytes (exact). Now if you deleted 10M streams, that would be a ~240 MB disk cost. I wouldn't consider this ridiculous. We should add an offline option to scavenge them out.

I'll check the byte cost in SQL land. Probably not a whole lot more, really. I'm also considering using this on laptops, tablets and smartphones, so I just want to understand the implications fully. A scavenging process seems like a good idea (for my side anyway).
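Roughly what I'm picturing for that scavenge (same hypothetical schema, plus an assumed `deleted_at` timestamp column; and it deliberately gives up the 410/no-re-creation guarantee for the scavenged streams):

```python
import sqlite3
import time

def scavenge_tombstones(conn: sqlite3.Connection,
                        older_than_days: int) -> int:
    """Offline pass: purge tombstones past a retention window.

    Trade-off: a scavenged stream name can be re-created afterwards,
    and reads on it return 404 instead of 410.
    """
    cutoff = time.time() - older_than_days * 86400
    cur = conn.execute(
        "DELETE FROM streams WHERE tombstoned = 1 AND deleted_at < ?",
        (cutoff,),
    )
    conn.commit()
    return cur.rowcount  # number of tombstones scavenged
```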