Is there some more info on the scavenge feature? Best practices on when/how often to run it?
It's pretty simple: it cleans up things that have been deleted. Best
practices depend on your workload. If your workload rarely deletes
things, scavenge rarely. If it deletes a lot, scavenge more often.
Scavenging also causes write amplification, so it's normally best to
run it during periods when your workload is lower.
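One way to follow that advice is to schedule a scavenge during off-peak hours against the node's admin HTTP endpoint. The sketch below is an assumption-heavy illustration, not official tooling: it assumes a stock single-node setup on `localhost:2113`, the `/admin/scavenge` endpoint, and the default `admin`/`changeit` credentials — adjust all of these for your deployment.

```python
import base64
import urllib.request

def build_scavenge_request(host="localhost", port=2113,
                           user="admin", password="changeit"):
    """Build (but do not send) a POST to the scavenge admin endpoint.

    Host, port, endpoint path, and credentials are assumptions for a
    default single-node install -- change them to match your cluster.
    """
    url = f"http://{host}:{port}/admin/scavenge"
    req = urllib.request.Request(url, method="POST")
    # Admin endpoints require authentication; basic auth is used here.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

if __name__ == "__main__":
    req = build_scavenge_request()
    print(req.get_method(), req.full_url)
    # Actually sending the request starts a scavenge; run this from a
    # scheduler (e.g. cron) during your low-traffic window:
    # urllib.request.urlopen(req)
```

Wrapping the call in a scheduled job keeps the write amplification confined to quiet periods, per the advice above.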
Thanks, yeah I don’t think I delete anything in my workload
Is it safe to stop the process while scavenging?
If yes, is it going to resume once restarted?
On Tuesday, January 5, 2016 at 21:45:09 UTC, Greg Young wrote:
How many “things that are deleted” may the system generate by itself?
We had never deleted anything on our server, but after scavenging about 500 MB were saved and the number of chunks went down from a dozen to just two. The only thing that comes to mind is queries (and we hadn't run that many of them, anyway) — do they take up space that gets scavenged?
I have the same question. Can you please provide a short answer to this question?
We are experimenting with data volumes for a system we're going to develop, and after a few hundred thousand events the event store took up several GBs.
After scavenging (without having deleted anything), the data volume shrank to about 256 MB.
There are quite likely some events that were scavenged (for example
the $stats stream, which also has pretty big events). Depending on
whether you were using competing consumers or projections, these also write