$all feed

Hi,

I am new to EventStore (just started today) and have been playing around with the .NET API.

My scenario is a standard one. I need streams for building aggregates, and a unified stream for replaying events to build the read models.

It seems to me that the $all stream would be well suited to the last part, except that it appears to hold copies of the events from the aggregate streams rather than links.
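For the replay part, this is roughly what I had in mind - a minimal sketch against the .NET ClientAPI, assuming a default single node on localhost:1113 (and reading $all may also need admin credentials depending on ACLs):

```csharp
using System;
using System.Net;
using EventStore.ClientAPI;

class ReplayAll
{
    static void Main()
    {
        // Assumes a default single node listening on TCP port 1113.
        var conn = EventStoreConnection.Create(new IPEndPoint(IPAddress.Loopback, 1113));
        conn.ConnectAsync().Wait();

        var position = Position.Start;
        AllEventsSlice slice;
        do
        {
            // Page through $all in commit order, 500 events per read.
            slice = conn.ReadAllEventsForwardAsync(position, 500, false).Result;
            foreach (var e in slice.Events)
                Console.WriteLine("{0} / {1}", e.Event.EventStreamId, e.Event.EventType);
            position = slice.NextPosition;
        } while (!slice.IsEndOfStream);

        conn.Close();
    }
}
```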

The problem is that when playing around, testing, demoing, etc., it would be nice to be able to clean up the event store and start from scratch. But when I delete aggregate streams, the corresponding events still exist in the $all stream, rendering it invalid for replay. Furthermore, the $all stream cannot be deleted or changed.
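For example, this is how I have been deleting the aggregate streams while testing (the stream name is made up; conn is the connection from the sketch above):

```csharp
// Soft-delete one aggregate stream -- its events still show up in $all.
conn.DeleteStreamAsync("customer-42", ExpectedVersion.Any).Wait();
```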

So what should I do?

Should I build my own unified replay stream, i.e. always write events twice: once to an aggregate stream and once to a unified stream? Or should I just delete the entire EventStore node when cleanup is needed? (I guess that would be preferable?! But can it be done from the .NET API?)
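The write-twice alternative would look something like this (a sketch only; stream names and payload are made up, and the usings/connection are as in the first sketch plus System.Text):

```csharp
// Append the same event both to the aggregate's own stream and to a
// hand-rolled unified stream. Note the two appends are separate operations,
// not one transaction, so a crash in between leaves them out of sync.
var data = new EventData(Guid.NewGuid(), "OrderPlaced", true,
                         Encoding.UTF8.GetBytes("{\"orderId\": 42}"), null);

conn.AppendToStreamAsync("order-42", ExpectedVersion.Any, data).Wait();
conn.AppendToStreamAsync("unified-stream", ExpectedVersion.Any, data).Wait();
```

That lack of atomicity is part of why I would prefer a built-in solution.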

Can you describe your app? Messages/second, # of projections, etc.?

Hi Greg,

Well, it is kind of hard to say since it does not exist yet. The probable number of users is 20,000, and the app will be read-heavy. So I would guess 100 messages per second at peak, but normally 0-10.

We do not plan to use the EventStore JavaScript projections at all, but that might change if you could send me links to some great demo applications of them (since I am totally new to EventStore, I have not seen their value yet).

Our read model will be built with .NET handlers - we have done it this way in a similar app, but used a relational database as the event store.

Anyway - my questions can be boiled down to this:

  1. Can I delete the $all stream at all via an API call? E.g. by deleting the EventStore “node”, if that is the correct term.

  2. If not, would it be advisable to build the unified/combined event stream myself?

The main reason I want a unified stream is my experience with event sourcing in the other app I mentioned: it avoids problems with the order of events when replaying read models. This is also what Rinat advises here:

https://groups.google.com/forum/#!topic/dddcqrs/T43emPWJ1C4

It feels right to me to have both a unified stream and one per aggregate, as Rinat suggests.

In very load-heavy systems I guess the unified stream could become a bottleneck, but not in our case (besides, you do exactly the same thing when you build the $all stream).

I would prefer to use the $all stream for this since it is built in. But I do not know exactly how it works. Is it guaranteed to be updated when another stream is updated, for instance?

BTW, I am looking forward to seeing your talk at Buildstuff in November. The topic has not been announced yet, I think?

To answer my own question: it does seem that I cannot use the $all stream, since it also contains all kinds of system events. I had not noticed that.

But if it is impossible to delete from the $all stream, then how is the database ever cleaned up? When running a scavenge, for instance, only events from the original streams are deleted; the $all stream still contains everything.

I know it is not an issue in production systems, since events are never deleted, but in test/debug/demo scenarios it would be nice to be able to clean the event store.

When you scavenge, things are no longer in the $all stream afterwards - unless you have only a single chunk and try to scavenge that (you can't scavenge the chunk being written to).

While there are system events in $all, it's often better to subscribe to $all and ignore them than to make more complex subscriptions (especially when you are discussing levels of, say, 100 messages/second). It is network-stupid, complexity-smart.
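Something like this - a rough sketch from memory, the exact delegate shapes vary by client version, and Dispatch is a placeholder for whatever your read-model handlers are:

```csharp
// Live-subscribe to $all and skip anything from a system stream
// (system stream IDs start with '$'). conn is your IEventStoreConnection.
conn.SubscribeToAllAsync(
    resolveLinkTos: false,
    eventAppeared: (subscription, resolved) =>
    {
        if (resolved.OriginalStreamId.StartsWith("$"))
            return;             // system event -- ignore it
        Dispatch(resolved);     // hypothetical hook into your handlers
    }).Wait();
```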

To delete a db, just delete the folder you passed for --db.
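From .NET that is just a normal directory delete once the node is stopped (the path here is made up - use whatever you passed for --db):

```csharp
using System.IO;

// Stop the node first, then wipe its data directory.
var dbPath = @"C:\EventStore\demo-db";
if (Directory.Exists(dbPath))
    Directory.Delete(dbPath, recursive: true);
```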

Hi Greg,

Thank you. I will definitely reconsider and probably use the $all stream then.

Your comment about scavenging I did not quite understand - could you elaborate?

Nice touch that I can just delete the db folder :) - I thought as much, but was not able to find documentation about it.

Regards, Jesper

Scavenging will remove events, but it only works on the chunk files that are not the current chunk (the one being written to). E.g. if you have 10 chunks, scavenging can look at 9 of them. If you run a scavenge and have only a single chunk, nothing will happen.
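If you want to kick one off from code instead of the web UI, there is an HTTP endpoint for it - a sketch assuming a stock install (HTTP port 2113, default admin:changeit credentials):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// POST /admin/scavenge starts a scavenge on the node.
var client = new HttpClient { BaseAddress = new Uri("http://localhost:2113") };
var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("admin:changeit"));
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Basic", token);
var response = client.PostAsync("/admin/scavenge", new StringContent("")).Result;
Console.WriteLine(response.StatusCode);
```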

OK. Thank you. :)