Understanding Event Store with comparisons to RDBMS concepts.

Most of these questions come from the perspective of someone trying to take the leap from an RDBMS. These are the steps my mind is taking to understand this new paradigm…

So in an RDBMS you have your table schema and data. Your consumers / clients then throw SQL at this data.

In Event Store you have events continually being saved into streams.

For the most part, would you consider streams to be analogous to your tables?

I see in the examples lots of great ways to introspect your events, essentially projecting out views, or new streams as they seem to become.

Say you add a new client that is interested in a new view of the data. From what I gather, you write some snippet of JavaScript and voilà, a new stream is available that you can then subscribe to.

Conceptually, coming from SQL: you send the query, you get your data, and you can repeat this as many times as you wish.

With Event Store it appears a bit different: you create the view (stream) as a one-time op, then subscribe to it forevermore, picking up missed data on reconnect etc.

So let's say you deploy a new microservice that is interested in a particular view of your event store; as part of its bootstrapping it creates this view (a new stream), then subscribes to it, for example.

So when you scale up your microservice, should you be managing this bootstrapping of new streams in any way, or is it something that you can treat as a no-cost idempotent op?

Then if Event Store crashes, I assume that, provided you bring it back up on the same data folder, everything comes back as it was? Subscriptions, projections, constructed views / streams etc.

Sorry if these questions seem a little basic, but this really is new territory, although I have a lot of interest in what appears to be on offer.

Thanks in anticipation of your response(s).

Normally the "views" would be outside projections, where the ES manages storing the events. An example might be projecting into an RDBMS or into a graph db. That service then queries its view of the event stream. This talk may help a bit: https://www.youtube.com/watch?v=GbM1ghLeweU (especially the bits talking about outside projections).
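To make the "outside projection" idea concrete, here is a minimal sketch of a service folding events into its own read model. This is an assumption about the shape, not the library's API: a plain object stands in for the external RDBMS or graph db, and the event names are borrowed from the snippets later in this thread.

```javascript
// Fold each incoming event into the service's own read model.
// The model object is a stand-in for an external store (RDBMS, graph db).
function applyEvent(model, event) {
  if (event.type === "heavyCpuFound") {
    model.count = (model.count || 0) + 1;   // how many heavy-CPU events seen
    model.lastLevel = event.body.level;     // most recent CPU level
  }
  return model;
}

var readModel = {};
var incoming = [
  { type: "heavyCpuFound", body: { level: 55 } },
  { type: "unrelated",     body: {} },
  { type: "heavyCpuFound", body: { level: 71 } }
];
incoming.forEach(function (e) { applyEvent(readModel, e); });
// readModel is now { count: 2, lastLevel: 71 }
```

The point of the pattern is that the subscriber owns its view; Event Store only stores and serves the events.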

Why not just have that service open a subscription and build its own? Normally you should be using the internal stuff very sparingly.

Ok, I get the polyglot concepts, but there were some questions surrounding the mechanics of interacting with Event Store:

So let's say you deploy a new microservice that is interested in a new view of the data, and it understands how it wants to construct that view out of one or more existing streams. You define the projection, and it either uses Event Store or an external system to store the new view?

        "$statsCollected" : function(s,e) {
              var currentCpu = e.body["sys-cpu"];
              if(currentCpu > 40) {
                   emit("heavycpu", "heavyCpuFound", {"level" : currentCpu})


        "$statsCollected" : function(s,e) {
              var currentCpu = e.body["sys-cpu"];
              if(currentCpu > 40) {
                   save_to_external_rdbms("heavycpu", "heavyCpuFound", {"level" : currentCpu})
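For context, handlers like the ones above normally sit inside a projection definition (e.g. fromStream(...).when({...})) that is posted to the server. The sketch below stubs emit and fromStream so the shape can be exercised outside Event Store; the stubs are stand-ins for the projection runtime, not the real thing, and the stream name "stats" is an assumption.

```javascript
// Stand-ins for functions the projection runtime normally provides.
var emitted = [];
function emit(stream, type, body) {
  emitted.push({ stream: stream, type: type, body: body });
}
function fromStream(name) {
  return { when: function (handlers) { return handlers; } };
}

// The projection definition, in the same shape as the snippets above.
var handlers = fromStream("stats").when({
  "$statsCollected": function (s, e) {
    var currentCpu = e.body["sys-cpu"];
    if (currentCpu > 40) {
      emit("heavycpu", "heavyCpuFound", { "level": currentCpu });
    }
  }
});

// Feed it two fake events: only the first crosses the threshold.
handlers["$statsCollected"]({}, { body: { "sys-cpu": 55 } });
handlers["$statsCollected"]({}, { body: { "sys-cpu": 10 } });
// emitted now holds one event targeting the "heavycpu" stream
```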

My question is: do you do the first op, then set up, say, an atom feed on ‘heavycpu’ to populate your external RDBMS, or is there any value in diving straight into the second code block?

I think my confusion stems from the fact that the docs area of the website has no code examples such as the above; it just seems to be concerned with RESTful reads and writes to the Event Store streams, whereas in the blogs I am finding code that interacts with Event Store in a more complex manner, for example:


Is this kind of information anywhere in the new docs?

Just reading it now, it appears that every time you create a new projection you are in fact creating a new stream, and it will return you a unique RESTful URL to consume that stream.

If I ran that same code snippet above twice, would I then have two streams, each with the same data but different UIDs? Is there a way I should be looking to bootstrap / configure these new streams and track them so that they are not created each time a service instance is deployed (i.e. avoid baking the above code into the consumer service)?

Sorry, I tried to delete and correct my post a little… now they are misaligned.

Yes, I think my question is about the definitions of the projections that a new service wants to consume: I'm asking if these really need to be a one-time setup against the Event Store, and not baked into the service with code snippets like those above. I think I'm getting there conceptually now.

You can bake them on the subscriber side and just have the subscriber listen for events.

But if you bake them into the subscriber, then multiple instances will create new (but identical in content) streams and you would be consuming the same data multiple times, whereas a one-time setup plus competing consumers would be the model you were looking for?

I'm viewing it now like this: commands such as fromStream are akin to CREATE TABLE, and then your subscriptions are akin to your read queries.

Or am I confusing myself with issuing a readAll against /projections/transient?

See SubscribeToStreamFrom etc. in the client API (or atom feeds in the HTTP API). Normally when you want outside models you want to do it this way, not with an internal JS query.
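The idea behind a catch-up subscription like SubscribeToStreamFrom can be sketched roughly as: replay events from a stored checkpoint, handle each one, and advance the checkpoint so a restart resumes where it left off. In this sketch readStreamFrom is a hypothetical stand-in for a client read call, not a real API; in a real service the checkpoint would be persisted alongside the read model.

```javascript
// Replay all events at or after the checkpoint, handling each and
// advancing the checkpoint past it. Returns the new checkpoint.
function catchUp(readStreamFrom, checkpoint, handle) {
  var events = readStreamFrom(checkpoint);
  events.forEach(function (e) {
    handle(e);
    checkpoint = e.number + 1; // first event we have NOT yet handled
  });
  return checkpoint;
}

// Fake client: returns the events at or after the given position.
var all = [
  { number: 0, type: "heavyCpuFound" },
  { number: 1, type: "heavyCpuFound" },
  { number: 2, type: "heavyCpuFound" }
];
function fakeRead(from) {
  return all.filter(function (e) { return e.number >= from; });
}

var handled = [];
var cp = catchUp(fakeRead, 1, function (e) { handled.push(e.number); });
// handled === [1, 2] and cp === 3: a reconnect at cp replays nothing already seen
```

This is why the one-time-setup worry above largely goes away on the subscriber side: the subscriber's state is just its checkpoint, and resuming is idempotent with respect to what it has already processed.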

ok thanks!

Call me dumb, but I can't find this info anywhere… could you please provide a link?

http here: http://docs.geteventstore.com/http-api/3.4.0/
tcp here: http://docs.geteventstore.com/dotnet-api/3.3.1/reading-events/
(same logic). For the

includes a quick snippet for an example

Yes, there is definitely documentation lurking within the blogs that is not covered anywhere in the official docs.

That’s not a gripe - simply a statement, I understand the demands…

I put it as an example (the other two links are the documentation)

Sorry, but I can’t see any mention of methods such as SubscribeToStreamFrom in the docs? Am I missing something?

For HTTP, see the example of reading an atom feed forward (from beginning to end) on the doc page linked. For the client API, look at the bottom; the example is using read-from-all, but the logic is roughly the same, just readfromstream instead of readfromall. Will put up an explicit example using subscribetostreamfrom.
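Reading an atom feed forward boils down to paging: fetch the earliest page, process its entries, then follow the link to the next-newer page until none remains. A rough sketch, where getPage is a hypothetical stand-in for an HTTP GET that returns the parsed feed, and the URL paths are illustrative rather than taken from the docs:

```javascript
// Walk a paged feed from oldest to newest, handling every entry.
function readForward(getPage, firstPageUrl, handle) {
  var url = firstPageUrl;
  while (url) {
    var page = getPage(url);
    page.entries.forEach(handle);
    url = page.next; // link to the next-newer page, or null at the head
  }
}

// Fake two-page feed standing in for HTTP responses.
var pages = {
  "/streams/heavycpu/0/forward/2": { entries: ["e0", "e1"], next: "/streams/heavycpu/2/forward/2" },
  "/streams/heavycpu/2/forward/2": { entries: ["e2"], next: null }
};
var seen = [];
readForward(
  function (u) { return pages[u]; },
  "/streams/heavycpu/0/forward/2",
  function (e) { seen.push(e); }
);
// seen === ["e0", "e1", "e2"]
```

The real feed uses rel links in the atom document to point at the adjacent pages; the doc page linked above shows the actual link names and URL shapes.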

Yes, I don't mean to be a pain or come across as pedantic; I'm just giving feedback on how a noob may get a little lost in some areas of the docs. All the docs make perfect sense, but there are some different code examples to be found outside the main docs that can initially lead to some confusion, to a noob at least.

BTW, for a code example (full end to end) you might also want to check https://github.com/eventstore/sklaida; it uses a catch-up subscription now but could also easily use a competing subscription.