So I’ve been reaping the benefits of constantly logging events from GitHub since my last e-mail to this group.
I’ve not got any projections on that server; I deliberately didn’t write any, because developing projections against a large and changing data set is a PITA at the moment with regard to developer experience.
Now I want to write some projections, so I’m developing them against a development version of the data set (5% of the event count) and then deploying the projection code against the full data set.
However, I’ve run into a roadblock: my server is now pretty much dead while the data set is crunched through a single projection (96% usage) that just counts pushes.
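For context, the projection in question is roughly of this shape — a sketch in Event Store’s JavaScript projection style, where `fromAll()` and the `when()` handlers are provided by the server’s projection runtime. The event type name `PushEvent` and the state shape are my assumptions, and the tiny `fromAll` stub here exists only so the sketch can be read (and run) outside the server:

```javascript
// Local stand-in for Event Store's fromAll(); inside the server,
// the projection runtime provides this and feeds events itself.
function fromAll() {
  return {
    when: function (handlers) {
      this.handlers = handlers;
      return this;
    },
    // Illustration-only helper: replay an array of events through
    // the registered handlers and return the final state.
    run: function (events) {
      var state = this.handlers.$init();
      var self = this;
      events.forEach(function (e) {
        var handler = self.handlers[e.eventType];
        if (handler) {
          state = handler(state, e);
        }
      });
      return state;
    }
  };
}

// The projection itself: start with a zeroed counter, bump it
// on every PushEvent (event type name is an assumption here).
var projection = fromAll().when({
  $init: function () {
    return { pushes: 0 };
  },
  PushEvent: function (state, event) {
    state.pushes += 1;
    return state;
  }
});
```

Simple as it is, this still makes the server walk every event in the store once, which is presumably where the load is coming from.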
I can’t load most of the HTTP pages from the server, and I certainly can’t view the state of that projection or upload any more projections.
Is there some setting somewhere to say “Hey, make the projections slower and give priority to other things?”
What is the expected behaviour when giving the server a large number of projections or heavy projection usage? (It’s only 2.5 million events or so, though it’s a small instance on Amazon: not a micro, I pay for this.)
I can scale up for now; I’m just wondering how I’d insulate myself against this sort of thing in a production environment. Multiple nodes for running projections and a set of write-only nodes?