Scratch that… this is a phantom issue. I misunderstood the intended approach.
Also, I’m sorry if my outline of the non-problem cast EventStore in a negative light.
The headOfStream issue is still outstanding. In speaking with James, though, he’s suggested that I not count on it.
[NOTE: There’s a footnote at the bottom of this message that outlines a current malfunction that Greg is aware of.]
I hadn’t paid attention to how cache control works.
I thought that all requests for pages would be cached. I thought that I would read up to the last page with data, and go no further. I thought that if I requested a page beyond the last page, that it would be cached, causing subsequent requests for that page to return a page that has an empty entries list.
In my understanding, my code would check whether to continue getting pages so as to avoid going beyond the last page, and thus avoid the caching of a page of empty entries.
My understanding of EventStore now is that the cache control headers vary based on whether the requested page is the last whole page of data or not.
By “whole page”, I mean that there are as many entries in the page’s entries list as were requested in the page size parameter of the query.
eg: For a URI of http://127.0.0.1:2113/streams/sendFunds-D9F2EEB1-65AF-4417-8265-68F01B9753AC/39/backward/20, the “20” after the “backward” is the page size.
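To make the position of the page size concrete, here’s a small Python sketch (a hypothetical helper of my own, not part of EventStore) that pulls the page size out of a feed URI of this shape:

```python
from urllib.parse import urlparse

def page_size_from_uri(uri):
    """Extract the page size from an EventStore stream feed URI.

    Assumes the path layout /streams/<stream>/<start>/<direction>/<count>,
    as in the example URI above. Illustrative only.
    """
    parts = urlparse(uri).path.rstrip("/").split("/")
    return int(parts[-1])  # the trailing segment after the direction
```

For the example URI above, this returns 20.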
The last whole page of data is cacheable. The cache control response header is:
Cache-Control: max-age=31536000, public
(i.e., one year, the recommended maximum for cache age)
For an incomplete page, the result is uncacheable. The cache control response header is:
Cache-Control: max-age=0, no-cache, must-revalidate
By “incomplete page”, I mean that there are fewer entries in the response than were requested in the page size parameter of the query.
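As I now understand it, the choice of header comes down to a simple comparison of the entry count against the page size. A minimal Python sketch of that rule (my own illustration of the observed behavior, not the server’s actual code):

```python
def expected_cache_control(entry_count, page_size):
    """Return the Cache-Control value EventStore appears to send, based on
    whether the requested page is a whole page. Illustrative sketch only.
    """
    if entry_count == page_size:
        # Last whole page of data: cacheable for a year
        return "max-age=31536000, public"
    # Incomplete page: uncacheable, must be re-fetched
    return "max-age=0, no-cache, must-revalidate"
```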
An incomplete page will render a “rel=previous” link for continuing to traverse the stream (this is “previous” in ATOM reverse-chronology) toward the end (or “beginning”, in ATOM terms).
If the rel=previous link traverses past the end (head) of the stream, that response will also be uncacheable.
This allows incomplete pages to be requested repeatedly until they are complete, at which point they become cacheable, and it allows subscriptions to feeds to continue traversing forward (backward in ATOM terms), caching only whole-page responses.
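The traversal-and-caching behavior described above can be sketched as follows. This is a hypothetical Python model in which `feed_pages` is an in-memory dict standing in for HTTP responses from EventStore; the function and data shape are my own illustration:

```python
def traverse(feed_pages, first_uri, page_size, cache=None):
    """Follow rel=previous links from a feed page toward the head of the
    stream, caching only whole pages.

    feed_pages: hypothetical dict of URI -> {"entries": [...], "links": {...}}
    standing in for HTTP responses. Sketch only, not EventStore client code.
    """
    cache = {} if cache is None else cache
    uri = first_uri
    while uri:
        # Serve from cache if we kept this page; otherwise "fetch" it
        page = cache.get(uri) or feed_pages[uri]
        if len(page["entries"]) == page_size:
            cache[uri] = page  # whole page: safe to cache indefinitely
        # "previous" moves toward the head in ATOM reverse-chronology
        uri = page["links"].get("previous")
    return cache
```

An incomplete page never lands in the cache, so a subscriber re-requesting it will see new entries as they arrive, until the page fills up and becomes cacheable.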
FOOTNOTE: There is presently [Mon Jul 13 2015] an off-by-one error in the calculation of the last complete page, which results in the last complete page not being identified as such, causing it to be uncacheable.