In “Implementing Domain-Driven Design” Vaughn Vernon describes a strategy that produces fixed URIs which can be cached and also navigates links in the order I expect. I’m curious why that strategy is not used here?
To read all the events from a stream I first have to load the head document “/streams/streamId”, follow its “last” link, and then keep following “previous” links until I have loaded all the events. This is confusing and seems to be the opposite of what RFC 5005 proposes: I would expect to read the “first” document and then follow “next” links. Is this intentional, or is it something that cannot be changed without breaking backwards compatibility?
(I also expected “prev-archive” and “next-archive” links to be used, instead of the paged-feed convention, which comes “without any guarantees about the stability of each document’s contents”.)
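The traversal described above can be sketched as follows. This is a minimal illustration, not a real client: the feed shape (a dict with “links” and “entries”) and the relation names are assumed from the description above, and `fetch` stands in for an HTTP GET plus parsing.

```python
def find_link(feed, relation):
    """Return the URI for the given link relation, or None if absent."""
    for link in feed.get("links", []):
        if link["relation"] == relation:
            return link["uri"]
    return None

def read_all_events(fetch, head_uri):
    """Read every event oldest-first by following "last", then "previous".

    `fetch` maps a URI to a parsed feed document (hypothetical helper).
    """
    head = fetch(head_uri)
    uri = find_link(head, "last")
    # If there is no "last" link, the stream fits on the head page alone.
    feed = head if uri is None else fetch(uri)
    events = []
    while True:
        # Entries within a page are newest-first, so reverse each page.
        events.extend(reversed(feed["entries"]))
        uri = find_link(feed, "previous")
        if uri is None:
            return events
        feed = fetch(uri)
```

Note that within this scheme “previous” walks toward *newer* events, which is exactly the inversion relative to RFC 5005 that the question is about.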
Another problem is that different URIs are generated depending on how many events are in the stream. This matters when I read the current events, realize I have missed some, and follow the “next” link to load the older ones: the “next” URI has changed. For example, in a stream with 48 events:
-> next URI = “http://192.168.0.17:2113/streams/123/28/backward/20”
When I add another event and retry:
-> next URI = “http://192.168.0.17:2113/streams/123/29/backward/20”
The problem is that I may need to cache up to 20 times as many archive feeds.
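The shifting boundary can be illustrated with a toy calculation. This is only a guess at how the “next” URI is derived, reverse-engineered from the two example URIs above; the function name and URL shape are assumptions:

```python
def next_uri(stream_id, total_events, page_size=20):
    """Hypothetical reconstruction of the head page's "next" link.

    If the head page always starts at the newest event, the boundary of
    the next-older page moves forward by one with every appended event.
    """
    start = total_events - page_size
    return (f"http://192.168.0.17:2113/streams/{stream_id}"
            f"/{start}/backward/{page_size}")

print(next_uri("123", 48))  # .../streams/123/28/backward/20
print(next_uri("123", 49))  # .../streams/123/29/backward/20
```

Because the boundary can fall on any of the 20 offsets within a page, a cache keyed on these URIs ends up holding up to 20 variants of what is logically the same archive page.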