Snapshot example

Is there any code/article explaining how to take snapshots with geteventstore?

Thanks,

Arun

Snapshots are not specific to Event Store; basically you would just replay your events and take a snapshot (e.g. serialize a piece of state).
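
For example, a minimal sketch (the event and state types here are made up for illustration):

using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

// Illustrative types only - the point is that a "snapshot" is just the
// state left over after folding the events, serialized.
record Deposited(decimal Amount);

record AccountState(decimal Balance)
{
    public AccountState Apply(Deposited e) => new(Balance + e.Amount);
}

static class SnapshotExample
{
    public static byte[] TakeSnapshot(IEnumerable<Deposited> events)
    {
        // Replay: left-fold the events into a piece of state...
        var state = events.Aggregate(new AccountState(0), (s, e) => s.Apply(e));
        // ...then the snapshot is that state, serialized.
        return JsonSerializer.SerializeToUtf8Bytes(state);
    }
}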

If you abstract the snapshotting mechanism away from the repository itself, it becomes easier to use whatever backend suits - an in-memory cache is wonderful when the same aggregate is in heavy use over a concentrated period (use case).
An ISnapshotService { TAggregate Get(Guid aggregateId, int maxVersion); void Save(TAggregate aggregate); } composes the individual snapshot implementations, defaulting to a null-object implementation.
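
Roughly this shape, as a sketch (names illustrative, not production code):

using System;
using System.Linq;

public interface ISnapshotService
{
    TAggregate Get<TAggregate>(Guid aggregateId, int maxVersion) where TAggregate : class;
    void Save<TAggregate>(TAggregate aggregate) where TAggregate : class;
}

// Null object: "no snapshot", so the repository simply replays the full stream.
public class NullSnapshotService : ISnapshotService
{
    public TAggregate Get<TAggregate>(Guid aggregateId, int maxVersion) where TAggregate : class => null;
    public void Save<TAggregate>(TAggregate aggregate) where TAggregate : class { }
}

// Composite: try each backend in order, e.g. an in-memory cache before a store.
public class CompositeSnapshotService : ISnapshotService
{
    private readonly ISnapshotService[] _backends;
    public CompositeSnapshotService(params ISnapshotService[] backends) => _backends = backends;

    public TAggregate Get<TAggregate>(Guid aggregateId, int maxVersion) where TAggregate : class =>
        _backends.Select(b => b.Get<TAggregate>(aggregateId, maxVersion))
                 .FirstOrDefault(a => a != null);

    public void Save<TAggregate>(TAggregate aggregate) where TAggregate : class
    {
        foreach (var b in _backends) b.Save(aggregate);
    }
}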

/Julian

Thanks, separating out the snapshotting does sound like a good idea.
-Arun

I have come up with this based on the suggestions:

public class SnapshotService : ISnapshotService
{
    private readonly IEventStoreConnection _eventStoreConnection;

    public SnapshotService(IEventStoreConnection eventStoreConnection)
    {
        _eventStoreConnection = eventStoreConnection;
    }

    public TAggregate Get<TAggregate>(Guid aggregateId, string streamName, string eventClrTypeHeader)
    {
        // Read only the single most recent event from the snapshot stream.
        var slice = _eventStoreConnection.ReadStreamEventsBackwardAsync(streamName, StreamPosition.End, 1, false).Result;

        //TODO: check slice.Status (the stream may not exist yet) before indexing into Events
        return (TAggregate) DeserializeAggregate(slice.Events[0].Event.Metadata, slice.Events[0].Event.Data, eventClrTypeHeader);
    }

    public void Save<TAggregate>(TAggregate aggregate, string streamName, string eventClrTypeHeader)
    {
        var eventData = CreateEventData(aggregate, eventClrTypeHeader);

        // ExpectedVersion.Any rather than a hard-coded version: a fixed
        // expected version would fail on the second snapshot of a stream.
        _eventStoreConnection.AppendToStreamAsync(streamName, ExpectedVersion.Any, eventData).Wait();
    }
}
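
The CreateEventData and DeserializeAggregate helpers aren't shown above; assuming the common convention of JSON serialization with the CLR type name stashed in the event metadata, they might look something like this (a sketch, sitting inside the same class; requires System.Text, Newtonsoft.Json and Newtonsoft.Json.Linq):

// The metadata carries the aggregate's CLR type name under the
// eventClrTypeHeader key so Get can deserialize it back.
private static EventData CreateEventData(object aggregate, string eventClrTypeHeader)
{
    var data = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(aggregate));
    var metadata = Encoding.UTF8.GetBytes(new JObject
    {
        { eventClrTypeHeader, aggregate.GetType().AssemblyQualifiedName }
    }.ToString());
    return new EventData(Guid.NewGuid(), "snapshot", true, data, metadata);
}

private static object DeserializeAggregate(byte[] metadata, byte[] data, string eventClrTypeHeader)
{
    var typeName = (string) JObject.Parse(Encoding.UTF8.GetString(metadata))[eventClrTypeHeader];
    return JsonConvert.DeserializeObject(Encoding.UTF8.GetString(data), Type.GetType(typeName));
}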

-Arun

I have a working snapshot implementation in my project Aggregates.NET.
https://github.com/volak/Aggregates.NET

My repository loads and saves memento objects to special snapshot streams, named streamid + ".snapshot".
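
// Not Aggregates.NET's actual code - just the convention described:
// mementos live in a parallel stream named "<streamid>.snapshot".
string streamId = "order-42";
string snapshotStream = streamId + ".snapshot";   // -> "order-42.snapshot"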

Seems like in your example you are serializing and deserializing entire aggregates - not the recommended practice in any book I've read, but if it works for you :slight_smile:

I thought snapshots store a set of changes made to an aggregate, but maybe I am wrong (this is my first attempt at implementing the event store with snapshots).

Let's step back: why do you need snapshots?

I thought snapshots store a set of changes made to an aggregate, but maybe I am wrong (this is my first attempt at implementing the event store with snapshots).

Maybe you don't need snapshots, yeah. Events represent a set of changes to an aggregate; replaying those events rebuilds the aggregate.

The reason you would need snapshots is if an aggregate will receive thousands of events over its lifetime. It's kind of a special-purpose thing.

A snapshot stores an aggregate's current state - when you retrieve the aggregate from the store, the latest snapshot is loaded and then all events after the snapshot are replayed.

Basically, it's to avoid reloading a lot of events and so improve performance.
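
In code, that load path might look roughly like this - a sketch against the EventStore.ClientAPI IEventStoreConnection, with the aggregate-specific pieces passed in as delegates since they aren't part of the store:

using System;
using EventStore.ClientAPI;

public static class SnapshotLoader
{
    // Newest snapshot first, then replay only the events recorded after it.
    // Deserializing a memento, creating an empty aggregate, reading the
    // snapshotted version out of metadata, and dispatching events are all
    // application-specific, so they come in as delegates.
    public static TAggregate Load<TAggregate>(
        IEventStoreConnection conn,
        string streamName,
        Func<byte[], TAggregate> fromSnapshot,
        Func<TAggregate> createEmpty,
        Func<byte[], long> versionFromMetadata,
        Action<TAggregate, RecordedEvent> apply)
    {
        var snap = conn.ReadStreamEventsBackwardAsync(
            streamName + ".snapshot", StreamPosition.End, 1, false).Result;

        TAggregate aggregate;
        long from = StreamPosition.Start;
        if (snap.Status == SliceReadStatus.Success && snap.Events.Length == 1)
        {
            aggregate = fromSnapshot(snap.Events[0].Event.Data);
            from = versionFromMetadata(snap.Events[0].Event.Metadata) + 1;
        }
        else
        {
            aggregate = createEmpty();   // no snapshot yet: replay from the start
        }

        StreamEventsSlice slice;
        do
        {
            slice = conn.ReadStreamEventsForwardAsync(streamName, from, 200, false).Result;
            foreach (var e in slice.Events) apply(aggregate, e.Event);
            from = slice.NextEventNumber;
        } while (!slice.IsEndOfStream);

        return aggregate;
    }
}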

Define “a lot of events”?
What performance numbers are you seeing without snapshots?

What I’m getting at is that you shouldn’t assume that “a lot of events” will be “slow” without testing.

Sure, will test this out first. “A lot of events” here means at least 5 per second, per aggregate.

And the lifetime of an aggregate (i.e., how long the same aggregate will be in use) could range from a week to several months.
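
For scale, that is roughly 5 × 60 × 60 × 24 ≈ 432,000 events per aggregate per day, so a lifetime of weeks to months puts a single stream in the millions to tens of millions of events.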

This is my first post in this discussion group (which I have been dipping into now and again), and I wanted to share my thoughts on the merits of snapshotting.

Our system runs on Force.com, where loading more than 50,000 records in a single transaction is simply not possible (it's a hard system limit Salesforce imposes on all Apex code). So snapshotting becomes a necessity for performance reasons when you have a busy aggregate.

I feel that when writing tests in a given-when-then style, it's very convenient to initialize an aggregate in a known state by running a number of commands through it, using the resulting state as a starting point for various test cases (the sketch after the next paragraph shows the shape this takes).

Also, when coding an aggregate it feels very natural to write command processing like (pseudocode): 'handle(command c) -> check invariants -> throw exception or raise(new DomainEvent) -> state = apply(state, domainEvent)'. So I have two constructors in my aggregates, aggregate(command) and aggregate(state, List<DomainEvent>), making snapshotting possible by (de)serializing aggregate state.
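
A compact sketch of that shape (all type names made up for illustration):

using System;
using System.Collections.Generic;

public record AccountState(decimal Balance);
public record Deposit(decimal Amount);                 // command
public record Deposited(decimal Amount);               // domain event

public class Account
{
    public AccountState State { get; private set; }
    private readonly List<object> _uncommitted = new();

    // aggregate(command): a brand-new aggregate born from its first command.
    public Account(Deposit cmd) : this(new AccountState(0), new List<object>())
        => Handle(cmd);

    // aggregate(state, list): rehydrate from a snapshot plus later events -
    // this is what makes snapshotting a matter of (de)serializing State.
    public Account(AccountState snapshot, List<object> events)
    {
        State = snapshot;
        foreach (var e in events) State = Apply(State, e);
    }

    // handle -> check invariants -> raise -> state = apply(state, event)
    public void Handle(Deposit c)
    {
        if (c.Amount <= 0) throw new InvalidOperationException("amount must be positive");
        var e = new Deposited(c.Amount);
        _uncommitted.Add(e);
        State = Apply(State, e);
    }

    private static AccountState Apply(AccountState s, object e) =>
        e switch { Deposited d => new AccountState(s.Balance + d.Amount), _ => s };
}

The given-when-then setup from the previous paragraph falls out of it: new Account(new Deposit(100)) is the given, a further Handle(new Deposit(50)) the when, and asserting State.Balance == 150 the then.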

However, the biggest benefit I see is migrating existing (in our case RDBMS) data. Our system today only stores current state in the Salesforce database; there is no history of how the data came to be. (Before any Salesforce gurus jump in: I'm aware Salesforce can track changes at the field level, but that does not capture the user's intent, and intent is the goal I want to reach.) This makes reporting on time-based events, and correlating events, nigh on impossible.

When moving towards a more task-oriented user interface to capture users' intent, the system will start to produce events from our aggregates. Replaying those events for new projections or bug fixes won't make sense without a snapshot in our case, because we didn't start with an empty aggregate but with existing 'legacy' data - in other words, with existing aggregate state.

The way I see it, the path from a CRUD to an event-sourcing style in an existing system is to convert the current data from our database into an aggregate state / memento, serialize this as the first snapshot for that type of aggregate, and switch the UI over to our new and shiny task-based UI. We'll define listeners on the domain events that perform CRUD on the records in the RDBMS, so the rest of the system still functions and existing reports remain accessible. Following this cycle we can migrate our whole system step by step, or aggregate root by aggregate root.
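
The first snapshot write in that migration might look something like this (a sketch: legacyRow, OrderState and connection are assumptions, and the serialization follows the CreateEventData convention sketched earlier in the thread):

// The legacy row's current state becomes the stream's first snapshot;
// all domain events append after it.
var memento = new OrderState(legacyRow.Status, legacyRow.Total);
var data = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(memento));
connection.AppendToStreamAsync(
    "order-" + legacyRow.Id + ".snapshot",
    ExpectedVersion.NoStream,                  // must be the stream's first write
    new EventData(Guid.NewGuid(), "snapshot", true, data, new byte[0])).Wait();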

I apologize for the long reply, but I'd love to hear the community's thoughts on this.