Replicated DB is Much Smaller than Original

Hey guys,

I just wrote a little Windows service that replicates a database.

I ran it against a 25GB database and confirmed that all streams were created correctly, but the resulting database is only 1GB.

Can anyone shed some light on this? Is my code somehow wrong? Anything else?

Thanks,

Chris

Have you scavenged the first database?

ahh. Nope. :wink:

Thanks.

Why not just use a catchup subscription for this? Once it has caught up, it will even subscribe and push new events to you (i.e. a true warm replica). At the very least it will be less code, even for just copying a DB :slight_smile:

Also, what you have here is a warm replica. The OSS code now supports multiple hot nodes (with Paxos quorums).

Cheers,

Greg

You realise that replication is now open source by the way? :slight_smile:

Here, check out https://github.com/EventStore/EventStore/blob/master/src/EventStore/EventStore.Core.Tests/ClientAPI/subscribe_to_all_catching_up_should.cs — it shows the usage of the catchup subscription. It also uses subscribe internally once caught up, so the store will push new events as they happen rather than polling. It also handles reconnecting and moving nodes when operated against a cluster :slight_smile:
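For anyone following along, usage looks roughly like this — a minimal C# sketch based on the linked test file. The exact overloads have shifted between client versions, so treat the signatures here as assumptions rather than the definitive API:

```csharp
using System;
using System.Net;
using EventStore.ClientAPI;

var connection = EventStoreConnection.Create(new IPEndPoint(IPAddress.Loopback, 1113));
connection.Connect();

// null = start from the beginning of the $all stream; pass your last stored
// checkpoint (a Position) instead to resume without re-reading everything
connection.SubscribeToAllFrom(
    null,
    false, // don't resolve link events
    (subscription, resolvedEvent) =>
    {
        // Called once per event: historical events first, then live pushes
        Console.WriteLine("{0} @ {1}",
            resolvedEvent.OriginalEvent.EventType,
            resolvedEvent.OriginalStreamId);
    },
    subscription => Console.WriteLine("Caught up - now receiving live events"));
```

The key point is that nothing changes in your handler when the subscription transitions from catch-up reads to live pushes; the client manages that switchover for you.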

Thanks guys!

I had no idea about replication, actually. Digging in now.

It’s not very well documented at the moment (and we haven’t released any binaries yet so you’ll have to build from source - dev branch). However, if you build from source, you’ll want to start multiple nodes like this:

start-cluster.bat:

start EventStore.ClusterNode.exe --mem-db --int-ip 172.16.7.146 --ext-ip 172.16.7.146 --int-tcp-port=1111 --ext-tcp-port=1112 --int-http-port=1113 --ext-http-port=1114 --cluster-size 3 --commit-count 2 --prepare-count 2 --gossip-seed 172.16.7.146:2113 --gossip-seed 172.16.7.146:3113 --use-dns-discovery-

start EventStore.ClusterNode.exe --mem-db --int-ip 172.16.7.146 --ext-ip 172.16.7.146 --int-tcp-port=2111 --ext-tcp-port=2112 --int-http-port=2113 --ext-http-port=2114 --cluster-size 3 --commit-count 2 --prepare-count 2 --gossip-seed 172.16.7.146:1113 --gossip-seed 172.16.7.146:3113 --use-dns-discovery-

start EventStore.ClusterNode.exe --mem-db --int-ip 172.16.7.146 --ext-ip 172.16.7.146 --int-tcp-port=3111 --ext-tcp-port=3112 --int-http-port=3113 --ext-http-port=3114 --cluster-size 3 --commit-count 2 --prepare-count 2 --gossip-seed 172.16.7.146:1113 --gossip-seed 172.16.7.146:2113 --use-dns-discovery-

This will start three in-memory nodes running on the same machine (172.16.7.146), with ports starting with 1xxx, 2xxx or 3xxx respectively. The gossip seeds for each node need to point to every other node. Tools which make this kind of configuration significantly easier come with the commercial support packages.

This is the same stuff as is running the HA showcase at
http://ha.geteventstore.com
(though that’s down right now, as the web server running it has decided it wants activating and I don’t have the Windows key to hand… doh!)

Cheers,

James

This is very awesome and timely news for us.

The whole reason for my version of replication was that we can’t quite afford the commercial offerings at the moment. We’ve been running a SingleNode for a very long time, and lately we’ve been fearing the day when, POOF, our data is gone.

Backups are manual and tedious.

Really, though: thanks for open-sourcing this stuff! My Monday is now spoken for.

-=Chris

Ah OK. You probably want to scavenge occasionally as well if you’ve been running for a long time!
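A scavenge can be kicked off through the admin HTTP endpoint. A minimal sketch, assuming the default admin credentials and the standard external HTTP port of 2113 — adjust host, port and credentials for your node:

```shell
# Trigger a scavenge on a running node via its admin endpoint.
# 127.0.0.1:2113 and admin:changeit are assumptions (the shipped defaults);
# substitute your node's external HTTP address and real credentials.
curl -i -X POST -d {} http://127.0.0.1:2113/admin/scavenge -u admin:changeit
```

Scavenging rewrites the chunk files to drop deleted and truncated events, which is also why a fresh replica of an unscavenged database can come out so much smaller than the original.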

By the way, the HA showcase site is back up and running now.

Cheers,

James

If you have issues, feel free to drop us an email and we can get the HA stuff set up for you. You may want to keep a subscription to a warm replica as well. To do this, just use a catchup subscription and write into another node (very similar to what you have now). A common use case is three nodes in the cloud and a warm replica in your office (just in case of catastrophic occasions :slight_smile:)
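Sketched out, that warm-replica pattern looks roughly like this in C#. The endpoints are placeholders and the client signatures may vary by version, so read it as an assumption-laden sketch rather than the canonical implementation:

```csharp
using System;
using System.Net;
using EventStore.ClientAPI;

// Placeholder endpoints: the cloud primary and the office warm replica
var primary = EventStoreConnection.Create(new IPEndPoint(IPAddress.Parse("10.0.0.1"), 1113));
var replica = EventStoreConnection.Create(new IPEndPoint(IPAddress.Parse("192.168.1.50"), 1113));
primary.Connect();
replica.Connect();

// Catch up from the start of $all, then keep receiving live events
primary.SubscribeToAllFrom(null, false, (subscription, e) =>
{
    // Skip system streams ($...) so only user data is copied
    if (e.OriginalStreamId.StartsWith("$")) return;

    replica.AppendToStream(
        e.OriginalStreamId,
        ExpectedVersion.Any, // no concurrency check on the replica side
        new EventData(e.OriginalEvent.EventId, e.OriginalEvent.EventType,
                      e.OriginalEvent.IsJson, e.OriginalEvent.Data,
                      e.OriginalEvent.Metadata));
});
```

Reusing the original EventId when appending means a retried write of the same event is deduplicated by the store, which keeps the copy safe to restart.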

Cheers,

Greg

Yup. That’s the plan.

Thanks so much!