Problem trying to restart my GetEventStore instance

Hello,

I am facing a problem restarting my Event Store on a small Amazon EC2 instance.

Note: my event database is very small; I am talking about a few hundred events on a single aggregate.

This is the content of my log files:

[PID:06356:006 2015.03.15 22:19:10.035 TRACE TFChunk ] Verifying hash for TFChunk '/home/eventStore/db/chunk-000000.000000'...
[PID:06356:016 2015.03.15 22:19:11.385 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 10851 records (5.7%).
[PID:06356:016 2015.03.15 22:19:16.386 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 37748 records (24.9%).
[PID:06356:016 2015.03.15 22:19:21.386 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 45401 records (30.3%).
[PID:06356:016 2015.03.15 22:19:26.387 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 68121 records (35.3%).
[PID:06356:016 2015.03.15 22:19:31.394 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 81729 records (45.0%).
[PID:06356:016 2015.03.15 22:19:36.395 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 89217 records (50.5%).
[PID:06356:016 2015.03.15 22:19:41.395 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 98214 records (55.7%).
[PID:06356:016 2015.03.15 22:19:42.362 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 100000 records (56.7%).
[PID:06356:016 2015.03.15 22:19:47.363 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 109386 records (61.8%).

Note: it is very strange to see the log mention 109386 records when my database is so small.

The problem is that this process never ends, so my database is never available.

How can I handle this? Is there a way to quickly repair a GetEventStore database?

Thank you in advance.

You have statistics being written into your database as well. By default they are written roughly every 10 seconds, into

$statistics-ip:port

You can set a limit on that stream for how many to keep, and you can effectively turn statistics off via the command line (set the statistics period to a very high number, like once per year).
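Below is a minimal sketch of how the stream limit could be set over the HTTP API. The node address, credentials and exact payload shape are assumptions here (check the docs for the version you are running), and you need to substitute your own node's ip:port in the stream name; the idea is just to write $maxCount metadata to the stats stream so only the most recent stats events are kept. Reducing the write frequency itself is the statistics period option (--stats-period-sec on the command line in the 3.x series).

```python
# Sketch only (not run against a live node): set $maxCount metadata on the
# stats stream over the HTTP API so older stats events become eligible for
# scavenging. Node address, credentials, stream name and the metadata event
# shape are assumptions -- adjust them for your setup and Event Store version.
import json
import uuid

import requests

NODE = "http://127.0.0.1:2113"
# Stats stream name as mentioned above; substitute your node's ip:port.
STATS_STREAM = "$statistics-127.0.0.1:2113"

metadata_event = [{
    "eventId": str(uuid.uuid4()),
    "eventType": "$metadata",        # assumed event type for a metadata write
    "data": {"$maxCount": 1000},     # keep only the most recent 1000 stats events
}]

resp = requests.post(
    f"{NODE}/streams/{STATS_STREAM}/metadata",
    data=json.dumps(metadata_event),
    headers={"Content-Type": "application/vnd.eventstore.events+json"},
    auth=("admin", "changeit"),      # default credentials
)
resp.raise_for_status()
print("metadata write accepted:", resp.status_code)
```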

As for it never ending, it looks from that log to be making progress.

How do you determine it's never ending? Also, what are you running on? That looks very slow.

OK, cool, I'll check that statistics setting.

I determined that it's never ending because my Event Store web UI is never reachable.

For now I see those "ReadIndex Rebuilding" lines in the logs, so I deduced that this process is responsible.

Thank you for the quick reply!

When it finishes, the UI becomes available.

To be fair, on my laptop this generally takes about 1 second :)

Yeah, on my MacBook Pro it's lightning fast.

Maybe it is related to the ec2.small instance with the Amazon AMI. Going to try something bigger.

Thank you very much for your help.

Well, I am still stuck trying to get a stable GetEventStore running on an ec2.small instance.

After multiple instance freezes I discovered this with iotop:

TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND

23241 be/4 john 170.18 K/s 0.00 B/s 0.00 % 94.69 % clusternode --config=/home/eventStore/eventStore.yaml


The issue here may be related to this never-ending "ReadIndex Rebuilding" task (never ending = when it hits 100% it starts over at 1% with the next file... forever):

Does anyone know what I could try to stop this?

I have tried stopping and restarting 10 times, but the result is always the same.

[PID:22884:295 2015.03.18 10:33:34.271 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 35620 records (18.5%).
[PID:22884:295 2015.03.18 10:33:39.272 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 35678 records (18.5%).
[PID:22884:295 2015.03.18 10:33:44.313 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 35733 records (18.6%).
[PID:22884:295 2015.03.18 10:33:49.314 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 35790 records (18.6%).
[PID:22884:295 2015.03.18 10:33:54.397 DEBUG IndexCommitter ] ReadIndex Rebuilding: processed 35851 records (18.6%).


When you restart, it starts from the beginning.

This is VERY VERY slow, though. What size are the events you have written?
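If you are not sure, a quick way to see how much data is actually on disk is to list the chunk files and their sizes. For a few hundred small events you would expect a single, mostly empty chunk; several full chunks would mean something else (stats, for example) is filling the transaction log. A rough sketch, assuming the db path from your log:

```python
# List the chunk files in the database directory with their sizes.
# The path is taken from the log output earlier in this thread; adjust it
# if your database lives somewhere else.
import os

DB_DIR = "/home/eventStore/db"

for name in sorted(os.listdir(DB_DIR)):
    if name.startswith("chunk-"):
        size = os.path.getsize(os.path.join(DB_DIR, name))
        print(f"{name}  {size / (1024 * 1024):.1f} MiB")
```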

It will come up eventually (it's still processing), and it looks like AWS is just capping you at about 170 K/second?!
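To check whether raw disk throughput really is the limit, you can time a plain sequential read of one of the chunk files and compare the number with what iotop reports. A rough sketch (the path is the one from your log; note that the OS page cache will skew repeated runs, so use a file you have not read recently):

```python
# Rough sequential-read benchmark: read up to 64 MiB of a chunk file in
# 1 MiB blocks and report throughput. Compare the result with the ~170 K/s
# figure iotop shows for the clusternode process.
import time

CHUNK_PATH = "/home/eventStore/db/chunk-000000.000000"  # path from the log above
BLOCK_SIZE = 1024 * 1024          # 1 MiB per read
MAX_BYTES = 64 * 1024 * 1024      # stop after 64 MiB so the test stays short

total = 0
start = time.monotonic()
with open(CHUNK_PATH, "rb") as f:
    while total < MAX_BYTES:
        block = f.read(BLOCK_SIZE)
        if not block:
            break
        total += len(block)
elapsed = max(time.monotonic() - start, 1e-9)

print(f"read {total / (1024 * 1024):.1f} MiB in {elapsed:.2f} s "
      f"({total / 1024 / elapsed:.0f} KiB/s)")
```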