We’re at the exploration phase: we are considering using ESDB to see if it fits our needs, and if it does, we can switch to it.
It’s always worth setting certain expectations first, then verifying the potential options. Assuming, without benchmarking, that a single writer won’t be enough may lead you to the wrong conclusions. It may indeed be too slow, but until you benchmark it, you won’t know.
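To ground that, a minimal single-writer append benchmark could look like the sketch below. It assumes the Node client (`@eventstore/db-client`) and a local insecure dev instance; the stream names, event type, and counts are made up for illustration:

```typescript
import { EventStoreDBClient, jsonEvent } from "@eventstore/db-client";

// Assumes a local dev instance without TLS; adjust the connection string to your setup.
const client = EventStoreDBClient.connectionString`esdb://localhost:2113?tls=false`;

async function benchmarkSingleWriter(totalEvents: number): Promise<void> {
  const started = Date.now();
  for (let i = 0; i < totalEvents; i++) {
    // One event per append is the worst case for a single writer;
    // batching several events per append usually raises throughput.
    await client.appendToStream(
      `benchmark-${i % 1000}`, // spread writes over 1000 hypothetical streams
      jsonEvent({ type: "BenchmarkEvent", data: { sequence: i } })
    );
  }
  const seconds = (Date.now() - started) / 1000;
  console.log(`${totalEvents} events in ${seconds.toFixed(1)}s -> ${(totalEvents / seconds).toFixed(0)} events/s`);
}

benchmarkSingleWriter(10_000).catch(console.error);
```

Numbers from a sketch like this are only indicative, but they beat assuming.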
When consuming events to update the read model, we have to call another downstream service to fetch more details and then update the read model. The downstream call might be slow in some cases.
By being able to consume streams concurrently, we can be sure that we won’t lag too far behind.
We currently have 200k aggregates of one aggregate type.
The number of aggregates is not an issue by itself, as EventStoreDB scales well with the number of streams. It’s critical to gather information about the “working set”: how many of those streams are actively accessed, in what distribution, and with what expected throughput.
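One rough way to sample that working set is to scan `$all` and tally events per stream. A sketch, assuming the Node client and that scanning the whole log is acceptable at this size:

```typescript
import { EventStoreDBClient, FORWARDS, START } from "@eventstore/db-client";

const client = EventStoreDBClient.connectionString`esdb://localhost:2113?tls=false`;

// Approximate the access distribution by counting events per stream.
async function streamDistribution(): Promise<Map<string, number>> {
  const counts = new Map<string, number>();
  for await (const resolved of client.readAll({ direction: FORWARDS, fromPosition: START })) {
    const streamId = resolved.event?.streamId;
    if (!streamId || streamId.startsWith("$")) continue; // skip system streams/events
    counts.set(streamId, (counts.get(streamId) ?? 0) + 1);
  }
  return counts;
}
```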
> When consuming events to update the read model, we have to call another downstream service to fetch more details and then update the read model. The downstream call might be slow in some cases.
> By being able to consume streams concurrently, we can be sure that we won’t lag too far behind.
I think that you should then focus on optimising the downstream call, as that sounds like the bottleneck. Could you provide the details of this call? Then we could try to give better guidance. What type of database are you using for the read model?
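For instance, if the downstream lookup is idempotent per key, a small TTL cache in the projection handler removes most repeated round trips. A hedged sketch; `fetchUserDetails`, `onProfileEvent`, and the read-model update are hypothetical placeholders, not part of any client API:

```typescript
// Hypothetical slow downstream lookup, keyed by user id.
async function fetchUserDetails(userId: string): Promise<{ name: string }> {
  // e.g. an HTTP call to the downstream service
  return { name: `user-${userId}` };
}

const TTL_MS = 60_000;
const cache = new Map<string, { value: Promise<{ name: string }>; expires: number }>();

// Memoise in-flight and recent lookups so bursts of events for the
// same user don't each pay the downstream latency.
function cachedUserDetails(userId: string): Promise<{ name: string }> {
  const hit = cache.get(userId);
  if (hit && hit.expires > Date.now()) return hit.value;
  const value = fetchUserDetails(userId);
  cache.set(userId, { value, expires: Date.now() + TTL_MS });
  return value;
}

// Hypothetical projection handler: enrich the event, then update the read model.
async function onProfileEvent(userId: string, event: unknown): Promise<void> {
  const details = await cachedUserDetails(userId);
  // await readModel.update(userId, event, details);
}
```

Batching lookups, or processing independent streams concurrently while keeping each stream's events in order, are further options once the call itself is cheap.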
One option is to partition the events at the time of writing to EventStoreDB and embed the partition number in the stream name (e.g. profile-{partition-0}-user_id_1).
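To make the proposal concrete, the partition could be derived from a stable hash of the user id; purely illustrative, with a made-up partition count (and note the reply below questions whether this is needed at all):

```typescript
import { createHash } from "node:crypto";

const PARTITION_COUNT = 16; // made-up number of partitions

// Derive a stable partition from the user id and embed it in the stream name,
// e.g. "profile-partition-3-user_id_1", following the pattern proposed above.
function partitionedStreamName(userId: string): string {
  const digest = createHash("sha1").update(userId).digest();
  const partition = digest.readUInt32BE(0) % PARTITION_COUNT;
  return `profile-partition-${partition}-${userId}`;
}
```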
I think that you should work on your stream design. Why would you want to partition a stream? In general, streams should be short, so they shouldn't need to be partitioned. Could you explain your stream design to us? Typically, the lowest partitioning level is per stream, not per stream partition.
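In other words, with one short stream per aggregate instance there is nothing left to partition at write time. A minimal sketch of the per-stream shape (stream name and event type are illustrative):

```typescript
import { EventStoreDBClient, jsonEvent } from "@eventstore/db-client";

const client = EventStoreDBClient.connectionString`esdb://localhost:2113?tls=false`;

// One stream per aggregate instance, e.g. "profile-user_id_1".
// Parallelism then belongs on the consuming side (per stream),
// rather than being baked into the stream names.
async function appendProfileEvent(userId: string, displayName: string): Promise<void> {
  await client.appendToStream(
    `profile-${userId}`,
    jsonEvent({ type: "ProfileUpdated", data: { displayName } })
  );
}
```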