Careful with the word “lose.” In theory, you can’t lose data (forever) once it has been acked by the master. Obviously, the data may not be visible to you if the server you are talking to is on the minority side of a partition. But that data will eventually become visible.
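You can usually tell you’re on the minority side by asking the node you normally talk to for its view of the cluster. A minimal sketch using Python and the plain REST API (host name is made up; exact failure behavior varies by ES version):

```python
import requests

# Ask the node we normally talk to for its view of the cluster.
# On the minority side of a partition, the node typically reports
# fewer nodes than expected, goes "red", or stops answering once
# it loses contact with the master.
resp = requests.get("http://es-node1:9200/_cluster/health", timeout=5)
health = resp.json()

print(health["status"])           # "green", "yellow", or "red"
print(health["number_of_nodes"])  # fewer than expected => possible partition
```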
The delay for consistency (convergence) is indeterminate. As soon as the “lost” node rejoins the majority, it will stream down the missed updates. How long that takes depends on all the common factors: data volume, network speed, disk speed. In my limited experience, ES is reasonably efficient here.
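If you’d rather wait for that convergence than guess at it, the health API can block server-side until the cluster goes green. A sketch, again with a hypothetical host name:

```python
import requests

# Block (server-side) until the cluster reports green, i.e. all shards
# have caught up after the rejoined node streamed down its missed updates.
resp = requests.get(
    "http://es-node1:9200/_cluster/health",
    params={"wait_for_status": "green", "timeout": "60s"},
    timeout=70,
)
health = resp.json()

# "timed_out" is true if the cluster did not reach green within 60s.
print(health["status"], "timed_out:", health["timed_out"])
```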
A 2/1 split across DCs strikes me as suboptimal, for both performance and availability. ES is a single-master system: writes must be forwarded through one node. There are usually cost, performance, and complexity issues with having the master role move between DCs without warning. Your mileage may vary, of course.
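If you do run 2/1 across DCs anyway, at minimum pin the election quorum so the lone node can never elect itself master. A sketch via the dynamic settings API; the setting name is from the zen-discovery era (pre-7.x), so check your version:

```python
import requests

# With 3 master-eligible nodes, require 2 (a majority) for election.
# This keeps the single node in the smaller DC from ever becoming
# master on its own during a partition. Zen discovery, pre-7.x ES;
# newer versions manage this quorum automatically.
requests.put(
    "http://es-node1:9200/_cluster/settings",
    json={"persistent": {"discovery.zen.minimum_master_nodes": 2}},
    timeout=5,
)
```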
Traditionally, I’d prefer to run a complete cluster within one DC and send all writes there. I’d then set up read-only replicas in the other DCs. Alas, this isn’t natively supported in ES yet, AFAIK. Eventually we’ll be able to constrain slaves to be non-electable clones, but I don’t think that is an official feature yet.
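In the meantime, that routing has to live in the client. A trivial, entirely hypothetical sketch that funnels all writes to the primary DC and serves reads locally; the actual cross-DC replication would still have to happen out of band, since ES can’t do it natively here:

```python
import requests

# Hypothetical endpoints: a full cluster in DC1 owns all writes,
# and a nearby cluster serves reads for latency.
WRITE_URL = "http://dc1-es:9200"   # primary DC, takes every write
READ_URL = "http://local-es:9200"  # local replica cluster, reads only

def index_doc(index, doc_id, body):
    # All writes go to the primary DC.
    return requests.put(f"{WRITE_URL}/{index}/doc/{doc_id}", json=body, timeout=5)

def get_doc(index, doc_id):
    # Reads stay local.
    return requests.get(f"{READ_URL}/{index}/doc/{doc_id}", timeout=5)
```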