When I evangelize event sourcing, this question, or some variant on it, is probably the most common. (We are an OLTP shop.) What follows is my generalist answer, with the caveat that it may not apply to your business.
The question is generally a smell that the questioner is thinking too much “classic DB” and not enough “event stream.” The stream approach is to model a transaction as a series of events, with a “commit” or “finalizer” event to logically complete the transaction. That is, a transaction’s “completeness” is interpreted by the READER side (who looks for a finalizing event), not the WRITER side (who would roll back the data). I ask people to think of transactions not as single atomic actions, but as an FSM that events drive forward.
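To make the FSM framing concrete, here is a minimal sketch (all names and event types are hypothetical, invented for illustration): the writer only appends events, and the reader folds the stream, treating a transaction as complete only when it sees the finalizer event.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Illustrative event vocabulary; a real system would have domain-specific events.
class EventType(Enum):
    STARTED = auto()
    ITEM_ADDED = auto()
    CANCELLED = auto()
    COMMITTED = auto()  # the "finalizer" event

@dataclass
class Event:
    txn_id: str
    type: EventType
    payload: Optional[dict] = None

def txn_state(events: list) -> str:
    """Reader-side FSM: fold the event stream to decide completeness.

    The writer never rolls anything back; the reader simply reports
    'open' until it encounters the finalizing event.
    """
    state = "unknown"
    for e in events:
        if e.type is EventType.STARTED:
            state = "open"
        elif e.type is EventType.CANCELLED and state == "open":
            state = "cancelled"
        elif e.type is EventType.COMMITTED and state == "open":
            state = "committed"
    return state

stream = [
    Event("t1", EventType.STARTED),
    Event("t1", EventType.ITEM_ADDED, {"sku": "A"}),
    Event("t1", EventType.COMMITTED),
]
print(txn_state(stream))  # committed
```

Note that a stream missing the `COMMITTED` event simply folds to "open": nothing was erased, the reader just declines to treat it as done.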
There are advantages to this approach. The biggest (as always with ES) is avoiding information loss. That is, the fact that a transaction was even STARTED (and then cancelled) is often valuable info. An atomic rollback erases that, which usually means you end up re-capturing it in a logging system or what have you.
The classic example is a long webform. In the old mindset, if your computer crashes mid-entry (you lose the cookie), you never committed anything and have to start all over. In a stream mindset, every fine-grained change is captured as you go, and you can pick up where you left off. If you are storing a lot of progress "state" on the side, you are stream processing in spirit, if not in name.
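The webform recovery above can be sketched in a few lines (a toy, with invented field names): each keystroke-level change is appended as an event, and rebuilding the form after a crash is just a fold over the stream, with later edits winning.

```python
def rebuild_form(events: list) -> dict:
    """Fold per-field change events into the latest form state.

    Nothing is ever "committed" mid-entry; recovery is just a replay.
    """
    form = {}
    for e in events:
        form[e["field"]] = e["value"]  # last write for a field wins
    return form

# Hypothetical stream captured before the crash:
events = [
    {"field": "name", "value": "Ada"},
    {"field": "email", "value": "ada@example.com"},
    {"field": "name", "value": "Ada Lovelace"},  # later edit overwrites
]
print(rebuild_form(events))
```

The "state on the side" the paragraph mentions is exactly the `form` dict here: a materialized view you can throw away and recompute from the events at any time.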