In a leaderless configuration, failover does not exist.
The client sends the write to all three replicas in parallel; the two available replicas accept the write, but the unavailable replica misses it.
If it’s sufficient for two out of three replicas to acknowledge the write, we consider the write to be successful.
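A minimal sketch of this write path, with illustrative names (Replica and quorum_write are stand-ins, not a real client library): the write is sent to all replicas in parallel and is reported successful as soon as w of them acknowledge it.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

class Replica:
    """Hypothetical replica stub; a real client would speak the database's own protocol."""
    def __init__(self, name, available=True):
        self.name = name
        self.available = available
        self.store = {}  # key -> (version, value)

    def put(self, key, version, value):
        if not self.available:
            raise ConnectionError(f"{self.name} is unreachable")
        self.store[key] = (version, value)
        return True

def quorum_write(replicas, w, key, version, value):
    """Send the write to all replicas in parallel; succeed once w acknowledge."""
    acks = 0
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(r.put, key, version, value) for r in replicas]
        for future in as_completed(futures):
            try:
                if future.result():
                    acks += 1
            except ConnectionError:
                pass  # the unavailable replica simply misses this write
            if acks >= w:
                return True  # w acknowledgements: the write is considered successful
    return False

# n = 3, w = 2: one replica is down, yet the write still succeeds.
replicas = [Replica("A"), Replica("B"), Replica("C", available=False)]
assert quorum_write(replicas, w=2, key="x", version=1, value="hello")
```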
When a client reads from the database, it doesn’t just send its request to one replica: read requests are also sent to several nodes in parallel.
The client may get different responses from different nodes, i.e., the up-to-date value from one node and a stale value from another. Version numbers are used to determine which value is newer.
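A small sketch of that resolution step, assuming each replica replies with a (version, value) pair; the function name is illustrative.

```python
def resolve(responses):
    """Pick the newest value among the replies; None means a replica had no copy."""
    known = [resp for resp in responses if resp is not None]
    if not known:
        return None
    return max(known, key=lambda pair: pair[0])  # highest version number wins

# One replica is stale (version 1), two are up to date (version 2).
responses = [(2, "new"), (1, "old"), (2, "new")]
assert resolve(responses) == (2, "new")
```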
If there are n replicas, every write must be confirmed by w nodes to be considered successful, and we must query at least r nodes for each read. (In our example, n = 3, w = 2, r = 2.) As long as w + r > n, we expect to get an up-to-date value when reading, because at least one of the r nodes we’re reading from must be up to date. Reads and writes that obey these r and w values are called quorum reads and writes.
A common choice is to make n an odd number and to set w = r = (n + 1) / 2 (rounded up).
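The overlap argument can be checked by brute force: when w + r > n, every set of w nodes that accepted a write must intersect every set of r nodes that a read contacts. A sketch (the function name is illustrative):

```python
from itertools import combinations

def quorums_overlap(n, w, r):
    """Return True if every w-node write set intersects every r-node read set."""
    nodes = range(n)
    return all(set(ws) & set(rs)
               for ws in combinations(nodes, w)
               for rs in combinations(nodes, r))

# n = 3, w = 2, r = 2 satisfies w + r > n, so every read overlaps every write.
assert quorums_overlap(3, 2, 2)
# w = 1, r = 1 does not (1 + 1 <= 3): a read may miss the one node that was written.
assert not quorums_overlap(3, 1, 1)

# The common choice for odd n: w = r = (n + 1) // 2, e.g. n = 5 gives w = r = 3.
n = 5
w = r = (n + 1) // 2
assert w + r > n and quorums_overlap(n, w, r)
```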
Sloppy Quorums and Hinted Handoff
Databases with appropriately configured quorums can tolerate the failure of individual nodes without the need for failover.
They can also tolerate individual nodes going slow, because requests don’t have to wait for all n nodes to respond—they can return when w or r nodes have responded.
Sloppy quorum - writes and reads still require w and r successful responses, but those responses may come from nodes that are not among the designated n “home” nodes for a value.
Once the network interruption is fixed, any writes that one node temporarily accepted on behalf of another node are sent to the appropriate “home” nodes. This is called hinted handoff.
Sloppy quorums are particularly useful for increasing write availability: as long as any w nodes are available, the database can accept writes.
Thus, a sloppy quorum actually isn’t a quorum at all in the traditional sense. It’s only an assurance of durability, namely that the data is stored on w nodes somewhere.
There is no guarantee that a read of r nodes will see it until the hinted handoff has completed.
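A rough sketch of the hinted-handoff mechanism described above, with illustrative names and interfaces: a stand-in node stores the write together with a hint naming the intended home node, and forwards it once that node is reachable again.

```python
class Node:
    """Hypothetical node stub; real systems track hints per unreachable peer."""
    def __init__(self, name):
        self.name = name
        self.available = True
        self.store = {}   # key -> value (data this node owns)
        self.hints = []   # (home_node, key, value) accepted on behalf of others

    def write(self, key, value):
        if not self.available:
            raise ConnectionError(f"{self.name} is unreachable")
        self.store[key] = value

    def write_with_hint(self, home_node, key, value):
        """Temporarily accept a write destined for an unreachable home node."""
        self.hints.append((home_node, key, value))

    def handoff(self):
        """Once connectivity is restored, forward hinted writes to their home nodes."""
        remaining = []
        for home_node, key, value in self.hints:
            try:
                home_node.write(key, value)
            except ConnectionError:
                remaining.append((home_node, key, value))  # still down; try again later
        self.hints = remaining

home = Node("home")
stand_in = Node("stand-in")

home.available = False                        # network interruption
stand_in.write_with_hint(home, "x", "hello")  # sloppy quorum still accepts the write

home.available = True                         # interruption fixed
stand_in.handoff()                            # hinted handoff delivers it to the home node
assert home.store["x"] == "hello"
```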