%\section{Introduction}

\interfootnotelinepenalty=10000
Replication, one of the core techniques to provide fault tolerance in storage
systems, is sensitive to the tension between robustness and efficiency:
1) synchronous primary-backup systems~\cite{chang06bigtable,Cully08Remus,ghemawat03google}
require $f+1$ replicas to tolerate $f$ crash faults, but they risk data
loss if there are timing errors;
2) asynchronous full replication systems~\cite{Baker11Megastore,Bolosky11Paxos,Burrows06Chubby,Hunt10ZooKeeper}
use asynchronous agreement~\cite{Lamport98Part,Lamport01Paxos} to ensure correctness despite timing errors,
but send data to $2f+1$ replicas to tolerate $f$ crash failures and thus have higher costs than synchronous
primary-backup systems; 3) asynchronous partial replication systems~\cite{Lamport04Cheap,Wood11ZZ}
still require $2f+1$
replicas, but they only activate $f+1$ of the replicas in the failure-free case;
the spare replicas are activated only if some of the active ones fail.
Although existing partial replication approaches work well for small-state
services, they are not well-suited for
replicating a storage system because, after a failure, the system becomes unavailable until
it activates a spare replica, which requires copying all of the state from available replicas.
If the copying can be done at, say, 100~MB/s, then the fail-over time would exceed 2.7 hours
per terabyte of storage capacity.
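The fail-over estimate can be checked with a quick back-of-the-envelope calculation, using the copy rate and state size assumed above:

```python
# Back-of-the-envelope fail-over time for activating a spare replica,
# which must copy the full per-node state before it can serve requests.
COPY_RATE_MB_S = 100      # assumed sequential copy rate (MB/s)
STATE_MB = 1_000_000      # one terabyte of state, in MB

failover_seconds = STATE_MB / COPY_RATE_MB_S
failover_hours = failover_seconds / 3600
print(f"{failover_hours:.1f} hours per terabyte")  # prints "2.8 hours per terabyte"
```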


This dissertation presents {\ourSystem},\footnote{``Gnothi S'auton''
(\textgreek{Gn\wc ji s''aut\oa n}) is the ancient Greek aphorism ``Know thyself''.}
a new block storage system that simultaneously achieves robustness (correctness despite
timing errors and availability despite failures) and efficiency ($f+1$ data replication).
 {\ourSystem} replicates data to guarantee availability and durability
when replicas fail. To guarantee correctness despite timing errors,  {\ourSystem} uses $2f+1$ replicas to perform asynchronous state
machine replication~\cite{Lamport98Part,Lamport01Paxos,Schneider90Implementing}. To reduce network bandwidth, disk arm
overhead, and storage cost, {\ourSystem} executes updates to different blocks on different subsets of replicas. The key challenge
is to perform partial replication while not hurting availability or durability. {\ourSystem}
meets this challenge by using two key ideas.

First, to ensure availability during failure and recovery,
{\ourSystem} \emph{separates data from metadata} so that metadata is
replicated on all replicas while data for a given block is replicated
only to a preferred subset for that block. A replica's metadata keeps
the status of each block in the system, including whether the replica
holds the block's current version. Replicating metadata to all
replicas allows a replica to always process a request correctly, even
while it is recovering after having missed some updates.
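As a rough illustration of this split, consider the sketch below: every replica records each block's current version in its metadata, while the block's data is stored only on its preferred replicas. The class and field names here are our own invention, not {\ourSystem}'s actual data structures.

```python
# Illustrative sketch (not Gnothi's actual code): metadata for every block
# lives on every replica; block data lives only on a preferred subset.
from dataclasses import dataclass, field

@dataclass
class BlockMeta:
    version: int = 0           # latest agreed-upon version of the block
    has_current: bool = False  # does THIS replica hold that version's data?

@dataclass
class Replica:
    rid: int
    meta: dict = field(default_factory=dict)  # block id -> BlockMeta (full)
    data: dict = field(default_factory=dict)  # block id -> bytes (partial)

def apply_update(replica, block, version, payload, preferred):
    """All replicas record the new version in metadata; only the block's
    preferred replicas store the data itself."""
    m = replica.meta.setdefault(block, BlockMeta())
    m.version = version
    if replica.rid in preferred:
        replica.data[block] = payload
        m.has_current = True
    else:
        m.has_current = False

replicas = [Replica(r) for r in range(3)]  # 2f+1 = 3 replicas, f = 1
for r in replicas:
    apply_update(r, block=7, version=1, payload=b"v1", preferred={0, 1})

# Every replica knows block 7's current version; only two hold its data,
# so any replica can answer "is my copy current?" even after missing updates.
assert all(r.meta[7].version == 1 for r in replicas)
assert [r.rid for r in replicas if 7 in r.data] == [0, 1]
```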

Second, to ensure durability during failures, {\ourSystem}
\emph{reserves a small fraction (e.g., 10\%) of storage on each
  replica} to buffer writes to unavailable replicas.  While up to $f$
of a block's \emph{preferred replicas} are unresponsive, {\ourSystem}
buffers writes in the reserve storage of up to $f$ of the block's
available \emph{reserve replicas}. Directing writes to a reserve
replica when a block's preferred replica is unavailable guarantees
that each new update is always written to $f+1$ replicas even if some
replicas fail.  {\ourSystem} allows a tradeoff between availability
and space cost: data is writable in the face of $f$ failures as long
as failed nodes are repaired before the reserve space is exhausted.
To guarantee write availability regardless of failure duration or
repair time, conservative users can configure the system with the same
space as asynchronous full replication ($2f+1$ actual storage blocks
per logical block).  Given that in {\ourSystem} replicas recover
quickly, analysis of several traces shows that a 10\% reserve is
enough to guarantee write availability for many workloads.
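The write-target selection described above can be sketched as follows; the function and parameter names are illustrative and not drawn from {\ourSystem}'s implementation:

```python
# Illustrative sketch (names are ours, not Gnothi's): choose the replicas
# that must store a write so every completed update lands on f+1 disks.
def write_targets(preferred, reserve, up, f):
    """Use the block's preferred replicas when they are responsive; for
    each unresponsive preferred replica, fall back to a responsive reserve
    replica, which buffers the write in its reserve space."""
    targets = [r for r in preferred if r in up]
    spares = (r for r in reserve if r in up)
    while len(targets) < f + 1:
        targets.append(next(spares))  # StopIteration = too few replicas up
    return targets

# f = 1: block 7 prefers replicas {0, 1}; replica 1 is down, 2 is reserve.
assert write_targets([0, 1], [2], up={0, 2}, f=1) == [0, 2]
# Failure-free case: write only to the preferred pair.
assert write_targets([0, 1], [2], up={0, 1, 2}, f=1) == [0, 1]
```

Once the failed preferred replica recovers, the buffered writes can be drained back to it and the reserve space reclaimed, which is why a small reserve suffices when repairs are timely.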


  {\ourSystem} combines these ideas to ensure availability and durability during
  failures and to make recovery fast despite partial replication.
  In summary, {\ourSystem} provides the following guarantees: when an update
  completes, data is stored on $f+1$ disks; all reads
  and writes are linearizable~\cite{Herlihy90Linearizability}; reads always
  return the most current data even though some replicas may have
  stale versions of some blocks; the system is available for reads as long
  as there are at most $f$ failures; and the system is available for writes
  as long as there are at most $f$ failures and failed replicas recover
  or are replaced before the reserve buffer is fully consumed by new updates.

We implement {\ourSystem} by modifying the ZooKeeper server~\cite{Hunt10ZooKeeper}.
{\ourSystem} provides a block store API, and it can be used like a disk:
users can mount it as a block device and create and use a filesystem on it.
We evaluate {\ourSystem}'s performance both in the common case
and during failure recovery and compare it with Gaios, a state-of-the-art Paxos-based
block replication system~\cite{Bolosky11Paxos}.
The evaluation shows that {\ourSystem}'s write throughput
can be 40--64\% higher than our implementation of a Gaios-like system
while retaining Gaios's excellent read scalability.
We also find that for systems with large amounts of state, separating data and metadata
significantly improves recovery compared to traditional state machine replication.
Unlike standard Paxos-based systems, {\ourSystem} ensures that a recovering
replica will eventually catch up regardless of the rate at which new requests are processed,
and unlike previous partial replication systems, {\ourSystem} remains available even while large amounts of state are rebuilt on recovering replicas.
