\section{Gnothi: Efficient and available storage replication}

\subsection{Motivation}
Replication is widely used to protect data from failures. An ideal storage
replication protocol should provide good availability and durability,
strong correctness guarantees, low cost, and fast failure recovery.
Existing replicated storage systems make different trade-offs among these properties:
1) synchronous primary-backup systems~\cite{chang06bigtable,Cully08Remus,ghemawat03google}
require $f+1$ replicas to tolerate $f$ crash faults, but they risk data
loss if there are timing errors;
2) asynchronous full replication systems~\cite{Baker11Megastore,Bolosky11Paxos,Burrows06Chubby,Hunt10ZooKeeper}
use asynchronous agreement~\cite{Lamport98Part,Lamport01Paxos} to ensure correctness despite timing errors,
but send data to $2f+1$ replicas to tolerate $f$ crash failures and thus have higher costs than synchronous
primary-backup systems; 3) asynchronous partial replication systems
\cite{Lamport04Cheap,Wood11ZZ} still require $2f+1$
replicas, but they only activate $f+1$ of the replicas in the failure-free case;
the spare replicas are activated only if some of the active ones fail.
Although existing partial replication approaches are promising for replicating
services with small amounts of state, they are not well-suited for
replicating a block storage service because after a failure the system becomes unavailable until
it activates a spare replica, which requires copying all of the state from available replicas.
If the copying can be done at, say, 100~MB/s, then the fail-over time would exceed 2.7 hours
per terabyte of storage capacity.
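The arithmetic behind this estimate is simple; a minimal sketch (the function name is ours, and the 100~MB/s rate is the illustrative figure used above):

```python
def failover_hours(state_bytes: float, copy_rate_bytes_per_s: float) -> float:
    """Time to copy all state to a newly activated spare, in hours."""
    seconds = state_bytes / copy_rate_bytes_per_s
    return seconds / 3600.0

# 1 TB copied at 100 MB/s: 10^12 / 10^8 = 10,000 s, i.e. roughly 2.8 hours.
hours = failover_hours(1e12, 100e6)
```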


\subsection{Design}
We have built Gnothi\footnote{``Gnothi S'auton''
($\Gamma\nu\tilde{\omega}\theta\iota~\sigma '\alpha\upsilon\tau\acute{o}\nu$)
%(\textgreek{Gn\wc ji s''aut\oa n})
 is the ancient Greek aphorism ``Know thyself''.}~\cite{Wang12Gnothi},
a new block storage system that achieves high availability,
low cost, and fast recovery simultaneously.
Gnothi replicates data to guarantee availability and durability
when replicas fail. To guarantee correctness despite timing errors, Gnothi uses $2f+1$ replicas to perform asynchronous state
machine replication~\cite{Lamport98Part,Lamport01Paxos,Schneider90Implementing}. To reduce network bandwidth, disk arm
overhead, and storage cost, Gnothi executes updates to different blocks on different subsets of replicas. The key challenge
is to perform partial replication while not hurting availability or durability. Gnothi
meets this challenge by using two key ideas.

{\bf Separating data and metadata.} To ensure availability during failure and recovery,
Gnothi separates data from metadata so that metadata is
replicated on all replicas while data for a given block is replicated
only to a preferred subset for that block. A replica's metadata keeps
the status of each block in the system, including whether the replica
holds the block's current version. Replicating metadata to all
replicas allows a replica to always process a request correctly, even
while it is recovering after having missed some updates.

{\bf Reserve storage.} To ensure durability during failures, Gnothi
reserves a small fraction (e.g., 10\%) of storage on each
replica to buffer writes intended for unavailable replicas. While up to $f$
of a block's \emph{preferred replicas} are unresponsive, Gnothi
buffers writes in the reserve storage of up to $f$ of the block's
available \emph{reserve replicas}. Directing writes to a reserve
replica when a block's preferred replica is unavailable guarantees
that each new update is always written to $f+1$ replicas even if some
replicas fail. Gnothi allows a tradeoff between availability
and space cost: data is writable in the face of $f$ failures as long
as failed nodes are repaired before the reserve space is exhausted.
To guarantee write availability regardless of failure duration or
repair time, conservative users can configure the system with the same
space as asynchronous full replication ($2f+1$ actual storage blocks
per logical block). Because Gnothi replicas recover
quickly, our analysis of several traces shows that a 10\% reserve is
enough to guarantee write availability for many workloads.
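The write-redirection policy just described can be sketched as follows; this is an illustrative model under our own naming, not Gnothi's actual code:

```python
def write_targets(preferred, unresponsive, all_replicas, f):
    """Pick f+1 replicas to store a block's data: available preferred
    replicas first, then reserve replicas standing in for unresponsive
    preferred ones (hypothetical sketch of the policy in the text)."""
    targets = [r for r in preferred if r not in unresponsive]
    reserves = [r for r in all_replicas
                if r not in preferred and r not in unresponsive]
    # Each unresponsive preferred replica is covered by one reserve replica.
    while len(targets) < f + 1 and reserves:
        targets.append(reserves.pop(0))
    if len(targets) < f + 1:
        raise RuntimeError("cannot reach f+1 replicas; write must wait")
    return targets
```

With $f=1$ and preferred quorum $\{0,1\}$, a failure of replica 1 redirects the write to replica 2's reserve storage, so the update is still stored on two replicas.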

Gnothi combines those ideas to ensure availability and durability during
 failures and to make recovery fast despite partial replication. To achieve
fast recovery, Gnothi separates metadata recovery from data recovery: in the metadata
recovery phase, a recovering replica fetches the missing metadata from other
replicas. When this phase is complete,
the recovering replica can process new requests since it has complete
knowledge of which blocks are up-to-date and which blocks are stale.
Then, in the data recovery phase, the recovering replica re-replicates
the missing blocks in the background to restore their durability.
Separating metadata and data recovery ensures that a recovering
replica will eventually catch up regardless of the rate at which new requests are processed.
Furthermore, compared to a state-of-the-art Paxos-based block replication system,
it provides more throughput during recovery, since a recovering
replica can begin participating in the processing of new requests earlier.


\subsubsection{Overview}
\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=0.5\textwidth]{gnothi_arch.pdf}}
\caption{\label{graph:gnothi_design} Data and metadata flow for a request to update a block.}
\end{figure}

As shown in Figure \ref{graph:gnothi_design}, Gnothi uses the Replicated State
Machine (RSM) approach~\cite{Schneider90Implementing}: agreement modules on different replicas
 work together to guarantee that all replicas process the same client update requests
in the same order. Requests are then logged
and executed, and replies are sent to the clients.

Gnothi splits metadata and data. Metadata is updated using state machine replication
and is replicated at all $2f+1$ replicas, but data is replicated to just $f+1$ replicas.
A replica marks a data block as \emph{COMPLETE} or \emph{INCOMPLETE}
depending on whether the replica holds what it believes to be the block's current version.

\begin{itemize}

\item A block is \emph{COMPLETE} at a replica if the replica stores a version
of the block's data that corresponds to the latest update to the block recorded
in that replica's metadata.

\item A block is \emph{INCOMPLETE} at a replica if the replica's metadata records
a version of the block that is more recent than the latest data stored at the
replica for that block.
\end{itemize}
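The two states reduce to a one-line predicate; a minimal sketch, assuming integer versions (the names are ours, not Gnothi's):

```python
def block_status(meta_version, stored_version):
    """COMPLETE iff the replica's stored data matches the latest version
    recorded in its metadata; otherwise (older data, or no data at all)
    the block is INCOMPLETE."""
    if stored_version is not None and stored_version == meta_version:
        return "COMPLETE"
    return "INCOMPLETE"
```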

When no failures or timeouts occur, Gnothi maps each block $n$ to
one of $2f+1$ \emph{slices} and stores each slice on $f+1$ preferred
replicas, from replica $n\%(2f+1)$ to replica $(n-f)\%(2f+1)$. This ensures
that the $2f+1$ slices are evenly distributed among the
replicas, and that each replica is in the preferred quorum of $f+1$
different slices, which are the \emph{PREFERRED} slices for
that replica.
When failures or timeouts occur, a data block might be pushed to reserve storage
on replicas outside its preferred quorum.
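Under this layout, a block's preferred quorum is a simple function of its number; a sketch, assuming the quorum runs from replica $n\%(2f+1)$ ``down'' to $(n-f)\%(2f+1)$ as described above:

```python
def preferred_quorum(block, f):
    """The f+1 preferred replicas for a block under the slice layout."""
    n = 2 * f + 1
    s = block % n          # slice index of the block
    return [(s - i) % n for i in range(f + 1)]

# With f = 1 (three replicas), each replica is preferred for exactly
# f+1 = 2 of the three slices, so load is evenly distributed.
```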

To simplify the description, we say that a block is \emph{PREFERRED} at a replica if
the replica is a member of the block's preferred quorum. Otherwise, we say that
the block is \emph{RESERVED} at that replica. We similarly say a request (read, write)
is \emph{PREFERRED}/\emph{RESERVED} at a replica if it accesses a
\emph{PREFERRED}/\emph{RESERVED} block at the replica.


Each replica allocates preferred storage for the data of \emph{PREFERRED} writes,
reserve storage for the data of \emph{RESERVED} writes, and a relatively small
metadata store recording each block's version and status.

\subsubsection{Protocol Overview}

\begin{figure*}[t]
\centerline{\includegraphics[angle=0, width=0.8\textwidth]{gnothi_protocols.pdf}}
\vspace{-1ex}
\caption{\label{graph:shard-allprotocols} Gnothi protocols. We only show
the logical flow of data and metadata in the figure, not actual messages. Consensus messages
among servers are omitted, and we only show the flow and state for a single block. Multiple
blocks are distributed among servers, so that each replica holds
both \emph{COMPLETE} and \emph{INCOMPLETE} blocks.}
\vspace{-3ex}
\end{figure*}


This section presents an overview of Gnothi's protocol and compares Gnothi with the
asynchronous full replication used by Paxos~\cite{Lamport98Part,Lamport01Paxos},
the asynchronous partial replication used by Cheap Paxos~\cite{Lamport04Cheap}, and
the state-of-the-art Paxos-based block replication system Gaios~\cite{Bolosky11Paxos}.
We leave the details of Gnothi's protocol to \cite{Wang12Gnothi}.


Figure \ref{graph:shard-allprotocols}.a shows a write operation when no failures occur.
In Paxos and Gaios, a write operation is sent to, and executed on, all correct replicas.
This seems redundant if our goal is to tolerate one failure: a natural idea is to send
the write requests to two replicas first, and if they do not respond in time,
try the third one~\cite{Malek05Fault}. Cheap Paxos adopts this idea by activating two replicas
and leaving the other one as a cold backup~\cite{Lamport04Cheap,Wood11ZZ}. Gnothi incorporates a similar idea,
but it still sends the metadata to the third replica, which executes the request by
marking the corresponding data block as \emph{INCOMPLETE}. Later, we will see
that this metadata is critical to reducing the cost of failure and recovery.
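The failure-free write can be sketched as a single replica-side step, executed after agreement (a hypothetical model; \texttt{state} is our own bookkeeping, not Gnothi's data structures):

```python
def apply_write(replica_id, block, version, data, preferred, state):
    """Execute one agreed-upon write at a replica: all 2f+1 replicas
    update metadata, but only the f+1 preferred replicas store the
    data; the remaining replicas just mark the block INCOMPLETE."""
    state["meta"][block] = version
    if replica_id in preferred:
        state["data"][block] = (version, data)
        state["status"][block] = "COMPLETE"
    else:
        # Metadata-only execution on the non-preferred replica.
        state["status"][block] = "INCOMPLETE"
```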


Figure \ref{graph:shard-allprotocols}.b shows a read operation when no failures occur.
In Paxos, the read is sent to all replicas and the client waits for two replies. The figure
shows a common optimization that lets one replica send back the full reply and lets the others
send back a hash or version number~\cite{castro02practical}. By using similar optimizations
for its writes, Cheap Paxos executes the read on only two replicas. Gaios introduces a protocol that
allows reads to execute on only one replica while still ensuring linearizability, and Gnothi uses Gaios's read protocol,
with a slight modification to avoid reading \emph{INCOMPLETE} blocks.


Figure \ref{graph:shard-allprotocols}.c shows what happens when one
replica fails. Paxos and Gaios do not need special handling since the
remaining two replicas hold all data.  Cheap Paxos brings online the
 cold backup, which needs to fetch the data from the live
replica: the system is unavailable until this transfer finishes,
possibly for a long time if the system stores a large amount of
data. In Gnothi, the third replica knows whether a block it
stores is \emph{COMPLETE} or not, so it can safely continue processing
read requests by serving reads of \emph{COMPLETE} blocks and
redirecting reads of \emph{INCOMPLETE} ones to the other replica. It
can also continue processing writes to blocks whose preferred quorum
includes the failed replica, by storing their data in its reserve
storage. Therefore, Gnothi also does not need any special handling
when a replica becomes unavailable.
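The read-side behavior during a failure reduces to a status check; a sketch under the same hypothetical bookkeeping (a real replica would also tell the client which peer holds the current version):

```python
def serve_read(state, block):
    """Serve a read of a COMPLETE block locally; otherwise signal the
    client to retry at another replica."""
    if state["status"].get(block) == "COMPLETE":
        version, data = state["data"][block]
        return ("OK", data)
    return ("RETRY_ELSEWHERE", None)
```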

Figure \ref{graph:shard-allprotocols}.d shows a write operation when a replica is unavailable.
Paxos, Gaios, and Cheap Paxos do not need any special handling. For Gnothi,
a replica may receive a \emph{RESERVED} write and store it in its reserve
storage to ensure that writes only complete when at least two nodes store their data.
Read operations in this case proceed exactly as in the failure-free case.

Figure \ref{graph:shard-allprotocols}.e shows how recovery works. Paxos and Gaios both need to fetch
 all missing data before processing new requests at the recovered replica. Cheap Paxos
can just leave the recovered replica as the cold backup and does not need any special
handling. Gnothi performs a two-phase recovery when a failed replica recovers.

In the first phase, the recovering replica fetches missing metadata
from others. Since metadata is updated on all replicas, this phase of
recovery proceeds as in a traditional RSM. After this phase is
complete, the recovering replica can serve write requests even though
full recovery is not complete yet: at this point the system stops
consuming additional reserve storage on other replicas. Since the size
of metadata is small, this phase is fast, and thus it is not necessary
to allocate a large reserve storage.
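Phase one can be sketched as a metadata merge: adopt any newer versions from peers, then recompute each block's status (an illustrative model only, with our own names):

```python
def recover_metadata(local_meta, local_data_versions, peer_meta):
    """Adopt newer metadata from peers; blocks whose stored data no
    longer matches the merged version become INCOMPLETE. Afterwards the
    replica knows exactly which blocks are up-to-date and which are stale."""
    status = {}
    for block in set(local_meta) | set(peer_meta):
        version = max(local_meta.get(block, -1), peer_meta.get(block, -1))
        local_meta[block] = version
        ok = local_data_versions.get(block) == version
        status[block] = "COMPLETE" if ok else "INCOMPLETE"
    return status
```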

In the second phase, the recovering replica re-replicates all missing or stale \emph{PREFERRED}
blocks. Gnothi performs this step asynchronously, so it can balance recovery
bandwidth and execution bandwidth while still guaranteeing progress. Depending on
the status of the recovering replica, there are two possible cases here: if all data on disk
is lost, the recovering replica needs to rebuild its whole disk; if the data on
disk is preserved, the recovering replica just needs to fetch the
updates it missed during its failure. Note that
Gnothi can continue processing reads and writes to all blocks during the second phase. If
a node receives a read request for an \emph{INCOMPLETE} block, it rejects the request, and the client
retries with another replica.
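Phase two is then a background sweep over the replica's stale \emph{PREFERRED} blocks (again a sketch; \texttt{fetch} stands in for reading the current version from a replica that holds it):

```python
def rereplicate(state, preferred_blocks, fetch):
    """Restore durability by copying each missing or stale PREFERRED
    block from a replica that stores it; in the real system this runs
    in the background, concurrently with new reads and writes."""
    for block in preferred_blocks:
        if state["status"].get(block) != "COMPLETE":
            version, data = fetch(block)   # read from a COMPLETE replica
            state["data"][block] = (version, data)
            state["status"][block] = "COMPLETE"
```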

Table \ref{table:compare} summarizes the costs of Gnothi and of
previous work. In read cost, write cost, and space, Gnothi matches or
improves on Paxos, Gaios, and Cheap Paxos in every dimension, and
improves on each in at least one. For recovery and
availability, Gnothi can perform the heavy data transfer in the
background while concurrently serving new requests, whereas in Paxos and
Gaios, the recovering replica must wait for the transfer to finish,
and in Cheap Paxos, the whole system must halt until the transfer
completes.

\begin{table*}[ht]
    \begin{footnotesize}
    \begin{center}
    \begin{tabular}{ | c | c | c | c | c | c | c | }
    \hline
    Protocol &      Write & Read & Space &    Failure & Recovery (Disk survived) & Recovery (Disk replaced)             \\ \hline
    Paxos &         $2f+1$ &  $2f+1$ &  $2f+1$ &             0          &   $O(NB)$    & $O(S)$      \\ \hline
    Gaios &         $2f+1$ &  1 &      $2f+1$ &             0          &   $O(NB)$    & $O(S)$     \\ \hline
    Cheap Paxos &   $f+1$ &   $f+1$ &  $f+1+f$ (cold) &     $O(S)$ (blocking) & 0   & 0            \\ \hline
    Gnothi &        $f+1$ &   1 &      $f+1+\Delta f$, $0<\Delta\le1$ & 0 & $O(Nb)+O(NB)$ & $O(Nb)+O({f+1\over 2f+1}S)$      \\ \hline

    \end{tabular}
    \vspace{-2ex}
    \caption{\label{table:compare} Cost of Gnothi and previous work; S is the total storage space.
    N is the number of unique updated blocks missed by the recovering replica;
    B is the block size, and b is the metadata size for each block.}
   \vspace{-6ex}
    \end{center}
    \end{footnotesize}
\end{table*}

\begin{figure}
\centering
\begin{minipage}{.5\textwidth}
  \centering
  \includegraphics[width=1\linewidth]{gnothi_random.pdf}
  \caption{\label{graph:random} Random I/O with 3 ($f$=1) and 5 ($f$=2) servers.}
\end{minipage}%
\begin{minipage}{.5\textwidth}
  \centering
  \includegraphics[width=1\linewidth]{gnothi_rereplicate.pdf}
  \caption{\label{graph:rereplicate} Failure recovery (re-replicate).}
\end{minipage}
\end{figure}

\subsection{Evaluation results}
We implement Gnothi by modifying the ZooKeeper server~\cite{Hunt10ZooKeeper}.
We evaluate Gnothi's performance both in the common case
and during failure recovery and compare it with Gaios, a state-of-the-art Paxos-based
block replication system~\cite{Bolosky11Paxos}.
Here we show a subset of our results; the rest can be found
in \cite{Wang12Gnothi}.
As shown in Figure~\ref{graph:random}, Gnothi's write throughput
can be 40--64\% higher than that of G', our implementation of a Gaios-like system,
while retaining Gaios's excellent read scalability.
We also find that for systems with large amounts of state, separating data and metadata
significantly improves recovery compared to traditional state machine replication.
As shown in Figure~\ref{graph:rereplicate},
compared to standard Paxos-based systems like G', Gnothi achieves 100\% to 200\% more write
throughput during recovery, and it ensures that a recovering
replica will eventually catch up regardless of the rate at which new requests are processed.
Unlike previous partial replication systems such as Cheap Paxos, Gnothi remains available
even while large amounts of state are rebuilt on recovering replicas.




