\section{Design}

\subsection{Interface and Model}

{\ourSystem} targets disk storage systems
within small clusters of tens of machines.
Because linearizability is composable,
it is possible to scale {\ourSystem} by composing multiple small clusters:
this is discussed in Chapter~\ref{chapter-salus}.

{\ourSystem} provides an interface similar to a disk drive's: a fixed
number of fixed-size blocks, each of which applications can read or
write in its entirety. The block size is configurable; our experiments use
sizes ranging from 4KB to 1MB, but smaller or larger sizes are possible.
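To make the interface concrete, the following Python sketch models the block abstraction described above. The class and method names are illustrative only, not part of {\ourSystem}'s actual API.

```python
class BlockStore:
    """Minimal model of a disk-like store: fixed-size blocks,
    whole-block reads and writes only (illustrative, not the real API)."""

    def __init__(self, num_blocks, block_size=4096):
        self.block_size = block_size
        # All blocks start zero-filled, as on a fresh disk.
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]

    def read(self, n):
        # Reads return an entire block.
        return self.blocks[n]

    def write(self, n, data):
        # Writes replace an entire block; partial writes are not supported.
        assert len(data) == self.block_size
        self.blocks[n] = data
```

A larger or smaller block size only changes the unit of transfer, not the interface.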

{\ourSystem} provides linearizable reads and writes across different clients.
Furthermore, if a client has multiple outstanding requests,
{\ourSystem} can be configured so that they will be executed in the order they were issued.

{\ourSystem} is designed to be safe under the asynchronous model: the network can drop, reorder, modify, or
arbitrarily delay messages. {\ourSystem} therefore makes no assumption about the
maximum communication delay between nodes, which makes it impossible to tell whether
a node has failed or is merely slow.
{\ourSystem} provides the same guarantees as previous asynchronous replicated state machines (RSMs): the system is
always safe (all correct replicas process the same sequence of updates), but it is only live (the system
guarantees progress) during periods when the network is available and message delivery is timely. {\ourSystem} uses
 $2f+1$ replicas to tolerate $f$ omission/crash failures. Commission/Byzantine failures are not
 considered.

\subsection{Architecture}

\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{figures/gnothi_arch.pdf}}
\caption{\label{graph:gnothi-design} Data and metadata flow for a request to update a block in slice 1.}
\end{figure}

As shown in Figure \ref{graph:gnothi-design}, {\ourSystem} uses the Replicated State
Machine (RSM) approach~\cite{Schneider90Implementing}: agreement modules on different replicas
 work together to guarantee that all replicas process the same client update requests
in the same order. Requests are then logged
and executed, and replies are sent to the client.

{\ourSystem} splits metadata and data. Metadata is updated using state machine replication
and is replicated at all $2f+1$ replicas, but data is replicated to just $f+1$ replicas.
A replica marks a data block as \emph{COMPLETE} or \emph{INCOMPLETE}
depending on whether the replica holds what it believes to be the block's current version.


\begin{itemize}

\item A block is \emph{COMPLETE} at a replica if the replica stores a version
of the block's data that corresponds to the latest update to the block recorded
in that replica's metadata.

\item A block is \emph{INCOMPLETE} at a replica if the replica's metadata records
a version of the block that is more recent than the latest data stored at the
replica for that block.
\end{itemize}
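The two states above reduce to a version comparison between a replica's metadata and the data it actually stores. The following Python check is a minimal illustration; the names are ours, not the implementation's.

```python
def block_status(metadata_version, stored_data_version):
    """Classify a block at a replica as COMPLETE or INCOMPLETE.

    metadata_version: latest update to the block recorded in this
    replica's metadata.
    stored_data_version: version of the data this replica actually
    stores for the block, or None if it stores no data.
    """
    if stored_data_version is not None and stored_data_version == metadata_version:
        return "COMPLETE"
    return "INCOMPLETE"
```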

\begin{figure*}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{figures/gnothi_protocols.pdf}}
\caption{\label{graph:gnothi-protocols} Gnothi protocols. We only show
the logical flow of data and metadata in the figure, not actual messages. Consensus messages
among servers are omitted, and we only show the flow and state for a single block. Multiple
blocks are distributed among servers, so that each replica holds
both \emph{COMPLETE} and \emph{INCOMPLETE} blocks.}
\end{figure*}

Note that the concepts of \emph{COMPLETE} and \emph{INCOMPLETE}
are different from those of \emph{Fresh} and \emph{Stale}. A block is \emph{Fresh}
if it contains the data of the latest update to that block and is \emph{Stale} if it
contains a previous version. In {\ourSystem}, a \emph{COMPLETE} block can
be \emph{Stale}. For example, this can happen when a node becomes disconnected and
misses both the data and metadata update. Section~\ref{read protocol} discusses how
to avoid reading a \emph{Stale} block.

When no failures or timeouts occur, {\ourSystem} maps each block $n$ to
one of $2f+1$ \emph{slices} and stores each slice on $f+1$ preferred
replicas, from replica $n\%(2f+1)$ to replica $(n-f)\%(2f+1)$. This ensures
that the $2f+1$ slices are evenly distributed among different
replicas, and that each replica is in the preferred quorum of $f+1$
different slices, which are the \emph{PREFERRED} slices for
that replica.


When failures or timeouts occur, a data block may be pushed to reserve storage
on replicas outside its preferred quorum. We present the detailed protocol
later.

To simplify the description, we say that a block is \emph{PREFERRED} at a replica if
the replica is a member of the block's preferred quorum; otherwise, we say that
the block is \emph{RESERVED} at that replica. Similarly, a request (read or write)
is \emph{PREFERRED}/\emph{RESERVED} at a replica if it accesses a
\emph{PREFERRED}/\emph{RESERVED} block at that replica.
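Under the block-to-replica mapping above, membership in a preferred quorum is a simple modular check. The sketch below follows the numbering in the text (replicas $n \bmod (2f+1)$ down to $(n-f) \bmod (2f+1)$); the function names are ours.

```python
def preferred_quorum(n, f):
    """Preferred replicas for block n: replicas n, n-1, ..., n-f mod (2f+1)."""
    return {(n - i) % (2 * f + 1) for i in range(f + 1)}

def is_preferred(replica, n, f):
    """True if block n is PREFERRED at this replica; otherwise it is RESERVED there."""
    return replica in preferred_quorum(n, f)
```

With $f = 1$, for example, each of the three replicas belongs to the preferred quorums of exactly two slices, matching the claim that every replica has $f+1$ \emph{PREFERRED} slices.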


Each replica allocates preferred storage for the data of \emph{PREFERRED} writes,
reserve storage for the data of \emph{RESERVED} writes, and a relatively small
metadata store that records each block's version and status.




\subsection{Protocol Overview}

This section presents an overview of {\ourSystem}'s protocol and compares {\ourSystem} with the
asynchronous full replication used by Paxos~\cite{Lamport98Part,Lamport01Paxos},
the asynchronous partial replication used by Cheap Paxos~\cite{Lamport04Cheap}, and
the state-of-the-art Paxos-based block replication system Gaios~\cite{Bolosky11Paxos}.


Figure \ref{graph:gnothi-protocols}.a shows a write operation when no failures occur.
In Paxos and Gaios, a write operation is sent to, and executed on, all correct replicas.
This seems redundant if our goal is to tolerate one failure: a natural idea is to send
the write requests to two replicas first, and if they do not respond in time,
try the third one~\cite{Malek05Fault}. Cheap Paxos adopts this idea by activating two replicas
and leaving the other one as a cold backup~\cite{Lamport04Cheap,Wood11ZZ}. {\ourSystem} incorporates a similar idea,
but it still sends the metadata to the third replica, which executes the request by
marking the corresponding data block as \emph{INCOMPLETE}. Later, we will see
that this metadata is critical to reducing the cost of failure and recovery.


Figure \ref{graph:gnothi-protocols}.b shows a read operation when no failures occur.
In Paxos, the read is sent to all replicas and the client waits for two replies. The figure
shows a common optimization that lets one replica send back the full reply and lets the others
send back a hash or version number~\cite{castro02practical}. By using similar optimizations
for its writes, Cheap Paxos executes the read on only two replicas. Gaios introduces a protocol that
allows reads to execute on only one replica while still ensuring linearizability, and {\ourSystem} uses Gaios's read protocol,
with a slight modification to avoid reading \emph{INCOMPLETE} blocks.


Figure \ref{graph:gnothi-protocols}.c shows what happens when one
replica fails. Paxos and Gaios do not need special handling since the
remaining two replicas hold all data.  Cheap Paxos brings online the
 cold backup, which needs to fetch the data from the live
replica: the system is unavailable until this transfer finishes,
possibly for a long time if the system stores a large amount of
data. {\ourSystem} likewise needs no special handling: the third
replica knows whether each block it stores is \emph{COMPLETE}, so it
can safely continue processing read requests by serving reads of
\emph{COMPLETE} blocks and redirecting reads of \emph{INCOMPLETE} ones
to the other live replica. It can also continue processing writes to blocks
whose preferred quorum includes the failed replica by storing their data
in its reserve storage.
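The read-side behavior during a failure reduces to a local status check, which the following hypothetical handler illustrates (the names and return convention are ours, not the actual protocol messages):

```python
class Replica:
    """Illustrative per-replica state (names are ours)."""
    def __init__(self):
        self.storage = {}  # block -> data
        self.status = {}   # block -> "COMPLETE" / "INCOMPLETE"

def handle_read(replica, block):
    # Serve the read locally only if this replica's copy is current;
    # otherwise the client retries at another replica.
    if replica.status.get(block) == "COMPLETE":
        return ("OK", replica.storage[block])
    return ("REDIRECT", None)
```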


Figure \ref{graph:gnothi-protocols}.d shows a write operation when a replica is unavailable.
Paxos, Gaios, and Cheap Paxos need no special handling. In {\ourSystem},
a replica may receive a \emph{RESERVED} write and store its data in reserve
storage, ensuring that a write completes only after at least $f+1$ replicas store its data.
Read operations in this case are the same as in the failure-free case.

Figure \ref{graph:gnothi-protocols}.e shows how recovery works when a replica that has missed
some writes recovers or when a new replica replaces a lost one.
Paxos and Gaios both need to fetch
 all missing data before processing new requests at the recovered replica. Cheap Paxos
can just leave the recovered replica as the cold backup and does not need any special
handling. {\ourSystem} performs a two-phase recovery when a failed replica recovers.

In the first phase, the recovering replica fetches missing metadata
from the other replicas. Since metadata is updated on all replicas, this phase of
recovery proceeds as in a traditional RSM. The recovering replica then proceeds
to store and mark as \emph{COMPLETE} all the data blocks it has received; all
remaining blocks referred to in the received metadata are marked as \emph{INCOMPLETE}.
By the end of the first phase, the recovering replica can serve write requests even though
full recovery is not complete yet: at this point the system stops
consuming additional reserve storage on other replicas. Since the size
of metadata is small, this phase is fast, and thus it is often not necessary
to allocate a large reserve storage.
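Conceptually, phase one replays the metadata missed during the failure and then classifies each affected block. The sketch below uses an illustrative replica model with names of our choosing, not the actual recovery code.

```python
class Replica:
    """Illustrative per-replica state (names are ours)."""
    def __init__(self):
        self.metadata = {}  # block -> latest version
        self.storage = {}   # block -> (version, data)
        self.status = {}    # block -> "COMPLETE" / "INCOMPLETE"

def recover_phase_one(replica, missed_metadata):
    """missed_metadata maps block -> latest version recorded by the
    other replicas while this replica was down."""
    for block, version in missed_metadata.items():
        replica.metadata[block] = version
        stored = replica.storage.get(block)
        if stored is not None and stored[0] == version:
            replica.status[block] = "COMPLETE"
        else:
            # Data is missing or stale; phase two re-replicates it
            # in the background.
            replica.status[block] = "INCOMPLETE"
```

Because only small metadata moves in this phase, the replica can resume accepting writes quickly, which is why other replicas stop consuming reserve storage at this point.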

In the second phase, the recovering replica re-replicates all missing or stale \emph{PREFERRED}
blocks. {\ourSystem} performs this step asynchronously, so it can balance recovery
bandwidth and execution bandwidth while still guaranteeing progress. Depending on
the status of the recovering replica, there are two possible cases here: if all data on disk
is lost, the recovering replica needs to rebuild its whole disk; if the data on
disk is preserved, the recovering replica just needs to fetch the
updates it missed during its failure. Note that
{\ourSystem} can continue processing reads and writes to all blocks during the second phase. If
a node receives a read request for an \emph{INCOMPLETE} block, it rejects the request, and the client
retries with another replica.

\subsection{Summary}
Tables~\ref{table:compare1} and \ref{table:compare2} summarize the costs of {\ourSystem} and of
previous work. In read cost, write cost, and space, {\ourSystem}
dominates Paxos, Gaios, and Cheap Paxos, improving on each in at least
one dimension and matching or closely approaching it in the others. For recovery and
availability, {\ourSystem} can perform the heavy data transfer in the
background concurrently with serving new requests, while in Paxos and
Gaios, the recovering replica must wait for the transfer to finish before
serving requests, and in Cheap Paxos, the whole system must halt until the transfer
completes.

\begin{table*}[ht]
    \begin{footnotesize}
    \begin{center}
    \begin{tabular}{ | c | c | c | c | }
    \hline
    Protocol &      Write & Read & Space              \\ \hline
    Paxos &         $2f+1$ &  $2f+1$ &  $2f+1$       \\ \hline
    Gaios &       $2f+1$ &  $1$ &     $2f+1$      \\ \hline
    Cheap Paxos &   $f+1$ &   $f+1$ &   $f+1+f$ (cold)             \\ \hline
    {\ourSystem} &         $f+1$ &   $1$ &     $f+1+\Delta f$ ($0<\Delta\le 1$)    \\ \hline

    \end{tabular}
    \caption{\label{table:compare1} Cost of {\ourSystem} and previous work when there are no
    failures.}
    \end{center}
    \end{footnotesize}
\end{table*}

\begin{table*}[ht]
    \begin{footnotesize}
    \begin{center}
    \begin{tabular}{ | c |  c | c | c | }
    \hline
    Protocol &          Failure & Recovery (disk survived) & Recovery (disk replaced)             \\ \hline
    Paxos &                                       $0$          &   $O(NB)$  	& $O(S)$      \\ \hline
    Gaios &                                     $0$               &  $O(NB)$		& $O(S)$     \\ \hline
    Cheap Paxos &                    $O(S)$ (blocking) & $0$		& $0$            \\ \hline
    {\ourSystem} &             $0$               & $O(Nb)+O(NB)$          &    $O(Nb)+O\bigl(\frac{f+1}{2f+1}S\bigr)$      \\ \hline

    \end{tabular}
    \caption{\label{table:compare2} Cost of {\ourSystem} and previous work during failure and recovery.
    S is the total storage space,
    N is the number of unique updated blocks missed by the recovering replica,
    B is the block size, and b is the metadata size for each block.}
    \end{center}
    \end{footnotesize}
\end{table*}





