\section{Detailed Design}
\label{design}

This section presents in detail how {\ourSystem} stores and accesses
data and metadata, and how it performs recovery after a replica
fails.

\subsection{Data and Metadata}

{\ourSystem} splits the storage space into $2f+1$ slices; each
replica belongs to the preferred quorum of $f+1$ of these slices. A
replica stores the data of its $f+1$ \emph{PREFERRED} slices in its
preferred storage and allocates space for $f$ \emph{RESERVED} slices
in its reserve storage. When all replicas are available, blocks are
always written to preferred storage, but when some replicas are not
available, blocks are stored in the reserve storage of replicas
outside the block's preferred quorum.

If the per-slice sizes of preferred and reserve storage are the same,
then the system can remain available indefinitely even if $f$ replicas
fail, but at the cost of $2f+1$ physical blocks for each logical
block. In Section~\ref{section-reducing}, we will show that, given
{\ourSystem}'s fast recovery, a much smaller reserve storage is likely
to suffice for many workloads. For now, let us assume that preferred
and reserve storage have the same per-slice size.

In processing updates, {\ourSystem} separates data and metadata. The data is carried in a PrepareData message, while
the corresponding metadata is carried in a WriteData message; we
will detail the messages' format in the following
subsections. A client first sends PrepareData; upon receiving the message, a replica logs it to disk and then stores it in a buffer until it
receives the corresponding WriteData and can perform the actual write. We call this buffer the PrepareData buffer in the following
sections. To avoid overflowing a replica's PrepareData buffer, {\ourSystem} bounds the number of
outstanding PrepareData requests it will buffer for a single client. If a replica finds that its PrepareData buffer for
a client is full, it stops receiving messages from that client until the buffer has room.
Once it knows that the PrepareData has been stored by enough replicas, the client proceeds to send the
WriteData message. Replicas run an agreement protocol to guarantee that WriteData messages are processed in
the same order by all correct replicas.

It would be tempting for the clients to send PrepareData and WriteData messages in parallel.
However, this optimization does not work, because a client failure may leave the system in a state where metadata (WriteData)
is fully replicated while data (PrepareData) is completely lost: each replica would mark the
corresponding block as \emph{INCOMPLETE}, and the system would be unable to process read requests for
this block. Sending PrepareData before WriteData---the approach used in {\ourSystem}---eliminates this problem, but it introduces
a problem of its own: a buffered PrepareData on a replica may never be written to its target location. For example, a client
could fail after sending the PrepareData message but before sending the
WriteData message. To garbage-collect such buffered PrepareDatas, a client includes a client
sequence number with each PrepareData message and WriteData message it sends, and a replica discards
a buffered PrepareData if it processes a WriteData message with a higher sequence number. When the client
fails and recovers, it sends to all replicas a special ``new epoch'' command.
Replicas process the new epoch command using the same agreement protocol used to order
WriteData messages: hence, by the time replicas enter a new epoch, they have processed the same
sequence of WriteData messages. Once the new epoch
command completes, all replicas can discard all buffered PrepareDatas in the previous epoch.
Notice that if the failed client does not recover, replicas cannot discard its buffered PrepareDatas:
in an asynchronous system, it is impossible to know whether a client has permanently failed
or is just slow. If the cost of a few megabytes per permanently failed client is too high, the system can
rely on an administrator or on a very long timeout (say, one day) to detect the permanently
failed client and clear its buffer.
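As a concrete illustration, the buffering and garbage-collection rules above can be sketched as follows. This is a minimal sketch; the class, its method names, and the bound of 16 outstanding requests are our own choices, not Gnothi's implementation:

```python
class PrepareBuffer:
    """A replica's per-client PrepareData buffer (illustrative only)."""

    def __init__(self, max_outstanding=16):
        self.max_outstanding = max_outstanding
        self.buffers = {}  # client_id -> {client_seq_no: data}

    def add(self, client_id, seq_no, data):
        buf = self.buffers.setdefault(client_id, {})
        if len(buf) >= self.max_outstanding:
            return False  # buffer full: stop receiving from this client
        buf[seq_no] = data
        return True

    def on_write_data(self, client_id, seq_no):
        """Rule 1: processing a WriteData discards any buffered PrepareData
        from the same client with a lower sequence number."""
        buf = self.buffers.get(client_id, {})
        data = buf.pop(seq_no, None)  # the matching PrepareData, if buffered
        for stale in [s for s in buf if s < seq_no]:
            del buf[stale]
        return data

    def on_new_epoch(self):
        """Rule 2: once the agreed 'new epoch' command completes, all
        PrepareDatas buffered in the previous epoch can be discarded."""
        self.buffers.clear()
```

The back-pressure in `add` corresponds to the replica refusing further messages from a client whose buffer is full.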

{\ourSystem} keeps metadata for each block: an 8-byte version number
assigned by agreement to identify the block's last update, and an
8-byte requestID to connect the block to the PrepareData message. The
version number and requestID are primarily used in failure recovery;
we will show later why {\ourSystem} needs them and how they are used.
In addition, each replica keeps one bit for each block to identify
whether or not the block is \emph{COMPLETE} on the replica.
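This per-block state can be pictured as follows. The field names and the byte-layout helper are our own, chosen only to make the sizes concrete:

```python
import struct
from dataclasses import dataclass

@dataclass
class BlockMetadata:
    # Illustrative layout of Gnothi's per-block metadata.
    version: int     # 8 bytes, assigned by agreement; identifies the last update
    request_id: int  # 8 bytes; links the block to its PrepareData message
    complete: bool   # local only: is the block COMPLETE on this replica?

    def replicated_bytes(self):
        # Two unsigned 64-bit integers: 16 bytes of replicated metadata.
        # Together with the 8-byte block number, this accounts for the
        # 24 bytes transferred per block during metadata recovery.
        return struct.pack("<QQ", self.version, self.request_id)
```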

\subsection{Write Protocol}

The write protocol is illustrated in Figure~\ref{graph:gnothi-write}:

\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{figures/gnothi_write_protocol.pdf}}
\caption{\label{graph:gnothi-write} Write Protocol}
\end{figure}

\mymk{1} A client sends PrepareData(requestID, block data) to $f+1$
replicas, using the pair (clientID, clientSeqNo) to generate a unique requestID.
% To achieve
% a unique requestID we use the pair (clientID, clientSeqNo)~\cite{castro02practical,Hunt10ZooKeeper}.
At first, the client targets the block's preferred quorum, but if a timeout occurs, the client tries other
replicas. To prevent the client's network from becoming a bottleneck for sequential access, we use chain
replication~\cite{Guerraoui10Throughput,Renesse04Chain}: the client sends the data to one replica which
forwards it to the next, and so on.

\mymk{2} A replica receiving the PrepareData puts it into a PrepareData buffer, logs it to disk, and sends
a PrepareDataAck(requestID) to the client.

\mymk{3} The client waits for $f+1$ PrepareDataAcks. If there is a timeout, the client repeats Step \mymk{1},
choosing some other replicas. When the network is available and message delivery is timely, this step is
guaranteed to terminate as long as at least $f+1$ replicas are capable of processing requests.

\mymk{4} The client sends WriteData(requestID, block number) through the agreement protocol so that all
replicas receive the same sequence of write commands. {\ourSystem} uses code from ZooKeeper for agreement, but
other Paxos-like protocols~\cite{Bolosky11Paxos,clement09upright} could be used.

\mymk{5} After agreement, when a replica receives a WriteData message it updates its metadata storage and tries to find the corresponding
PrepareData in the PrepareData buffer by using the requestID as the identifier. There are three possible cases:

\mymk{5.1} The replica has both the WriteData and the corresponding PrepareData messages, and this
 is a \emph{PREFERRED} write for the replica. The replica
then writes the data to its preferred storage and marks the corresponding block as \emph{COMPLETE}.
Note that the write operation can be performed asynchronously, because the
PrepareData message is already logged to disk in Step \mymk{2}: as shown in Section~\ref{Gnothi-throughput},
writing asynchronously can improve the throughput of the system under random workloads because it gives the disks
more opportunities to reorder requests.

\mymk{5.2} The replica has the WriteData message but no corresponding PrepareData message.
 The replica then marks the corresponding block as \emph{INCOMPLETE}.

\mymk{5.3} The replica has both the WriteData and the corresponding PrepareData messages,
 and this is a \emph{RESERVED} write for the replica.
The replica then writes the data to the reserve storage and marks the corresponding block as \emph{COMPLETE}. This case happens
only when there are unavailable or slow replicas, so it is not shown in Figure~\ref{graph:gnothi-write}.

In all cases, the replica sends a WriteAck(requestID) back to the client.
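Cases \mymk{5.1}--\mymk{5.3} can be sketched as a single dispatch on the replica. This is a simplified model: the replica fields are stand-ins, and the asynchronous preferred-storage write of Step \mymk{5.1} is shown as a direct assignment:

```python
def apply_write_data(replica, request_id, block_no):
    """Handle one agreed WriteData (cases 5.1-5.3); illustrative only."""
    data = replica.prepare_buffer.pop(request_id, None)
    replica.metadata[block_no] = request_id      # update metadata storage
    if data is None:
        # Case 5.2: no matching PrepareData -> mark INCOMPLETE.
        replica.complete[block_no] = False
    elif block_no in replica.preferred_blocks:
        # Case 5.1: PREFERRED write -> preferred storage, COMPLETE.
        # (In Gnothi this write can be performed asynchronously, since the
        # PrepareData was already logged to disk in Step 2.)
        replica.preferred_storage[block_no] = data
        replica.complete[block_no] = True
    else:
        # Case 5.3: RESERVED write -> reserve storage, COMPLETE.
        replica.reserve_storage[block_no] = data
        replica.complete[block_no] = True
    return ("WriteAck", request_id)              # sent in all cases
```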

\mymk{6} The client waits for $f+1$ WriteAcks. If there is a timeout, the client repeats Step \mymk{4}.
If the WriteData has already been processed, the replicas send a WriteAck reply~\cite{castro02practical};
otherwise, they process the write request.
Assuming there are at least $f+1$
functioning replicas, the client is guaranteed to get enough WriteAcks eventually.

To argue correctness, we observe that
Step \mymk{3} guarantees that PrepareData is received by at least $f+1$ replicas and thus will not be lost; the agreement
protocol guarantees that WriteData is eventually received by all correct replicas and thus will not be lost; and the
agreement protocol provides the write linearizability guarantee. Notice that in Step \mymk{6}, WriteAcks may not always come from
the same nodes that stored data and sent PrepareDataAcks in Step \mymk{2}, but this is not a problem since the system is still protected
against $f$ failures. If one of the nodes storing data
is slow, temporarily unavailable, or crashed but can recover locally, it will catch up with others using
standard techniques~\cite{Lamport98Part,Lamport01Paxos} and process the WriteData, so that the write will survive even if another node fails. Conversely,
if a node permanently loses its data, the recovery protocol must restore full redundancy by fetching the failed node's state from the remaining replicas;
this case is the same whether the node that received the PrepareData crashed before or after sending its WriteAck.
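Putting Steps \mymk{1}--\mymk{6} together, the client side of the write protocol reduces to two retry loops. In this sketch the network object is an assumed abstraction, not Gnothi's real interface: its first method sends PrepareData to the given targets and returns how many PrepareDataAcks arrived before a timeout, and its second submits WriteData through agreement and returns how many WriteAcks arrived:

```python
def client_write(net, request_id, block_no, data, f, preferred, others):
    """Client-side write protocol, Steps 1-6 (illustrative sketch)."""
    # Steps 1-3: send PrepareData until f+1 replicas have logged it.
    targets = list(preferred)
    while net.prepare_data(targets, request_id, data) < f + 1:
        targets = list(others)        # timeout: try other replicas
    # Steps 4-6: submit WriteData through agreement until f+1 WriteAcks
    # arrive; on timeout the client simply resubmits (Step 4).
    while net.write_data(request_id, block_no) < f + 1:
        pass
    return True
```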

\subsection{Read Protocol}
\label{read protocol}

For reads, we use the Gaios read protocol~\cite{Bolosky11Paxos}, modified slightly to handle
\emph{INCOMPLETE} blocks (Steps \mymk{2}--\mymk{4} below are the same as in Gaios):

\mymk{1} A client sends a Read (block number, replica ID) to the current agreement leader node, stating that
it wants to read that block from a specific target replica. Usually,
the target is the first replica in the block's preferred quorum.

\mymk{2} The leader buffers the Read request and queries all other replicas: ``Am I still the leader?''

\mymk{3} If the leader receives at least $f$ ``Yes'' responses, it
continues. Otherwise, it does nothing: a slow replica may still
believe it is the leader while enough other replicas have already
moved on. In this case, the client will time out, restart from Step
\mymk{1}, and try another replica as the leader.

\mymk{4} The leader attaches a version number to the Read and sends it to the target replica specified in the request.
The version number is set to the number of write requests already agreed. This number is used in Step \mymk{5}
to ensure that the target replica does not read stale data.

\mymk{5} The target replica waits until the write with the specified version number is executed, and then it executes the Read.
This synchronization prevents a slow replica from sending stale data to the client. There are two cases to consider:

\mymk{5.1} The corresponding block is \emph{COMPLETE}: the target
replica then sends the data to the client.

\mymk{5.2} The corresponding block is \emph{INCOMPLETE}: the target
replica sends an INCOMPLETE reply to the client. This allows the
client to move to the next replica quickly, instead of waiting for a
timeout.

\mymk{6} If the client receives the data, it finishes the Read. If it
receives INCOMPLETE or times out, it chooses another replica and
restarts from Step \mymk{1}. The client chooses the target replica in
round-robin fashion starting with the preferred quorum, so that all
replicas will be tried.

When no failures or timeouts occur, Step \mymk{5.1} will always happen, since the client chooses a node from the
preferred quorum as the target. When failures or timeouts occur, the client may try some other replicas,
but during a period with timely message delivery, it will eventually succeed since some replica must hold the data.
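The client side of the read protocol is essentially one loop over targets. In this sketch the \texttt{leader\_read} parameter is an assumed stand-in for Steps \mymk{2}--\mymk{5} (leader check and versioned read at the target replica):

```python
def client_read(leader_read, block_no, preferred, others):
    """Client-side read, Steps 1 and 6 (illustrative sketch)."""
    # Try replicas in round-robin order, starting with the preferred
    # quorum, so that all replicas will eventually be tried.
    for target in list(preferred) + list(others):
        reply = leader_read(block_no, target)  # data, "INCOMPLETE", or None
        if reply not in ("INCOMPLETE", None):
            return reply          # Step 6: got the data, read finishes
        # INCOMPLETE lets the client move on immediately, without
        # waiting for a timeout (None models a timeout).
    return None                   # no replica answered; retry later
```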

Note that if the client issues a read and then a write to the same block before the read returns, the
read can return the result of the later write. {\ourSystem} assumes this is an acceptable behavior for
block drivers~\cite{Bolosky11Paxos}, but a client can prevent it by blocking the later write when
there is an outstanding read operation to the same block.

\subsection{Failure and Recovery}
\label{section-recovery}
{\ourSystem} performs no special operations when replicas fail.
A client may timeout in the read or write protocol and retry using some other replicas, or write data to
some replicas not in the preferred quorum, which will store these \emph{RESERVED} writes in their reserve
storage.


Recovering a failed replica begins with replaying the replica's log.
If the disk is damaged or the machine is entirely replaced, this step
may fail but correctness is not affected.  What cannot be recovered
from the log is fetched from the other replicas in two phases: first
to be restored is the metadata, and then any data missing from the
failed replica's preferred slices.  The recovering replica can process
new requests once the first phase is complete, and it is fully
recovered and no longer counts against our $f$ threshold when the
second phase is complete.

\subsubsection{Phase 1: Metadata recovery}

{\ourSystem} replicates metadata on each node, so metadata recovery proceeds as it would in traditional
RSMs: the recovering replica sends the primary the last version number it is aware of, and the primary
replies with the metadata records, if any, that have higher version numbers. Besides the version number,
each of these records includes a block number and a requestID.

For each received record, the replica then checks if it holds in its
buffer a PrepareData with the same requestID: if so, it executes the
write request and marks the block as \emph{COMPLETE}. This check
handles the case when a replica receives a PrepareData but fails
before receiving the corresponding WriteData. In this case, the
recovering replica should finish executing the write request, and the
requestID is necessary to connect a PrepareData to its block. If there
is no PrepareData in the buffer with the same requestID, the replica
simply marks the block as \emph{INCOMPLETE}.
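The record-processing loop above can be sketched as follows; the replica fields are illustrative stand-ins:

```python
def recover_metadata(replica, records):
    """Phase 1 of recovery (illustrative sketch).

    records: (version, block_no, request_id) tuples received from the
    primary, covering every write the recovering replica missed.
    """
    for version, block_no, request_id in records:
        replica.metadata[block_no] = (version, request_id)
        data = replica.prepare_buffer.pop(request_id, None)
        if data is not None:
            # A PrepareData survived in the buffer: finish the write.
            replica.storage[block_no] = data
            replica.complete[block_no] = True
        else:
            # No matching PrepareData: the data is fetched in Phase 2.
            replica.complete[block_no] = False
```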

When metadata recovery is complete, it is safe for the replica to process new requests, even though it may have
some \emph{INCOMPLETE} blocks: an update will simply overwrite an \emph{INCOMPLETE} block, and a read will
be redirected to another replica that holds the \emph{COMPLETE} block.

{\ourSystem} transfers 24 bytes of metadata for each block during this
phase. This is 6GB per terabyte of data using 4KB blocks and 24MB per
terabyte for 1MB blocks, so the first phase typically takes a few
seconds to a few minutes to complete. Note that during this metadata transfer, the other replicas continue to process new reads and writes.
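The quoted transfer sizes follow directly from the 24 bytes of per-block metadata:
\[
\frac{1\,\mathrm{TB}}{4\,\mathrm{KB}} \times 24\,\mathrm{B}
  \;\approx\; 2.7\times 10^{8} \times 24\,\mathrm{B} \;\approx\; 6\,\mathrm{GB},
\qquad
\frac{1\,\mathrm{TB}}{1\,\mathrm{MB}} \times 24\,\mathrm{B}
  \;\approx\; 10^{6} \times 24\,\mathrm{B} \;=\; 24\,\mathrm{MB}.
\]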

\subsubsection{Phase 2: Re-replicate}

In the second phase, the recovering replica retrieves from the other replicas the data for all the \emph{INCOMPLETE} blocks
in its preferred storage, eventually allowing those replicas to free their reserve storage.
If a replica retains its data on its local
disk, it only needs to fetch the modified blocks; this case typically occurs when a replica crashes and recovers,
becomes temporarily disconnected from the network, or becomes temporarily slow. If a replica loses its on-disk data
as a result of a hardware fault,
it needs to rebuild its storage by fetching all blocks in its preferred slices.

This phase can take a long time, depending on how many blocks need to be fetched, but it is needed only to
free the reserve space of other nodes so that they are better equipped to mask future failures: once the replica recovers
its metadata, it can process all writes to its slices, and it can process reads to the subset of blocks that are locally
\emph{COMPLETE}. {\ourSystem} performs re-replication as a background task that can be throttled to balance
the resources used for re-replication and for processing new client requests. Even if new client requests
are processed at a high rate and re-replication proceeds at a low rate, re-replication will still eventually
complete because the recovering replica's metadata allows it to process new requests while it is still catching
up re-replicating missed old updates.

Every replica periodically checks its reserve storage: for every \emph{RESERVED} block, the reserved replica
communicates with the block's preferred replicas to check whether the block is \emph{COMPLETE},
and if a \emph{RESERVED} block is \emph{COMPLETE}
on all its preferred replicas, then the reserved replica can safely delete the block from its reserve storage.
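This periodic check can be sketched as follows; the inter-replica query is abstracted as a predicate, and both parameter names are our own:

```python
def gc_reserve_storage(reserve_storage, preferred_replicas_of, is_complete_on):
    """Periodic reserve-storage scan (illustrative sketch).

    A RESERVED block can be deleted once it is COMPLETE on all of its
    preferred replicas; is_complete_on(replica, block_no) stands in for
    the inter-replica query.
    """
    for block_no in list(reserve_storage):
        if all(is_complete_on(r, block_no)
               for r in preferred_replicas_of(block_no)):
            del reserve_storage[block_no]  # safe: f+1 replicas hold it
```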

\subsection{Reducing replication state}
\label{section-reducing}
Each replica needs to reserve space for $f$ \emph{RESERVED} slices. It is
always safe to set the size of reserve storage to be $f$ times a slice size, so that it can absorb
any number of writes to each slice. This approach pays a storage cost of $2f+1$ physical blocks per
logical block, since a data block is
stored on a preferred quorum of $f+1$ replicas and the other $f$ replicas must reserve space for this
block in their reserve storage:
when $f=1$, a replica must then allocate one third of its
storage space for reserve storage, and more when $f$ is larger.
This is the same space overhead as the standard approach of Paxos or Gaios, which may
be acceptable. When reducing replication costs is a concern, however,
{\ourSystem} also enables allocating less space for reserve storage.
The risk of this thriftier approach is that if a failed replica does not recover or is not replaced soon,
the reserve storage can fill, preventing the system from processing additional writes. However, filling the reserve storage
does not put safety at risk, since data is always written to $f+1$ replicas.
In general, {\ourSystem} can allocate less space for reserve storage in any of the following cases:
1) the workload is read-heavy; 2) the workload
is write-heavy but dominated by random writes so that the throughput is low;
3) the workload is write-heavy but has good locality.
Our analysis of several disk traces suggests that, as long as the metadata is recovered quickly,
allocating 10\% of disk space as reserve storage is enough to guarantee write availability for many workloads.

Specifically, we analyze two sets of traces from Microsoft: one is collected by Microsoft Research Cambridge~\cite{Narayanan08WriteOffLoading}
and it consists of 23 one-week disk traces under different workloads; the other is collected on Microsoft's production servers~\cite{Kavalanekar08Characterization}
and consists of 44 disk traces, whose lengths vary from six hours to one day. We choose these two sets of traces
because they are recent and because they contain a variety of workloads including compiling,
MSN Storage, SQL Server, computation, etc.
We calculate the maximum usage ratio
for each trace: $\mathit{MaxUsage}(T)$ is the maximum number of distinct sectors written during any time
interval of length $T$, divided by the total number of sectors.
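A single sliding-window pass over a trace computes this metric. The sketch below illustrates the metric itself, not our actual analysis scripts:

```python
def max_usage(trace, T, total_sectors):
    """MaxUsage(T): the maximum number of distinct sectors written in any
    time window of length T, divided by the total number of sectors.
    trace: list of (timestamp, sector) writes, sorted by timestamp.
    """
    counts, distinct, best, left = {}, 0, 0, 0
    for t, sector in trace:
        counts[sector] = counts.get(sector, 0) + 1
        if counts[sector] == 1:
            distinct += 1
        # Shrink the window so it only covers [t - T, t].
        while trace[left][0] < t - T:
            old = trace[left][1]
            counts[old] -= 1
            if counts[old] == 0:
                distinct -= 1
            left += 1
        best = max(best, distinct)
    return best / total_sectors
```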

In the Microsoft Cambridge Traces, only two of the
23 traces write to more than 10\% of the disk space in a week. Even for the heaviest one, reserving 10\%
always allows at least 10 minutes to finish Phase 1 and recover all metadata before the system
becomes unavailable for writes. A conservative administrator may
reserve more space for this workload.

In the Microsoft Production Server Traces,
38 of the 44 disk traces write to less than 10\% of the disk space over their duration. Even for the heaviest one, reserving
10\% always allows at least 10 minutes to complete Phase 1.


\subsection{Metadata}
\label{gnothi-metadata}

Each replica stores both local and replicated metadata for every block. The local metadata consists of the \emph{COMPLETE} bit
for each block, and the replicated metadata includes the version number and requestID for each block.

In {\ourSystem}, caching the \emph{COMPLETE} bit of each block in
memory is feasible in both size and cost. For example, even with small
4KB blocks, each 1TB of disk storage requires only about 30MB of
\emph{COMPLETE} bits. In May 2012, a commodity 2TB internal hard drive
costs about \$120 and a common 4GB memory DIMM costs about
\$25. This means that keeping \emph{COMPLETE} bits in memory adds about
0.3\% to the dollar cost of the disk data it tracks. {\ourSystem}
regularly stores checkpoints of the \emph{COMPLETE} bits by writing
the current state to local files.

The block number, version number, and requestID are 8 bytes each, and it would be costly for {\ourSystem} to keep them all in memory.
{\ourSystem} uses a metadata storage design similar to that of BigTable~\cite{chang06bigtable,Mammarella09Modular}.
Each {\ourSystem} node maintains in a local key-value store the mapping from logical block ID to version number and requestID. Metadata updates are first logged to disk, as
described before. To update a record, {\ourSystem} then puts the record
in a memory buffer; when the buffer is full, {\ourSystem} sorts the buffer by key and writes the
whole buffer to a new file. A background thread merges these files when there are too many of them.
Metadata writes and merges are fast, since they are sequential writes to disk. Our microbenchmark shows that this approach can sustain a throughput of about
200K writes per second, which is enough for our needs. Reading from metadata storage only occurs when {\ourSystem}
recovers a crashed or slow replica
by fetching metadata from another replica: this case requires a sequential scan of the metadata, which
is again fast.
Individual read operations do not access metadata storage, since a read operation only needs to access the \emph{COMPLETE} bit.
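The buffered, sorted-flush design can be sketched as a miniature of the scheme. This is illustrative only: Gnothi's store also logs updates and merges runs on disk, both of which we elide here:

```python
class MetadataStore:
    """BigTable-style buffered metadata store (illustrative sketch)."""

    def __init__(self, buffer_limit=4):
        self.buffer_limit = buffer_limit
        self.buffer = {}   # block_no -> (version, request_id)
        self.runs = []     # flushed, key-sorted runs (newest last)

    def put(self, block_no, version, request_id):
        self.buffer[block_no] = (version, request_id)
        if len(self.buffer) >= self.buffer_limit:
            # Sort by key and write the whole buffer out sequentially;
            # a background thread would merge runs when there are too many.
            self.runs.append(sorted(self.buffer.items()))
            self.buffer = {}

    def scan(self):
        # Recovery-time sequential scan: newest value wins per key.
        merged = {}
        for run in self.runs:
            merged.update(dict(run))
        merged.update(self.buffer)
        return sorted(merged.items())
```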





