\subsection{Active storage}\label{sec:active}

Active storage provides the abstraction of a region server that does
not experience arbitrary failures or lose data. \sys uses active
storage to keep data available and durable despite
arbitrary failures in the storage system, addressing two key
limitations of existing scalable storage systems. First, they replicate data
at the storage layer (e.g., HDFS) but leave the computation layer
(e.g., HBase) unreplicated. As a result, the computation layer that
processes clients' requests is a single point of failure in an
otherwise robust system: a bug in computing the checksum
of data or a corruption of a \rs's memory can lead to data loss
and data unavailability in systems like HBase. Second, the storage
layer usually replicates data with three-way synchronous primary-backup
protocols, which keep costs low
but leave the system vulnerable to uncommon errors.

To prevent a \rs from being a single point of failure,
the design of \sys uses a new {\em active storage} architecture
that embodies a simple principle: all changes to
persistent state should happen with the consent of a quorum of
nodes. \sys uses these {\em compute quorums} to protect its data from
faults in its \rs{s}: when a compute quorum sends data to the storage layer,
all nodes in the quorum must unanimously agree on a certificate that
carries the information needed to validate the data; the certificate
accompanies the data to the storage layer, so that the storage nodes
can verify its integrity.
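To make the principle concrete, here is a minimal sketch of how a compute quorum might produce such a certificate and how a storage node might check it. The HMAC-based signatures, the shared key table, and the quorum size of three are assumptions made for illustration, not \sys's actual wire format.

```python
import hashlib
import hmac

# Hypothetical per-region-server signing keys; in the real system each
# region server would hold its own private key.
KEYS = {rs: f"key-{rs}".encode() for rs in ("rs1", "rs2", "rs3")}

def attest(rs: str, data: bytes) -> dict:
    """One region server's contribution: a checksum of the data plus a
    signature over that checksum."""
    checksum = hashlib.sha256(data).hexdigest()
    sig = hmac.new(KEYS[rs], checksum.encode(), hashlib.sha256).hexdigest()
    return {"rs": rs, "checksum": checksum, "sig": sig}

def verify_certificate(attestations: list, data: bytes) -> bool:
    """Storage-side check: all three attestations must be present,
    correctly signed, and agree on the checksum of the received data."""
    if len(attestations) != 3:
        return False
    expected = hashlib.sha256(data).hexdigest()
    for a in attestations:
        good_sig = hmac.new(KEYS[a["rs"]], a["checksum"].encode(),
                            hashlib.sha256).hexdigest()
        if a["checksum"] != expected or a["sig"] != good_sig:
            return False
    return True
```

Because every attestation is checked against the received data, a single faulty \rs cannot forge a certificate for corrupted data.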

To tolerate arbitrary failures in the storage layer,
the active storage architecture incorporates the central idea of Gnothi: to tolerate two failures, it
replicates data on just three nodes, but replicates metadata (certificates)
on five nodes. By using separate policies
when replicating data and metadata, \sys can
identify the correct version of the data despite arbitrary failures
at minimal replication cost.
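The resulting data/metadata split can be sketched as follows; the five-node layout matches the description above, but the node dictionaries are illustrative stand-ins for \Dn{s} and witnesses.

```python
def place_replicas(block: bytes, certificate: str, nodes: list) -> list:
    """Write the data to three nodes and the certificate to all five,
    following the data/metadata split: three full replicas, two
    witnesses.  Nodes are modeled as plain dictionaries."""
    assert len(nodes) == 5
    for n in nodes[:3]:           # full replicas: data plus certificate
        n["data"] = block
        n["cert"] = certificate
    for n in nodes[3:]:           # witnesses: certificate only
        n["cert"] = certificate
    return nodes
```

Since certificates are small, the two witnesses add almost no storage cost while raising the metadata replication factor to five.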


Using active storage, \sys can provide strong availability and
durability guarantees: a data block
remains available and durable as long as no more than two
of the physical machines processing the block fail.\footnote{
A data block is processed by three \rs{s}, three \Dn{s},
and two witness nodes. \sys allows a \rs to be colocated with a \Dn
or a witness node on the same physical machine. \sys can tolerate
two physical machine failures, which means that two pairs of colocated
processes can be faulty, but at least one \rs/\Dn pair is correct.}
These guarantees hold irrespective of whether the nodes fail by
crashing (omission) or by corrupting their disk, memory, or logical
state (commission).

Replication typically incurs network and storage overheads. \sys uses
three key ideas---(1) moving computation to data,
(2) using unanimous consent quorums, and (3) separating data from metadata---to ensure that active
storage does not incur significant network or storage overhead compared to existing
approaches. Perhaps surprisingly, in addition to improving fault-resilience, active
storage also improves performance by trading
relatively cheap CPU cycles for expensive network bandwidth.

\subsubsection{Moving computation to data to minimize network usage}
\label{sec:moving}

\sys implements active storage by
blurring the boundary between the storage layer and the computation
layer. Existing storage
systems~\cite{hbase,chang06bigtable,calder11windows} require a
designated primary \Dn to mediate updates. In contrast, \sys modifies the storage
system API to permit \rs{s} to directly update any replica of a
block. Using this modified interface, \sys can efficiently implement
active storage by colocating a computation node (\rs) with the storage
node (\Dn) that it needs to access.

Active storage thus reduces bandwidth utilization in exchange for
additional CPU usage (Section~\ref{section:aggregate})---an attractive
trade-off for bandwidth-constrained data centers. In particular, because a
\rs can now update the colocated \Dn without using the network,
\sys avoids the bandwidth overheads that flushing and compaction
(Section~\ref{sec:background}) incur in HBase.

We have implemented active storage in HBase by changing the NameNode
API for allocating blocks. As in HBase, to create a block a \rs sends
a request to the NameNode, which responds with the new block's
location; but where the HBase NameNode makes its placement decisions
in splendid solitude, in \sys the request to the NameNode includes a
list of preferred DataNodes as a {\em location-hint}.  The hint biases
the NameNode toward assigning the new block to DataNodes hosted on the
same machines that also host the \rs{s} that will access the
block. The NameNode follows the hint unless doing so violates its
load-balancing policies.
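A minimal sketch of this hint-biased allocation, assuming a simple per-node load limit in place of the NameNode's real load-balancing policies (all names here are hypothetical):

```python
def allocate_block(preferred: list, live_datanodes: list,
                   max_load: int, load: dict) -> list:
    """Choose three DataNodes for a new block: honor the location-hint
    first, but skip hinted nodes that would violate the load limit,
    then fill up with the least-loaded remaining nodes."""
    chosen = []
    # Honor hinted nodes first, skipping overloaded ones.
    for dn in preferred:
        if dn in live_datanodes and load[dn] < max_load:
            chosen.append(dn)
    # Fill the remainder with the least-loaded other nodes.
    for dn in sorted(live_datanodes, key=lambda d: load[d]):
        if len(chosen) == 3:
            break
        if dn not in chosen:
            chosen.append(dn)
    return chosen
```

When the hint is fully honored, the block lands exactly on the machines hosting the \rs{s} that will access it; otherwise placement degrades gracefully to load-based selection.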

Loosely coupling the \rs{s} and \Dn{s} of a block in this way yields
\sys significant network bandwidth savings (Section~\ref{section:aggregate}). Why
then not go all the way---eliminate the HDFS layer and have each \rs
store its state on its local file system?  The reason is that
maintaining flexibility in block placement is crucial to the
robustness of \sys: our design allows the NameNode to continue to
load-balance and re-replicate blocks as needed, and makes it easy for
a recovering \rs to read state from any \Dn that stores it, not just
its own disk.

\subsubsection{Using unanimous consent to reduce replication overheads}\label{sec:unanimous}

To prevent a failed \rs from propagating errors to \Dn{s}, \sys asks the
\rrs to agree on the \pput{s} to be stored. BFT agreement,
the traditional solution to this problem, requires $3f+1$ replicas
and thus is usually perceived as too expensive in practice.
\sys reduces this threshold to its minimal value of $f+1$
by leveraging two observations. First, a key challenge in BFT agreement---a
faulty leader proposing different execution orders to
different participating servers---does not exist in \sys, because
(1) \sys relies on the single client allowed to perform writes on
a given volume to specify the order in which \pput{s} should execute\footnote{
Of course, the client itself can be faulty, but no guarantees can be
offered to the user anyway if an unreplicated client fails.}
and (2) each \rs can validate a client's
requests independently. This allows \sys to reduce by $f$ the number
of replicas it needs to guarantee safety~\cite{Clement2012NonEquivocation}.
Second, \rs{s} do not have any persistent
state: their state is stored on the \Dn{s}. Therefore, a \rs that fails
on one machine can be recovered on another machine by reading data from
the relevant \Dn{s}. This allows \sys to further reduce its replication cost
by $f$ in the failure-free case, down to $f+1$, by sending requests to
only a minimal preferred quorum~\cite{Cowling06Hq} and failing over to a different quorum if
the preferred quorum fails to make progress.

Taking advantage of these two observations, \sys uses unanimous-consent quorums for \pput{s}:
the replicated \rs{s} check clients' requests independently, reach
unanimous consent for any operation that updates the state,
and generate a certificate proving the legitimacy of the update.


This approach guarantees safety despite two commission failures because
an update must be agreed upon by three \rs{s}, each of which can validate requests independently.
However, the failure of any of the replicated \rs{s} can prevent unanimous
consent. To ensure liveness, \sys replaces any \rrs that is not
making adequate progress with a new set of \rs{s}, which read all
state committed by the previous \rs quorum from the \Dn{s} and resume
processing requests. This fail-over protocol is a slight variation of
the one already present in HBase to handle failures of unreplicated
\rs{s}.  If a client detects a problem with a \rrs, it sends a
\emph{RRS-replacement request} to the \m, which first attempts
to get all the nodes of the existing \rrs to relinquish their
leases; if that fails, the \m coordinates with ZooKeeper to
prevent lease renewal. Once the previous \rrs is known to be
disabled, the \m appoints a new \rrs. Then \sys performs
the recovery protocol as described in Section~\ref{sec:recovery}.
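The fail-over sequence can be modeled as follows; leases are reduced to a simple boolean map, the cooperative/unresponsive distinction stands in for real message exchanges, and the \m/ZooKeeper interactions are only mimicked, not reproduced.

```python
def replace_rrs(leases: dict, old_rrs: list, new_rrs: list,
                cooperative: set) -> list:
    """Fail-over sketch: the old replicated region server group (RRS)
    must be provably disabled before a new one is appointed.  `leases`
    maps each RS name to True while its lease is valid; `cooperative`
    is the set of old RSs that answer the relinquish request."""
    # 1. The Master first asks every node of the old RRS to give up
    #    its lease voluntarily.
    for rs in old_rrs:
        if rs in cooperative:
            leases[rs] = False
    # 2. If some node did not comply, the Master (via ZooKeeper in the
    #    real system) blocks lease renewal, so the remaining leases
    #    simply expire; we model expiry by clearing them.
    if any(leases[rs] for rs in old_rrs):
        for rs in old_rrs:
            leases[rs] = False
    # 3. Only once the old RRS is known to be disabled does the Master
    #    appoint the new RRS, which then recovers state from the DataNodes.
    for rs in new_rrs:
        leases[rs] = True
    return [rs for rs in leases if leases[rs]]
```

The essential invariant is that step 3 never runs while any old lease is still valid, so two RRSs can never write concurrently.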


\subsubsection{Separating data from metadata to reduce storage overheads}
Three \Dn{s} are not enough to tolerate two commission failures in an asynchronous
environment: during failure recovery, if two nodes do not respond in time and
the one remaining node provides a seemingly valid version
of the data, it is impossible for the system to decide
how to proceed.
On the one hand, that data may be a stale version provided
by a faulty node, so accepting it may cause the system to lose a suffix
of the data. On the other hand, waiting for the other nodes to respond
is unacceptable, since those nodes may have failed permanently.
Indeed, it has been proved that at least five execution
replicas are needed to tolerate two commission failures in an asynchronous
environment~\cite{Yin03Separating}.

The lesson we learned from Gnothi, however, is that
it is not necessary for all five of these replicas to store all data: in
\sys, only three \Dn{s} store data and certificates, while the
remaining two serve as witnesses, storing only the certificates.
As in Gnothi, during recovery (see Section~\ref{sec:recovery}), the
fully replicated certificates help the system identify
the correct data.
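A sketch of why fully replicated certificates let recovery pick the correct data: with five certificate copies and at most two arbitrary failures, at least three copies agree, and any data replica matching the agreed checksum is safe to use. The structures here are illustrative, with a certificate reduced to a bare checksum string.

```python
import hashlib

def pick_correct_replica(cert_copies: list, data_replicas: list):
    """Recovery sketch: a certificate (here just a checksum string) is
    trusted only if at least three of the five stored copies agree;
    the trusted checksum then identifies a correct data replica."""
    for cert in set(cert_copies):
        if cert_copies.count(cert) >= 3:
            trusted = cert
            break
    else:
        return None               # no quorum of matching certificates
    for data in data_replicas:
        if hashlib.sha256(data).hexdigest() == trusted:
            return data           # matches the trusted certificate
    return None
```

A stale or corrupted replica can never match the trusted checksum, so recovery either finds a correct replica or waits for one, but never accepts wrong data.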


\subsubsection{Active storage protocol}\label{sec:pipeliningrs}

To provide the other components of \sys with the abstraction of a
correct region server, the \rs{s} within a \rrs are organized in a chain.
In response to a client's \pput request or to perform a
periodic task (such as a flush or compaction), the {\em primary} \rs
(the first replica in the chain) forwards the request (the \pput request or
a special flush or compact request) to all \rs{s} in the chain.
After executing the request, the \rs{s} in the \rrs coordinate to create
a {\em certificate} attesting that all replicas executed the
request in the same order and obtained identical responses.
The components of \sys (such as the client, NameNode, and \m) that use
active storage to make data persistent require all messages from a
\rrs to carry such a certificate: this guarantees that there are no spurious changes
to persistent data as long as at least one \rs and its
corresponding \Dn do not experience a commission failure.


\input{salus_fig_active_node_steps}


Figure~\ref{fig:active-node-steps} shows how active storage refines
the pipelined commit protocol for \pput requests.

\begin{itemize}

\item Step \circledNum{1}. The \pput issued by a client is received by the primary \rs
as part of a sub-batch. The primary \rs must ensure the safety
property that the system processes correct requests in the correct order.
To make this possible, the \pput request carries additional
metadata: the
client ID, the checksum of the data, the sequence number and the \texttt{prevNum}
assigned by the client (see Section~\ref{sec:pipeline}), and
the client's signature over all of this information. Upon receiving a \pput, the primary
\rs validates it independently by checking that (1) the signature matches the client ID, (2)
the checksum matches the data, and (3) the \texttt{prevNum} equals
the sequence number of the previous \pput received for this region. If the validation succeeds,
the primary \rs forwards the request down the chain of replicas, and each replica
performs the same validation independently. With this design, a \rs
cannot accept a corrupted \pput or miss
a \pput from the client, as long as the client does not experience
commission failures.

\item Step \circledNum{2}. Each \rs writes the \pput requests to the \Dn
that the NameNode assigned to its log file, and waits for the \Dn to confirm that
the data is persistent. Note that at this point no unanimous consent has been
reached, and the \Dn cannot validate the correctness of the data beyond
checking that it comes from the expected \rs; the \Dn simply
logs the data optimistically, hoping that unanimous consent will be
reached later. If it is not, the data will be garbage-collected by the
recovery protocol (see Section~\ref{sec:recovery}). One may wonder why the \Dn
cannot verify the data directly using the mechanism described in
step \circledNum{1}. The reason is that a \Dn, as a component of HDFS, is designed to process
append-only logs and does not understand the semantics of the \pput/\get
interface of HBase; it is the task of the \rs to convert a \pput request
into an append operation on a \Dn. During this conversion, a \rs may split its
log into multiple files, and HDFS may further split a file into multiple blocks.
In addition, both the HBase layer and the HDFS layer add information
to the log, such as timestamps. As a result, what a \Dn receives is a mixture
of clients' \pput{s}, possibly broken into pieces, and the additional information
from HBase and HDFS, which makes it impossible for the \Dn to verify \pput{s}
directly.

\item Step \circledNum{3}. For the data it has just written, each \rs generates
a \textsc{tentative\_log\_entry} certificate,
which contains the file ID (assigned by the NameNode) of
the log, the starting position of the data in the file, the
length of the data, the checksum of the data, and a signature. The certificate not
only validates the data's content, but also its location, which is critical because
the order of \pput{s} in log files can affect recovery. Each \rs then broadcasts its
\textsc{tentative\_log\_entry} certificate to the three \Dn{s} and
two witness nodes. If a \Dn or witness node receives three correctly signed and matching
\textsc{tentative\_log\_entry} certificates, it combines them into a \textsc{log\_entry} certificate and stores it.
Note that at this time a \Dn does not even need to verify the data against the \textsc{log\_entry} certificate: such
verification can be delayed until recovery. A \textsc{log\_entry} certificate serves two purposes.
First, it proves the correctness of the data, since the data has been agreed upon by three \rs{s}, at least
one of which is correct; this proof allows \sys to validate data during recovery.
Second, it implicitly proves that the data is already
stored on at least one correct \Dn: at least one \rs/\Dn
pair is correct, and the correct \rs proposes
its \textsc{tentative\_log\_entry} certificate only after it has written the data to the
correct \Dn. As we will see, this property is important in recovery.
After storing the \textsc{log\_entry} certificate, a \Dn or witness node replies to
all \rs{s}, and a \rs considers the procedure complete once it receives five replies.
Any failure during the procedure will, of course, block execution and eventually
trigger recovery.

\item Step \circledNum{4}. Each \rs independently sends its vote (\textsc{yes} if
step \circledNum{3} completed successfully and \textsc{no} otherwise) to the leader
of the batch to which the \pput belongs and,
if it voted \textsc{yes}, waits for the decision. On
receiving \commit, the \rs{s} mark the request as committed, update
their in-memory state, and generate a \textsc{tentative\_put\_success} certificate;
on receiving \abort, the \rs{s} instead generate a
\textsc{tentative\_put\_failed} certificate. A client considers a request complete
when it receives a \textsc{put\_success} certificate, which consists of three
matching \textsc{tentative\_put\_success} certificates from the three \rs{s}.
Otherwise, the block driver triggers recovery if there is proof
of misbehavior by any of the servers (e.g., no unanimous consent) and retries
the request if there is no such proof (e.g., a timeout); the block driver also
triggers recovery if it has retried a request several times and the servers still
cannot make progress. The \textsc{put\_success} certificate
proves that at least one correct \rs believes that a
correct \textsc{log\_entry} certificate has been successfully
generated and stored, which in turn implies that all the correct
\Dn{s} and witness nodes must already have
stored the \textsc{log\_entry} certificate.

\end{itemize}
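The independent per-replica validation of step \circledNum{1} can be sketched as follows; the HMAC signature scheme, the shared client key, and the flat request dictionary are assumptions made for illustration, not the actual request format.

```python
import hashlib
import hmac

# Stand-in for the client's signing key (hypothetical).
CLIENT_KEY = b"client-signing-key"

def sign(req: dict) -> str:
    """Client-side signature over the request metadata."""
    msg = f'{req["client_id"]}|{req["checksum"]}|{req["seq"]}|{req["prev"]}'
    return hmac.new(CLIENT_KEY, msg.encode(), hashlib.sha256).hexdigest()

def validate_pput(req: dict, expected_prev: int) -> bool:
    """The three independent checks each region server performs:
    signature, data checksum, and prevNum continuity."""
    if req["sig"] != sign(req):                        # check (1)
        return False
    if hashlib.sha256(req["data"]).hexdigest() != req["checksum"]:
        return False                                   # check (2)
    return req["prev"] == expected_prev                # check (3)
```

Because every replica runs the same deterministic checks, a corrupted or out-of-order \pput is rejected at each \rs independently, with no communication among replicas.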

Similar changes are required to leverage active storage for
flushing and compaction. Unlike \pput{s}, these operations are
initiated by the primary \rs: the other \rs{s} use predefined
deterministic criteria, such as the current size of the memstore, to
verify whether the proposed operation should be performed.
The write protocol for flushes and compactions, however, can be
greatly simplified: the \rs{s} of a \rrs only need to agree on a hash of the
generated file at the end of a flush or compaction, and store
the hash on the NameNode if unanimous consent is reached.
If a failure occurs, the \m can simply delete the incomplete
files and instruct a new \rrs to retry the flush or compaction:
this is possible because the data is already stored in the logs and can be
recovered---a luxury obviously not available for the log files themselves.
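The agreement on the generated file's hash can be sketched as follows; the NameNode is modeled as a plain dictionary, and the three-replica unanimity check is the only essential element.

```python
import hashlib

def commit_flush(namenode_hashes: dict, file_id: str,
                 replica_files: list) -> bool:
    """Flush/compaction commit sketch: each RS hashes the file it
    generated, and the hash is recorded on the NameNode only if all
    three replicas produced an identical file."""
    digests = {hashlib.sha256(f).hexdigest() for f in replica_files}
    if len(digests) != 1:
        # No unanimous consent: in the real system the Master would
        # delete the incomplete files and have a new RRS retry.
        return False
    namenode_hashes[file_id] = digests.pop()
    return True
```

Only one small hash per file reaches the NameNode, so the agreement step adds negligible metadata overhead to flushes and compactions.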

