\section{Salus: A robust and scalable block store}

\subsection{Motivation}

The primary directive of storage---not to lose data---is hard to
carry out: disks and storage sub-systems can fail in unpredictable ways~\cite{Ford10Availability,Bairavasundaram07AnalysisLatent,Bairavasundaram08AnalysisDataCorruption,Jiang08DisksDominant,Pinheiro07Failures,Schroeder07DiskFailures},
and so can the CPUs and memories of the nodes that are responsible for accessing the
data~\cite{Schroeder09DRAM,Nightingale11Cycles}.
Concerns about robustness become  even more pressing  in cloud storage
systems, which appear to their clients as black
boxes even as their larger size and complexity create
greater opportunities for error and corruption.

The major challenge is to achieve {\em scalability}
and {\em robustness} simultaneously. Some recent systems have provided end-to-end
correctness guarantees on distributed storage despite arbitrary node
failures~\cite{mahajan10depot,castro02practical,clement09upright}, but
these systems are not scalable---they require each correct node
to process at least a majority of updates.  Conversely, scalable distributed storage
systems~\cite{anderson96serverless,lee96petal,anderson00interposed,ghemawat03google,maccormick04boxwood,chang06bigtable,calder11windows,hbase,Thekkath97Frangipani}
typically protect some subsystems like disk
storage with redundant data and checksums, but fail to
protect the entire path from client \pput to client \get, leaving them
vulnerable to single points of failure that can
cause data corruption or loss.

Concretely, achieving robustness and scalability simultaneously
presents several challenges.

First, to build a high-performance block store
from low-performance disks, our system must be able to write different sets
of updates to multiple
disks in parallel. Parallelism, however, can threaten the basic
consistency requirement of a block store, as ``later'' writes may
survive a crash, while ``earlier'' ones are lost.

Second, aiming for efficiency and high availability at low cost can
have unintended consequences on robustness by introducing single
points of failure. For example, in order to maximize throughput and
availability for reads while minimizing latency and cost, scalable
storage systems execute read requests at just one replica.  If that
 replica experiences a {\em commission failure} that causes it
to generate erroneous state or output, the data returned to the client
could be incorrect.  Similarly, to reduce cost and for ease of design,
many systems that replicate their storage layer for fault tolerance
(such as HBase~\cite{hbase}) leave unreplicated the computation nodes that can
modify the state of that  layer: hence, a memory error or an
errant \pput at a single HBase \rs can irrevocably and undetectably
corrupt data.

Third, additional robustness should ideally not result in higher
replication cost.  For example, in a perfect world our system's ability to
tolerate commission failures would not require any more data
replication than a scalable key-value store such as HBase already
employs to ensure durability despite omission failures.



\subsection{Design}
We have built
Salus,\footnote{Salus is the Roman goddess of safety and welfare.} a
scalable block store in the spirit of Amazon's Elastic Block Store
(EBS)~\cite{amazonebs}: a user can request storage space from the
service provider, mount it like a local disk, and run applications
upon it, while the service provider replicates data for durability and
availability.

Salus provides strong end-to-end correctness guarantees for read
operations, strict ordering guarantees for write operations, and
strong durability and availability guarantees despite a wide range of
server failures (including memory corruptions, disk corruptions,
firmware bugs, etc.), and it leverages an architecture similar to that of
scalable key-value stores like Bigtable~\cite{chang06bigtable} and
HBase~\cite{hbase} to scale these guarantees to thousands of
machines and tens of thousands of disks.

To address the challenges mentioned previously, Salus introduces three
novel ideas: pipelined commit, active storage, and scalable end-to-end verification.
The key guideline behind these ideas is that metadata
should be strongly protected to ensure consistency during
failures, while data can be protected at minimal cost.

{\bf Pipelined commit.}  Salus' new pipelined commit protocol allows
large amounts of data to be processed in parallel at multiple disks but, by tracking
the necessary dependency metadata during failure-free execution,
guarantees that, despite failures, the system will be left in a
state consistent with the ordering of writes specified by the
client.


{\bf Active storage.} To prevent a single computation node from
corrupting data, Salus replicates both the storage and the computation
layer.  Salus applies an update to the system's persistent state only
if the metadata of the update is agreed upon by {\em all} of the replicated
computation nodes. We make two observations about active
storage. First, perhaps surprisingly, replicating the computation
nodes can actually improve system performance by moving the
computation near the data (rather than vice versa), a good choice when
network bandwidth is a more limited resource than CPU cycles.  Second,
by requiring the {\em unanimous consent} of all replicas before an
update is applied, Salus comes near to its perfect world with respect
to overhead: Salus remains safe (i.e.~keeps its blocks consistent and
durable) despite two {\em commission} failures with just three-way
replication---the same degree of data replication needed by HBase to
tolerate two permanent {\em omission} failures.  The flip side, of
course, is that insisting on unanimous consent can reduce the times
during which Salus is live (i.e.~its blocks are available)---but
liveness is easily restored by replacing the faulty set of computation
nodes with a new set that can use the storage layer to recover the
state required to resume processing requests.

{\bf Scalable end-to-end verification.} A Salus client maintains
sufficient metadata so that it can validate that each \get request returns
consistent and correct data: if not, the client can reissue the request to another replica. Reads
can then safely proceed at a single replica without leaving clients
vulnerable to reading corrupted data; more generally, such end-to-end
assurances protect Salus clients from the opportunities for error and
corruption that can arise in complex, black-box cloud storage
solutions. Concretely, Salus maintains a Merkle
tree~\cite{Merkle80Protocols} for each volume, and each
client caches part or all of the tree in memory.
Further, Salus' Merkle tree, unlike those used in other systems
that support end-to-end verification~\cite{Fu02fastand,sun06zfs,mahajan10depot,li04secure}, is
scalable: each server only needs to keep the sub-tree corresponding to
its own data, and the client can rebuild and check the integrity of
the whole tree even after failing and restarting from an empty state.

\subsubsection{Overview}
\begin{figure}[tpb]
\begin{center}
  \includegraphics[width=3in]{salus-arch}		
\end{center}
\caption{The architecture of Salus. Salus differs from HBase
  in three key ways. First, Salus' block driver performs end-to-end
  checks to validate the \get reply. Second, Salus performs pipelined
  commit across different regions to ensure ordered commit. Third, Salus replicates region servers via active
  storage to eliminate spurious state updates. For efficiency,
  Salus tries to co-locate the replicated \rs{s} with the replicated \dn{s}
  (\DN{s}). }
\label{fig:salus-arch}
\end{figure}

The architecture of Salus is shown in Figure~\ref{fig:salus-arch}.
Salus inherits its design from HBase~\cite{hbase} to achieve scalability
to thousands of machines. Furthermore, it incorporates our three key
techniques (pipelined commit, active storage, and
scalable end-to-end verification) without perturbing the scalability
of the original HBase design.

Figure~\ref{fig:salus-arch} also helps describe the role played by
our three novel techniques in the operation of Salus.

Every client request in Salus is mediated by the block driver, which
exports a virtual disk interface by converting the application's API
calls into Salus \get and \pput requests. The block
driver, as we saw, is the component in charge of performing Salus'
scalable end-to-end verification: for \pput
requests it generates the appropriate metadata, while for \get
requests it uses the request's metadata to check whether the data
returned to the client is consistent.

When the client issues a request to a replicated region server (\rrs), the
first responsibility of the \rrs is to ensure that the request commits
in the order specified by the client. This is where the pipelined
commit protocol becomes important: the protocol requires only minimal coordination
to enforce dependencies among requests assigned to distinct
\rrs{s}. If the request is a \pput, the \rrs also needs to ensure
that the data associated with the request is made persistent, despite
the possibility of individual \rs{s} suffering commission
failures. This is the role of active storage: the
responsibility of processing \pput requests is no longer assigned to a
single \rs, but is instead conditioned on the set of \rs{s} in the \rrs
achieving unanimous consent on the update to be performed.  Thanks to
Salus' end-to-end verification guarantees,  \get requests can instead be
safely carried out by a single \rs (with obvious
performance benefits), without running the risk that the client sees
incorrect data.




\subsubsection{Pipelined commit}

\begin{figure}[tpb]
\begin{center}
    \includegraphics[width=3in]{pipelined-writes.pdf}
\end{center}
\vspace{-2ex}
\caption{Pipelined commit (each batch leader is actually replicated to tolerate arbitrary faults.)}
\label{fig:pipeline}
\end{figure}

The goal of the pipelined commit protocol is to allow clients to concurrently
issue requests to multiple regions, while preserving the ordering specified
by the client (\oc). In the presence of even simple crash failures, however,
enforcing the \oc property can be challenging.

Consider, for example, a client that, after mounting a volume $V$ that
spans regions 1 and 2, issues a \pput $u_1$ for a block mapped
to region 1 and then, without waiting for the \pput to complete,
issues a barrier \pput $u_2$ for a block mapped to region 2. Untimely
crashes, even transient ones, of the client and of the region server
for region 1 may lead to $u_1$ being lost even as
$u_2$ commits. Volume $V$ now violates both standard disk
semantics and the weaker prefix semantics; further, $V$  is left in
an invalid state that can potentially cause severe data
loss~\cite{prabhakaran05iron,Chidambaram12NoFS}.

A simple way to avoid such inconsistencies would be to allow clients
to issue one request (or one batch of requests) at a time, but performance would suffer
significantly. Instead, we would like to achieve the good performance
that comes with issuing multiple outstanding requests, without
compromising the \oc property.
To achieve this goal, Salus parallelizes the bulk of the processing (such as
cryptographic checks and disk writes) required to
handle each request, while ensuring that requests commit in order
by passing the necessary metadata sequentially.

Salus ensures \oc by exploiting the sequence number that
clients assign to each request. \rrs{s} use these sequence numbers to
guarantee that a request does not commit unless the previous request
is also guaranteed to eventually commit. Similarly, during recovery,
these sequence numbers are used to ensure that a consistent prefix of
the issued requests is recovered.

Salus' pipelined commit protocol for \pput{s} is illustrated in
Figure~\ref{fig:pipeline}. The client, as in HBase, issues requests in
batches. Unlike HBase, each client is allowed to issue multiple
outstanding batches. Each batch is committed using a
  2PC-like protocol~\cite{Gray78Notes,Lampson76Crash}. 
  Unlike standard 2PC, pipelined commit must also guarantee
  commit ordering across multiple transactions. As shown in
  Figure~\ref{fig:pipeline}, this requires
  an extra message from the leader of the previous batch to
  the leader of the current batch in the failure-free case.
  Ensuring ordering between batches does, however, require
  a complex recovery protocol when some
  of the servers fail. We refer the reader to \cite{Wang13Salus}
  for the detailed protocol.
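The commit-ordering rule at the heart of the protocol can be sketched as follows. This is a minimal illustration with hypothetical names (\texttt{BatchLeader}, \texttt{try\_commit}); it omits the replication of batch leaders, the participant \rrs{s}, and the recovery protocol described in \cite{Wang13Salus}.

```python
# Sketch of the commit-ordering rule in pipelined commit (hypothetical
# names; the real protocol replicates each batch leader and must also
# handle recovery after failures).

class BatchLeader:
    def __init__(self, seq):
        self.seq = seq            # client-assigned batch sequence number
        self.prepared = False     # all participant RRSs have logged the puts
        self.committed = False

    def prepare(self):
        # Data transfer and logging at participants proceed in parallel,
        # both within a batch and across batches.
        self.prepared = True

    def try_commit(self, prev_leader):
        # Commit only once this batch is prepared AND the previous batch's
        # leader has confirmed its own commit: the commit-confirmation
        # message is the protocol's only serialization point.
        if self.prepared and (prev_leader is None or prev_leader.committed):
            self.committed = True
        return self.committed

# Two outstanding batches: disk writes proceed in parallel...
b1, b2 = BatchLeader(1), BatchLeader(2)
b2.prepare()                  # batch 2 may finish its writes first
assert not b2.try_commit(b1)  # ...but it cannot commit before batch 1
b1.prepare()
assert b1.try_commit(None)
assert b2.try_commit(b1)      # now batch 2 commits, preserving order
```

Note how a "later" batch that prepares first simply waits for the confirmation from its predecessor, so a crash can never leave $u_2$ committed while $u_1$ is lost.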


Notice that all disk writes---both within a batch and across
batches---can proceed in parallel and that the voting and commit
phases for a given batch can be similarly parallelized. Different \rrs{s}
receive and log the \pput and \textsc{commit} asynchronously. The only
serialization point is the passing of \textsc{commit-confirmation}
from the leader of a batch to the leader of the next batch. 

Despite its parallelism, the protocol ensures that requests commit in
the order specified by the client.  The presence of \textsc{commit} in
any correct \rs's log implies that all preceding \pput{s} in this
batch must have prepared. Furthermore, all requests in preceding
batches must have also prepared. Our recovery protocol
ensures that all these prepared \pput{s}
eventually commit without violating \oc.

By separating the data transfer from the passing of commit metadata, pipelined
commit achieves both parallel writes and ordering guarantees.

The pipelined commit protocol enforces \oc assuming the abstraction of
(logical) region servers that are correct. It is the {\em active
  storage} protocol that provides this abstraction to the pipelined
commit protocol, building it from physical \rs{s}
that {\em can} lose committed data and suffer arbitrary failures.

\subsubsection{Active storage}
Active storage provides the abstraction of a region server that does
not experience arbitrary failures or lose data. Salus uses active
storage to ensure that the data remains available and durable despite
arbitrary failures in the storage system by addressing a key
limitation of existing scalable storage systems: they replicate data
at the storage layer (e.g. HDFS) but leave the computation layer
(e.g. HBase) unreplicated. As a result, the computation layer that
processes clients' requests represents a single point of failure in an
otherwise robust system. For example, a bug in computing the checksum
of data or a corruption of the memory of a \rs can lead to data loss
and data unavailability in systems like HBase.

The design of Salus embodies a simple principle: all changes to
persistent state should happen with the consent of a quorum of
nodes. Salus uses these {\em compute quorums} to protect its data from
faults in its \rs{s}.

Salus implements this basic principle using {\em active storage}. In
addition to storing data, storage nodes in Salus also coordinate to
attest data and perform checks to ensure that only correct and
attested data is being replicated.
Perhaps surprisingly, in addition to improving fault-resilience, active
storage also enables us to improve performance by trading
relatively cheap CPU cycles for expensive network bandwidth.

Using active storage, Salus can provide strong availability and
durability guarantees: a data block with a quorum of size $n$ will
remain available and durable as long as no more than $n-1$ nodes
fail. These guarantees hold irrespective of whether the nodes fail by
crashing (omission) or by corrupting their disk, memory, or logical
state (commission).

Replication typically incurs network and storage overheads. Salus uses
two key ideas---(1) moving computation to data,
and (2) using unanimous consent quorums---to ensure that active
storage incurs no more network or storage cost than existing
approaches that do not replicate computation.

\paragraph{Moving computation to data to minimize network usage}


Salus implements active storage by
blurring the boundaries between the storage layer and the compute
layer. Existing storage
systems~\cite{hbase,chang06bigtable,calder11windows} require a
designated primary \dn to mediate updates. In contrast, Salus modifies the storage
system API to permit \rs{s} to directly update any replica of a
block. Using this modified interface, Salus can efficiently implement
active storage by colocating a compute node (\rs) with the storage
node (\dn) that it needs to access.

Active storage thus reduces bandwidth utilization in exchange for
additional CPU usage---an attractive
trade-off for bandwidth-starved data centers. In particular, because a
\rs can now update the colocated \dn without using the network,
the bandwidth overheads of flushing and
compaction in HBase are avoided.


\paragraph{Using unanimous consent to reduce replication overheads}

To control replication and storage overheads, we use unanimous
consent quorums for \pput{s}. Existing systems replicate data to three
nodes to ensure durability despite two permanent omission
failures. Salus provides the same durability and availability
guarantees despite two failures of either omission {\em or} commission
without increasing the number of replicas. To tolerate $f$ commission
faults with just $f+1$ replicas, Salus
requires the replicas to reach unanimous consent prior to performing
any operation that updates the state and to store a certificate
proving the legitimacy of the update.
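The safety argument behind unanimous consent can be sketched as follows. The names (\texttt{apply\_update}, the vote map) are hypothetical, and real certificates would be cryptographic signatures rather than a sorted list of replica ids.

```python
# Sketch: unanimous consent with f+1 replicas (hypothetical names).
# An update is applied only if every replica independently validates
# it. With at most f commission faults, at least one of the f+1
# voters is correct, so every applied update has been vetted by a
# correct node; a certificate (here, a stand-in for f+1 signatures)
# records that unanimous approval.

F = 2                 # commission faults to tolerate
REPLICAS = F + 1      # three-way replication, the same degree as HBase

def apply_update(update, votes):
    # votes: replica_id -> True if that replica validated the update.
    if len(votes) == REPLICAS and all(votes.values()):
        certificate = sorted(votes)   # stand-in for the f+1 signatures
        return ("applied", certificate)
    return ("rejected", None)         # liveness, not safety, suffers

assert apply_update("put#7", {0: True, 1: True, 2: True})[0] == "applied"
# A single faulty replica can block progress but cannot corrupt state:
assert apply_update("put#7", {0: True, 1: False, 2: True})[0] == "rejected"
```

The second assertion illustrates the trade-off discussed above: a faulty replica withholding its vote costs liveness (handled by replacing the \rrs), never consistency.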

Of course, the failure of any of the replicated \rs{s} can prevent unanimous
consent. To ensure liveness, Salus replaces any \rrs that is not
making adequate progress with a new set of \rs{s}, which read all
state committed by the previous \rs  quorum from the  \dn{s} and resume
processing requests. This fail-over protocol is a slight variation of
the one already present in HBase to handle failures of unreplicated
\rs{s}.


\subsubsection{Scalable end-to-end checks}

Local file systems fail in unpredictable
ways~\cite{prabhakaran05iron}.  Distributed systems like HBase are
even more complex and are therefore more prone to failures.  To
provide strong correctness guarantees, \foosys implements end-to-end
checks that (a) ensure that clients access correct and current
data and (b) do so without affecting performance:  \get{s} can be
processed at a single replica and yet retain the ability to identify
whether the returned data is correct and current. The key is that
the client maintains sufficient metadata so that it can validate
any data it reads.

Like many existing
systems~\cite{Fu02fastand,sun06zfs,mahajan10depot,li04secure}, \sys'
mechanism for end-to-end checks leverages Merkle trees to efficiently
verify the integrity of the state whose hash is at the tree's root.
Specifically, a client accessing a volume maintains a Merkle tree over
the volume's blocks, called the {\em volume tree}, which is updated on
every \pput and verified on every \get.


For robustness, \sys keeps a copy of the volume tree distributed
across the \rs{s} that host the volume so that, after a
crash, a client can rebuild its volume tree by contacting the \rs{s}
responsible for the regions in that volume.  Replicating the volume
tree at the \rs{s} also allows a client, if it so chooses, to only
store a subset of its volume tree during normal operation, fetching on
demand what it needs from the \rs{s} serving its volume.

Since a volume can span multiple \rs{s},  for scalability and
load-balancing each \rs only stores and validates a {\em region tree}
for the regions that it hosts. The region tree is a sub-tree of the
volume tree corresponding to the blocks in a given region. In
addition, to enable the client to recover the volume tree, each \rs
also stores the latest known hash for the root of the full volume
tree, together with the sequence number of the \pput request that
produced it.

\begin{figure}[tpb]
\begin{center}
    \includegraphics[width=3in]{MT}
\end{center}
\vspace{-4ex}
\caption{Merkle tree structure on client and \rs{s}}
\label{fig:merkle-tree}
\end{figure}

Figure~\ref{fig:merkle-tree} shows a volume tree and its region
trees. The client stores the top levels of the volume tree that are
not included in any region tree so that it can easily fetch the
desired region tree on demand. A client can also cache recently used
region trees for faster access.

To process a \get request for a block, the client sends the request to
any of the \rs{s} hosting that block. On receiving a response, the
client verifies it using the locally stored volume tree. If the check
fails (because of a commission failure) or if the client times out
(because of an omission failure), the client retries the \get using
another \rs. To process
a \pput, the client updates its volume tree and sends the
signed root hash of its updated volume tree along with the
\pput request to the \rrs. Attaching the root hash of the volume tree
to each \pput request enables clients to ensure that, despite
commission failures, they will be able to mount and access a
consistent volume.
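The client-side check can be sketched as follows. For brevity this sketch keeps one hash per block and hashes their concatenation for the root, rather than a full Merkle tree with region sub-trees; the class and method names are hypothetical.

```python
# Sketch of the client-side end-to-end check (one hash per block
# instead of a full Merkle tree with per-region sub-trees; names are
# hypothetical).
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class VolumeTree:
    def __init__(self, nblocks):
        self.leaves = [h(b"")] * nblocks   # one hash per block

    def root(self) -> str:
        # Stand-in for the Merkle root over the volume's blocks.
        return h("".join(self.leaves).encode())

    def put(self, i, data: bytes) -> str:
        # Update the leaf and return the new root hash, which the
        # client attaches to the pput sent to the RRS.
        self.leaves[i] = h(data)
        return self.root()

    def verify_get(self, i, data: bytes) -> bool:
        # If the returned block does not match the cached hash, the
        # client retries the get at another replica.
        return h(data) == self.leaves[i]

tree = VolumeTree(4)
new_root = tree.put(2, b"hello")
assert new_root == tree.root()
assert tree.verify_get(2, b"hello")
assert not tree.verify_get(2, b"corrupted")   # commission failure caught
```

Because every \pput carries the new root hash, a recovering client can compare the roots reported by the \rs{s} against the sequence numbers that produced them and rebuild a consistent volume tree from the region trees.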

\subsubsection{Recovery}
Salus needs to perform recovery if the client, a \rs, or a \dn fails.
We refer the reader to \cite{Wang13Salus} for the details of the recovery protocol.

\subsection{Evaluation results}

We have implemented \sys by modifying HBase~\cite{hbase} and
HDFS~\cite{Shvachko10HDFS} to add pipelined commit,
active storage, and end-to-end checks.
We measured the robustness, performance, and scalability of Salus
and compared them to those of HBase.
We present a subset of our results here; the full results can be
found in \cite{Wang13Salus}.

\paragraph{Robustness.}
As shown in Figure~\ref{graph:robustness}, a Salus client never
accepts corrupted data, no matter what happens to the servers.
Furthermore, Salus ensures liveness and \oc as long as there are no
more than two failures within any \rrs and the corresponding \dn{s}.
By contrast, HBase provides none of these guarantees.

\paragraph{Performance.} As shown in Figure~\ref{graph:maxthroughput},
when network bandwidth is plentiful, Salus' performance is comparable
to that of HBase. When we switched to an environment with limited
network bandwidth, as shown in Figure~\ref{graph:throughputandbandwidth},
Salus outperformed HBase by 74\%, since the active storage
protocol halves the network usage.

\paragraph{Scalability.} As shown in Figure~\ref{graph:scalability},
for the sequential write workload, the throughput per server
remains almost unchanged in both HBase and Salus as we move from 9 to
108 servers, meaning that for this workload both systems are perfectly
scalable up to 108 servers. For the random write workload, however,
both HBase and Salus experience a significant drop in
throughput-per-server when the number of servers grows. The culprit is
the increasing number of small I/O operations that this workload
requires as the number of servers increases. Note, however, that
the extent of \sys' slowdown with respect to HBase is virtually the same (28\%) in
both the 9-server and the 108-server experiments, meaning that \sys'
overhead does not grow with the scale of the system.



\begin{figure*}[ht]
\begin{footnotesize}
    \begin{center}
    \begin{tabular}{ | c | p{6cm} | c  c | c  c |}
    \hline
    \multirow{2}{*}{Affected nodes} & \multirow{2}{*}{Faults}		&      \multicolumn{2}{|c|}{HBase} 	& 	 \multicolumn{2}{|c|}{\foosys}               \\ \cline{3-6}
						& & \get & \pput	& 	\get & \pput	\\ \hline	

   Client	&  Crash and restart 	& Fresh & Not ordered
							& Fresh & Ordered 	 \\ \cline{1-6}
    \multirow{4}{*}{\Dn}
     &  1 or 2 permanent crashes	&   	Fresh & Ordered				
                                        &	Fresh & Ordered 			 \\ \cline{2-6}
     & Corruption of 1 or 2 replicas of log or checkpoint  &    Fresh &  Ordered				
                                                    & 	Fresh & Ordered 			 \\ \cline{2-6}
     & 3 arbitrary failures     &   Fresh*  & Lost
                                				&	Fresh* & Lost			 \\ \hline

    \multirow{4}{*}{Region server+\Dn}
    & 1 (for HBase) or 3 (for \sys) \rs permanent crashes 			&   Fresh & Ordered
                            				&  	Fresh & Ordered    			\\ \cline{2-6}
    & 1 (for HBase) or 2 (for \sys) \rs arbitrary failures that potentially affect \dn{s}
%(memory corruption, attempt incorrect
%    write to \dn, send premature delete to NameNode, drop
%    or reorder requests) 		
					&   Corrupted & Lost	
                            				&	Fresh & Ordered			 \\ \cline{2-6}
    & 3 (for \sys) region server arbitrary failures that potentially affect \dn{s}  		    &       - & -
					&	Fresh* & Lost			 \\ \hline
   \multirow{1}{*}{Client+Region server+\Dn}
    & Client crashes and restarts, 1 (for HBase) or 2 (for \sys) \rs arbitrary failures causing
	the corresponding \dn{s} to not receive a suffix of data 			&   Corrupted & Lost
                            				&  	Fresh & Ordered    			\\ \cline{2-6}
    %\cline{2-11}
    % & 1 or 2 attempt to corrupt Name node 	&       	        N & N & N	
    %     					&	Y & N & N				
    %     					&	Y & Y & (Y)
    %     					\\
\hline

    \end{tabular}
    \caption{\label{graph:robustness} Robustness to
      failures affecting the region servers within an RRS, and their
      corresponding \dn{s}. (- = not applicable, * = corresponding
      operations may not be live).  Note that a \rs failure has
      the potential to cause the failure of the corresponding \dn.}
    \vspace{-2ex}
    \end{center}\end{footnotesize}\end{figure*}

\begin{figure}[t]
\begin{footnotesize}
\begin{center}
  \begin{tabular}{| >{\centering\arraybackslash}p{50mm} | c | c |}\hline
   			&  HBase & Salus \\ \hline
   Throughput (MB/s) 	& 27 	 & 47	 \\ \hline
   Network consumption (network bytes per byte written by the client)
		 	& 5.3 	 & 2.4 	 \\ \hline
  \end{tabular}
\end{center}
\vspace{-2ex}


%\centerline{\includegraphics[angle=0, width=0.5\textwidth]{graphs/NWGraph.pdf}}
\caption{\label{graph:throughputandbandwidth} {Aggregate sequential write
throughput and network bandwidth usage  with fewer
    server machines but more disks per machine.}}
%\vspace{4ex}
\end{footnotesize}
\end{figure}



\begin{figure}
\centering
\begin{minipage}{.5\textwidth}
  \centering
  \includegraphics[width=1\linewidth]{max-throughput.pdf}
  \caption{\label{graph:maxthroughput} {Aggregate throughput of HBase and Salus}}
\end{minipage}%
\begin{minipage}{.5\textwidth}
  \centering
  \includegraphics[width=1\linewidth]{ec2-throughput_write.pdf}
  \caption{\label{graph:scalability} Scalability of HBase and Salus.}
\end{minipage}
\end{figure}


\subsection{Future work}
The current design has a limitation: a combination of
client failures, server corruptions, and network partitions can
cause Salus to lose a suffix of the updates. This problem
occurs because Salus uses $f+1$ replicas to tolerate $f$
arbitrary failures: during recovery, if the correct replica
is temporarily unavailable, the system may accept a stale
but still valid state.

We plan to eliminate the possibility of data loss by
applying the idea of separating data from metadata:
we can use $2f+1$ replication for metadata (e.g. certificates)
and $f+1$ replication for data. During recovery, the better
protected metadata can survive and be used to identify
the correct data. This allows Salus to eliminate the
possibility of data loss while still keeping the replication cost low.
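The recovery rule enabled by this split can be sketched as follows. This is an assumed design, not an implemented protocol: the function name and reply format are hypothetical, and verification of the certificates attached to the replies is elided.

```python
# Sketch of the planned metadata/data split (assumed design, not
# implemented). With 2f+1 metadata replicas, any majority intersects
# the set of correct replicas, so recovery can learn the highest
# committed sequence number and reject stale-but-valid data replicas.

F = 2  # arbitrary faults to tolerate

def latest_certified_seq(metadata_replies):
    # metadata_replies: highest certified sequence number reported by
    # each metadata replica that answered. Each reply would carry a
    # certificate; a forged higher number would fail certificate
    # verification (elided here).
    majority = F + 1                     # out of 2f+1 metadata replicas
    if len(metadata_replies) < majority:
        raise RuntimeError("not enough metadata replicas to recover")
    # Any majority contains at least one correct replica that observed
    # every committed update, so the maximum certified value is current.
    return max(metadata_replies)

# Five metadata replicas; only three answer, one of them stale:
assert latest_certified_seq([41, 42, 42]) == 42
```

Data itself stays at $f+1$ replicas; only the small certificates pay the $2f+1$ cost, which keeps the overall replication overhead low.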


