Many distributed storage systems face a tension
between robustness and scalability: stronger protection
techniques are usually more expensive and are therefore rarely deployed
at large scale.

This chapter describes the design and implementation of
\foosys,\footnote{Salus is the Roman goddess of safety and welfare.} a
scalable block store in the spirit of Amazon's Elastic Block Store
(EBS)~\cite{amazonebs}: a user can request storage space from the
service provider, mount it like a local disk, and run applications
upon it, while the service provider replicates data for durability and
availability.

What makes \sys unique is its dual focus on {\em scalability}
and {\em robustness}. Some recent systems have provided end-to-end
correctness guarantees on distributed storage despite arbitrary node
failures~\cite{mahajan10depot,castro02practical,clement09upright}, but
these systems are not scalable---they require each correct node
to process at least a majority of updates.  Conversely, scalable distributed storage
systems~\cite{anderson96serverless,lee96petal,anderson00interposed,ghemawat03google,maccormick04boxwood,chang06bigtable,calder11windows,hbase,Thekkath97Frangipani}
typically protect some subsystems like disk
storage with redundant data and checksums, but fail to
protect the entire path from client \pput to client \get, leaving them
vulnerable to single points of failure that can
cause data corruption or loss.

\sys provides strong end-to-end correctness guarantees for read
operations, strict ordering guarantees for write operations, and
strong durability and availability guarantees despite a wide range of
server failures (including memory corruption, disk corruption, and
firmware bugs). To scale these guarantees to thousands of machines
and tens of thousands of disks, \sys leverages an architecture similar to scalable
key-value stores like Bigtable~\cite{chang06bigtable} and
HBase~\cite{hbase}, but achieving this unprecedented combination of
robustness and scalability presents several challenges.

First, to build a high-performance block store
from low-performance disks, \sys must be able to write different sets
of updates to multiple
disks in parallel. Parallelism, however, can threaten the basic
consistency requirement of a block store, as ``later'' writes may
survive a crash, while ``earlier'' ones are lost.
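The hazard can be made concrete with a small sketch (hypothetical Python of our own devising, not \sys code): two ordered writes are issued to two disks in parallel, and a crash strikes after the second write is durable but before the first. The surviving state is then not a prefix of the issued order, which violates the barrier semantics a block-store client expects.

```python
# Hypothetical sketch: why naive parallel writes break barrier semantics.
# Two ordered writes go to two disks in parallel; the crash happens after
# write 2 is durable but before write 1 is. What survives is not a prefix
# of the issued order, which a block-store client cannot tolerate.

issued = [(1, "disk_a", "blockA=v1"), (2, "disk_b", "blockB=v2")]

durable = {}                       # what actually survived the crash
durable["disk_b"] = "blockB=v2"    # the "later" write made it to disk
# write 1 to disk_a was still in flight and is lost

survived_seqs = {seq for seq, disk, _ in issued if disk in durable}
is_prefix = survived_seqs == set(range(1, len(survived_seqs) + 1))
print(is_prefix)  # False: write 2 survived without write 1
```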

Second, aiming for efficiency and high availability at low cost can
have unintended consequences for robustness by introducing single
points of failure. For example, in order to maximize throughput and
availability for reads while minimizing latency and cost, scalable
storage systems execute read requests at just one replica.  If that
 replica experiences a {\em commission failure} that causes it
to generate erroneous state or output, the data returned to the client
could be incorrect.  Similarly, to reduce cost and for ease of design,
many systems that replicate their storage layer for fault tolerance
(such as HBase) leave unreplicated the computation nodes that can
modify the state of that layer: hence, a memory error or an
errant \pput at a single HBase \rs can irrevocably and undetectably
corrupt data (see Section~\ref{section:robustness}).

Third, additional robustness should ideally not result in higher
replication cost.  For example, in a perfect world \sys' ability to
tolerate commission failures would not require any more data
replication than a scalable key-value store such as HBase already
employs to ensure durability despite omission failures.

To address these challenges \sys introduces three novel ideas---pipelined
commit, active storage, and scalable end-to-end verification---based
on the principle of separating data from metadata.

{\bf Pipelined commit.}  By tracking the necessary dependency metadata,
\Foosys' new pipelined commit protocol allows
writes to be processed in parallel at multiple disks but
guarantees that, despite failures, the system will be left in a
state consistent with the ordering of writes specified by the
client.
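The core idea can be sketched as follows (a simplified Python model, with invented names; the actual protocol is described later in the chapter): writes carry sequence numbers and become durable at disks in parallel and in any order, but the commit point advances only over a gap-free durable prefix, so a crash can never expose a ``later'' write without its predecessors.

```python
# Simplified sketch of a pipelined commit (hypothetical; not Salus code).
# Disks acknowledge writes in parallel and out of order; the volume's
# commit point advances only over a contiguous durable prefix.

class PipelinedCommit:
    def __init__(self):
        self.durable = set()   # sequence numbers acknowledged by disks
        self.committed = 0     # highest seq in the committed prefix

    def ack(self, seq):
        """A disk reports that write `seq` is durable (any order)."""
        self.durable.add(seq)
        # Advance the commit point only while the prefix has no gaps.
        while self.committed + 1 in self.durable:
            self.committed += 1

    def recover(self):
        """After a crash, only the committed prefix is visible."""
        return [s for s in sorted(self.durable) if s <= self.committed]

pc = PipelinedCommit()
for seq in (3, 1):      # acks arrive out of order; write 2 was lost
    pc.ack(seq)
print(pc.recover())     # [1]: write 3 stays hidden until write 2 commits
```

If write 2 is later recovered and acknowledged, the commit point jumps to 3 and all three writes become visible, preserving the client-specified order throughout.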


{\bf Active storage.} To prevent a single node from
corrupting data or metadata, \sys replicates both the storage and the computation
layer. To reduce the cost of storage replication, \sys incorporates
the idea of Gnothi, requiring only $f+1$ data replicas to tolerate $f$
failures. To reduce the cost of replicating computation,
\sys applies an update to the system's persistent state only
if the update is agreed upon by {\em all} of the replicated
computation nodes. This approach, again, requires only $f+1$ replicas to tolerate $f$
failures. We make two observations about active
storage. First, perhaps surprisingly, replicating the computation
nodes can actually improve system performance by moving the
computation near the data (rather than vice versa), a good choice when
network bandwidth is a more limited resource than CPU cycles.  Second,
by requiring the {\em unanimous consent} of all replicas before an
update is applied, \sys comes near to its perfect world with respect
to overhead: \sys remains safe (i.e.~keeps its blocks consistent and
durable) despite two {\em commission} failures with just three-way
replication---the same degree of data replication needed by HBase to
tolerate two permanent {\em omission} failures.  The flip side, of
course, is that insisting on unanimous consent can reduce the periods
during which \sys is live (i.e.~its blocks are available)---but
liveness is easily restored by replacing the faulty set of computation
nodes with a new set that can use the storage layer to recover the
state required to resume processing requests.
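A minimal sketch of the unanimous-consent rule (hypothetical Python, with made-up names): all $f+1$ computation replicas process an update independently, and the result reaches persistent state only if every replica produces an identical digest. A single faulty replica can therefore block progress, but it can never silently corrupt the stored data.

```python
import hashlib

# Hypothetical sketch of active storage's unanimous-consent rule.
# All f+1 computation replicas process the update independently; the
# result is persisted only if every replica produces the same digest.
# Disagreement sacrifices liveness (the update is blocked and the
# replica set is replaced) rather than safety.

def digest(output_bytes):
    return hashlib.sha256(output_bytes).hexdigest()

def apply_if_unanimous(replica_outputs, storage):
    digests = {digest(out) for out in replica_outputs}
    if len(digests) == 1:
        storage.append(replica_outputs[0])   # safe to persist
        return True
    return False   # disagreement: replace the computation nodes instead

storage = []
ok = apply_if_unanimous([b"put k=v"] * 3, storage)   # all 3 replicas agree
bad = apply_if_unanimous([b"put k=v", b"put k=v", b"put k=X"], storage)
print(ok, bad)   # True False; the divergent update never reaches storage
```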



{\bf Scalable end-to-end verification.} \sys maintains sufficient
metadata---a Merkle tree~\cite{Merkle80Protocols,Blum91Checking}---for each volume so that a client can
validate that each \get request returns consistent and correct data:
if not, the client can reissue the request to another replica. Reads
can then safely proceed at a single replica without leaving clients
vulnerable to reading corrupted data; more generally, such end-to-end
assurances protect \sys clients from the opportunities for error and
corruption that can arise in complex, black-box cloud storage
solutions.  Further, \Foosys' Merkle tree, unlike those used in other systems
that support end-to-end verification~\cite{Fu02fastand,sun06zfs,mahajan10depot,li04secure}, is
scalable: each server only needs to keep the sub-tree corresponding to
its own data, and the client can rebuild and check the integrity of
the whole tree even after failing and restarting from an empty state.
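A toy version of the client-side check (hypothetical Python over a four-block tree; \Foosys' actual tree is larger and distributed across servers): the client remembers only the root hash, a read returns the block plus the sibling hashes along its root path, and the client recomputes the root to detect a corrupted reply from a single replica.

```python
import hashlib

# Toy Merkle-tree read verification (hypothetical; names are ours).
# The client keeps only the root hash; a read returns the block plus
# the sibling hashes on its path to the root, letting the client detect
# a corrupted reply from the single replica that served the read.

def h(b):
    return hashlib.sha256(b).digest()

def build_tree(blocks):                 # 4 leaves -> 2 inner -> root
    leaves = [h(b) for b in blocks]
    inner = [h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])]
    return leaves, inner, h(inner[0] + inner[1])

def read_with_proof(blocks, i):
    leaves, inner, _ = build_tree(blocks)
    return blocks[i], (leaves[i ^ 1], inner[(i // 2) ^ 1])

def verify(root, i, data, proof):
    sibling_leaf, sibling_inner = proof
    leaf = h(data)
    node = h(leaf + sibling_leaf if i % 2 == 0 else sibling_leaf + leaf)
    top = h(node + sibling_inner if (i // 2) % 2 == 0
            else sibling_inner + node)
    return top == root

blocks = [b"b0", b"b1", b"b2", b"b3"]
_, _, root = build_tree(blocks)
data, proof = read_with_proof(blocks, 2)
print(verify(root, 2, data, proof))        # True
print(verify(root, 2, b"corrupt", proof))  # False: client reissues read
```

In \sys, each server keeps only the sub-tree for its own data, so this check scales across servers while the client retains a single root of trust.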

We have prototyped \foosys by modifying the HBase key-value store.
Our evaluation confirms that \sys can tolerate servers that suffer commission
failures such as memory and disk corruption. Although one
might fear the performance price to be paid for \foosys' robustness,
\sys' overheads are low in all of our experiments. In fact, despite its
strong guarantees, \foosys often \emph{outperforms} HBase, especially
when disk bandwidth is plentiful compared to network bandwidth. For
example, \foosys' active replication allows it to halve network
bandwidth while increasing aggregate write throughput by a factor of
1.74 in a well-provisioned system.

