\section{Related work}

Merge the related work sections of Gnothi, Salus, and Exalt.

\iffalse
\paragraph{Scalable and consistent storage.}
Many existing systems provide the abstraction of scalable distributed
storage~\cite{hbase,calder11windows,chang06bigtable,Corbett12Spanner,Lim11Silt,Nightingale12FDS} with
strong consistency. Unfortunately, these systems do not tolerate
arbitrary node failures. While they use checksums to
safeguard data written to disk, a memory corruption or a software
glitch can still cause data loss. In contrast, our system is designed to be
robust (safe and live) even if nodes fail arbitrarily.


\paragraph{Protections in local storage systems.}
Disks and storage subsystems can fail in various
ways~\cite{Bairavasundaram08AnalysisDataCorruption,Bairavasundaram07AnalysisLatent,Ford10Availability,Jiang08DisksDominant,Pinheiro07Failures,Schroeder07DiskFailures},
as can memories and
CPUs~\cite{Nightingale11Cycles,Schroeder09DRAM}, with disastrous
consequences~\cite{prabhakaran05iron}. Unfortunately, end-to-end
protection mechanisms developed for local storage
systems~\cite{prabhakaran05iron,sun06zfs} are inadequate for
protecting the full path from a PUT to a GET in complex systems
like HBase.



\paragraph{End-to-end checks.}
ZFS~\cite{sun06zfs} incorporates an on-disk Merkle tree
to protect the file system from disk corruptions. SFSRO~\cite{Fu02fastand},
SUNDR~\cite{li04secure}, and Depot~\cite{mahajan10depot} also use
end-to-end checks to guard against faulty servers. However, none of
these systems is designed to scale to thousands of machines: to support
multiple clients sharing a volume, they depend on a single server to
update the Merkle tree. Our system instead targets a single client per
volume, so it can rely on the client to update the Merkle tree and keep
the server side scalable. We do not claim this as a major novelty of our
system; we see it as an example of how different goals lead to different
designs. We know of no end-to-end verification technique that is
scalable on both the client and the server side.

\paragraph{BFT systems.}
Some distributed systems tolerate arbitrary faults
(Depot~\cite{mahajan10depot}, SPORC~\cite{feldman10sporc},
SUNDR~\cite{li04secure}, BFT
RSMs~\cite{castro02practical,clement09upright}), but they require a
correct node to observe all writes to a given volume, which prevents a
volume from scaling with the number of nodes.

\paragraph{Evaluating large-scale systems.}
David, DieCast, Extrapolating, Stubbing
\fi
