\section{Related work}

\paragraph{Synchronous and asynchronous replication.}

In synchronous replication~\cite{Bressoud96Hypervisor,Budhiraja92Primary,Cully08Remus}, a primary replica provides the
service to clients; if the primary fails, a backup
replica takes over and continues to provide service. This approach requires $f+1$
replicas to tolerate $f$ crash failures. There are three main
disadvantages to synchronous primary-backup replication~\cite{Bolosky11Paxos}: 1)
its correctness is not guaranteed when there are timing errors caused
by network partitions or server overloading, since these faults can
cause replicas to diverge; 2) to minimize correctness issues, the
system must be configured with conservative timeouts that can hurt
availability; 3) read throughput is limited by the capability of a
single machine, since only the primary replica processes requests.


Asynchronous replication does not assume an upper bound on network
latency or node response time, and hence can ensure correctness even
in the face of relatively rare events like server overload, network overload, or network partitions.
The traditional approach to asynchronous replication is the replicated state machine (RSM):
each replica is a deterministic state machine, and a consensus protocol guarantees
that every correct replica receives the same sequence of requests.


Paxos~\cite{Lamport98Part,Lamport01Paxos} is representative of the
asynchronous RSM approach, which requires $2f+1$ replicas to tolerate
$f$ crash failures. Paxos guarantees safety (all correct replicas
receive the same sequence of requests) at all times and guarantees
liveness (the system can make progress) when the network is available
and node actions and message delivery are timely. Paxos uses timeouts
internally, but it does not depend on their accuracy for
safety and can adjust timeouts dynamically for liveness.


The standard Paxos protocol executes every request on each of the
$2f+1$ replicas, with costs (in bandwidth, storage space, etc.) higher
than synchronous replication. Much work has been done to reduce the
cost of Paxos: Gaios~\cite{Bolosky11Paxos} does not log reads and executes them on only one
replica, yet guarantees linearizability by adding new
messages to the original Paxos
protocol. ZooKeeper~\cite{Hunt10ZooKeeper}
includes a fast read protocol that executes reads on a single replica, but
it does not provide Paxos's linearizability guarantee.

On-demand instantiation (ODI)~\cite{Lamport04Cheap,Wood11ZZ} reduces write costs by executing
requests on a preferred quorum of $f+1$ replicas. If one of the active replicas fails, a backup replica is activated, but before it can start processing any
request it must be initialized by fetching the current value of all replicated state. In storage systems with
large amounts of data, this approach does not scale: the system can be unavailable for hours while it
transfers terabytes of data.
Distler et al.~\cite{Distler11Increasing} propose to alleviate this problem by replaying a per-object log on demand,
but this approach is also inappropriate for replicating applications with large amounts of state, because its logs
and snapshots are maintained on a per-object basis: to reduce overhead, per-object garbage collection
is performed infrequently (once every 100 updates), which means that each replica can store up to 100 copies
of each object.
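The unavailability window of ODI is dominated by state transfer to the freshly activated backup. A back-of-the-envelope calculation with hypothetical numbers (the state size and link speed below are assumptions, not figures from the cited work):

```python
TB = 10 ** 12                  # bytes in a terabyte

state_size = 2 * TB            # replicated state the backup must fetch
link_bandwidth = 1e9 / 8       # 1 Gb/s link, expressed in bytes/second

transfer_seconds = state_size / link_bandwidth
transfer_hours = transfer_seconds / 3600

# Initializing a backup with 2 TB of state over a 1 Gb/s link takes
# roughly 4.4 hours, during which the system cannot process requests.
print(round(transfer_hours, 1))   # -> 4.4
```

Even generous bandwidth assumptions leave a multi-hour outage, which is why per-object replay or incremental recovery becomes attractive despite its own overheads.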

\paragraph{Scalable storage.}
Many existing systems provide the abstraction of scalable distributed
storage~\cite{hbase,calder11windows,chang06bigtable,Corbett12Spanner,Lim11Silt,Nightingale12FDS} with
strong consistency. Unfortunately, these systems do not tolerate
arbitrary node failures: while they use checksums to
safeguard data written to disk, a memory corruption or a software
glitch can still lead to data loss. In contrast, Salus is designed to be
robust (safe and live) even if nodes fail arbitrarily.

These scalable storage systems separate data from metadata,
allowing one or more metadata managers to coordinate access
to large numbers of storage servers by tracking where each object is stored.
Salus inherits this design to achieve its scalability, while also applying
the idea of ``separating data and metadata'' in its data-processing protocols
to achieve its robustness guarantees.

\paragraph{Robust storage.}
Disks and storage subsystems can fail in various
ways~\cite{Bairavasundaram08AnalysisDataCorruption,Bairavasundaram07AnalysisLatent,Ford10Availability,Jiang08DisksDominant,Pinheiro07Failures,Schroeder07DiskFailures},
as can memories and
CPUs~\cite{Nightingale11Cycles,Schroeder09DRAM}, with disastrous
consequences~\cite{prabhakaran05iron}. Unfortunately, end-to-end
protection mechanisms developed for local storage
systems~\cite{prabhakaran05iron,sun06zfs} are inadequate for
protecting the full path from a PUT to a GET in complex systems
like HBase.

End-to-end verification has been incorporated into several systems to
ensure that, no matter what happens to the disks or servers, users
never read incorrect data.
ZFS~\cite{sun06zfs} incorporates an on-disk Merkle tree
to protect the file system from disk corruptions. SFSRO~\cite{Fu02fastand},
SUNDR~\cite{li04secure} and Depot~\cite{mahajan10depot} also use
end-to-end checks to guard against faulty servers. However, none of
these systems is designed to scale to thousands of machines, because,
to support multiple clients sharing a
volume, they depend on a single server to update the Merkle tree.
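The core of such end-to-end checks can be sketched with a small Merkle tree: the client keeps only the root hash and can detect any corruption in data returned by an untrusted server. This is an illustrative sketch, not code from the cited systems, and it assumes a power-of-two number of blocks for brevity:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root hash over a list of data blocks (power-of-two count assumed)."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)   # the only state the client must retain

# A faulty server that corrupts any block changes the root, so the
# client detects the corruption end to end without trusting the server.
tampered = [b"block0", b"blockX", b"block2", b"block3"]
assert merkle_root(tampered) != root
```

The scalability limitation noted above follows from this structure: when multiple clients share a volume, every write must funnel through whichever single node holds and updates the current root.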

\iffalse
; our system
is instead designed for a single client per volume, so it can rely on
the client to update the Merkle tree and make the server side
scalable. We do not claim this to be a major novelty of our system; we see
this as an example of how different goals lead to different
designs. We know of no  end-to-end verification technique
scalable at both the client and server side.
\fi

While some distributed systems tolerate arbitrary faults
(Depot~\cite{mahajan10depot}, SPORC~\cite{feldman10sporc},
SUNDR~\cite{li04secure}, BFT
RSM~\cite{castro02practical,clement09upright}), they require a correct
node to observe all writes to a given volume, preventing a volume from
scaling with the number of nodes.

\paragraph{Evaluating large-scale systems.}
Several tools have been proposed to address the gap between
experimental needs and available resources.

DieCast~\cite{Gupta08DieCast} addresses this gap using time
dilation~\cite{Gupta06Infinity}: it runs multiple
processes inside virtual machines on a single host and slows down each
process by a constant factor, compensating for this slowdown by
multiplying the measured throughput by the same factor. DieCast can
achieve some degree of colocation when the CPU utilization is the
bottleneck, but does nothing to reduce the large amount of disk
space necessary to evaluate large-scale storage systems.
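The compensation step can be sketched in a few lines; the dilation factor and request counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
TDF = 10                    # time-dilation factor: each guest process
                            # runs at 1/TDF of real speed

requests_completed = 5_000  # requests observed during the experiment
real_seconds = 100          # wall-clock duration of the run

# Raw throughput is measured against wall-clock time, but the dilated
# guest accomplished in real_seconds what an undilated system would
# accomplish in real_seconds / TDF, so throughput is scaled up by TDF.
measured_throughput = requests_completed / real_seconds   # 50 req/s
estimated_throughput = measured_throughput * TDF          # 500 req/s
print(estimated_throughput)   # -> 500.0
```

Note that this scaling recovers CPU- and network-bound behavior, but it does nothing about the storage footprint: every emulated node still writes its full data to disk.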

The system that comes closest to addressing the experimental gap for
storage systems is David~\cite{Agrawal11Emulating}. David leverages
the observation that evaluating a local file system does not require
storing the actual data. Thus, David stores only the file system
metadata: the data is simply discarded. This technique allows David to evaluate
local file systems of much larger size than that of the local disk on
which they are run. Unfortunately, this approach cannot be easily
applied to distributed storage services. For example, when a user writes
a key-value pair to HBase, the HBase region server adds a
timestamp and a region identifier to the write request and stores this
metadata, together with the user's data, on the local file system of
an HDFS datanode. Since data and metadata are indistinguishable at the
HDFS layer, discarding file contents would also discard metadata
that is critical for the correct operation of the system.

Memulator~\cite{Griffin02Timing} emulates nonexistent storage components by storing
data in memory and accurately predicting the time each operation takes.
Its purpose is to test the behavior of a system on devices that
researchers do not have access to. Unlike Exalt, it does not reduce
resource usage, which makes it inapplicable to our goal. However, the
time-prediction techniques used in Memulator could be incorporated into
Exalt to better estimate the performance of emulated resources.

Finally, simulation is a technique used by several systems to evaluate the
performance of large-scale deployments. The approaches vary from
disk simulation~\cite{DiskSim}, network simulation~\cite{ns2},
to simulation of large-scale P2P systems~\cite{ExascaleSimulation}.
However, simulation has a well-known drawback: researchers
cannot run their systems directly on the simulator. Instead, they must
come up with an accurate model of how the system works
and then use the simulator to predict the system's behavior. Unfortunately,
as systems grow in size and complexity, devising a model that
encompasses all aspects of the system becomes prohibitively hard.
In practice, simulation is often used together with emulation: in fact, DieCast,
David, and Memulator all use DiskSim to predict disk latency.



