\chapter{Related Work}

\section{Separating data and metadata}

Separating data and metadata is an old but effective idea that multiple
systems have adopted for different goals. This section describes only
how it is used in storage systems to provide better robustness.

Many storage systems apply stronger protection to metadata and weaker
protection to data because
damaged metadata can cause any data, or even the
whole storage system, to become unavailable or corrupted, whereas the effect
of damaged data is usually contained within the
data item (e.g., a file) itself. Therefore, local file systems such
as the EXT series~\cite{ExtFileSystems} and ZFS~\cite{Rich06ZFS} keep multiple copies of the superblock and
inodes on disk while keeping fewer copies of data; distributed storage
systems such as Farsite~\cite{Adya02Farsite} and Windows Azure Storage~\cite{calder11windows} apply strong replication
(BFT and Paxos, respectively) to their namespace metadata while using
primary-backup replication to minimize the cost of protecting data. These systems thus
achieve stronger guarantees on metadata than on data.
My dissertation, however, shows that with properly designed metadata,
data too can be protected with strong guarantees at little
additional cost.

Paris et al.~\cite{Paris91Voting} reduce the storage overhead of
voting by using volatile witnesses.  Yin et al.~\cite{Yin03Separating}
separate agreement from execution to reduce the number of execution
nodes required for Byzantine replication, showing that while $3f+1$
nodes are still required for agreement, $2f+1$ nodes suffice for
execution; Clement et
al.~\cite{clement09upright} refine these techniques. Gnothi and Salus
both adopt similar ideas with further improvements: Gnothi shows that
the replication cost of execution in the
failure-free case can be further reduced to approximately $f+1$ by performing partial
replication of data and full replication of metadata; Salus' active
storage protocol shows that the replication cost of agreement can also be reduced to
$f+1$ under certain conditions (e.g., a single writer per volume).

%Several scalable cluster file systems~\cite{Adya02Farsite, anderson96serverless, ghemawat03google, Shvachko10HDFS} are architected to
%separate data and metadata to allow one or more metadata managers to coordinate access
%to large numbers of storage servers by tracing where each object is stored. {\ourSystem}, instead, focuses on scaling Paxos-based
%replication and providing strong consistency (linearizability) for arbitrary read/write workloads, and therefore maintains
%different types of metadata (block versions and requestID rather than mappings of objects/locations).

\section{Robustness techniques}

Three techniques are commonly used to protect a storage system
against failures: replication, end-to-end checks, and
erasure coding.

\subsection{Replication}

\paragraph{Tolerating omission failures}
Replication techniques used to tolerate omission failures
can be classified as either synchronous or asynchronous.

In synchronous replication~\cite{Bressoud96Hypervisor,Budhiraja92Primary,Cully08Remus},
a primary replica provides the
service to clients; if the primary replica fails, a backup
replica takes over and continues to provide the service. It takes $f+1$
replicas to tolerate $f$ crash failures.  Synchronous primary-backup
replication has three main disadvantages~\cite{Bolosky11Paxos}: 1)
its correctness is not guaranteed when there are timing errors caused
by network partitions or server overload, since these faults can
cause replicas to diverge; 2) to minimize correctness issues, the
system must be configured with conservative timeouts, which can hurt
availability; 3) read throughput is limited by the capacity of a
single machine, since only the primary replica processes requests.

Asynchronous replication does not assume an upper bound on network latency or node response time,
and hence can ensure correctness even in the face of relatively rare events like server overload,
network overload, or network partitions.
The traditional approach to asynchronous replication involves a Replicated State Machine (RSM),
in which a consensus protocol guarantees that each correct replica receives the same sequence of requests and
in which each replica is a deterministic state machine.

Paxos~\cite{Lamport98Part,Lamport01Paxos} is representative of the
asynchronous RSM approach, which requires $2f+1$ replicas to tolerate
$f$ crash failures. Paxos guarantees safety (all correct replicas
receive the same sequence of requests) at all times and guarantees
liveness (the system can make progress) when the network is available
and node actions and message delivery are timely. Paxos uses timeouts
internally, but it does not depend on their accuracy for
safety and can adjust timeouts dynamically for liveness.
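As an illustration of why $2f+1$ replicas suffice, the safety argument rests on the fact that any two majority quorums intersect, so two conflicting decisions cannot both gather a quorum. The following Python sketch (an illustration only, not part of any cited system) checks this property exhaustively for small $f$:

```python
from itertools import combinations

def majority(n: int) -> int:
    """Smallest quorum size such that any two quorums must intersect."""
    return n // 2 + 1

def quorums_intersect(f: int) -> bool:
    """With n = 2f+1 replicas and majority quorums, every pair of
    quorums shares at least one replica -- the core of the safety
    argument; liveness holds because n - f = f+1 replicas (a full
    majority) remain after f crashes."""
    n = 2 * f + 1
    q = majority(n)
    replicas = range(n)
    return all(set(a) & set(b)            # empty intersection is falsy
               for a in combinations(replicas, q)
               for b in combinations(replicas, q))

# The intersection property holds for every small f we try.
assert all(quorums_intersect(f) for f in range(1, 5))
```

The same style of argument, with larger quorums, underlies the $3f+1$ bound for the Byzantine protocols discussed below.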

The standard Paxos protocol executes every request on each of the
$2f+1$ replicas, with costs (in bandwidth, storage space, etc.) higher
than those of synchronous replication. Much work has been done to reduce the
cost of Paxos: Gaios does not log reads, executes them on only one
replica, and nonetheless guarantees linearizability by adding new
messages to the original Paxos
protocol~\cite{Bolosky11Paxos}. ZooKeeper~\cite{Hunt10ZooKeeper}
includes a fast read protocol that executes on a single replica, but
it does not provide Paxos's linearizability guarantee.

On-demand instantiation (ODI)~\cite{Lamport04Cheap} reduces write costs by executing
requests on a preferred quorum of $f+1$ replicas. If one of the active replicas fails, a backup replica is activated, but before it can start processing any
request it must be initialized by fetching the current value of all replicated state. In storage systems with
large amounts of data, this approach does not scale: the system can be unavailable for hours while it
transfers terabytes of data.

Falcon~\cite{Leners11Falcon} uses an accurate failure detector to eliminate the need for
asynchronous replication, but it relies on the availability of all network switches, an assumption that may not always hold
in today's datacenters: a rack of machines, together with its switch, may be turned off
for maintenance or fail unexpectedly, and large-scale storage systems should be designed
to remain available in this case~\cite{ghemawat03google,Shvachko10HDFS,Muralidhar14f4,Ford10Availability}.
Indeed, the authors of Falcon explicitly acknowledge the significant technical challenge
involved in network failure localization~\cite{Leners11Falcon}.



\paragraph{Tolerating arbitrary failures}
Byzantine Fault Tolerance (BFT) replication is the standard
technique for tolerating arbitrary failures. It, too, can be
classified as either synchronous or asynchronous, and the
relationship between the two variants mirrors that between the
crash-tolerant protocols described above.

Synchronous BFT systems~\cite{Bazzi00Synchronous,Lamport82Byzantine} require $3f+1$ replicas to tolerate $f$ arbitrary failures, while
asynchronous BFT systems~\cite{castro02practical,clement09upright,Kotla07Zyzzyva} also require
$3f+1$ replicas ($2f+1$ for execution) but, like Paxos, can guarantee liveness only during synchronous
intervals. Several BFT systems~\cite{Cowling06Hq,Malek05Fault} use additional replicas to optimize
latency or throughput.

On-demand instantiation (ODI) has also been applied to BFT replication~\cite{Wood11ZZ}, but
it suffers from the same problem: if a replica fails, the system is unavailable for
a long time while it waits for the data copy to complete.
Distler et al.~\cite{Distler11Increasing} propose to alleviate this problem by replaying a per-object log on demand,
but again this approach is not appropriate for replicating applications with large amounts of state, because its logs
and snapshots are kept on a per-object basis; to reduce overhead, per-object garbage collection
is performed infrequently, once every 100 updates, which means that the system stores 100 copies of each object
at each replica.

\subsection{End-to-end checks}
ZFS~\cite{sun06zfs} incorporates an on-disk Merkle tree
to protect the file system from disk corruption. SFSRO~\cite{Fu02fastand},
SUNDR~\cite{li04secure}, Depot~\cite{mahajan10depot}, and Iris~\cite{Stefanov12Iris} also use
end-to-end checks to guard against faulty servers. However, none of
these systems is designed to scale to thousands of machines because, to support multiple clients sharing a
volume, they depend on a single server to update the Merkle tree. Salus,
instead, is designed for a single client per volume, so it can rely on
the client to update the Merkle tree and keep the server side
scalable. We do not claim this to be a major novelty of Salus; we see
it as an example of how different goals lead to different
designs.
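To make the mechanism concrete, the following Python sketch (a simplification; none of the cited systems uses exactly this code) shows how a client that remembers only a Merkle root can detect corruption introduced by a faulty server:

```python
import hashlib

def h(b: bytes) -> bytes:
    """SHA-256 hash, used for both leaves and internal nodes."""
    return hashlib.sha256(b).digest()

def merkle_root(blocks) -> bytes:
    """Root hash over a list of data blocks: leaf = hash of block,
    parent = hash of the concatenation of its two children."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# The client stores only the root. If a faulty server corrupts any
# block, the root recomputed on read no longer matches and the
# corruption is detected end to end.
blocks = [b"block0", b"block1", b"block2"]
root = merkle_root(blocks)
corrupted = [b"block0", b"blockX", b"block2"]
assert merkle_root(corrupted) != root
```

In a real system the client updates the tree incrementally on writes and verifies a logarithmic-length path on reads, rather than rehashing the whole volume.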

We are not aware of any end-to-end verification technique that supports multiple
writers while achieving strong consistency, scalability, and end-to-end verification of read requests.
One could tune Salus to support multiple writers either by using a single
server to serialize requests to a volume, as in SUNDR~\cite{li04secure}, which of
course hurts scalability, or by adopting weaker consistency models such as
Fork-Join-Causal~\cite{mahajan10depot} or fork*~\cite{feldman10sporc}.

End-to-end checks alone provide safety guarantees but no
durability or availability guarantees: they can detect an error,
but they do not say how to recover from it. That is why, in distributed
systems, end-to-end checks are often combined with replication
to provide all the desired properties; Salus adopts the same principle.

\subsection{Erasure coding}
Erasure coding~\cite{HDFS-RAID,Huang12Erasure,Dimakis-Regenerating} is another popular technique for
protecting data in distributed systems. It splits the raw data into multiple
blocks and codes them into a new set of blocks such that, as long as a
certain number of the new blocks survive the failures, the raw data can be reconstructed.
Compared to replication, erasure coding makes different tradeoffs: first, erasure
coding uses more CPU resources but usually requires less storage space to provide the
same durability guarantee; second, erasure coding usually requires more network bandwidth
to recover a lost block, since many blocks must be read to reconstruct the lost
block; third, replication can usually provide better read throughput, because
reads can be directed to any of the replicas, while erasure coding lacks
this advantage. These tradeoffs make erasure coding an attractive option
for cold data~\cite{cold_data1,cold_data2}---data that is written once and rarely accessed afterward---for
which space is more of a concern than performance. This dissertation
mainly focuses on hot replicated data, but several of the techniques we discuss,
such as Salus' pipelined commit protocol and end-to-end verification, do not depend
on how data is protected and thus should work with storage systems that use erasure
coding.
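To illustrate these tradeoffs, the following Python sketch implements the simplest possible erasure code, a single XOR parity block over equal-sized data blocks (far simpler than the codes cited above). Note how recovery must read every surviving block, which is the source of the extra recovery bandwidth relative to replication:

```python
def encode(data_blocks):
    """Append one XOR parity block; any single lost block becomes
    recoverable. Assumes all blocks have the same length."""
    parity = bytes(len(data_blocks[0]))
    for b in data_blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return data_blocks + [parity]

def reconstruct(surviving, block_len):
    """Recover the single missing block by XOR-ing *all* surviving
    blocks -- every survivor must be read, unlike replication, where
    one intact copy suffices."""
    out = bytes(block_len)
    for b in surviving:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

data = [b"aaaa", b"bbbb", b"cccc"]
coded = encode(data)                       # 3 data blocks + 1 parity
surviving = coded[:1] + coded[2:]          # block 1 is lost
assert reconstruct(surviving, 4) == b"bbbb"
```

Production codes such as Reed-Solomon generalize this idea to tolerate multiple simultaneous losses at a configurable storage overhead.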

%\paragraph{Protections in local storage systems}
%Disks and storage sub systems can fail in various
%ways~\cite{Ford10Availability,Bairavasundaram07AnalysisLatent,Bairavasundaram08AnalysisDataCorruption,Jiang08DisksDominant,Pinheiro07Failures,Schroeder07DiskFailures},
%are so can memories and
%CPUs~\cite{Schroeder09DRAM,Nightingale11Cycles} with disastrous
%consequences~\cite{prabhakaran05iron}. Unfortunately, end-to-end
%protection mechanisms developed for local storage
%systems~\cite{sun06zfs,prabhakaran05iron} are inadequate for
%protecting the full path from a \pput to a \get in complex systems
%like HBase.

%\paragraph{BFT systems.}
%While some distributed systems tolerate arbitrary faults
%(Depot~\cite{mahajan10depot}, SPORC~\cite{feldman10sporc},
%SUNDR~\cite{li04secure}, BFT
%RSM), they require a correct
%node to observe all writes to a given volume, preventing a volume from
%scaling with the number of nodes.

\section{Scalability techniques}

\subsection{Scalable and consistent storage}
Sharding---breaking a storage space into multiple units (e.g., blocks,
files, or key-value pairs) and assigning these units to different
servers---is used in almost all large storage systems to scale them
to thousands of servers and beyond. One of the key challenges is
maintaining membership: tracking which servers each data unit has been assigned to.

Many large-scale storage systems~\cite{hbase,chang06bigtable,calder11windows,Lim11Silt,ghemawat03google,Adya02Farsite}
rely on a single (possibly replicated) metadata server to keep membership information. This approach
is easy to design and implement, but the single metadata server can become
a scalability bottleneck. At the other extreme, systems
like Cassandra~\cite{lakshman09cassandra} rely on a distributed hash table (DHT) to maintain membership,
eliminating the single bottleneck but suffering from a different problem: the
hash table can become inconsistent when a large number of nodes join or leave. In the middle
ground, several systems~\cite{anderson96serverless,Weil06Ceph,Fikes10Storage} use a group of metadata servers, instead of only one, to
maintain membership, trading off complexity against scalability.
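As an illustration of DHT-style membership, the following Python sketch (a simplified consistent-hash ring, not Cassandra's actual implementation; all names are hypothetical) shows how placement can be computed locally, without consulting a central metadata server:

```python
import bisect
import hashlib

def _pos(key: str) -> int:
    """Position of a key (or server name) on the hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """A key is assigned to the first server at or after its hash
    position, wrapping around the ring. Adding or removing a server
    only remaps keys on the adjacent arc, so membership changes stay
    local -- but concurrent joins and leaves can leave different
    nodes with inconsistent views of the ring."""
    def __init__(self, servers):
        self.ring = sorted((_pos(s), s) for s in servers)

    def lookup(self, key: str) -> str:
        i = bisect.bisect(self.ring, (_pos(key), ""))
        return self.ring[i % len(self.ring)][1]

ring = HashRing(["server-a", "server-b", "server-c"])
owner = ring.lookup("volume-42/block-7")   # deterministic, no central lookup
assert owner in {"server-a", "server-b", "server-c"}
```

Real deployments add virtual nodes per server to balance load, and gossip protocols to propagate ring changes.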

Few such systems are designed to tolerate arbitrary node failures:
although they usually use checksums to
safeguard data written to disk, a memory corruption or a software
glitch can still lead to data loss
(Section~\ref{section:robustness}).  In contrast, Salus is designed to be
robust (safe and live) even if nodes fail arbitrarily.

\subsection{Evaluating scalability}
As mentioned earlier, two common approaches to evaluating the
scalability of large storage systems are extrapolation and
stub components. For example, extrapolation is used in
RAMCloud~\cite{Ongaro11RAMClound}, Spanner~\cite{Corbett12Spanner},
and Salus~\cite{yang13salus}, among others, while the stub approach is used in
HDFS~\cite{Shvachko10HDFS,HDFSScalability}.
Section~\ref{sec-motivation} discusses these approaches in detail, so
we do not discuss them further here.

Several tools have been proposed to address the gap between the size
of the experiments that researchers would like to run and the resources
available to them.

DieCast~\cite{Gupta08DieCast} addresses this experimental gap
using time dilation~\cite{Gupta06Infinity}: it runs multiple
processes inside virtual machines on a single host, slows down each
process by a constant factor, and compensates for the slow-down by
multiplying the measured throughput by the same factor. DieCast can
achieve some degree of colocation when CPU utilization is the
bottleneck, but it does nothing to reduce the large amount of disk
space necessary to evaluate large-scale storage systems.

The system that comes closest to addressing the experimental gap for
storage systems is David~\cite{Agrawal11Emulating}. David leverages
the observation that evaluating a local file system does not
require storing the actual data. Thus, David stores only the file
system's metadata: the data is simply discarded. This technique allows
David to evaluate local file systems much larger than
the local disk on which they run. Unfortunately, this approach
cannot be easily applied to distributed storage services. For example,
when users write a key-value pair to HBase, the region server adds a
timestamp and a region identifier to the write request and stores this
metadata, together with the users' data, on the local file system of
an HDFS DataNode. Since data and metadata look indistinguishable to
the HDFS layer, David would discard metadata
critical for the correct operation of the system.

Memulator~\cite{Griffin02Timing} emulates nonexistent storage
components by storing data in memory and accurately predicting how
long each operation takes.  Its purpose is to test the behavior of a
system on devices that the researchers do not have access to. Unlike
\sys, it does not reduce resource usage, which makes it
inapplicable to our goal.

Finally, simulation is used by several systems to evaluate the
performance of large-scale deployments. The approaches range from
disk simulation~\cite{DiskSim} and network simulation~\cite{ns2,NetworkEmulation}
to simulation of scheduling and checkpointing in large platforms~\cite{Wang13SimMatrix,Zhao13Exploring}.
A well-known drawback of simulation is that its results are only as good as its
model of how the system works. Unfortunately,
as systems grow in complexity, coming up with a model that
accurately captures all their features becomes prohibitively hard.

There exist several compression
algorithms~\cite{Ziv77Universal,Ziv78Compression,Cleary84Data,Moffat90Implementing,Manning97Identifying}
that one might consider using in our context.  However, all of these algorithms
are designed to be general-purpose and as such must scan all
the input bytes. \Scheme, on the other hand, owes its efficiency
largely to the fact that it does not have to scan most of the input
bytes.



