\chapter{Introduction}

The primary directive of storage---not to lose data---is hard to
carry out: disks can fail in unpredictable ways~\cite{Ford10Availability,Bairavasundaram07AnalysisLatent,Bairavasundaram08AnalysisDataCorruption,Jiang08DisksDominant,Pinheiro07Failures,Schroeder07DiskFailures},
and so can CPUs and memory~\cite{Schroeder09DRAM,Nightingale11Cycles}. Concerns about robustness grow even more pressing
as scalable storage systems like Google's GFS~\cite{ghemawat03google}, Bigtable~\cite{chang06bigtable},
Megastore~\cite{Baker11Megastore}, and Spanner~\cite{Corbett12Spanner}, Facebook's Haystack~\cite{Beaver10findinga}, and
Amazon's DynamoDB~\cite{Decandia07Dynamo} become more complex. For example, Google observes
one corruption for every 5.4 petabytes of data scanned in Bigtable~\cite{Dean10Evalution}.
Worse, the consequences of such corruptions are hard to predict: in the
infamous 2008 Amazon outage, a single flipped bit brought the entire Amazon S3
service down for eight hours~\cite{Amazon08Outage}.

Strong protection techniques, such as Byzantine Fault Tolerance (BFT)~\cite{castro02practical,clement09upright},
can shield the system from unexpected and uncommon errors, but are usually expensive:
the scale of today's large storage systems magnifies the cost of these strong protection techniques,
severely reducing their applicability in practice. Developers today therefore face a painful
tradeoff between robustness and scalability; in practice, they usually
choose affordable solutions that can tolerate several types of common
errors, leaving the system vulnerable to uncommon errors with large impact.
To resolve the tension between robustness and scalability, my research
explores new ways to provide modern storage systems with extremely high levels
of reliability at reasonable cost.

My approach to building robust and scalable storage systems utilizes an old
but powerful idea---\emph{separating data from metadata}---in new ways. Many
previous systems protect metadata more aggressively than data
because metadata in storage systems is usually smaller and more important than data~\cite{ExtFileSystems,Rich06ZFS,Adya02Farsite,calder11windows}.
Consequently, these systems usually offer stronger guarantees for metadata than
for data. This dissertation, however, shows that many distributed storage systems can
actually achieve strong guarantees for \emph{both} metadata \emph{and} data by
applying strong protection to metadata and minimal protection to data. This perhaps
surprising result is based on the observation that much of the cost paid by
many strong protection techniques is incurred to \emph{detect}
errors. Consider simple data replication: if the system aims only to tolerate
machine crashes, then two replicas are enough to survive one failure, because the system
can detect whether a machine has crashed by using complementary mechanisms, such as timeouts.
If the aim is to tolerate arbitrary errors, however, then there is no obvious way to detect whether a machine is
faulty, and the system needs at least three replicas to outvote a single faulty replica.
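This tradeoff reduces to standard quorum arithmetic (a textbook observation, not specific to any one system): with reliable failure detection, tolerating $f$ crashes requires only $n \geq f+1$ replicas, whereas outvoting $f$ arbitrarily faulty replicas without detecting them requires that correct replicas form a majority:
\[
n - f > f \quad\Longrightarrow\quad n \geq 2f+1.
\]
Even for $f=1$, the cost thus rises from two replicas to three.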

This observation suggests an opportunity: since the high cost of strong protection
usually comes from error detection, if we can build a low-cost oracle to detect
errors and identify correct data, it may be possible to reduce the cost of protection without
weakening its guarantees. This dissertation shows that metadata, if carefully designed, can serve as such
an oracle. My approach involves three steps: first, I design metadata so that it can be used to
validate data integrity; I then apply strong
and expensive protection techniques \emph{only}
to metadata, with little effect on scalability; finally, I use this strongly protected
metadata to identify correct data. Despite its simplicity, this approach yields
data protection with strong guarantees at minimal cost: in particular, I am able to employ
powerful fault tolerance techniques such as Paxos~\cite{Lamport98Part,Lamport01Paxos} and end-to-end
Byzantine Fault Tolerance (BFT)~\cite{castro02practical,clement09upright} at little
additional cost over weaker alternatives such as, respectively, synchronous primary backup
and piecemeal checksums. In fact, in some cases, providing strong end-to-end guarantees
opens up new optimization opportunities that allow our hardened systems to
significantly outperform the original systems on which they are based.
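As a minimal illustration of how strongly protected metadata can act as an error-detection oracle, consider the sketch below (illustrative names, not any system's actual API; a plain dictionary stands in for the Paxos- or BFT-replicated metadata service). Only a small checksum lives in the metadata, and it is used to select a correct copy from cheaply replicated data:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-in for the strongly protected metadata store; in a real system this
# would itself be replicated with Paxos or BFT.
metadata = {}

def write(key, data, replicas):
    metadata[key] = checksum(data)   # only the small checksum is strongly protected
    for replica in replicas:         # the bulk data gets cheap (f+1) replication
        replica[key] = data

def read(key, replicas):
    expected = metadata[key]
    for replica in replicas:         # metadata acts as an oracle: any copy whose
        data = replica.get(key)      # checksum matches is known to be correct
        if data is not None and checksum(data) == expected:
            return data
    raise IOError("no correct replica found for " + key)
```

Even if one replica silently corrupts its copy, \texttt{read} still returns correct data, because the protected checksum identifies which copy to trust; no voting over the bulk data is needed.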

I have applied this approach to build three different systems: Gnothi, a small-scale
storage system that can tolerate data loss and timing errors cheaply; Salus, a large-scale
block store that provides strong end-to-end guarantees for read operations,
strict ordering guarantees for write operations, and strong durability and
availability guarantees despite a wide range of server failures (including
memory corruptions, disk corruptions, firmware bugs, etc.); and Exalt, an emulator
that allows researchers to test the scalability of today's large storage systems.

\begin{itemize}

\item \emph{Gnothi: Efficient and available storage replication [Chapter 2].}
Replication is the key technique for guaranteeing data durability and availability
in storage systems, and multiple replication protocols have been proposed that
provide different guarantees at different
costs: synchronous primary backup uses $f+1$ replicas to tolerate $f$ crash
failures, but it usually employs a conservative timeout to perform accurate
failure detection, which hurts the availability of the system; asynchronous
replication (e.g., Paxos) does not rely on accurate failure detection, but it
increases the replication cost to $2f+1$ replicas. My work targets the following
question: can one write data to only $f+1$ nodes and still use a short and potentially
inaccurate timeout without risking correctness? This is well-known to be
impossible in the general case, but in storage systems, leveraging the key idea of
separating data from metadata allows me to closely approximate this goal:
by replicating metadata with Paxos and using metadata to identify correct
data during failure and recovery, I show that it is sufficient to replicate data on only
$f+1$ nodes. I have built a small-scale storage system, Gnothi, based on
this insight.

\item \emph{Salus: A robust and scalable block store [Chapter 3].}
Salus provides functionality similar to that of Amazon's popular
Elastic Block Store (EBS), but with unprecedented guarantees in
terms of consistency, availability, and durability in the face
of a wide range of server failures (including memory corruptions,
disk corruptions, CPU errors, etc.).

Existing scalable storage systems usually give up certain robustness
properties for scalability, but Salus demonstrates that such trade-offs
may not be necessary. For example, scalable systems shard data and write
to different shards in parallel to achieve scalability. This approach,
however, does not provide ordering guarantees between writes, and
such guarantees are essential to the correctness of certain applications,
e.g., a block store. Salus addresses this problem by separating data
transfer from metadata transfer: data is processed in parallel, while
metadata, which carries information about which data can be committed,
is processed sequentially. If failures occur, Salus utilizes metadata
to identify data that can be committed. Salus addresses a second key challenge:
large-scale storage systems are usually composed of multiple layers, with data replication
performed at the lowest layer. In such systems, using approaches similar to
Gnothi to enhance the robustness of the replication
layer is not enough, since middle layers are not replicated and can
become single points of failure. Salus shows that replicating such middle
layers can improve not only the robustness of the system, but also its
efficiency when disk bandwidth exceeds network bandwidth.

\item \emph{Exalt: An emulator for evaluating large-scale storage systems on small-to-medium infrastructures [Chapter 4].}
A basic tenet of sound systems research is to validate a design by implementing
a prototype and running experiments on it. Abiding by this precept when designing
highly scalable storage systems, however, is prohibitively hard: for example, Salus targets systems with
thousands of machines and tens of thousands of disks, but the largest affordable
experimental infrastructure I could use to validate my design
included only 200 machines. The lack of large testbeds presents a fundamental
challenge to almost all researchers working on large-scale systems: even industrial
researchers with access to clusters of the necessary size may not
be able to reserve them for large-scale experiments, since these clusters are a
primary source of revenue.

To solve this problem, I have designed an emulator, Exalt, that uses data
compression to reduce by two orders of magnitude the number of physical
machines needed to validate a storage system of a given size.
To achieve efficient compression, I leverage the observation that
the behavior of storage systems often does not depend on the actual data
being stored: this insight is at the core of Tardis, a new synthetic data format that allows applications
to quickly separate data from metadata and achieve high rates of data compression.

By applying Exalt to existing large-scale storage systems, I improve
the scalability of a mature storage system by an order of magnitude compared
to its default configuration and unearth several performance issues that are
not observable at small scale.
\end{itemize}
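To make the compression opportunity behind Exalt concrete, the following sketch (illustrative, not the actual Tardis format) builds a synthetic block whose payload is regular precisely because the system's behavior does not depend on the stored content; a stock compressor then shrinks it by well over two orders of magnitude:

```python
import zlib

def make_synthetic_block(block_id, size):
    # Hypothetical Tardis-style block: a tiny identifying header followed by a
    # highly regular payload, since system behavior ignores the content itself.
    header = ("BLOCK:%d:" % block_id).encode()
    return header + b"\x00" * (size - len(header))

block = make_synthetic_block(42, 1 << 20)   # one 1 MiB synthetic block
compressed = zlib.compress(block)
ratio = len(block) // len(compressed)       # megabyte shrinks to roughly a kilobyte
```

The point, of course, is not the compressor itself: it is that synthetic data can be designed so that the emulated system behaves as it would with real data, while its storage and memory footprint all but disappears.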

Gnothi~\cite{Wang12Gnothi}, Salus~\cite{yang13salus},
and Exalt~\cite{Wang14Exalt} have each been the subject of
conference publications: this
dissertation not only expands on the original papers,
but also improves Salus and Exalt in both design and evaluation.
