
\sys is a library that gives researchers back the
ability to verify the scalability claims of today's large storage
systems---claims that, ironically, have become hard to corroborate
precisely because of those systems' scale.

The advent of Big Data has strained the scalability of traditional
storage systems, and several new architectures have been proposed to
respond to this
challenge~\cite{lakshman09cassandra,ghemawat03google,chang06bigtable,calder11windows,hbase,Shvachko10HDFS,Corbett12Spanner,Nightingale12FDS}
by supporting up to hundreds of petabytes of storage and tens of
thousands of storage nodes.
Testing systems at such scale, however, requires access to tens of thousands
of machines and at least as many disks, and few researchers have
access to resources that plentiful: the rest of us have to design
systems that are supposed to operate at a scale much larger than the
infrastructure available to test them. Nor do such resource
limitations affect only academia: even industrial
researchers at organizations with clusters of the necessary size may
not be able to reserve them for large-scale experiments, since these
clusters are a primary source of revenue.

These limitations are typically sidestepped in one of two ways.  The
first is to run experiments on a medium-sized cluster (100--200
machines) and extrapolate the results to larger scales. Although this approach may
work reasonably well in some cases, the fundamental assumption on
which it rests---that resource consumption increases linearly with the
load and the number of machines in the system---does not always hold,
as we show in Section~\ref{sec-motivation}. To make matters worse,
sources of non-linear growth are sometimes hard or impossible to observe in
small deployments. For example, the time needed to add a new block to
an HDFS file~\cite{Shvachko10HDFS} increases with the file's size,
but it is only after that size has grown beyond what is likely to be
observable in small deployments that the slowdown becomes a limiting factor for
the system's performance.
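A toy cost model makes the danger of linear extrapolation concrete. The sketch below is purely illustrative (not HDFS code): it assumes a hypothetical per-append cost that grows with the number of blocks already in the file, for instance because a block list is rescanned on each append.

```python
# Toy cost model (illustrative only, not HDFS code): appending a block
# costs a constant plus a term proportional to the file's current size.
def append_cost(existing_blocks, base=1.0, per_block=0.01):
    return base + per_block * existing_blocks

def total_write_time(num_blocks):
    # Total time to write the file one block at a time.
    return sum(append_cost(i) for i in range(num_blocks))

small = total_write_time(100)       # "measured" on a small deployment
extrapolated = small * 100          # naive linear scale-up to 10,000 blocks
actual = total_write_time(10_000)   # what a real large run would cost
print(extrapolated < actual)        # prints True: the quadratic term dominates
```

At small scale the constant term dominates and the system looks linear; only past a certain file size does the quadratic term take over, which is exactly the regime small deployments never reach.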

The second common approach for predicting the behavior of large-scale
systems is
simulation~\cite{DiskSim,ns2,Wang13SimMatrix,Zhao13Exploring}. Unfortunately, the
results of a simulation are only as accurate as the model on which the
simulation relies; as systems grow in size and complexity,
modeling them faithfully becomes prohibitive.


This dissertation proposes a third way: the \sys library offers researchers
the ability to test the scalability of a large-scale storage system by
running its real code, but without requiring access to thousands of
machines.  The basic insight at the core of \sys is that, in many
large-scale experiments, how data is processed depends not on
the content of the data being written, but only on its metadata, such as its size.
\sys leverages this freedom by virtualizing the data, while keeping the
metadata intact to ensure that the system continues to function
correctly. Specifically, the format that \sys clients use to write
data, which we call {\em \scheme}, has two key advantages. First,
it allows \sys to compress the behavior of the system in both
space and time. Space compression is a powerful tool for performing
large-scale experiments: for example, running 10,000 storage nodes on
just 100 machines can bring to light previously unknown scalability
bottlenecks in the metadata service. Since compressed data takes much
less time to write, compression in space can in turn result in
compression in time: with the system running faster, bugs and performance
issues can be discovered more rapidly.
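To illustrate the kind of space savings virtualization enables, the sketch below uses a hypothetical encoding (not the actual \scheme format): a fixed-size record stands in for an arbitrarily large block of data, discarding the content but preserving the length metadata the system depends on.

```python
import struct

MAGIC = b"SYNT"  # hypothetical tag marking virtualized (synthetic) data

def virtualize(length):
    """A 12-byte stand-in for `length` bytes of real data: the content
    is dropped, but the size metadata the system relies on is kept."""
    return MAGIC + struct.pack(">Q", length)

def represented_length(record):
    """The size this record represents, as the storage system sees it."""
    assert record.startswith(MAGIC)
    return struct.unpack(">Q", record[4:12])[0]

record = virtualize(64 * 1024 * 1024)  # stands in for a 64 MB block
print(len(record), represented_length(record))  # prints 12 67108864
```

A stand-in of a few bytes per block is what makes it feasible to colocate the state of a hundred emulated storage nodes on a single machine.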

The second key advantage of \scheme is that it addresses
a fundamental challenge in virtualizing data: being able to
distinguish data from metadata. While the content of the former is
not important for the system to function correctly and can therefore
be virtualized, the integrity of the latter is essential. This problem
is particularly prominent in modern storage systems, which employ a
two-layer architecture where the upper layer uses the lower layer as
black-box storage: files written to the lower layer contain both data
and metadata, which look indistinguishable to the lower layer.  The
need to ensure the integrity of the metadata is why approaches that
virtualize data by altogether disposing of file contents
(e.g.~\cite{Agrawal11Emulating}) cannot be used in our context.
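The sketch below (again a hypothetical format, not the actual \scheme encoding) shows one way a lower layer could separate the two: synthetic data carries a recognizable header, so the layer can shrink tagged runs while copying the upper layer's metadata through byte-for-byte. It assumes the tag never occurs inside real metadata.

```python
import struct

MAGIC = b"SYNT"  # hypothetical tag; assumed absent from real metadata

def write_synthetic(length):
    # Client side: tagged filler the lower layer can later recognize.
    return MAGIC + struct.pack(">Q", length) + b"\0" * length

def compress(stream):
    # Lower layer: reduce tagged runs to their 12-byte headers and pass
    # every other byte (upper-layer metadata) through untouched.
    out, i = bytearray(), 0
    while i < len(stream):
        if stream[i:i + 4] == MAGIC:
            length = struct.unpack(">Q", stream[i + 4:i + 12])[0]
            out += stream[i:i + 12]   # keep only the header
            i += 12 + length          # skip the filler bytes
        else:
            out.append(stream[i])     # metadata survives verbatim
            i += 1
    return bytes(out)

blob = b"HEADER" + write_synthetic(1_000_000) + b"FOOTER"
print(len(compress(blob)))  # prints 24: 6 + 12 + 6 bytes instead of ~1 MB
```

Because the lower layer never needs to parse the upper layer's format, the same filter works unchanged whether the bytes come from HDFS, HBase, or Cassandra.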


In summary, we make the following contributions:

\begin{itemize}
\item We introduce \scheme, a data representation sche\-me that
  allows data to be identified and efficiently compressed even at
  lower-level storage layers that are not aware of the semantics and
  formatting used by higher levels of the system.  \Scheme provides
  transparent, lossless, computationally efficient compression of data
  and achie\-ves high compression ratios.
\item We present a methodology that utilizes \scheme to test the
  scalability and robustness of large-scale storage systems: our goal
  is not to predict every aspect of the performance of such systems
  (e.g. their power consumption) but, more modestly, to identify
  scalability problems. Our approach has a
  ``Tru\-man-show''~\cite{TrumanShow} feel: the part of the system
  whose scalability is being tested processes real data and interacts
  with the rest of the system as it would in a true large-scale
  deployment, while the rest of the system uses \scheme to compress
  data and achieve high degrees of colocation, thereby emulating the
  behavior of a large number of nodes.
\item We present our experience using \sys, a library that implements
  \scheme and uses our methodology to identify scalability issues in
  large-scale storage systems. Using \sys we found several
  such issues in three mature storage systems:
  HDFS~\cite{Shvachko10HDFS}, HBase~\cite{hbase}, and Cassandra~\cite{lakshman09cassandra},
  and fixed a subset of them.
  All the problems
  we identified manifest when the scale of the system becomes larger
  than a typical research cluster. In the case of HDFS, resolving these
  problems resulted in an order-of-magnitude improvement in
  aggregate system throughput.
 Our ability to identify these issues was due not, for better or worse,
 to any deep prior understanding of these systems, but rather to the
 opportunity \sys offers to test them at an unprecedented scale.
\end{itemize}

The rest of this chapter is organized as follows.
Section~\ref{sec-motivation} discusses common practices for testing the
scalability of large systems. Section~\ref{sec-virtualize} introduces
the \scheme data representation scheme, and Section~\ref{sec-finding}
describes how it can be used to identify scalability problems in
large-scale systems. Section~\ref{sec-limitations} reviews the assumptions
behind \sys and discusses its applicability in various contexts.
Section~\ref{sec-casestudies} presents our
experience using \sys to identify performance problems in three mature
systems: HDFS, HBase, and Cassandra.
Section~\ref{sec-conclusion} concludes.
