\section{The Exalt emulator (future work)}

\subsection{Motivation}

With the advent of Big Data, several designs have been proposed for
building scalable storage systems that claim to handle hundreds of
petabytes of storage, tens of thousands of storage nodes and hundreds
of thousands of clients~\cite{lakshman09cassandra,ghemawat03google,chang06bigtable,calder11windows,hbase,Shvachko10HDFS,Corbett12Spanner,Nightingale12FDS}. Evaluating
systems at such scale requires access to tens of thousands of machines
and at least as many disks. Unfortunately, few researchers have access
to resources that plentiful, and many system researchers find
themselves designing systems that are supposed to operate at a much
larger scale than the testbed infrastructure available to them. When
speaking of academia, pointing out resource limitations may appear
 redundant, but even industrial researchers who are within
reach of clusters of the necessary size may not be able to run large
scale experiments on them, since these clusters are a primary source
of revenue.

There are three main approaches to sidestepping these limitations. The first
is to run experiments on a medium-sized cluster
(100--200 machines) and extrapolate the results to larger scales. While
this approach may work reasonably well in some cases, it is limited by
its fundamental assumption that resource consumption grows linearly
with the load and the number of machines in the system: as we observe
in practice, this is not always true. Moreover, such
non-linear behaviors are sometimes hard or impossible to observe in
small deployments. For example, as the size of HDFS~\cite{Shvachko10HDFS} files grows,
adding a new block to a file becomes increasingly slower, eventually
becoming a limiting factor for the system's performance.

The second common approach to performing large-scale experiments with modest
resources is to use stub (fake) components. This can be done by running several low-overhead local
threads or remote processes that try to reproduce the behavior of real components of the system.
Our experience indicates that it is very hard to reproduce the behavior of a real component;
the stub component ends up being almost as complex as the real component.

The third common approach for  predicting the behavior of large-scale
systems is simulation~\cite{DiskSim,ns2,ExascaleSimulation}. Unfortunately, the results of a simulation are
only as accurate as the model on which the simulation
depends, and as a system grows in size and complexity, modeling it
faithfully becomes prohibitively difficult.



\subsection{Design}

We are building Exalt, a system that offers researchers the ability
to evaluate a large-scale storage system by running the system's real
code, but without requiring access to thousands of machines. Exalt is built on the insight that
for many useful large-scale experiments, the actual data being written does not affect the
behavior of the system. Based on that insight, Exalt abstracts away the data, while
taking care to keep the metadata intact to ensure that the system continues to
function correctly. Abstracting away the data allows us
to {\em compress} the behavior of the system in both space and time. Space compression
is a powerful tool for performing large-scale experiments, e.g., by running 10,000 storage nodes
on only a hundred machines and observing that the metadata service becomes the
bottleneck of the system. On the other hand, time compression can be used to run the
system in an accelerated mode, discovering bugs and performance issues much faster than
a real deployment would allow.

One of the challenges in abstracting away data is being able to distinguish data from
metadata---while data itself is not important for the system to function correctly, the same
is not true for metadata. This challenge is particularly prominent in modern storage systems,
which typically use a two-layer architecture, where the upper
layer uses the lower layer as black box storage. As a result, the files written to the
lower layer contain both data and metadata, which look indistinguishable to the lower layer.
Approaches that discard file contents altogether (e.g.~\cite{Agrawal11Emulating}), although superficially
similar to Exalt, have the unpleasant consequence of discarding critical system metadata,
and therefore do not work with modern storage systems.

\subsubsection{Making data distinguishable from metadata}
Distinguishing data from metadata is the main challenge in abstracting away data efficiently.
Fortunately, for the purposes of testing the scalability of large-scale storage systems, we are afforded
the luxury of choosing both the data being written by the clients (data format) and the way
that this data is represented in the system (compression algorithm). We will call such a
combination a {\em data representation scheme}. To better understand the various options in
the space of data representation schemes,
let us first review the requirements that such a scheme must fulfill. The first requirement,
as we argued above, is that the scheme must be lossless. The second requirement is also
straightforward: the scheme must achieve a high compression ratio.

The third requirement is that the scheme must be computationally efficient.
Consider, for example, the case where the client data is simply a sequence of 0's.
When that data is accompanied by some metadata, the compression algorithm would need to
scan all the input bytes to determine where the sequence of 0's begins and where it ends. This is
computationally inefficient: the disk and network bottlenecks are removed, but instead
a CPU bottleneck is introduced, severely limiting the scalability of this approach.

The final requirement is more subtle; it stems from the fact that modern storage systems
do not necessarily store data as a single piece, but instead split it into multiple pieces
(sometimes called chunks) that are stored separately. This means that each of these chunks
should be compressible independently. Moreover, the process of dividing data into chunks
is non-deterministic and depends on many factors outside the client's control. For example,
in HBase the procedure of splitting data into chunks depends on a non-deterministic race
between multiple threads.
This means that it is not possible to pre-assign ``hea\-der'' compression information to each chunk,
as the client does not know how the data will be split into chunks. We can therefore express
the final requirement as follows: any part of the data should be compressible on its own.

We are designing a novel data representation scheme, called {\em \scheme},
that satisfies the above requirements. \Scheme consists of a data format and an algorithm
for compressing and decompressing the data. The driving insight of \scheme is that, since
we control the data format, we can come up with a very simple compression/decompression
algorithm.

\smallskip{\bf Data format} The data written by the client is a series of {\tt <flag> <number>} entries
where {\tt <flag>} is a predefined byte sequence and {\tt <number>} is the number of remaining
bytes in the data. For example, using a 4-byte flag and 4-byte integers, 1\,KB of data would
be formatted as:\\
\centerline{\tt <flag>1016<flag>1008...<flag>8<flag>0}
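To make the format concrete, here is a minimal Python sketch of a client-side generator for it; the flag bytes, big-endian encoding, and the {\tt make\_data} helper are illustrative assumptions, not Exalt's actual implementation:

```python
import struct

# Illustrative choices: a hypothetical 4-byte flag and 4-byte big-endian
# integers; the actual flag would be chosen per system.
FLAG = b"\xde\xad\xbe\xef"
ENTRY = len(FLAG) + 4  # bytes per <flag><number> entry

def make_data(total_bytes):
    """Generate total_bytes of client data in the <flag><number> format.

    Each entry's number counts the bytes remaining AFTER that entry,
    so the final entry carries 0. total_bytes must be a multiple of ENTRY.
    """
    assert total_bytes % ENTRY == 0
    out = bytearray()
    remaining = total_bytes
    while remaining > 0:
        remaining -= ENTRY
        out += FLAG + struct.pack(">I", remaining)
    return bytes(out)

data = make_data(1024)  # entries encode 1016, 1008, ..., 8, 0
```

Because every entry records how many bytes follow it, any suffix of the data is self-describing, which is what later makes independent chunk compression possible.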

\smallskip{\bf Compression and decompression} Given a byte sequence in the above format,
the compression algorithm would simply need to store the length of that sequence. However,
since any part of that sequence may be compressed separately, the compression algorithm
stores two numbers: the starting point and the length of the sequence. In the above example,
if the entire 1\,KB of data were being compressed, the result would be the pair (1024,1024).
If, however, this sequence were split into two chunks of 512 bytes each, the first chunk
would be compressed as (1024,512) and the second chunk would be compressed as (512,512).
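Under the data format above (again with a hypothetical 4-byte flag and 4-byte big-endian numbers as illustrative assumptions), compression reduces to reading a chunk's first entry, and decompression regenerates the entries; a sketch:

```python
import struct

# Illustrative format: hypothetical 4-byte flag + 4-byte big-endian
# remaining-byte counter per entry.
FLAG = b"\xde\xad\xbe\xef"
ENTRY = len(FLAG) + 4

def compress_chunk(chunk):
    """Compress a chunk that is entirely in the <flag><number> format.

    Only the first entry is read: start is the remaining-byte count at the
    chunk's first byte (first number + ENTRY); length is the chunk size.
    """
    first = struct.unpack(">I", chunk[len(FLAG):ENTRY])[0]
    return (first + ENTRY, len(chunk))

def decompress_chunk(start, length):
    """Regenerate the original bytes of a chunk from its (start, length) pair."""
    out = bytearray()
    remaining = start
    for _ in range(length // ENTRY):
        remaining -= ENTRY
        out += FLAG + struct.pack(">I", remaining)
    return bytes(out)
```

Compressing the whole 1\,KB example yields (1024, 1024), while its two 512-byte halves yield (1024, 512) and (512, 512), and each chunk round-trips through {\tt decompress\_chunk} independently of the others.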

Of course, as we discussed above, in modern storage systems data and metadata
are frequently stored together. Given a byte sequence, the algorithm scans the sequence
to find the first occurrence of the flag, and compresses the remaining bytes only
if they adhere to the specified format. Any bytes that do not adhere to this format
are left uncompressed. To distinguish compressed from uncompressed data
during decompression, an uncompressed sequence is preceded by a 0 and a 4-byte integer
denoting the length of the sequence, while a compressed sequence is preceded by a 1.
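A simplified sketch of this mixed-content handling follows, using the same hypothetical flag and the 0/1 record tags described above; a production version would need to handle a flag followed by malformed bytes more gracefully than falling back to a fully uncompressed record:

```python
import struct

FLAG = b"\xde\xad\xbe\xef"   # hypothetical flag, as before
ENTRY = len(FLAG) + 4

def well_formed(buf):
    """True if buf is a run of <flag><number> entries counting down by ENTRY."""
    if len(buf) < ENTRY or len(buf) % ENTRY != 0:
        return False
    first = struct.unpack(">I", buf[len(FLAG):ENTRY])[0]
    for k in range(len(buf) // ENTRY):
        off = k * ENTRY
        if buf[off:off + len(FLAG)] != FLAG:
            return False
        n = struct.unpack(">I", buf[off + len(FLAG):off + ENTRY])[0]
        if n != first - k * ENTRY:
            return False
    return True

def compress(buf):
    """Compress a buffer that may mix metadata with formatted data.

    Records: b'\\x00' + 4-byte length + raw bytes for an uncompressed run;
    b'\\x01' + (start, length) as two 4-byte ints for a compressed run.
    """
    i = buf.find(FLAG)
    if i == -1 or not well_formed(buf[i:]):
        # no recognizable data: leave everything uncompressed
        return b"\x00" + struct.pack(">I", len(buf)) + buf
    out = bytearray()
    if i > 0:  # metadata prefix stays uncompressed
        out += b"\x00" + struct.pack(">I", i) + buf[:i]
    first = struct.unpack(">I", buf[i + len(FLAG):i + ENTRY])[0]
    out += b"\x01" + struct.pack(">II", first + ENTRY, len(buf) - i)
    return bytes(out)
```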

\smallskip{\bf Choosing the flag} The purpose of the flag is to help distinguish data from
metadata. However, if the flag sequence occurs in the metadata, too, this could result
in the algorithm erroneously compressing some parts of the metadata---which, unlike
data, should not be compressed, as their real values cannot be recovered. Though a
randomly chosen 8-byte flag works well for HDFS in our preliminary experiments, we are
seeking analytical ways to help us choose an appropriate flag.
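To get a rough sense of the risk, consider a crude back-of-the-envelope model (an illustrative assumption, not an analysis of real metadata): if metadata bytes were uniformly random and independent, the expected number of accidental occurrences of a random $b$-byte flag in $N$ bytes of metadata would be roughly
\[
N \cdot 2^{-8b}.
\]
For an 8-byte flag, even $N = 2^{40}$ bytes (1\,TB) of metadata yields an expectation of about $2^{-24} \approx 6 \times 10^{-8}$. Real metadata is not uniformly random, of course, which is precisely why a more careful analytical method for choosing the flag is needed.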

\smallskip{\bf Handling inline metadata} So far we have assumed that, while a data sequence
may be split into multiple chunks, the bytes of each chunk are contiguous. However,
storage systems sometimes insert additional metadata (e.g., checksums) at locations
that the client cannot predict. We are designing efficient algorithms to detect
such ``foreign'' bytes inside a sequence. The idea is simple:
any sequence that does not contain foreign bytes has a length equal
to the difference between its first number and its last number, plus the length of
one {\tt <flag> <number>} entry. If that condition does not hold,
some bytes have been added somewhere in the sequence. In order
to quickly identify where those bytes are added---instead of naively scanning the entire
byte sequence---we leverage the fact that the numbers in a sequence are sorted. We
will use an algorithm akin to multi-key binary search, where both halves of the sequence
are iteratively searched if their length does not match the difference between their
first and last numbers.
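Under the data format above (with an assumed 4-byte flag and 4-byte big-endian numbers), this multi-key binary search can be sketched as follows; the split-point search and the base case are simplifications, and a production version must also cope with flag bytes that happen to occur inside the foreign run:

```python
import struct

FLAG = b"\xde\xad\xbe\xef"   # hypothetical flag, as before
ENTRY = len(FLAG) + 4

def _num(buf, off):
    """Read the <number> of the entry starting at byte offset off."""
    return struct.unpack(">I", buf[off + len(FLAG):off + ENTRY])[0]

def _clean(buf, lo, hi):
    """A span is clean if it starts and ends on entry boundaries and its
    length equals first_number - last_number + ENTRY."""
    if hi - lo < ENTRY:
        return False
    if buf[lo:lo + len(FLAG)] != FLAG:
        return False
    if buf[hi - ENTRY:hi - ENTRY + len(FLAG)] != FLAG:
        return False
    return hi - lo == _num(buf, lo) - _num(buf, hi - ENTRY) + ENTRY

def find_foreign(buf, lo=0, hi=None, out=None):
    """Return byte ranges [lo, hi) suspected of containing foreign bytes.

    Instead of scanning every byte, recurse into each half of any span
    whose length does not match its first and last numbers, splitting at
    the nearest entry boundary past the midpoint.
    """
    if hi is None:
        hi = len(buf)
    if out is None:
        out = []
    if _clean(buf, lo, hi):
        return out
    if hi - lo <= 2 * ENTRY:          # too small to split: report it
        out.append((lo, hi))
        return out
    mid = buf.find(FLAG, (lo + hi) // 2, hi)
    if mid <= lo:                     # no boundary found: report whole span
        out.append((lo, hi))
        return out
    find_foreign(buf, lo, mid, out)
    find_foreign(buf, mid, hi, out)
    return out
```

Each probe reads only the first and last entries of a span, so a single small insertion is localized with roughly logarithmically many probes rather than a full scan.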

\subsubsection{Implementation}

Our implementation of Exalt performs data compression for three key resources:
the disk, the network, and memory. To facilitate the use of Exalt, the main goal
of our implementation is to be as non-intrusive as possible. Ideally one should
be able to simply run a storage system that uses Exalt to enable large-scale
experiments, without having to modify the source code of the system. We plan to
achieve that goal for two of the three target resources: the disk and the network.
When data needs to be compressed in memory, we have to make minor modifications
to the system code. To achieve transparent compression, we use bytecode instrumentation (BCI) to modify
the appropriate Java library classes so that they automatically compress and decompress data.


\subsubsection{Finding scalability bottlenecks with Exalt}

Applying the data compression techniques described above gives us the ability to
colocate multiple nodes on a single physical machine, which is the stepping
stone for performing large-scale tests on a small set of machines. However, how
to use this ability to draw meaningful conclusions about the behavior of the system
at large scale remains an open question. Our plan is to view the
system as a collection of {\em real} and {\em emulated} nodes. A node is called real
if it is running the actual system code and handling unmodified data. An emulated node
still runs the actual system code, but it handles and stores compressed data.

We use this combination of real and emulated nodes as a kind of microscope that focuses
on one part of the system and tries to identify performance bottlenecks. In particular,
when we want to test the scalability of some component, we leave the corresponding nodes
real, and only use emulated nodes in the rest of the system. This ensures that the part
of the system that is ``under inspection'' behaves exactly as it would in a real large-scale
deployment. This approach works particularly well at identifying performance issues at
centralized components that can become the system bottleneck as the scale increases
(e.g. HDFS NameNode, HBase Master).

While the above methodology is useful for identifying performance problems at a specific
part of the system, it has the downside that the rest of the system consists of emulated
components. As a result, this approach is
unable to detect performance issues that only manifest when a large number of nodes
perform some collective action (e.g., system-wide recovery, 99th-percentile latency). In this case,
we will fall back to the traditional approach: predicting the behavior of the
resource based on a model. Of course, our ability to accurately predict the behavior of
that resource depends on the accuracy of the model. Some resources, like energy consumption,
are quite challenging to model. For other resources, like disk performance, there exist
models that can predict the resource's performance with reasonable accuracy~\cite{DiskSim}. Finally,
capacity-related resources are straightforward to model: one need only keep a record of
the original size of the data.  We are working on incorporating disk~\cite{DiskSim}
and network~\cite{ns2} models into Exalt. However, modeling always introduces a
certain level of inaccuracy, and we will evaluate the effects of such inaccuracies.

\subsection{Preliminary results}

We have applied Exalt to HDFS. Specifically, we ran the HDFS NameNode, which is well
known to be the bottleneck of HDFS, in real mode, and we ran all HDFS DataNodes in
emulated mode.

As shown in Figure~\ref{graph:hdfs_throughput_scalability}, we identified several
configuration problems when adding more DataNodes to the system; fixing these
problems improved throughput. As shown in Figure~\ref{graph:hdfs_big_file},
we also identified problems in the code that cause performance to degrade as
files grow large. A simple patch eliminates this problem.

\begin{figure}
  \centering
  \includegraphics[width=.5\linewidth]{hdfs_throughput.pdf}
  \caption{\label{graph:hdfs_throughput_scalability} HDFS throughput scalability. The number in parentheses
under each number of disks is the number of physical server machines involved in that experiment.}
\end{figure}

\begin{figure}
  \centering
  \includegraphics[width=.5\linewidth]{hdfs_big_file.pdf}
 \caption{\label{graph:hdfs_big_file} HDFS throughput degradation as the size of files increases.}
\end{figure}




