\section{Compressing data with \scheme}
\label{sec-virtualize}

Our approach is based on a simple intuition: for the purposes of
testing the scalability of large-scale storage systems, it is
typically the size of the data being written that
matters, not its actual content. We are then free to {\em choose} what
data clients write during our tests: our work explores the
opportunities that this freedom affords.

Specifically, our approach is to design a data format that achieves
fast and efficient compression and decompression.
As we discuss in Section \ref{sec:using-compression}, using compressed
data lets us colocate multiple nodes on the same machine, which in
turn enables running large-scale experiments on a small infrastructure.

Before presenting \scheme, our compression scheme, we set
forth the requirements that it must fulfill.

\subsection{Compression scheme requirements}

\begin{figure*}[t]
\centerline{\includegraphics[angle=0, width=\textwidth]{figures/exalt-figure1.pdf}}
\caption{\label{fig:tardis-examples} Examples of the Tardis format in compressed and uncompressed form.}
\end{figure*}

{\bf The scheme must be lossless}. While compression can
reduce resource usage and allow node colocation, the ability to
recreate the original data is essential.  Modern large-scale storage
systems typically use a two-layer architecture, where the upper layer
uses the lower as black-box
storage~\cite{hbase,chang06bigtable,calder11windows}.  What appears
to be generic data to the lower storage layer may actually be metadata
necessary for the correct functioning of the upper layer; it is
critical that none of this metadata be lost.

{\bf The scheme must achieve a high compression ratio}. The
motivation for this requirement is straightforward: the compression
ratio directly determines the degree of colocation we can achieve.


{\bf The scheme must be computationally efficient}.  To see why this
requirement is not trivially met, consider a straw-man scheme in which
clients simply write sequences of 0's. This scheme offers obvious
opportunities for significant compression; however, if the system can
interleave client data with metadata, the compression algorithm must
scan all the input bytes to determine where each sequence of 0's
begins and ends.  The disk and network bottlenecks would be removed,
but at the expense of introducing a CPU bottleneck, severely limiting
the scalability of this scheme.

{\bf Data chunks should be independently compressible}.  Modern
storage systems do not necessarily store data as a single unit, but
instead split it into multiple, separately stored chunks; each chunk
must therefore be compressible on its own.  Meeting this requirement
is challenging, however, since a client in general has no control over
how data is divided into chunks. For example, in HBase the way data is
split into chunks depends on a non-deterministic race between multiple
threads.


\subsection{\Scheme compression}

We introduce a novel compression scheme, called {\em \scheme},
that satisfies the above requirements. \Scheme consists of a data format and
an algorithm for compressing and decompressing the data.
Intuitively, \scheme aims to achieve the following two complementary
goals.  When no metadata is inserted in the middle of the data, the
compression algorithm should be able to compress the entire data after
scanning only a small fraction of it.  Otherwise, the compression
algorithm should be able to quickly identify the location of the
inserted metadata.

\smallskip{\bf Data format} Clients write data as a series
of {\tt <flag> <marker>} entries, where {\tt <flag>} is a predefined
byte sequence that does not appear in the system metadata,
and {\tt <marker>} denotes the number of remaining bytes in
the data. For example, using a 4-byte flag and 4-byte markers, 1\,KB
of data would be formatted as:\\
\centerline{\tt <flag>1016<flag>1008...<flag>8<flag>0}\\
In this example, the first marker denotes that there are 1016 bytes remaining
in the sequence, since the (first) flag and the marker itself are 4 bytes each.
Of course, the size of flags and markers need not be the same: our prototype uses 8-byte flags and 4-byte markers.
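As a concrete illustration, the following minimal Python sketch generates data in this format, using the 4-byte flag and 4-byte markers of the running example (the flag value itself is hypothetical; any byte sequence absent from the system's metadata would do):

```python
import struct

FLAG = b"FLG!"            # hypothetical 4-byte flag
ENTRY = len(FLAG) + 4     # one entry = flag + 4-byte big-endian marker

def make_tardis(total_bytes: int) -> bytes:
    """Generate `total_bytes` of data as <flag><marker> entries, where
    each marker holds the number of bytes remaining after its own entry."""
    assert total_bytes % ENTRY == 0, "size must be a multiple of the entry size"
    out = bytearray()
    remaining = total_bytes
    while remaining > 0:
        remaining -= ENTRY
        out += FLAG + struct.pack(">I", remaining)
    return bytes(out)
```

For 1\,KB of data, the first marker reads 1016 and the last reads 0, exactly as in the example above.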

\smallskip{\bf Compressed data format} Given a byte sequence in the above format,
the compression algorithm would simply need to return its length.  However, to enable data chunks to be
independently compressible, the algorithm actually returns two numbers:
the starting point of the sequence, expressed as the number of bytes
remaining at that point, as well as its length. In the above
example (also illustrated in Figure \ref{fig:tardis-examples}a), if
the entire 1\,KB of data were being compressed, the result would be
the pair (1024,1024).  If, however, the data were split into two
chunks of 512 bytes each (Figure \ref{fig:tardis-examples}b), the
first chunk would be compressed as (1024,512) and the second as
(512,512).
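In the common case of an uninterrupted sequence, compressing a chunk reduces to reading a single marker. A minimal sketch, assuming the 4-byte flag and 4-byte markers of the running example:

```python
import struct

ENTRY = 8   # 4-byte flag + 4-byte marker, as in the running example

def compress_uninterrupted(chunk: bytes) -> tuple:
    """Compress a chunk known to be one uninterrupted Tardis sequence.
    The 'start' is the number of bytes remaining at the chunk's first
    byte, recovered from the first marker; the length is the chunk size."""
    first_marker = struct.unpack(">I", chunk[4:8])[0]
    return (first_marker + ENTRY, len(chunk))
```

Applied to the whole 1\,KB sequence this yields (1024, 1024); applied to its second 512-byte half, (512, 512).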

As we discussed above, in modern storage systems data and metadata are
frequently stored together. Figure \ref{fig:tardis-examples}c shows an
example where metadata is inserted in the middle of a \scheme
sequence. In this case, the metadata splits the original sequence into
two subsequences, of length 20 and 1004, respectively. Ideally, we
would like to compress each of these sequences separately, leaving the
metadata uncompressed. However, since in
this case the metadata is inserted in the middle of a flag-marker
pair, we simply leave these 8 bytes---the flag and the corresponding
marker---uncompressed.\footnote{It is actually possible to include the
  flag in the compressed sequence, but we omit this optimization for
  simplicity of presentation.}  This shortens the first subsequence to
a length of 16 and the second subsequence to 1000. Note that even if
the metadata were not aligned with the flags and markers, the result
would be the same: only the flag-marker pair that is split by the
metadata is left uncompressed and the rest of the data is compressed
as two separate subsequences.

To distinguish between compressed and uncompressed data during
decompression, an uncompressed sequence is preceded by a 0 and a
4-byte integer denoting its length, while a compressed
sequence is preceded by a 1.
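This framing can be sketched in a few lines, mirroring the {\em AppendMeta} and {\em AppendTardis} helpers used by the pseudocode (tag bytes and integer widths as described above; the exact layout of the (start, length) pair is an assumption):

```python
import struct

def append_meta(buf: bytearray, metadata: bytes) -> None:
    """Uncompressed sequence: a 0 tag byte, a 4-byte length, then the raw bytes."""
    buf += b"\x00" + struct.pack(">I", len(metadata)) + metadata

def append_tardis(buf: bytearray, start: int, length: int) -> None:
    """Compressed Tardis subsequence: a 1 tag byte, then the (start, length) pair."""
    buf += b"\x01" + struct.pack(">II", start, length)
```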

\begin{figure}[!t]
\centering
\pseudocodeinput[breaklines=true,mathescape=true]{tardis.txt}
\caption{Pseudocode for \scheme compression.}
\label{fig:tardis_compression}
\end{figure}

\smallskip{\bf Compression} Figure~\ref{fig:tardis_compression}
shows the pseudocode for the \scheme compression algorithm. The main function, {\tt TardisCompress},
calls the {\tt FindSubsequence} function iteratively until all input data has been consumed.
When {\tt FindSubsequence} returns a new subsequence (line 7), the
main function appends the appropriate bytes to the compressed data
buffer. We detect the presence of metadata between two subsequences by
checking whether the starting position of the new
subsequence (\emph{pos}) is after the end of the previous
subsequence (\emph{index}). If so,  we append a 0 to denote the
beginning of an uncompressed sequence, followed by the length of the
metadata, and finally by the
metadata itself, uncompressed ({\em AppendMeta}, line 13).

It is then time to add the new subsequence. To denote that what
follows is compressed, we append a 1 before  the compressed form of
the \scheme subsequence (which, recall, consists of  the starting
point and length of the subsequence)  ({\em AppendTardis}, line 14).

Function {\tt FindSubsequence} is the core of the algorithm: its task is to identify a \scheme
subsequence. Two factors complicate this task: the sequence may have
been split  into multiple chunks and metadata may have been inserted somewhere in
the sequence. Given a starting index in the data, {\tt FindSubsequence} first scans
the data to find the first flag, indicating the start of a \scheme sequence, and reads the corresponding
marker (line 19). Then, it checks whether some metadata has been added in the
middle of this sequence. The check is simple: if no metadata is inserted between two markers with values
$A$ and $B$, then these markers should be placed $B-A$ bytes apart.
The purpose of lines 22--23 is to determine which marker should serve
as marker $B$. If the original sequence is not split across chunks,
then $B$ is marker 0, which should be $m$ bytes after the first
marker, where $m$ is the value of the first marker. Otherwise, $B$ is set
to the last marker of the current
chunk. If the difference between the values of markers $B$ and $A$ is
indeed equal to the byte distance between the markers, the algorithm
has found an uninterrupted \scheme subsequence. If that
is not the case, the algorithm performs a binary search to find the
rightmost flag-marker pair that satisfies the above condition, leveraging the fact that
the values of the markers form a sorted sequence (lines 24--30).
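The invariant behind the binary search can be sketched as follows: if no metadata interrupts the sequence, entry $i$ must hold the first marker's value minus $i$ times the entry size; metadata inserted anywhere shifts every later entry, so the invariant holds for a prefix of the entries and fails afterwards. A minimal Python sketch (flag value hypothetical; for simplicity it assumes the sequence starts at offset 0 of the chunk):

```python
import struct

FLAG = b"FLG!"   # hypothetical 4-byte flag
ENTRY = 8        # flag + 4-byte marker

def marker_at(data: bytes, i: int) -> int:
    """Marker value of the i-th entry, assuming no preceding insertions."""
    off = i * ENTRY + len(FLAG)
    return struct.unpack(">I", data[off:off + 4])[0]

def find_subsequence_end(data: bytes) -> int:
    """Number of entries in the longest uninterrupted Tardis prefix.
    The invariant (marker i equals marker 0 minus i*ENTRY, at a
    flag-aligned position) is monotone, so binary search applies."""
    n = len(data) // ENTRY
    first = marker_at(data, 0)
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        ok = (data[mid * ENTRY:mid * ENTRY + len(FLAG)] == FLAG
              and marker_at(data, mid) == first - mid * ENTRY)
        if ok:
            lo = mid          # invariant holds up to mid
        else:
            hi = mid - 1      # interruption lies before mid
    return lo + 1
```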


In practice, the common case is very simple: as long as there is no
metadata inserted in the byte sequence, the compression algorithm
needs only to check the first and last number of the sequence. This
allows \scheme to compress data much faster than off-the-shelf
compression algorithms. For example, when compressing data chunks of
1\,MB, \scheme is about 33,000 times faster than Gzip~\cite{gzip} and
2,300 times faster than the straw man compression scheme where client
data consists only of 0's and the compression algorithm simply scans
the data and compresses sequences of 0's into an integer denoting
their length. Of course, the comparison to Gzip is not
apples-to-apples, since Gzip is a generic compression algorithm; what
it does show, however, is that being able to choose the data format
drastically reduces the CPU overhead of our approach.

\smallskip{\bf Decompression} The decompression algorithm is straightforward.
It iterates through the sequences in the compressed stream, each either
compressed (preceded by a 1) or uncompressed (preceded by a 0 and the length of the
sequence). Uncompressed sequences are copied without modification,
while compressed sequences are expanded to their uncompressed form.
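The decompression loop can be sketched as follows, assuming a hypothetical 4-byte flag and 4-byte markers:

```python
import struct

FLAG = b"FLG!"   # hypothetical 4-byte flag
ENTRY = 8        # flag + 4-byte marker

def decompress(comp: bytes) -> bytes:
    """Expand a compressed stream: tag 0 introduces a raw sequence
    (4-byte length prefix), tag 1 a (start, length) pair from which
    the flag-marker entries are regenerated."""
    out = bytearray()
    i = 0
    while i < len(comp):
        tag = comp[i]
        i += 1
        if tag == 0:                              # uncompressed: copy verbatim
            n = struct.unpack(">I", comp[i:i + 4])[0]
            i += 4
            out += comp[i:i + n]
            i += n
        else:                                     # compressed: regenerate entries
            start, length = struct.unpack(">II", comp[i:i + 8])
            i += 8
            for off in range(0, length, ENTRY):
                out += FLAG + struct.pack(">I", start - off - ENTRY)
    return bytes(out)
```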

\smallskip{\bf Choosing the flag} To prevent portions of metadata from
being accidentally compressed, the flag sequence should never appear
in the metadata.  If it did and, by unlucky coincidence, the length
value following the spurious flag pointed to another flag followed by a
0, that entire sequence of bytes would be compressed.  Although we
could eliminate this danger altogether,\footnote{It would suffice to
  escape the flag sequence in the metadata. However, this would
  require intrusive modifications to the server code, as all metadata
  insertions would need to be aware of the escaping logic.} it seems
  unnecessary: \sys
is not intended for production use, and an accidental compression
would simply require us to rerun the affected experiment. With a
sufficiently large flag, the odds of a false positive can be driven
arbitrarily low: our pragmatic approach was to choose a random 8-byte
sequence as the flag and take our chances.  Our experiments have yet
to produce a false positive.

\subsection{Using compression to enable large-scale tests}
\label{sec:using-compression}

Since we are attempting to run a large number of nodes on a much
smaller number of machines, we necessarily have to colocate
multiple nodes on the same machine. Such colocation, however, causes
significant contention for the physical resources of the
machine. Specifically, the disk and memory capacity, as well as the
disk and network bandwidth, available to each machine are typically
enough to support only a single node, making straightforward
colocation infeasible.

Data compression can help here:
keeping data compressed reduces each node's disk capacity and disk
bandwidth requirements, as well as its memory capacity and network
bandwidth requirements. Of course, data compression is not without
cost; in this case, the cost is CPU utilization.

This tradeoff, however, is very attractive for storage systems, where
CPU cycles are plentiful and bandwidth and storage capacity are
typically the system's bottlenecks. It also opens the door to emulating,
on HPC computation clusters, the behavior of storage systems that would
otherwise be too big to test: indeed, as we will see in
Section~\ref{sec-casestudies}, our analysis of the scalability of
HDFS/HBase/Cassandra was performed by running \sys on the Stampede
high-performance cluster at the Texas Advanced Computing Center
(TACC)~\cite{TACC}.


If data compression is used without colocation, it results in a system
that is ``compressed'' in time, rather than space, since each write
will take less time to complete.  Running the system at an accelerated
pace offers the potential of identifying bugs or performance problems
much faster: Section~\ref{sec:datanode-scalability} discusses a case
where time compression allowed us to identify a problematic behavior
about 100 times faster than in a real deployment.



\subsection{Implementation}

Our implementation of \sys performs data compression for three key
resources: disk, network, and memory. Our goal is to be minimally
intrusive. While in-memory compression does require minor modifications
to the source code of the storage system being tested, we achieve
fully transparent disk and network compression by using bytecode
instrumentation (BCI) to modify the relevant Java library classes
(Socket, SocketInputStream, SocketOutputStream, and SocketChannel for
network compression; File, Fi\-le\-InputStream, FileOutputStream,
RandomAccessFile, and FileChannel for on-disk compression).

File compression is more challenging than network compression because
the file interface allows a user to partially update existing
data. When that data is already compressed, updating it in place is
not straightforward.  A naive solution would be to decompress the
existing data, update it, and compress it again. However, if the old
and the newly compressed data have different sizes, all following data
chunks would have to be moved. To address this problem, similarly to
the Log-Structured File System (LFS)~\cite{rosenblum92lfs}, we
transform in-place update operations into append operations. This
allows us to efficiently process in-place updates, with only a small
bookkeeping overhead to keep track of the latest version of each range
of bytes.
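The append-based scheme can be sketched as follows. This is a simplified, uncompressed model with hypothetical names, and the flat range list is an assumption; \sys's actual bookkeeping operates on compressed chunks:

```python
class AppendOnlyFile:
    """LFS-style file model: every write, including an in-place update,
    is appended to a log, and a range map records where the latest
    version of each byte range lives."""

    def __init__(self):
        self.log = bytearray()
        self.ranges = []   # list of (file_offset, length, log_offset)

    def write(self, offset: int, data: bytes) -> None:
        """Record the write's location in the log; never move old data."""
        self.ranges.append((offset, len(data), len(self.log)))
        self.log += data

    def read(self, offset: int, length: int) -> bytes:
        """Replay writes oldest-first; later writes overwrite earlier ones."""
        out = bytearray(length)
        for f_off, f_len, l_off in self.ranges:
            lo = max(offset, f_off)
            hi = min(offset + length, f_off + f_len)
            if lo < hi:
                out[lo - offset:hi - offset] = \
                    self.log[l_off + lo - f_off:l_off + hi - f_off]
        return bytes(out)
```

An in-place update thus costs one append plus one range-map entry, regardless of how much data follows it in the file.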

\smallskip{\bf Memory compression} In-memory data structures do not
use a well-defined interface, such as the File or Socket abstractions
used by the disk and network. As a result, transparently modifying
these data structures to compress and decompress data at the
application layer is very hard.\footnote{Transparent compression of
   in-memory data could be potentially implemented at the kernel
   level, but it would sacrifice portability.}
Instead, when the in-memory data needs to be
compressed, we manually modify the source code of the system.
Fortunately, this process is quite simple: one need only identify the
data structures that hold the client data, compress data as it is
stored in those structures, and decompress it as it is
retrieved. For example, compressing the in-memory
key-value store of HBase required adding 71 lines of code across four
files.
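The put/get wrapping pattern can be sketched as follows, with zlib standing in for \scheme compression (the class and method names are hypothetical):

```python
import zlib

class CompressingStore:
    """Wrap a key-value map so that values are compressed on put and
    decompressed on get: the pattern used to retrofit in-memory
    compression onto a store's client-data structures."""

    def __init__(self):
        self._map = {}

    def put(self, key, value: bytes) -> None:
        self._map[key] = zlib.compress(value)   # stand-in compressor

    def get(self, key) -> bytes:
        return zlib.decompress(self._map[key])
```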




