\section{Finding scalability bottlenecks}
\label{sec-finding}

Data compression gives us the ability to colocate multiple nodes on a
single physical machine: in this section, we discuss how we can
selectively use this ability to draw meaningful conclusions about the
scalability of a large-scale storage system.  We will view the system
as a collection of {\em real} and {\em emulated} nodes. A real node
runs the system's actual code and handles
unmodified data. An emulated node still runs the system's actual code
but, as needed to support colocation, may (i) store compressed data on
its disk, (ii) send compressed data over the network when
communicating with other emulated nodes, and (iii) store compressed
data in memory.
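To make the distinction concrete, the following sketch (a hypothetical
illustration, not \sys's actual implementation) shows the storage path
of an emulated node: data is losslessly compressed before it touches
the disk, so the original data can always be restored.

```python
import zlib


class EmulatedStore:
    """Hypothetical sketch of an emulated node's storage layer:
    data is losslessly compressed before it is stored, so many
    such nodes can be colocated on one physical machine."""

    def __init__(self):
        self.blocks = {}  # block_id -> compressed bytes

    def write(self, block_id, data):
        # Lossless compression: only the resource footprint changes.
        self.blocks[block_id] = zlib.compress(data)

    def read(self, block_id):
        # Decompression restores the original, unmodified data.
        return zlib.decompress(self.blocks[block_id])


store = EmulatedStore()
payload = b"A" * 1_000_000  # 1 MB of highly compressible data
store.write("blk_1", payload)
assert store.read("blk_1") == payload  # data is fully recoverable
```

A real node, by contrast, would store `payload` verbatim; the emulated
node consumes only the (much smaller) compressed footprint.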


\subsection{\sys methodology}

We use this combination of real and emulated nodes as a microscope of
sorts: we can focus it on one part of the system at a time to identify
performance bottlenecks.  To ensure that the part of the system
``under inspection'' behaves as it would in a real large-scale
deployment, we leave the corresponding nodes real, while using
emulated nodes for the rest of the system.  This approach works
particularly well at identifying performance issues in centralized
components that can become a bottleneck as the scale of the system
increases (e.g., the HDFS NameNode or the HBase Master).
Section~\ref{sec-casestudies} discusses our experience using this
technique to find scalability problems in real systems.
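The appeal of this approach is the scale it makes reachable on a fixed
testbed. The following back-of-the-envelope helper (a hypothetical
sketch with illustrative numbers, not measurements from \sys) shows
how keeping only the inspected nodes real multiplies the apparent
cluster size:

```python
def cluster_scale(machines, real_nodes, colocation_factor):
    """Hypothetical back-of-the-envelope helper: assuming one real
    node per machine and `colocation_factor` emulated nodes colocated
    per machine, how large a system can a fixed testbed exercise?"""
    emulated_machines = machines - real_nodes
    return real_nodes + emulated_machines * colocation_factor


# Illustrative numbers only: 10 physical machines, the NameNode plus
# one DataNode kept real, 100 emulated nodes per remaining machine.
print(cluster_scale(machines=10, real_nodes=2, colocation_factor=100))
# -> 802
```

The real nodes under inspection thus observe load comparable to an
802-node deployment while occupying only 10 machines, under the stated
assumptions.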

A downside of this methodology is that it may not discover scalability
problems that arise at the nodes being emulated.  To address this
issue, after stress-testing the part of the system under inspection
using the maximum amount of colocation for the emulated nodes, we
perform a new set of experiments in which a small subset of the
formerly emulated nodes also runs as real nodes, while the rest remain
emulated. This hybrid configuration makes it possible to identify
scalability problems at nodes that are not under inspection as well,
while maintaining a high degree of colocation. It is not a panacea,
however: for example, it still cannot detect performance issues that
only manifest when a large number of nodes outside the inspected part
perform some collective action (e.g., system-wide recovery).
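The hybrid configuration can be sketched as a simple node-assignment
step (a hypothetical illustration; the names and numbers are not
\sys's): the inspected nodes stay real, a small random subset of the
formerly emulated nodes is promoted to real, and everything else
remains emulated.

```python
import random


def hybrid_assignment(nodes, under_inspection, num_promoted, seed=0):
    """Hypothetical sketch of the follow-up experiments: keep the
    inspected nodes real, promote a small random subset of the
    formerly emulated nodes to real, and emulate the rest."""
    rng = random.Random(seed)  # fixed seed for reproducible experiments
    emulated_pool = [n for n in nodes if n not in under_inspection]
    promoted = set(rng.sample(emulated_pool, num_promoted))
    return {n: "real" if (n in under_inspection or n in promoted)
            else "emulated" for n in nodes}


nodes = ["master"] + [f"worker-{i}" for i in range(100)]
modes = hybrid_assignment(nodes, under_inspection={"master"},
                          num_promoted=5)
# 6 real nodes (master + 5 promoted workers), 95 emulated workers.
```

Because only a handful of nodes are promoted, the degree of colocation
(and hence the emulated system's scale) stays almost unchanged.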

\iffalse
\subsection{Predicting the behavior of emulated resources}

Emulation entails some loss of information. Although our emulated
resources use a lossless compression algorithm, so the data itself can
always be restored from its compressed form, we lose information about
the resources that would have been consumed by storing or sending the
original data. For example, storing only a few bytes instead of
1\,MB of data means that we can no longer directly measure the amount
of energy required to store the original data on disk or send it over
the network. Similarly, storing these few bytes on disk means that we
can no longer directly measure the disk bandwidth that storing the
original data would require.

Accurately predicting the performance of emulated resources is a non-goal for \sys.
Instead, \sys's goal is to identify scalability problems in the system by using a
combination of real and emulated nodes.
In cases where emulation causes loss of information about the behavior of some emulated
resource, one can fall back on the traditional approach: predicting the behavior of the
resource based on a model. Of course, the accuracy of the prediction depends on the
accuracy of the model. Some resources, such as energy consumption,
are quite challenging to model. For others, such as disk performance, there exist
models that can predict the resource's performance with reasonable accuracy~\cite{DiskSim}. Finally,
capacity-related resources are straightforward to model: one need only keep a record of
the original size of the data. Even so, we discourage relying on such models
to predict overall system behavior. For example, a model may accurately predict that a Java
program's memory usage will be smaller than its allocated heap size; however, as the
memory in use approaches the heap size, garbage collection will impose an increasingly
large overhead.
\fi 