\section{Testing for scalability: common practices}
\label{sec-motivation}

When faced with the challenge of running experiments on a system whose
scale vastly exceeds their infrastructure, researchers
typically resort to one of two options: they either run the system at
the largest scale they can afford and try to extrapolate their
results, or they explicitly forgo running certain components of the
system, substituting them with stubs that, ideally, maintain the
interactions of the original components with the rest of the system,
but are simpler and less resource-demanding to run.  We discuss both
options, and why they are not well-suited for performing
scalability tests on large-scale systems.

\subsection{Extrapolation}

A common approach to estimate the behavior of systems that are
too big to test is to run them at a small or medium scale and then to
extrapolate, based on those results, how they will behave at a large
scale. For example, if the CPU utilization of a bottleneck node is
10\% in a 100-node experiment, extrapolation would lead one to
estimate that the system will scale to about 1,000 nodes. While
attractive for its simplicity, this approach has several drawbacks
that make it inaccurate in practice.
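The arithmetic behind such an estimate can be made explicit. The sketch below (with the hypothetical numbers from the example above; the class and method names are ours, purely illustrative) contrasts the scale limit predicted under the linear-growth assumption with the limit that results if utilization actually grows quadratically:

```java
// Extrapolating a bottleneck node's CPU utilization to a maximum scale.
// Numbers are hypothetical, matching the 100-node / 10% example in the text.
public class Extrapolate {

    // Predicted maximum scale if utilization grows linearly with node count:
    // 100 nodes at 10% utilization -> 100 * (1 / 0.10) = 1,000 nodes.
    public static double linearLimit(double nodes, double utilization) {
        return nodes * (1.0 / utilization);
    }

    // Predicted maximum scale if utilization actually grows quadratically:
    // utilization scales with (n / nodes)^2, so 100% is reached already at
    // nodes * sqrt(1 / 0.10) ~= 316 nodes, far short of the linear estimate.
    public static double quadraticLimit(double nodes, double utilization) {
        return nodes * Math.sqrt(1.0 / utilization);
    }

    public static void main(String[] args) {
        System.out.printf("linear estimate:   %.0f nodes%n",
                          linearLimit(100, 0.10));
        System.out.printf("quadratic growth:  %.0f nodes%n",
                          quadraticLimit(100, 0.10));
    }
}
```

The gap between the two predictions (1,000 versus roughly 316 nodes) previews the first drawback discussed below: the estimate is only as good as the assumed growth model.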


First, extrapolation is based on the assumption that resource usage
grows linearly with the scale of the system. However, because of
design choices and implementation issues, this assumption is
frequently violated in practice. For example, HDFS uses an array to
maintain a sorted list of files within a directory. Using an array
causes insertion to be an O($N$) operation, where $N$ is the number of
files in the directory. As more files are added to the directory,
insertion becomes increasingly expensive: indeed, the cost of adding
$N$ files to a directory is O($N^2$). Note that a more efficient
directory implementation (e.g.~a sorted tree map) does not restore
linear growth in resource usage, but simply reduces the growth rate to
O($N \cdot \log N$). In general, once resource usage grows non-linearly
with scale, accurate extrapolation becomes much harder, especially
because, as we have seen, the system's performance may depend on the
details of the implementation.
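The directory example can be made concrete by counting the element shifts performed when keeping a file list sorted in an array. The following sketch (our own illustration, not HDFS's actual directory code) inserts $N$ keys in the worst order and counts the shifts, which total $N(N-1)/2$, i.e.~O($N^2$):

```java
import java.util.ArrayList;
import java.util.Collections;

// Counts the element shifts needed to keep a directory's file list sorted
// in an array while inserting N files one by one (an illustrative sketch,
// not HDFS's actual implementation).
public class DirectoryInsertCost {

    // Insert keys n-1 down to 0 into a sorted ArrayList. Each insertion at
    // index 0 shifts every existing element, so the total number of shifts
    // is 0 + 1 + ... + (n-1) = n*(n-1)/2, i.e. O(n^2).
    public static long worstCaseShifts(int n) {
        ArrayList<Integer> dir = new ArrayList<>();
        long shifts = 0;
        for (int k = n - 1; k >= 0; k--) {
            int pos = Collections.binarySearch(dir, k);
            int idx = -(pos + 1);        // insertion point (key not present)
            shifts += dir.size() - idx;  // elements moved to make room
            dir.add(idx, k);
        }
        return shifts;
    }

    public static void main(String[] args) {
        System.out.println(worstCaseShifts(1000));  // 499500 = 1000*999/2
    }
}
```

Replacing the array with a balanced tree would eliminate the shifts but, as noted above, still leaves the total insertion cost super-linear, at O($N \cdot \log N$).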

A second, more subtle drawback of extrapolation is that at small
scales some important behaviors can easily escape notice. Consider
again the above example of a workload of O($N^2$) complexity: as long
as the value of $N$ is low, the potential scalability bottleneck
remains largely inconspicuous. The problem is exacerbated by the fact
that measuring resource utilization is inherently noisy. For example,
observing that a Java process uses 100\,MB of memory does not, by
itself, indicate how much memory is being used by the data structures
of that process. Answering that question requires accurate
information about the amount of memory used internally by the JVM, the
amount of non-garbage-collected memory, etc. The uncertainty added by
measurement noise is significantly more prominent at lower scales,
where resource utilization is low.
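To see how easily a quadratic term hides at small scale, consider a hypothetical cost model of the form $\mathit{base} + \mathit{linear} \cdot N + \mathit{quad} \cdot N^2$ (all constants made up for illustration). The sketch below computes the fraction of total cost attributable to the quadratic term at two scales:

```java
// Hypothetical cost model illustrating how a quadratic term hides at small
// scale: total(N) = base + linear*N + quad*N^2. All constants are made up
// for illustration; they do not come from any measured system.
public class HiddenBottleneck {

    // Fraction of the total cost due to the N^2 term at scale n.
    public static double quadraticShare(double n) {
        double base = 50.0, linear = 1.0, quad = 0.0001;
        double total = base + linear * n + quad * n * n;
        return (quad * n * n) / total;
    }

    public static void main(String[] args) {
        // At N=100 the quadratic term is well under 1% of the total cost,
        // easily lost in measurement noise; at N=100,000 it dominates.
        System.out.printf("N=100:     %.1f%% of cost%n",
                          100 * quadraticShare(100));
        System.out.printf("N=100000:  %.1f%% of cost%n",
                          100 * quadraticShare(100_000));
    }
}
```

At the small scale the quadratic contribution is smaller than typical measurement noise, so an experimenter has no signal that the bottleneck exists at all.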

The final drawback, which is closely related to the previous one, is
that extrapolation cannot be used to predict behaviors that are only
triggered when some resource utilization reaches a certain threshold.  For
example, HDFS has a blocking disk-scanning procedure that becomes
increasingly expensive as the system grows in size. Beyond a certain
size, running the procedure causes the corresponding DataNode to start
missing heartbeats, which in turn can cause it to be evicted and force
all its data to be re-replicated, with serious performance repercussions.
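The essential feature of such behaviors is a discontinuity: below a threshold the system is healthy, above it a qualitatively different failure mode appears. The toy model below illustrates this (the constants and method names are invented for illustration and are not HDFS's actual parameters):

```java
// Illustrative threshold model, not HDFS's actual parameters: a blocking
// disk scan whose duration grows with the number of stored blocks
// eventually exceeds the heartbeat deadline, at which point the DataNode
// is declared dead and its data is re-replicated.
public class ThresholdModel {

    static final double HEARTBEAT_DEADLINE_SECONDS = 30.0;

    // Assume (hypothetically) the scan takes 1 ms per stored block.
    public static double scanSeconds(long blocks) {
        return blocks * 0.001;
    }

    public static boolean missesHeartbeat(long blocks) {
        return scanSeconds(blocks) > HEARTBEAT_DEADLINE_SECONDS;
    }

    public static void main(String[] args) {
        // Nothing measured at 10,000 blocks hints at the cliff beyond
        // ~30,000 blocks: the transition is discontinuous, not gradual.
        for (long blocks : new long[] {10_000, 50_000, 100_000}) {
            System.out.println(blocks + " blocks -> missed heartbeat: "
                               + missesHeartbeat(blocks));
        }
    }
}
```

No curve fitted to measurements taken below the threshold can predict the cliff, because the behavior simply does not occur at those scales.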

\subsection{Using stubs}

Another technique for predicting the performance of a system
too big to test is to emulate, rather than actually run, some of its
components. The emulated components are implemented as stubs, running
either locally or remotely. For this approach to be successful, the
stubs should be simple to implement and require much more modest
resources than the original components they stand in for; at the same
time, they should be able to correctly exercise the rest of the
system, allowing it to be stress-tested at scale using relatively
modest resources.

While attractive in theory, the promises of emulation are often elusive
in practice: accurately reproducing the behavior of a
non-trivial real system component is hard, and in the process the stub
component can end up being almost as complex as the real one,
defeating its purpose.

We faced this challenge first-hand when trying to test the scalability
of the HDFS NameNode using stub DataNodes. In particular, our goal was to create a
large number of stub DataNodes and use them to stress-test the
NameNode. Our first attempt did not involve the DataNodes in the
protocol at all; to create files and add blocks to them, clients
simply invoked {\tt createFile} and {\tt addBlock} at the
NameNode. However, this approach failed: the NameNode expects
the DataNodes to confirm the receipt of each block. We therefore
modified our clients to notify the stub DataNodes, so they could in
turn appropriately notify the NameNode. This did not work, either: the
NameNode, we discovered, also expects each DataNode to periodically
report the list of blocks it stores on disk. After several frustrating
iterations, we eventually came to realize that
emulating the correct behavior of DataNodes would have required us to
reimplement the full HDFS protocol, including all inter-DataNode
communication, local bookkeeping, etc.
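The shape of our second and third attempts can be sketched as follows (this is our own simplified illustration; the method names are invented and real DataNodes speak the full HDFS RPC protocol):

```java
import java.util.ArrayList;
import java.util.List;

// A minimal stub DataNode along the lines of our second and third attempts.
// Illustrative only: method names are invented, and a real DataNode
// implements the full HDFS protocol, not this fragment of it.
public class StubDataNode {

    private final List<Long> storedBlocks = new ArrayList<>();

    // Attempt 2: the client notifies the stub of each block it "wrote",
    // so the stub can confirm receipt to the NameNode.
    public void receiveBlock(long blockId) {
        storedBlocks.add(blockId);
    }

    // Attempt 3: the NameNode also expects periodic full block reports
    // listing every block the DataNode claims to store on disk.
    public List<Long> blockReport() {
        return new ArrayList<>(storedBlocks);
    }

    public static void main(String[] args) {
        StubDataNode dn = new StubDataNode();
        dn.receiveBlock(1L);
        dn.receiveBlock(2L);
        System.out.println(dn.blockReport());  // [1, 2]
        // Still missing: heartbeats, inter-DataNode replication, pipeline
        // acknowledgments, ... -- each iteration pulled in more protocol.
    }
}
```

Each iteration added another obligation, and the trailing comment in the sketch is the crux: the stub asymptotically approaches a reimplementation of the very component it was meant to replace.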

% The stub approach has been used by the HDFS developers to test the
% scalability of the NameNode: they completely eliminate clients and
% DataNodes and start a local benchmark on the NameNode.  However,
% this approach ignores several implementation details, such as the
% overhead of the RPC library used for communication. They explicitly
% admit that their evaluation .


