\section{Case studies}
\label{sec-casestudies}

\sys allows us to evaluate storage systems at an unprecedented
scale. This section presents our experience applying \sys to evaluate
three real-world storage systems: the Hadoop Distributed File System
(HDFS)~\cite{Shvachko10HDFS}, the HBase key-value store~\cite{hbase}, and
the Cassandra key-value store~\cite{lakshman09cassandra}. We chose these systems for
several reasons. First, not only are they popular in their own right,
especially among researchers, but their architectures are representative
of a broad range of existing large-scale storage systems. Second,
all systems are open-source, which allows us to perform code
modifications where necessary (i.e.~for in-memory compression).
Finally, all systems have a large development community that has
produced a mature and stable codebase. Despite the maturity of the
code, we identified several performance issues that arise as the scale
of the system increases. Our ability to diagnose these issues stemmed
not from any deeper prior understanding of these systems, but simply
from the ability to evaluate them at an unprecedented scale.

In our evaluation, we run HDFS and HBase at a scale about 100 times
larger than the size of the infrastructure available to us. For
example, one of our experiments uses 96 machines to run an HDFS
cluster with 9600 DataNodes. Our experiments identify a number of
performance problems that arise at such large scales.  Some of these
problems pertain to low-level implementation details, while others are
due to high-level design choices. For example, we find that storing
many files in a single HDFS directory causes file creations in that
directory to become increasingly slow, and that keeping less than
$\frac{3}{4}$ of the region data in the memory of an HBase region
server causes its performance to degrade precipitously. Using \sys, we
were able to identify and fix many of these problems, improving the
aggregate HDFS throughput by an order of magnitude as a result.
Our experience with Cassandra is different: its scalability is so
limited that its problems can be identified even without the help
of \sys.


Unfortunately, we have not yet been able to validate our results by
running the actual systems at a large scale: after all, our lack of
access to such plentiful resources is the very reason that motivated
this work in the first place.  The largest validation we have
performed involved running HDFS on 1,500 nodes of the Stampede
cluster at the Texas Advanced Computing Center~\cite{TACC}: while our results
confirm the prediction of \sys for that configuration, the scale of
the system is still too small to exhibit even the first of the
scalability issues identified by \sys.
Our current confidence in
\sys's effectiveness stems from two sources. First, for each problem
that \sys identified, we traced its cause in the source code and,
where possible, fixed it and ran the modified system to confirm that
its performance improved.  Second, some of our findings have been
confirmed by engineers at Facebook, among the few who have access to
a large-scale deployment of HDFS~\cite{FBcommunication}.


Most of our experiments were performed on the Stampede cluster at
TACC, whose machines have 16 cores, 32\,GB of memory, but only 80\,GB
of local disk storage.
Since our access to TACC was limited, we ran some of our experiments
on three local machines, each with 16 cores, 64\,GB of memory, and ten
1\,TB disks. These machines were used to test the capacity limitations
of individual storage nodes.

\input{exalt_eval_HDFS}

\input{exalt_eval_HBase}

\input{exalt_eval_Cassandra}

%\input{Eval-Discussion}
