\subsubsection{Scalability}
\label{sec:eval-scalability}
\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/salus_ec2_throughput_write.pdf}}
\caption{\label{graph:scalability} Write throughput per server with nine servers and 108 servers (compaction disabled).}
\end{figure}

In this section we evaluate the degree to which the mechanisms that
\sys uses to achieve its stronger robustness guarantees impact its
scalability. Growing the testbed used in our previous experiments by
an order of magnitude, we run \sys and HBase on Amazon
EC2~\cite{AmazonEC2} with up to 108 servers. While these experiments
fall short of our goal of showing conclusively that \sys can scale to
thousands of servers, we believe they offer insight into the relevant
trends.


For our testbed we use EC2's extra large instances, with DataNodes and
region servers configured to use 3GB of memory each. Preliminary
tests measuring the characteristics of our testbed show that each
EC2 instance can reach a maximum network and disk bandwidth of about
100MB/s. Network bandwidth is therefore not a bottleneck, so we do
not expect \sys to outperform HBase in this setting.

Given our limited resources, we focus on measuring the throughput of
sequential and random writes: we believe this is reasonable because
the only additional overhead for reads is the end-to-end checks
performed by the clients, which are easy to make scalable. We run
each experiment with an equal number of clients and servers; each
experiment lasts 11 minutes, and we report the throughput of the last
10 minutes.
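To make the measurement procedure concrete, the following sketch shows one way the reported steady-state throughput could be computed from a run's log; the log format, constants, and function names here are our assumptions for illustration, not the actual tooling used in these experiments.

```python
# Hypothetical post-processing sketch: compute the throughput for an
# 11-minute run by discarding the first minute as warm-up and
# averaging over the remaining 10 minutes.

WARMUP_SECS = 60      # first minute discarded as warm-up
RUN_SECS = 11 * 60    # total run length: 11 minutes

def steady_state_throughput(samples):
    """samples: list of (timestamp_secs, bytes_written) pairs, with
    timestamps relative to the start of the run. Returns bytes/sec
    averaged over the last 10 minutes."""
    steady = [b for t, b in samples if WARMUP_SECS <= t < RUN_SECS]
    return sum(steady) / (RUN_SECS - WARMUP_SECS)

# Example: a flat 50 MB/s workload logged once per second.
samples = [(t, 50 * 1024 * 1024) for t in range(RUN_SECS)]
print(steady_state_throughput(samples) / (1024 * 1024))  # 50.0 MB/s
```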

Because we do not have full control over EC2's internal architecture,
and because one user's virtual machines in EC2 may share resources
such as disks and networks with other users, these experiments have
limitations: the performance of EC2's instances fluctuates noticeably,
and it becomes hard even to determine what the stable throughput for a
given experimental configuration is. Further, while in most cases
HBase performs better than \sys, as expected, some experiments show
\sys with a higher throughput than HBase, possibly because the network
is heavily used and pipelined commit helps \sys handle high network
latencies more efficiently. To be conservative, we report only results
for which HBase performs better than \sys.

Figure~\ref{graph:scalability} shows the per-server throughput of the
sequential and random write workloads in configurations with nine and
with 108 servers. For the sequential write workload, the throughput
per server remains almost unchanged in both HBase and \sys as we move
from nine to 108 servers, meaning that for this workload both systems
scale perfectly up to 108 servers. For the random write workload,
however, both HBase and \sys experience a significant drop in
throughput per server as the number of servers grows. The culprit is
the high number of small I/O operations that this workload
requires. As the number of servers increases, the number of requests
randomly assigned to each server in a sub-batch decreases, even as
increasing the number of clients causes each server to process more
sub-batches. The net result is that, as the number of servers
increases, each server performs an ever larger number of ever smaller
I/O operations---which of course hurts performance. Note, however, that
the extent of \sys' slowdown with respect to HBase is virtually the same (about 28\%) in
both the 9-server and the 108-server experiments, meaning that \sys'
overhead does not grow with the scale of the system.
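The shrinking-sub-batch effect can be illustrated with a back-of-the-envelope sketch; the batch size below is a hypothetical value chosen for illustration, not one measured in our experiments.

```python
BATCH = 1000  # requests per client write batch (hypothetical value)

def per_server_io(n_servers):
    """With n_servers clients each spreading a BATCH-request batch
    uniformly over n_servers servers, a server receives about
    BATCH / n_servers requests per sub-batch, and one sub-batch from
    each of the n_servers concurrent clients."""
    per_subbatch = BATCH / n_servers   # requests per I/O operation
    subbatches = n_servers             # I/O operations per round
    return per_subbatch, subbatches

for n in (9, 108):
    size, count = per_server_io(n)
    print(f"{n:3d} servers: {count} I/Os of ~{size:.1f} requests each")
# Per-server load (size * count) is BATCH requests in both cases;
# only the granularity changes: more, smaller I/Os as n grows.
```

The total per-server work stays constant as the system grows, but it is split across ever more, ever smaller I/O operations, which is what degrades random-write throughput for both systems.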

