
\subsubsection{Aggregate throughput and network bandwidth}

\label{section:aggregate}


\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/salus_max_throughput.pdf}}
\caption{\label{graph:maxthroughput} {Aggregate throughput on small nodes. HBase-N and Salus-N disable compactions.}}
\end{figure}

We then evaluate the aggregate throughput and network usage of
\foosys. The servers are saturated in these experiments, so pipelined
execution does not improve \sys' throughput at all. On the other hand,
we find that active replication of \rs{s}, introduced to improve robustness,
can reduce network bandwidth usage and significantly improve performance
when the total disk bandwidth exceeds the aggregate network bandwidth.

Figure~\ref{graph:maxthroughput} reports experiments on our
small-server testbed with nine nodes acting as combined \rs and \Dn{s};
we increase the number of clients until aggregate throughput stops
increasing.

For sequential reads, both systems achieve about
110MB/s. Pipelining reads in \foosys does not improve aggregate
throughput, since HBase also uses multiple clients to parallelize
network and disk operations. For random reads, disk seek and rotation
time are the bottleneck, and both systems achieve only about 3MB/s.
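
As a back-of-the-envelope plausibility check, about 3MB/s is roughly what nine seek-bound disks can deliver. The request size and positioning time below are illustrative assumptions, not measured values:

```python
# Rough estimate of seek-bound random-read throughput.
# The 4 KB request size and 10 ms average seek+rotation time are
# illustrative assumptions, not measured configuration.
SEEK_MS = 10.0    # assumed average positioning time per random I/O
BLOCK_KB = 4.0    # assumed read size
DISKS = 9         # one disk per small-server node

iops_per_disk = 1000.0 / SEEK_MS                       # ~100 I/Os per second
aggregate_mb_s = DISKS * iops_per_disk * BLOCK_KB / 1024.0
# ~3.5 MB/s, the same order as the measured ~3 MB/s
```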

With compaction disabled, the relative slowdowns of \sys versus HBase
for sequential and random writes are 19.4\% and 16.4\%, respectively;
with compaction enabled, they shrink significantly (to 12.8\% and 11.1\%),
since compaction adds disk operations to both HBase and \sys. \sys
reduces network bandwidth usage at the expense of higher disk and CPU
usage, but this trade-off does not pay off here because disk and
network bandwidth are comparable. Even so, we find this an acceptable
price for the stronger guarantees provided by \sys.
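
For concreteness, relative slowdown here is $(T_{HBase} - T_{Salus})/T_{HBase}$. A minimal sketch, using hypothetical throughput values (not taken from the figure) chosen only to reproduce the 19.4\% case:

```python
def relative_slowdown(baseline_mb_s, measured_mb_s):
    """Fractional throughput loss relative to the baseline system."""
    return (baseline_mb_s - measured_mb_s) / baseline_mb_s

# Hypothetical values for illustration: an HBase baseline of 62 MB/s
# against a Salus rate of 50 MB/s gives a ~19.4% relative slowdown.
slowdown = relative_slowdown(62.0, 50.0)
```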



\begin{table}[t]
\begin{footnotesize}
\begin{center}
  \begin{tabular}{| >{\centering\arraybackslash}p{50mm} | c | c |}\hline
   			&  HBase & Salus \\ \hline
   Throughput (MB/s) 	& 27 	 & 47	 \\ \hline
   Network consumption (network bytes per byte written by the client)
		 	& 5.3 	 & 2.4 	 \\ \hline
  \end{tabular}
\end{center}


\caption{\label{graph:throughputandbandwidth} {Aggregate sequential write
throughput and network bandwidth usage  with fewer
    server machines but more disks per machine.}}
\end{footnotesize}
\end{table}



Table~\ref{graph:throughputandbandwidth} shows what happens when we
run the sequential write experiment using the three 10-disk storage
nodes as servers. Here, the tables are
turned and \foosys outperforms HBase (47MB/s versus 27MB/s). Our
profiling shows that in both experiments the bottleneck is the
network topology, which constrains the aggregate bandwidth to 1.2Gbit/s.


Table~\ref{graph:throughputandbandwidth} also compares the network
bandwidth usage of HBase and \foosys under the sequential write
workload. HBase sends more than five bytes over the network for each
byte written by the client (two network transfers each for logging and
flushing, and fewer than two for compaction, since some blocks are
overwritten). \foosys uses only two bytes per byte written, to forward
the request to the replicas; logging, flushing, and compaction are
performed locally. The actual number is slightly higher than two
because of \foosys' additional metadata. \sys thus halves network
bandwidth usage compared to HBase, which explains why its throughput
is 74\% higher than that of HBase when network bandwidth is limited.
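
This accounting can be turned into a quick network-bound throughput estimate. A sketch, assuming the 1.2Gbit/s aggregate limit quoted above is the only constraint (the function and constant names are ours; the amplification factors are the table's measured values):

```python
# When the network is the bottleneck, client-visible write throughput
# is bounded by the aggregate network budget divided by the per-byte
# network amplification.
AGGREGATE_NET_MB_S = 1.2e9 / 8 / 1e6   # 1.2 Gbit/s topology limit = 150 MB/s

# Network bytes sent per byte written by the client (measured values).
AMPLIFICATION = {"HBase": 5.3, "Salus": 2.4}

def network_bound_throughput(system):
    """Upper bound on aggregate write throughput in MB/s."""
    return AGGREGATE_NET_MB_S / AMPLIFICATION[system]

# HBase's bound (~28.3 MB/s) sits right at its measured 27 MB/s, i.e.
# it runs at the network ceiling; Salus's bound (~62.5 MB/s) leaves
# headroom, consistent with its measured 47 MB/s also being limited by
# other overheads.
```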

Note that we do not measure the aggregate throughput of EBS because we
do not know its internal architecture and thus do not know how to
saturate it.





