\subsection{Performance}
\label{section:performance}

%\subsubsection{Workload and Configuration}
\sys' architecture can in principle both improve and degrade
throughput and latency: on the one hand, pipelined
commit allows multiple batches to be processed in parallel, and active
storage reduces network bandwidth consumption. On the other hand,
end-to-end checks introduce checksum computations on both clients and
servers; pipelined commit requires additional network messages for
preparing and committing; and active storage requires additional
computation and messages for certificate generation.
Compared to the cost of disk accesses for data, however, we expect
these overheads to be modest.
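As a rough illustration of the per-block end-to-end check, the sketch
below computes a checksum on the write path and re-verifies it on the
read path. This is not the system's actual code; the choice of CRC32
and all names here are illustrative assumptions.

```python
import zlib

BLOCK_SIZE = 4096  # the 4KB block size used in our experiments

def attach_checksum(block: bytes) -> tuple[bytes, int]:
    # Client computes a checksum before handing the block to storage.
    return block, zlib.crc32(block)

def verify_checksum(block: bytes, checksum: int) -> bool:
    # Reader recomputes the checksum and compares it end to end.
    return zlib.crc32(block) == checksum

block = bytes(BLOCK_SIZE)
blk, c = attach_checksum(block)
ok = verify_checksum(blk, c)            # intact block passes
corrupted = b"\x01" + blk[1:]
bad = verify_checksum(corrupted, c)     # a flipped byte is caught
```

The cost the text refers to is exactly these extra checksum
computations on every block, on both the client and server sides.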

This section quantifies these tradeoffs using benchmarks for both sequential and random
access patterns and for both
reads and writes.  We compare \foosys' single-client
throughput and latency, aggregate throughput, and network usage to
those of HBase. We also include measured numbers from Amazon EBS to
put \sys' performance in perspective.

\foosys targets clusters of storage nodes with 10 or more disks each. In
such an environment, we expect a node's aggregate disk bandwidth to be
much larger than its network bandwidth. Unfortunately, only three of
our machines (\emph{storage nodes}) match this description; the rest
(\emph{small nodes}) each have a single disk and a single active
1Gbit/s network connection.

Most of our experiments run on a 15-node cluster of \emph{small nodes}
equipped with a 4-core Intel Xeon X3220 2.40GHz CPU, 3GB of memory,
and one Western Digital WD2502ABYS 250GB hard drive. We use nine
small nodes as \rs{s} and \Dn{s}, another small node as the Master, ZooKeeper, and
NameNode, and up to four small nodes as clients. We set the
Java heap size to 2GB for the \rs and 1GB for the \Dn.

To understand system behavior when disk bandwidth is more plentiful
than network bandwidth, we run several experiments using the three storage
nodes, each equipped with a 16-core AMD Opteron 4282 3.0GHz CPU, 64GB of
memory, and 10 Western Digital WD1003FBYX 1TB hard drives. These storage nodes
have 1Gbit/s networks, but the network topology constrains them to
share an aggregate bandwidth of about 1.2Gbit/s.

To measure the scalability of \sys with a large number of machines, we run
several experiments on Amazon EC2~\cite{AmazonEC2}. The detailed configuration is shown
in the ``Scalability'' section. %Section~\ref{sec:eval-scalability}.




For all experiments, we use a small 4KB block size, which we expect to
magnify \foosys' overheads compared to larger block sizes. For read
workloads, each client formats the volume by writing all blocks and
then forcing a flush and compaction before the start of the
experiments.  For write workloads, since compaction introduces
significant overhead in both HBase and \sys and the compaction
interval is tunable, we proceed in three steps: we first run these
experiments with compaction disabled to measure the maximum
throughput; we then run HBase with its default compaction strategy and
measure how many bytes it reads per compaction; finally, we tune
\sys' compaction interval so that \sys compacts the same amount of
data as HBase.
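The final calibration step above reduces to simple arithmetic: given
the bytes HBase touches per compaction and the system's sustained
write rate, pick the interval at which the same number of bytes
accumulates. A minimal sketch, with hypothetical measurement values:

```python
def matched_compaction_interval(hbase_bytes_per_compaction: float,
                                write_rate_bytes_per_sec: float) -> float:
    """Interval (seconds) at which the system accumulates the same
    number of bytes between compactions as HBase read per compaction
    under its default strategy."""
    return hbase_bytes_per_compaction / write_rate_bytes_per_sec

# Hypothetical numbers: if HBase reads 256MB per compaction and the
# system ingests writes at 32MB/s, compacting every 8 seconds touches
# the same amount of data per compaction.
interval = matched_compaction_interval(256 * 2**20, 32 * 2**20)
print(interval)  # 8.0
```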



\input{salus_eval_perf_single}
\input{salus_eval_perf_aggregate}
\input{salus_eval_perf_scalability}
\input{salus_eval_perf_pipe}
\input{salus_eval_perf_active}
