

\subsubsection{Single client throughput and latency}
\label{section:single-client}

% $\bullet$ Does foosys get better/comparable single-client throughput? What is the overhead on latency

% $\bullet$ Start a cluster with 9 servers and run different workloads with 1 client

% $\bullet$ Figure~\ref{graph:throughput} shows the single client throughput.
% We get better seq read throughput because we do pipeline read and we can read from multiple servers concurrently.
% We get better seq write throughput because we do pipeline write.
% We get comparable random read throughput since the disks become the bottleneck.
% We get lower random write because of compaction overhead is higher.

% $\bullet$ Figure~\ref{graph:latency} shows the single client latency.
% Our overhead comes from 1) tree computation 2) ask the previous one for commit record 3) generate and wait
% for certificates 4) wait for 3 replies.

We first evaluate the single-client throughput and latency of
\foosys. Since a single client usually cannot saturate the system,
executing requests in a pipeline benefits \sys'
throughput. However, \sys' additional overheads of checksum
computation and message transfer increase its latency.

We use the nine small nodes as servers and start a single client
to issue sequential and random reads and writes to the system. For the
throughput experiment, the client issues requests as fast as it can
and performs batching to maximize throughput. In all experiments, we
use a batch size of 250 requests, so each batch accesses about
1MB.
For the latency experiment, the client issues a single request, waits for
it to return, and then waits for 10ms before issuing the next request.
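The latency measurement loop described above can be sketched as follows; \texttt{issue\_request} stands in for a hypothetical blocking client call and is an assumption for illustration, not part of \sys' actual API:

```python
import time

PAUSE_S = 0.010  # 10 ms pause between requests, as in the experiment

def run_latency_client(issue_request, num_requests):
    """Closed-loop client: issue one request, block until it returns,
    record its latency, then pause before issuing the next request."""
    latencies = []
    for _ in range(num_requests):
        start = time.monotonic()
        issue_request()  # blocks until the reply arrives
        latencies.append(time.monotonic() - start)
        time.sleep(PAUSE_S)
    return latencies
```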

Figure~\ref{graph:throughput} shows the single client throughput.  For
sequential reads, \sys outperforms HBase by a factor of
2.5, for two reasons: \sys' three \rs{s}
increase read parallelism, and reads are pipelined so that
multiple batches are outstanding; the HBase client instead issues only one
batch of requests at a time.  For random reads, disk seeks are the
bottleneck, and HBase and \foosys have comparable performance.

For sequential and random writes, \sys is slower than HBase by 3.5\%
to 22.8\%, a cost of its stronger guarantees.  For \sys,
pipelined execution does not help write throughput as much as it helps
sequential reads, since write operations must be forwarded to all
three nodes and, unlike reads, cannot be executed in parallel.
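The contrast between the two read paths can be illustrated with a small sketch; \texttt{fetch\_batch}, the window size, and the threading model are illustrative assumptions, not \sys' actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def blocking_reads(fetch_batch, batches):
    """HBase-style client: one outstanding batch at a time; the next
    batch is not issued until the previous reply returns."""
    return [fetch_batch(b) for b in batches]

def pipelined_reads(fetch_batch, batches, window=3):
    """Pipelined client: keep several batches in flight so network
    transfer and disk reads on different servers can overlap."""
    with ThreadPoolExecutor(max_workers=window) as pool:
        return list(pool.map(fetch_batch, batches))
```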


As a sanity check, Figure~\ref{graph:throughput} also shows the
performance we measured from a small compute instance accessing
Amazon's EBS. Because the EBS hardware differs from our testbed
hardware, we can only draw limited conclusions, but we note that the
\foosys prototype achieves a respectable fraction of EBS's sequential
read and write bandwidth, modestly outperforms EBS's
random read throughput (probably because it utilizes more disk
arms), and substantially outperforms EBS's random write
throughput (probably because it transforms random writes into sequential
ones).




% HBase can achieve 27MB/s.
% The major problem of HBase is that it only provides a blocking interface, so the client cannot issue the next
% read until it gets the previous reply. The other overhead comes from \Dn metadata processing, checksum,
% and HBase serialization/deserialization, etc.
% \foosys-Verify can achieve 47MB/s since it uses pipeline read to parallelize network and disk operations.
% \foosys-Active can achieve 66MB/s since it can read from 3 replicas concurrently.
% For sequential write, HBase can achieve 17MB/s. \foosys-Verify can achieve 23MB/s by pipeline write.

% \foosys-Active achieves 21MB/s because of its overhead of unanimous agreement.
% For random read, all three systems can only achieve about 2-3 MB/s and that is bottlenecked by the disk speed.
% For random write, HBase can achieve 18MB/s, which is even higher than that of sequential write. This is because
% random write workload is evenly distributed to all servers. \foosys-Verify does not implement the group commit
% protocol, so it has a quite high overhead for each request to check remotely whether the previous request is complete.
% It can achieve 9MB/s. \foosys-Active improves it to 15MB/s as a result of the group commit protocol, but compared
% to HBase, \foosys-Active consumes more memory, so it flushes and compacts more often, and as a result it has a lower throughput.

Figure~\ref{graph:latency} shows the 90th-percentile latency for
random reads and writes. In both cases, \foosys' latency is within two
or three milliseconds of HBase's, which is reasonable considering
\sys' additional work to perform Merkle tree calculations, certificate
generation, and network transfers.  Note that,
in the random write latency experiment, the HBase \Dn does
not call \emph{sync} when performing disk writes, which is why its
write latency is low.  This may be a reasonable design decision when
the probability of three simultaneous crashes is
small~\cite{Liskov91Harp}.  In this experiment, we also show what
happens when this call is added to both HBase and \sys: calling
\emph{sync} adds more than 10ms of latency to both. For consistency, we do not
call \emph{sync} in the other throughput experiments.




Again, as a sanity check, we note that \foosys (and HBase) are
reasonably competitive with EBS (though we emphasize again that EBS's
underlying hardware is not known to us, so not too much should be read
into this comparison).

Overall, these results show that despite the extra computation and
message transfers needed to achieve stronger
guarantees, \sys' single-client throughput and
latency are comparable to those of HBase,
because the additional processing \sys requires adds relatively
little to the time required to complete disk operations. In an
environment in which computational cycles are plentiful, trading
processing for improved reliability, as \sys does, appears to be a reasonable trade-off.

\begin{figure}[t]
\centerline{\includegraphics[angle=0,
  width=1\textwidth]{graphs/salus_throughput.pdf}}
\caption{\label{graph:throughput} {Single client throughput on small nodes. HBase-N and Salus-N disable compactions. EBS's numbers are measured on different hardware and are included for reference.}}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/salus_latency90.pdf}}
\caption{\label{graph:latency} {Single client latency on small nodes. HBase-S and Salus-S enable sync. EBS's numbers are measured on different hardware and are included for reference.}}
\end{figure}
