\subsubsection{Pipeline commit}
\label{sec:eval-barrier}
\begin{figure}[t]
\centerline{\includegraphics[angle=0, width=1\textwidth]{graphs/salus_batchsize_throughput.pdf}}
\caption{\label{graph:barriercommit} Single client sequential write throughput as the
    frequency of barriers varies.}
\end{figure}

\foosys achieves increased parallelism by
pipelining \pput{s} across barrier operations---\foosys'
\pput{s} always commit in the order they are issued, so the barriers'
constraints are satisfied without stalling the pipeline.
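The mechanism above can be sketched in a few lines: writes are appended to an in-order commit queue, so a barrier's ordering constraint already holds and the barrier call can return immediately. This is a minimal illustrative sketch, not \foosys' actual implementation; the class and method names (\texttt{PipelinedLog}, \texttt{put}, \texttt{barrier}) are our own.

```python
import threading
import queue

class PipelinedLog:
    """Hypothetical sketch: commits always happen in issue order,
    so barrier() need not stall waiting for outstanding puts."""

    def __init__(self):
        self._queue = queue.Queue()
        self.committed = []  # commit order (always equals issue order)
        self._worker = threading.Thread(target=self._commit_loop, daemon=True)
        self._worker.start()

    def _commit_loop(self):
        while True:
            item = self._queue.get()
            if item is None:          # shutdown sentinel
                break
            self.committed.append(item)  # commit strictly in issue order

    def put(self, data):
        self._queue.put(data)         # returns immediately: pipelined

    def barrier(self):
        # Because issue order == commit order, nothing issued after the
        # barrier can commit before anything issued before it, so there
        # is no need to block here.
        pass

    def close(self):
        self._queue.put(None)
        self._worker.join()

log = PipelinedLog()
for i in range(100):
    log.put(i)
    if i % 8 == 7:
        log.barrier()                 # never stalls the pipeline
log.close()
assert log.committed == list(range(100))
```

A blocking design would instead drain the queue inside \texttt{barrier()}, which is where HBase's throughput loss in Figure~\ref{graph:barriercommit} comes from.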
Figure~\ref{graph:barriercommit} compares HBase and \foosys as the number of
operations between barriers (the batch size) varies.
\foosys' throughput holds steady at 18 MB/s because its pipelined commit never
stalls at a barrier, whereas HBase's throughput degrades as barriers become
more frequent: HBase achieves 3 MB/s with a batch size of one and 14 MB/s
with a batch size of 32.
