
\subsection{Pipelined commit}
\label{sec:pipeline}

The goal of the pipelined commit protocol is to allow clients to
issue requests to multiple regions concurrently, while preserving the ordering specified
by the client (\oc). In the presence of even simple crash failures, however,
enforcing the \oc property can be challenging.


Consider, for example, a client that, after mounting a volume $V$ that
spans regions 1 and 2, issues a \pput $u_1$ for a block mapped
to region 1 and then, without waiting for the \pput to complete,
issues a barrier \pput $u_2$ for a block mapped to region 2. Untimely
crashes of the client and of the region server
for region 1 may lead to $u_1$ being lost even as
$u_2$ commits.\footnote{For simplicity, in this example and throughout
  this section we consider a single logical region server to be at
  work in each region. In practice, in \sys this abstraction is
  implemented by a \rrs.}  Volume $V$ would now violate standard disk
semantics; further, $V$ would be left in an invalid state that can
cause severe data loss~\cite{prabhakaran05iron,Chidambaram12NoFS}.

A simple way to avoid such inconsistencies would be to allow clients
to issue one request (or one batch of requests) at a time, but, as we
show in Section~\ref{sec:eval-barrier}, performance would suffer
significantly. Instead, we would like to achieve the good performance
that comes with issuing multiple outstanding requests, without
compromising the \oc property.

The basic insight of \sys' solution is that processing data out of
order, whether because of parallelism or failures, is acceptable as
long as clients can only observe the data in the correct order: even
if out-of-order data is made persistent on disk, such
prematurely-persisted data can be garbage-collected without clients
ever noticing. To ensure that clients observe data only in the correct
order, \sys once again applies the idea of separating data from
metadata: it parallelizes the bulk of the processing required to
handle each request (such as cryptographic checks and disk writes),
while ensuring that requests commit in order by transferring a small
amount of metadata sequentially.

\sys ensures \oc by exploiting the sequence number that clients assign
to each request. \Rs{s} use these sequence numbers to guarantee that a
request does not commit (become visible to clients) until all
preceding requests are persistent on disk, and thus themselves
guaranteed to eventually commit. Similarly, during recovery, these
sequence numbers ensure that a consistent prefix of the issued
requests is recovered (Section~\ref{sec:recovery}).
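
As a hedged illustration of this commit rule, the sketch below (with
hypothetical names, not \sys code) shows a region server that may
persist requests in any order but makes them visible only as a
contiguous prefix of sequence numbers:

```python
# Illustrative model of the sequence-number gating rule: a request
# becomes visible (commits) only once every request with a smaller
# sequence number is durable. Class and field names are hypothetical.

class RegionServer:
    def __init__(self):
        self.persisted = set()   # sequence numbers made durable on disk
        self.committed = 0       # highest sequence number visible to clients

    def on_persisted(self, seq):
        """Called when a request's log write reaches stable storage."""
        self.persisted.add(seq)
        # Advance the commit point over the longest contiguous prefix.
        while self.committed + 1 in self.persisted:
            self.committed += 1

rs = RegionServer()
for seq in (2, 3, 1):            # disk writes may complete out of order
    rs.on_persisted(seq)
assert rs.committed == 3         # visible only once the prefix 1..3 is durable
```

A request persisted with a gap before it (say, 5 arriving while 4 is
lost) stays invisible and can later be garbage-collected, which is
exactly why out-of-order persistence is harmless here.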

\sys' approach to ensuring \oc for \get{s} is simple. Like other
systems before it~\cite{Bolosky11Paxos}, \sys neither assigns new
sequence numbers to \get{s}, nor logs \get{s} to stable
storage. Instead, to prevent returning stale values, a \get request to
a \rs simply carries a \texttt{prevNum}
field indicating the sequence number of the last \pput executed on
that region: \rs{s} do not execute a
\get until they have committed a \pput with the \texttt{prevNum}
sequence number.
Conversely, to prevent the value of a block from being overwritten by
a later \pput, clients block \pput requests to a block that has
outstanding \get requests.\footnote{This requirement has minimal
  impact on performance, as such \pput requests are rare in practice.}
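
The \get path can be sketched as follows; this is an illustrative
model of the rules just described, with hypothetical names rather than
\sys' actual interfaces:

```python
# Sketch of the prevNum rule for GETs: a GET carries the sequence
# number of the last PUT issued to its region and is answered only
# after that PUT has committed, so stale values are never returned.

class Region:
    def __init__(self):
        self.committed_seq = 0
        self.blocks = {}         # block id -> latest committed value
        self.pending_gets = []   # (prev_num, block, reply_callback)

    def get(self, block, prev_num, reply):
        if prev_num <= self.committed_seq:
            reply(self.blocks.get(block))
        else:
            # The PUT this GET depends on has not committed yet: wait.
            self.pending_gets.append((prev_num, block, reply))

    def on_commit(self, seq, block, value):
        self.committed_seq = seq
        self.blocks[block] = value
        # Answer any GETs whose prerequisite PUT has now committed.
        still_waiting = []
        for prev_num, blk, reply in self.pending_gets:
            if prev_num <= self.committed_seq:
                reply(self.blocks.get(blk))
            else:
                still_waiting.append((prev_num, blk, reply))
        self.pending_gets = still_waiting
```

The converse rule (holding back a \pput to a block with outstanding
\get{s}) would be enforced on the client side and is omitted here.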


\sys' pipelined commit protocol for \pput{s} is illustrated in
Figure~\ref{fig:pipeline}. The client, as in HBase, issues requests in
batches. Unlike HBase, \sys allows each client to issue multiple
outstanding batches. Each batch is committed using a protocol consisting
of the phases described below.\footnote{This
protocol resembles a two-phase commit (2PC) protocol~\cite{Gray78Notes,Lampson76Crash},
but it is designed for a different purpose: the objective of 2PC is to ensure that
a batch or transaction either executes in full or not at all, while that of pipelined
commit is to ensure that if a batch commits, all batches that preceded it
eventually commit as well.}





\input{salus_fig_pipelined_commit}

\begin{sitemize}
\item[PC1.] { \em Choosing the batch leader and participants.}  To process a
  batch, a client divides its \pput{s} into sub-batches, one
  per \rs. Just like a \get request, a \pput request to a
  region also includes a \texttt{prevNum} field to identify the last
  \pput request issued to that region. The client identifies one \rs
  as {\em batch leader} for the batch and sends each sub-batch to the
  appropriate \rs along with the batch leader's identity. The client sends
  the sequence numbers of all requests in the batch to the batch leader,
  along with the identity of the leader of the \emph{previous} batch.

\item[PC2.] { \em Preparing.} A \rs preprocesses the \pput{s} in its
  sub-batch by {\em validating} each request, i.e. by checking that
  the request is correctly signed and by using the \texttt{prevNum} field to
  verify that it is the next request the \rs should process. If
  validation succeeds for all requests in the sub-batch, the
  \rs logs the requests (which are now {\em prepared}) and sends its \textsc{yes} vote to the
  batch's leader; otherwise, the \rs  votes \textsc{no}.


\item[PC3.] { \em Deciding.}  The batch leader can decide
  \textsc{commit} only if it receives a \textsc{yes} vote for all the
  \pput{s} in its batch and a \textsc{commit-confirmation} from the
  leader of the previous batch; otherwise, it decides
  \textsc{abort}. Either way, the leader notifies the participants of
  its decision. Upon receiving \textsc{commit} for a request, a \rs
  updates its memory state (memstore), sends a
  \textsc{put\_success} notification to the
  client, and asynchronously marks the request as committed on
  persistent storage. On receiving \textsc{abort}, a \rs discards the
  state associated with that \pput and sends the client a
  \textsc{put\_failure} message. Abort can happen when the client
  is faulty (e.g. corrupted memory), the network is faulty (e.g. corrupted
  messages), or a logical \rs is faulty.
  As shown in the next section, each logical \rs is actually a replicated
  group and \sys can mask up to two failures inside a group. If
  more than two failures occur as a result of a severe
  problem (e.g. a configuration error or a bug in the code), then the logical \rs may
  behave unexpectedly and the batch leader
  may decide to abort. When a client receives a \textsc{put\_failure} message as
  the result of an aborted request, it retries
  the corresponding request a bounded number of times in case the problem is
  transient (e.g. an occasional bit flip in the network); if the retries fail, the
  client notifies the administrator for further assistance.

\end{sitemize}
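
The leader-side logic of PC2--PC3 can be sketched as below. This is a
simplified single-process model with hypothetical names: it captures
only the decision rule (commit requires \textsc{yes} votes from all
participants plus a \textsc{commit-confirmation} from the previous
batch's leader), and omits logging, client notifications, and failure
handling:

```python
# Sketch of a batch leader's decision rule in pipelined commit.
# Leaders form a chain: a leader passes a commit-confirmation to the
# next batch's leader only after deciding COMMIT itself.

class BatchLeader:
    def __init__(self, participants, prev_leader=None):
        self.participants = set(participants)
        self.yes_votes = set()
        self.prev_confirmed = prev_leader is None  # first batch has no predecessor
        self.decision = None
        self.next_leader = None

    def on_vote(self, rs, vote):
        if vote == "NO":
            self._decide("ABORT")
            return
        self.yes_votes.add(rs)
        self._maybe_commit()

    def on_commit_confirmation(self):
        """Sent by the previous batch's leader once it decides COMMIT."""
        self.prev_confirmed = True
        self._maybe_commit()

    def _maybe_commit(self):
        if self.decision is None and self.prev_confirmed \
                and self.yes_votes == self.participants:
            self._decide("COMMIT")

    def _decide(self, decision):
        self.decision = decision
        # Participant notification is omitted; on COMMIT, pass the
        # confirmation down the chain so the next batch may commit.
        if decision == "COMMIT" and self.next_leader is not None:
            self.next_leader.on_commit_confirmation()

b1 = BatchLeader(["rs1", "rs2"])
b2 = BatchLeader(["rs2", "rs3"], prev_leader=b1)
b1.next_leader = b2
b2.on_vote("rs2", "YES"); b2.on_vote("rs3", "YES")
assert b2.decision is None      # batch 2 waits on batch 1's confirmation
b1.on_vote("rs1", "YES"); b1.on_vote("rs2", "YES")
assert (b1.decision, b2.decision) == ("COMMIT", "COMMIT")
```

Note that batch 2's votes may all arrive before batch 1 decides; the
confirmation chain is the only serialization point, as discussed next.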


Notice that all disk writes---both within a batch and across
batches---can proceed in parallel and that the voting and commit
phases for a given batch can be similarly parallelized. Different \rs{s}
receive and log \pput{s} and \textsc{commit} decisions asynchronously. The only
serialization point is the passing of \textsc{commit-confirmation}
from the leader of a batch to the leader of the next batch.

Despite its parallelism, the protocol ensures that requests commit in
the order specified by the client.  The presence of \textsc{commit} in
any correct \rs's log implies that all preceding \pput{s} in this
batch must have prepared. Furthermore, all requests in preceding
batches must have also prepared. Our recovery protocol
(Section~\ref{sec:recovery}) ensures that all prepared \pput{s}
eventually commit without violating \oc.
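
The prefix property that recovery relies on can be illustrated with a
hypothetical helper (this is not the recovery protocol itself, which
Section~\ref{sec:recovery} describes): given the prepared sequence
numbers found in the surviving logs, only a contiguous prefix of the
issued requests may commit.

```python
# Illustration of the recoverable-prefix property: a committed request
# implies that everything before it is prepared, so recovery commits
# the longest contiguous prefix of prepared sequence numbers.

def recoverable_prefix(prepared_logs):
    """Return the highest n such that requests 1..n are all prepared
    somewhere in the given per-server logs (sets of sequence numbers)."""
    prepared = set().union(*prepared_logs)
    n = 0
    while n + 1 in prepared:
        n += 1
    return n

# Requests 1-3 are prepared across two servers; 5 is prepared but 4 was
# lost in a crash, so recovery stops at 3 and 5 is garbage-collected.
assert recoverable_prefix([{1, 3}, {2, 5}]) == 3
```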

The pipelined commit protocol enforces \oc assuming the abstraction of
correct (logical) region servers. It is the {\em active storage}
protocol (Section~\ref{sec:active}) that provides this abstraction to
the pipelined commit protocol, building it out of physical \rs{s} that
{\em can} lose committed data and suffer arbitrary failures.







