
\subsection{End-to-end verification}
\label{sec:etoeprot}


Local file systems fail in unpredictable
ways~\cite{prabhakaran05iron}.  Distributed systems like HBase are
even more complex and are therefore more prone to failures.  To
provide strong correctness guarantees, \sys implements end-to-end
checks that (a) ensure that clients access correct and current
data and (b) do so without hurting performance: a \get can be
processed at a single replica, and the client can still verify
that the returned data is correct and current.

\iffalse
\review{Why does Salus need a merkle tree?}

One could imagine a very simple approach that lets the client keep a
checksum for each block it has written to and when the client performs
a \get, it can check whether the data matches the checksum. However,
this approach does not work if the client fails and loses all its
state.  \fi

Like many existing
systems~\cite{Fu02fastand,sun06zfs,mahajan10depot,li04secure}, \sys'
mechanism for end-to-end checks leverages Merkle
trees~\cite{Merkle80Protocols,Blum91Checking}, a second kind of
metadata that makes it efficient to verify the integrity of any state
whose hash is covered by the tree's root.
Specifically, a client accessing a volume maintains a Merkle tree over
the volume's blocks, called the {\em volume tree}, which it updates on
every \pput and verifies on every \get.
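The volume tree's role can be illustrated with a minimal sketch. The
class and method names below are ours, not \sys' actual API: the client
hashes each block into a leaf, recomputes the root after every \pput,
and checks a returned block against the stored leaf hash on every \get.

```python
# Minimal sketch of a per-volume Merkle tree (illustrative names,
# not Salus' actual implementation).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class VolumeTree:
    def __init__(self, blocks):
        self.leaves = [h(b) for b in blocks]

    def root(self) -> bytes:
        level = list(self.leaves)
        while len(level) > 1:
            if len(level) % 2:          # duplicate the last node on odd-sized levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    def put(self, i: int, block: bytes) -> bytes:
        self.leaves[i] = h(block)       # update the leaf; the new root covers the write
        return self.root()

    def verify_get(self, i: int, block: bytes) -> bool:
        return h(block) == self.leaves[i]   # reject corrupt or stale data
```

A client would attach the root returned by `put` to the corresponding
\pput request and run `verify_get` on every \get reply.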

% \sys' implementation of this
% simple approach is guided by its goals of robustness and scalability.



\iffalse
In \sys, a client maintains a Merkle tree that reflects its view of
the system state. For every \pput, the client updates its Merkle tree
and attaches the weakly-signed root of the Merkle tree to the \pput;
for every \get, it ensures that the value returned by \sys matches the
expected hash in the Merkle tree.  For failure recovery, the client
rebuilds the tree and fetches the latest root from the \rs{s} and
checks whether it matches the tree.


\subsubsection{Merkle tree}
\label{sec:merkletree}
\sys uses Merkle trees in standard ways~\cite{li04secure}: each volume
maintains a Merkle tree (called {\em volume tree}) on its blocks which
is updated on every \pput and verified on every \get. \sys's
implementation of this simple approach is guided by its goals of
robustness and scalability.
\fi

For fast recovery, \sys distributes a copy of the volume tree
across the \rs{s} that host the volume so that, after a
crash, a client can rebuild its volume tree by contacting the \rs{s}
responsible for the regions in that volume.  Replicating the volume
tree at the \rs{s} also allows a client, if it so chooses, to only
store a subset of its volume tree during normal operation, fetching on
demand what it needs from the \rs{s} serving its volume.

% For robustness, \sys does not rely on the client to {\em never} lose
% its volume tree. Instead, \sys allows a client to maintain a subset of
% its volume tree and fetch the remaining part from the \rs{s} serving
% its volume on demand. Furthermore, if a crash causes a client to lose
% its volume tree, the client can rebuild the tree by contacting the
% \rs{s} responsible for the regions in that volume.  To support both
% these goals efficiently, \sys stores the volume tree also at the
% \rs{s} that host the volume.


Since a volume can span multiple \rs{s}, for scalability and load
balancing each \rs stores and validates only a {\em region tree}: the
sub-tree of the volume tree corresponding to the blocks in the region
it hosts. In addition, to enable the client to recover the volume
tree, each \rs also stores the root hash of the full volume tree
produced by the most recent update to a block in its region, together
with the sequence number of the \pput request that produced it.
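The per-\rs metadata just described can be sketched as follows; the
field and method names are illustrative assumptions, not \sys' actual
data structures.

```python
# Sketch of the metadata a region server keeps for recovery: its region
# tree's leaves, plus the volume-tree root hash and sequence number of
# the most recent put applied in this region. Names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class RegionServerState:
    region_leaves: Dict[int, bytes] = field(default_factory=dict)  # block index -> leaf hash
    last_volume_root: bytes = b""   # volume-tree root produced by the freshest put here
    last_seq: int = -1              # sequence number of that put

    def apply_put(self, block_index: int, leaf_hash: bytes,
                  volume_root: bytes, seq: int) -> None:
        self.region_leaves[block_index] = leaf_hash
        if seq > self.last_seq:     # remember only the most recent root
            self.last_seq = seq
            self.last_volume_root = volume_root
```

A recovering client would compare the `(last_seq, last_volume_root)`
pairs across \rs{s} to find the most recent root.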


Figure~\ref{fig:merkle-tree} shows a volume tree and its region
trees. The client stores the top levels of the volume tree that are
not included in any region tree so that it can easily fetch the
desired region tree on demand. A client can also cache recently used
region trees for faster access.

\begin{figure}[tpb]
\begin{center}
    \includegraphics{figures/MTVerification}
\end{center}
\caption{Verifying a response with the Merkle tree: the client stores
the top of the volume tree, caches part of the region tree, and
fetches the remaining region-tree nodes needed to verify the response.}
\label{fig:mt-verification}
\end{figure}

To process a \get request for a block, the client sends the request to
any of the \rs{s} hosting that block. As shown in Figure~\ref{fig:mt-verification},
the response includes a subset of
the region tree (all non-cached nodes on the path from the target to the root of
the corresponding region tree, together with their siblings) sufficient for the client to validate the response using
the locally stored volume tree. If the check
fails (because of a commission failure) or if the client times out
(because of an omission failure), the client retries the \get using
another \rs. If the \get fails at all \rs{s}, the client contacts the
\m, triggering the recovery protocol (Section~\ref{sec:recovery}). To process
a \pput, the client updates its volume tree and sends the
weakly-signed root hash of its updated volume tree along with the
\pput request to the \rrs. Attaching the root hash of the volume tree
to each \pput request enables clients to ensure that, despite
commission failures, they will be able to mount and access a
consistent volume.
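The client-side check on a \get response can be sketched as follows.
The function name and the leaf-to-root sibling ordering are
illustrative assumptions: the client folds the returned block's hash
together with the sibling hashes on its path and compares the result
against a root it already trusts.

```python
# Hedged sketch of verifying a get reply against a Merkle path
# (ordering convention and names are ours, not Salus' code).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_path(block: bytes, index: int, siblings: list,
                trusted_root: bytes) -> bool:
    node = h(block)
    for sib in siblings:            # siblings ordered from leaf level up to the root
        if index % 2 == 0:          # even index: node is the left child
            node = h(node + sib)
        else:                       # odd index: node is the right child
            node = h(sib + node)
        index //= 2
    return node == trusted_root     # mismatch => commission failure; retry elsewhere
```

On failure the client would retry the \get at another \rs, as described
above.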


A client's protocol to mount a volume after losing the volume tree is
simple. The client begins by fetching the region trees, the root
hashes, and the corresponding sequence numbers from the various
\rrs{s}. Before responding to a client's fetch request, a \rrs commits
any prepared \pput{s} that are still pending, using the {\em
  commit-recovery} phase of the recovery protocol
(Section~\ref{sec:recovery}).  Using the sequence numbers received from all
the \rrs{s}, the client identifies the most recent root hash and
compares it with the root hash of the volume tree constructed by
combining the various region trees. The check should always succeed
as long as there are no more than two failures.
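The mount-time check can be sketched as follows, under the simplifying
assumption that each \rs reports a triple of (sequence number, volume
root it last saw, its region-tree root); the function name and report
layout are ours.

```python
# Sketch of the mount check: pick the volume root with the highest
# sequence number, rebuild the volume root from the fetched region-tree
# roots, and require the two to match. Illustrative only.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def mount_check(reports):
    # reports: list of (seq_num, volume_root, region_root), one per region,
    # in region order.
    _, expected_root, _ = max(reports, key=lambda r: r[0])  # most recent root
    level = [r[2] for r in reports]     # combine region roots bottom-up
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    rebuilt = level[0]
    return rebuilt == expected_root     # False => some server returned a stale tree
```

A `False` result would indicate that some \rrs returned a stale region
tree, prompting the client to report an error.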

%If the two hashes match, then the
%client considers the mount to be complete; otherwise it reports an
%error indicating that a \rrs is returning a potentially stale tree. In
%such cases, the client reports an error to the \m to trigger the
%replacement of the servers in the corresponding \rrs, as described in
%Section~\ref{sec:recovery}.

% \sys' end-to-end checks enforce its freshness property while the
% recovery protocol (Section~\ref{sec:recovery}) ensures liveness.

\iffalse
A client's protocol to mount a volume after losing the volume tree is
simple. Before mounting a volume, a client waits for the corresponding
\rs{s} to perform the recovery protocol described in
Section~\ref{sec:recovery}. The purpose of this step is to remove
uncommitted \pput{s} from the previous client and determine the
maximum sequence number.  After the \rs{s} have recovered, the client
fetches the region trees and the root hash with the maximum sequence
number. The client ensures that the root hash of the volume tree
constructed by combining the region trees matches the expected root
hash. If the hash matches, the client completes the mount; otherwise
it reports an error indicating that some \rs is returning a stale
tree. These end-to-end checks enforce \sys's safety property. We
describe in Section~\ref{sec:recovery} and Section~\ref{sec:active} how \sys
provides liveness to its clients.
\fi

\endinput
\subsubsection{\get Protocol}

The basic procedure is that the client sends a \get to either replica
and uses the locally stored Merkle tree to check the reply. If the
request times out or the check fails, the client retries at another
replica.

However, since \pput{s} and \get{s} are processed asynchronously, it
is possible that a \get arrives earlier than a previous \pput to the
same block. To avoid reading a stale block, \sys uses an optimization
proposed in other approaches~\cite{Bolosky11Paxos}: a \get request to
a \rs carries a \texttt{prevNum} field indicating the sequence number
of the last \pput executed on that region, and the \rs does not execute
the \get until the \pput with sequence number \texttt{prevNum} has committed.
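The \texttt{prevNum} ordering rule can be sketched as follows; the
class structure and use of a condition variable are our assumptions
about one possible server-side implementation, not \sys' actual code.

```python
# Sketch of the prevNum rule: a get blocks until the put it depends on
# has committed, so it cannot observe pre-put state. Illustrative only.
import threading

class Region:
    def __init__(self):
        self.committed = -1             # highest committed put sequence number
        self.cv = threading.Condition()
        self.store = {}

    def commit_put(self, seq: int, block_index: int, data: bytes) -> None:
        with self.cv:
            self.store[block_index] = data
            self.committed = max(self.committed, seq)
            self.cv.notify_all()        # wake gets waiting on this sequence number

    def get(self, block_index: int, prev_num: int) -> bytes:
        with self.cv:
            # wait until the put numbered prev_num has committed
            self.cv.wait_for(lambda: self.committed >= prev_num)
            return self.store.get(block_index)
```

A \get issued before its predecessor \pput arrives simply parks on the
condition until the \pput commits.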

Conversely, a \get may arrive after a later \pput to the same block has
been applied; in that case the \get can never return the expected data,
since the block has been overwritten.  To prevent this, \sys
requires a client to hold back \pput requests to a block that has
outstanding \get requests.\footnote{This requirement has minimal
  impact on performance, as such \pput requests are rare in practice.}

As long as the client holds the correct root of the tree, it will not
accept an incorrect \get reply, since such a reply cannot pass the
Merkle tree check. The client fetches the root, which is weakly signed
by the previous client, during the mount operation, and can therefore
verify that the root itself is not corrupted.



\endinput

\begin{verbatim}

   For performance, want to \pput in parallel
   For correctness, want \pput{s} to commit in order

   --> pipelined \pput{s}

   High level flow:

\end{verbatim}


\begin{verbatim}

      client splits batch of \pput{s} into subbatches -- one per RS
      updated by batch; send subbatches to RS servers, designating one
      as the primary for the batch

      in parallel, the RS servers \pput their subbatches to their logs

      once a RS's subbatch is logged, send "PREPARED" msg for subbatch
      to batch's primary

      once primary has PREPARED from all subbatches *and* PREPARED
      from previous batch's master,
           (1) send PREPARED to next batch's master
           (2) send COMMIT to all RS in batch
           (3) send COMMIT to log
      Once COMMIT is in log, send DONE to client


      NOTE: parallelism and pipelining -- all PREPARE disk writes (both
      subbatches within a batch and across batches) can proceed in
      parallel, as can all acks

      Only serialization is passing "commit" message from batch i to
      batch i+1

      Notice: COMMIT in anyone's log --implies--> PREPARED in
           everyone's log for this batch *and all earlier batches*

           --> this commit can safely commit this and all prior
               batches




\paragraph{Implementation details}

        ****
       [[maybe can put message contents (at this level of detail) into
       figure ; then not much more to say (basically explain the
        go/no-go checks --> move that above?]]
        ****

         batch = sequence of \pput{s}

         subbatch = {batchID, prevRS_Root, list of \pput{s} to RS,
                     newRS_Root, oldVRoot, list of RS updated by batch, new VRoot}

              RS:
               to process: first check prevRS_Root matches current
                           state
               then apply list of \pput{s}
               if newRS_Root matches resulting state
                    \pput subbatch msg to log
                    (need the list of RS updated and
                     oldVRoot/newVRoot, for crash
                     recovery; see below)

                    send PREPARED = {batchID, list of sequence
                       numbers in subbatch, newRS_Root}

              client also sends LEADER = {batchID, prev batchleader, start seq num,
              start VRoot, end seq num, end VRoot, list of RS's}

              send request "tell me PREPARED" to prev batch leader

              when leader receives PREPARED from all in batch + prev
                     batch leader
                  -- check for "no gaps" from start seq num to end seq
                     num;
                  -- check old VRoot matches current state; check new
                     VRoot = new state;
                  -- compute new VRoot; compare new VRoot v. calculated
                     if mismatch -> send ABORT batchID to all
                     if match -> send COMMIT batchID to all and send
                                      PREPARED batchID to
                                      next batch leader


          Lower-level details like HDFS -- RS logs \pput{s} to HDFS file;
               \pput{s} also inserted into sorted heap; when heap size
                  exceeds threshold, write heap to HDFS file;

                later, read, combine, and write flushed checkpoints --
                "compaction"



\end{verbatim}
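The commit rule in the notes above can be sketched as follows; the
class and message names are ours, and threading/RPC details are
elided. A batch's primary commits only once every subbatch is PREPARED
and the previous batch's leader has handed over its PREPARED token, so
the only cross-batch serialization is passing that token.

```python
# Sketch of the batch primary's commit rule from the notes above
# (hypothetical names; messaging and logging omitted).
class BatchLeader:
    def __init__(self, batch_id, subbatch_servers):
        self.batch_id = batch_id
        self.pending = set(subbatch_servers)  # servers that must still PREPARE
        self.prev_prepared = False            # token from the previous batch's leader
        self.committed = False

    def on_prepared(self, server):
        self.pending.discard(server)
        self._maybe_commit()

    def on_prev_batch_prepared(self):
        self.prev_prepared = True
        self._maybe_commit()

    def _maybe_commit(self):
        if not self.pending and self.prev_prepared and not self.committed:
            # at this point: COMMIT would go to all servers in the batch and
            # to the log, and PREPARED would be passed to the next batch's leader
            self.committed = True
```

This captures the note's invariant: a COMMIT in any log implies that
this batch and all earlier batches were PREPARED everywhere.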



\subsubsection{Read}
\begin{verbatim}

     client server sends "\get" to appropriate \rs
        -- if client is not caching required part of RS tree, include
           request to send it, too

      if recv path: client verifies path from RS leaf to RS root

      now check \get result matches RS leaf

      QED

   rely on HBase/HDFS layers for \get current value from RS memory
   and file state (bloom filters, etc)
\end{verbatim}

\subsubsection{Retry/Recovery}

\begin{verbatim}
  what if RS fails?

  also -- if above checks fail, batch is aborted;
     client can retry; remount and retry; then fail RS and restart them


  Main point: need to guarantee prefix property when rebuilding RS
      state: i commits --> <i commits

        (need to check this. why? E2E guarantees require making
         sure RS state is consistent.
         protecting against missing items in log, for example)

  Basic idea is to
      -- query other RS in volume to make sure to commit everything
         that others depend on

      -- use ZK to coordinate "sealing" the previous epoch and
         starting a new one (in this case of fail-over)


      protocol:
          recovering RS: Read log
               --> epoch number, list of PREPARED batches, last COMMIT
                   batch

              broadcast "new epoch needed" to all RS in volume


           recovering and other RS:
              Write Zookeeper /recovery/region/[regionID]/[epoch
                   number]:

                   write NEW record (write fails if already exists):
                       list of prepared batches (id, list of seq num,
                              ?last known VRoot?)
                       last COMMIT batch (id, last known VROOT?)


                   read ALL records for epoch
                   deterministically find gap-free new epoch start
                       (gap free both in batch number and seq num
                        within batch)
                       if none, then whoever is introducing gaps
                       is faulty --> delete record (OK?) and
                       restart (with different HDFS \Dn)


                   write new epoch start to ZK record

                   write appropriate ABORT and COMMIT to local log

                   start new epoch



\subsubsection{Other issues}

     Rethink the sync
          2 problems: latency, limit throughput b/c pipeline depth
          limited

        --> apply techniques from Rethink the sync

          simpler bc client is VM in data center
             -- only I/O path is network
             -- library or VMM router
                 on batch write: update EMBARGO
                 on NW write: tag with EMBARGO
                 on commit, release EMBARGO < CURRENT






      Semantics

         failure free:
           need barrier-consistent
           get stronger (sequential)
               -- need for end-to-end checks
               -- simpler than treating BARRIER as special case

          arbitrary servers:
              can lose availability, but if client accepts answer,
                  answer highly consistent: fork sequential

                  (why fork sequential? Recovery...)




         Geographic replication?




\end{verbatim}





