\subsection{Recovery}\label{sec:recovery}

The recovery protocol ensures that, despite commission or permanent
omission failures in up to two physical servers,
\sys continues to provide the abstraction of a virtual disk
with {\em standard disk semantics}.

To achieve this goal, \sys' recovery protocol collects the longest
available prefix $P_C$ of prepared \pput requests that satisfies the \oc
property. Recall from Section~\ref{sec:active} that every \pput for which
the client received a \textsc{put\_success} certificate must appear in the log of
at least one correct replica in the region that processed that
\pput. Hence, $P_C$ contains all \pput requests for which the
client received a \textsc{put\_success} certificate, thus guaranteeing standard
disk semantics.

Specifically, recovery must address two key issues.

{\bf Resolving log discrepancies.} Because of omission or commission
failures, different \Dn{s} within the same \rrs may store different
logs. A prepared \pput, for example, may have been made persistent at
one \Dn but not at another.


{\bf Identifying committable requests.} Because \commit decisions are
logged asynchronously, some \pput{s} for which a client received
\textsc{put\_success} may not be marked as committed in the logs. It
is possible, for example, that a later \pput is logged as committed
while an earlier one is not, or that a suffix of \pput{s} for which the
client has received a \textsc{put\_success} is never logged as
committed.
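The distinction matters because both committed and prepared entries are recoverable; only a hole in the log (an entry that was never made persistent on any correct node) truncates what recovery can keep. The following sketch illustrates the idea; the function name and the status encoding are illustrative, not part of \sys.

```python
# Illustrative sketch: determine which prefix of log positions is
# committable after a failure.  "committed" and "prepared" entries are
# both recoverable; None models a hole (an entry never made persistent
# on any correct node), which ends the committable prefix.

def committable_prefix(statuses):
    """Return the longest prefix of positions whose entries are
    either committed or prepared."""
    prefix = []
    for pos, status in enumerate(statuses):
        if status in ("committed", "prepared"):
            prefix.append(pos)
        else:  # hole: this entry was never made persistent
            break
    return prefix

# A later put may be committed while an earlier one is only prepared;
# the whole prefix is still committable:
assert committable_prefix(["committed", "prepared",
                           "committed", "prepared"]) == [0, 1, 2, 3]

# A hole truncates the committable prefix:
assert committable_prefix(["committed", "committed",
                           None, "prepared"]) == [0, 1]
```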

One major challenge in addressing these issues is that, while $P_C$ is
defined on a global {\em volume log}, \sys does not actually store any
such log: instead, for efficiency, each region keeps its own separate
{\em region log}. Hence, after retrieving its region log, a
recovering \rs needs to cooperate with other \rs{s} to determine
whether the recovered region log is correct and whether the \pput{s}
it stores can be committed.

\begin{figure}[!t]
\centering
\pseudocodeinput[breaklines=true,mathescape=true]{salus_recovery_pseudocode.txt}
\caption{Pseudocode for the recovery protocol.}
\label{fig:recovery}
\end{figure}

Figure~\ref{fig:recovery} describes the protocol that \sys uses to
recover faulty \Dn{s} and \rs{s}. The first two phases recover
individual region logs, while the last two phases describe
how the \rrs{s} coordinate to identify committable requests.
Note that, just as a local disk is unavailable during a post-failure
consistency check, a volume is unavailable to its client during
recovery.

\noindent {\em 1. Remap (\texttt{remapRegion}).} As in HBase, when a \rrs
crashes or is reported by the client as non-responsive, the \m
swaps out the servers in that \rrs and assigns its regions to one or
more replacement \rrs{s}.

\noindent {\em 2. Recover region log (\texttt{recoverRegionLog}).}  To
recover all prepared \pput{s} of a failed region, the new \rs{s}
need to replay the region's log. As described in the active storage
protocol, \sys' log is organized as a chain of entries, each containing
a \textsc{log\_entry} certificate and its corresponding data, which may
comprise a number of \pput{s} and some additional information for
HBase and HDFS. The recovery protocol iterates over these entries in three
steps: \circledNum{1} the primary (first) \rs in the new \rrs asks the corresponding three
\Dn{s} and two witness nodes to provide the \textsc{log\_entry} certificate at
a specific position (starting from zero and advancing each time an entry is successfully read)
and waits for three valid responses. A response is valid if
it is a \textsc{log\_entry} certificate with proper signatures, file ID, and position,
or a properly signed ``No next entry'' message. The
primary \rs is guaranteed to receive three valid responses, since at least
three of these five nodes are correct. \circledNum{2} If at least one response is a valid \textsc{log\_entry} certificate, the
primary \rs reads the corresponding data from the \Dn{s}. This read is guaranteed to succeed
because the \textsc{log\_entry} certificate proves that the data has already been
stored on at least one correct \Dn. If instead all three responses are ``No next entry'', the \rs
considers the log replay complete. \circledNum{3} In either case, the primary \rs
forwards its decision, together with the proof---the data with its \textsc{log\_entry} certificate, or three properly
signed ``No next entry'' messages---to the other \rs{s}, which verify the decision. This
procedure works exactly like unanimous consent in the active storage protocol.
If the new \rs{s} reach unanimous consent, they write the data
to a new file and garbage-collect the old ones, as in HBase. Finally, if
a \textsc{log\_entry} certificate was obtained in \circledNum{2}, meaning that the log
may not have been fully replayed yet, the protocol loops back to \circledNum{1};
otherwise, the protocol completes.
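The replay loop of steps \circledNum{1}--\circledNum{3} can be sketched as follows. This is a minimal model, not \sys' implementation: the helpers \texttt{query\_nodes} and \texttt{read\_data} are assumed stand-ins for querying the \Dn{s} and witness nodes and for reading entry data, and signature checking and unanimous consent among the \rs{s} are elided.

```python
# Sketch of the recoverRegionLog replay loop (helper names are
# assumptions, not Salus APIs; validation and consent are elided).

def recover_region_log(query_nodes, read_data):
    """Replay a region log position by position.

    query_nodes(pos) models step (1): it returns the first three
    *validated* responses, each either ("entry", cert) for a properly
    signed LOG_ENTRY certificate at `pos`, or ("none",) for a properly
    signed "No next entry".  read_data(cert) models step (2): it reads
    the entry's data, which the certificate guarantees is persistent
    on at least one correct datanode.
    """
    pos, replayed = 0, []
    while True:
        responses = query_nodes(pos)                  # step (1)
        certs = [r[1] for r in responses if r[0] == "entry"]
        if not certs:                                 # three "No next entry"s
            return replayed                           # replay complete
        data = read_data(certs[0])                    # step (2)
        replayed.append((pos, data))                  # step (3): forwarded
        pos += 1                                      # advance the position

# Toy run over a two-entry log; responses are fault-free for brevity,
# and a certificate is modeled simply as the position it covers.
log = {0: b"put-a", 1: b"put-b"}

def query_nodes(pos):
    if pos in log:
        return [("entry", pos)] * 3
    return [("none",)] * 3

def read_data(cert):
    return log[cert]

assert recover_region_log(query_nodes, read_data) == [(0, b"put-a"),
                                                      (1, b"put-b")]
```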

The protocol's liveness was briefly discussed above. We now
prove the following safety property: if a \pput request completes successfully for
a client, then that \pput request is replayed, in the correct order, by the recovery
protocol described above. First, we show that if a \pput completed successfully,
it must be replayed during recovery: recall that a client considers a \pput
complete only after it receives a \textsc{put\_success} certificate, which
proves that the corresponding \textsc{log\_entry} certificate has been stored
on all correct \Dn{s} and witness nodes. Therefore, during recovery, the
primary \rs must receive at least one \textsc{log\_entry} certificate in \circledNum{1},
because it waits for three replies and at least one of them must come from a correct
\Dn or witness node. That \textsc{log\_entry} certificate in turn proves that the data
is persistent on at least one correct \Dn, so the \rs can iterate over all \Dn{s}
to find the correct data. Second, the replay order is guaranteed by the design
of the \textsc{log\_entry} certificate itself: the certificate contains the position information,
which is checked in \circledNum{1}. Note that
a log may be split into multiple actual files on HDFS; the order
of those files is determined by the timestamps in their names,
which are stored on the trusted NameNode.

\noindent {\em 3. Identify the longest committable prefix (LCP) of the
  volume log (\texttt{identifyLCP}).} If the client is available, \sys
can determine the LCP and the root of the corresponding volume tree
simply by asking the client.  Otherwise, all
\rrs{s} must coordinate to identify the longest prefix of the volume
log that contains only committed or prepared \pput{s} (i.e., \pput{s}
whose data has been made persistent on at least one correct
\Dn). Since \sys keeps no physical volume log,
the \rrs{s} use ZooKeeper as a means of coordination, as follows. The
\m asks each \rrs to report its maximum committed sequence number as
well as its list of prepared sequence numbers by writing the requested
information to a known file in ZooKeeper. Upon learning from ZooKeeper
that the file is complete (i.e., all \rrs{s} have responded),\footnote{If
some \rrs{s} are unavailable during this phase, recovery
starts again from Phase 1, replacing the unavailable servers.} each
\rrs uses the file to identify the longest prefix of committed and
prepared \pput{s} in the volume log.  Finally, the sequence number of
the last \pput in the LCP and the attached Merkle tree root are
written to ZooKeeper.
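The per-region reports suffice to compute the LCP locally, as the following sketch shows. The names are illustrative, and the sketch assumes that commit decisions follow volume-log order, so a region's maximum committed sequence number accounts for every \pput with a smaller sequence number; ZooKeeper itself is abstracted away as the \texttt{reports} dictionary.

```python
# Sketch of identifyLCP (illustrative; ZooKeeper coordination is
# modeled as a dictionary of per-region reports).

def identify_lcp(reports):
    """reports: {region: (max_committed, prepared_seqnos)}, where all
    sequence numbers index the global volume log (starting at 1).
    Returns the sequence number of the last put in the LCP.

    Assumption: commits follow volume-log order, so a report of
    max_committed = c accounts for every put with seqno <= c.
    """
    recoverable = set()
    for max_committed, prepared in reports.values():
        recoverable.update(range(1, max_committed + 1))  # committed puts
        recoverable.update(prepared)                     # prepared puts
    # Walk the volume log until the first sequence number that no
    # region can account for.
    lcp = 0
    while lcp + 1 in recoverable:
        lcp += 1
    return lcp

# Region r1 committed up to seqno 3 and prepared seqno 5; region r2
# prepared seqnos 4 and 7.  Seqno 6 is missing, so the LCP ends at 5.
reports = {"r1": (3, {5}), "r2": (0, {4, 7})}
assert identify_lcp(reports) == 5
```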

\noindent {\em 4. Rebuild volume state (\texttt{rebuildVolume}).} This
phase ensures that all \pput{s} in the LCP
are committed and available. The first task is simple: if a \pput in
the LCP is merely prepared, the corresponding \rs marks it as
committed. To guarantee availability, \sys verifies that all
\pput{s} in the LCP can be read back, so that the volume can be
reconstructed consistently.  To that end, the \m asks the \rrs{s} to replay their
logs and rebuild their region trees; it then uses the same checks used
by the client in the mount protocol (Section~\ref{sec:etoeprot}) to
determine whether the current root of the volume tree matches the one
stored in ZooKeeper during Phase 3. This check should always succeed
as long as there are no more than two failures; it mainly serves as
a sanity check.
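The final check can be sketched as follows. The combining function is an assumption for illustration: we model the volume tree root as a hash over the rebuilt region tree roots taken in region order, whereas \sys derives it from its actual Merkle tree structure.

```python
# Sketch of the phase-4 sanity check.  The way region roots combine
# into a volume root is assumed (hash of the roots in region order),
# not Salus' actual Merkle tree construction.
import hashlib

def volume_root(region_roots):
    """Combine the rebuilt region tree roots into a volume tree root."""
    h = hashlib.sha256()
    for root in region_roots:
        h.update(root)
    return h.digest()

def rebuild_volume(region_roots, expected_root):
    """Mirror the master's check: the root recomputed from the rebuilt
    region trees must match the root stored in ZooKeeper in Phase 3."""
    return volume_root(region_roots) == expected_root

# Three rebuilt region roots; the check passes against the stored root
# and fails if the regions are combined in the wrong order.
roots = [hashlib.sha256(b"region-%d" % i).digest() for i in range(3)]
expected = volume_root(roots)
assert rebuild_volume(roots, expected)
assert not rebuild_volume(roots[::-1], expected)
```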

