\subsection{Robustness}
\label{section:robustness}

In this section, we evaluate \foosys' robustness: guaranteeing freshness
for read operations, and liveness and \oc for all operations.

\iffalse
three
components: safety, durability, and liveness. A client considers \foosys to
be safe if every \get of an object that successfully completes returns
the data stored by the most recently completed \pput of that object. A
client considers \foosys to be durable if it stores for each block the
block's current contents until that block is overwritten. A client
considers \foosys to be available if, during a period in which there
are no partitions in the data center's network, a client's \get and
\pput operations complete.
\fi

\foosys is designed to ensure these properties as long as (1) no more
than two of the \rs{s} within an \rrs and their corresponding \Dn{s}
fail, and (2) fewer than a third of the nodes implementing each of the
UpRight NameNode, UpRight ZooKeeper, and UpRight Master are incorrect.
However, since we have not yet integrated UpRight versions of the
NameNode, ZooKeeper, and Master into \sys, we evaluate \sys' robustness
only when a \Dn or \rs fails.


We test our implementation via fault injection. We introduce failures
and then observe what happens when we attempt to access the
storage. For reference, we compare \foosys with
HBase (which replicates stored data across \Dn{s} but
does not support pipelined commit, active storage, or end-to-end checks).

In particular, we inject faults into clients to force them to crash
and restart. We inject faults into \Dn{s} to force them to crash,
either temporarily or permanently, or to corrupt block data; we cause
data corruption in both log files and checkpoint files. We inject
faults into \rs{s} to force them to either 1) crash; 2) corrupt data in
memory; 3) write corrupted data to HDFS; 4) refuse to process requests
or forward requests out of order; or 5) ask the NameNode to delete
files. Again, we cause corruption in both log files and
checkpoint files. Note that data on \rs{s} is not protected by
checksums. Figure~\ref{graph:robustness} summarizes our results.
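The distinction between detectable (benign) corruption and corruption with a forged checksum can be illustrated with a minimal fault-injection sketch. This is not our actual harness; it is a Python illustration assuming a hypothetical on-disk layout of one block file plus a separately stored checksum file, in the spirit of HDFS:

```python
import hashlib
import random

def corrupt_block(block_path, checksum_path, forge_checksum=False):
    """Flip every bit of one randomly chosen byte in a stored block.
    If forge_checksum is True, also rewrite the stored checksum to
    match the corrupted data, making the corruption undetectable by
    checksum verification alone (a commission failure); otherwise the
    corruption is 'benign' and checksum verification catches it."""
    with open(block_path, "r+b") as f:
        data = bytearray(f.read())
        i = random.randrange(len(data))
        data[i] ^= 0xFF  # guaranteed to change the byte
        f.seek(0)
        f.write(data)
    if forge_checksum:
        with open(checksum_path, "wb") as f:
            f.write(hashlib.md5(bytes(data)).digest())

def verify_block(block_path, checksum_path):
    """Return True iff the block matches its stored checksum."""
    with open(block_path, "rb") as f:
        data = f.read()
    with open(checksum_path, "rb") as f:
        stored = f.read()
    return hashlib.md5(data).digest() == stored
```

A benign injection leaves `verify_block` returning False, so the read is rejected and retried elsewhere; a forged-checksum injection passes `verify_block`, which is why end-to-end checks above the storage layer are needed.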

\input{salus_eval_robustness_table}

First, as expected, when a client crashes and restarts in HBase, a
volume's on-disk state can be left inconsistent, because
HBase does not guarantee ordered commit. HBase can avoid these inconsistencies
by blocking all requests that follow a barrier request until the
barrier completes, but this hurts performance when barriers are
frequent (see Section~\ref{sec:eval-barrier}).  Second, HBase's replicated
\Dn{s} tolerate crashes and \emph{benign} file corruptions that alter
the data but do not affect the checksum, which is stored separately.
Thus, when considering only \Dn failures, HBase provides the same
guarantees as \sys.  Third, HBase's unreplicated \rs is a single point
of failure, vulnerable to commission failures that can violate
freshness as well as \oc.
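The ordered-commit property at issue here can be sketched as follows. This is a minimal Python illustration, not \sys' actual protocol: a hypothetical server commits a write only if it immediately follows the last committed sequence number, so out-of-order requests left behind by a crashed client are rejected rather than committed arbitrarily:

```python
class PipelinedCommitServer:
    """Hypothetical sketch of ordered (pipelined) commit: each volume
    carries a monotonically increasing sequence number, and a write
    commits only if it immediately follows the last committed write.
    Rejected requests would later be discarded by recovery."""

    def __init__(self):
        self.last_committed = 0   # sequence number of last committed write
        self.log = []             # committed writes, in order

    def put(self, seq, data):
        if seq != self.last_committed + 1:
            return "REJECT"       # gap or reorder: do not commit
        self.log.append((seq, data))
        self.last_committed = seq
        return "COMMIT"
```

Under this discipline a crash that loses write 2 cannot leave write 3 committed, which is exactly the inconsistency unordered commit permits.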

First, in \sys, end-to-end checks ensure freshness for \get operations in all
the scenarios covered in Figure~\ref{graph:robustness}: a correct
client does not accept a \get reply unless it passes the Merkle tree
check. Second, pipelined commit ensures the \oc property in all
scenarios involving one or two failures, whether of omission or of
commission: if a client fails or \rs{s} reorder requests, the
out-of-order requests are not accepted, and eventually recovery
is triggered, causing these requests to be discarded.  Third,
active storage preserves liveness in failure scenarios involving one or
two \rs/\Dn pairs: if a client receives an unexpected \get reply, it
retries until it obtains the correct data. Furthermore, during recovery,
the recovering \rs{s} find the correct log by using the certificates
generated by the active storage protocol.  As expected, \oc
and liveness cannot be guaranteed if {\em all} replicas either
permanently fail or experience commission failures.
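The client-side end-to-end check can be sketched as follows. This is a minimal Python illustration with hypothetical names, assuming the client already holds a trusted Merkle root over the blocks it is reading; it is not \sys' actual implementation:

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def merkle_root(blocks):
    """Root of a binary Merkle tree over the hashes of the blocks."""
    level = [h(x) for x in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd node out
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def accept_get_reply(blocks, trusted_root):
    """End-to-end check: accept a GET reply only if the returned
    blocks hash to the Merkle root the client already holds, so a
    corrupted or stale reply from any number of faulty servers is
    rejected rather than accepted."""
    return merkle_root(blocks) == trusted_root
```

Because the check is performed by the client against state it trusts, it does not depend on how many storage-side replicas are faulty; faults only determine whether a retry can eventually succeed.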

\iffalse
Here are the details about what happens in each experiment:

\yang{see if we need the following details}

\begin{itemize}

\item Client failures: in HBase,  when a client crashes, the outstanding \pput{s}
can be committed arbitrarily, since the client sends multiple \pput{s} to
different \rs{s} independently and some of them may succeed before the client
fails and some of them may not. This violates the \pput safety as required
by a block store. \sys provides the \pput safety guarantee by pipelined commit:
the \rs does not commit a \pput until its previous \pput is committed; and if
some \pput{s} are lost due to client failure, the following \pput{s} are discarded
during the recovery protocol when the client restarts.

\item Datanode crashes or detectable corruption:
if 1 or 2 \Dn{s} crash, HBase continues serving
requests correctly by redirecting reads to
the remaining \Dn{s} and redirecting writes to some newly selected \Dn{s}. If the
failed \Dn{s} do not come back in a certain period of time, HBase will copy
data from the remaining \Dn{s} to some newly selected \Dn{s} to restore
durability. \sys uses the same strategy here. Detectable corruptions (data
does not match checksum) are treated similarly as in crashes.

\item Datanode undetectable corruption:
in HBase, if both data and checksum are corrupted, safety can be violated:
the client accepts corrupted data without any notification. \sys uses
different key ideas to provide different properties:
first, \sys uses end-to-end checks to provide \get safety: the corrupted result
cannot pass the Merkle tree check and the client retries other replicas
until it gets the expected result; second, \sys uses active storage to
guarantee that correct data is stored on at least one replica when there are no
more than 2 failures; third, \sys uses certificates generated in the active
storage protocol to find the correct version of the data during recovery, thus
guaranteeing \pput safety and liveness during recovery. Neither HBase nor \sys
can tolerate more than 2 failures, but \sys can at least guarantee \get safety.

\item Region server crashes: HBase tolerates \rs crashes by migrating
related regions to other \rs{s} and asking them to recover data from \Dn{s}.
\sys performs similar operations, except that it does this for a quorum
of 3 \rs{s} and performs additional checks during recovery.

\item Region server benign arbitrary failures:
HBase cannot tolerate arbitrary failures in \rs{s} since \rs{s} are
not replicated and not protected by checksums. In \sys, if a corruption
happens during a \get operation, end-to-end checks prevent the client from
accepting the corrupted result and the client retries another replica. If
the corruption happens during a \pput operation, the \pput will
not complete since the \rs{s} cannot reach agreement, so either the
\Dn{s} or the NameNode will reject the operation and block the protocol.
Finally, the master starts the recovery protocol, killing all three \rs{s} and
migrating corresponding regions to other \rs{s}. After that, the client
retries the \pput requests. Other failures like \rs dropping or reordering
requests also block the protocol and trigger the recovery. Again,
\sys cannot tolerate more than 2 failures but can at least guarantee \get safety.

\end{itemize}
\fi

%Note that
%\foosys is designed to tolerate arbitrary failures by fewer
%than 1/3 of the Zookeeper, NameNode, or Master servers, but we have not
%integrated UpRight versions of these services~\cite{clement09upright}
%into our prototype.





% We inject omission failures by crashing a node, \emph{benign
%   commission failures} by altering stored or transmitted data without
% altering the checksum that protects the data, and \emph{arbitrary
%   failures} by altering stored or transmitted data and also updating
% the checksum to match the altered data.


% Figure~\ref{graph:robustness} shows the result for some of our tests.


% Not End-to-End exactly now. Maybe ``Robustness''.

% $\bullet$ We want to answer: can foosys detect all faults (safe)? can foosys mask expected faults (durable and live)?

% $\bullet$ We inject different kinds of faults into different components of the system:

% $\bullet$ we show the result in Figure~\ref{}. View change is incomplete, so liveness cannot be evaluated.

% $\bullet$ In short, we can detect and mask faults as expected.
% We can detect any faults, so we're always safe.
% The data is durable when there are no more than 2 DN commission failures.
% The data is durable when there are no more than 2 RS commission failures.

% First, we evaluate the robustness of \foosys by injecting differents kinds of faults into different components
% of the system. We hope to see whether \foosys can detect all faults (safety) and can mask
% faults when there are no more than 2 concurrent commission failures (durability and availability), which is
% the design goal of \foosys. We also compare that to HBase.

% In particular, we inject faults into Data nodes to force them to 1) crash or permanently fail; 2) corrupt block data but not its checksum;
% 3) corrupt block and forge the checksum. We perform 2) and 3) for both log files and checkpoint files.
% We inject faults into Region servers to force them to 1) crash; 2) corrupt data in memory; 3) write corrupted data
% to HDFS; 4) refuse to process requests or forward requests out of order, 5) ask NameNode to delete files. We perform 3) for both log
% files and checkpoint files. And note that Region servers do not have checksum
% to protect its data, so it does not have benign corruptions.

% There are other faults we have considerred but not fully evaluated: 1) all meta servers including NameNode, ZooKeeper, and Master
% can be corrupted. UpRight has solved this for NameNode and ZooKeeper and we believe Master can be solved in the same way.
% 2) Region servers can claim to own a region by writing forged data to ZooKeeper and meta tables.
% The basic principle to solve this problem is to use unanimous consensus among different components of the system.
% We've implemented this for NameNode, but not for ZooKeeper and meta tables yet. 3) Strong malicious behaviors like
% DoS attacks, clients colluding with servers, etc, are not considered here.

% We've built two prototypes: the first one \foosys-Verify has the end-to-end verification enabled but not the active node technique;
% and the second one \foosys-Active with both end-to-end verification and active node enabled. And we evaluate their safety, durability,
% and liveness under different faults. Since the view change protocol has not been fully implemented for \foosys-Active, we cannot
% evaluate its liveness for now. We only evaluate the durability of the system if it is safe and we only evaluate its liveness
% if it is durable. Conceptually, a client accepting corrupted data is live but not safe, but liveness here does not make much sense.

% As shown in Figure~\ref{graph:robustness}, in short, HBase can tolerate benign corruptions at Data nodes, but not at Region servers, since
% Region servers are not protected by checksums, and client cannot detect corrupted data, so its safety can be violated;
% \foosys-Verify can detect all faults, so it's always safe, but it cannot mask faults since Region servers are not replicated.
% \foosys-Active improves this by replicating Region servers, so that if one region server returns corrupted data, the client can
% retry other region servers.

% For Data node experiments, first, not surprisingly, all systems can tolerate up to 2 omission failures. NameNode will detect % under-replicated blocks and re-replicate them. Second, for benign data corruptions, Data nodes can detect them by checksum % and reject read operations to those blocks, so that the client can try another Data node. NameNode will re-replicate these % blocks finally. Third, HBase cannot detect corrupted blocks with forged checksums. \foosys-Verify can detect this at the % client, but cannot repair it, since the primary Data node in the chain can forward the corrupted data to other Data nodes,
% so that no Data nodes hold the correct data. In \foosys-Active, each Data node receives data from its corresponding Region server,
% so no single Data node can corrupt all data. Therefore, the system remains durable since the client can detect a fault and retry
% reading from other replicas. The system should also remain live since if they cannot reach agreement on the contents of a
% file, a view change will be triggered. Finally, no system can remain durable and available when there are 3 permanent/commission
% failures, but \foosys-Verify and \foosys-Active remain safe which HBase cannot.

% For Region server experiments, first, HBase and \foosys-Verify and tolerate any number of crashes since Region servers do not
% have persistent storage and they can be recovered from logs and checkpoints on HDFS. \foosys-Active should be able to achieve this
% with the view change protocol. Second, in HBase, Region servers do not protect its data with checksum, so any corruptions can be
% propogated to HDFS and the client. \foosys-Verify can detect data corruptions but cannot repair them since Region servers are
% not replicated. \foosys-Active remains durable since each Region server verifies the data independently before writing to Data node,
% and the client waits for 3 replies for a write operations, thus it can guarantee that data is written to at least one correct
% Region server and \Dn. It should remain live since if there is no progress for a while, the system will perform a view change to
% swap out the corresponding Region server quorum and start new ones.




% \begin{figure*}[ht]
%     \begin{center}
%     \begin{tabular}{ | c | c | c  c  c| c  c  c | c  c  c | }
%     \hline
%     \multirow{2}{*}{Nodes} & \multirow{2}{*}{Faults}		&      \multicolumn{3}{|c|}{HBase} 	& 	\multicolumn{3}{|c|}{\foosys-Verify} 	& 	 \multicolumn{3}{|c|}{\foosys-Active}               \\ \cline{3-11}
% 						& & 	S  &  D & L	& 	S  &  D & L	&   S  &  D & L	\\ \hline	
%     \multirow{4}{*}{\Dn{s}}
%      & \DN: up to 2 omission failures	&   \color{green}	Y & \color{green} Y & \color{green} Y				
%                                         &	\color{green} Y & \color{green} Y & \color{green}Y	
%                                         &	\color{green}Y & \color{green}Y & Y			 \\ \cline{2-11}
%      & \DN: up to 2 benign commission failures		&   \color{green} Y & \color{green} Y & \color{green} Y				
%                                                     &	\color{green} Y & \color{green} Y & \color{green} Y
%                                                     & \color{green}	Y & \color{green}Y & Y			 \\ \cline{2-11}
%      & \DN: up to 2 commission failures			&   \color{green} N  & - & -				
%                                                 &	\color{green} Y & \color{green} N & -
%                                                 &	\color{green}Y & \color{green}Y & Y			 \\ \cline{2-11}
%      & \DN: 3 permanent/commission failures      &   \color{green}N  & - & -
%                                 				&	\color{green}Y & \color{green}N & -
%                                 				&	\color{green}Y & \color{green}N & -			 \\ \hline

%     \multirow{4}{*}{\Rs{s}}
%     & \RS: any number of crashes 				&   \color{green}Y & \color{green}Y & \color{green}Y
%                             				&  	\color{green}Y & \color{green}Y & \color{green}Y
%                             				&  	Y & Y & Y     			\\ \cline{2-11}
%     & \RS: up to 2 commission failures 		&   \color{green}N & - & -	
%     			                            &	\color{green}Y & \color{green}N & -	
%                             				&	\color{green}Y & \color{green}Y & Y			 \\ \cline{2-11}
%     & \RS: 3 commission failures 		    &       - & - & -				&	- & - & -					&	\color{green}Y & \color{green}N & -			 \\ \cline{2-11}
%     & \RS: attemps to corrupt Name node 	&       	     \color{green}   N & \color{green}N & \color{green}N	
% 						&	\color{green}Y & \color{green}N & \color{green}N				
% 						&	\color{green}Y & \color{green}Y & Y			 \\ \hline

%     \end{tabular}
%     \caption{\label{graph:robustness} Correctness (\DN=\Dn, \RS=\Rs,  S=Safety, D=Durability, L=Liveness, Y=Yes, N=No}
%     \end{center}\end{figure*}




