\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{subfigure}
\pdfpagewidth 8.5in
\pdfpageheight 11in
\setlength\topmargin{0in}
\setlength\headheight{0in}
\setlength\headsep{0in}
\setlength\textheight{9.7in}
\setlength\textwidth{6.0in}
\setlength\oddsidemargin{0in}
\setlength\evensidemargin{0in}
\usepackage{setspace}
%\doublespacing
%\setlength\headheight{77pt}
%\setlength\headsep{0.25in}
\begin{document}


\title{File Sharing}
\author{Bing Wei, Zengbin Zhang \\
\textit{bwei,zengbin@cs.ucsb.edu}}

\maketitle

\section{System Architecture}
The overall system architecture is shown in Fig.~\ref{fig:topo}. We implemented
 the system in Python with four classes: file\_server, file\_client, msg\_server,
 and msg\_client. The message client and server run as daemons that send and receive UDP
 messages. The system is configured by a configuration file, which contains the \emph{process id} (PID), the
  \emph{ip address}, and the listening \emph{port} of each file server and client. Initially, the server with PID \emph{0} is the leader.
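The report does not show the configuration file format, so the following is a minimal sketch under an assumed line-oriented layout (one ``pid ip port'' entry per node; the format and the function name \texttt{parse\_config} are assumptions, not the project's actual code):

```python
def parse_config(text):
    """Parse lines of '<pid> <ip> <port>' into a dict keyed by PID.
    The file format here is an assumption for illustration."""
    nodes = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        pid, ip, port = line.split()
        nodes[int(pid)] = (ip, int(port))
    return nodes

config = """\
# pid  ip         port
0      127.0.0.1  9000
1      127.0.0.1  9001
2      127.0.0.1  9002
"""
nodes = parse_config(config)
leader = nodes[0]  # per the text, the server with PID 0 starts as leader
```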
\begin{figure}[h]
\centering
\includegraphics[height=1.7in]{arch.eps}
\caption{Client/Server Architecture}
\label{fig:topo}
\end{figure}
\section{System Properties}
\begin{itemize}
	\item \textbf{Vector-based Multi-Paxos Consensus Algorithm}
	In the normal case, random EVENTs are sent between clients
	to generate a causal order among REQUESTs. REQUESTs are also randomly
	generated at the clients, each assigned a unique vector clock time.
	These REQUESTs represent the operations that the client wants to
	perform on the distributed file servers. The final execution order of these
	operations is determined by the leader according to their causal order,
	and they are committed in exactly the same order on each file server. Each
	operation is assigned a unique \emph{sequence number} by the
	leader and goes through the basic Paxos consensus algorithm before
	being committed.

	The committed operations are recorded in the \emph{Local Log} of
	each file server, along with their vector clock time and sequence number,
	by which we can verify the correctness of the system.

\item \textbf{Paxos Leader Election}
	When the leader fails, the clients experience a timeout and
	resend their requests to a live server. By default, that server
	forwards the requests to the failed leader and waits for the \emph{accept}
	message,
	so it will also time out in this case. The server whose PID is one
	higher then starts an election. Upon receiving an \emph{ack} from a
	majority of the servers, it becomes the new leader and collects
	all the pending REQUESTs to re-accept them.

\item \textbf{Paxos File Server Recovery}
	We also implemented a recovery mechanism, not part of the standard protocol,
	that allows a failed file server to rejoin the system. With the ``-r''
	option, the failed server enters ``RECOVER'' mode instead of ``NORMAL''
	mode. In this mode, the server first reads its local log to learn its
	status before the failure. It then sends this status to the leader (any
	live server will forward this message), and the leader helps the server
	recover until it catches up with the current progress (described in detail later).

	Our algorithm supports multiple servers recovering simultaneously.
	
\item \textbf{Non-FIFO Channel Simulation}
In a relatively small network such as CSIL, non-FIFO delivery rarely occurs naturally. To verify that our program works in non-FIFO environments, we
add a random delay to each outgoing message. This gives us non-FIFO channels among clients and servers, so we were able to verify that the program works correctly in all circumstances.
\end{itemize}
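The random-delay idea behind the non-FIFO simulation can be sketched as follows. This is a deterministic model of the channel rather than the project's actual sender (the function name and the arrival-time model are assumptions): each message's arrival time is its send time plus an independent random delay, and delivery follows arrival order, so a later send can overtake an earlier one.

```python
import random

def shuffle_delivery(messages, max_delay=0.5, rng=None):
    """Model a non-FIFO channel: stamp each message with
    send time (0.01s spacing) plus a random delay, then
    deliver in order of arrival time."""
    rng = rng or random.Random()
    stamped = [(i * 0.01 + rng.uniform(0, max_delay), m)
               for i, m in enumerate(messages)]
    return [m for _, m in sorted(stamped)]
```

Because every message still arrives exactly once, correctness checks can rely on the delivered multiset being unchanged even though the order is scrambled.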

\section{Paxos Approach}
\subsection{Normal Mode}
\subsubsection{Client Implementation}
The client sends a REQ to the leader at random time intervals. Receiving the corresponding ACK means the REQ was successfully handled by the leader. The REQ is retransmitted to a
higher-indexed server if the timer expires.

\subsubsection{Server Implementation}
\begin{itemize}
\item Upon receiving a REQ, 
\begin{itemize}
\item IF it is the leader, put the request in the REQUEST queue and dequeue requests in
	the causal order of their corresponding vector clocks.
	If the REQ is the next one to be committed, the leader increases the sequence number by 1 and sends ACPT(seq, ballot, value) to all the acceptors.
\item ELSE, forward the REQ to the leader.
\end{itemize} 

\item Upon receiving an ACPT(seq, ballot, value),
\begin{itemize}
\item IF the incoming ballot is less than the local ballot, drop the msg.
\item ELSE, IF this REQ has not been seen before, put it in the Pending Request Queue and set its count to 1.
\item ELSE, IF ACPTs on this seq have been received from a majority, put the REQ in
	the Decided Queue, which is dequeued in the order of seq and then
	written to the Local Log;
	ELSE, increase the count of the corresponding request.
\end{itemize} 
\end{itemize} 
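The acceptor-side ACPT bookkeeping above can be sketched as a small state machine. The class and field names are assumptions for illustration, not the project's actual code; the logic follows the three IF branches above (drop stale ballots, track first-seen requests, decide on a majority):

```python
class Acceptor:
    """Sketch of the ACPT handling described above (names assumed)."""

    def __init__(self, n_servers, ballot=0):
        self.n_servers = n_servers
        self.ballot = ballot
        self.counts = {}   # seq -> number of ACPTs seen so far
        self.pending = {}  # seq -> value (Pending Request Queue)
        self.decided = {}  # seq -> value (Decided Queue)

    def on_accept(self, seq, ballot, value):
        if ballot < self.ballot:
            return                      # stale ballot: drop the msg
        if seq in self.decided:
            return                      # already decided
        if seq not in self.pending:     # first ACPT for this seq
            self.pending[seq] = value
            self.counts[seq] = 1
            return
        self.counts[seq] += 1
        if self.counts[seq] > self.n_servers // 2:  # majority reached
            self.decided[seq] = self.pending.pop(seq)
```

In a real deployment the Decided Queue would be drained in seq order into the Local Log; here the dictionary stands in for that queue.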

\subsection{Fault Tolerance}
\subsubsection{Leader Election}

We use timeout mechanisms to detect leader failures. When the leader fails, a client waiting for an ACK times out
and resends the REQ to the next server with a larger PID.
When a server that is not the leader receives a REQ, it forwards the request to the leader and waits for the corresponding ``accept'' message for the REQ. If this timer also expires, meaning the leader has failed, it begins the election process.

Algorithm at the server $P_i$:
\begin{itemize}
\item When receiving a REQ, forward the REQ to the leader. IF the forwarded REQ times out, the server begins a new leader election: it increases its ballot by 1 and sends out PREP(ballot).
\item When receiving PREP(ballot) from $P_j$, IF the ballot is not less than the local ballot, send back all the REQs in the Pending Queue (accepted, but not committed) in an ACK msg, including the corresponding seq with each one; ELSE, drop the PREP msg.
\item When receiving ACKs from a majority, the server becomes the leader. For each pending
	seq,
	the REQ value with the largest ballot number is adopted and re-accepted.
\end{itemize}
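The new leader's re-accept step (last bullet above) reduces to a per-seq maximum over ballots. A minimal sketch, assuming each ACK payload is a list of (seq, ballot, value) tuples (the payload shape and function name are assumptions):

```python
def merge_pending(acks):
    """For each pending seq reported by a majority of ACKs, adopt
    the value carried by the largest ballot number; these are the
    values the new leader re-accepts."""
    chosen = {}  # seq -> (ballot, value) with the largest ballot so far
    for ack in acks:
        for seq, ballot, value in ack:
            if seq not in chosen or ballot > chosen[seq][0]:
                chosen[seq] = (ballot, value)
    return {seq: value for seq, (ballot, value) in chosen.items()}
```

Taking the largest-ballot value per slot is the standard Paxos rule that prevents a new leader from overwriting a value that may already have been chosen.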

\subsubsection{Server Recovery}
We also handle the scenario in which a failed server comes back, shown in Fig.~\ref{fig:recovery}. This server needs to learn all the writes it missed, and the leader is responsible for sending back all the missed values. Since the system keeps running during recovery, the missing values include both values already committed to the file and on-going ones, which are sent back in LOG msgs.

By reading its Local Log first, the recovering server learns the last sequence number $s_1$ that it
had committed. This value is attached to the RECOVER\_REQ message. Upon receiving this
value, the leader marks the requests between $s_1$ and its newest local sequence
number
$s_2$ as the ones that need to be recovered on the recovering server. These requests
include both committed ones (found in the Local Log) and some pending ones.
$s_2$ is sent to the recovering server immediately in a RECOVER\_ACK message.
The leader first reads its Local Log and sends all the committed requests to the recovering
server. It then stores the recovering range $[s_1, s_2]$, and when committing
a request within this range, it also sends the request to the recovering server in a LOG
message. The leader can track several recovering nodes at the same time in this way.

While in RECOVER mode, the recovering node rejects all messages except the four recovery
messages and normal ACPT messages with a sequence number greater than $s_2$, which
are processed as normal.


\begin{figure}[h]
\centering
\includegraphics[height=2.0in]{node_recover.eps}
\caption{Server Recovery}
\label{fig:recovery}
\end{figure}
%
Algorithm at the recovering server:
\begin{itemize}
\item Enter ``RECOVER'' mode. Read the Local Log and send RECOVER\_REQ($s_1$) to another server, where $s_1$ is the highest sequence number in the file.
\item When receiving RECOVER\_ACK($s_2$), remember $s_2$.
\item When receiving LOG(seq, ballot, value), write the value to the file. IF $ballot > local\_ballot$, set $local\_ballot = ballot$. IF all values up to $s_2$ have been received, enter ``NORMAL'' mode and send RECOVER\_FIN to the leader.
\item Upon receiving ACPT(seq, ballot, value) with $seq > s_2$, process it as normal.
\item Upon receiving other messages, ignore them.
\end{itemize}

Algorithm at the leader:
\begin{itemize}
\item When receiving RECOVER\_REQ($s_1$), send back RECOVER\_ACK($highest\_seq$). Remember this recovering server with its recovery range, then send back all values from $s_1+1$ to $highest\_seq$ in LOG msgs, including the corresponding ballot and seq with each value.
\item When receiving a RECOVER\_FIN msg from $P_i$, end the recovery for $P_i$.
\end{itemize}

Algorithm at other servers:
\begin{itemize}
\item When receiving RECOVER\_REQ, forward it to the leader.
\end{itemize}
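The leader-side bookkeeping for recovery can be sketched as follows. The class name, field names, and log representation (a dict from seq to committed entries) are assumptions for illustration, not the project's actual code; the behavior follows the leader algorithm above:

```python
class RecoveryTracker:
    """Sketch of the leader's per-node recovery bookkeeping:
    remember each recovering server's range [s1, s2], replay
    already-committed entries, and forward later commits that
    fall inside the range."""

    def __init__(self, log):
        self.log = log      # seq -> (ballot, value), committed entries
        self.ranges = {}    # node -> (s1, s2) recovery range

    def on_recover_req(self, node, s1):
        s2 = max(self.log) if self.log else s1
        self.ranges[node] = (s1, s2)
        # replay everything already committed in (s1, s2]
        replay = [(seq,) + self.log[seq]
                  for seq in sorted(self.log) if s1 < seq <= s2]
        return s2, replay   # RECOVER_ACK(s2) plus one LOG msg per entry

    def on_commit(self, seq, ballot, value):
        self.log[seq] = (ballot, value)
        # forward to every node whose recovery range covers this seq
        return [n for n, (s1, s2) in self.ranges.items() if s1 < seq <= s2]

    def on_recover_fin(self, node):
        self.ranges.pop(node, None)  # recovery for this node is done
```

Because the ranges are kept per node, the same tracker handles several simultaneously recovering servers, matching the multi-recovery claim earlier in the report.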

\subsubsection{Special Issues}
\begin{itemize}
	\item \textbf{Deal With Missing Sequence Numbers}
	The leader should keep the sequence numbers contiguous
	so that the listener can commit the REQs in total order;
	the listener would be stuck if a sequence number were missing.
	During
	leader election, the new leader may miss
	some of the sequence numbers among the pending REQs, due to unpredictable
	delays and the non-FIFO property.
	
	We face a dilemma here. On one hand, if we reorder the pending REQs and count
	on the clients to resend the missing ones, some servers
	might already have committed some of them, and assigning a new sequence number to a committed
	REQ would ruin the correctness of the system. On the other hand, if
	we do not reorder them, there is no way to pass the final committing phase,
	since the listener has no knowledge of what happened to the missing REQs.

	We solve the problem by having the new leader wait. Since one of our assumptions
	is that the network is reliable, the missing accept messages of those REQs are still
	on their way. The election process does not end until all the gaps are filled with
	the missing pending messages.

\item \textbf{Deal With Timeout Introduced by Delays}
	We use timeouts to detect server failures, but a REQ is only
	ACKed after it is committed, so unpredictable network delays can
	generate false alarms about leader failure, which would cause frequent
	elections.
	To make sure a timeout is due to a leader failure rather than network delays or an ongoing leader election, the client sends a PING to the leader, which is supposed to be answered immediately.
	If the client does not receive an ECHO from the leader, it resends the REQ to the next server with a larger PID. Otherwise, it resends the REQ to the leader.

%The necessity of a PING msg for the client is that when a client sends REQ to a new leader, it usually takes longer for the client to receive ACK because the new leader need to finish election first. If the client simply relies on timeout and resends the REQ again to another server, this may lead to another leader election, thus increase the time of election further and result in more timeouts.

\end{itemize}
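The PING/ECHO probe above can be sketched over UDP, matching the report's messaging layer. The message strings and function name are assumptions; the point is that the probe has its own short timeout, independent of the commit-bound REQ timer:

```python
import socket

def leader_alive(leader_addr, timeout=1.0):
    """After a REQ timeout, send a PING and wait briefly for an ECHO.
    True means the leader answered (resend the REQ to it); False means
    suspect the leader and try the next server with a larger PID."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(b"PING", leader_addr)
        data, _ = sock.recvfrom(1024)
        return data == b"ECHO"
    except socket.timeout:
        return False
    finally:
        sock.close()
```

Because the leader replies to PING immediately (without committing anything), a slow commit no longer looks like a crash, which suppresses spurious elections.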

\section{Correctness Verification}

First, we check whether the files on different servers are the same. One file can have fewer records than another due to msg delays, but the content they both contain should be the same.

Second, we check whether every record maintains the causal relations. We extract the vector clocks from the log files and compare all vector clock pairs. No causal violations were detected.
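The pairwise causal check can be sketched as follows. A minimal sketch, assuming each log record is represented by its vector clock as a tuple (the record shape and function names are assumptions): a violation occurs when a record committed later in the log causally precedes an earlier one.

```python
from itertools import combinations

def vc_leq(a, b):
    """True if vector clock a <= b componentwise, i.e. a
    happened before (or equals) b."""
    return all(x <= y for x, y in zip(a, b))

def check_causal_order(log):
    """Verify that commit order respects causal order: no later
    record's vector clock may strictly precede an earlier record's."""
    for i, j in combinations(range(len(log)), 2):
        # log[i] committed before log[j]
        if vc_leq(log[j], log[i]) and log[i] != log[j]:
            return False
    return True
```

Concurrent records (incomparable vector clocks) may legally appear in either order; only a strict happened-before inversion counts as a violation.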



\end{document}
