
\documentclass[12pt]{article}

%\usepackage{algorithmic}
%\usepackage{amsmath}
%\usepackage{url}
\usepackage{graphicx}
\usepackage{appendix}

\setlength\pdfpagewidth{8.5in}
\setlength\pdfpageheight{11.0in} 
\setlength\textwidth{6.5in}
\setlength\textheight{9.0in}
\setlength\oddsidemargin{0.0in}
\setlength\evensidemargin{0.0in}
\setlength\topmargin{0.0in}
\setlength\headheight{0.0in}
\setlength\headsep{0.0in}

\begin{document}

\begin{center}                  
{\LARGE {\textbf{Distributed Systems - Project 1}} } \\
Vijay Chidambaram, Chitra Muthukrishnan, Deepak Ramamurthi, \& Elizabeth Soechting \\
\end{center}

\section{Introduction}
The goal of this project was to implement a reliable, fault-tolerant distributed
system. The system was to provide a key-value store similar to
Amazon Dynamo \cite{dynamo}. The service must be fault tolerant
and continue to work in the presence of failures: server crashes,
network failures or partitions, and process failures. Our system is
designed to remain available to users when servers go down, though
consistency may not be perfectly preserved. It is possible to lose
updates if a node crashes before it can propagate its changes to the
other nodes in the system. In the absence of crashes, our system is
eventually consistent with a maximum propagation delay of five seconds.

We describe the design of the system, our experience with Go, and some of the problems we faced when implementing the system.

\section{Design}
The system is designed with a target of four nodes in mind. It is
completely distributed, in that there is no single node acting as
the primary; rather, each node is the primary for some portion of
the data. In the ideal case (when all servers are running), each
server is the primary for one quarter of the key space. As the
primary, it always has the most up-to-date data for its portion of
the key space, and every five seconds it exchanges messages with the
other servers to push the current data for the partitions it is
primary for. In the event of a node failure, the remaining nodes
execute a protocol to elect a new primary.

Like Dynamo \cite{dynamo}, our distributed system arranges the available nodes on a ring (see Appendix \ref{ring} for a picture of this). When a node fails, the next node in the clockwise direction becomes the new primary for the failed node's partition. This has the nice property that a node leaving or entering the system causes minimal perturbation.
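The key-to-partition mapping can be sketched as follows. This is an illustrative sketch, not our actual code: the function name \texttt{partitionFor} and the choice of FNV-1a as the hash function are assumptions made for the example.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// partitionFor maps a key onto one of n partitions of the ring by
// hashing it. FNV-1a is an illustrative choice of hash function;
// any deterministic hash shared by all nodes would do.
func partitionFor(key string, n int) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % uint32(n))
}

func main() {
	// With four nodes, every key lands on one of partitions 0-3,
	// and every node computes the same answer for the same key.
	for _, k := range []string{"alice", "bob", "carol"} {
		fmt.Printf("key %q -> partition %d\n", k, partitionFor(k, 4))
	}
}
```

Because the mapping is deterministic, any node can compute which partition (and hence which primary) owns a key without consulting the others.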

Each node determines individually that another node has failed, and,
using a deterministic protocol, each node assigns the same node to be
the new primary. The nodes do not exchange any messages to come to
agreement. Similarly, when a node recovers from a crash, each other
node in the system discovers the recovered node individually and
correctly determines which partition that node is now the primary
for. The server side of the system consists of five parts: the
partition manager, the failure detector, the heartbeat generator,
the election manager, and the log updater. A diagram of the system
and the interaction of its various
pieces can be seen in Appendix \ref{appImage}.

We resolve conflicting writes, which can arise from network partitions, with a `last-write wins' rule. We assume that clock drift between nodes is low, and use the real time reported by each node to resolve conflicts.

\section{Partition Manager}
The partition manager is the core of the system. It interacts with all the other modules and ensures that the system behaves as required. On receiving a request, the web server asks the partition manager whether it is safe to perform the requested operation locally; the partition manager determines whether the local server is responsible for the particular key. All read operations are satisfied locally, so the web server is always cleared to perform a read. If the requested operation is a write, the server that holds the primary partition for the key is contacted and a remote write is performed. Once the remote operation has executed successfully, the web server is allowed to perform the write locally.
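The routing decision can be sketched as follows. The type and method names here (\texttt{PartitionManager}, \texttt{routeWrite}, \texttt{isLocalPrimary}) are illustrative assumptions, not our actual API.

```go
package main

import "fmt"

// PartitionManager tracks which node is currently the primary for each
// partition, from the perspective of one local node.
type PartitionManager struct {
	localNode int
	primaryOf map[int]int // partition -> node currently acting as primary
}

// isLocalPrimary reports whether the local node owns the partition.
// Reads are always served locally regardless of this answer.
func (pm *PartitionManager) isLocalPrimary(partition int) bool {
	return pm.primaryOf[partition] == pm.localNode
}

// routeWrite returns the node that must execute a write for the given
// partition before the write is applied locally.
func (pm *PartitionManager) routeWrite(partition int) int {
	return pm.primaryOf[partition]
}

func main() {
	// The ideal four-node configuration: node i is primary for partition i.
	pm := &PartitionManager{
		localNode: 0,
		primaryOf: map[int]int{0: 0, 1: 1, 2: 2, 3: 3},
	}
	fmt.Println(pm.isLocalPrimary(0), pm.routeWrite(2)) // true 2
}
```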

The partition manager also interacts with the failure detector. Whenever the failure detector reports the arrival or loss of a node, the partition manager passes this information to the election manager to determine how data must move. The local data structures are updated accordingly, and all future requests are redirected per the new information.

The partition manager periodically calls the log updater module to transfer any new additions to the log to all other nodes, so that the system remains consistent.

\section{Failure Detector and Heartbeat Generator}
The failure detector module runs on each node and keeps track of the connectivity to every other server.
The heartbeat generator consists of a pinger and a listener. The pinger periodically sends a probe to the listeners of the other
servers (the interval is 1s). The listener runs on a predefined port; it merely listens
for incoming probes and sends back a reply to the corresponding pinger. If the pinger does not elicit a reply from
the target node, the node is assumed to be down, and a message is passed to the partition manager with information
about the change in connectivity with that node. A similar message is sent to the partition manager when the target
node comes back up and pings get through.

\section{Election Manager}
The election manager is responsible for determining which node is the
primary for which partition as nodes join and leave the system.
Each node runs an instance of the election manager. The only
communication required to elect a new primary is whatever is needed
to detect that a node has failed or recovered. Thinking of the nodes
as lying on a ring, when a node crashes its primary partition is
shifted to the next available node in the clockwise direction. The
election manager knows the ideal configuration for the system, so
when a node recovers, it attempts to return the system to a state as
close to that ideal as possible: the partition originally belonging
to the recovered node is restored, and any partitions that it would
have taken over had it not been down are also transferred to it. For
example, imagine four nodes $A$, $B$, $C$, and $D$ which are
responsible for partitions $1$, $2$, $3$, and $4$, respectively. If
nodes $A$ and $B$ are down, then node $C$ is the primary for $1$,
$2$, and $3$, and $D$ is the primary for $4$. If node $B$ rejoins the
system, it becomes the primary for $2$ (its original partition), and
it also becomes the primary for $1$, because had $B$ been up when $A$
crashed, $B$ would have become the primary for $1$. The system does
not make any attempt at load balancing.
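The deterministic rule that every node applies independently can be sketched as follows; the function name \texttt{newPrimary} is an assumption for the example, not taken from our code.

```go
package main

import "fmt"

// newPrimary returns the node that takes over a failed node's partition:
// the next live node clockwise on the ring. Because every node runs this
// same deterministic rule over the same failure information, all nodes
// agree on the new primary without exchanging election messages.
func newPrimary(ring []string, failed string, up map[string]bool) string {
	for i, n := range ring {
		if n != failed {
			continue
		}
		for j := 1; j < len(ring); j++ {
			cand := ring[(i+j)%len(ring)]
			if up[cand] {
				return cand
			}
		}
	}
	return "" // no live node found
}

func main() {
	ring := []string{"A", "B", "C", "D"}
	up := map[string]bool{"A": false, "B": false, "C": true, "D": true}
	// With A and B down, C takes over both partitions, matching the
	// example in the text.
	fmt.Println(newPrimary(ring, "A", up), newPrimary(ring, "B", up)) // C C
}
```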

\section{Log Updater}
Each write operation to a node is logged: the key, the new value, and the time of the update are recorded. Each log record also has a unique ID associated with it. Each node keeps one log per partition.

The log updater is in charge of transferring logs from one node to another, and of updating the receiving node's hash tables based on the received log. Upon receiving a log, the node goes through each log record and tries to apply it to its hash table. If the time in the log record is later than the time of the last update of the same key in the hash table, the node changes the key's value and associated time; otherwise, it discards the update.

Each node maintains, for each partition, the last log ID that every other node has seen. When pushing updates to another node, only records that node has not seen are sent: if it has seen up to log ID 100 and the sending node's latest update has log ID 200, records 101 to 200 are sent. When a node comes up after a crash, rather than sending it partial log updates, the entire hash table is converted into log form and sent.
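The shipping and merging of log records can be sketched as below. The names (\texttt{record}, \texttt{unseen}, \texttt{apply}) and the use of a plain \texttt{int64} timestamp are simplifications made for the example.

```go
package main

import "fmt"

// record is one logged write: a unique, increasing ID, the key, the new
// value, and the wall-clock time of the update.
type record struct {
	id    int
	key   string
	value string
	time  int64
}

// unseen returns the suffix of the log that the receiver has not yet
// seen, given the last log ID it is known to have received.
func unseen(log []record, lastSeen int) []record {
	for i, r := range log {
		if r.id > lastSeen {
			return log[i:]
		}
	}
	return nil
}

// apply merges received records into the hash table, keeping a write
// only if it is newer than what is stored (last-write wins).
func apply(table map[string]record, recs []record) {
	for _, r := range recs {
		if cur, ok := table[r.key]; !ok || r.time > cur.time {
			table[r.key] = r
		}
	}
}

func main() {
	log := []record{{1, "x", "a", 10}, {2, "x", "b", 20}, {3, "y", "c", 15}}
	table := map[string]record{}
	// The receiver has seen up to ID 1, so only records 2 and 3 ship.
	apply(table, unseen(log, 1))
	fmt.Println(table["x"].value, table["y"].value) // b c
}
```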

\section{Experience with Go}
We found Go to be highly useful for the project. It took care of a number of low-level details such as mutexes, TCP connections, and so on, allowing us to focus on the high-level design rather than on low-level implementation details.

The object-oriented nature of Go allowed us to develop at a rapid pace, independently of each other. The ease with which Go supports concurrency let us program with many threads in mind, without worrying about instantiating those threads, about their efficiency, and so on. Go as a language is definitely a great choice for anyone dealing with concurrency.

\section{Problems encountered}
\begin{enumerate}

\item{\emph{Debugging Woes}}: Our program uses a very large number of threads. When the program crashed during development, it was therefore immensely difficult to pinpoint the source of the crash and obtain debugging information. The debugging tools for Go seem ill-equipped to deal with programs that have hundreds of threads.

\item {\emph{Problems with REJECT}}: Our failure detector takes a long time to detect a partitioned node. The Dial() function, which tries to
establish a connection with the remote machine, does not return
with a failure promptly; it takes a few minutes before it reports that the
other node is unreachable. This is not the case when
the other node is shut down, or when the program on the other node is
not running. We believe this happens because
the node on the other side of the partition returns a port-unreachable error,
which does not cause the underlying Dial() method to terminate.

\end{enumerate}

\bibliographystyle{abbrv}
\bibliography{project}

\appendixpage
\appendix
\section{System Design} \label{appImage}
\includegraphics[scale=0.50]{system2.png}

\section{Ring Design} \label{ring}
\includegraphics[scale=0.50]{ring.png}

\end{document}
