\documentclass[conference]{IEEEtran}
\usepackage{amsmath, array, graphicx, hyperref, algorithm, algorithmic}

\begin{document}
\title{Chord Maintenance and Merger}
\author{Group 2: Li Yifan, Wang Danqi, Yang Shengbo, Zhang Da}
\maketitle


\section{Introduction}

Chord \cite{10}, one of the most famous distributed hash table (DHT) algorithms, has been
widely used in structured overlay networks to locate Internet resources. In this project
of the course \lq\lq Advanced Topics in Distributed Systems\rq\rq, we investigate the principles
and operations of the Chord protocol proposed by MIT, and further study load balancing
and the merger of multiple Chord networks. We implement the Chord protocol in our simulation
program chordsim, which was developed by several members of our group in C++ on an SVN platform,
reproduce most of the results presented in the paper \cite{10}, and obtain additional results
for the extensions (load balancing and merger). Through the implementation and simulation, we
gain a better understanding of the attractive features of the Chord protocol, as well as of
some disadvantages that need to be overcome in future research work.

The rest of this report is structured as follows. Section 2 describes the system to be studied
and implemented, together with the underlying ideas. Section 3 presents the performance evaluation
design. Section 4 details the implementation of the Chord protocol, describing the system
components and main algorithms, while Section 5 presents the results obtained with our simulation
program. Finally, we conclude the work in Section 6.


\section{System Description}

\subsection{Basic protocol}

Chord involves a set of basic operations: predecessor lookup, successor lookup, key lookup,
stabilization and finger update. Stabilization and finger update are executed periodically
at each node to keep the Chord structure correct. In addition, during stabilization,
data is transferred among peers when a node's predecessor changes.
%Our simulation will implement these operations with two enhancements: 
%successor-list and key replication. Each chord maintains a list of $r$ nearest successors 
%on the Chord ring. The maintenance of successor-list is done by the stabilization algorithm. 
%The successor lookup process then adds a timeout operation to detect whether the current 
%successor is available. If not, the lookup message is resent to the next successor in 
%successor-list. Moreover, key replication is added to maintain data integrity during peer 
%failure. A Chord node always transfers it own keys to the successors in its successor-list. 
%Thus there are always $r$ replicas of each key in the system. When a node detect its predecessor 
%failed, it will be responsible for the predecessor's keys since it already has a copy of them. 
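The periodic stabilization step can be sketched as follows. This is a simplified illustration in the spirit of the Chord paper's pseudocode; the struct and function names (\textit{NodeView}, \textit{stabilize}) are hypothetical and do not correspond to chordsim's actual classes.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical minimal node view for illustration only; chordsim's
// real classes (ChordNode, ChordProtocol) differ.
struct NodeView {
    uint32_t id;
    NodeView* successor = nullptr;
    NodeView* predecessor = nullptr;
};

// True iff x lies strictly inside the ring segment (a, b), clockwise.
inline bool inOpenInterval(uint32_t x, uint32_t a, uint32_t b) {
    if (a < b) return a < x && x < b;
    return x > a || x < b;   // the interval wraps around zero
}

// Periodic stabilization as in the Chord paper: ask the successor for
// its predecessor, adopt it as the new successor if it lies between us,
// then notify the successor that we may be its predecessor.
void stabilize(NodeView& n) {
    NodeView* x = n.successor->predecessor;
    if (x != nullptr && inOpenInterval(x->id, n.id, n.successor->id))
        n.successor = x;
    NodeView* s = n.successor;
    if (s->predecessor == nullptr ||
        inOpenInterval(n.id, s->predecessor->id, s->id))
        s->predecessor = &n;
}
```

Running this periodically at every node lets a newly joined node (already known to its successor as predecessor) be discovered by the node preceding it on the ring.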

\subsection{Power-of-two choices load balancing}

Distributed hash tables (DHTs) play a significant part in many distributed applications
such as peer-to-peer systems \cite{6,8,9,10,12}. However, current schemes based on consistent
hashing, including Chord, are neither simple nor effective enough, which has motivated
alternatives such as \lq\lq virtual peers\rq\rq\ and the \lq\lq power of two choices\rq\rq\ paradigm.

In the basic consistent hashing approach, both peers and keys are hashed onto a one-dimensional
ring. Each key is then allocated to the nearest peer in the clockwise direction; in other
words, that peer is responsible for storing the value associated with the key \cite{1}. However,
a peer that happens to be responsible for a larger arc of the ring will tend to be assigned a
greater number of items, which may lead to load imbalance.

A solution proposed in \cite{10} introduces the concept of \lq\lq virtual peers\rq\rq. In this
scheme, supposing there are $n$ peers in all, each peer simulates a logarithmic number of \lq\lq
virtual peers\rq\rq\ and thus obtains several smaller segments whose total size is closer
to the expected $1/n$. Since more \lq\lq virtual peers\rq\rq\ can be assigned to a resource-rich
node and the load can be shifted easily, this method can theoretically balance the load much
better, but it does not work well in some cases in practice.

As a practical alternative to \lq\lq virtual peers\rq\rq, the \lq\lq power of two choices\rq\rq\
scheme addresses load balancing by using two or more hash functions to pick candidate bins
for each item to be inserted. Prior to insertion, the load of each candidate bin is compared and
the item is assigned to the bin with the lowest load. Moreover, if more than one candidate has
the same smallest load, the V\"ocking tie-breaking scheme can improve the result: among the
least loaded candidates, choose the arc with the smallest length, since that arc is least likely
to attract further load in the future.

In the \lq\lq power of two choices\rq\rq\ scheme, there are two options for storing data \cite{1}.
One is to store the data at one node only; the other is, in addition, to store a pointer to the
data at the other candidate nodes. The two options correspond to two retrieval methods. Suppose
$d$ hash functions are used in this scheme. In the first case, without pointers, a query for a
key must compute all $d$ hash functions and request all possible nodes in parallel; the node
actually storing the data corresponding to the query key answers. In the second case, with
pointers, requesting only one of the possible nodes is enough, since that node can forward the
request directly to the final node storing the queried data.
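A minimal sketch of power-of-$d$-choices key placement follows. Bins stand in for Chord peers, and the $d$ hash functions are simulated here by salting \texttt{std::hash} with the choice index; all names are illustrative, not chordsim's actual code, and the V\"ocking tie-break is omitted for brevity.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// A bin stands in for one Chord peer; load counts the items it stores.
struct Bin { std::size_t load = 0; };

// Simulate d independent hash functions by salting a single hash
// with the choice index (illustration only).
std::size_t hashWithSalt(const std::string& key, int salt, std::size_t nBins) {
    return std::hash<std::string>{}(key + "#" + std::to_string(salt)) % nBins;
}

// Insert the key into the least loaded of its d candidate bins and
// return the chosen bin index. Ties go to the first candidate found.
std::size_t insertKey(std::vector<Bin>& bins, const std::string& key, int d) {
    std::size_t best = hashWithSalt(key, 0, bins.size());
    for (int i = 1; i < d; ++i) {
        std::size_t cand = hashWithSalt(key, i, bins.size());
        if (bins[cand].load < bins[best].load) best = cand;
    }
    ++bins[best].load;
    return best;
}
```

With $d=1$ this degenerates to plain consistent hashing; increasing $d$ trades extra hash computations (and lookups, as discussed above) for a tighter load distribution.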

\subsection{Multiple Chord merging}

Before discussing the merger of multiple Chord overlays, we first state the basic
assumption made in \cite{Datta-merge}: there is a strict ordering among the peers, and hence
a unique merged network in terms of the choice of successor/predecessor nodes at each peer.

The assumption is illustrated in Fig. \ref{fig:merger}. There is a globally unique
peer identifier space, which can be seen as holes located along the ring. In a Chord
overlay, some of the holes are filled with nodes, and no two overlays have nodes
occupying the same hole. The merger of two (or more) Chord overlays can then be seen
as moving the nodes from the different rings onto one ring.
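Under this strict-ordering assumption, the merged ring is simply the sorted union of the disjoint node-id sets. The toy function below computes the successor ordering of the merged overlay; it is an illustration of the uniqueness property, not chordsim's merge implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Given the node ids of two overlays (disjoint by assumption), return
// the ring ordering of the unique merged overlay: the sorted union.
std::vector<int> mergedRing(std::vector<int> n1, std::vector<int> n2) {
    std::vector<int> ring;
    ring.reserve(n1.size() + n2.size());
    std::sort(n1.begin(), n1.end());
    std::sort(n2.begin(), n2.end());
    std::merge(n1.begin(), n1.end(), n2.begin(), n2.end(),
               std::back_inserter(ring));   // ids are disjoint by assumption
    return ring;
}
```

The actual protocol must of course reach this ordering through pairwise successor/predecessor updates, as described below, but the target state is fully determined by the id sets.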

\begin{figure}[htbp]
\centering
\includegraphics[height=5cm]{./figure/merge.pdf}
\caption{Merger of two Chord overlays}
\label{fig:merger}
\end{figure}

There are two main operations in the merger of multiple overlays:
reestablishment of the routing network and management of the keys (data).

\subsubsection{Reestablishment of The Routing Network}
The reestablishment of the routing network resembles the join process of the standard
Chord protocol. The main difference is that in the join process the incoming peer is an
isolated one, while in a merger the peer joining the current ring belongs to another Chord
overlay. Therefore, in order to keep routing correct in both rings, the merger process
must be propagated. For example, in Fig. \ref{fig:merger}, when peer 5 from network
$N_2$ and peer 6 from network $N_1$ come into contact with each other, peer 5 identifies
peer 6 as its new successor and the merger process begins. Propagating the modification of
successors/predecessors means that the other peers in both $N_1$ (e.g. peers 8, 12, 1) and
$N_2$ (e.g. peers 10, 0, 3) should also detect their new successor/predecessor in the new
overlay network ($N_1+N_2$) and finally form correct routing. As stated in \cite{Datta-merge},
the ring merger process can be accelerated if it is conducted in parallel over disjoint parts
of the identifier space. Such parallelization is straightforward because the merged network
is unique in terms of the choice of successor/predecessor nodes at each peer.

\subsubsection{Management of The Keys}
While routing correctness is adequate for network-centric applications, data-centric
applications also need to manage the binding of keys to the correct peers. Using the algorithm
proposed in \cite{Datta-merge}, the corresponding key/value bindings are transferred after the
reestablishment of the ring.

Besides the merger process, we will also implement the background ring consistency maintenance 
mechanism \cite{Datta-merge} which is continuous and can handle the inconsistencies caused by churn.


\section{Performance Evaluation Design}

\subsection{Path length}

As stated in \cite{10}, $2^k$ nodes are deployed in the simulation and $100\times2^k$ keys
are mapped onto these nodes according to the Chord protocol. Every node randomly chooses a set
of keys from the system to query, and the path length (in hops) of each query is recorded. The
mean and the 1st and 99th percentiles of the path length as a function of $k$ will be plotted
to demonstrate the logarithmic growth of the query path length as the number of nodes increases.

\subsection{Simultaneous node failures }

This part focuses on measuring the impact of relatively large numbers of simultaneous node failures on system performance.

Simulation scenario: $10^6$ keys are assigned to $10^4$ nodes, a fraction $p$ of which fail. After
the system stabilizes, every node randomly selects a set of keys to query. A failed response for
a queried key is treated as a lookup failure.
%Under this scenario, three kinds of experiments are conducted to measure different metrics:

%\begin{itemize}
%\item Look up failure rate without replicas of keys \\
%In the Chord paper, look up failure rate is measured under a system that neither replicate keys nor 
%recover them after stabilization. The mean look up failure rate and 95\% confidence interval is depicted 
%as a function of $p$.
%\end{itemize}
%\begin{itemize}
%\item Look up failure rate {\em \&} Communication overheads with replicas of keys \\
%To remedy the problem of loss of keys due to node failures, replicas of keys are distributed into 
%$\gamma$ nodes succeeding original successor of keys. This process can be facilitated by successor 
%list. The result figure will show the look up failure rate versus fraction of node failures under 
%different number of replicas $\gamma$. \\
%While replicas increase robustness facing node failures, it also introduced communication overheads 
%to maintain $\gamma$ replicas. So communication overheads(packets/second or bytes/second) versus 
%fraction of node failures under different number of replicas is also measured. \\
%From comparison of these two result figure, we expect to find an optimal choose of number of 
%replicas with relatively high robustness and low overheads.
%\end{itemize}
%\begin{itemize}
%\item Stabilization time\\
%Another key metric to evaluate the protocol performance is the recovering time from nodes\rq \ failure 
%to re-stabilization. The system during stabilizing is under relatively high inconsistence, so shortening 
%the stabilization time can effectively decrease the query failures rate. The result will be represented 
%by time spent as a function of fraction of failure nodes.
%\end{itemize}

\subsection{Lookup latency}

Path length only measures search performance in terms of hops. In fact, it is more
important to consider the lookup latency when routing messages through the Internet. Although
proximity routing provides an efficient algorithm for taking advantage of network proximity,
a further question is how to estimate the network distance between two nodes so as to find the
nearest routing target. Since network measurement and positioning is a complicated research
area of its own, which is out of this project's scope, we simplify the network distance between
two nodes to the difference of their IP addresses:
\[
	distance( a, b ) = | a.ipaddr - b.ipaddr |
\]
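This simplified metric can be sketched in a few lines; the parser below is an illustrative helper, not part of chordsim's interface, and treats an IPv4 address as a 32-bit integer.

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>

// Parse a dotted-quad IPv4 string into its 32-bit integer value
// (illustrative helper; no error handling).
uint32_t parseIPv4(const std::string& dotted) {
    std::istringstream in(dotted);
    uint32_t value = 0, octet = 0;
    char dot;
    for (int i = 0; i < 4; ++i) {
        if (i) in >> dot;          // skip the '.'
        in >> octet;
        value = (value << 8) | (octet & 0xFF);
    }
    return value;
}

// distance(a, b) = |a.ipaddr - b.ipaddr| on 32-bit addresses;
// widened to 64 bits to avoid unsigned wrap-around.
uint64_t ipDistance(uint32_t a, uint32_t b) {
    return a > b ? static_cast<uint64_t>(a) - b : static_cast<uint64_t>(b) - a;
}
```

The metric is symmetric by construction, though of course it only crudely approximates real network latency.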

IP addresses are retrieved from the availability data in \cite{av-traces}. The Chord protocol
with and without a proximity routing table will be simulated to measure the lookup latency in
a static environment without peer churn.

\subsection{Lookups and data movements cost during stabilization}

%We will measure the impact of successor-list and data replication during peer churn in this set of simulation. 
Peer IP addresses and online times will be extracted from the availability traces in \cite{av-traces}.
Two algorithms will be simulated: the basic Chord protocol and the variant with successor-list
and data replication. The lookup failure ratio and the data movement cost will be measured to
compare system stability and overhead. The number of nodes in the successor-list is taken as a
parameter in order to find its optimal value.

\subsection{Load balance}

We will implement the \lq\lq power of two choices\rq\rq\ scheme to balance the load. The comparison
between the protocol with and without \lq\lq power of two choices\rq\rq\ will focus on the number
of keys per peer: the 1st percentile, mean and 99th percentile loads will be measured.

\subsection{Multiple chord merging}

The merger of two or more Chord overlays will be carried out. The most important performance
metric we are interested in is how fast the merger process converges. It will be measured by
tracking the following two metrics over time:

\begin{itemize}
 \item Routing success rate: to measure the efficiency of reestablishment of the routing network
 \item Query success rate: to measure the efficiency of the management of the keys
\end{itemize}

Two main experiments will be deployed:

\begin{itemize}
 \item Merger of two Chord overlays\\
 The above two success rates, for routing and queries located within an original network and
across the two, will be measured as functions of time, as in \cite{Datta-merge}.
 \item Merger of multiple Chord overlays (2-5)\\
 The two success rates for routing and queries from Network 1 to Networks 1 (itself), 2, 3, 4
and 5 will be measured as Networks 2, 3, 4 and 5 are merged with Network 1 one by one (two or
more merger processes may overlap).
\end{itemize}

If time permits, we will also add churn to the system. The background ring consistency
maintenance mechanism will be implemented, and the performance with and without the background
synchronization will be compared.


\section{System Implementation}
\begin{figure*}[htbp]
\centering
\includegraphics[height=10cm]{./figure/architecture.pdf}
\caption{System architecture}
\label{fig:arch}
\end{figure*}


\subsection{Overview}
We developed an event-driven simulator, chordsim, and implemented the Chord protocol with
it. Chordsim provides a flexible framework, including an event engine, a configurable log
mechanism and basic network topology management, to facilitate the simulation of a distributed
system. The basic Chord protocol, power-of-$n$ load balancing and multiple Chord network
merging are implemented. The latest version of chordsim can be downloaded from Google
Code: \url{http://code.google.com/p/chordsim/}.

\subsection{System Components}
ChordSim consists of several components (Fig. \ref{fig:arch}): memory management, the
configuration subsystem, the log subsystem, the simulator engine, the underlay topology
manager, the event scheduler and the Chord network.

\subsubsection{Memory Management}
ChordSim provides simple reference-counter-based memory management.
Since the network simulation involves a large number of packet transmissions,
which means many objects are allocated and destroyed during
the simulation, it is critical to provide a mechanism that facilitates
pointer manipulation so as to reduce memory errors and make debugging easier.
It also allows us to optimize memory allocation in the future.

The memory management is implemented by the class {\itshape Object} and a set
of macros. {\itshape Object} maintains a counter which keeps track of the
number of pointers pointing to the instance. Four macros, NEW\_VAR\_OBJECT,
NEW\_VAR\_OBJECT2, NEW\_OBJECT and RELEASE\_OBJECT, are defined to create
and release objects. Each class in ChordSim that can be dynamically
instantiated (for example, with the {\itshape new} operator) should derive
from {\itshape Object}, and dynamically allocated objects should be created
and released through these macros.
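The core of such a reference-counted base class can be sketched as follows. This is a minimal illustration in the spirit of {\itshape Object}; chordsim's real class and its NEW\_*/RELEASE\_OBJECT macros may differ in detail, and the {\itshape Packet} subclass here is purely hypothetical.

```cpp
#include <cassert>

// Minimal sketch of a reference-counter base class (illustration only).
class Object {
public:
    void ref() { ++count_; }
    // Drops one reference; returns true if the object destroyed itself.
    bool unref() {
        if (--count_ == 0) { delete this; return true; }
        return false;
    }
protected:
    virtual ~Object() = default;   // heap-only: destruction goes via unref()
private:
    int count_ = 1;                // the creator holds the first reference
};

// Hypothetical subclass; tracks live instances to make the counting visible.
struct Packet : Object {
    static int alive;
    Packet()           { ++alive; }
    ~Packet() override { --alive; }
};
int Packet::alive = 0;
```

Making the destructor protected and virtual ensures objects can only be destroyed through the counter, which is the property the RELEASE\_OBJECT-style macros rely on.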

\subsubsection{Configuration System}
The class {\itshape Configuration} defines all parameters of the simulation.
A global variable, {\itshape config}, is defined to provide these parameters
in the global domain. Parameters are parsed from an XML file, using a simple
XML parser by Frank Vanden Berghen.

\subsubsection{Log Subsystem}
The class {\itshape PeerLoger} provides a set of methods for flexible
and reliable log output. It divides the output into several categories,
such as logScheduler, logDebug and logError, to provide fine-grained
control of the output. The first argument of these log functions,
{\itshape level}, is currently unused. The switches controlling the output
are defined in config.xml.

Since some file systems may not support large files, PeerLoger creates
a new file when the current output file grows larger than 1GB.

\subsubsection{Simulator Engine}
The simulator engine is implemented by the class {\itshape Simulator}, which
provides time advancing and event scheduling. Its implementation is
straightforward.

The event queue uses a heap-organized vector to make event insertion fast, so as
to support large-scale simulation.
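Such an engine can be sketched with a standard-library binary heap. The names ({\itshape EventQueue}, {\itshape schedule}, {\itshape runNext}) are illustrative and are not chordsim's actual API.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// One scheduled event: a firing time and an action to run.
struct Event {
    double time;                    // simulated firing time (seconds)
    std::function<void()> action;
};
struct LaterFirst {
    bool operator()(const Event& a, const Event& b) const {
        return a.time > b.time;     // makes priority_queue a min-heap on time
    }
};

class EventQueue {
public:
    void schedule(double t, std::function<void()> fn) {
        heap_.push(Event{t, std::move(fn)});   // O(log n) insertion
    }
    // Pop and run the earliest event, advancing the clock to its time.
    bool runNext() {
        if (heap_.empty()) return false;
        Event e = heap_.top();
        heap_.pop();
        now_ = e.time;
        e.action();
        return true;
    }
    double now() const { return now_; }
private:
    std::priority_queue<Event, std::vector<Event>, LaterFirst> heap_;
    double now_ = 0.0;
};
```

The heap gives $O(\log n)$ insertion and extraction, which is what keeps event scheduling cheap at large simulation scales.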

\subsubsection{Underlay Topology Manager}
The class {\itshape UnderTopoMgr} defines an end-to-end-level underlay network.
It creates all nodes according to the parameters defined in {\itshape config.undertopo\_}.
Currently only static networks and end-to-end latency are supported.

\subsubsection{Event Scheduler}
When the simulation begins, class {\itshape Scheduler} generates a number 
of events which define the online time and duration of nodes to kick off 
the simulation. Node lifetime can be created by three sources: sequential, 
random and trace file. Sequential lifetime allows a node to get online 
one by one with a certain inter-arrival time. Random lifetime generates 
node online time and duration using some distributions. Trace 
file scheduler reads online time and duration from node availability 
trace file. 

\subsubsection{Chord Network}
Since there may be more than one co-existing Chord network, the Chord
network is designed with the following classes:

\begin{itemize}
\item {\itshape ChordGod} manages all online nodes in the global domain.
The whole network is divided into several Chord networks and each Chord
network is identified by a unique ID. Each node registers itself with ChordGod
on joining a Chord network and unregisters itself on leaving.
Moreover, when a node is about to join a Chord network, it contacts ChordGod
to get a node as a starting point for looking up its successor;

\item {\itshape ChordNode} is derived from Node; it not only acts as an end
node, but also processes Chord messages;

\item {\itshape ChordProtocol} keeps a node's status in a specific Chord network.
The Chord network is identified by ChordProtocol::net\_id\_;

\item {\itshape ChordKeyInfo} keeps the information related to a specific key in a
Chord network at one node. Since a node may be responsible for multiple keys,
a set of such records is maintained.

\item {\itshape Application} is a container for real data items.
\end{itemize}

The relationship of the classes is as follows:\\
\indent {\bfseries A ChordNode object contains multiple ChordProtocol objects,
denoting that one physical node may participate in several Chord networks at one time;
a ChordProtocol object contains multiple ChordKeyInfo objects, denoting that, in a
specific Chord network, a node may be responsible for several keys; a ChordNode
has one Application object to keep all data.}
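The containment described above can be declared as a toy sketch; the class names follow the report, but the member layouts are illustrative only.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Illustrative containment only; chordsim's real members differ.
struct ChordKeyInfo  { uint64_t key_id; };
struct ChordProtocol {
    int net_id;                            // identifies the Chord network
    std::vector<ChordKeyInfo> keys;        // keys this node is responsible for
};
struct Application   { std::map<uint64_t, std::string> data; };
struct ChordNode {
    std::vector<ChordProtocol> protocols;  // one per joined Chord network
    Application app;                       // container for all real data items
};
```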

\subsection{Main Algorithm Description}
Supposing the key space is $2^m$, a node in chordsim, whose node ID is $n$, keeps 
a set of information to maintain the structure, as listed in Table \ref{tab:info_list}.

\begin{table}[thbp]
	\centering
	\begin{tabular}{|l|l|}
		\hline
		Fields & Description\\
		\hline
		$finger$ & A vector of $m$ items\\
		\hline
		$finger[k].start$ & $(n+2^k) \bmod 2^m$, $0 \leq k < m$\\
		\hline
		$finger[k].node$ & first node $\geq finger[k].start$\\
		\hline
		successor & $finger[0].node$ \\
		\hline
		predecessor & the previous node on the identifier circle \\
		\hline
	\end{tabular}
	\caption{Fields at node $n$ for Chord maintenance}
	\label{tab:info_list}
\end{table}
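The construction of the $finger[k].start$ values in Table \ref{tab:info_list} can be written out directly; the field names below mirror the table, while the function name is illustrative rather than chordsim's.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One finger-table entry, mirroring Table I.
struct Finger {
    uint64_t start;      // (n + 2^k) mod 2^m
    int      node = -1;  // first node with id >= start; -1 = unresolved
};

// Build the start values of node n's finger table for key space 2^m.
std::vector<Finger> makeFingerStarts(uint64_t n, unsigned m) {
    const uint64_t space = 1ULL << m;      // key space size 2^m
    std::vector<Finger> fingers(m);
    for (unsigned k = 0; k < m; ++k)
        fingers[k].start = (n + (1ULL << k)) % space;
    return fingers;
}
```

The `node` fields are then resolved through `find_successor` lookups, as in Algorithm \ref{IFT}.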

When a node joins the system, it needs to know at least one other node that is already
in the Chord network. Chordsim implements a tracker as a global manager which keeps all
online nodes in the network. A joining node $n$ gets a random node $i$ from the tracker and
sends a search message for {\itshape n.finger[0].start} to $i$. The joining process is
described in Algorithm \ref{NNJ}.

\begin{algorithm}
\caption{New Node Joining}
\label{NNJ}
\begin{algorithmic}
\STATE //tracker is a global manager as a rendezvous point
\STATE $n = tracker.GetNode()$
\IF{n == NULL} 
    \STATE //I am the first node, get all the keys from tracker 
    \STATE $GetKeysFromTracker()$
\ELSE
    \STATE $InitFingerTable()$
\ENDIF
\end{algorithmic}
\end{algorithm}


\begin{algorithm}
\caption{Initialization of Finger Table}
\label{IFT}
\begin{algorithmic}
\STATE $n = a\ node\ known\ by\ current\ node$
\FOR{$i \leftarrow 0$ to $m-1$}
    \STATE $finger[i].node = n.find\_successor(finger[i].start)$
\ENDFOR
\end{algorithmic}
\end{algorithm}

After node $n$ joins the system, a set of routine procedures is executed periodically
to keep all information listed in Table \ref{tab:info_list} up-to-date. These procedures
include checking whether a finger or the predecessor is online (Algorithm \ref{CDF}),
stabilization by querying the successor's predecessor, fixing fingers (Algorithm \ref{FDF})
and checking keys to see whether they are correctly distributed (Algorithm \ref{CK}).

The merge operation is triggered by the tracker. When a merge event happens, the tracker
randomly selects one peer in each currently existing network and notifies it to start
merging. The merge process is implemented as a combination of leaving the existing network
and joining a new one, as described in Algorithm \ref{MERGE}.

\begin{algorithm}
\caption{Check Dead Fingers}
\label{CDF}
\begin{algorithmic}
\FOR{$f \leftarrow finger[0]$ to $finger[m-1]$}
    \IF{$f.node.isOffline()$}
        \STATE $f.setInvalid()$
    \ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}


\begin{algorithm}
\caption{Fix Fingers}
\label{FDF}
\begin{algorithmic}
\STATE $n = a\ node\ known\ by\ current\ node$
\STATE $max\_msg\_number = 10$
\FOR{ $i \leftarrow 1$ to $max\_msg\_number$}
    \STATE $j = random\ index\ into\ finger\ table$
    \STATE $n.find\_successor(finger[j].start)$
    \IF{$the\ search\ succeeds$}
        \STATE $Update(finger[j])$
    \ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}


\begin{algorithm}
\caption{Check Keys}
\label{CK}
\begin{algorithmic}
\STATE $n = a\ node\ known\ by\ current\ node$
\STATE $i = random\ index\ into\ keys[\ ]$
\IF{$n.find\_successor(keys[i].id) != my\_id$}
    \IF{$n.find\_predecessor(keys[i].id) == my\_id$}
        \STATE $TransferKeysTo(keys[i],finger[0].node)$
    \ELSE
        \STATE $target = find\_successor(keys[i].id)$
        \IF{$target != NULL$}
            \STATE $TransferKeysTo(keys[i],target)$
        \ENDIF
    \ENDIF
    \STATE $RemoveKeys(keys[i])$
\ENDIF
\end{algorithmic}
\end{algorithm}

\begin{algorithm}
\caption{Merge from network $p$ to $q$}
\label{MERGE}
\begin{algorithmic}
\STATE Join network $q$
\STATE Move keys from network $p$ to $q$ (local operation)
\STATE Leave network $p$
\STATE Notify successor in network $p$ to merge to network $q$
\STATE Stabilize in network $q$
\end{algorithmic}
\end{algorithm}



\section{Simulation Results And Analysis}

\subsection{Path length}

Path length is defined as the number of hops it takes to resolve a query in Chord. We simulated
a network with $N=2^k$ nodes storing $100 \times 2^k$ keys, varying $k$ from 4 to 12.
Each node randomly selects a key to query. The path length is presented in Fig. \ref{fig:pathlen}.
Fig. \ref{fig:pathlen}(a) shows that the average path length grows logarithmically with the
number of nodes and is about ${1\over2}\log N$. Fig. \ref{fig:pathlen}(b) shows the PDF of the
path length when $k=10$.

\begin{figure*}[htbp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=5.5cm]{./diagram/pathlen.pdf}&
\includegraphics[height=5.5cm]{./diagram/hops_distribution.pdf}\\
(a)&(b)\\
\end{tabular}
\end{center}
\caption{(a) The path length as a function of network size. (b) The PDF of the path length 
in the case of a $2^{10}$ node network}
\label{fig:pathlen}
\end{figure*}

\subsection{Simultaneous Node Failures}

In a peer-to-peer system, a structured network should be resilient to peer dynamics. To verify
the robustness of Chord, simultaneous node failures are considered in our simulation. We
simulate a Chord network with $10^3$ nodes storing $10^4$ keys. Once the network has stabilized,
a set of randomly picked nodes fails simultaneously; we then wait for the network to become
stable again and send queries to it. Fig. \ref{fig:failure_miss} shows that the fraction of
failed lookups is almost the same as the fraction of failed nodes, which implies that most
failed lookups are due to keys lost on failed nodes and that the structure is maintained
correctly after simultaneous node failures.

\begin{figure}[htbp]
\centering
\includegraphics[height=5.5cm]{./diagram/failure_miss.pdf}
\caption{The fraction of lookups that fail as a function of the fraction of nodes that fail.}
\label{fig:failure_miss}
\end{figure}

\subsection{Lookups During Stabilization \label{sec:lookup_stab} }

In this experiment, we test the fraction of failed lookups in Chord during node joins and
departures. There are two possible causes of failed lookups: keys lost when nodes leave the
system, and structure inconsistency due to node churn. Only the second cause is considered in
this experiment, since it reflects the structural robustness of Chord against node churn.

The network starts with 500 nodes, then nodes join and leave the system according to a Poisson process of mean arrival
rate of $R$. Key lookups are generated at a rate of one per second. Stabilization interval and check key interval are
set to 30 seconds. The results of Figure \ref{fig:churn_lookups} are averaged over 10000 seconds of simulation time.

Figure \ref{fig:churn_lookups} shows that the fraction of lookups that fail due to structure
inconsistency is quite limited during node churn and grows very slowly as the node failure rate
increases. However, we found that under peer churn some queries take a very large number of hops
to resolve. For example, when the failure rate is 0.1, the mean value of the 99th percentile of
path length is 42680, which is so large that it would be considered a lookup failure in a real
application. Therefore, although Chord shows excellent robustness during node churn, the
structure inconsistencies may cause lookup loops in the network and introduce high search delay.
One possible solution is to choose a threshold limiting the maximum number of lookup hops. This
threshold is a parameter in chordsim and will be studied in the future.

\begin{figure}[htbp]
\centering
\includegraphics[height=5.5cm]{./diagram/churn_lookups.pdf}
\caption{The fraction of lookups that fail during nodes fail and join.}
\label{fig:churn_lookups}
\end{figure}

\subsection{Data movements cost during stabilization}
Figure \ref{fig:churn_datamove} shows the data movement during stabilization. The experimental
setup is the same as in Section \ref{sec:lookup_stab}. Note that only node joins cause data movement.

\begin{figure}[htbp]
\centering
\includegraphics[height=5.5cm]{./diagram/churn_datamove.pdf}
\caption{Data movement during nodes fail and join.}
\label{fig:churn_datamove}
\end{figure}


%\begin{figure}[htbp]
%\centering
%\includegraphics[height=5.5cm]{./diagram/transfer_hist.pdf}
%\caption{Data movement during stabilization}
%\label{fig:transfer_hist}
%\end{figure}

\subsection{Load Balance}

\begin{figure*}[htbp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=5.5cm]{./diagram/keypernode.pdf}&
\includegraphics[height=5.5cm]{./diagram/key_distribution.pdf}\\
(a)&(b)\\
\end{tabular}
\end{center}
\caption{(a) The mean and 1st and 99th percentiles of the number of keys stored per node in 
a $10^4$ node network. (b) The probability density function (PDF) of the number of keys per 
node. The total number of keys is $5\times10^5$.}
\label{fig:key-node}
\end{figure*}

In this section, we detail the results of our load balance experiments. We use
$10^4$ peers with the number of keys ranging from $10^5$ to $10^6$, and try different key
assignment schemes, from the power of 1 choice up to the power of 9 choices.

The power of 1 choice is in fact the basic protocol, without any mechanism dealing with load
balance. The result is shown in Fig. \ref{fig:key-node}. As mentioned in \cite{10}, the number
of keys per node exhibits large variations, which is confirmed by our simulation. Recalling the
discussion in Section 2, the power of 2 choices is a promising mechanism for balancing the load
in Chord networks, so we implement it in our simulator and try different values of $n$ in the
power of $n$ choices.

Fig. \ref{fig:sigma-power} shows how the standard deviation of the number of keys assigned to
different peers changes with the applied scheme (power of $i$ choices, with $i$ ranging from 1
to 9). The standard deviation is used here as a measure of how well the load of the system is
balanced: the smaller the value, the better the balance. Note that as the number of keys
increases from $10^5$ to $5\times10^5$ (marked by different colors), the standard deviation
increases, as expected. Also note that, for a fixed number of keys, the standard deviation
decreases as the number of choices increases, with the most pronounced decrease from the power
of 1 choice to the power of 2 choices. Further increasing the number of choices does not
contribute much, since the load is already approximately balanced. The result thus suggests
that the power of 2 or 3 choices is enough in practice. A larger number of choices would
increase the overhead significantly and make the system more vulnerable to churn, degrading
performance.

\begin{figure}[htbp]
\centering
\includegraphics[height=5.5cm]{./diagram/sigma-power.pdf}
\caption{Standard deviation with different value of power}
\label{fig:sigma-power}
\end{figure}


\begin{figure*}[htbp]
\centering
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[height=5.5cm]{./diagram/power1-key.pdf}
\caption{Power of 1}
\label{fig:power1-key}
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[height=5.5cm]{./diagram/power2-key.pdf}
\caption{Power of 2}
\label{fig:power2-key}
\end{minipage}

\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[height=5.5cm]{./diagram/power3-key.pdf}
\caption{Power of 3}
\label{fig:power3-key}
\end{minipage}
\end{figure*}

Fig. \ref{fig:power1-key}, \ref{fig:power2-key} and \ref{fig:power3-key} show the 1st-percentile, 
mean and 99th-percentile loads under the three load-balancing strategies, corresponding to the 
power of 1 choice, the power of 2 choices and the power of 3 choices, respectively. The bottom of 
each blue bar represents the 1st-percentile load (the mean number of keys allocated to the 100 
least-loaded peers in the experiment), while the top represents the 99th-percentile load (the mean 
number of keys allocated to the 100 most-loaded peers); the mean load is shown as the short 
horizontal bar between the two. The mean load increases linearly with the number of keys (the red 
line) and indicates how close each scheme comes to the ideal: in the ideal case both the 
1st-percentile and 99th-percentile loads lie at or near the mean load, so the larger the gap 
between them, the worse the load balance.
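Concretely, the bar endpoints are obtained by sorting the per-node loads and averaging the tails. The helper below is our own illustration of that definition (not the chordsim code); the tail size of 100 matches the experiment description.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Mean load of the `tail` lightest nodes (bar bottom, 1st percentile)
// or the `tail` heaviest nodes (bar top, 99th percentile).
double mean_of_tail(std::vector<int> loads, std::size_t tail, bool heaviest) {
    std::sort(loads.begin(), loads.end());
    auto first = heaviest ? loads.end() - static_cast<std::ptrdiff_t>(tail)
                          : loads.begin();
    auto last  = heaviest ? loads.end()
                          : loads.begin() + static_cast<std::ptrdiff_t>(tail);
    return std::accumulate(first, last, 0.0) / static_cast<double>(tail);
}
```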

Comparing Fig. \ref{fig:power1-key} with Fig. \ref{fig:power2-key}, we can see that applying 
the power of 2 choices balances the load significantly: the gap between the 1st-percentile and 
99th-percentile loads shrinks greatly and both move much closer to the mean load, whereas under 
the power of 1 choice the gap is so large that the 99th-percentile load is excessive. This 
comparison coincides with the conclusion drawn from Fig. \ref{fig:sigma-power}. Comparing 
Fig. \ref{fig:power2-key} with Fig. \ref{fig:power3-key}, we find that the power of 3 choices 
does achieve better load balance than the power of 2 choices, confirming the trend that a larger 
number of choices yields a more balanced load. However, the difference between these two schemes 
is far smaller than that between the power of 2 choices and the power of 1 choice, so the power 
of 3 choices brings only a small improvement over the power of 2 choices, which again matches 
the conclusion from Fig. \ref{fig:sigma-power}.


\subsection{Multiple Chord Merger}


\begin{figure*}[htbp]
\centering
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[height=5.5cm]{./diagram/merge_query_hist.pdf}
\caption{Recall during merger of two networks}
\label{fig:merge_recall}
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[height=5.5cm]{./diagram/merge_transfer_hist.pdf}
\caption{Data movement during merger of two networks}
\label{fig:merge_transfer_hist}
\end{minipage}
\end{figure*}

To simplify the merge process, we implement the merger by combining multiple existing Chord 
networks into a new network. For example, given two Chord networks, network 0 and network 1, 
their merger is accomplished by generating a new network, namely network 3, and moving all nodes 
of networks 0 and 1 into it. We consider two networks, each with 500 nodes and 5000 keys, and 
randomly pick one node in each network to start the merging. Meanwhile, queries are generated 
randomly with an inter-arrival time of 1 second. The results are aggregated by interaction round, 
where one round equals 10 stabilization intervals.
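At the data-structure level, the target state of this procedure is simple to state: the merged network's ring is the sorted union of the two node-ID rings, and after stabilization each key resides at its successor on the merged ring. The sketch below (hypothetical helpers, not the chordsim code) makes this concrete with small integer node IDs.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// The merged network's ideal ring: the sorted union of the two node-ID sets.
std::vector<std::uint64_t> merge_rings(std::vector<std::uint64_t> a,
                                       const std::vector<std::uint64_t>& b) {
    a.insert(a.end(), b.begin(), b.end());
    std::sort(a.begin(), a.end());
    a.erase(std::unique(a.begin(), a.end()), a.end());
    return a;
}

// Successor of a key on a sorted ring: the first node ID >= key,
// wrapping around to the smallest ID.
std::uint64_t successor(const std::vector<std::uint64_t>& ring,
                        std::uint64_t key) {
    auto it = std::lower_bound(ring.begin(), ring.end(), key);
    return it == ring.end() ? ring.front() : *it;
}
```

For instance, merging rings $\{10, 40, 70\}$ and $\{25, 55, 90\}$ moves a key with ID 20 from node 40 to node 25, which is exactly the kind of data movement visible in Fig. \ref{fig:merge_transfer_hist}.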

Fig. \ref{fig:merge_recall} shows the recall during merging. Recall is the fraction of the data 
relevant to a query that is successfully retrieved. $R_{i/j}$ denotes the recall when querying a 
key originally belonging to network $i$ from a node originally in network $j$. While the two 
networks are separated, both $R_{0/1}$ and $R_{1/0}$ should be 0, since the networks are not 
connected. After merging, once the new network has stabilized, all keys should be reachable from 
both original networks. Fig. \ref{fig:merge_recall} shows that our results conform to this 
analysis: both $R_{0/1}$ and $R_{1/0}$ increase from 0 to 1 over time.
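The tally behind these curves can be sketched as follows (hypothetical helper, not the chordsim code): a cross-network query succeeds only when the node currently holding the key is reachable from the querying node, so recall is the fraction of queried keys whose holders are reachable.

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Recall for one round: count how many queried keys have their current
// holder inside the set of nodes reachable from the querying node.
double recall(const std::set<std::uint64_t>& reachable_nodes,
              const std::vector<std::uint64_t>& key_holders) {
    std::size_t hit = 0;
    for (auto h : key_holders)
        if (reachable_nodes.count(h)) ++hit;
    return key_holders.empty()
               ? 0.0
               : static_cast<double>(hit) /
                     static_cast<double>(key_holders.size());
}
```

Before merging, the holders of network 0's keys are disjoint from the nodes reachable inside network 1, so $R_{0/1}=0$; once all nodes have joined the merged ring, every holder becomes reachable and recall rises to 1.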

Fig. \ref{fig:merge_transfer_hist} shows the data movement during merging. As more and more 
nodes join the new network, the data is redistributed over the nodes.


\section{Conclusion}

Chord, as one of the first four DHTs, shows excellent efficiency in locating resources, and it is 
also robust when equipped with suitable stabilization schemes. For the load balance problem, our 
simulations demonstrate that mechanisms such as the power of multiple choices can balance the load 
effectively, but at the cost of extra overhead. On the other hand, compared with unstructured 
overlay networks, in which merging occurs trivially, it is not easy to merge two or more Chord 
networks perfectly. As the simulation results show, the merger procedure takes a certain amount of 
time during which keys may not be found correctly, and a number of keys need to be transferred. 
Our simulator, chordsim, is developed not only to measure the performance of the basic Chord 
protocol, but also to provide a fast, lightweight and flexible framework for further research on 
Chord and other peer-to-peer algorithms. The source code of chordsim is available at Google Code 
and will continue to be updated.





\begin{thebibliography}{99}

\bibitem{6}
Karger D. R., Lehman E., Leighton F. T., Panigrahy R., Levine M. S., and Lewin D. 
``Consistent hashing and random trees: Distributed caching protocols for relieving 
hot spots on the world wide web,'' 
In ACM Symposium on Theory of Computing (May 1997), pp. 654–663.

\bibitem{8}
Ratnasamy S., Francis P., Handley M., Karp R., and Shenker S. 
``A scalable content-addressable network,'' 
In ACM SIGCOMM (2001), pp. 161–172.

\bibitem{9}
Rowstron A., and Druschel P. 
``Pastry: Scalable, distributed object location and routing for large-scale 
peer-to-peer systems,'' 
In Proceedings of Middleware 2001 (2001).

\bibitem{10}
Stoica I., Morris R., Karger D., Kaashoek M. F., and Balakrishnan H. 
``Chord: A scalable peer-to-peer lookup service for internet applications,''
In ACM SIGCOMM (2001), pp. 149–160.

\bibitem{12}
Zhao B. Y., Kubiatowicz J. D., and Joseph A. D. 
``Tapestry: An infrastructure for fault-tolerant wide-area location and routing,'' 
Tech. Rep. UCB/CSD-01-1141, UC Berkeley, Apr. 2001.

\bibitem{1}
Byers J., Considine J., and Mitzenmacher M. 
``Simple load balancing for distributed hash tables,'' 
In IPTPS (2003).

\bibitem{Datta-merge}
Anwitaman Datta. 
``Merging ring structured overlay indices: A data-centric approach,'' 
Manuscript, 2009.

\bibitem{av-traces}
``Availability traces,'' 
\url{http://www.cs.berkeley.edu/~pbg/availability/}

\end{thebibliography}


\end{document}


