\input{format.tex}

\section*{Oct 1, 2014}
Evaluated the memory usage of the Java models for nodes/links. In a heap of 3814MB, we loaded node instances and related data, as shown in Table \ref{heap_size}. This suggests that the memory consumption for the entire dataset would be around 14GB, not including links.

\begin{table*}[t]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|l|r|r|}
\hline
\textbf{Class Name} & \textbf{Count} & \textbf{Size}\\
\hline
scala.collection.immutable.\$colon\$colon & 30,240,944 & 967,710,208 \\
\hline
char[] & 15,739,467 & 800,258,752 \\
\hline
java.lang.String & 15,739,398 & 440,703,144 \\
\hline
edu.clarkson.cs.clientlib.caida.itdk.model.Node & 14,520,443 & 1,335,880,756 \\
\hline
\end{tabular}
\caption{Objects in the Heap Dump}
\label{heap_size}
\end{table*}

\section*{Oct 3, 2014}
Here's how we load data into memory and construct a distributed graph. First we discuss the partitioning of nodes. In PowerGraph \cite{joseph_2012} the authors note that they limit the number of machines spanned by each vertex in order to minimize storage and network overhead. In our case, however, the more machines a vertex is split across, the higher concurrency we can achieve.

\section*{Oct 4, 2014}
Ran an experiment to collect data on the node/link degree distributions. The results are shown in Figures \ref{cdf_nodedegree} and \ref{cdf_linkdegree}.

In our system, a link (similar to an edge in other systems) connects to multiple nodes. Let $V(l)$ be the vertices on link $l$ and $L(v)$ be the links connecting to vertex $v$. A $p$-vertex cut is necessary if the vertices $V(l)$ of a link $l$ are allocated to $p$ machines. The structure of the Internet topology is static, which allows different machines to hold overlapping parts.

\section*{Oct 6, 2014}
A partition algorithm needs to address the following problems: it should be able to allocate links to different machines, and it should be able to build routing tables that allow vertices spread across different machines to communicate. The second part can be done by comparing the links in memory against the links in the file: any missing link indicates a need for communication. With a huge number of links, the memory required to maintain a routing table is also huge, so it can be an option to always broadcast information instead of maintaining an in-memory routing table.

As mentioned in \cite{joseph_2012}, a vertex-cut can be implemented by allocating edges to machines. That paper describes three allocation methods. The first is random allocation: each edge is randomly assigned to some machine. A better idea is to allocate an edge to a machine where one of its nodes has already been placed. This requires a mapping from nodes to machines, maintained either locally or globally. A global mapping table provides better placement while requiring more communication, and it is clear how to implement such a table. In contrast, a local mapping table knows exactly what has been allocated to its own machine but can only guess what happened remotely: as it walks across each link, it randomly assigns the link to some machine. Here is a possible algorithm.
\begin{algorithmic}
\State $map(n) \gets $ empty
\For {$e = (u,v) \in Edges$}
\State $scores \gets empty$ 
\For {$m \in Machines$}
\If {$m.edges.size > threshold$}
\State $scores[m].\_1 = -100$
\State \textbf{continue}
\EndIf
\If {$m \in map(u)$ and $m \in map(v)$}
\State $scores[m].\_1 \text{+=} 20$
\ElsIf {$m\in map(u)$ or $m \in map(v)$}
\State $scores[m].\_1 \text{+=} 10$
\EndIf
\State $scores[m].\_2 \gets -m.edges.size$
\EndFor
\State $m\_max = high\_score(scores)$
\State $m\_max.edges \gets e$
\State $map(u) \gets m\_max$
\State $map(v) \gets m\_max$
\EndFor
\end{algorithmic}
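As a rough in-memory sketch of this allocation loop (class names, the score weights, and the tie-breaking scheme are illustrative, not taken from the actual codebase):

```java
import java.util.*;

public class GreedyAllocator {
    // Minimal stand-in for a machine holding a set of edges.
    static class Machine {
        final int id;
        final List<int[]> edges = new ArrayList<>();
        Machine(int id) { this.id = id; }
    }

    // Which machines each node has already been placed on.
    static final Map<Integer, Set<Machine>> map = new HashMap<>();

    // Allocate edge (u, v): prefer machines that already hold u and/or v,
    // break ties by load, and skip machines over the size threshold.
    // Assumes at least one machine is under the threshold.
    static Machine allocate(int u, int v, List<Machine> machines, int threshold) {
        Machine best = null;
        long bestScore = Long.MIN_VALUE;
        for (Machine m : machines) {
            if (m.edges.size() > threshold) continue;   // the "-100" case in the pseudocode
            long score = 0;
            boolean hasU = map.getOrDefault(u, Set.of()).contains(m);
            boolean hasV = map.getOrDefault(v, Set.of()).contains(m);
            if (hasU && hasV) score += 20;
            else if (hasU || hasV) score += 10;
            // fold the second score component (negative edge count) into one comparison
            score = score * 1_000_000 - m.edges.size();
            if (score > bestScore) { bestScore = score; best = m; }
        }
        best.edges.add(new int[]{u, v});
        map.computeIfAbsent(u, k -> new HashSet<>()).add(best);
        map.computeIfAbsent(v, k -> new HashSet<>()).add(best);
        return best;
    }
}
```

The scaling by $10^6$ simply makes the placement score dominate and the machine load act as a tie-breaker, matching the two-component score in the pseudocode.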

I want to know whether I need to scan the entire data file on each machine. If this consumes too much time, I may need to split the file into small pieces and scan them separately. This leads to another problem: if different machines process different pieces, the machines must be able to detect missing edges. A simple method is for each machine to write its link set to HDFS. The link sets can then be unioned, de-duplicated, and joined with the original link set to find the unallocated links. As this remainder is assumed to be small, a coordinator can then allocate it to some machines.
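The union-distinct-and-join step could be sketched with in-memory sets standing in for the HDFS job (names are illustrative):

```java
import java.util.*;

public class MissingLinks {
    // Given the full link set and the link sets each machine reports,
    // return the links no machine has allocated.
    static Set<String> unallocated(Set<String> allLinks, List<Set<String>> perMachine) {
        Set<String> covered = new HashSet<>();
        for (Set<String> s : perMachine) covered.addAll(s);  // union + distinct
        Set<String> missing = new HashSet<>(allLinks);
        missing.removeAll(covered);                          // anti-join with the original set
        return missing;
    }
}
```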

I have thought about lazy loading but found that it may not be a good fit for this problem.

TODO: Evaluate the time needed for one machine to process entire dataset.
%--------------------------------------------------------------------

\section*{Oct 7, 2014}
One method that further decreases the number of vertices to be evaluated is Bloom-filter merging. We merge links connected by the same node into a small cluster, which is then treated as an unsplittable unit in subsequent allocations. Naturally, the larger a node's degree, the smaller the chance it should be split, so it makes sense to start from the nodes with the highest degrees.

We first discuss a sequential version of this clustering, as shown in Figure \ref{seq_bf_cluster}. In this version, nodes are ordered by degree (the number of links containing the node) in descending order. We then group the links connecting to the same node into a cluster. A link that has been clustered once will not be clustered again; this is tracked with a Bloom filter. At the end, we obtain a list of clusters.
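A minimal sketch of this sequential clustering, using a plain HashSet in place of the Bloom filter (a real implementation would use a Bloom filter to trade a small false-positive rate for a much smaller memory footprint):

```java
import java.util.*;

public class SequentialClustering {
    // nodeLinks: node id -> ids of the links containing that node.
    // Groups links by node in descending degree order; a link joins
    // only the first cluster that claims it.
    static List<List<Integer>> cluster(Map<Integer, List<Integer>> nodeLinks) {
        List<Integer> nodes = new ArrayList<>(nodeLinks.keySet());
        // order nodes by degree, descending
        nodes.sort((a, b) -> nodeLinks.get(b).size() - nodeLinks.get(a).size());
        Set<Integer> seen = new HashSet<>();   // stands in for the Bloom filter
        List<List<Integer>> clusters = new ArrayList<>();
        for (int n : nodes) {
            List<Integer> c = new ArrayList<>();
            for (int link : nodeLinks.get(n)) {
                if (seen.add(link)) c.add(link);   // skip links already clustered
            }
            if (!c.isEmpty()) clusters.add(c);
        }
        return clusters;
    }
}
```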
\begin{figure}
\centering
\begin{tikzpicture}
[
>=triangle 60,              % Nice arrows; your taste may be different
start chain=going below,
node distance=6mm and 60mm,
base/.style={draw=gray!80,text width=3.5cm, align=center,on chain, on grid, minimum width={4cm}},
block/.style= {base, rectangle, minimum height={1.2cm}, fill=red!20},
test/.style={base, diamond, aspect=2.4, fill=yellow!20, text width={3cm}, minimum width={1cm}}
]

\node[block](degree-order){Compute and order by node degree}; 
\node[block](each-nl){Foreach nodelink record};
\node[block](each-link){Foreach a link};
\node[test](test-exist) {Link is clustered};
\node[block](mark){Mark the link};
\node[block](cluster){Cluster all marked link};
\node[block](record){Record all clustered links in Bloom Filter};
\node[block](end){End};

\draw[->] (degree-order) to (each-nl);
\draw[->] (each-nl) to (each-link);
\draw[->] (each-link) to (test-exist);
\draw[->] (test-exist) to (mark.north) node[above left]{F};
\draw[->] (mark) to (cluster);
\draw[->] (cluster) to (record);
\draw[->] (record) to (end);

\draw[->] (mark.west) to ([left=of mark, xshift=-0.7cm]mark.west) to ([xshift=-0.7cm]each-link.west) to (each-link.west);
\draw[->] (record.west) to ([left=of record, xshift=-1.3cm]record.west) to ([xshift=-1.3cm]each-nl.west) to (each-nl.west);
\draw[->] (test-exist.east) to ([xshift=4mm] test-exist.east) to ([xshift=5.9mm] each-link.east) to (each-link.east);

\begin{scope}[on background layer]
\node[rounded corners, fit={($(each-nl.north west)+(-17mm,2mm)$)($(record.south east)+(14mm,-2mm)$)}, fill=green!10]{};
\node[rounded corners, fit={($(each-link.north west)+(-9mm,2mm)$)($(mark.south east)+(9mm,-2mm)$)}, fill=blue!10]{};
\end{scope}
\end{tikzpicture}
\caption{Sequential Clustering}
\label{seq_bf_cluster}
\end{figure}

A parallel version of the same method can be obtained by splitting the file into pieces and clustering them separately. However, this raises the risk of links falling into more than one cluster. What is the consequence of a link being managed by multiple machines? It merely wastes some space on the target machines and will not impact the performance of our system.

\section*{Oct 8, 2014}
The clustering result can be used directly for random partitioning across machines. We can also adopt the algorithm described above; in that case, each cluster is treated as a single edge containing multiple nodes.

The next step is to build the routing table, whose structure is shown below:
\[
<node>: <machine\_id>+
\]

With either a hash function or a pre-defined mapping table from the previous steps, machine\_id is uniquely determined by the cluster\_id obtained earlier. We can use the id of the node around which the cluster was built as the cluster id.
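An in-memory sketch of the routing-table join (the actual job runs over the tables in Figure \ref{build_routing}; variable names here are illustrative):

```java
import java.util.*;

public class RoutingTableBuilder {
    // clusterOfLink:    link id -> cluster id  (the Clustering table keyed by link)
    // linksOfNode:      node id -> link ids    (the Nodelink table)
    // machineOfCluster: cluster id -> machine id
    // Result: node id -> the set of machines holding any of its links.
    static Map<Integer, Set<Integer>> build(Map<Integer, Integer> clusterOfLink,
                                            Map<Integer, List<Integer>> linksOfNode,
                                            Map<Integer, Integer> machineOfCluster) {
        Map<Integer, Set<Integer>> routing = new HashMap<>();
        for (Map.Entry<Integer, List<Integer>> e : linksOfNode.entrySet()) {
            for (int link : e.getValue()) {
                // join Nodelink with Clustering on link-id, then with the machine mapping
                int machine = machineOfCluster.get(clusterOfLink.get(link));
                routing.computeIfAbsent(e.getKey(), k -> new HashSet<>()).add(machine);
            }
        }
        return routing;
    }
}
```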

\section*{Oct 9, 2014}

Figure \ref{build_routing} shows the steps to build a routing table.

\begin{figure}
\centering
\begin{tikzpicture}
[
class/.style={rounded corners, rectangle split, rectangle split parts=2, draw=gray!80, minimum width=2.5cm, minimum height = {2cm},text width= 3cm},
abstract/.style={class, fill=blue!20},
concrete/.style={class, fill=yellow!20},
anchor/.style={draw=none}
]
\node[concrete](cluster) {
\textbf{Clustering}
\nodepart{second} cluster-id \newline link-id
};

\node[concrete, right=of cluster](nodelink) {
\textbf{Nodelink}
\nodepart{second} node-id \newline link-id
};

\node[concrete, below= of cluster](joined-result){
\textbf{Joined Result}
\nodepart{second} cluster-id \newline node-id \newline link-id
};

\node[concrete, below=of nodelink](mcn-mp) {
\textbf{Machine\\ Mapping}
\nodepart{second} cluster-id \newline machine-id
};

\node[concrete, below= of $(joined-result.south)!0.5!(mcn-mp.south)$](routing-table){
\textbf{Routing Table}
\nodepart{second} node-id \newline machine-id
};

\draw(cluster) to (nodelink);
\draw[->]($(cluster)!0.5!(nodelink)$) to ($(nodelink)!0.5!(joined-result)$) to ($(cluster)!0.5!(joined-result)$) to (joined-result);
\draw(joined-result) to (mcn-mp);
\draw[->]($(mcn-mp)!0.5!(joined-result)$) to (routing-table);
\end{tikzpicture}
\caption{Build Routing Table}
\label{build_routing}
\end{figure}

This routing table contains the node information for all machines without filtering. To save space, each machine could remove its own information from the table when loading the file.

With the nodes, index, and routing table ready, we can start working on the memory structure. The operations we need to support on this data model are listed below; I will add more as I identify them.

\begin{itemize}
\item Given a route with a known length and known start/stop nodes, find all possible paths.
\item With traceroute information we obtain, we want to show the most heavily used routers.
\end{itemize}

I suddenly realize that the easiest way to partition network routers is to use their geographical information.

\section*{Oct 10, 2014}
For each node connecting to a link, there may or may not be an IP address. The existence of an IP address means that the node connects to the link only through that IP. When an access path comes to this link, it can reach this node only if the IP matches. Similarly, if an IP address is provided, a node will only propagate a message to the links matching that IP address.

\begin{figure}
\centering
\begin{tikzpicture}
[
map/.style={rectangle split,rectangle split parts=2, minimum width={4cm},minimum height={1cm}, draw=gray!90, fill=yellow!20}
]

\node[map](ip-node){\textbf{Node Map} \nodepart{second} IP $\to$ node\_id};
\node[map, below=of ip-node](link-map-1){\textbf{Link Map 1}\nodepart{second} (node, ip)$\to$ link};
\node[map, right=of link-map-1](link-map-2) {\textbf{Link Map 2}\nodepart{second}node$\to$ List$<link>$};
\node[map,below=of link-map-1](lnm1){\textbf{Nodelink Map 1}\nodepart{second} (link, ip)$\to$ node};
\node[map,right=of lnm1](lnm2){\textbf{Nodelink Map 2}\nodepart{second} link $\to$ List$<node>$};
\end{tikzpicture}
\caption{Memory Structure}
\label{mem_structure}
\end{figure}
\section*{Oct 11, 2014}
Started working on the memory representation of nodes and links in a partition. We have the following in-memory tables (hash maps).
The basic operations on a graph are to get edges by vertex and vertices by edge. In our case, the problem is more sophisticated due to the introduction of shared links and the IP address associated with each node. The tables we build are shown in Figure \ref{mem_structure}.

A path-searching operation starts by locating a node with an IP address. This is done directly using the Node Map. The next step is to search for all links connecting to this node through that IP address. As some links may not have a known IP associated with them, we build two tables. Link Map 1 maps a (node\_id, ip) pair to a link\_id; if the IP used to connect to some link is known, an entry is appended to this map. Link Map 2 maps a node\_id to a list of link\_ids; all links that connect to this node but have no corresponding IP are appended to this list. The search logic is shown below.
\begin{algorithmic}
\Function{findLink}{node\_id, ip}
\If{linkMap1.contains(node\_id,ip)}
\State \Return linkMap1.get(node\_id,ip);
\EndIf
\State \Return linkMap2.get(node\_id);
\EndFunction
\end{algorithmic}

Given a link id, we also need to find all nodes connecting to it. This is achieved using Nodelink Map 1 and Nodelink Map 2. Nodelink Map 1 maps a (link\_id, ip) pair to a node\_id; it is used to directly locate the next node on the link based on the IP address. If no known IP address matches the given one, Nodelink Map 2 is used to get all next nodes without a known IP, and they are all used as candidates for the next step.
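The two lookups can be sketched together as follows (the key encoding and class names are illustrative; the actual code may differ):

```java
import java.util.*;

public class PartitionMaps {
    // The four tables of the memory structure figure, keyed as "id:ip"
    // where an IP is known and by plain id where it is not.
    Map<String, List<Integer>> linkMap1 = new HashMap<>();      // "node:ip" -> link with known IP
    Map<Integer, List<Integer>> linkMap2 = new HashMap<>();     // node -> links with no known IP
    Map<String, List<Integer>> nodelinkMap1 = new HashMap<>();  // "link:ip" -> node with known IP
    Map<Integer, List<Integer>> nodelinkMap2 = new HashMap<>(); // link -> nodes with no known IP

    // If the (node, ip) pair is known, the link is determined;
    // otherwise fall back to all links of the node without a known IP.
    List<Integer> findLink(int nodeId, String ip) {
        List<Integer> exact = linkMap1.get(nodeId + ":" + ip);
        if (exact != null) return exact;
        return linkMap2.getOrDefault(nodeId, List.of());
    }

    // Same pattern in the other direction: link -> next node(s).
    List<Integer> findNode(int linkId, String ip) {
        List<Integer> exact = nodelinkMap1.get(linkId + ":" + ip);
        if (exact != null) return exact;
        return nodelinkMap2.getOrDefault(linkId, List.of());
    }
}
```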

\section*{Oct 12, 2014}
I found that I made a mistake in building separate tables instead of putting the data directly into the node structure. There will be only one global node map; everything else is reached through this entry. I fixed it in the code.

\section*{Oct 13, 2014}
\subsection*{Task structure}
With the partition and memory tables done, I started to design the programming model of this system. We also adopt the idea of vertex programming; however, we allow the program to indicate which node will be triggered next. This is achieved with the \textit{Task} interface. A task is a logical job executed in a separate thread.

Whenever a task needs to be propagated to another node, a subtask is created and sent to the destination machine. The subtask records the parent id so that responses can later be combined into the primary task's result. A task has the following statuses:
\begin{itemize}
\item \textbf{Ready} Ready to be executed.
\item \textbf{Active} In active execution.
\item \textbf{WaitingForSub} Local execution is done, but remote execution has not finished yet.
\item \textbf{End} Computation has been done.
\item \textbf{Error} Error happens either locally or remotely.
\end{itemize}

\subsection*{Communication}
I also started designing the distributed-system communication. We use a simple single-master structure. We don't support dynamically adding machines, so the allocation can be maintained in a local file. The master keeps a heartbeat with each client and updates its status table accordingly. Tasks are handled by message channels with a timeout, so worker nodes do not need to know the status of others. A timeout means no machine is available in the indicated partition, and the related task is marked as an error.

I use a message structure to interact between machines, which allows the data to be processed asynchronously. There will be these message channels:
\begin{itemize}
\item Master-worker channel. This is a multiple-publisher, single-consumer queue. Workers send their heartbeat messages to the master through this channel.
\item Dataplane channel. This is a multiple-publisher, multiple-consumer queue. Messages come with a destination, and each consumer has a selector indicating which messages it accepts.
\end{itemize}

\subsection*{Failure recovery}
The failure of nodes requires a manual restart and involves no reallocation of jobs. The master node can be restarted quickly without penalty. Worker nodes are re-initialized when restarted and resume their heartbeat reports once the restart finishes.

Jobs submitted to a node locally are persisted on submission and deleted when the task finishes. This allows them to be re-submitted when the node recovers from a failure. Jobs propagated from other machines are also recorded, and a failure message is returned to the source machine on restart. The source machine then resends the request to another machine if one is available. If no machine in the partition is available, the task is stopped and an error is reported to the user.

\section*{Oct 15, 2014}
I started the system design. I noticed that a core difference between our work and what GraphX/PowerGraph target is that our work focuses on multi-task execution, while their systems are batch processing systems. This is a strong argument that our system is necessary.

Currently the system is separated into three parts: Task, Scheduler, and DistComm. When a user submits work, we generate a new task object representing it; each task gets a uid upon generation. The task is submitted to the scheduler and will be scheduled to run. The scheduler is basically a thread pool of TaskThreads. A TaskThread has a reference to a Partition and provides the TaskWorker, the object that gives vertex programs access to the graph data. The TaskThread is also responsible for spawning child tasks when it notices that a node spans multiple machines. This is done through its reference to the worker node.

Who manages task status changes? There are two choices: the task itself or the scheduler. As it is the TaskThread that updates task status, and the TaskThread is closely tied to the scheduler, it is more natural to put the status-change listener on the scheduler rather than on the task itself. The WorkerUnit listens for taskEnd events: for a normal task, this is used to return the query result to the user; for a subtask, it is used to return the result to the requesting node.
\section*{Nov 18, 2014}
I have finished coding the Internet Topology Platform and am ready to work on the partitioning task. Random partitioning and geo-based partitioning are easy, so the only remaining problem is how to do the clustering partitioning in a distributed way. The previous version assumed a single reducer, which is not practical for large datasets. I am trying to improve it.

The idea is to run multiple rounds of clustering, merging adjacent nodes into big clusters. To start, we make each node a separate cluster and assign each link to the adjacent node with the highest degree. This gives us a table CL of (cluster\_id (just node\_id), link\_id). We then run several rounds of cluster merging. We merge two clusters only if they share nodes. To get the candidate list, we use the (node\_id, link\_id) table to generate a (link\_id1, link\_id2) table of adjacent links and join it with CL. Filtering out pairs of links belonging to the same cluster yields a candidate list in the format (clustera\_id, linka\_id, node\_id, linkb\_id, clusterb\_id).

For each of these pairs, we calculate a merge factor with two parts: the number of common nodes between the pair, and the sum of the total degrees of the nodes involved. The result is (clustera, clusterb, score1, score2). Each cluster merges into the adjacent cluster with the highest score. If a cluster has only a single adjacent cluster, we also merge them. We use this merge information to rewrite the cluster-to-link mapping from the beginning. If necessary, the procedure can be repeated.

There may be chained mappings like ``$A \to B$'', ``$B \to C$'', which need to be resolved. Given the current data size, this can temporarily be done on a single machine. When the data grows too big, we can convert it to a graph problem and solve it on a distributed graph platform.
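A single-machine sketch of resolving such chains, assuming the merge mapping contains no cycles (names are illustrative):

```java
import java.util.*;

public class ChainResolver {
    // Resolve chained merge mappings: A->B and B->C become A->C and B->C,
    // so every cluster points at its final destination.
    static Map<String, String> resolve(Map<String, String> mergeTo) {
        Map<String, String> out = new HashMap<>();
        for (String from : mergeTo.keySet()) {
            String root = from;
            while (mergeTo.containsKey(root)) root = mergeTo.get(root); // follow the chain
            out.put(from, root);
        }
        return out;
    }
}
```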

\section*{Nov 19, 2014}

Another way to do this is gradual shrinking. In each round we only merge the leaf clusters (those that are not intermediaries) into their neighbors. This may be slower, but it requires less system configuration.

\section*{Nov 20, 2014}

I have finished the initial clustering part.
\section*{Feb 2, 2015}
The work is resumed. With the system coding ready, I will now focus on the partitioning method comparison. As a recap, there will be three candidate methods for partitioning:
\begin{itemize}
\item Random Partitioning
\item Degree-based Partitioning
\item Geo-based Partitioning
\end{itemize}
As mentioned on Oct 7, 2014, with a mapping from link id to partition id we can generate the routing table. Thus here we only need to discuss how to map a link id to a partition.

Random partitioning exists mainly as a baseline for comparison. The generation process is simple: each link is randomly assigned to a partition.

Degree-based partitioning requires preprocessing the data. We first order the nodes by degree. A node with a high degree has many links attached to it, and if we separate these links into different partitions, a vertex cut is generated. To avoid this, we put all such links into the same partition. This is the basic idea behind degree-based partitioning, which can be implemented with several iterations. We call it $I_n$ partitioning in the subsequent discussion, where $n$ is the number of iterations.

$I_1$ partitioning first calculates node degrees and groups links by node. Nodes with higher degrees have higher priority; ties are broken randomly. This is shown in Figure \ref{i1partition}. The clustered link groups are then split across partitions randomly.

\begin{figure}
\centering
\begin{tikzpicture}
[
node distance=2cm,
node/.style={circle,  draw=gray!90, fill=yellow!20},
link/.style={rectangle, draw=gray!90, fill=blue!20}
]
\node[node](n1){n1};
\node[link,right of = n1](l1){Link 1};
\node[node, right of =l1](n2){n2};
\node[yshift=-0.3cm] at (n1.south){Degree 20};
\node[yshift=-0.3cm] at (n2.south){Degree 5};
\node[yshift=-0.4cm] at (l1.south){To n1};
\draw(n1.east) to (l1.west);
\draw(l1.east) to (n2.west);
\end{tikzpicture}
\caption{$I_1$ Partitioning}
\label{i1partition}
\end{figure}

The goal of a perfect partitioning is to ensure that most operations can be done in as few partitions as possible. $I_2$ partitioning expands the grouping distance to 2: links are grouped to the highest-degree node within distance 2. Again, nodes with higher degrees take precedence. This extends to the $I_n$ case, where links are grouped to the highest-degree node within distance $n$. This is shown in Figure \ref{inpartition}.

\begin{figure}
\centering
\begin{tikzpicture}
[
node distance=2cm,
node/.style={circle,  draw=gray!90, fill=yellow!20},
link/.style={rectangle, draw=gray!90, fill=blue!20}
]
\node[node](n1){n1};
\node[link,right of = n1](l1){Link 1};
\node[node, right of =l1](n2){n2};
\node[node, below of = l1, yshift=1cm](n3){n3};
\node[link, below of = n3, yshift={1cm}](l2){Link 2};
\node[node, right of = l2](n4){n4};
\node[yshift=-0.3cm] at (n1.south){Degree 20};
\node[yshift=-0.3cm] at (n2.south){Degree 5};
\node[yshift=-0.3cm] at (n4.south){Degree 30};
\draw(n1.east) to (l1.west);
\draw(l1.east) to (n2.west);
\draw(l1.south) to (n3.north);
\draw(l2.north) to (n3.south);
\draw(l2.east) to (n4.west);
\end{tikzpicture}
\begin{flushleft}
The distances are D(n1, l1) = 1, D(n2, l1) = 1, and D(n4, l1) = 2. So when $n=1$, Link 1 goes to n1, while when $n = 2$, Link 1 goes to n4.
\end{flushleft}
\caption{$I_n$ Partitioning}
\label{inpartition}
\end{figure}

Before the experiment, I have no idea what value $n$ should take. However, the resulting clusters should not be too big, so we will set a threshold limiting the size of each cluster.

Geo-based partitioning is similar to degree-based partitioning, but the grouping logic is based on geolocation rather than degree. Links between nodes with the same geo information are grouped into the same cluster, while links between different geo regions are considered later. We then separate the clusters into partitions. If a geo region contains too many links, it is further split using methods like degree-based partitioning. The links between geo regions are then assigned to the region containing fewer links, to keep the amount of links per partition fair. We can also try two-layer geo partitioning: first cluster links at the country level; if a cluster is too small to fill a single partition, merge it with other country-level clusters from the same continent to form a bigger cluster.

\subsection*{Feb 3, 2015}
\subsubsection*{How to compare different partitioning methods}
It is important to define fair comparison metrics between the different partitioning methods. The goal of a good partitioning is to minimize the communication between partitions. Also, the vertices on the cut should have small degrees, because a higher degree means a higher chance of being involved in a query. We therefore declare the following two comparison metrics:
\begin{itemize}
\item The probability that a randomly selected path crosses partition borders. The lower the better.
\item The sum of the degrees of the vertices on the cut. The lower the better.
\end{itemize}
\subsubsection*{Implementation of $I_1$ degree-based partitioning}
\begin{enumerate}
\item Calculate the degree of each vertex
\item Join link with vertex and degree
\item Group data with link, find node with highest degree in each group
\item Get distinct node list, and distribute each node into partition
\item Generate mapping from link to partition
\end{enumerate}
\subsubsection*{Implementation of $I_n$ degree-based partitioning}
In $I_1$ partitioning, we obtained a node $\to$ link mapping $M(n_i, l_j)$, where $M(n_i, l_j) = 1$ if $D(n_i, l_j) = 1$, i.e., $n_i$ and $l_j$ are directly connected. In $I_n$ partitioning, we replace this with an expanded mapping containing all $(n_i, l_j)$ that satisfy $D(n_i, l_j) \le n$. This can be done iteratively.
\begin{enumerate}
\item Obtain the D(1) mapping.
\item Join the D(1) mapping with the node-link mapping to get a node-node mapping.
\item Join the node-node mapping with the node-link mapping to get the D(2) mapping.
\item Repeat steps 2 and 3 until all required D(i) mappings up to D(n) are obtained.
\item $M(n) = \sum^{n}_{i=1} D(i)$
\end{enumerate} 
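A small in-memory sketch of this iteration (the real version runs as joins on Hadoop; the names are illustrative):

```java
import java.util.*;

public class InExpansion {
    // nodeLinks: node -> directly connected links (the D(1) mapping).
    // Returns node -> all links within distance <= n, where each iteration
    // joins the current mapping with link->node, then node->link, adding one hop.
    static Map<Integer, Set<Integer>> expand(Map<Integer, Set<Integer>> nodeLinks, int n) {
        // invert D(1) to get link -> nodes
        Map<Integer, Set<Integer>> linkNodes = new HashMap<>();
        for (Map.Entry<Integer, Set<Integer>> e : nodeLinks.entrySet())
            for (int l : e.getValue())
                linkNodes.computeIfAbsent(l, k -> new HashSet<>()).add(e.getKey());
        Map<Integer, Set<Integer>> m = new HashMap<>();
        nodeLinks.forEach((k, v) -> m.put(k, new HashSet<>(v)));   // start from D(1)
        Map<Integer, Set<Integer>> cur = m;
        for (int i = 1; i < n; i++) {
            Map<Integer, Set<Integer>> next = new HashMap<>();
            for (Map.Entry<Integer, Set<Integer>> e : cur.entrySet()) {
                Set<Integer> links = new HashSet<>(e.getValue());
                for (int l : e.getValue())               // join with link -> node
                    for (int mid : linkNodes.get(l))
                        links.addAll(nodeLinks.get(mid)); // join with node -> link
                next.put(e.getKey(), links);
            }
            cur = next;
        }
        return cur;
    }
}
```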
\subsection*{Feb 11, 2015}

Done with the development and testing of the Hadoop-based data preprocessing described above. The process is now running on the local Hadoop environment. \hl{Not sure whether it is necessary to consider the time consumption of the Hadoop environment.}

\subsection*{Feb 12, 2015}
Started setting up the comparison between the different partitioning methods. As mentioned before, there are several criteria:
\begin{itemize}
\item \textbf{The average number of border crossings on the path connecting two random nodes.} Fewer border crossings mean less communication between machines and thus better performance.
\item \textbf{The average number of partitions each vertex on the cut connects to.} The higher this number, the more communication is generated.
\item \textbf{The count of nodes on the border.} Fewer nodes on the border mean a smaller routing table and thus less time spent on routing-table searches.
\end{itemize}

Here we also update the algorithm for generating the routing table. In contrast to Figure \ref{build_routing}, this updated version no longer maps partitions to machines during routing generation, because in the software platform only the target partition is necessary; the target machine can be allocated dynamically. Instead, we add one more step to filter out the nodes that connect to only a single partition. This version is shown in Figure \ref{build_routing_updated}. The corresponding code is also updated.
\begin{figure}
\centering
\begin{tikzpicture}
[
class/.style={rounded corners, rectangle split, rectangle split parts=2, draw=gray!80, minimum width=2.5cm, minimum height = {2cm},text width= 3cm},
abstract/.style={class, fill=blue!20},
concrete/.style={class, fill=yellow!20},
anchor/.style={draw=none}
]
\node[concrete](cluster) {
\textbf{Link Partition}
\nodepart{second} partition-id \newline link-id
};

\node[concrete, right=of cluster](nodelink) {
\textbf{Nodelink}
\nodepart{second} node-id \newline link-id
};

\node[concrete, below= of cluster](joined-result){
\textbf{Joined Result}
\nodepart{second} partition-id \newline node-id \newline link-id
};

\node[concrete, below=of nodelink](mcn-mp) {
\textbf{Filter Node \\in Single Partition}
\nodepart{second} node-id \newline partition-id
};

\node[concrete, below= of $(joined-result.south)!0.5!(mcn-mp.south)$](routing-table){
\textbf{Routing Table}
\nodepart{second} node-id \newline partition-id
};

\draw(cluster) to (nodelink);
\draw[->]($(cluster)!0.5!(nodelink)$) to ($(nodelink)!0.5!(joined-result)$) to ($(cluster)!0.5!(joined-result)$) to (joined-result);
\draw[->](joined-result) to (mcn-mp);
\draw[->](mcn-mp) to (routing-table);
\end{tikzpicture}
\caption{Build Routing Table (Updated)}
\label{build_routing_updated}
\end{figure}

\subsection*{Feb 13, 2015}
Run Hadoop tasks on local clusters. Fix bugs and prepare data.

\subsection*{Feb 15, 2015}
The degree-$n$ expansion is too big for this small cluster to process: the degree-2 file expands to 56G.

There are some problems with the version of degree-$n$ partitioning described before.
\begin{itemize}
\item Some central nodes have too many adjacent nodes (over 160,000). This leads to a large reduce group, and when processing degree-$n$, the group size may keep increasing.
\item Cluster degrees can be calculated once and stored, which is more efficient than repeating the calculation each time.
\end{itemize}

We create the following new tables (Table \ref{cluster}) and redesign the degree-$n$ algorithm.
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
\textbf{Column} & \textbf{Comments} \\
\hline
cluster\_id & \\
\hline
merge\_times & \\
\hline
max\_degree & \\
\hline
\end{tabular}

Cluster 
\vspace{5mm}

\begin{tabular}{|c|c|}
\hline
\textbf{Column} & \textbf{Comments} \\
\hline
cluster\_id\_1 & \\
\hline
cluster\_id\_2 & \\
\hline
\end{tabular}

Adjacent Cluster

\vspace{5mm}

\begin{tabular}{|c|c|}
\hline
\textbf{Column} & \textbf{Comments} \\
\hline
cluster\_id & \\
\hline
node\_id & \\
\hline
\end{tabular}

Cluster to node mapping

\caption{Cluster Tables}
\label{cluster}
\end{table}
\subsubsection*{Initialize Tables}
Each node starts as its own initial cluster. The cluster degree equals the node degree, merge\_times is 0, and the cluster id equals the node id. Adjacent clusters can be obtained from adjacent nodes.
\subsubsection*{Merge cluster}
For each pair of adjacent clusters, the merging decision is made based on degree: clusters with lower degrees are merged into adjacent clusters with higher degrees. From these decisions, we generate the final merge solution.

We discuss the following situations:
\begin{enumerate}
\item One cluster has two candidates to merge into. We call this ``two heads'', as shown in Figure \ref{two_head}. This situation is easily solved by merging into the candidate with the higher degree; ties are broken randomly.
\begin{figure}
\centering
\begin{tikzpicture}
[
node/.style={circle, fill=blue!20}
]

\node[node](n1){n1};
\node[node, right = of n1,xshift = 0.5cm](n2){n2};
\node[node, below = of $(n1.east)!0.5!(n2.west)$](n3){target};
\draw[->](n3)--(n1);
\draw[->](n3)--(n2);
\end{tikzpicture}
\caption{Cluster merging - two heads}
\label{two_head}
\end{figure}
\item A chain of merging, as shown in Figure \ref{chain_merge}: n2 merges into n1, while n3 merges into n2. We solve this by breaking the chain after the first link, keeping only the head link. Thus, only the head link of a chain is merged in each round; subsequent links are merged in later rounds. This is shown in Figure \ref{chain_merge_solve}.

To determine whether a link is a head link, we collect the individual merge decisions and record all nodes that have an outgoing link. Only when a node has no outgoing link do we accept the merge decisions pointing to it.

\begin{figure}
\centering
\begin{tikzpicture}
[
node/.style={circle, fill=blue!20}
]

\node[node](n1){n1};
\node[node, right = of n1,xshift = 0.5cm](n2){n2};
\node[node, below = of $(n1.east)!0.5!(n2.west)$](n3){n3};
\draw[->](n2)--(n1);
\draw[->](n3)--(n2);
\end{tikzpicture}

\caption{Cluster merging - chain merging}
\label{chain_merge}
\end{figure}

\begin{figure}
\centering

\begin{tikzpicture}
[
node/.style={circle, fill=blue!20}
]

\node[node](n1){n1};
\node[node, right = of n1,xshift = 0.5cm](n2){n2};
\node[node, below = of $(n1.east)!0.5!(n2.west)$](n3){n3};
\draw[->](n2)--(n1);
\draw[->,dashed](n3)--(n2);
\end{tikzpicture}

round 1

\vspace{1cm}

\begin{tikzpicture}
[
node/.style={circle, fill=blue!20}
]

\node[node](n1){n1-n2};
\node[node, below = of n1](n2){n3};
\draw[->](n2)--(n1);
\end{tikzpicture}

round 2

\caption{Solving chain merging}
\label{chain_merge_solve}
\end{figure}

\end{enumerate}

After dealing with these two situations, we obtain a list of valid merge decisions. We then update the three tables mentioned above based on the merge-decision table.
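The head-link test described on Feb 15 can be sketched as follows (names are illustrative):

```java
import java.util.*;

public class HeadLinkFilter {
    // Merge decisions are (from -> to). Keep only decisions whose target does
    // not itself merge anywhere this round, so each chain loses exactly its
    // head link per round and the rest is deferred to later rounds.
    static Map<String, String> acceptHeads(Map<String, String> decisions) {
        Set<String> hasOutgoing = decisions.keySet();
        Map<String, String> accepted = new HashMap<>();
        for (Map.Entry<String, String> d : decisions.entrySet())
            if (!hasOutgoing.contains(d.getValue()))
                accepted.put(d.getKey(), d.getValue());
        return accepted;
    }
}
```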

\subsubsection*{Update Cluster Table}
For every ``from'' cluster in the merge decisions, remove its entry from the cluster table. For every ``to'' cluster, increase merge\_times by 1 and keep max\_degree unchanged. This means that in each round, no matter how many clusters merge into one destination, the merge\_times field increases only by one.

\subsubsection*{Update Adjacent Cluster Table}

If a cluster appears as the ``from'' node in a merge decision, all its appearances in the adjacent-cluster table are replaced by the corresponding ``to'' node.

\subsubsection*{Update cluster to node mapping} 

If a cluster appears as the ``from'' node in a merge decision, all its appearances in the cluster-to-node mapping table are replaced by the corresponding ``to'' node.
\subsection*{Feb 17, 2015}
After running several rounds of cluster merging, we are ready to partition the links. We do this by first assigning links to clusters: links whose endpoints fall in the same cluster are assigned to that cluster, while links between different clusters are assigned to the cluster with the higher degree. We then randomly assign clusters to partitions.

Removed the merge\_times field from the cluster table, because in each round a cluster is merged at most once.

\subsection*{Feb 19, 2015}
After a long time of not understanding why the configuration in ``mapred-site.xml'' did not take effect, I finally discovered that configuration items such as ``mapreduce.map.memory.mb'' and ``mapreduce.reduce.memory.mb'' only work when set on the client (the machine used to submit requests to the Hadoop cluster). Other configuration items, such as ``mapreduce.map.java.opts'' and ``mapreduce.reduce.java.opts'', take effect when set on the individual node machines.

\subsection*{Feb 24, 2015}
Just realized that in order to convince others that this new platform is more efficient than existing ones, we need to show that existing algorithms can be implemented easily on our platform. The current vertex programming model fits batch updates; however, how can we use it to implement general graph algorithms such as BFS and DFS?

According to current implementation, it is under the framework's control whether
\bibliographystyle{plain} 
\bibliography{reference}
\end{document}