
This section describes the design of our solution to the problems outlined in the problem analysis. 

\subsection{Tagging, untagging and index creation}
\label{sec:tagging_creation}

\begin{figure}
 \includegraphics[keepaspectratio=true,width=\linewidth]{graphics/tag_flowchart.eps}
 \caption{Life cycle of a tag message}
 \label{fig:tagflowchart}
\end{figure}

Let $hash: object \rightarrow GUID$ be Pastry's default hash function, which maps objects to points in the id-space, and let GUIDs be expressed as hexadecimal strings.

Suppose that a user tags an object $obj$ under keyword $K$. First, the peer hashes the object to obtain its $GUID_{obj}=hash(obj)$. It envelopes $GUID_{obj}$ as well as the title of $obj$ in a tag message and dispatches it towards $GUID_{K}=hash(K)$, the location of the index. 

If an index for $K$ does not exist at the receiving node, it is created with an entry for $obj$ that is identified by $GUID_{obj}$ and has a popularity rank of 1. If the index exists but does not contain an entry for $obj$, an entry is inserted as before. Otherwise, the entry's rank is incremented. This is illustrated in figure \ref{fig:tagflowchart}.

Untag messages are similar except that they decrement the popularity of entries. If the popularity of an entry falls to zero, it is removed from the index. Likewise, if the number of entries in an index drops to zero, the index itself is deleted.

Two or more indexes may be mapped to the same node. To ensure that their entries do not mix, each index is stored in a bucket that is uniquely identified by the index's keyword.

Since we are using hash values to identify entries, collisions may occur. To make indexes collision-safe, when a tag or untag message arrives and the index already contains an entry with a matching GUID, the entry's title is compared with the title in the message to guarantee that both refer to the same object. In theory, there is a risk that two different objects will have the same GUID and the same title. In this case, one corrupted entry would represent both objects, but the probability of this is negligibly small.

Note that entries within an index are sorted by popularity. We shall soon see the effect of this on the efficiency of searching.
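As a concrete illustration, the tag/untag life cycle above can be sketched in a few lines. This is our own minimal in-memory model (the class and method names are invented, not part of FreePastry); it keys entries by GUID and title together, which is one way to realize the collision check described above:

```python
class TagIndex:
    """Minimal sketch of an index for one keyword (illustrative only)."""

    def __init__(self, keyword):
        self.keyword = keyword
        # (guid, title) -> popularity rank; titles disambiguate GUID collisions
        self.entries = {}

    def tag(self, guid, title):
        # Create the entry with rank 1, or increment its rank.
        key = (guid, title)
        self.entries[key] = self.entries.get(key, 0) + 1

    def untag(self, guid, title):
        # Decrement the rank; drop the entry when its rank reaches zero.
        key = (guid, title)
        if key in self.entries:
            self.entries[key] -= 1
            if self.entries[key] == 0:
                del self.entries[key]
        # True signals that the whole index should now be deleted.
        return len(self.entries) == 0

    def ranked(self):
        # Entries sorted by popularity, most popular first.
        return sorted(self.entries.items(), key=lambda kv: -kv[1])
```

Note that \texttt{ranked()} returns entries most popular first, matching the sorted-by-popularity invariant above.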

\subsection{Index expansion}
\label{index_expansion}

Once the number of entries in an index exceeds a certain threshold, the index distributes its entire set of entries among $C$ other nodes, which become its children. For now, we use $C=16$. Indexes are structured as search trees to enable locating and updating entries after their insertion. The rules for expansion are:
\begin{itemize} 
\item  Let the root be level 0, its children level 1, its grandchildren level 2 and so on. For the $L^{th}$ level of children, entries are partitioned based on the first $L$ digits of their GUIDs.
\item Let $E$ denote these first $L$ digits, and let $\circ$ be a string concatenation operator. Each partition is mapped and sent to the node identified by $hash(K \circ E)$.  
\end{itemize}

For example, an expanding root partitions its entries into 16 subsets based on the first digit of each entry's GUID, so $E$ is the first hexadecimal digit of $hash(obj)$. It envelopes each subset in an \textit{index expansion message} and dispatches it towards $GUID_{child}=hash(K \circ E)$. Thus, entries with GUIDs beginning with 0 are sent to $hash(K \circ 0)$, entries with GUIDs beginning with 1 are sent to $hash(K \circ 1)$, and so on. When one of the root's children expands, it partitions its entries based on the first two digits of their GUIDs, so $E$ is the first two hexadecimal digits of $hash(obj)$. If the child originally holds the subset of entries beginning with 0, it sends all entries beginning with 00 to $hash(K \circ 00)$, all entries beginning with 01 to $hash(K \circ 01)$, and so on. Clearly, for an expanding node at the $L^{th}$ level of a tree, the $(L+1)^{th}$ digit of every entry's GUID determines to which child the entry is mapped. 

In general, for an index with $L$ levels of children, an object $obj$ is mapped to $hash(K \circ E)$, where $E$ is the first $L$ digits of $hash(obj)$. Thus, the address of an entry is fixed by its GUID, and the index can forward tag and untag messages to it. 
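The addressing rule can be sketched as follows. We stand in for Pastry's hash function with SHA-1 truncated to 32 hexadecimal digits (matching a $16^{32}$ id-space); the function names are our own:

```python
import hashlib

def guid(data: str) -> str:
    # Stand-in for Pastry's hash: SHA-1 truncated to 32 hex digits.
    return hashlib.sha1(data.encode()).hexdigest()[:32]

def child_address(keyword: str, obj: str, levels: int) -> str:
    # For an index with `levels` levels of children, an entry for `obj`
    # lives at hash(K + E), where E is the first `levels` digits of its GUID.
    prefix = guid(obj)[:levels]
    return guid(keyword + prefix)
```

With \texttt{levels = 0} this degenerates to $hash(K)$, the root of the index.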


\subsubsection{Minor optimization}

During expansion, if one or more of the buckets is empty, the parent does not send expansion messages for them. This saves forwarding untag and search messages to empty children thereafter. Nevertheless, this situation is only likely to occur if the threshold is extremely small. Otherwise, the buckets should be evenly populated due to the uniformly distributive nature of the hash function.

\subsection{Searching}

Suppose a user searches for objects tagged under keyword $K$. Her node computes the location of the root of $K$'s index, $hash(K)$, and dispatches a \textit{search message} towards it. 
If the root has no children, it envelopes its entries in a \textit{results message} and returns them to the requester. Otherwise, it forwards the message to all of its children. This forwarding continues recursively until the messages reach the index's leaves, which return their entries to the requesting peer. 

As mentioned in \ref{sec:tagging_creation}, the entries stored on every constituent node of an index are sorted by popularity. Therefore, when they arrive at the requester they can be merged in $O(n)$ time, where $n$ is the number of entries in the index.
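The merge at the requester can be sketched with a standard k-way heap merge. This is illustrative only; we assume entries arrive as (popularity, title) pairs, each list sorted most popular first:

```python
import heapq

def merge_results(result_lists):
    # Each input list is already sorted by popularity, descending, so a
    # k-way merge touches every entry once (the log k heap factor is
    # bounded because tree depth is limited by the GUID length).
    return list(heapq.merge(*result_lists, key=lambda e: e[0], reverse=True))
```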

\subsubsection{Optimization for routing results}

Routing result messages from index leaves to the requester via Pastry is expensive because a large amount of data must be transported. We have optimized it by attaching the requester's IP address and port number to search messages. Index leaves use these to send result messages directly to the requester instead of using Pastry's more expensive routing.

\subsubsection{Algorithmic analysis of searching}
\label{sec:searching}
The more objects are tagged under a given keyword, the further the index distributes and the more nodes it spans; thus, the more search messages are required to traverse the entire tree. This highlights a trade-off between the number of entries in an index and the amount of bandwidth required for searching it.  

\begin{figure}
 \includegraphics[keepaspectratio=true,width=\linewidth]{graphics/search_traversal.eps}
 \caption{Search traversal}
 \label{fig:searchtraversal}
\end{figure}

For simplicity, we suppose that each parent can have up to 2 children instead of 16. As illustrated in figure~\ref{fig:searchtraversal}, during traversal the number of messages passed between two subsequent levels of parents and children increases exponentially. In this example, the requester sends 1 message, the root forwards 2, the children forward 4 and the grandchildren 8.

We can express the number of messages sent between each two subsequent levels as a geometric sequence. Let the number of children be expressed as $C$, the level of children as $L$ and the number of messages passed between level $L-1$ and $L$ as $M_L$. Also, let the root be level 0, its children level 1 and so on. The root receives 1 message from the requester, so $M_0=1$. It forwards the message to each of its $C$ children, so $M_1=C$. Each child forwards the message to its own $C$ children, so $M_2=C^2$. Thus, the general term is given by equation \ref{eq:seq}. Note that this is equivalent to the term for the number of nodes on the $L^{th}$ level of the tree.

\begin{equation} 
\label{eq:seq}
M_L=C^L
\end{equation}

The total number of messages required to traverse the tree up to the $L^{th}$ level is:

\begin{equation} 
\label{eq:sum}
\mathit{sum}_L=\frac{C^{L+1}-1}{C-1}
\end{equation}

To get an expression for the total number of messages required for searching, we must also consider the results messages. The number of result messages is always equal to the number of leaves in the tree, which is $C^L$. Adding this to the previous sum, we get:

\[\mathit{search}_L=\frac{C^{L+1}-1}{C-1}+C^L\]

The number of messages grows exponentially with the number of levels of children: $M \in O(C^L)$. Nevertheless, the number of levels of children grows logarithmically with the amount of content in an index: $L \in O(\log_C(entries))$. Combining these observations, the number of search messages, and therefore also the bandwidth, grows linearly with the amount of content in the index: $M \in O(C^{\log_C(entries)})=O(entries)$. 
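These counts are easy to check numerically. The following sketch reproduces equations \ref{eq:seq} and \ref{eq:sum} and the total including result messages (function names are our own):

```python
def messages_at_level(C, L):
    # Equation for the per-level count: M_L = C^L.
    return C ** L

def traversal_total(C, L):
    # Geometric sum over levels 0..L: (C^(L+1) - 1) / (C - 1).
    return (C ** (L + 1) - 1) // (C - 1)

def search_total(C, L):
    # Traversal messages plus one result message per leaf.
    return traversal_total(C, L) + C ** L
```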

In section \ref{sec:opt_search_lat}, we discuss an optimization for reducing the search latency perceived by the user. Moreover, in section \ref{sec:eff_dist_alg} we propose an alternative expansion algorithm with better performance in terms of bandwidth and search latency.

\subsection{Index retraction}

Section \ref{sec:pa-expansionretraction} describes the need for contracting distributed indexes as their number of entries decreases. The challenge of doing this in practice is that the root cannot easily keep track of the number of entries in its children. Although it is aware of all of the tag and untag messages passed to the children, it cannot tell whether these messages lead to the creation or deletion of entries or merely to rank increments and decrements.

In our solution, the parent remembers the number of entries it had before expansion. A maintenance thread on each child periodically sends the parent an update message informing it of changes in the child's number of entries. 

Suppose the inefficient situation in which the number of entries in an index oscillates slightly above and below the expansion threshold many times within a short period. The result would be a costly sequence of index expansions and retractions, burdening the system with a high load of messages for a small change in the index. To solve this problem we amortize the cost of distribution by introducing a retraction threshold $\lambda$ that is 20\% smaller than the expansion threshold. An index retraction only occurs when the cumulative number of entries across the children falls below $\lambda$.
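This hysteresis rule can be sketched as follows, with the 20\% margin described above (the function name and the boolean state flag are our own simplifications):

```python
def next_action(entry_count, expanded, upsilon):
    # upsilon: expansion threshold; lam: retraction threshold, 20% lower.
    # The gap between the two prevents thrashing when the entry count
    # oscillates around the expansion threshold.
    lam = 0.8 * upsilon
    if not expanded and entry_count > upsilon:
        return "expand"
    if expanded and entry_count < lam:
        return "retract"
    return "none"
```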

\subsection{Buffering}

\begin{figure}
 \includegraphics[keepaspectratio=true,width=\linewidth]{graphics/buffering.eps}
 \caption{Buffering during index expansion}
 \label{fig:buffering}
\end{figure}

While an index expands or retracts, it is unable to serve tag, untag or search messages because its entries are being transferred. For this reason, the parent buffers messages for its children. Figure \ref{fig:buffering} illustrates this for expansion: messages are buffered until an \textit{index expansion confirmation} is received. 

Retraction is similar, with one difference. After a parent has sent its children \textit{index retraction requests}, each child responds with an \textit{index retraction response} containing all of its entries. Once the parent has successfully processed the responses from its children, the children remove their indexes. Thus, the retracting parent must buffer incoming search requests until it has received the entire set of entries from its children. Only then can it return a results message to the user containing all of the content that it is responsible for. 

\subsection{Number of children per parent}
\label{sec:numchildren}

\begin{figure}
 \includegraphics[keepaspectratio=true,width=\linewidth]{graphics/children_number.eps}
 \caption{The number of children's effect on the number of search messages}
 \label{fig:children_num}
\end{figure}

The more children per parent, the fewer messages are required for traversal as an index grows. Figure \ref{fig:children_num} illustrates this. Since both trees have the same number of leaves, their numbers of entries are approximately equal. Nevertheless, the tree with 2 children per parent requires a total of 6 messages to reach the leaves, whereas the tree with 4 children per parent requires only 4. This difference increases as the trees grow. Thus, trees with more children per parent seem to be more scalable. 

\textbf{Proof:} Equation \ref{eq:seq} gives the number of leaves of a tree with $L$ levels. Solving for $L$, we get $L=\log_C{M}$, where $C$ is the number of children per parent and $M$ is the number of leaves. $C$ must be greater than 1 in order to achieve distribution. Since a distributed index has $L\geq 1$, $M$ must be greater than or equal to $C$. Thus: $1<C\leq M$.

Equation \ref{eq:sum} expresses the number of messages required to traverse the tree down to the $L^{th}$ level. Substituting for L, we get:

\begin{eqnarray}
Sum &=& \frac{C^{L+1}-1}{C-1} \\ \nonumber
&=& \frac{C^{\log_C{M}+1}-1}{C-1} \\ \nonumber
&=& \frac{C^{\log_C{M}+\log_C{C}}-1}{C-1} \\ \nonumber
&=& \frac{C^{\log_C{(M \times C)}}-1}{C-1} \\ \nonumber
&=& \frac{M \times C - 1}{C-1} 
\end{eqnarray}
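The closed form can be checked numerically against the direct level-by-level sum (an illustrative sketch; we assume $M$ is an exact power of $C$):

```python
import math

def sum_closed(M, C):
    # Closed form derived above: (M*C - 1) / (C - 1).
    return (M * C - 1) // (C - 1)

def sum_direct(M, C):
    # Direct count: sum of C^i over levels 0..L, with L = log_C M.
    L = round(math.log(M, C))
    return sum(C ** i for i in range(L + 1))
```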

Thus, we have an expression for the number of messages required to traverse a tree with $M$ leaves and $C$ children per parent. Differentiating $Sum$ with respect to $C$ gives:

\begin{eqnarray}
\frac{\partial Sum}{\partial C} &=& \frac{(C-1)M-(MC-1)}{(C-1)^2} \\ \nonumber
&=& \frac{CM-M-MC+1}{(C-1)^2} \\ \nonumber
&=& \frac{1-M}{(C-1)^2} 
\end{eqnarray}

$\frac{\partial Sum}{\partial C}$ is negative for all $M>1$, so the number of messages required to traverse the tree decreases as $C$ increases. But $C\leq M$, so for a given value of $M$, $Sum$ is minimized when $C=M$. If $C=M$, then the tree has just one level of children. Thus, trees are most bandwidth-efficient when they have only one level of children.

For a given index, $M$ grows with the number of entries. Thus, the optimal value of $C$ depends on the number of entries within the index. $C$ should therefore be adjusted such that most indexes in the system have just one level of children.
 
This analysis inspired an alternative expansion algorithm, described in section \ref{sec:eff_dist_alg}, that structures trees with one level of children and a variable number of children per parent. Its bandwidth consumption is therefore more scalable and its search latency constant. Tagapastry should be modified to use this algorithm in the future.  
 
For now, we use 16 children per parent with an expansion threshold $\upsilon$ in the thousands. This means that an index with one level of children can store up to $16 \times \upsilon$ entries before the children expand and grandchildren emerge. Having shown that the solution operates optimally with only one level of children, it follows that an index is maximally efficient while its number of entries stays below $16 \times \upsilon$. 

\subsection{Semantic indexes}

In some applications, it may be necessary to categorize indexes. For example, consider a music application in which users can tag music files by artist and by genre. If there is a naming conflict between an artist and a genre, files tagged with both would be mixed within the same index. 

For example, let the method signature for Tagapastry's tag operation be $tag(keyword, object)$. Two users may tag as follows:

\begin{itemize}
\item $tag("Baroque", bach\_canon\_in\_d\_major.mp3)$
\item $tag("Baroque", Caramel\_Drops.mp3)$
\end{itemize}

The first keyword refers to the musical genre of the 17th and 18th centuries, whereas the second refers to a modern Japanese band. 

To avoid this, we extend our API to enable tagging not only by a keyword \textit{K} but also by a \textit{type} \textit{T}. Such indexes are mapped to nodes using both \textit{K} and \textit{T}. 
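A sketch of the typed addressing: the type and the keyword are hashed together, so identically named keywords of different types map to different indexes. The separator and the SHA-1 stand-in for Pastry's hash are our own assumptions:

```python
import hashlib

def index_address(tag_type: str, keyword: str) -> str:
    # Hash both T and K; '/' separates them to avoid ambiguous concatenations.
    return hashlib.sha1(f"{tag_type}/{keyword}".encode()).hexdigest()[:32]
```

For the example above, $tag$ calls of type \textit{genre} and type \textit{artist} with the same keyword "Baroque" would thus reach two distinct indexes.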

\subsection{Collision probability for entries}

We are using GUIDs generated by a hash function to identify entries within an index. The probability of two entries' GUIDs colliding grows with the number of entries in an index. To approximate this probability, a frame of reference is needed. On Delicious.com, the number of entries for the most popular keyword is approximately 10 million \cite{delicious199}. For a Tagapastry index containing 10 million entries, the probability of collision is negligibly small: 

\begin{eqnarray}
\frac{\mbox{entries per index}}{\mbox{size of id-space}} & = & \frac{10^7}{16^{32}} = 2.94 \times 10^{-32}
\end{eqnarray}
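The figure above is straightforward to reproduce:

```python
# Per-GUID collision estimate: entries per index over the size of the
# 16^32 id-space, as in the equation above.
entries = 10 ** 7
id_space = 16 ** 32
p = entries / id_space   # roughly 2.94e-32
```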


