\subsection{Context and problem formulation}
Structured P2P overlay networks such as Pastry, Tapestry, Chord and CAN are well suited to applications in which users know in advance the specific objects that they wish to access. However, these systems provide no easy way to search for objects, which restricts the range of applications for which structured P2P networks can be used.

Tagapastry is a preliminary solution that allows developers to easily integrate keyword-based search capabilities into Pastry applications. It provides a terse API and painless configuration. Moreover, it strives to preserve the key benefits of structured P2P topologies, namely scalability and resilience.

\subsection{Pastry in a nutshell}
The original paper published by Rowstron and Druschel \cite{rowstrondruschel2001} describes Pastry as a scalable structured peer-to-peer overlay network on top of which a variety of applications have been built. Examples include the archival storage facility PAST \cite{rowstrondruschel2001b,rowstrondruschel2001c} and the group communication / event notification system Scribe \cite{castroetal2002}.

Pastry uses GUIDs\footnote{globally unique identifiers} to uniquely identify nodes within its network. It assumes that the ID space is circular and that GUIDs are assigned randomly and uniquely, typically by hashing, so that nodes are uniformly distributed across the ID space regardless of their geographic location.
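The random, unique assignment is commonly achieved by hashing some node-specific input (such as a public key or network address) into the ID space. A minimal sketch, assuming SHA-1 and 128-bit GUIDs (the hash function and ID width are illustrative assumptions, not mandated by Pastry):

```python
import hashlib

ID_BITS = 128  # assumed ID-space width for this sketch

def guid(data: bytes) -> int:
    """Derive a GUID by hashing node-specific data, truncated to the ID space."""
    digest = hashlib.sha1(data).digest()          # 160-bit digest
    return int.from_bytes(digest, "big") >> (160 - ID_BITS)

# Hashing spreads GUIDs uniformly over the circular ID space,
# independent of where the nodes are physically located.
node_id = guid(b"node-public-key-or-ip")
assert 0 <= node_id < 2 ** ID_BITS
```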

Routing in Pastry is based on prefixes of GUIDs, a scheme also known as a \textit{Plaxton tree} or \textit{Plaxton mesh}, first published by Plaxton et al. \cite{plaxton99}. GUIDs are treated as sequences of digits in base $2^b$. In general, when a node sends a message specific to an object $obj$ (i.e.\ store, retrieve), it computes a $GUID_{obj}$ for $obj$ and routes the message towards the node with the GUID that is numerically closest to $GUID_{obj}$. Each node determines the next hop from its routing table, in which it stores references to nodes whose GUIDs share prefixes of various lengths with its own. The message is forwarded to a node whose GUID shares a prefix with $GUID_{obj}$ that is at least one digit longer than the prefix shared by the current node. If no such node is known, the message is instead forwarded to a node that shares a prefix of the same length but is numerically closer to $GUID_{obj}$ than the current node.
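The next-hop decision can be sketched as follows. This is a simplified illustration with GUIDs as hexadecimal strings ($b=4$) and plain numeric distance (real Pastry measures distance on the circular ID space and organizes candidates into a routing table and leaf set):

```python
def shared_prefix_len(a: str, b: str) -> int:
    """Number of leading hex digits two GUIDs share."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(current: str, target: str, known: list[str]) -> str:
    """Prefer a known node whose shared prefix with the target is longer
    than the current node's; otherwise fall back to a node with the same
    prefix length that is numerically closer to the target."""
    p = shared_prefix_len(current, target)
    longer = [g for g in known if shared_prefix_len(g, target) > p]
    if longer:
        return max(longer, key=lambda g: shared_prefix_len(g, target))
    same = [g for g in known if shared_prefix_len(g, target) == p]
    # Linear distance stands in for circular distance in this sketch.
    return min(same + [current], key=lambda g: abs(int(g, 16) - int(target, 16)))

# From node 65a1fc toward d46a1c: d462ba shares the longest prefix (3 digits).
assert next_hop("65a1fc", "d46a1c", ["d13da3", "d4213f", "d462ba"]) == "d462ba"
```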

This procedure is repeated at every node along the path (only the sender computes the object's GUID) until the message is delivered to its destination. Routing takes at most $\lceil\log_{2^b} N\rceil$ hops, where $N$ is the total number of nodes and $b$ is typically 4. Delivery of a message is guaranteed unless all the nodes from some leaf set fail simultaneously.
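To make the bound concrete, with $b=4$ (hexadecimal digits) a network of one million nodes is reachable in about five hops:

```python
import math

b, N = 4, 1_000_000          # typical digit size and a million-node network
hops = math.log(N, 2 ** b)   # log_16(10^6) ~= 4.98
assert math.ceil(hops) == 5  # at most ~5 hops end to end
```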

Pastry is self-organizing: nodes may join or leave the network at any time, and the routing tables and leaf sets of the remaining nodes are updated accordingly.

\subsection{Tagapastry}

Tagapastry is a Pastry application that provides an API for keyword-based searching. It allows users to tag and untag any object under one or more descriptive keywords. For every keyword, an index is constructed. This index contains either the tagged objects themselves, provided that they are sufficiently small, or a reference to the location of each object in the network. A search is performed by entering a keyword and retrieving the results from its index.

This raises the question of where indexes should be stored. One approach is to map each keyword to a GUID and to create and maintain its index at the node identified by that GUID. The problem is that the number of nodes in the ring would often drastically exceed the number of indexes, so the few nodes that happen to store indexes would be burdened far beyond the others. Tagapastry addresses this problem by distributing each index across multiple nodes.
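The contrast between the two placement strategies can be pictured as follows. This is a hypothetical sketch of keyword-to-GUID mapping, not Tagapastry's actual distribution scheme; the shard-suffix trick and SHA-1 are illustrative assumptions:

```python
import hashlib

def keyword_guid(keyword: str, shard: int = 0) -> int:
    """Map a keyword (plus an optional shard number) to a 128-bit GUID."""
    digest = hashlib.sha1(f"{keyword}#{shard}".encode()).digest()
    return int.from_bytes(digest[:16], "big")

# Naive placement: one GUID, hence one responsible node, per keyword.
single = keyword_guid("music")

# Distributed placement: several shards land on (up to) four different
# nodes, spreading one keyword's index across the ring.
shards = [keyword_guid("music", i) for i in range(4)]
assert len(set(shards)) == 4  # distinct GUIDs with overwhelming probability
```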

The definitions for Tagapastry's operations are:

\begin{itemize}
\item \textbf{Tagging:} allows end-users to associate objects with descriptive keywords. Such associations are persisted in the system.
\item \textbf{Untagging:} allows end-users to dissociate an object from a keyword, given that such an association exists in the first place.
\item \textbf{Searching:} allows end-users to retrieve references to objects that have been associated with a given keyword.
\end{itemize}
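The three operations could be exposed roughly as follows. This is an illustrative in-memory stand-in with hypothetical names; in the real system these calls are routed through the Pastry overlay to the nodes holding the relevant index:

```python
from collections import defaultdict

class LocalIndex:
    """In-memory stand-in for a distributed keyword index (illustration only)."""
    def __init__(self):
        self._index = defaultdict(set)  # keyword -> set of object references

    def tag(self, obj_ref: str, keyword: str) -> None:
        self._index[keyword].add(obj_ref)

    def untag(self, obj_ref: str, keyword: str) -> None:
        self._index[keyword].discard(obj_ref)  # no-op if the pair is absent

    def search(self, keyword: str) -> list[str]:
        return sorted(self._index[keyword])

idx = LocalIndex()
idx.tag("doc-42", "music")
idx.tag("doc-7", "music")
idx.untag("doc-42", "music")
assert idx.search("music") == ["doc-7"]
```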

\subsection{Requirements}
\label{sec:requirements}

The following functional and non-functional requirements constitute our success criteria for Tagapastry.

\subsubsection{Functional}
\label{sec:func_req}

We define three simple functional requirements. The API shall allow:
\begin{enumerate}
\item \textbf{tagging} any object under any keyword
\item \textbf{untagging} any object under any keyword
\item \textbf{searching} under any keyword, thereby retrieving all objects that have been tagged under that keyword, sorted by popularity
\end{enumerate}

\subsubsection{Non-functional}
\label{sec:non_func_req}
The following requirements describe the desirable behaviour for a system that utilizes Tagapastry. A justification follows each requirement. 

\textbf{Criterion 1:} \textit{ Tagapastry must operate within the constraints of end-users' system resources. }

End-users' systems vary considerably in network bandwidth, computational power and storage space. Disregarding this would not only exclude users with less powerful machines; it would also reduce the system's scalability. Nodes with little bandwidth or CPU power would become bottlenecks, and nodes with low storage capacity might overflow. Thus, the index distribution algorithm should take a node's resources into account when assigning content.

\begin{comment}
Due to time constraints, we have only been able to test the solution using a simulator that assumes that all of the nodes in the system are uniform. Thus, although we acknowledge the vitality of this criterion, we must relax it for now. Instead, we aim to show that Tagapastry performs within reason on modern PCs.
\end{comment}

\textbf{Criterion 2:} \textit{ Every node should contribute equally to the system relative to its local resources. }

As elaborated on in the problem analysis, the distribution of indexes across nodes may vary drastically. This is undesirable for two reasons.
Firstly, end-users whose machines are disproportionately burdened may perceive their situation as unfair and be tempted to leave the system. Secondly, this means that the most vital nodes, those that store and serve content, would be the most likely to spontaneously drop from the system. A simultaneous drop of all replicas of an index would then lead to a loss of data.

\textbf{Criterion 3:} \textit{Tagapastry should not constitute a significant burden in terms of traffic on networks that utilize it. }

Although distributing an index should improve storage equality among nodes, there is a trade-off: more messages, and therefore more bandwidth, are required to maintain and traverse a distributed index. It is important to optimize this trade-off so that the additional network traffic remains acceptable.

\textbf{Criterion 4:} \textit{ Search latency should never obstruct usability.}

The time taken for a search to traverse an entire index is likely to increase with the index's degree of distribution. It is important that this latency never rises high enough to deter end-users from using applications that utilize Tagapastry.

\subsubsection{Focus}
For this iteration of Tagapastry, we focus on index distribution and its effect on storage equality and on the overall bandwidth consumption of the system. For simplicity, we assume that all nodes in the system are uniform. Future work should enable Tagapastry to operate across a diverse range of machines such that all of the requirements are satisfied.
