\subsection{Multiple keyword search}

We may enable multiple-keyword search by retrieving data from several indexes simultaneously and filtering out, at the requester, objects that are not common to all queried indexes. 
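The requester-side filtering can be sketched as follows. This is a minimal illustration, assuming that each queried keyword index returns a set of object GUIDs; the function name and sample data are ours, not part of the design.

```python
def multi_keyword_search(results_per_index):
    """Keep only the objects common to all queried keyword indexes."""
    if not results_per_index:
        return set()
    common = set(results_per_index[0])
    for result in results_per_index[1:]:
        common &= set(result)  # intersect with each further index's results
    return common

# Example: results retrieved simultaneously from three keyword indexes.
hits = multi_keyword_search([
    {"obj1", "obj2", "obj3"},
    {"obj2", "obj3", "obj4"},
    {"obj3", "obj2"},
])
```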

\subsection{Coping with bottlenecks}
\label{sec:bottlenecks}

\begin{figure}
 \includegraphics[keepaspectratio=true,width=\linewidth]{graphics/bottlenecks.eps}
 \caption{Replica extension across the leafset.}
 \label{fig:bottleneck}
\end{figure}

Section~\ref{sec:pa-bottlenecks} outlines the problem of bottlenecks occurring at nodes. We may solve this problem by replicating the node beyond the leafset's boundaries. The following heuristics can be used to determine when, and to what extent, to replicate. 

Every index could store a moving average of the number of messages that it receives per unit of time. When this average exceeds a certain threshold relative to the amount of system resources available on its particular device, the index would send \textit{replication extension messages} bidirectionally across the leafset. This would lead to the creation of two extra replicas, one on each side of the leafset. The two new replicas would then send \textit{new replica messages} across the leafset to inform the original replica of their existence. This is illustrated in Figure~\ref{fig:bottleneck}.

Once the moving average falls below this relative threshold, the index sends a replication retraction request. This message also passes bidirectionally across the leafset. Nevertheless, there may be leafset members with fewer system resources that are still struggling; these are allowed to drop the message, thereby preventing the retraction. 
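The extension/retraction heuristic above can be sketched as follows, assuming an exponentially weighted moving average of messages per unit of time. The smoothing factor and the two threshold ratios are illustrative assumptions, not values prescribed by the design.

```python
class LoadMonitor:
    """Sketch of the per-index load heuristic (illustrative parameters)."""

    def __init__(self, capacity, alpha=0.2):
        self.capacity = capacity  # available system resources (abstract units)
        self.alpha = alpha        # EWMA smoothing factor (assumed value)
        self.average = 0.0

    def record(self, messages_this_period):
        # Update the moving average of messages received per unit of time.
        self.average = (self.alpha * messages_this_period
                        + (1 - self.alpha) * self.average)

    def action(self, extend_ratio=0.8, retract_ratio=0.3):
        """Decide whether to extend or retract replication."""
        load = self.average / self.capacity
        if load > extend_ratio:
            return "send replication extension message"
        if load < retract_ratio:
            return "send replication retraction request"
        return "no action"
```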

\subsection{Coping with deadweight data}

Indexes and entries that are inserted into the network and never looked at thereafter constitute a needless burden on the network's storage capacity. Possible methods for coping with them follow.
 
For rarely used indexes, we could use the index's moving average introduced in Section~\ref{sec:bottlenecks} together with an approximation of the number of users in the system, given by the density of a node's leafset. The root of an index could routinely check whether the moving average is low relative to the number of users in the system. If so, the index would self-destruct. 
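The self-destruct check reduces to a single comparison. In this sketch, the leafset-density user estimate is taken as a given input, and the threshold value is an illustrative assumption.

```python
def should_self_destruct(moving_average, estimated_users, threshold=0.001):
    """Destroy the index when its query rate is low relative to the
    estimated number of users in the system (threshold is illustrative)."""
    if estimated_users == 0:
        return False  # no basis for a decision
    return (moving_average / estimated_users) < threshold
```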

To get rid of unused entries, we could attach a creation time-stamp to each entry. We could then routinely calculate an entry's popularity over time by dividing its rank by its lifetime, and compare the entry with the lowest popularity against the entry with the highest. If the relative difference exceeds a certain threshold, the entry with the lowest popularity over time would be deleted. 
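The popularity-over-time comparison can be sketched as follows. The entry representation, the use of a ratio as the "relative difference", and the threshold value are all illustrative assumptions.

```python
def popularity(rank, created_at, now):
    """An entry's popularity over time: rank divided by lifetime."""
    lifetime = max(now - created_at, 1e-9)  # guard against division by zero
    return rank / lifetime

def entry_to_delete(entries, now, threshold=10.0):
    """Return the least popular entry if it is 'threshold' times less
    popular than the most popular one, else None.
    entries: list of (name, rank, creation_timestamp) tuples."""
    scored = [(popularity(rank, ts, now), name)
              for name, rank, ts in entries]
    lowest = min(scored)
    highest = max(scored)
    if lowest[0] > 0 and highest[0] / lowest[0] > threshold:
        return lowest[1]
    return None
```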

\subsection{Optimizing search latency}
\label{sec:opt_search_lat}

As discussed in Section~\ref{sec:searching}, search latency increases linearly with the number of entries in an index. Nevertheless, it is possible to optimize search latency from the requester's perspective. 

We could introduce a routine maintenance procedure whereby each leaf of a distributed index would gather its highest-ranking entries and send them recursively up the tree until they reach the root. The root would sort these entries by rank and store them in a list. We refer to such lists, which contain an index's most popular entries, as the index's snapshot. 

In response to search messages, a root would immediately return its snapshot to the requester. It would still multicast the search message down to its children as before, so eventually the requester would obtain the entire content of the index. The effect is that the user would now receive the most popular results with a search latency that is independent of the amount of data in the index.
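The snapshot construction at the root can be sketched as follows, assuming each leaf reports its top entries as (rank, name) pairs; the snapshot size $k$ is an illustrative parameter.

```python
import heapq

def build_snapshot(leaf_reports, k=3):
    """Merge the leaves' highest-ranking entries into the root's snapshot:
    the k highest-ranked entries overall, sorted by descending rank."""
    merged = [pair for report in leaf_reports for pair in report]
    return heapq.nlargest(k, merged)

snapshot = build_snapshot([
    [(5, "a"), (3, "b")],   # report from leaf 1
    [(4, "c"), (1, "d")],   # report from leaf 2
])
```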

\subsubsection{Aggressive caching}

As a further improvement, we could use an aggressive caching approach to cache snapshots of extremely popular indexes throughout the network. 

As discussed in Section~\ref{sec:bottlenecks}, the root of an index could maintain a moving average of the number of messages that it receives per unit of time. When this average exceeds a certain threshold, aggressive caching would be turned on for that index. This means that as Pastry routes the index's snapshot to a requester, the snapshot would be cached on nodes along the way. Future search queries may then stumble upon the snapshot as they are routed towards the root; when this happens, the node holding the cached snapshot would return it to the requester immediately. Of course, we would have to attach a time-to-live to snapshots lest they become outdated. 
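A per-node snapshot cache with a time-to-live can be sketched as follows; the TTL value and the injectable clock (useful for testing) are illustrative assumptions.

```python
import time

class SnapshotCache:
    """Sketch of a per-node cache of index snapshots with expiry."""

    def __init__(self, ttl=60.0, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._cache = {}  # index GUID -> (snapshot, expiry time)

    def put(self, index_guid, snapshot):
        self._cache[index_guid] = (snapshot, self.clock() + self.ttl)

    def get(self, index_guid):
        item = self._cache.get(index_guid)
        if item is None:
            return None
        snapshot, expiry = item
        if self.clock() > expiry:
            del self._cache[index_guid]  # expired: treat as a cache miss
            return None
        return snapshot
```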

\subsection{Boundary cases}
\label{sec:fw-boundarycases}

\subsubsection{Expansion rejections}
Section~\ref{sec:pa-thresholds} identified the situation in which index distribution exhausts the storage capacity of a node to the point that it cannot store any additional object indexes. This problem can be solved through a dialogue between the parent and its prospective children.

Like before, the parent would send an \textit{expansion request message} to each child containing entries that it would like the child to store. If the child's system resources are utilized to their full capacity, however, the child could refuse the expansion request. In this case, it would send a negative \textit{expansion response message} to the parent.

The parent would then seek an alternative child by appending a random salt to the first child's GUID, hashing again to produce a new GUID, and sending an expansion request to the node identified by the new GUID. The salts are generated using a random number generator with a constant seed, so that the sequence of candidate GUIDs is reproducible. The parent sends expansion requests iteratively until it receives a positive response message. 
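The salted re-hashing can be sketched as follows. The seed value, the salt width, and the use of SHA-1 are illustrative assumptions (Pastry GUIDs are commonly derived from a cryptographic hash).

```python
import hashlib
import random

def candidate_guids(child_guid, attempts, seed=42):
    """Deterministically derive alternative-child GUIDs by appending
    salts from a constant-seed RNG to the rejecting child's GUID."""
    rng = random.Random(seed)  # constant seed: the salt sequence is reproducible
    guids = []
    for _ in range(attempts):
        salt = rng.getrandbits(64).to_bytes(8, "big")
        digest = hashlib.sha1(child_guid.encode() + salt).hexdigest()
        guids.append(digest)
    return guids
```

Because the seed is constant, any node can regenerate the same sequence of candidate GUIDs for a given child, which keeps the parent's fallback search deterministic.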

\subsubsection{Dynamic threshold}
Another boundary case implied by Section~\ref{sec:pa-thresholds} arises when a node has exhausted its storage capacity while its object indexes have not exceeded their upper threshold $\upsilon$. In this situation the distribution of the indexes would come to a halt, as there is no capacity to store additional index entries.

One solution to this problem is to calculate $\upsilon$ dynamically with respect to the number of indexes stored on the node, ensuring that the threshold can actually be reached given the available capacity. 
Adding a new object index to a node might then result in two or more pre-existing object indexes exceeding $\upsilon$. The question arises whether all of those object indexes should expand, or whether it would be sufficient to expand the indexes sequentially until none of the remaining object indexes is required to expand.
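One way to make the threshold dynamic is to divide the node's entry capacity evenly among its object indexes, which guarantees the threshold is reachable. This sketch, including the capacity units and the floor of one entry, is an illustrative assumption rather than the prescribed scheme.

```python
def dynamic_threshold(entry_capacity, num_indexes):
    """Recalculate the upper threshold so it is reachable given capacity."""
    if num_indexes == 0:
        return entry_capacity
    return max(entry_capacity // num_indexes, 1)

def indexes_to_expand(index_sizes, entry_capacity):
    """After adding an index, report which existing indexes now exceed
    the recalculated threshold. index_sizes: name -> number of entries."""
    threshold = dynamic_threshold(entry_capacity, len(index_sizes))
    return [name for name, size in index_sizes.items() if size > threshold]
```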

\subsection{Optimizing parent-child interaction}

In the current solution, parents and children store each other's GUIDs and communicate using Pastry's routing mechanism, which requires $O(\log N)$ hops, where $N$ is the number of nodes in the system. 

We considered optimizing this communication by allowing parents and children to store each other's IP addresses and port numbers. This would have allowed direct messaging, thereby making parent-child communication independent of the number of nodes in the system. The complete idea would have required each replica of a parent to know every replica of each of its children, and vice versa. They could then randomly pick a replica to address at each step of the interaction in order to balance communication across each other's leafsets. 

We ruled the idea out because of its overheads and complexity when nodes fail. While it is possible to detect node failures, it would be expensive to send messages throughout the system to update all references to the failed node. 
%We may investigate this idea in further details for the future. For example, 
It may be possible for nodes to approximate the system-wide node failure rate based on their leafsets and, if the rate is sufficiently low, to use this optimization after all. 

\subsection{Ideas about security}
\label{sec:future_security}

\subsubsection{Rank distortion}

Malicious users may tag or untag an object multiple times to distort its rank. To avoid this problem, we could rule that a user cannot tag a given object under a given keyword more than once. Moreover, only once a user has tagged an object under a keyword can that user untag it. Once a user has untagged an object, that user may tag it again.
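These rules amount to a toggle per (user, object, keyword) triple, which can be sketched as follows; the class and method names are ours.

```python
class TagHistory:
    """Sketch of the tagging rule: each (user, object, keyword) triple
    toggles between tagged and untagged; duplicates are rejected."""

    def __init__(self):
        self._tagged = set()  # triples currently tagged

    def tag(self, user, obj, keyword):
        key = (user, obj, keyword)
        if key in self._tagged:
            return False  # already tagged: reject to prevent rank distortion
        self._tagged.add(key)
        return True

    def untag(self, user, obj, keyword):
        key = (user, obj, keyword)
        if key not in self._tagged:
            return False  # never tagged, or already untagged: reject
        self._tagged.remove(key)
        return True
```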

An important question is where information about what a user has tagged is best stored. One solution is to serialize it to a local file and ensure that its deletion would lead to the peer permanently failing. Nevertheless, a malicious user could in theory still create a new peer for every tag message. Therefore, the best way to store the user's tagging history may be across the network. A concrete solution is still open to discussion. 

\subsubsection{Denial of Service}

Malicious users may overload the network with search requests. To avoid this, we could restrict each user to at most one search every 10 seconds. This does not significantly obstruct usability, since users typically spend several seconds browsing through their results before issuing subsequent searches. 
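The rate limit can be sketched as a per-user timestamp check; the injectable clock is an illustrative aid for testing, not part of the design.

```python
import time

class SearchRateLimiter:
    """Sketch of the per-user limit: at most one search per interval."""

    def __init__(self, interval=10.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self._last_search = {}  # user -> time of last permitted search

    def allow(self, user):
        now = self.clock()
        last = self._last_search.get(user)
        if last is not None and now - last < self.interval:
            return False  # within the cooldown window: reject the search
        self._last_search[user] = now
        return True
```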

\subsection{A more efficient distribution algorithm}
\label{sec:eff_dist_alg}

\begin{figure}
 \includegraphics[keepaspectratio=true,width=\linewidth]{graphics/new_algo.eps}
 \caption{Expansion}
 \label{fig:new_expansion}
\end{figure}

As proved in Section~\ref{sec:numchildren}, indexes structured as trees with one level of children and a variable number of branches are more efficient than the current Tagapastry indexes. We call the former single-level trees and the latter multi-level trees. 

Single and multi-level trees behave and look the same until the root's children expand. Whereas multi-level trees simply grow another level, a single-level tree grows more branches. 

The expansion algorithm for single-level trees is illustrated in Figure~\ref{fig:new_expansion}. The original children randomly select nodes onto which to distribute their load and inform the root of them. Finally, the root assumes control over the new nodes. 

The number of messages required to traverse a single-level tree is equal to the number of children $C$. Moreover, search latency does not increase as the tree grows, because the number of levels is constant. The degree of storage distribution is identical to that of multi-level indexes. Thus, this solution is more efficient.
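The message-count comparison can be illustrated numerically. This sketch assumes a complete multi-level tree with branching factor $b$ and depth $d$, one traversal message per tree edge, and a single-level tree with the same number of leaves; these modelling choices are ours.

```python
def multi_level_messages(b, d):
    """Messages to traverse a complete b-ary multi-level tree of depth d:
    one per edge, i.e. b + b^2 + ... + b^d."""
    return sum(b ** i for i in range(1, d + 1))

def single_level_messages(c):
    """Messages to traverse a single-level tree with c children."""
    return c
```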
% and has no drawbacks. It will be adapted as soon as possible. 

