%!TEX root = main.tex

\section{Social-Aware Pruning}\label{sec:ShortestPath}

Under our CubeTA algorithm,
to process a query issued by a user $v$, \textbf{GetActualScore} must evaluate a large number of distance queries between $v$ and the users who post records containing the query keywords. To accelerate such one-to-many distance query evaluation, we develop a series of optimizations in three categories:
(1) quick computation of the social relevance scores for records that cannot be pruned;
(2) immediate pruning of records that cannot be among the final top-k records;
(3) fast construction of the pruning bound in the initial rounds of graph traversal, so that pruning takes effect as early as possible.


\begin{figure}[t]
    \centering
        \includegraphics[width=0.45\textwidth]{pics/fit_distribution}
    \caption{The averaged distance distribution of the Twitter dataset.}
    \label{fig:distance_distribution}
\end{figure}

\subsection{Observation}
First, we highlight an observation on real SNP datasets that motivates our pruning idea.
Fig.~\ref{fig:distance_distribution} shows the distance distribution, i.e. the percentage of vertices at each social distance from a randomly sampled vertex in the social network.
The distribution is constructed by uniformly sampling 1000 vertices from the Twitter dataset that is extensively studied in our experiments. We observe that the vertices form layers at different distances from a random vertex. Due to the \textbf{small world} property,
the number of layers is usually small.
This finding is also verified on the news dataset that we have studied.
Moreover, it is consistent with real-life social contexts, where close friends are rare and the remaining users scatter far away; hence our social distance measure may enable strong pruning power on real-world SNPs. If we avoid computing the distances between the source vertex and the vertices in these far layers, the performance will be greatly improved.


\begin{table}[t]
    \centering
    \caption{Notations used across Sec. \ref{sec:ShortestPath}}
    \label{tab:notation_for_optimization_sec}
    \begin{tabular}{|p{1.9cm}|l|}
    \hline  Notation & Meaning \\
    \hline \multirow{2}{*}{$v$, $u$} & $v$ is the query user, $u$ is any other user in\\
    & the social network \\
    \hline $r_u$ & the record posted by user $u$ \\
    \hline $r^{*}$ & the $k$th best record among evaluated records \\
    \hline \multirow{2}{*}{$S$} & the set contains users with determined \\
    & social distances to $v$ \\
    \hline \multirow{2}{*}{$PQ$} & the priority queue contains users with \\
    & undetermined social distances to $v$ \\
    \hline $\textbf{SD}(v,u)$ & the actual distance between $v$ and $u$ \\
    \hline $\textbf{SD}^{*}(v,u)$ & the estimated $\textbf{SD}(v,u)$ in $PQ$ \\
    \hline  \multirow{2}{*}{$\min\textbf{SR}_{r_u}$} & the minimum $\textbf{SR}(v,u)$ allowed for $r_u$\\
    & s.t. $\Re(v,r_u) \geq \Re(v,r^{*})$ \\
    \hline $\max\textbf{SD}_{r_u}$ & $1-\min\textbf{SR}_{r_u}$ \\
    \hline  $n$ &  $\arg\min_{x \in PQ}\textbf{SD}(v,x)$ \\
    \hline  $mdist$ & $\textbf{SD}(v,n)$ \\
    \hline $DA$ & Dijkstra's Algorithm \\
    \hline
    \end{tabular}
\end{table}


\subsection{Baseline Solution: Direct Pruning}
For a given source vertex $v$, Dijkstra's algorithm (DA) \cite{Dijkstra} is the most widely adopted approach to find the distance between $v$ and every other vertex in the graph. The key idea of DA is to maintain a set $S$ and
a priority queue $PQ$. $S$ contains all the vertices whose distances to $v$ have been determined while $PQ$ orders the remaining vertices by their estimated distances to $v$, i.e. $\textbf{SD}^{*}(v,u), u \in PQ$. In each iteration, the vertex $n$ with the smallest $\textbf{SD}^{*}(v,n)$ is popped from $PQ$. At this moment, we are sure that $\textbf{SD}(v,n)=\textbf{SD}^{*}(v,n)$ and we denote $\textbf{SD}(v,n)$ by $mdist$. Then $n$ is inserted into $S$ and for each $n' \in PQ$ where $(n,n') \in E$, $\textbf{SD}^{*}(v,n')$ is updated to $\min(\textbf{SD}^{*}(v,n'),mdist+\textbf{SD}(n,n'))$.
%if there is a cheaper path from $v$ to $n'$ that goes through $n$.
%
Therefore, to answer such a one-to-many distance query, we use DA as the backbone of our traversal, while applying a series of pruning methods to access as few vertices as possible.
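For illustration, the DA backbone with $S$ and $PQ$ can be sketched as follows (a minimal Python sketch; the adjacency-list encoding of the social graph and all identifiers are our own illustrative choices, not part of the system):

```python
import heapq

def dijkstra(graph, v):
    """Plain DA: returns S = {u: SD(v, u)} for every u reachable from v.

    graph: dict mapping each vertex to a list of (neighbour, distance) pairs.
    """
    S = {}                                  # determined distances SD(v, u)
    PQ = [(0.0, v)]                         # entries (SD*(v, u), u)
    while PQ:
        mdist, n = heapq.heappop(PQ)        # n minimises SD*(v, x) over PQ
        if n in S:                          # stale queue entry, skip
            continue
        S[n] = mdist                        # SD(v, n) = SD*(v, n) is final
        for n2, w in graph.get(n, []):
            if n2 not in S:                 # relax edge (n, n2)
                heapq.heappush(PQ, (mdist + w, n2))
    return S
```

Here lazy deletion (skipping stale entries) stands in for the decrease-key update of $\textbf{SD}^{*}(v,n')$ in $PQ$.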

%By maintaining $S$ and $PQ$ in evaluating records for the query user $v$, a direct pruning can be applied before computing a new record's social score.
In particular, whenever a record $r_u$ posted by $u$ contains any query keyword(s), we may need to evaluate $\textbf{SD}(v,u)$. If $u \in S$, no further computation is needed because $\textbf{SD}(v,u)$ has already been determined.
Otherwise, by Algorithm \ref{algo:ComputeExactScore} (line 2), we can get a lower bound for $u$'s social relevance: $\min\textbf{SR}_{r_u}$, i.e. the minimum possible $\textbf{SR}(v,u)$ for $r_u$
to be no less relevant than the current $k^{th}$ best record $r^{*}$.
Equivalently, $\max\textbf{SD}_{r_u}$ denotes the maximum allowed $\textbf{SD}(v,u)$ in order for $r_{u}$ to be a potential top-k candidate.
Therefore, once we find $mdist \geq \max\textbf{SD}_{r_u}$, $r_{u}$ is definitely not a top-k candidate and we can simply terminate the search and return. This forms our baseline method, called \emph{Direct Pruning}.
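Direct pruning then amounts to resuming the shared DA traversal per record and aborting as soon as $mdist \geq \max\textbf{SD}_{r_u}$. A hedged sketch (function and variable names are ours; $S$ and $PQ$ persist across records for the same query user, as in the text):

```python
import heapq

def direct_pruning_sd(graph, v, u, S, PQ, max_sd):
    """Resume DA from shared state (S, PQ).  Returns SD(v, u) if r_u can
    still be a top-k candidate, or None as soon as mdist >= max_sd."""
    if u in S:                              # distance already determined
        return S[u] if S[u] < max_sd else None
    while PQ:
        mdist, n = heapq.heappop(PQ)
        if n in S:                          # stale entry, skip
            continue
        if mdist >= max_sd:                 # direct pruning: r_u is out
            heapq.heappush(PQ, (mdist, n))  # keep state for later records
            return None
        S[n] = mdist
        for n2, w in graph.get(n, []):
            if n2 not in S:
                heapq.heappush(PQ, (mdist + w, n2))
        if n == u:                          # SD(v, u) just got determined
            return mdist
    return None                             # u unreachable from v
```

Returning early keeps the frontier in $PQ$, so evaluating the next record continues the same traversal instead of restarting DA.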


\begin{example}\label{exmp:directPrune}
In Example \ref{exmp:cubeTA}, CubeTA needs to evaluate $r7$, $r6$, $r8$, $r11$ before confirming the top-1 result. For each record evaluated, its $\min\textbf{SR}$ and $\max\textbf{SD}$ are listed in Fig. \ref{fig:cubeTAexample},
and the user who posted it is in Fig. \ref{fig:social_graph}.
When direct pruning is enabled, the search proceeds as below.
We first evaluate $r7$ posted by $u_7$. Since no record has been computed, $\min\textbf{SR}_{r7}=1$ and the original DA visits the vertices in the order $u_1$, $u_3$, $u_2$, $u_{10}$, $u_8$, $u_{11}$,
$u_5$, $u_6$, and lastly reaches $u_7$ to get $\textbf{SR}(u_1,u_7)=1-\textbf{SD}(u_1,u_7)=0.4$. When we proceed to Iteration 3 to evaluate $r6$ posted by $u_4$, the current top-1 result is $r7$, so we have $mdist = \textbf{SD}(u_1,u_7) = 0.6$ and $\max\textbf{SD}_{r6}=0.5$.
Since $mdist > \max\textbf{SD}_{r6}$, $r6$ is pruned from the top-1 candidates.
\end{example}



\subsection{In-circle Pruning}
Since DA traverses the graph in a closest-vertex-first fashion, we must avoid traversing to vertices that have large social distances from the query user $v$, so that the far layers in Fig.~\ref{fig:distance_distribution} are never reached.
Otherwise, evaluating just one distance query may require traversing the entire social graph, and the performance can be as bad as the original DA, i.e. $O(|E|+|V|\log|V|)$. We will discuss three issues, namely \textbf{Far True Candidates},
\textbf{Far False Candidates} and \textbf{Cold Start}, where
direct pruning fails to avoid the distance computation from $v$ to far vertices.


\subsubsection{Far True Candidates}
This refers to records that are socially far from the query user $v$ but cannot be pruned at the current stage of query processing, so we have to compute the exact social distances of these far candidates. Since direct pruning only
prunes candidates rather than computing the exact social distance $\textbf{SD}(v,u)$, we propose the first pruning
method, called \emph{early-determination}, to quickly compute the social distance, in line with the optimization of category (1).
%. As demonstrated in Sec. \ref{subsec:efficiency} of our experiment, in a social graph with 1 million nodes over 40 million edges, more than
%one second is needed for just one distance query,
%
According to DA, the estimated distance $\textbf{SD}^{*}(v,u)$ is only updated towards the actual distance $\textbf{SD}(v,u)$ when some $u'$ popped from $PQ$ in a previous iteration is a neighbor of $u$.
Therefore, given the vertex $n=\arg\min_{x \in PQ}\textbf{SD}(v,x)$ that is being popped from $PQ$ and $mdist=\textbf{SD}(v,n)$,
we can lower-bound $\textbf{SD}(v,x)+\textbf{SD}(x,u)$ for all $x \in PQ$ by $mdist+\min_{u'}\textbf{SD}(u,u')$. If this bound is no smaller than the current $\textbf{SD}^{*}(v,u)$, we can simply stop and conclude that $\textbf{SD}(v,u)=\textbf{SD}^{*}(v,u)$.

\begin{theorem}\label{theorem:early_determine}
Let $\mathbf{SD^{*}}(v,u)$ be the estimated $\textbf{SD}(v,u)$ for a vertex $u$ popped from $PQ$. If $mdist + \min_{u'}\mathbf{SD}(u,u') \geq \mathbf{SD^{*}}(v,u)$ where $(u,u') \in E$, then $\mathbf{SD}(v,u) = \mathbf{SD^{*}}(v,u)$.
\end{theorem}

\noindent \textit{Proof}:
If $mdist + \min_{u'}\mathbf{SD}(u,u') \geq \mathbf{SD^{*}}(v,u)$,
then $\forall x \in PQ$, $\mathbf{SD}(v,x)+\mathbf{SD}(x,u) \geq mdist + \min_{u'}\mathbf{SD}(u,u') \geq \textbf{SD}^{*}(v,u)$.
It means that the estimated distance cannot be updated to a smaller value via any path that contains vertices in $PQ$, so the social distance between $v$ and $u$ has been determined, i.e. $\mathbf{SD}(v,u) = \mathbf{SD^{*}}(v,u)$. $\hspace{3cm} \blacksquare$
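Theorem \ref{theorem:early_determine} can be dropped into the DA loop as an extra stopping test. A hedged sketch (identifiers are ours; `nn_dist_u` denotes $\min_{(u,u') \in E}\textbf{SD}(u,u')$, assumed precomputed, and `sd_star_u` is the initial estimate $\textbf{SD}^{*}(v,u)$):

```python
import heapq

def early_determination_sd(graph, v, u, S, PQ, sd_star_u, nn_dist_u):
    """Resume DA until Theorem 1 fires: once mdist + nn_dist_u >= SD*(v,u),
    no vertex left in PQ can shorten the estimate, so SD(v,u) = SD*(v,u)."""
    if u in S:
        return S[u]
    while PQ:
        mdist, n = heapq.heappop(PQ)
        if n in S:                           # stale entry, skip
            continue
        if mdist + nn_dist_u >= sd_star_u:   # Theorem 1: estimate is final
            heapq.heappush(PQ, (mdist, n))   # keep state for later records
            return sd_star_u
        S[n] = mdist
        if n == u:                           # determined the usual way
            return mdist
        for n2, w in graph.get(n, []):
            if n2 not in S:
                heapq.heappush(PQ, (mdist + w, n2))
            if n2 == u:                      # n neighbours u: tighten SD*(v,u)
                sd_star_u = min(sd_star_u, mdist + w)
    return sd_star_u
```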


\begin{example}\label{exmp:EarlyDetermine}
Recall Example \ref{exmp:directPrune}: direct pruning needs to traverse 9 vertices to get the social score of $r7$.
With early-determination applied, the traversal still starts from $u_1$, and when popping $u_8$ from $PQ$, we update $\textbf{SD}^{*}(u_1,u_7)=0.6$ as $u_7$ is a neighbour of $u_8$. We can stop the traversal
at $u_{11}$ and determine $\textbf{SD}(u_1,u_7) = 0.6$ because $\textbf{SD}(u_1,u_{11}) + \min_{u'}\textbf{SD}(u_7,u') = 0.6 \geq \textbf{SD}^{*}(u_1,u_7) = 0.6$. As a result, early-determination terminates by traversing 6 vertices only.
\end{example}

To make the pruning more effective, it is critical to obtain a tight estimate of $\mathbf{SD}(v,u)$ before traversing. Thus we use $\textbf{SD}^{*}(v,u) = \min\{\textbf{SD}(v,pn_u)+\textbf{SD}(pn_u,u),\textbf{SD}(u,pn_v)+\textbf{SD}(pn_v,v)\}$ as an upper bound of $\mathbf{SD}(v,u)$, where $pn_v$ and $pn_u$ are the pivot vertices of the partitions $P_v$ and $P_u$ containing $v$ and $u$ respectively ($v \in P_v$, $u \in P_u$).
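Assuming the distances from every vertex to each partition's pivot are precomputed, this estimate is a constant-time lookup. A sketch with hypothetical container names (both are our own illustrative structures):

```python
def pivot_estimate(v, u, pivot_of, sd_to_pivot):
    """Upper bound SD*(v,u) = min{SD(v,pn_u)+SD(pn_u,u),
                                  SD(u,pn_v)+SD(pn_v,v)}.

    pivot_of[x]       : pivot vertex of the partition containing x
    sd_to_pivot[x][p] : precomputed SD(x, p) for each pivot p
    """
    pn_v, pn_u = pivot_of[v], pivot_of[u]
    via_pn_u = sd_to_pivot[v][pn_u] + sd_to_pivot[u][pn_u]
    via_pn_v = sd_to_pivot[v][pn_v] + sd_to_pivot[u][pn_v]
    return min(via_pn_u, via_pn_v)
```

Each candidate path transits one pivot, so the returned value is a valid upper bound on $\textbf{SD}(v,u)$ by the triangle inequality.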


\subsubsection{Far False Candidates}

This refers to records that are socially far from $v$ and should be pruned from the current top-k result set. For a record $r_u$, early-determination only computes $\textbf{SD}(v,u)$ without exploiting $\max\textbf{SD}_{r_u}$ to prune $r_u$.
In Fig.~\ref{fig:distance_distribution}, suppose we want to evaluate $r_u$ where $\textbf{SD}(v,u)=0.8$ and the nearest neighbor of $u$ has a distance of 0.1 from $u$; then early-determination is slow because
$\textbf{SD}(v,u)$ cannot be determined until we pop a vertex from $PQ$ that has a distance of 0.7 from $v$.
This motivates our second pruning method, \emph{early-pruning}, which falls under category (2) and complements early-determination.
While early-determination lower-bounds the value that $\textbf{SD}^{*}(v,u)$ could be updated to by the vertices in $PQ$, early-pruning uses the same bound together with $\max\textbf{SD}_{r_u}$ to prune the record $r_u$, as described in Theorem \ref{theorem:early_prune}.

\begin{theorem}\label{theorem:early_prune}
If $mdist + \min_{u'}\mathbf{SD}(u,u') \geq \max\textbf{SD}_{r_u}$ and $\mathbf{SD^{*}}(v,u) \geq \max\textbf{SD}_{r_u}$, then $\mathbf{SD}(v,u) \geq \max\textbf{SD}_{r_u}$.
\end{theorem}
\noindent \textit{Proof}:
If $mdist + \min_{u'}\mathbf{SD}(u,u') \geq \max\textbf{SD}_{r_u}$ then, similar to Theorem \ref{theorem:early_determine}, $\mathbf{SD^{*}}(v,u)$ cannot be updated to a value smaller than $\max\textbf{SD}_{r_u}$ by any remaining vertex in $PQ$. Hence $\mathbf{SD}(v,u) < \max\textbf{SD}_{r_u}$ is possible only if $\mathbf{SD^{*}}(v,u) < \max\textbf{SD}_{r_u}$. So by further ensuring $\mathbf{SD^{*}}(v,u) \geq \max\textbf{SD}_{r_u}$, we can conclude $\mathbf{SD}(v,u) \geq \max\textbf{SD}_{r_u}$. $\hspace{3.5cm} \blacksquare$
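Both theorem conditions reduce to constant-time predicates evaluated at each pop of $PQ$. A sketch (identifiers are ours; `nn_dist_u` denotes $\min_{(u,u') \in E}\textbf{SD}(u,u')$):

```python
def can_early_determine(mdist, sd_star_vu, nn_dist_u):
    """Theorem 1: the current estimate SD*(v,u) is already SD(v,u)."""
    return mdist + nn_dist_u >= sd_star_vu

def can_early_prune(mdist, sd_star_vu, nn_dist_u, max_sd_ru):
    """Theorem 2: r_u can be discarded without determining SD(v,u),
    because neither the remaining PQ vertices nor the current estimate
    can bring SD(v,u) below max_sd_ru."""
    return mdist + nn_dist_u >= max_sd_ru and sd_star_vu >= max_sd_ru
```

Note the asymmetry: early-pruning can fire even when early-determination cannot, since $\max\textbf{SD}_{r_u}$ may be much smaller than $\textbf{SD}^{*}(v,u)$.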


%\begin{figure}[t]
%    \centering
%    \includegraphics[width=0.5\textwidth]{pics/proof_dist}
%    \caption{This figure illustrates the idea of Theorem \ref{theorem:early_prune}. The axis from left to right denotes the social distance from source vertex $u_1$.}
%    \label{fig:proof_distance}
%\end{figure}

\begin{example}\label{exmp:EarlyPrune}
From Example \ref{exmp:EarlyDetermine}, the traversal stops at $u_{11}$ after evaluating $r7$. The next record to compute is $r6$ posted by $u_4$. $\textbf{SD}^{*}(u_1,u_4) = 0.7$ as $u_2 \in S$ and $u_2$ is a neighbor of $u_4$. We also know $\max\textbf{SD}_{r6}=0.5$ in Fig. \ref{fig:cubeTAexample}. However, we cannot early determine $\textbf{SD}(u_1,u_4)$ at $u_{11}$ because $\textbf{SD}(u_1,u_{11}) + \min_{u'}\textbf{SD}(u_4,u') = 0.6 < \textbf{SD}^{*}(u_1,u_4) = 0.7$. Instead we can use early-pruning to eliminate $r6$ because $\textbf{SD}(u_1,u_{11}) + \min_{u'}\textbf{SD}(u_4,u') > \max\textbf{SD}_{r6}$ and $\textbf{SD}^{*}(u_1,u_4) > \max\textbf{SD}_{r6}$.
\end{example}



\subsubsection{Cold Start}
This refers to scenarios where $\max\textbf{SD}$ is not small enough for efficient pruning at the early stage of query processing. Suppose the first record to evaluate is posted
by the vertex furthest from $v$ in the social graph; early-pruning cannot help since the pruning bound is trivial. Although early-determination can reduce the query time to some degree, it is still very likely to visit almost every vertex.

Thus, we propose the third pruning method, called \emph{warm-up queue}, which aligns with the optimization of category (3). The warm-up queue $WQ$ is meant to evaluate the records nearer to $v$ first, so as to obtain a decent bound for further pruning. $WQ$ is constructed as follows:
before computing any social distance, we push a number of records into $WQ$, which ranks the records by an estimated distance computed by using the pivot vertices as transit vertices between $v$ and the records' authors.
Once the size of $WQ$ reaches $\delta$, all records in $WQ$ are popped and their exact scores are computed, after which the original CubeTA proceeds as usual.

A key problem is to determine $\delta$. We wish that, among the $\delta$ records in $WQ$, there are at least $q.k$ records whose social distances to $v$ are smaller than the leftmost layer of the vertex-distance distribution. Based on the observation from Fig. \ref{fig:distance_distribution}, we model the vertex-distance distribution as a mixture of Gaussians, i.e. a weighted sum of $M$ normal distributions: $p(x)=\sum^{M}_{i=1}w_ig(x|\mu_i,\sigma_i)$, where $g(x|\mu_i,\sigma_i)$ is the probability density of a normal distribution with mean $\mu_i$ and variance $\sigma_i^2$.
In the context of a social network, the number of layers in the vertex-distance distribution is small due to the \textbf{small world} property, which keeps the training complexity low. The model is fitted by the standard expectation-maximization method.
Given the mixture model and a random record $r_u$, the probability $p_{opt}$ that $\textbf{SD}(v,u) < \mu_{min}$, where $\mu_{min}$ is the mean of the leftmost normal component (i.e. $\mu_{min} \leq \mu_i, \forall i=1..M$), is:

\noindent\begin{equation}
p_{opt} = \int_{-\infty}^{\mu_{min}}\sum^{M}_{i=1}w_ig(x|\mu_i,\sigma_i)dx
\end{equation}
\vspace{-4mm}

\noindent Assuming the record authors are independent and identically distributed random variables w.r.t their distance to $v$ in the social graph, the probability of having at least $q.k$ records whose social distances to $v$ are smaller than $\mu_{min}$ follows the binomial distribution:

\vspace{-4mm}
\begin{equation}
p(\delta) = 1-\sum_{i=0}^{q.k-1} \binom{\delta}{i}{p_{opt}}^i(1-p_{opt})^{\delta-i}
\end{equation}
\vspace{-4mm}

\noindent In this work we aim to ensure $p(\delta) > 99.9\%$ so that, in most cases, the first $q.k$ records in $WQ$ have social distances that are less than $\mu_{min}$.
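The two quantities above can be evaluated directly, e.g. via the error-function form of the normal CDF, and $\delta$ chosen by a simple linear scan. A sketch under the stated model (component weights, means, and standard deviations are assumed to come from the EM fit; `target=0.999` matches the $99.9\%$ goal):

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def p_opt(weights, mus, sigmas):
    """P(SD(v,u) < mu_min) under the fitted Gaussian mixture."""
    mu_min = min(mus)
    return sum(w * normal_cdf(mu_min, m, s)
               for w, m, s in zip(weights, mus, sigmas))

def choose_delta(weights, mus, sigmas, k, target=0.999):
    """Smallest WQ size delta s.t. at least k of delta i.i.d. records fall
    below mu_min with probability > target (binomial model)."""
    p = p_opt(weights, mus, sigmas)
    delta = k
    while True:
        # P(X >= k) = 1 - sum_{i=0}^{k-1} C(delta,i) p^i (1-p)^(delta-i)
        tail = sum(math.comb(delta, i) * p**i * (1 - p)**(delta - i)
                   for i in range(k))
        if 1.0 - tail > target:
            return delta
        delta += 1
```

For instance, with a single component ($p_{opt}=0.5$) and $q.k=1$, the scan stops at $\delta=10$, the first size with $1-0.5^{\delta} > 0.999$.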

\vspace{-2mm}
\begin{algorithm}[htp]
\caption{\textbf{Optimized Distance Query Computation}}
\label{algo:2HopDistancePrune}
\KwIn{Query user $v$, record $r_u$, set $S$, priority queue $PQ$, $\max\textbf{SD}_{r_u}$ and $mdist$}
\KwOut{$\textbf{SD}(v,u) < \max\textbf{SD}_{r_u}$ ? $\textbf{SD}(v,u)$ : $-1$}
\If{$u \in S$}{
    \Return $\textbf{SD}(v,u) < \max\textbf{SD}_{r_u}$ ? $\textbf{SD}(v,u)$ : $-1$
}

$u'' \gets$ nearest 2-hop neighbour of $u$.\\
\For{$(u,u') \in E$}{
    \If{$\textbf{SD}^{*}(v,u')+\textbf{SD}(u',u) < \textbf{SD}^{*}(v,u)$}{
        $\textbf{SD}^{*}(v,u) \gets \textbf{SD}^{*}(v,u')+\textbf{SD}(u',u)$ \\
        Update $\textbf{SD}^{*}(v,u)$ of $u$ in $PQ$
    }
}
\While{$PQ$ is not empty}{
    \If{$mdist + \textbf{SD}(u,u'') \geq \textbf{SD}^{*}(v,u)$}{
        \Return $\textbf{SD}^{*}(v,u) < \max\textbf{SD}_{r_u}$ ? $\textbf{SD}^{*}(v,u)$ : $-1$
    }

    \If{$mdist + \textbf{SD}(u,u'') \geq \max\textbf{SD}_{r_u} \wedge \textbf{SD}^{*}(v,u) \geq \max\textbf{SD}_{r_u}$}{
        \Return $-1$
    }

    Vertex $n \gets$ $PQ.pop()$ \\
    $mdist \gets \textbf{SD}^{*}(v,n)$ \\
    $S \gets S \cup n$ \\

    \For{$(n',n) \in E$ and $n' \not\in S$}{
        \If{$\textbf{SD}^{*}(v,n) + \textbf{SD}(n,n') < \textbf{SD}^{*}(v,n')$}{
            $\textbf{SD}^{*}(v,n') \gets \textbf{SD}^{*}(v,n) + \textbf{SD}(n,n')$ \\
            Update $\textbf{SD}^{*}(v,n')$ of $n'$ in $PQ$
        }
        \If{$(n',u) \in E$}{
            \If{$\textbf{SD}^{*}(v,n') + \textbf{SD}(n',u) < \textbf{SD}^{*}(v,u)$}{
                 $\textbf{SD}^{*}(v,u) \gets \textbf{SD}^{*}(v,n') + \textbf{SD}(n',u)$ \\
                 Update $\textbf{SD}^{*}(v,u)$ of $u$ in $PQ$
            }
        }
    }
}
\end{algorithm}
\vspace{-2mm}
\subsection{Out-of-circle Pruning}
In Theorems \ref{theorem:early_determine} and \ref{theorem:early_prune}, the nearest neighbor $u'$ of the target vertex $u$ and its distance to $u$ play a critical role in quickly determining $\textbf{SD}(v,u)$. However, when $u'$ is very close to $u$, none of the aforementioned pruning techniques is effective.
By exploiting the nearest 2-hop neighbour of $u$ in the above pruning methods, in particular \emph{early-determination} and \emph{early-pruning},
the pruning power can be enhanced over their in-circle counterparts: since the nearest 2-hop neighbour has a longer distance from $u$, the theorem conditions are satisfied earlier, which leads to an earlier determination of $\textbf{SD}(v,u)$.
We refer to this as \emph{out-of-circle pruning}, and will demonstrate its merit over in-circle pruning in Example~\ref{exmp:2hop}.
%Next we first illustrate how the out-of-circle pruning improves the performance, and present a complete distance query computation along the social dimension by equipping the DA with both in-circle and out-of-circle pruning.

\begin{example}\label{exmp:2hop}
By Examples~\ref{exmp:EarlyDetermine} \& \ref{exmp:EarlyPrune}, in-circle early-determination needs to traverse 6 vertices to evaluate $\textbf{SD}(u_1,u_7)$, whereas out-of-circle early-determination only needs to traverse 3 vertices.
The nearest 2-hop neighbour of $u_7$ is $u_3$ and $\textbf{SD}(u_7,u_3)=0.5$. Before any traversal, we use $u_8$, which we assume to be the pivot vertex of partition 3, to estimate $\textbf{SD}^{*}(u_1,u_7)=\textbf{SD}(u_1,u_8)+\textbf{SD}(u_8,u_7)=0.6$.
Then the out-of-circle early-determination takes effect when we reach $u_2$: now $\textbf{SD}(u_1,u_2) + \textbf{SD}(u_7,u_3) = 0.7 > \textbf{SD}^{*}(u_1,u_7) = 0.6$; so by Theorem \ref{theorem:early_determine} we guarantee $\textbf{SD}(u_1,u_7)=0.6$.
Furthermore, we can use out-of-circle early-pruning to eliminate $r6$. Since $u_4$ posted $r6$, we first identify the 2-hop nearest distance of $u_4$ to be $\textbf{SD}(u_4,u_9)=0.4$.  In addition we know that $\textbf{SD}^{*}(u_1,u_4)=0.7$ since $u_2$ was reached when evaluating $r7$.
Then by out-of-circle early-pruning we are sure $r6$ is not the top-1 candidate at $u_2$ because $\textbf{SD}(u_1,u_2)+\textbf{SD}(u_4,u_9) > \max\textbf{SD}_{r6}$ and $\textbf{SD}^{*}(u_1,u_4) > \max\textbf{SD}_{r6}$.
\end{example}


%% - zhifeng, Aug 5
As a result, by enabling the more powerful out-of-circle pruning on top of the DA traversal, we present a complete solution for the social distance computation in Algorithm \ref{algo:2HopDistancePrune}.
In particular, lines 9-12 extend Theorems \ref{theorem:early_determine} and \ref{theorem:early_prune} by replacing the nearest-neighbour distance with the nearest 2-hop distance; lines 4-7 and 20-23 guarantee the correctness of the pruning: if $n'$ is a neighbor of $u$,
$\textbf{SD}^{*}(v,u)$ gets updated via a path that contains $n'$. The rest is in line with the original DA: lines 1-2 return the social distance if $\textbf{SD}(v,u)$ has already been determined in $S$, and lines 13-19 follow the graph traversal and distance estimation of DA.



\begin{example}\label{exmp:2hopAdditionUpdate}
In Example \ref{exmp:2hop}, we visit $u_1$, $u_3$, $u_2$ in order after evaluating $r7$ and $r6$, while $r8$ and $r11$ remain to be evaluated. The total score of $r8$ is 2.2 and we continue to evaluate $r11$ posted by $u_{11}$. Adopting the same assumption as in Example \ref{exmp:2hop}, we use $u_8$ as the pivot vertex of partition 3 and obtain $\textbf{SD}^{*}(u_1,u_{11}) = \textbf{SD}(u_1,u_8)+\textbf{SD}(u_8,u_{11})=0.8$. According to lines 4-7, we need to update $\textbf{SD}^{*}(u_1,u_{11}) = \textbf{SD}^{*}(u_1,u_{10})+\textbf{SD}(u_{10},u_{11}) = 0.4$ when we traverse to $u_3$, as $u_{10}$ is a neighbour of both $u_{11}$ and $u_3$.

If we do not perform lines 4-7, then $\textbf{SD}^{*}(u_1,u_{11}) = 0.8$ instead of $0.4$. Since $r8$ is the current top-1 candidate, $\max\textbf{SD}_{r11}=0.5$. As the traversal stops
at $u_2$ and the nearest 2-hop neighbour of $u_{11}$ is $u_3$, out-of-circle early-pruning would eliminate $r11$ because $\textbf{SD}(u_1,u_2) + \textbf{SD}(u_3,u_{11}) = 0.5 \geq \max\textbf{SD}_{r11}$ and $\textbf{SD}^{*}(u_1,u_{11}) > \max\textbf{SD}_{r11}$.
But $r11$ is the ultimate top-1 result and should not be pruned. Lines 20-23 in Algorithm \ref{algo:2HopDistancePrune} serve a similar purpose in guaranteeing the correctness of the out-of-circle pruning.
\end{example}


Out-of-circle pruning requires pre-computing the nearest 2-hop distance of each vertex, which is retrieved by using DA. The worst-case time complexity is $O(|V|+|E|)$ and the process can easily be parallelized. The space complexity is $O(|V|)$, so out-of-circle pruning brings almost no overhead compared to in-circle pruning.
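One simple way to materialize this per-vertex table is shown below, enumerating 2-edge paths around each vertex (a naive illustrative reading with our own identifiers; the paper's pre-computation runs DA instead, but the stored result is the same $O(|V|)$ table of one distance per vertex):

```python
def nearest_2hop_distances(graph):
    """For each vertex u, the smallest distance to any vertex u'' reachable
    via a 2-edge path u -> u' -> u'' (u'' != u); inf if no such vertex."""
    INF = float('inf')
    nn2 = {}
    for u, edges in graph.items():
        best = INF
        for u1, w1 in edges:                 # first hop u -> u1
            for u2, w2 in graph.get(u1, []):  # second hop u1 -> u2
                if u2 != u:                   # exclude walking back to u
                    best = min(best, w1 + w2)
        nn2[u] = best
    return nn2
```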
One may wonder whether the pruning can be further extended by using the 3-hop nearest distance and beyond. The answer is that doing so brings more complexity in both time and space.
%
%In Example~\ref{exmp:2hopAdditionUpdate}, we see it is crucial to perform the additional updates for the out-of-circle pruning.
If we use the 3-hop nearest distance, one has to ensure that $\textbf{SD}^{*}(v,u)$ is updated via a path that contains $n'$ whenever $n'$ is 2 hops away from $u$. However, to check whether $n'$ is a 2-hop neighbour of $u$,
we must either store all 2-hop neighbours of each vertex or validate the 2-hop relationship on the fly. Storing all 2-hop neighbours is not realistic for large graphs, whereas computing on the fly would significantly slow down the query processing.
Therefore, we conclude that out-of-circle pruning achieves the maximum pruning along the social dimension.
