% !TEX root = pagerank.tex

\section{A Faster Distributed \pr Algorithm\\ (for  Undirected Graphs)}\label{sec:undirected}
We present a faster algorithm for PageRank computation in {\em undirected} graphs. Our algorithm's time complexity holds in the bandwidth-restricted communication model and requires only $O(\log^3 n)$ bits to be sent over each link in each round.  %First we present an algorithm for {\em undirected} graphs and in Section 5 we modify it slightly to work for directed graphs.   Our algorithm's time complexity for the undirected graphs holds in the {\sc CONGEST} model, whereas for directed graphs a slightly better time complexity applies only in the {\sc LOCAL} model. 
 

We use a Monte Carlo method similar to the one described in Section \ref{sec:simple-algo} to estimate PageRanks. Recall that the PageRank of a node $v$ is the ratio between the number of visits of \pr random walks to $v$ itself and the sum of all the visits over all nodes in the network. In the previous section (cf. Section \ref{sec:simple-algo}) we showed that in $O(\log n/\eps)$ rounds, one can approximate PageRank accurately by walking in a naive way in general graphs. We now outline how to speed up our previous algorithm (cf. Algorithm \ref{alg:simple-pagerank-walk}) using an idea similar to the one used in \cite{DasSarmaNPT10}. In \cite{DasSarmaNPT10}, it is shown how one can perform {\em a} simple random walk in an undirected graph\footnote{In each step, an edge is taken from
the current node $x$ with probability $1/d(x)$, where
$d(x)$ is the degree of $x$.} of length $L$ in $\tilde O(\sqrt{LD})$ rounds w.h.p. ($D$ is the diameter of the network). The high-level idea of their algorithm is to perform `many' short walks in parallel and later `stitch' them to get the desired longer walk. To apply this idea in our case, we modify our approach accordingly, as speeding up ({\em many}) PageRank random walks is different from speeding up {\em one} simple random walk. 
%In other words, we have to speed up many random walks of length $\log n/\eps$ (see Section \ref{sec:simple-algo}) in the model where only a limited sized message communication is allowed per round. 
We show that our improved algorithm (cf. Algorithm \ref{alg:pr-walk-undirected}) approximates PageRanks in $O(\frac{\sqrt{\log n}}{\eps})$ rounds. 
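Before describing the speedup, it may help to see the underlying Monte Carlo estimator in isolation. The following is a minimal centralized Python sketch (ours, not the distributed protocol; `adj` is a toy adjacency-list graph and all names are our own): every node starts several \pr walks that reset (terminate) with probability $\eps$ at each step, and a node's estimate is its share of all walk visits.

```python
import random
from collections import defaultdict

def estimate_pagerank(adj, eps=0.2, walks_per_node=200, seed=0):
    """Monte Carlo PageRank sketch: each node starts `walks_per_node` walks;
    every step the walk terminates with probability eps, else it moves to a
    uniformly random neighbor. The estimate for v is the fraction of all
    walk visits that land on v."""
    rng = random.Random(seed)
    visits = defaultdict(int)
    for s in adj:
        for _ in range(walks_per_node):
            v = s
            visits[v] += 1               # the walk visits its start node
            while rng.random() >= eps:   # continue with probability 1 - eps
                v = rng.choice(adj[v])
                visits[v] += 1
    total = sum(visits.values())
    return {v: visits[v] / total for v in adj}
```

On a symmetric graph such as a cycle, the estimates concentrate around the uniform distribution, matching the degree-proportionality of PageRank on undirected graphs.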

%\textbf{Anisur: Random walk or PageRank walk---which one would be appropriate to maintain throughout the paper??}

%Gopal --- Say upfront that we use the termilogy "PageRank random walk" for the modified random walk process.
%Anisur: Said in pagerank background inside a footnote

\subsection{Description of Our Algorithm}

%*Gopal --- I changed 1 random walk from each node to $\Theta(log n)$ walks from each node. Check throughout.\\
%*Anisur: Shall we put the particular value $c\log n$, where $c = \frac{2}{\delta' \eps}$ throughout? or only inside Algo Box would be fine?   

In Section \ref{sec:simple-algo}, we showed that by performing $\Theta(\log n)$ walks (in particular, $c\log n$ walks, where $c = \frac{2}{\delta' \eps}$ and $\delta'$ is defined in Section \ref{sec:correctness}) of length $\log n/\eps$ from each node, one can estimate the \pr vector $\pi$ accurately with high probability. In this section we focus on the problem of efficiently performing $\Theta(n\log n)$ walks ($\Theta(\log n)$ from each node), each of length $\log n/\eps$, and counting the number of visits of these walks to different nodes.
Throughout, by ``random walk'' we mean the ``PageRank random walk" (cf. Section \ref{sec:simple-algo}). %For simplicity of our analysis, without loss of generality we may consider to perform a single random walk of length $L = n\frac{1}{\eps}$. 

The main idea of our algorithm is to first perform `many' short random walks in parallel, then `stitch' those short walks to get the longer walks of length $\log n/\eps$, and subsequently `count' the number of visits of these random walks to different nodes. In particular, our algorithm runs in three phases. In the first phase, each node $v$ performs $d(v) \eta$ ($d(v)$ is the degree of $v$) independent `short' random walks of length $\lambda$ in parallel. While the values of the parameters $\eta$ and $\lambda$ will be fixed later in the analysis, they will be set to $O(\log^2 n/\eps)$ and $\sqrt{\log n}$, respectively. This is done naively by forwarding $d(v)\eta$ `coupons' carrying the ID of $v$ from $v$ (for each node $v$) for $\lambda$ steps via random walks. Besides the node's ID, we also assign a coupon number ``$Coupon_{ID}$" to each coupon to keep track of the path followed by the random walk coupon. The intuition behind performing $d(v)\eta$ short walks is that the PageRank of a node in an undirected graph is proportional to its degree \cite{undirected12}. Therefore we can easily bound the number of visits of random walks to any node $v$ (cf. Lemma \ref{lem:visit-bound}). At the end of this phase, if a node $u$ holds $k$ random walk coupons with the ID of a node $v$, then $u$ is a destination of $k$ walks starting at $v$. Note that just after this phase, $v$ has no knowledge of the destinations of its own walks, but it can learn them by direct communication from the destination nodes: the destination nodes (at most $d(v)\eta$ of them) hold the ID of the source node $v$, so they can contact the source node via {\em direct} communication. We show that this takes at most a constant number of rounds, as only a polylogarithmic number of bits are sent (since $\eta$ will be at most $O(\log^2 n/\eps)$). We show that the first phase takes $O(\frac{\lambda}{\eps})$ rounds (cf. Lemma \ref{lem:phase1}).
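The coupon-distribution step of the first phase can be sketched as a centralized Python simulation (our illustration, not the distributed protocol; function and variable names are our own). Each coupon records its path so that the destination, the source ID, and the route needed later for reverse tracing are all recoverable:

```python
import random

def phase1_coupons(adj, eta, lam, eps, rng):
    """Phase 1 sketch (centralized simulation): each node v launches
    d(v)*eta coupons; every coupon walks for at most lam steps, stopping
    early with probability eps per step (the PageRank reset). Returns, for
    every source v, the list of coupon paths; path[-1] is the coupon's
    destination, which in the protocol reports back to v directly."""
    coupons = {v: [] for v in adj}
    for v in adj:
        for _ in range(len(adj[v]) * eta):
            path, u = [v], v
            for _ in range(lam):
                if rng.random() < eps:   # walk terminates; u keeps the coupon
                    break
                u = rng.choice(adj[u])
                path.append(u)
            coupons[v].append(path)
    return coupons
```

In the real algorithm the coupons move hop by hop in parallel rounds; the simulation only captures what is produced, namely $d(v)\eta$ independent short walk records per node.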

In the second phase, starting at a source node $s$, we `stitch' some of the $\lambda$-length walks prepared in the first phase. Note that we do this for every node in parallel, as we want to perform $\Theta(\log n)$ walks from each node. The algorithm starts from $s$ and samples one of the coupons distributed by $s$ in Phase 1. %We now discuss how to sample one such coupon randomly and go to the destination vertex of that coupon. One simple way to do this is as follows: 
At the end of Phase 1, each node $v$ knows the destination nodes' IDs of its $d(v)\eta$ short walks (or coupons). When a coupon needs to be sampled, node $s$ chooses a coupon number sequentially (in order of the coupon IDs) from its set of unused coupons and informs the destination node holding that coupon (which will be the next stitching point) by direct communication, since $s$ knows the ID of the destination node at the end of the first phase. 
Let $C$ be the sampled coupon and $v$ be the destination node of $C$. The source $s$ then sends a `token' to $v$ and deletes the coupon $C$ so that $C$ will not be sampled again at $s$. This is because our goal is to produce independent random walks of a given length, so naturally we do not reuse the same short walks; in other words, this preserves randomness when we concatenate short walks. The process then repeats: the node $v$ currently holding the token samples one of the coupons it distributed in Phase 1 and forwards the token to the destination of the sampled coupon, say $u$. Nodes $v, u$ are called `connectors' --- they are the endpoints of the short walks that are stitched. A crucial
observation is that the walks of length $\lambda$ used to distribute the corresponding coupons from $s$ to $v$ and from $v$ to $u$ are independent random walks. Therefore, we can stitch them to get a random walk of length $2\lambda$. By repeating this process we can generate random walks of length $3\lambda, 4\lambda, \ldots$. We do this until we have completed a length of at least $O(\log n/\eps) - \lambda$. Then, we complete the rest of the walk by the naive random walk algorithm. Note that at the beginning of Phase 2, we first check the (remaining) length of each walk and then stitch accordingly. We show that Phase 2 finishes in $O(\frac{\log n}{\lambda \eps} + \lambda)$ rounds (cf. Lemma \ref{lem:phase2}).
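The stitching logic can be sketched centrally as follows (our illustration under simplifying assumptions: the reset is folded into the token's pre-drawn geometric length $L$, exactly as the tokens do in the algorithm, so the precomputed segments here are plain $\lambda$-step neighbor walks; all names are ours):

```python
import random

def stitch_long_walk(adj, segments, start, lam, eps, rng):
    """Phase 2 sketch: segments[v] holds unused lam-step walk segments
    starting at v (the role of Phase 1's coupons). Each segment is consumed
    at most once, preserving independence when segments are concatenated;
    the final L < lam steps are done naively, as in the algorithm."""
    L = 1                                 # total walk length ~ Geometric(eps)
    while rng.random() >= eps:
        L += 1
    walk, v = [start], start
    while L >= lam and segments.get(v):
        seg = segments[v].pop()           # sample an unused coupon of v
        walk.extend(seg)                  # seg = the lam nodes after v
        L -= lam
        v = walk[-1]
    for _ in range(L):                    # finish the remainder naively
        v = rng.choice(adj[v])
        walk.append(v)
    return walk
```

Popping a segment mirrors the deletion of a sampled coupon: a short walk is never reused, so the concatenation is a genuine random walk of the drawn length.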

In the third phase, we count the number of visits of all the random walks to each node. As we have discussed, we have to create many short walks of 

%%%
\begin{algorithm}[H]%\small %\footnotesize 
\caption{\sc Improved-PageRank-Algorithm}
\label{alg:pr-walk-undirected}
\textbf{Input (for every node):}  Number of nodes $n$, reset probability $\eps$ and short walk length $\lambda = \sqrt{\log n}$.\\
\textbf{Output:} Approximate PageRank of each node.\\

%Number of walks $K = c\log n$ (where $c =  \frac{2}{\delta' \eps}$ and $\delta'$ is defined in Section \ref{sec:correctness})

\textbf{Phase 1: (Each node $v$ performs $d(v)\eta = O(d(v)\log^2 n/\eps)$
random walks of length $\lambda = \sqrt{\log n}$. % + r_i$ where $r_i$ (for each $1\leq i\leq \eta$) is chosen independently at random in the range $[0,\lambda-1]$. 
At the end of this phase, there are $O(d(v)\log^2 n/\eps)$ (not necessarily
distinct) nodes holding a `coupon' containing the ID of $v$.)}
\begin{algorithmic}[1]

\STATE Each node $v$ constructs $B\, d(v)\log^2 n/\eps$ messages $C = \langle ID_v,  \lambda, Coupon_{ID} \rangle$, where $B$ is a sufficiently large constant. \hspace{0.1in}// [We will refer to these messages created by node $v$ as `coupons created by $v$'.]

\FOR{$i=1$ to $\lambda$}

\STATE This is the $i$-th iteration. Each node $v$ holding at least one coupon does the following in parallel: 
\FOR{each coupon $C$ held by $v$} \hspace{0.5in}// [i.e., the coupons received by $v$ in the $(i - 1)$-th iteration.]  
\STATE Generate a random number $r \in [0, 1]$. 

\IF {$r< \eps$}

\STATE Terminate the coupon $C$ and keep it; $v$ itself is then the destination. 

\ELSE 

\STATE Pick a neighbor $u$ uniformly at random for the coupon $C$ and forward $C$ to $u$. 

%\COMMENT{Note that $v$ does this for every coupon simultaneously in the $i$-th round.} 

\ENDIF
\ENDFOR

\noindent \COMMENT{Note that an iteration could require more than 1 round, because of congestion}

\ENDFOR

%If $C$'s desired walk length is at most $i$, then $v$ keeps this coupon ($v$ is the desired destination).

\STATE Each destination node sends its ID to the source node, since it holds the source node's ID. \hspace{0.5in}// [Destination nodes hold the short random walk coupon(s) $C$ and contact the source nodes through {\em direct} communication.]

\algstore{myalg}
\end{algorithmic}


\textbf{Phase 2: (Stitch short walks by token forwarding. Stitch approximately $\Theta(\sqrt{\log n}/\eps)$ walks, each of
length  $\sqrt{\log n}$. Recall that each node wants to perform $K = c\log n$ long random walks, where $c =  \frac{2}{\delta' \eps}$ and $\delta'$ is defined in Section \ref{sec:correctness})}
\end{algorithm}

%algorithm continued...

\begin{algorithm}%[H]

\begin{algorithmic}[1]
\algrestore{myalg}

%\FOR{all nodes $v$ in parallel}
\STATE  Each node $v$ generates $K$ ``tokens''  $\langle ID_v,  L \rangle$, where $L$ is a random integer taking value $x$ with probability $\eps(1-\eps)^{x-1}$ \hspace{0.3in}//  [$L$ is drawn from the geometric distribution with parameter $\eps$, i.e., from the distribution of the lengths of \pr random walks.]

\FOR{$i = 1, 2, \ldots, B_1\sqrt{\log n}/\eps$} \hspace{0.1in}//[for sufficiently large constant $B_1$]

\STATE Each node $v$ holding at least one {\em token} with $L>0$ does the following in parallel: 

\STATE For each token $\langle ID_v,  L \rangle$ with $L\geq \lambda$, send $\langle ID_v,  L - \lambda, Coupon_{ID} \rangle$ to $u$, where $u$ is sampled using a coupon of sequence number $Coupon_{ID}$ from the set of the coupons distributed by $v$ in Phase~1,  and delete the token  $\langle ID_v,  L \rangle$ \hspace{0.5in}//  [$v$ sends to $u$ via the {\em direct} communication.] 

\STATE For each such received message $\langle ID_v,  L - \lambda, Coupon_{ID} \rangle$, node $u$ memorizes $(ID_v, Coupon_{ID})$ and creates a token $\langle ID_u,  L - \lambda \rangle$    \hspace{0.3in}//  [Each node $u$ memorizes it for backtracking in Phase 3.] 

\ENDFOR

\STATE \label{step:count} For the remaining tokens  $\langle ID_v,  L \rangle$ with $0 < L < \lambda$: for each of them, walk naively in parallel for the remaining $L$ steps. 

\end{algorithmic}

\textbf{Phase 3: (Counting the number of visits of  short walks to a node)}
\begin{algorithmic}[1]

\STATE Each node $w$ maintains a counter $\zeta_w$ to keep track of the number of visits of walks at $w$.

\STATE Each node $u$ which memorizes coupon IDs $(ID_v, Coupon_{ID})$ in Phase 2, does the following in parallel: 

%\STATE Start from each connector node, except the source node $s$.

\STATE  For each such coupon, starting from $u$, trace the corresponding short random walk in reverse.% up to $v$.% of the corresponding short walk. %(Recall that each connector node is the destination of some short walk).

\STATE Count the number of visits to any node $w$ during this reverse tracing and add it to $\zeta_w$. Also count the visits made during the `naive walking' steps (Step \ref{step:count} in Phase 2) and add them to $\zeta_w$.    

%\ENDFOR

\STATE Each node $v$ outputs its PageRank $\pi_v$ as $\frac{\zeta_v \eps}{c n \log n}$. 
  
\end{algorithmic}

\end{algorithm}
%%%

\noindent length $\lambda$ from each node. Some short walks may not be used to form the long walks of length $\log n/\eps$. We show a technique to count the visits of all the used short walks to different nodes. Note that after completion of Phase 2, all the $\Theta(n\log n)$ long walks ($\Theta(\log n)$ from each node) have been stitched. During stitching (i.e., in Phase 2), each connector node (which is also the endpoint of a short walk) remembers the source node and the $Coupon_{ID}$ of the short walk. Then, starting from each connector node, we walk in the reverse direction (i.e., retrace the short walk backwards) to the respective source node, for all connector nodes in parallel. During the reverse walk, we simply count the visits to nodes. It is easy to see that this takes at most $O(\frac{\lambda}{\eps})$ rounds, as in Phase~1 (cf. Lemma \ref{lem:phase3}). We now analyze the running time of our algorithm {\sc Improved-PageRank-Algorithm}; the compact pseudocode is given in Algorithm \ref{alg:pr-walk-undirected}. 
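Centrally, the counting phase reduces to tallying every position of every stitched walk; the following Python sketch (ours, with our own names) also mirrors the algorithm's final normalization, since the output $\frac{\zeta_v \eps}{c n \log n}$ is, up to the expected total $c n \log n/\eps$, just the visit fraction of $v$:

```python
from collections import Counter

def pagerank_from_visits(walks):
    """Phase 3 sketch (centralized): in the protocol every node on a
    retraced short walk increments its local counter zeta_w; tallying all
    positions of all long walks is the centralized equivalent. Normalizing
    by the total visit count yields the PageRank estimate."""
    zeta = Counter()
    for walk in walks:
        zeta.update(walk)        # every position of the walk is one visit
    total = sum(zeta.values())
    return {v: zeta[v] / total for v in zeta}
```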

\subsection{Analysis}
First we determine the value of $\eta$, i.e., the number of coupons (short walks) needed from each node to successfully answer all the stitching requests. Notice that $d(v)\eta$ coupons may not be enough if $\eta$ is not chosen suitably large: the token might be forwarded to some node $v$ many times in Phase 2, so that all the coupons distributed by $v$ in the first phase get deleted. In other words, $v$ is chosen as a connector node many times and all its coupons are exhausted.
If this happens, the stitching process cannot progress. To avoid this problem, we use an easy upper bound on the number of visits to any node $v$ by a random walk of length $\ell$ in an undirected graph: $d(v)\ell$ times. 
%But this bound is not enough to get the desired running time, as it does not say anything about the distribution of the connector nodes. We use the following idea to overcome it: Instead of nodes performing walks of length $\lambda$, each such walk $i$ do a walk of length $\lambda + r_i$ where $r_i$ is a random number in the range $[0, \lambda-1]$. Since the random numbers are independent for each walk, each short walks are now of a random length in the range $[\lambda, 2\lambda-1]$. This modification is needed to claim that each node $v$ will be visited as a connector only $O(\ell/\lambda)$ times if it is visited at most $\ell$ times (cf. Lemma \ref{lem:connector-bound}). 
Therefore each node $v$ will be visited as a connector node at most $O(d(v)\ell)$ times. This implies that no node has to prepare too many short walks. 

%Gopal --- note that I have changed the number of visits to $O(d(v)\ell)$.

%Now we  bound the number of short walks required from each node to successfully complete Phase 2.  This is because of performing many short walks from each node will increases the algorithm's running time. 
The following lemma bounds the number of visits to every node when we do $\Theta(\log n)$ walks
from each node, each of length $\log n/\eps$  (note that this is the maximum length of a long walk, w.h.p.). %We show that no node $v$ is visited more than $\tilde{O}(d(v)\sqrt{\ell}/\lambda)$ times as a connector node of a $\ell$-length random walk. For this we need a technical result on  random walks that bounds the number of times a node will be visited in a $\ell$-length random walk. Consider a \pr walk on a connected graph on n vertices. Let $N_x^t (y)$ denote the number of visits to vertex $y$ by time $t$, given the walk started at vertex $x$. 
%Now, consider $k$ walks, each of length $\ell$, starting from (not necessary distinct) nodes $x_1, x_2, \ldots ,x_k$. 

%The following proof is wrong. The correct proof: SUppose we perform so many long walks in parallel.
%The bound on the number of visits to each node follows because in each round
%a node $v$ can get only at most $d(v)$ walks in expectation (and hence $O(d(v)\log n)$ w.h.p.) and the long walk length is $O(\log n/\eps)$. So total number of visits is $O(d(v)\log^2 n/\eps)$ w.h.p.


\begin{lemma}\label{lem:visit-bound}
If {\em each} node performs $\Theta(\log n)$ random walks of length $\log n/\eps$, then no node $v$ is visited more than $O(\frac{d(v) \log^2 n}{\eps})$ times with high probability. 
\end{lemma}
\begin{proof}
We show that the above bound on the number of visits holds even if each node $v$ performs $\Theta(d(v)\log n)$ random walks of length $\log n/\eps$. Suppose each node $v$ starts $\Theta(d(v) \log n)$ simple random walks in parallel. We claim that after any given number of steps $i$, the expected number of random walks at node $v$ is still $\Theta(d(v)\log n)$. Consider the random walk's transition probability matrix $A$. Then $A{\bf x} = {\bf x}$ holds for the stationary distribution ${\bf x}$, whose entry at $v$ is $\frac{d(v)}{2m}$, where $m$ is the number of edges in the graph. Since the number of random walks started at any node $v$ is proportional to its stationary probability, the expected number of random walks at any node after $i$ steps remains the same. We show that this holds with high probability using a Chernoff bound, since the random walks are independent. For each random walk coupon $C$, any $i = 1, 2, \ldots, \log n/\eps$, and any vertex $v$, we define $W_C^i(v)$ to be the indicator random variable that is 1 if the random walk $C$ is at $v$ after the $i$-th step (and 0 otherwise). Let $W^i(v)=\sum_{C: \text{random walk}} W_C^i(v)$, i.e., $W^i(v)$ is the total number of random walks at $v$ after the $i$-th step. By a Chernoff bound, for
any vertex $v$ and any $i$,
$$\Pr[W^i(v)\geq 18 d(v)\log{n}]\leq 2^{-3 d(v)\log{n}} \leq n^{-3}.$$
It follows that the probability that there exist a vertex $v$ and an
integer $1\leq i\leq \log n/\eps$ such that $W^i(v)\geq 18 d(v)\log{n}$ is
at most $|V(G)| (\log n/\eps) n^{-3}\leq \frac{1}{n}$, since $|V(G)| =
n$ and $\log n/\eps \leq n$. Therefore, $W^i(v) \leq 18 d(v)\log{n}$ for all $v$ and all $i$, with high probability.  

Now, if each node starts $\Theta(\log n)$ independent random walks that terminate with probability $\eps$ in each step, the number of random walks at any node $v$ in any step is dominated from above by the bound above for $\Theta(d(v)\log n)$ walks, because there are at most $n\log n$ random walk coupons in the network in each step. Therefore, the total number of visits by all random walks to any node $v$ is bounded by $O(d(v) \log^2 n/\eps)$ w.h.p., since there are $\log n/\eps$ steps in total.
%
%Suppose we perform so many long walks in parallel. In other words, we can say that each node performing one walk of length $\Theta({\log^2 n/\eps})$. 
%The bound on the number of visits to each node follows because in each round
%a node $v$ can get only at most $d(v)$ walks in expectation (since we have an undirected graph) and hence $O(d(v)\log n)$ w.h.p. (via Chernoff bound). Since long walk length is $\Theta(\log^2 n/\eps)$, so total number of visits is $O(d(v)\log^3 n/\eps)$ w.h.p. 
 \end{proof}

It is now clear from the above lemma (cf. Lemma~\ref{lem:visit-bound}) that $\eta = O(\log^2 n/\eps)$, i.e., each node $v$ has to prepare $O(d(v)\log^2 n/\eps)$ short walks of length $\lambda$ in Phase 1. We now bound the running time of our algorithm (cf. Algorithm \ref{alg:pr-walk-undirected}) using the following lemmas.  
\begin{lemma}\label{lem:phase1}
Phase 1 finishes in $O(\frac{\lambda}{\eps})$ rounds.  
\end{lemma} 
\begin{proof}
It is known from  Lemma \ref{lem:visit-bound} that in Phase 1, each node $v$ performs $O(d(v)\log^2 n/\eps)$ walks of length $\lambda$. %We proof a bit stronger statement that each node $v$ can in fact perform $\eta d(v)$ of length $2\lambda$ and still finish in $O(\eta \lambda)$ rounds with high probability. 
Assume that initially each node $v$ starts with $d(v)\log^2 n/\eps$ coupons (or messages) and each coupon takes a random walk according to the \pr transition probability. As in Lemma \ref{lem:visit-bound}, after any given number of steps $j$ $(1 \leq j \leq \lambda)$, the expected number of coupons at any node $v$ is $d(v)\log^2 n/\eps$. Therefore, in expectation the number of messages, say $X$, that want to go through an edge in any round is at most $2 \log^2 n/\eps$ (counting both endpoints of the edge); this is because the number of messages the edge receives from one endpoint, say $u$, is in expectation exactly the number of messages at $u$ divided by $d(u)$. Using a Chernoff bound we get $\Pr[X\geq 24 \log^2 n/\eps] \leq 2^{-4 \log^2 n/\eps} \leq n^{-4}$. By a union bound, the probability that there exist an edge and an
integer $1\leq j\leq \lambda$ such that $X \geq 24\log^2 n/\eps$ is
at most $|E(G)| \lambda n^{-4}\leq \frac{1}{n}$, since $|E(G)|\leq n^2$ and $\lambda < n$. Hence the number of messages that go through any edge in any round is at most $24 \log^2 n/\eps = O(\log^2 n/\eps)$ with high probability. So at most $O(\log^3 n/\eps)$ bits are sent over any edge in each round w.h.p. (a message contains a source ID and a coupon ID, each of which can be encoded using $\log n$ bits). Since our model allows polylogarithmic-size (i.e., $O(\log^3 n)$-bit) messages per edge per round, we can extend all the random walks from length $i$ to length $i+1$ in $O(1/\eps)$ rounds. Therefore, walks of length $\lambda$ take $O(\lambda/\eps)$ rounds, as claimed.   
\end{proof}

\begin{lemma}\label{lem:sample-coupon}
With the message size $O(\log n)$ in Phase 2, one stitching step from each node in parallel can be done in one round.  
\end{lemma} 
\begin{proof}
 Each node knows the destination addresses and $Coupon_{ID}$s of all of its short walks (or coupons). Each time a source or connector node wants to stitch, it chooses one of its unused coupons (created in Phase 1) sequentially, in order of the coupon IDs. It then contacts the destination node holding the coupon through {\em direct} communication and informs it that it is the next connector node (stitching point). Therefore, in each round, it is sufficient for any node to send to a connector node $u$ the maximal $Coupon_{ID}$ with destination $u$ that it has used so far. This implies that a message size of $O(\log n)$ bits per edge suffices for this process. Since we assume the network allows $O(\log^3 n)$ bits of congestion per edge, one such stitching step from each node in parallel finishes in one round.
\end{proof}


\iffalse
\begin{lemma}\label{lem:visit-bound}
$(${\sc Random Walk Visits Lemma}$)$. For any nodes $x_1, x_2, \ldots, x_k$, \[\Pr\bigl(\exists y\ s.t.\
\sum_{i=1}^k N_\ell^{x_i}(y) \geq 32 \ d \sqrt{k\ell+1}\log n+k\bigr) \leq 1/n\,.\]
\end{lemma}
\begin{proof}
This is a lengthy proof and given in \cite{DasSarmaNPT10} with full details. The main technical results on which it relies is due to Lyons paper (see Lemma 3.4 and remark 4 in \cite{Lyons}). To apply the Lyons lemma in our context, it suffices to show that the \pr walk is reversible. As we discussed earlier in Lemma \ref{lem:phase1} that we are terminating the \pr walk with probability $\eps$ and with remaining probability it is like a simple random walk. So the reversibility follows from there. 
\end{proof}

The above lemma says that the number of visits to each node can be bounded. However, for each node,
we are only interested in the case where it is used as a connector (the stitching points). The lemma below
shows that the number of visits as a connector can be bounded as well; i.e., if any node appears $t$ times in the
walk, then it is likely to appear roughly $t/\lambda$ times as connectors. 

\begin{lemma}\label{lem:connector-bound}
For any vertex $v$, if $v$ appears in the walk at most $t$ times then it appears as a connector node at most $t(\log n)^2/\lambda$ times with probability at least $1-1/n^2$.
\end{lemma}
\begin{proof}
Intuitively, this argument is simple, since the connectors are spread out in steps of length approximately $\lambda$. However, there might be some periodicity that results in the same node being visited multiple times but exactly at $\lambda$-intervals. To overcome this we crucially use the fact that the algorithm uses short walks of length $\lambda + r$ (instead of fixed length $\lambda$) where $r$ is chosen uniformly at random from $[0, \lambda -1]$. Then the proof can be shown via constructing another process equivalent to partitioning the $\ell$ steps into intervals of $\lambda$ and then sampling points from each interval. The detailed proof can be found in Lemma~2.7 in~\cite{DasSarmaNPT10}.
\end{proof}


%The above lemma says that the number of visits to each node can be bounded. However, for each node,
%we are only interested in the case where it is used as a connector (the stitching points). This is easy to see that if any node appears $t$ times in the walk, then it is likely to appear roughly $O(t/\lambda)$ times as connectors.  

We can fix the value of the parameter $\eta$. From the above Lemma \ref{lem:visit-bound}, it follows that any node $v$ is used as a connector node at most $O(\frac{d(v) \log n}{\eps})$ times with high probability. Therefore it is sufficient for each node $v$ to have $O(\frac{d(v)\log n}{\eps})$ short walks to successfully complete Phase 2. We may want to slightly modify Phase 1 of our algorithm due to this bound on the number of visits as a connector node. Recall that, in Phase 1, each node prepares $\eta d(v)$ short walks of length $\lambda$ and we are choosing $\eta = \frac{\log n}{\eps}$. However, as we consider $\polylog n$  {\sc CONGEST} model, an immediate corollary follows by removing the parameter $\eta$ from the Lemma \ref{lem:phase1}.


\begin{corollary}\label{cor:phase1}
Phase 1 takes $O(\frac{\lambda}{\eps})$ rounds with high probability for performing $O(\frac{d(v)\log n}{\eps})$ walks from each node $v$ of length $\lambda$. 
\end{corollary}
\begin{proof}
We see from the above paragraph that $\eta = O(\frac{\log n}{\eps})$. Since we consider the $\polylog n$ {\sc CONGEST} model, so in any round any node can send or receive a {\em polylogarithmic} number of messages through an edge. Therefore,  we can say that $O(\frac{\log n}{\eps})$ walks can be performed in $O(\frac{1}{\eps})$ rounds.  Hence the claimed bound follows together with the Lemma \ref{lem:phase1}. 
\end{proof}   

\fi    

\begin{lemma}\label{lem:phase2}
Phase 2 finishes in $O(\frac{\log n}{\lambda \eps} + \lambda)$ rounds.  
\end{lemma} 
\begin{proof}
Phase 2 stitches short walks of length $\lambda$ to get a long walk of length $B_1\log n/\eps$, where the constant $B_1$ is chosen sufficiently large so that all the random walks terminate within this length with high probability. Therefore, it is sufficient to stitch approximately $O(\log n/\lambda \eps)$ times from each node in parallel. Since each stitching step can be done in one round (cf. Lemma \ref{lem:sample-coupon}), the stitching process takes $O(\frac{\log n}{\lambda \eps})$ rounds. It remains to bound the time for completing the random walks at the end of Phase 2 (Step \ref{step:count} in Algorithm \ref{alg:pr-walk-undirected}). In this step, the remaining length of each random walk is less than $\lambda$, and the walks are executed in parallel. Here we do not need to send any IDs or counters with the coupon; we simply send the count of the tokens traversing an edge in a given round to the appropriate neighbors (i.e., in a similar way to Algorithm \ref{alg:simple-pagerank-walk}). Each token corresponds to a random walk for the length remaining to complete its length $L$. This takes at most $O(\lambda)$ rounds. Hence, Phase 2 finishes in $O(\frac{\log n}{\lambda \eps} + \lambda)$ rounds.  
\end{proof}

%*Gopal ---- Is Sample-Coupon formally defined as  a procedure?\\
%*Anisur: Not really. I changed it now. Please check it! Also in Phase 2 rounds, lemma \ref{lem:phase2} holds without randomness? I removed the word 'w.h.p.' from it.   

\begin{lemma}\label{lem:phase3}
Phase 3 finishes in $O(\frac{\lambda}{\eps})$ rounds.  
\end{lemma} 
\begin{proof}
Recall that each short walk is of length $\lambda$. Phase 3 simply traces back the short walks from each node in parallel. So we can perform all the reverse walks in parallel in $O(\lambda/\eps)$ rounds (in the same way as all the short walks are performed in parallel in Phase 1). Therefore, in accordance with Lemma \ref{lem:phase1}, Phase 3 finishes in $O(\frac{\lambda}{\eps})$ rounds.  
\end{proof}

Notice that the {\em coupon IDs} are useful in this context, since the random walks starting at $v$ and ending at $u$ may have followed different paths; $u$ knowing only the number of random walks coming from $v$ is insufficient to backtrace them. Moreover, the nodes on the paths need to know the $Coupon_{ID}$ as well, for the same reason. We are now ready to show the main result of this section. 

\begin{theorem}\label{thm:main}
 The {\sc Improved-PageRank-Algorithm} (cf. Algorithm \ref{alg:pr-walk-undirected}) computes a $\delta$-approximation of the PageRanks with high probability for any constant $\delta$ and finishes in $O(\frac{\sqrt{\log n}}{\eps})$ rounds. 
\end{theorem}
\begin{proof}
The algorithm {\sc Improved-PageRank-Algorithm} consists of three phases, whose running times we have bounded separately above. Summing the running times of the three phases (Lemmas \ref{lem:phase1}, \ref{lem:phase2}, and \ref{lem:phase3}), the total time taken by the {\sc Improved-PageRank-Algorithm} is $O(\frac{\lambda}{\eps}+ \frac{\log n}{\lambda \eps} + \lambda + \frac{\lambda}{\eps})$ rounds. Choosing $\lambda = \sqrt{\log n}$ gives the claimed bound of $O(\frac{\sqrt{\log n}}{\eps})$. The correctness and approximation guarantee follow from the previous section.    
\end{proof}
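The choice of $\lambda$ in the proof balances the first two terms; making this explicit:
\[
\frac{\lambda}{\eps}+\frac{\log n}{\lambda\,\eps}+\lambda
\;=\;O\!\left(\frac{1}{\eps}\left(\lambda+\frac{\log n}{\lambda}\right)\right),
\]
which is minimized (up to constants) when $\lambda = \frac{\log n}{\lambda}$, i.e., $\lambda = \sqrt{\log n}$, giving $O(\frac{\sqrt{\log n}}{\eps})$ rounds; the standalone $\lambda$ term is dominated since $\eps \leq 1$.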

  \iffalse
  \paragraph{\textcolor{red}{Note:}} For balanced directed graph we can use same technique as undirected graph. My guess is that for the directed graph whose ratio between in-degree and out-degree is constant, we can apply the same approach.(?)  



\subsection{A Faster Algorithm for Directed Graphs}
We extend the algorithm of Section 4 to directed graphs. Recall that we want to perform one \pr random walk of length $\log n/\eps$ from each node. The basic idea of the algorithm is same i.e. create some short walks from each node in parallel and later stitch them to get long walk and then counting the number of visits to different nodes. The main difference of undirected and directed graph is that there could be large discrepancy between {\em indegree} and {\em outdegree} of a directed graph (in shorthand we use {\em indeg} and {\em outdeg} respectively). Therefore, for any node whose indeg and outdeg ratio is large enough, it is very likely that many random walk coupons would visit at that node in every round but it can not forward all the coupons to its outgoing neighbors in the next round. So there will be large congestion on those nodes. This is because of our congest model assumption. We now use two crucial idea to overcome this problem. The first idea is to bound the number of times any node is visited in a random walk of length $\log n/\eps$. We show that any node $v$ will be visited at most $\frac{\log n}{\eps}\cdot d^+_v$ times w.h.p. (we denote indeg by $d^+_v$ and outdeg by $d^-_v$ of a node $v$) if we perform one walk of length $\log n/\eps$ from each node. This bound is also trivially holds for number of visits as a connector nodes. This implies that we have to create $\frac{\log n}{\eps}\cdot d^+_v$ short walks of length $\lambda$ from each node in Phase 1. With the help of overlay communication we will show that one can perform so many short random walks in $O(\frac{\lambda}{\eps})$ rounds in congest $\polylog$ model. Next two phases of the algorithm namely, Phase 2 (stitching short walks) and Phase 3 (counting number of visits) can be done by same approach as undirected graph. Therefore in a directed graph we only concentrate on Phase 1 of the algorithm.   
     
\paragraph{Phase 1.}
\begin{lemma}\label{lem:phase1-directed}
No node $v$ will be visited more than $\frac{\log n}{\eps}\cdot d^+_v$ times with high probability if we perform one walk of length $\log n/\eps$ from each node.
\end{lemma}
\begin{proof}
The claim should follow in a way similar to the undirected case, since in any round the expected number of random walks visiting a node $v$ is proportional to its indeg $d^+_v$. A FORMAL PROOF IS STILL REQUIRED.
\end{proof}
Therefore each node $v$ needs to perform $\frac{\log n}{\eps}\cdot d^+_v$ short walks of length $\lambda$. We now present an approach that performs this many walks of length $\lambda$ from each node in $O(\frac{\lambda}{\eps})$ rounds w.h.p. This is where we exploit the overlay communication. The overall idea is the same as in the undirected case; the only problematic nodes are those whose indeg-to-outdeg ratio is very large, i.e., $\frac{d^+}{d^-} > \polylog n$. We treat such nodes as follows. Suppose $v$ is one such node. If in round $i-1$, $v$ receives more than $d^-_v\cdot \polylog n$ coupons, then $v$ forwards only $d^-_v\cdot \polylog n$ of them in the usual manner described above. For each remaining coupon $C$, $v$ generates a random number $r \in [0, 1]$. If $r< \eps$, $v$ terminates the coupon $C$ and keeps it, since $v$ itself is then the destination. Otherwise, $v$ selects an outgoing neighbor $u$ uniformly at random and increments the counter $T^v_u$, where $T^v_u$ denotes the number of coupons (or random walks) chosen to move from $v$ to the neighbor $u$ in the $i$-th round. Finally, $v$ sends the counter $T^v_u$, together with the ID of $v$ and the round number $i$, to each respective outgoing neighbor $u$.

At the end of this phase, each node $u$ that holds a coupon containing the ID of some node $v$ and a round number $r$ contacts $v$ over the overlay link, mentioning the round number $r$, and asks for the original source node $s$ of the short walk. Then $v$ chooses uniformly at random one source node from the set of coupons it received in round $r-1$ (those it forwarded in round $r$ only as a count, together with its own ID and the round number) and sends that ID to $u$ through the overlay. The random choice is made without reusing the same coupon twice. Now the destination node $u$ knows the true source node of the short walk. We formally write the pseudocode of Phase 1 below.

\begin{quote}
\begin{algorithmic}[1]
\STATE Initially, each node $v$ constructs $\eta = \frac{\log n}{\eps}\cdot d^+_v$ messages containing its ID and also the desired walk length of $\lambda$. We will refer to these messages created by node $v$ as `coupons created by $v$'.

\FOR{$i = 1$ to $\lambda$}

\STATE This is the $i$-th round. Each node $v$ does the following: 

\IF{$(\frac{d^+_v}{d^-_v} \leq \polylog n)$ OR $(\frac{d^+_v}{d^-_v} > \polylog n$ and the number of coupons held by $v$ is $\leq d^-_v\cdot \polylog n)$} 
\STATE Consider each coupon $C$ held by $v$ that was received in the $(i - 1)$-th round. If the coupon $C$'s desired walk length is at most $i$, then $v$ keeps this coupon ($v$ is the desired destination). Otherwise, $v$ generates a random number $r \in [0, 1]$. If $r< \eps$, $v$ terminates the coupon $C$ and keeps it, since $v$ itself is then the destination. Otherwise, $v$ picks a neighbor $u$ uniformly at random for the coupon $C$ and forwards $C$ to $u$ after incrementing its counter. Note that $v$ does this for every coupon simultaneously in the $i$-th round. 

\ELSE
\STATE $v$ forwards $d^-_v\cdot \polylog n$ of the coupons in the usual manner described above.

\STATE For each remaining coupon $C$, $v$ generates a random number $r \in [0, 1]$. If $r< \eps$, $v$ terminates the coupon $C$ and keeps it, since $v$ itself is then the destination. Otherwise, $v$ selects an outgoing neighbor $u$ uniformly at random and increments the counter $T^v_u$, where $T^v_u$ denotes the number of coupons (or random walks) chosen to move from $v$ to the neighbor $u$ in the $i$-th round. Finally, $v$ sends the counter $T^v_u$, together with the ID of $v$ and the round number $i$, to each respective outgoing neighbor $u$. 

\ENDIF

\ENDFOR

\STATE Each node $u$ that holds a coupon containing the ID of some node $v$ and a round number $r$ contacts $v$ over the overlay link, mentioning the round number $r$, and asks for the original source node $s$ of the short walk. Then $v$ chooses uniformly at random one source node from the set of coupons it received in round $r-1$ (those it forwarded in round $r$ only as a count, together with its own ID and the round number) and sends that ID to $u$ through the overlay. The random choice is made without reusing the same coupon twice. Now the destination node $u$ knows the true source node of the short walk.  

\STATE Each destination node sends its ID directly to the source node through the overlay, as it now knows the source node's ID.  

\end{algorithmic}
\end{quote}
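To make the congestion handling above concrete, the following is a minimal sequential Python sketch (all names hypothetical, not part of the formal algorithm) of the per-node step inside the loop: up to the $d^-_v\cdot \polylog n$ budget, coupons are forwarded individually; each remaining coupon is either terminated with probability $\eps$ or aggregated into a single counter $T^v_u$ per outgoing neighbor.

```python
import random

def phase1_round_at_node(coupons, out_neighbors, budget, eps, i):
    """One round of Phase 1 at a congested node v (sequential sketch).

    Returns:
      kept      - coupons whose walk ends at v,
      forwarded - (neighbor, coupon) pairs sent individually,
      counters  - neighbor -> T^v_u, i.e., coupons sent only as a count
                  (together with v's ID and the round number i).
    """
    kept, forwarded, counters = [], [], {}
    sent = 0
    for c in coupons:
        # Walk ends here if its desired length is reached, or with prob. eps.
        if c["length"] <= i or random.random() < eps:
            kept.append(c)
            continue
        u = random.choice(out_neighbors)   # uniform outgoing neighbor
        if sent < budget:                  # within the d^-_v * polylog n cap
            forwarded.append((u, c))
            sent += 1
        else:                              # congested: aggregate into T^v_u
            counters[u] = counters.get(u, 0) + 1
    return kept, forwarded, counters
```

Note that every coupon is accounted for exactly once: it is either kept at $v$, forwarded individually, or represented in some counter $T^v_u$.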

\paragraph{Phase 2.}
Stitching using the overlay can be done in the same way as in the undirected case above. Phase 2 then takes $O(\frac{\log n}{\eps \lambda})$ rounds. 
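As an illustration, here is a sequential sketch (hypothetical names) of the stitching step: each source's long walk is assembled by repeatedly jumping, via the overlay, to the endpoint of an unused $\lambda$-length short walk of the current connector node.

```python
def stitch_walks(short_walk_dest, sources, num_stitches):
    """Sequential sketch of Phase 2: assemble long walks from short ones.

    short_walk_dest: node -> list of endpoints of its unused lambda-length
    short walks (assumed precomputed in Phase 1).
    Returns source -> list of connector nodes (one every lambda steps).
    """
    long_walks = {}
    for s in sources:
        cur, connectors = s, [s]
        for _ in range(num_stitches):
            cur = short_walk_dest[cur].pop()  # consume one short walk of cur
            connectors.append(cur)            # cur is the next connector
        long_walks[s] = connectors
    return long_walks
```

Each short walk is consumed at most once (`pop`), mirroring the without-reuse rule in Phase 1.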

\paragraph{Phase 3.}
Counting the visits is the same as in the undirected case above. It costs $O(\lambda)$ rounds. 
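The Monte Carlo estimate itself is simple: a node's PageRank is its share of all visits. The sketch below (hypothetical names) counts only the nodes recorded on each walk, whereas the actual Phase 3 counts every intermediate visit along the walks.

```python
from collections import Counter

def estimate_pagerank(walks):
    """Sketch of Phase 3: PageRank estimate of v = visits to v / total visits."""
    visits = Counter()
    for seq in walks.values():
        visits.update(seq)          # tally every recorded visit
    total = sum(visits.values())
    return {v: c / total for v, c in visits.items()}
```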

\begin{theorem}\label{thm:main-directed}
 The algorithm {\sc Improved-\pr-Algorithm}2 (cf. Algorithm \ref{alg:pr-walk-undirected}) computes the PageRanks accurately and finishes in $O(\frac{\sqrt{\log n}}{\eps})$ rounds with high probability. 
\end{theorem}
\begin{proof}
The total time of Phases 1--3 is $O\big(\frac{\lambda}{\eps} + \frac{\log n}{\eps \lambda} + \lambda\big)$, which is $O\big(\frac{\sqrt{\log n}}{\eps}\big)$ by choosing $\lambda = \sqrt{\log n}$ to balance the first two terms. 

\end{proof}
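As a quick numeric sanity check (with hypothetical sample values of $n$ and $\eps$), the two dominant terms $\frac{\lambda}{\eps}$ and $\frac{\log n}{\eps\lambda}$ balance exactly when $\lambda^2 = \log n$, i.e., $\lambda = \sqrt{\log n}$:

```python
import math

n, eps = 2 ** 20, 0.1                 # sample values for illustration only
lam = math.sqrt(math.log(n))          # lambda chosen to balance the terms
phase1 = lam / eps                    # O(lambda / eps) rounds
phase2 = math.log(n) / (eps * lam)    # O(log n / (eps * lambda)) rounds
phase3 = lam                          # O(lambda) rounds
assert abs(phase1 - phase2) < 1e-9    # the two dominant terms coincide
assert phase3 <= phase1               # phase 3 is never the bottleneck
```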

\fi



\endinput