% !TEX root = pagerank.tex


\documentclass[preprint,12pt]{elsarticle}
%\usepackage{numcompress}
%
\usepackage{algcompatible}
\usepackage{algorithm}
%\usepackage{algorithmic}
%\usepackage{caption}


\usepackage{graphics}
\usepackage{graphicx}
\usepackage{epsfig}
 
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{amsmath}

\biboptions{comma,square}
 
\journal{Theoretical Computer Science}

\iffalse
%\setlength{\textheight}{9.4in} \setlength{\textwidth}{6.55in}
\setlength{\textheight}{9.2in} \setlength{\textwidth}{6.55in}
%\setlength{\topmargin}{0in}

\voffset=-0.9in
\hoffset=-0.8in
\fi

\newtheorem{theorem}{Theorem}[section]
%\newtheorem{definition}[theorem]{Definition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{claim}[theorem]{Claim}
%\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\theoremstyle{definition}\newtheorem{example}[theorem]{Example}
\theoremstyle{definition}\newtheorem{definition}[theorem]{Definition}
\theoremstyle{definition}\newtheorem{observation}[theorem]{Observation}


\newcommand{\comment}[1]{}
%\newcommand{\QED}{\mbox{}\hfill \rule{3pt}{8pt}\vspace{10pt}\par}
%\newcommand{\eqref}[1]{(\ref{#1})}
\newcommand{\theoremref}[1]{(\ref{#1})}
%\newenvironment{proof1}{\noindent \mbox{}{\bf Proof:}}{\QED}
%\newenvironment{observation}{\mbox{}\\[-10pt]{\sc Observation.} }%
%{\mbox{}\\[5pt]}


\def\m{{\rm min}}
%\def\m{\bar{m}}
\def\eps{{\epsilon}}
\def\half{{1\over 2}}
\def\third{{1\over 3}}
\def\quarter{{1\over 4}}
\def\polylog{\operatorname{polylog}}
\newcommand{\ignore}[1]{}
\newcommand{\eat}[1]{}
\newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor}
\newcommand{\ceil}[1]{\left\lceil #1 \right\rceil}

\newcommand{\algorithmsize}[0]{}

%---------------------
%  SPACE SAVERS
%---------------------

%\usepackage{times}
%\usepackage[small,compact]{titlesec}
%\usepackage[small,it]{caption}

\newcommand{\squishlist}{
 \begin{list}{$\bullet$}
  { \setlength{\itemsep}{0pt}
     \setlength{\parsep}{3pt}
     \setlength{\topsep}{3pt}
     \setlength{\partopsep}{0pt}
     \setlength{\leftmargin}{1.5em}
     \setlength{\labelwidth}{1em}
     \setlength{\labelsep}{0.5em} } }
\newcommand{\squishend}{
  \end{list}  }

%---------------------------------
% FOR MOVING PROOFS TO APPENDIX
%\usepackage{answers}
%%\usepackage[nosolutionfiles]{answers}
%\Newassociation{movedProof}{MovedProof}{movedProofs}
%\renewenvironment{MovedProof}[1]{\begin{proof}}{\end{proof}}

\def\e{{\rm E}}
\def\var{{\rm Var}}
\def\ent{{\rm Ent}}
\def\eps{{\epsilon}}
\def\lam{{\lambda}}
\def\bone{{\bf 1}}
\newcommand{\pr}{PageRank }



\begin{document}

\begin{frontmatter}

\title{Fast Distributed PageRank Computation\tnoteref{t1}}
\tnotetext[t1]{A preliminary version of the paper appeared in the proceedings of 14th International Conference on Distributed
Computing and Networking (ICDCN), pages 11-26, 2013 \cite{icdcn13}.}

%\begin{titlepage}

%\date{}

\author[atish]{Atish {Das Sarma}}
\ead{atish.dassarma@gmail.com}

\author[anisur]{Anisur Rahaman Molla}
\ead{anisurpm@gmail.com}

\author[gopal]{Gopal Pandurangan\corref{cor}}
\ead{gopalpandurangan@gmail.com}

\author[eli]{Eli Upfal\corref{label}}
\ead{eli\_upfal@brown.edu}
%
\cortext[cor]{Supported in part by the following research grants: Nanyang Technological University grant M58110000, Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 2 grant MOE2010-T2-2-082, and a grant from the US-Israel Binational Science Foundation (BSF).}

\cortext[label]{Partially supported by NSF BIGDATA Award IIS 1247581.}
%


\address[atish]{eBay Research Labs, eBay Inc., CA, USA.}
\address[anisur]{Division of Mathematical Sciences, Nanyang Technological University, Singapore 637371.}
\address[gopal]{Division of Mathematical Sciences, Nanyang Technological University, Singapore 637371 and Department of Computer Science, Brown University, Providence, RI 02912, USA.}
\address[eli]{Department of Computer Science, Brown University, Providence, RI 02912, USA.}



\begin{abstract}


Over the last decade, PageRank has gained importance in a wide range of applications and domains, ever since it first proved to be effective in determining node importance in large graphs (and was a pioneering idea behind Google's search engine). In distributed computing alone, the PageRank vector, and more generally random-walk-based quantities, have been used for several different applications, ranging from determining important nodes and load balancing to search and identifying connectivity structures.
Surprisingly, however, there has been little work towards designing provably efficient fully distributed algorithms for computing PageRank. The difficulty is that traditional iterative methods based on matrix-vector multiplication may not adapt well to the distributed setting, owing to communication bandwidth restrictions and slow convergence rates.
%Therefore, PageRank computation using Monte Carlo method is more appropriate in a distributed model with bandwidth constraints. 

In this paper, we present fast random walk-based distributed algorithms  for computing PageRanks in  general  graphs  and prove strong bounds on the round complexity.  We first present a distributed algorithm that  takes $O(\log n/\eps)$ rounds with high probability on any graph (directed or undirected), where $n$ is the network size and $\eps$ is the reset probability used in the PageRank computation (typically $\eps$ is a fixed constant).  We then present a faster algorithm that takes
$O(\sqrt{\log n}/\eps)$ rounds in undirected graphs. %We further show distributed algorithms with improved guarantees for undirected general graphs.
Both of the above algorithms are scalable: each node sends only a small ($\polylog n$) number of bits over each edge per round.
To the best of our knowledge, these are the first fully distributed algorithms for computing the PageRank vector with a provably efficient running time.
\end{abstract}
%\end{titlepage}

\begin{keyword}
PageRank, Distributed Algorithm,  Random Walk, Monte Carlo Method

\end{keyword}

\end{frontmatter}

\input{introduction}

\input{background}

%\input{related}

\section{A Distributed Algorithm for PageRank}\label{sec:simple-algo}

We present a Monte Carlo based distributed algorithm for computing the PageRank distribution of a network \cite{mcm-avrachenkov}. The main idea of our algorithm (formal pseudocode is given in Algorithm \ref{alg:simple-pagerank-walk}) is as follows. Perform $K$ random walks starting from each node of the network in parallel ($K$ will be fixed appropriately later). In each round, each random walk independently moves to a random (outgoing) neighbor with probability $1-\eps$ and, with the remaining probability $\eps$, terminates at the current node. Henceforth, we call such a random walk a {\em `PageRank random walk'}. In \cite{mcm-avrachenkov}, this random walk process is shown to be equivalent to one based on the PageRank transition matrix $P$, defined in Section 2.2. It is easy to see that picking each node as a starting point the same number of times (i.e., restarting walks according to the uniform distribution) accounts for the $(\eps/n) J$ term in Equation~\ref{equ:transition-prob}; between any two restarts, we simply have a random walk that terminates with probability $\eps$ in each step, which accounts for the $(1-\eps)Q$ term. Since $\eps$ is the probability that a walk terminates in each round, the expected length of every walk is $1/\eps$, and the length is at most $O(\log n/\eps)$ with high probability. Let every node $v$ count the number of visits (say, $\zeta_v$) of all the walks that pass through it. Then, after all walks in the network have terminated, each node $v$ estimates its PageRank $\pi_v$ as $\tilde \pi_v = \frac{\zeta_v \eps}{n K}$. Notice that $\frac{nK}{\eps}$ is the expected total number of visits, over all nodes, of all the $n K$ walks. The above idea of counting the number of visits is a standard technique for approximating PageRank (see, e.g., \cite{mcm-avrachenkov,ppr-bahmani2010}). We note that the algorithm in this section does not require any direct communication between non-neighbors.
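As a sanity check on the expected walk length, the following is a minimal Python sketch of a single PageRank random walk; the adjacency-dict graph encoding and the parameter values are our own illustrative choices, not part of the algorithm's specification.

```python
import random

def pagerank_walk(adj, start, eps, rng):
    """One 'PageRank random walk': at each visited node, the walk terminates
    with probability eps; otherwise it moves to a uniformly random outgoing
    neighbor.  Returns the list of visited nodes (including the start)."""
    path = [start]
    while rng.random() < 1 - eps:           # continue with probability 1 - eps
        path.append(rng.choice(adj[path[-1]]))
    return path

# The number of visited nodes is geometric with mean 1/eps; with eps = 0.25
# the empirical mean length over many walks should be close to 4.
rng = random.Random(42)
adj = {0: [1], 1: [2], 2: [0]}              # a toy directed 3-cycle
mean_len = sum(len(pagerank_walk(adj, 0, 0.25, rng))
               for _ in range(20000)) / 20000
```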

We show in the next section that the above algorithm computes the PageRank vector $\pi$ accurately (with high probability) for an appropriate value of $K$. The main technical challenge in implementing the above method is that performing many walks from each node in parallel can create a lot of congestion. Our algorithm uses a crucial idea to overcome this congestion: we show (cf. Lemma \ref{lem:congestion}) that there will be no congestion in the network even if we start a polynomial number of random walks from every node in


\newcommand{\mindegree}[0]{\delta}
\begin{algorithm}[H]
\caption{\sc Basic-PageRank-Algorithm}
\label{alg:simple-pagerank-walk}
\textbf{Input (for every node):} Number of nodes $n$ and reset probability $\eps$.\\
\textbf{Output:} Approximate PageRank of each node.\\

\textbf{[Each node $v$ starts $K = c\log n$ walks, where $c =  \frac{2}{\delta' \eps}$ and $\delta'$ is defined in Section \ref{sec:correctness}. All walks keep moving in parallel until they terminate. The termination probability of each walk is $\eps$, so the expected length of each walk is $1/\eps$.]}
\begin{algorithmic}[1]
\STATE Each node $v$ maintains a count variable ``$couponCount_v$" corresponding to the number of random walk coupons currently at $v$. Initially, $couponCount_v = K$, for starting $K$ random walks. 
\STATE Each node $v$ also maintains a counter $\zeta_v$ for the number of visits of random walks to it. Initially, $\zeta_v = K$, counting the initial visit of each of the $K$ walks starting at $v$. 

%\WHILE{there is at least one (alive) coupon}
\FOR{round $i = 1, 2, \ldots, B\log n/\eps$} \hspace{0.1in}//[for sufficiently large constant $B$]
\STATE Each node $v$ holding at least one alive coupon (i.e., $couponCount_v \neq 0$) does the following in parallel: 
%\COMMENT{Consider each coupon $C$ held by $v$ which is received in the $(i-1)$-th round.} 
\STATE For every outgoing neighbor $u$ of $v$, set $T^u_v = 0$  \hspace{0.5in}// [$T^u_v$ is the number of random walk coupons moving from $v$ to $u$ in round $i$] 
\FOR{$j = 1,2, \ldots, couponCount_v$} 
\STATE With probability $1 - \eps$, pick a uniformly random outgoing neighbor $u$ and set $T^u_v := T^u_v + 1$; with the remaining probability $\eps$, the coupon terminates at $v$
\ENDFOR
\STATE Send each count $T^u_v$ to the corresponding outgoing neighbor $u$. 
\STATE Each node $u$ computes:  $\zeta_u = \zeta_u + \sum_{v \in N(u)} T^u_v$.  \hspace{0.5in} //[the quantity $\sum_{v \in N(u)} T^u_v$ is the total number of visits of random walks to $u$ in the $i$-th round (from its neighbors)]
\STATE Each node $u$ updates its count variable: $couponCount_u = \sum_{v \in N(u)} T^u_v$
\ENDFOR
%\ENDWHILE

\STATE Each node $v$ outputs its PageRank as $\frac{\zeta_v \eps}{c n \log n}$.



\end{algorithmic}

\end{algorithm}

\noindent parallel. The main idea is based on the Markovian (memoryless) property of random walks and of the process that terminates them. To calculate how many walks move from node $i$ to node $j$, node $i$ only needs to know the number of walks that reached it; it does not need to know the sources of these walks or the transitions that they took before reaching node $i$. Thus it is enough to send the {\em count} of the number of walks that pass through a node. The algorithm runs until all the walks have terminated, which takes at most $O(\log n/\eps)$ rounds with high probability. Then every node $v$ outputs its PageRank as the ratio between the number of visits to it (denoted by $\zeta_v$) and the expected total number of visits, over all nodes, of all the walks $(\frac{nK}{\eps})$. We show that our algorithm computes approximate PageRanks in $O(\log n/\eps)$ rounds with high probability (cf. Theorem \ref{thm:main-round}). 
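To make the counting scheme concrete, here is a Python sketch that simulates the algorithm on a toy graph: in each round, only the aggregated counts move between nodes, never individual walks. The adjacency-dict encoding, the parameter choices, and the use of a single sequential simulator in place of $n$ parallel processors are illustrative assumptions of the sketch, not part of the algorithm.

```python
import math
import random

def basic_pagerank(adj, eps, c, rng):
    """Monte Carlo estimation of PageRank by counting random-walk visits.

    adj maps each node to its list of outgoing neighbors.  Each node starts
    K = c*log(n) walks; each walk terminates with probability eps per step.
    Node v's estimate is zeta_v * eps / (n * K)."""
    n = len(adj)
    K = max(1, round(c * math.log(n)))
    zeta = {v: K for v in adj}       # visits; each walk visits its start node
    coupons = {v: K for v in adj}    # alive coupons currently held at v
    while any(coupons.values()):
        incoming = {v: 0 for v in adj}       # T-counts aggregated per node
        for v, cnt in coupons.items():
            for _ in range(cnt):
                if rng.random() < 1 - eps:   # coupon survives this round
                    incoming[rng.choice(adj[v])] += 1
        for v in adj:                        # record this round's visits
            zeta[v] += incoming[v]
        coupons = incoming                   # only counts move between nodes
    return {v: zeta[v] * eps / (n * K) for v in adj}

# Toy check: on a symmetric directed cycle the true PageRank is uniform (0.25
# per node on a 4-cycle), so the estimates should all be close to 0.25.
rng = random.Random(7)
cycle = {i: [(i + 1) % 4] for i in range(4)}
est = basic_pagerank(cycle, eps=0.2, c=200, rng=rng)
```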

\subsection{Analysis}
Our algorithm computes the PageRank of each node $v$ as $\tilde \pi_v = \frac{\zeta_v \eps}{n K}$; we say that $\tilde \pi_v$ approximates the true PageRank $\pi_v$. We first focus on the correctness of our approach and then analyze the running time. 

\subsection{Correctness of PageRank Approximation}\label{sec:correctness}
The correctness of the above approximation follows directly from the main result of \cite{mcm-avrachenkov} (see Algorithm 4 and Theorem 1) and also from \cite{ppr-bahmani2010} (Theorem 1). In particular, it is noted in \cite{mcm-avrachenkov,ppr-bahmani2010} that the approximation is quite good even for $K = 1$. It is easy to see that the expected value of $\tilde \pi_v$ is $\pi_v$ (a formal proof is given in \cite{mcm-avrachenkov}). It then follows from Theorem~1 in \cite{ppr-bahmani2010} that $\tilde \pi_v$ is sharply concentrated around its expectation $\pi_v$.
We include the proof of the theorem below for the sake of completeness.


\begin{theorem}[Theorem~1 in \cite{ppr-bahmani2010}]\label{thm:pr-concentration-bahmani}
 $\Pr[\mid \tilde \pi_v - \pi_v \mid \geq \delta \pi_v] \leq e^{-nK\pi_v \delta'}$, where $\delta'$ is a constant depending on $\eps$ (the reset probability) and on $\delta$. 
\end{theorem}
\begin{proof}%(Theorem $1$ in \cite{ppr-bahmani2010})
For simplicity, we first show the result assuming $K = 1$; the proof for general $K$ follows in a similar way. Fix an arbitrary node $v$. Define $X_u$ to be $\eps$ times
the number of visits to $v$ in the walk started at $u$, $Y_u$ to be
the length of this walk, $W_u = \eps Y_u$, and $x_u = E[X_u]$. Then the $X_u$'s are independent, $\tilde \pi_v = \frac{\sum_u X_u}{n}$ and hence $\pi_v = \frac{\sum_u x_u}{n}$, $0 \leq X_u \leq W_u$, and $E[W_u] = 1$. It then follows that
\begin{align*}
E[e^{tX_u}] & \leq x_uE[e^{tW_u}] + 1 - x_u  \hspace{0.2in}\text{[From the definition of expectation]} \\ & = x_u (E[e^{tW_u}] - 1) + 1 \\ & \leq e^{- x_u (1 - E[e^{tW_u}])} \hspace{0.5in} \text{[Since $1 + y \leq e^y$ for any $y$]}
\end{align*}
Thus,
\begin{align*}
\Pr[\tilde \pi_v \geq (1 + \delta)\pi_v]  & \leq \frac{E[e^{tn\tilde \pi_v}]}{e^{tn(1 + \delta)\pi_v}} \hspace{0.5in} \text{[Markov's inequality]} \\ & = \frac{E[e^{t\sum_u X_u}]}{e^{tn(1 + \delta)\pi_v}} = \frac{\prod_u E[e^{tX_u}]}{e^{tn(1 + \delta)\pi_v}} \leq \frac{\prod_u e^{-x_u (1 - E[e^{tW_u}])}}{e^{tn(1 + \delta)\pi_v}} \\ & = \frac{e^{-\left(\sum_u x_u (1 - E[e^{tW_u}])\right)}}{e^{tn(1 + \delta)\pi_v}} = \frac{e^{-n\pi_v (1 - E[e^{tW}])}}{e^{tn(1 + \delta)\pi_v}}  \\ & = e^{-n\pi_v (1 + t(1 + \delta) - E[e^{tW}])} \leq e^{-n\pi_v \delta'}
\end{align*}
where $W = \eps Y$ is a random variable with $Y$ having geometric distribution with parameter $\eps$, and $\delta' = 1 + t(1 + \delta) - E[e^{tW}]$ is a constant depending on $\delta$ and $\eps$, and can be found by optimization over $t$. 

The proof for the other direction $\Pr[\tilde \pi_v \leq (1 - \delta)\pi_v]$ is similar. 
\end{proof}
From the above bound (cf. Theorem \ref{thm:pr-concentration-bahmani}), we see that for $K = \frac{2\log n}{\delta' n\pi_{min}}$, we have $\Pr[\mid \tilde \pi_v - \pi_v \mid \geq \delta \pi_v] \leq n^{-2}$ for any node $v$, where $\pi_{min}$ is the minimum PageRank value. By the union bound, the probability that there exists a node $v$ with $\mid \tilde \pi_v - \pi_v \mid \geq \delta \pi_v$ is at most $|V|n^{-2} = 1/n$. Hence, for all nodes $v$, $\mid \tilde \pi_v - \pi_v \mid \leq \delta \pi_v$ with probability at least $1 - 1/n$, i.e., with high probability. This implies that we get a $\delta$-approximation of the PageRank vector with high probability for $K = \frac{2\log n}{\delta' n\pi_{min}}$. Note that $\delta$ can be an arbitrary constant.  
%$\pi_v = \Omega(\ln n/n)$ which is actually slightly larger than the expected PageRank value $1/n$. For $K=O(\ln n/n\pi_{min})$, we can get a very good approximation of the full PageRank vector $\pi$.
Since the \pr of any node is at least $\eps/n$ (i.e., the minimum \pr value satisfies $\pi_{min} \geq \eps/n$), this gives $K = \frac{2\log n}{\delta' \eps}$. 
For simplicity, we define $c = \frac{2}{\delta' \eps}$, which is a constant assuming $\delta$ (and hence $\delta'$) and $\eps$ are constants. Therefore, it is enough to perform $c\log n$ \pr random walks from each node. We note that while this value of $K$ is sufficient to guarantee a constant approximation of the PageRanks, our algorithm permits a larger value of $K$, allowing for a tighter approximation with the same running time (this follows from Lemma \ref{lem:congestion} below). We now focus on the running time of our algorithm. 
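As a quick numeric illustration of this choice of $K$ (the values of $n$, $\eps$, and $\delta'$ below are arbitrary, picked only to make the formula concrete):

```python
import math

def walks_per_node(n, eps, delta_prime):
    """K = 2*log(n) / (delta' * eps): the number of walks each node starts,
    obtained from K = 2*log(n) / (delta' * n * pi_min) via pi_min >= eps/n."""
    return math.ceil(2 * math.log(n) / (delta_prime * eps))

# e.g. a million-node network, reset probability 0.15, assumed delta' = 0.5
K = walks_per_node(10**6, 0.15, 0.5)
```

Since $K$ grows only logarithmically in $n$, the per-node work stays modest even for very large networks.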

\subsection{Time Complexity}\label{sec:complexity}
From the above section, we see that our algorithm computes the PageRank vector $\pi$ in $O(\log n/\eps)$ rounds with high probability, provided we can perform $c\log n$ walks from each node in parallel without congestion. The lemma below guarantees that there is no congestion even if we perform a polynomial number of walks in parallel.   


\begin{lemma}\label{lem:congestion}
The algorithm can be implemented such that the message size is at most $O(\log n)$ bits per edge in every round. 
%There is no congestion in the network if every node starts at most a polynomial number of random walks in parallel. 
\end{lemma}
\begin{proof}
It follows from our algorithm that each node only needs to count the number of visits of random walks to itself. Since random walks are Markovian processes, it suffices to send, over each edge in each round, the {\em count} of the random walk coupons traversing that edge, rather than the coupons themselves. Since the total number of random walk coupons in the network is polynomially bounded, $O(\log n)$ bits suffice for each such count. 
\end{proof}
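To see the constants involved, a count can simply be sent in binary; even a polynomially large number of coupons in flight needs only $O(\log n)$ bits per edge (the numbers below are an illustrative toy calculation, not a bound used in the analysis):

```python
# Even if each of n nodes started n walks (n^2 coupons in flight in the
# worst case), the count traversing any single edge in one round fits in
# about 2*log2(n) bits -- i.e., O(log n).
n = 10**6
max_count = n * n                  # crude polynomial bound on total coupons
bits = max_count.bit_length()      # bits needed to encode any such count
```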

\begin{theorem}\label{thm:main-round}
The algorithm {\sc Basic-PageRank-Algorithm} (cf. Algorithm \ref{alg:simple-pagerank-walk}) computes a $\delta$-approximation of the PageRanks in $O(\frac{\log n}{\eps})$ rounds with high probability for any constant $\delta$. 
\end{theorem}
\begin{proof}
The algorithm outputs the PageRanks when all the walks terminate. Since the termination probability is $\eps$, each walk terminates after $1/\eps$ steps in expectation, and with high probability (via a Chernoff bound) within $O(\log n/\eps)$ rounds. By the union bound \cite{MU-book-05}, all walks (there are only polynomially many) terminate
within $O(\log n/\eps)$ rounds with high probability. Since all the walks move in parallel and there is no congestion (by Lemma \ref{lem:congestion}), the algorithm outputs the PageRanks after $O(\log n/\eps)$ rounds with high probability. The correctness of the PageRank approximation follows from \cite{mcm-avrachenkov,ppr-bahmani2010}, as discussed in Section \ref{sec:correctness}; the $\delta$-approximation guarantee follows from Theorem~\ref{thm:pr-concentration-bahmani}.
\end{proof}


\input{undirected-algo}


%\input{directed-algo}


\input{conclusion}

%\newpage


%  \let\oldthebibliography=\thebibliography
  %\let\endoldthebibliography=\endthebibliography
  %\renewenvironment{thebibliography}[1]{%
   % \begin{oldthebibliography}{#1}%
      %\setlength{\parskip}{0ex}%
   %   \setlength{\itemsep}{0ex}%
  %}%
  %{%
    %\end{oldthebibliography}%
  %}
  
%{ \small
%\tiny
\bibliographystyle{abbrv}
\bibliography{Distributed-RW}


\end{document}
