\section{Estimating Random Walk Probability Distribution}\label{sec:prob-estimate1}
We want to estimate $p_{\ell}(s, i)$, the probability that a random walk of length $\ell$ starting from node $s$ lands at node $i$. For simplicity, we denote it by $p(i)$. Recall that the idea is to perform $K$ random walks of length $\ell$ from $s$; at the end, each node $i$ computes the fraction of walks that land at it. We first present the algorithm {\sc EstimateProbability}, and then analyze the accuracy of the estimate (cf. Lemma \ref{lem:probability-accuracy}). The pseudocode is given below in Algorithm \ref{alg:randomwalk}.
 
\begin{algorithm}[H]
\caption{\sc EstimateProbability}
\label{alg:randomwalk}
\textbf{Input:} Starting node $s$, length $\ell$, and number of walks $K$.\\
\textbf{Output:} An estimate $\tilde p(i)$ of $p(i)$ for each node $i$, with an explicit bound on the additive error.\\
\begin{algorithmic}[1]

\STATE Node $s$ creates $K$ random walk tokens and performs all $K$ walks simultaneously for $\ell$ steps as follows.

\FOR{each round from $1$ to $\ell$}   

\STATE Each node holding random walk tokens samples a random neighbor for each token it holds, and then sends to each neighbor only the {\em count} of tokens forwarded to it. (Note that tokens do not contain any node IDs.)
\ENDFOR

\STATE Each node $i$ counts the number of tokens that landed on it --- let this count be $\eta_i$.   

\STATE Each node estimates the probability $\tilde p(i)$ as $\frac{\eta_i}{K}$. 



\end{algorithmic}

\end{algorithm}  
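For intuition, the walk-and-count scheme above can be simulated sequentially. The following Python sketch (function and variable names are ours; it ignores the distributed message-passing aspect and simply replays $K$ walks) computes the estimates $\tilde p(i)$ on a small graph.

```python
import random
from collections import Counter

def estimate_probability(adj, s, ell, K, seed=0):
    """Run K independent random walks of length ell from s and return
    the empirical landing distribution tilde-p (dict: node -> fraction)."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(K):
        node = s
        for _ in range(ell):
            node = rng.choice(adj[node])  # step to a uniform random neighbor
        counts[node] += 1                 # the walk lands here: increment eta_i
    return {i: counts[i] / K for i in adj}

# Example: a 4-cycle, K walks of length 2 starting from node 0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
p_tilde = estimate_probability(adj, s=0, ell=2, K=20000)
```

On this bipartite cycle, a length-$2$ walk from node $0$ always ends at an even node, with $p(0) = p(2) = 1/2$; so $\tilde p(1) = \tilde p(3) = 0$ exactly, and $\tilde p(0)$, $\tilde p(2)$ should each be close to $0.5$.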


We show that for $K = \Theta(n^2 \log n/\eps^2)$, the algorithm {\sc EstimateProbability} (cf. Algorithm \ref{alg:randomwalk}) estimates $p(i)$ within an additive error of $\eps/n$ for each node $i$. In other words, if $\tilde p(i)$ is the estimate obtained by performing $\Theta(n^2 \log n/\eps^2)$ random walks, then $|\tilde p(i) - p(i)| \leq \eps/n$. This follows directly from the following lemma.
\begin{lemma}\label{lem:probability-accuracy}
If the probability of an event $X$ occurring is $p$, then in $t = 4 n^2 \log n/\eps^2$ independent trials, the fraction of times the event $X$ occurs is within $p \pm \frac{\eps}{n}$ with high probability.
\end{lemma}
\begin{proof}
Let $X_1, X_2, \ldots, X_t$ be $t$ independent, identically distributed $0$--$1$ random variables with $\Pr[X_i = 1] = p$ and $\Pr[X_i = 0] = 1-p$. The proof follows from standard Chernoff bounds:
$$ \Pr \left[\frac{1}{t} \sum_{i=1}^t X_i < (1 - \delta)p \right] < \left(\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}} \right)^{tp} < e^{-tp\delta^2/2}$$ and
$$\Pr \left[\frac{1}{t} \sum_{i=1}^t X_i > (1 + \delta)p \right] < \left(\frac{e^{\delta}}{(1+ \delta)^{(1+ \delta)}} \right)^{tp}.$$
The right-hand side of the upper tail bound further reduces to $2^{-\delta t p}$ for $\delta > 2e - 1$, and to $e^{-tp\delta^2/4}$ for $\delta < 2e - 1$.

Let us choose $t = 4n^2\log n/\eps^2$ and $\delta = \frac{\eps}{pn}$. Consider two cases: $pn \leq \eps$ and $pn > \eps$. When $pn \leq \eps$, we have $p \leq \eps/n$, so the lower tail bound holds trivially (the fraction cannot fall below $p - \eps/n \leq 0$). In this case $\delta \geq 1$. If $\delta > 2e - 1$, the upper tail bound is $2^{-\delta t p} = 2^{-\eps t/n} = 2^{-4n \log n/\eps} = \frac{1}{n^{(4n/\eps)}}$; if $1 \leq \delta \leq 2e - 1$, the upper tail bound is $e^{-tp\delta^2/4} = e^{-\frac{\log n}{p}} \leq \frac{1}{n^{(n/\eps)}}$, since $1/p \geq n/\eps$. Now consider the case $pn > \eps$. Here $\delta < 1$, so the lower and upper tail bounds are $e^{-tp\delta^2/2}$ and $e^{-tp\delta^2/4}$, respectively; the weaker of the two is $e^{-tp\delta^2/4} = e^{-\frac{tp\eps^2}{4p^2n^2}} = e^{-\frac{\log n}{p}} \leq 1/n$, since $p \leq 1$.
\end{proof}
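Lemma \ref{lem:probability-accuracy} is also easy to check numerically. The snippet below (parameter values are illustrative choices of ours) draws $t = 4n^2\log n/\eps^2$ Bernoulli trials with success probability $p$ and measures the additive error of the empirical fraction, which the lemma says is at most $\eps/n$ with high probability.

```python
import math
import random

n, eps, p = 8, 0.5, 0.3                    # illustrative values (our choice)
t = int(4 * n**2 * math.log(n) / eps**2)   # number of trials from the lemma

rng = random.Random(1)
hits = sum(rng.random() < p for _ in range(t))  # successes among t trials
fraction = hits / t
additive_error = abs(fraction - p)              # lemma: at most eps/n = 0.0625
```

Here $t \approx 2{,}129$, and the standard deviation of the empirical fraction is $\sqrt{p(1-p)/t} \approx 0.01$, well inside the allowed error $\eps/n = 0.0625$.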


\begin{lemma}\label{lem:time-randomwalk}
Algorithm {\sc EstimateProbability} (cf. Algorithm \ref{alg:randomwalk}) finishes in $O(\ell)$ rounds if the number of walks $K$ is at most polynomial in $n$.
\end{lemma}
\begin{proof}
To prove this, we first show that there is no congestion in the network when performing at most a polynomial number of random walks from $s$. In the algorithm, each node only needs to count the number of random walk tokens that end on it; nodes do not need to know from which source, or along which path, the tokens arrive. Hence there is no need to send the ID of the source node with a token. Since we consider the CONGEST model, each message can carry $O(\log n)$ bits, so a token count of up to a polynomial in $n$ fits in a single message per edge without any congestion. Therefore, one round suffices to perform one step of all $K$ walks in parallel, where $K$ is at most polynomial in $n$. This implies that $K$ random walks of length $\ell$ can be performed in $O(\ell)$ rounds. Hence the lemma.
\end{proof} 
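The counting argument in the proof can be illustrated concretely. In the sketch below (names are ours; it simulates one CONGEST round sequentially), each node splits its held tokens among random neighbors and each edge carries a single integer count, rather than $K$ individual tokens.

```python
import random
from collections import defaultdict

def one_walk_step(adj, tokens, rng):
    """One CONGEST round: each node samples a neighbor per held token,
    then sends a single count per edge (tokens carry no node IDs)."""
    messages = defaultdict(int)        # (u, v) -> count of tokens forwarded
    for u, k in tokens.items():
        for _ in range(k):
            messages[(u, rng.choice(adj[u]))] += 1
    new_tokens = defaultdict(int)
    for (_, v), c in messages.items():
        new_tokens[v] += c             # receiver just adds the integer count
    return dict(new_tokens)

# Path graph 0 - 1 - 2 with K = 1000 walks currently parked at node 1.
adj = {0: [1], 1: [0, 2], 2: [1]}
rng = random.Random(2)
tokens = one_walk_step(adj, {1: 1000}, rng)
# All 1000 tokens move to nodes 0 and 2, using only one message per edge.
```

Forwarding a count instead of individual tokens is exactly what makes the step congestion-free: a count of up to $\mathrm{poly}(n)$ tokens needs only $O(\log n)$ bits, i.e., one CONGEST message per edge per round.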