\section{Algorithm for $\kappa$ Random Walks}\label{sec:k-algo}
The previous section was devoted to efficiently performing a single random walk of length $\tau$ (the mixing time) to obtain a sample from the stationary distribution. Many applications, however, require a large number of random walk samples, since more samples allow a better estimate of the quantity at hand. In this section, we focus on obtaining several random walk samples. Specifically, we consider the scenario where we want to compute $\kappa$ independent walks, each of
length $\tau$, from different sources $s_1, s_2, \ldots, s_{\kappa}$. We show that {\sc Single-Random-Walk} (cf. Algorithm~\ref{alg:single-random-walk}) can be extended to solve this problem. \\

\noindent {\bf A trivial extension:} One immediate approach is to run the {\sc Single-Random-Walk} algorithm for each source node sequentially. In this case, one can perform as many random walks as needed (i.e., $\kappa$ can be arbitrary), even from a single source node. The running time is $\kappa$ times that of the {\sc Single-Random-Walk} algorithm. Recall that the running time of the {\sc Single-Random-Walk} algorithm is $\tilde{O}(\sqrt{\tau \Phi})$ rounds. Therefore, this trivial extension finishes in $\tilde{O}(\kappa \sqrt{\tau \Phi})$ rounds w.h.p. \\

\noindent {\bf A faster extension:} We can extend the {\sc Single-Random-Walk} algorithm in a different way to obtain a faster algorithm when the source nodes are chosen randomly with probability proportional to the node degrees. In particular, the algorithm {\sc Many-Random-Walks} (pseudocode is given in Algorithm~\ref{alg:many-random-walk}), which computes $\kappa$ walks, essentially repeats the {\sc Single-Random-Walk} algorithm for each source with one common/shared first phase; by overlapping this computation, it completes faster than the above bound. The crucial observation is that Phase~1 needs to be performed only once while still ensuring that all walks are independent. The high-level analysis is as follows.

\begin{algorithm}[H]\label{many-walks algorithm}
\caption{\sc Many-Random-Walks}
\label{alg:many-random-walk}
\textbf{Input:} Source nodes $s_1, s_2, \ldots, s_{\kappa}$ (chosen randomly with probability proportional to the node degrees), the desired walk length $\tau$, and a parameter $\mu$.\\
\textbf{Output:} Each destination node of the walk outputs the ID of its corresponding source.\\

\textbf{Case~1.} When $\mu \ge \tau$. [Recall that we set $\mu=(32 \sqrt{\kappa \tau \Phi+1}\log n + \kappa)(\log n)^2$.]
\begin{algorithmic}[1] 
\STATE  Run the naive random walk algorithm, i.e., the sources find walks of length $\tau$ simultaneously by sending tokens.

\end{algorithmic}

\textbf{Case~2.} When $\mu < \tau$. \\
\textbf{Phase 1: (Each node $v$ performs $d\log n$ random walks of length $\mu + r_i$, where each $r_i$ (for $1\leq i \leq d\log n$) is chosen independently and uniformly at random from the range $[0, \mu -1]$. At the end of the process, there are $d\log n$ (not necessarily distinct) nodes holding a ``coupon'' containing the ID of $v$.)}
\begin{algorithmic}[1]
\FOR{each node $v$}
\STATE  Perform $d\log n$ walks of length $\mu + r_i$, as in Phase~1 of algorithm {\sc Single-Random-Walk}. 
\ENDFOR

\end{algorithmic}


\textbf{Phase 2: (Stitch $\Theta (\tau/\mu)$ short walks for each source node $s_j$)}
\begin{algorithmic}[1]
\FOR{$j = 1$ to $\kappa$}
\STATE  Consider source $s_j$. Use Phase~2 of algorithm {\sc Single-Random-Walk} to stitch the short walks from the shared Phase~1 into a walk of length $\tau$ from $s_j$, never reusing a short walk.
\STATE When the stitching terminates, the sampled destination outputs the ID of its source $s_j$.
\ENDFOR
\end{algorithmic}

\end{algorithm}
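To make the shared-Phase-1 idea concrete, the following Python sketch simulates both phases on a static graph. It is only an illustration, not the distributed algorithm: the function names (\texttt{phase1}, \texttt{stitch}), the centralized pool of endpoints standing in for coupons, and the naive-step fallback when a node's short walks run out are all simplifications introduced here; the paper's analysis bounds the probability that such exhaustion occurs at all.

```python
import random

def phase1(nodes, neighbors, num_walks, mu, rng):
    """Phase 1 (sketch): each node v precomputes `num_walks` short walks of
    length mu + r_i, with r_i uniform in [0, mu - 1].  We record only each
    walk's endpoint; in the distributed algorithm the endpoint instead holds
    a 'coupon' carrying v's ID."""
    pool = {v: [] for v in nodes}
    for v in nodes:
        for _ in range(num_walks):
            length = mu + rng.randrange(mu)
            u = v
            for _ in range(length):
                u = rng.choice(neighbors[u])
            pool[v].append(u)
    return pool

def stitch(source, tau, mu, pool, neighbors, rng):
    """Phase 2 (sketch): extend a walk from `source` to length >= tau by
    consuming one unused short walk of the current endpoint at a time.
    A consumed short walk is never reused, which keeps the walks of
    different sources independent.  If a node's short walks are exhausted,
    this sketch falls back to single naive steps."""
    cur, covered = source, 0
    while covered < tau:
        if pool[cur]:
            cur = pool[cur].pop()   # consume a coupon; never reuse it
            covered += mu           # the short walk covers at least mu steps
        else:
            cur = rng.choice(neighbors[cur])  # fallback: one naive step
            covered += 1
    return cur
```

Note that \texttt{phase1} is called once and the resulting pool is shared by every call to \texttt{stitch}, mirroring how {\sc Many-Random-Walks} amortizes Phase~1 across all $\kappa$ sources.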

\paragraph{{\sc Many-Random-Walks}} Let $\mu=(32 \sqrt{\kappa \tau \Phi+1}\log n + \kappa)(\log n)^2$. If
$\mu \ge \tau$, run the naive random walk algorithm, i.e., the sources find walks of length $\tau$ simultaneously by sending tokens.
Otherwise, do the following. Perform Phase~1 once, as before. Then modify Phase~2 of {\sc Single-Random-Walk} to create multiple walks, one at a time; i.e., in the second phase, we stitch the short walks together to get a
walk of length $\tau$ starting at $s_1$, then do the same for $s_2$, $s_3$, and so on. We show that the {\sc Many-Random-Walks} algorithm finishes in $\tilde O\left(\min \{\sqrt{\kappa \tau \Phi}, \kappa + \tau \} \right)$ rounds with high probability. Moreover, the algorithm can perform a constant fraction of $\frac{\mu nd\log n}{\tau}$ walks of length $\tau$ if the source nodes are chosen randomly with probability proportional to the node degrees. (Note that it would be possible to get $\frac{\mu nd\log n}{\tau}$ random walks if we could use/stitch all the short walks created in Phase~1.) This result is stated in Theorem~\ref{thm:kwalks} (Section~\ref{sec:results}), and the formal proof is given below in Section~\ref{sub:mrw-proof}.



\subsection{Proof of Theorem~\ref{thm:kwalks} (restated below)}
\label{sub:mrw-proof}
\begin{theorem}\label{thm:kappabound} {\sc Many-Random-Walks} (cf. Algorithm~\ref{alg:many-random-walk}) finishes in
$\tilde O\left(\min \{\sqrt{\kappa \tau \Phi}, \kappa + \tau\} \right)$
rounds with high probability, and can perform $\kappa = O(\frac{n^2d^2\Phi}{\tau})$ random walks, assuming the source nodes are chosen randomly with probability proportional to the node degrees (for a $d$-regular dynamic graph, this means that the source nodes are chosen
uniformly at random).
\end{theorem}
\begin{proof}
We first show the correctness of the algorithm. We show that the {\sc Many-Random-Walks} algorithm samples a node from (close to) the uniform distribution of the vertex set for every source node. In {\sc Many-Random-Walks}, we create `short' random walks (i.e., Phase~1) only once. Then for each source node, we stitch those short walks together to get a walk of length $\tau$. This is the same as repeating the {\sc Single-Random-Walk} algorithm for each source node. Hence, it follows from the correctness proof of the {\sc Single-Random-Walk} algorithm that the {\sc Many-Random-Walks} algorithm samples nodes from (close to) the uniform distribution for every source node. \\

Recall that we assume $\mu=(32 \sqrt{\kappa \tau \Phi+1}\log n+ \kappa)(\log n)^2$. First, consider the case where $\mu \ge \tau$. In this case, $\tilde O(\min \{\sqrt{\kappa \tau \Phi}+ \kappa, \sqrt{\kappa \tau}+ \kappa +\tau\} )=\tilde O(\sqrt{\kappa \tau}+ \kappa +\tau)$. By Lemma~\ref{lem:visit-bound}, each
node $x$ will be visited at most $\tilde O(d (\sqrt{\kappa \tau}+ \kappa))$ times w.h.p. Therefore, using the same argument as in the proof of Lemma~\ref{lem:phase1},
the congestion is $\tilde O(\sqrt{\kappa \tau} + \kappa)$ with high probability. Since the walk length is $\tau$, it follows from the idea of pipelining the tokens that {\sc Many-Random-Walks}
takes $\tilde O(\sqrt{\kappa \tau} + \kappa +\tau)$ rounds as claimed. Since $2\sqrt{\kappa \tau} \le \kappa + \tau$, this bound reduces
to $\tilde O(\kappa +\tau)$. 

Now, consider the other case where $\mu < \tau$. In this case,
$\tilde O( \min\{\sqrt{\kappa \tau \Phi} + \kappa, \sqrt{\kappa \tau}+ \kappa +\tau\})=\tilde O(\sqrt{\kappa \tau \Phi}+ \kappa)$. Phase~1 takes $O(\mu) = \tilde O(\sqrt{\kappa \tau \Phi} + \kappa)$ rounds. The stitching in Phase~2 takes $\tilde O(\kappa \Phi\tau /\mu) = \tilde O(\sqrt{\kappa \tau \Phi})$ rounds. Since $\kappa \Phi\tau /\mu \geq \kappa \Phi \geq \kappa$, the total number of rounds required is $\tilde O(\sqrt{\kappa \tau \Phi})$ as claimed.\\
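For completeness, the bound on the stitching cost follows by substituting the stated value of $\mu$ and using $\mu \ge 32\sqrt{\kappa \tau \Phi + 1}\,(\log n)^3$:
\[
\frac{\kappa \Phi \tau}{\mu}
\;\le\; \frac{\kappa \tau \Phi}{32\sqrt{\kappa \tau \Phi + 1}\,(\log n)^3}
\;\le\; \frac{\sqrt{\kappa \tau \Phi}}{32 (\log n)^3}
\;=\; \tilde O\big(\sqrt{\kappa \tau \Phi}\big).
\]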

We note that each node creates $d\log n$ short walks of length (roughly) $\mu$ in Phase~1, so there are a total of $nd\log n$ short walks. Moreover, a short walk cannot be reused in more than one long walk. We use a technical result of \cite{SarmaMP12} showing that a constant fraction of all the short walks can be utilized (without reusing any short walk) to successfully create long walks of length $\tau$ (i.e., the stitching process continues without exhausting the short walks of any node), provided the source nodes are chosen randomly with probability proportional to the node degrees. To perform $\kappa$ random walks of length $\tau$, the algorithm must successfully stitch $\kappa \tau/\mu$ short walks. Therefore, if the source nodes are chosen randomly proportional to the node degrees, then $\kappa \tau/\mu$ can be as large as a constant fraction of all the short walks, i.e., $\kappa \tau/\mu = \Theta(nd\log n)$. Substituting $\mu = (32 \sqrt{\kappa \tau \Phi+1}\log n+ \kappa)(\log n)^2$, we get $\kappa = \tilde O(\frac{n^2d^2\Phi}{\tau})$.
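To spell out the last substitution (keeping only the dominant $\sqrt{\kappa \tau \Phi}$ term of $\mu$ and suppressing polylogarithmic factors):
\[
\frac{\kappa \tau}{\mu} = \Theta(nd \log n)
\;\Longrightarrow\;
\kappa \tau = \tilde O\!\left(nd \sqrt{\kappa \tau \Phi}\right)
\;\Longrightarrow\;
\sqrt{\kappa \tau} = \tilde O\!\left(nd \sqrt{\Phi}\right)
\;\Longrightarrow\;
\kappa = \tilde O\!\left(\frac{n^2 d^2 \Phi}{\tau}\right).
\]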
\end{proof}  

Our {\sc Many-Random-Walks} algorithm is better than the naive approach when $\sqrt{\kappa \tau \Phi} \le (\kappa + \tau)/\polylog n$. (The naive approach takes $\kappa + \tau$ rounds to sample $\kappa$ nodes.) Therefore, our approach is better when both $\tau$ and $\Phi$ are small and $\kappa$ is large, when $\kappa$ and $\Phi$ are small and $\tau$ is large, or when only $\Phi$ is very small. For a quick example, consider a dynamic graph that is an expander graph at every time step. Then $\tau$ and $\Phi$ are at most $O(\log n)$, so in this case our algorithm is better whenever $\kappa$ is larger than some $\polylog n$.
