\section{Theoretical Analysis of Algorithms} \label{sec:algos}
\vspace{-0.15in}
\subsection{Algorithm descriptions}
\vspace{-0.15in}We first describe the single random walk algorithm of~\cite{DasSarmaNPT10} and then explain how to extend the idea to continuous random walks. The algorithm is randomized, and our focus here is on its message complexity. The high-level idea for a single random walk is to perform many short random walks in parallel and later stitch them together~\cite{DNP09-podc, DasSarmaNPT10}. For multiple random walks, we choose the source node randomly each time and perform a single random walk using the same set of pre-processed short walks.

Our main algorithm for performing continuous random walks, each of length $\ell$, is {\sc Continuous-Random-Walk} (cf. Algorithm~\ref{alg:continuous-random-walk}). It uses two subroutines: {\sc Pre-Processing} (cf. Algorithm~\ref{alg:pre-processing}) and {\sc Single-Random-Walk} (cf. Algorithm~\ref{alg:single-random-walk}). {\sc Pre-Processing} is called once at the beginning of {\sc Continuous-Random-Walk}, to perform $\eta \deg(v) \log n$ short walks of length $\lambda$ from each vertex $v$. Only when the pre-processed short walks become insufficient to answer a single random walk request is the pre-processing table reconstructed, after which the algorithm resumes answering single random walk requests using the short walks from the new table. At the end of {\sc Pre-Processing}, each vertex knows the destination IDs of the short walks that it initiated.
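The pre-process-and-stitch idea can be illustrated with a centralized, sequential sketch (hypothetical function names; the actual algorithms are distributed and are given as Algorithms~\ref{alg:pre-processing} and~\ref{alg:single-random-walk}):

```python
import random

def pre_process(adj, eta, lam):
    """For each vertex v, store eta * deg(v) short walks of length lam,
    keeping only each walk's endpoint (the 'destination ID')."""
    table = {}
    for v in adj:
        walks = []
        for _ in range(eta * len(adj[v])):
            u = v
            for _ in range(lam):
                u = random.choice(adj[u])  # one step of a short walk
            walks.append(u)                # destination of a short walk from v
        table[v] = walks
    return table

def single_walk(adj, table, source, ell, lam):
    """Answer one length-ell walk request by stitching unused short walks.
    Returns the endpoint, or None if the table ran out (then the
    pre-processing table must be reconstructed)."""
    v, remaining = source, ell
    while remaining >= lam:
        if not table[v]:
            return None          # pre-processed short walks exhausted
        v = table[v].pop()       # stitch: jump to a short-walk destination
        remaining -= lam
    for _ in range(remaining):   # finish the leftover < lam steps directly
        v = random.choice(adj[v])
    return v
```

In the distributed setting, each stitch requires contacting the destination ID over the network, which is where the $O(D)$ cost per stitch in the analysis below comes from.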
%all destinations are aware of the source, however, the sources does not know the destinations ids. %The sources can obtain all 
%$\eta$ $[\leq \Delta \log n]$  destinations in %$O(\eta + \lambda)$ rounds; this can be shown by a standard congestion $+$ dilation argument. The crucial point is that in the choice of our parameters, $\lambda$
%is more than $\eta$, and so the overall asymptotic bounds of the algorithm are not affected. Even more, if the nodes had access to a shortest path routing table, then the algorithm
%{\sc Single-Random-Walk} will never need to construct BFS trees.
\vspace{-0.15in}
\subsection{Previous Results: Rounds and Messages}
\vspace{-0.15in}
We first restate the main round complexity theorem for {\sc Single-Random-Walk} and then state the message complexity of this algorithm.
\begin{lemma} [Theorem $2.5$ in~\cite{DasSarmaNPT10}]
\label{thm:1-walk}
For any $\ell$, Algorithm {\sc Single-Random-Walk} (cf. Theorem $2.5$ in~\cite{DasSarmaNPT10}) solves the Single Random Walk Problem and, with probability at least
$1-\frac{2}{n}$, finishes in $\Theta\left(\lambda \eta \log{n} + \frac{\ell D}{\lambda}\right)$ rounds.
\end{lemma}

\input{algorithms}

\begin{lemma}\label{thm: message-1-walk} The message complexity of {\sc Single-Random-Walk} is
$O\left(\eta \lambda m \log n + \frac{ \ell D}{\lambda} \right)$, where $m$ is the number of edges and $D$ is the diameter of the network.
\end{lemma}
\begin{proof}
Computing the $\eta \deg(v) \log n$ short walks of length $\lambda$ from each vertex $v$ uses $\Theta(\lambda \eta \deg(v) \log n)$ messages, since each short walk of length $\lambda$ sends $\lambda$ messages. Summing over all vertices, this requires $\Theta(\lambda \eta \log n \sum_v{\deg(v)}) = \Theta(\lambda \eta m \log n)$ messages.
To stitch one short walk with the next, we need to contact the destination ID. This can be done quickly using a BFS tree. Note that the BFS tree needs to be constructed only once\footnote{If we assume that nodes
have access to a shortest-path routing table, then the BFS tree is not needed.} ($\Theta(m)$ messages), and each stitch uses $O(D)$ messages. Combining these, the lemma follows.
\end{proof}

In networks such as P2P or overlay networks, if we assume that a node can contact, in constant time,
another node whose ID (IP address) is known, then one can improve the time and message complexity of stitching, saving a $\Theta(D)$ factor.
 


We now analyze the round and message complexity of the {\sc Continuous-Random-Walk} algorithm in the next two subsections. To simplify the analysis, we use $\kappa$ to denote the fraction of short walks in the pre-processing table that get used before the algorithm fails and needs to rerun the pre-processing stage. The next two subsections assume a value of $\kappa$ and prove bounds in terms of it; in the following section, we bound $\kappa$ itself to arrive at the main theorem of this paper. To recall the remaining notation: $\eta_v = \eta \deg(v) \log n$ is the number of short walks pre-processed for each node $v$, $\lambda$ is the length of these short walks, $n$ is the number of nodes, $m$ is the number of edges, and $D$ is the diameter of the network.
 
\vspace{-0.11in}
\subsection{Round Complexity}
\begin{lemma}\label{thm:round-multi-walk}
For any $\ell$, Algorithm {\sc Continuous-Random-Walk} (cf. Algorithm~\ref{alg:continuous-random-walk}) serves continuous random walk requests such that, with probability at least
$1-\frac{2}{n}$, the total number of rounds used until {\sc Pre-Processing} needs to be invoked for a second time is $O\left(\lambda \eta \log{n} + \kappa m \eta D \log n \right)$, where $\kappa$ is the fraction of used short length walks from the preprocessing table.
\end{lemma}
\begin{proof}
The proof is the same as that of Theorem 2.5 in~\cite{DasSarmaNPT10} for a single random walk; the only difference is that we perform continuous walks, each of the same length $\ell$. If $\kappa$ is the fraction of used short walks from the pre-processing table, then a total of $O(\kappa m \eta \log n)$ short walks are used. Hence we need to stitch $O(\kappa m \eta \log n)$ times, which, by Lemma 2.3 in~\cite{DasSarmaNPT10}, contributes $O(\kappa m \eta D \log n)$ rounds. In total, this gives $O\left(\lambda \eta \log{n} + \kappa m \eta D \log n \right)$ rounds.
\end{proof}
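The count of used short walks in the proof can be made explicit. Since {\sc Pre-Processing} prepares $\eta \deg(v) \log n$ short walks at each vertex $v$, the total number of pre-processed short walks is

```latex
\[
\sum_{v} \eta \deg(v) \log n \;=\; \eta \log n \sum_{v} \deg(v) \;=\; 2 m \eta \log n,
\]
```

so a $\kappa$-fraction of them amounts to $O(\kappa m \eta \log n)$ stitches, each of which costs $O(D)$ rounds.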

\begin{corollary}\label{thm:avg-round}
The average number of rounds per random walk of length $\ell$ of {\sc Continuous-Random-Walk} (cf. Algorithm~\ref{alg:continuous-random-walk})  is $ O\left( \frac{\ell}{ \kappa} (\frac{\log{n}}{m} + \frac{\kappa D}{\lambda}) \right)$ with high probability.
\end{corollary}
\begin{proof}
The total number of random walks of length $\ell$ completed successfully by {\sc Continuous-Random-Walk} is $\Theta\left(\frac{\kappa m \eta \lambda \log n}{\ell}\right)$, since a total of $O(\kappa m \eta \log n)$ short walks, each of length $\lambda$, have been used. Dividing the round bound of Lemma~\ref{thm:round-multi-walk} by this count gives the claimed average.
\end{proof}
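The division behind this corollary can be written out term by term (a restatement of the bounds already derived, not a new claim):

```latex
\[
\frac{O\left(\lambda \eta \log n + \kappa m \eta D \log n\right)}
     {\Theta\left(\kappa m \eta \lambda \log n / \ell\right)}
= O\left(\frac{\ell}{\kappa m} + \frac{\ell D}{\lambda}\right)
= O\left(\frac{\ell}{\kappa}\left(\frac{\log n}{m} + \frac{\kappa D}{\lambda}\right)\right).
\]
```

The last step is an upper bound, using $\log n \geq 1$ for the first term.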

\vspace{-0.11in}
\subsection{Message Complexity}
\begin{lemma}\label{thm:message-complexity1} The message complexity of {\sc Continuous-Random-Walk}, until {\sc Pre-Processing} needs to be invoked for a second time, is
$O\left(\eta \lambda m \log n + \kappa m \eta D\log n \right)$, where $\kappa$ is the fraction of used short walks from the pre-processing table.
\end{lemma}
\begin{proof}
The message complexity of the {\sc Pre-Processing} stage is as before. Further, each subsequent length-$\ell$ walk request uses an additional $O(D \ell/\lambda)$ messages. As before, the total number of random walks of length $\ell$ completed successfully by {\sc Continuous-Random-Walk} is $\Theta\left(\frac{\kappa m \eta \lambda \log n}{\ell}\right)$, since a total of $O(\kappa m \eta \log n)$ short walks, each of length $\lambda$, have been used. Therefore the contribution of these requests to the total message complexity is $O\left(\frac{D\ell}{\lambda} \cdot \frac{\kappa m \eta \lambda \log n}{\ell}\right)$, which reduces to $O(m D \eta \kappa \log n)$. Combining these, the lemma follows.
\end{proof}

\begin{corollary}\label{thm:avg-message-complexity} The average number of messages per random walk of length $\ell$ of {\sc Continuous-Random-Walk} is
$ O\left( \frac{\ell}{ \kappa} (1 + \frac{\kappa D}{\lambda}) \right)$. 
\end{corollary}
\begin{proof}
From Lemma~\ref{thm:message-complexity1} we know that the total number of messages used for computing all walks of {\sc Continuous-Random-Walk} is $O\left(\eta \lambda m \log n + \kappa m \eta D \log n \right)$. The total number of walks of length $\ell$ is $\Theta\left(\frac{\kappa m \eta \lambda \log n}{\ell}\right)$, since a total of $O(\kappa m \eta \log n)$ short walks, each of length $\lambda$, have been used. Dividing the message bound by this count gives the average number of messages per walk.
\end{proof}
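The same division can be spelled out for messages:

```latex
\[
\frac{O\left(\eta \lambda m \log n + \kappa m \eta D \log n\right)}
     {\Theta\left(\kappa m \eta \lambda \log n / \ell\right)}
= O\left(\frac{\ell}{\kappa} + \frac{\ell D}{\lambda}\right)
= O\left(\frac{\ell}{\kappa}\left(1 + \frac{\kappa D}{\lambda}\right)\right).
\]
```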

Combining the above two corollaries, we get the following. 

\begin{lemma}\label{thm:combined-avg-complexity}
The average number of rounds and messages per random walk of length $\ell$ of {\sc Continuous-Random-Walk} (cf. Algorithm~\ref{alg:continuous-random-walk}) are $ O\left( \frac{\ell}{\kappa} (\frac{\log{n}}{m} + \frac{\kappa D}{\lambda}) \right)$ and $ O\left( \frac{\ell}{ \kappa} (1 + \frac{\kappa D}{\lambda}) \right)$ respectively. 
\end{lemma}

\begin{corollary}\label{cor:avg-complexity}
For our choice of $\lambda = \tilde{\Theta}(\sqrt{\ell D})$, the average number of rounds and messages per random walk become $\tilde{O}\left( \frac{\ell}{ \kappa m}  + \sqrt{\ell D} + D \right)$ and $O\left( \frac{\ell}{ \kappa}  + D \right)$ respectively.
\end{corollary}
\begin{proof}
Substituting $\lambda = \tilde{\Theta}(\sqrt{\ell D})$ in Lemma~\ref{thm:combined-avg-complexity}, the average number of rounds and messages become $\tilde{O}\left( \frac{\ell}{ \kappa m}  + \sqrt{\ell D} \right)$ and $O\left( \frac{\ell}{ \kappa}  + \sqrt{\ell D} \right)$ respectively. Since $\sqrt{\ell D} \leq \ell + D$ and $\ell \leq \ell/\kappa$, the corollary follows.
\end{proof}
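For completeness, the substitution and the final simplification in the proof are:

```latex
\[
\frac{\ell D}{\lambda} = \frac{\ell D}{\tilde{\Theta}(\sqrt{\ell D})} = \tilde{O}\left(\sqrt{\ell D}\right),
\qquad
\sqrt{\ell D} \;\le\; \frac{\ell + D}{2} \;\le\; \frac{\ell}{\kappa} + D,
\]
```

where the first inequality is the AM--GM inequality and the second uses $\kappa \le 1$; this turns the average message bound $O(\ell/\kappa + \sqrt{\ell D})$ into $O(\ell/\kappa + D)$.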

Note that, in the above corollary, $\kappa < 1$ can be small, in which case the bounds can become large. We show in the next section that $\kappa$ is a constant, and hence our bounds are
almost optimal.
% (up to polylogarithmic factors). 

