\section{Concentration Bounds on $\kappa$} \label{sec:conc}

The goal of this section is to present a lower bound on $\kappa$, the fraction of rows of the pre-processing table (equivalently, the fraction of all short walks) that are used before the algorithm fails to serve a random walk request and needs to rerun the pre-processing stage. All the analysis in this section assumes that the sources $S$ in {\sc Continuous-Random-Walk} are sampled according to the degree distribution. While {\sc Continuous-Random-Walk} remains meaningful without this assumption, our proofs crucially rely on this random sampling of sources. Obtaining similar theorems for more general sequences of sources $S$ remains an open question. 
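For concreteness, sampling a source from the degree distribution can be sketched as follows (a minimal Python illustration; the adjacency-list representation and the helper name are assumptions of this sketch, not part of the algorithm):

```python
import random

def sample_source_by_degree(adj, rng):
    """Pick a vertex with probability d(v)/2m by choosing a uniformly
    random edge endpoint; adj maps each vertex to its neighbor list."""
    endpoints = [v for v, nbrs in adj.items() for _ in nbrs]
    return rng.choice(endpoints)
```

Listing every edge endpoint and picking one uniformly selects vertex $v$ with probability $d(v)/2m$, which is exactly the stationary distribution of the walk.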

We now present the central theorem that lower bounds $\kappa$.

\begin{theorem} \label{thm:kappabound}
Given any graph $G$, if {\sc Continuous-Random-Walk} is invoked with $\ell = O(m)$ and the source nodes $S$ are chosen randomly proportional to the node degrees, then the algorithm uses up at least a $\kappa = \Omega(1)$ fraction of all short walks in the {\sc Pre-Processing} table before a request fails and a second call needs to be made to {\sc Pre-Processing}. 
\end{theorem}
\begin{proof}
Assume for now that we perform $d(v)$ short walks from each vertex $v$. If all the short walks are used, the total number of walks of length $\ell$ is $T = \frac{2m \lambda}{\ell}$. 
Let $K = \alpha T$, where $\alpha$ is a constant in $[0, 1]$. 
Note that if we manage to perform  $K$ walks of length $\ell$, then we have utilized a constant fraction
of the short walks. For one $\ell$-length walk, the expected number of times a vertex $v$ serves as a connector is at most $\frac{d(v) \ell}{2m \lambda}$: the walk has $\ell/\lambda$ connectors, each of which is $v$ with probability $d(v)/2m$ (degree-sampled sources make every connector stationary-distributed), and linearity of expectation applies. (Connectors are the endpoints of the short walks, i.e., the points where we stitch; a short walk initiated from a vertex is consumed only when that vertex is visited as a connector.) Then for $K$ walks, each of length $\ell$, the expected number of times that $v$ is visited as a connector vertex is $K \frac{d(v) \ell}{2m \lambda} = \alpha d(v)$. Let $N$ denote the number of times the vertex $v$ is visited as a connector in these $K$ walks; by the above, $E[N] = \alpha d(v)$. 
By Markov's inequality, $\Pr\left(N \geq d(v) \right) \leq \frac{\alpha d(v)}{d(v)} = \alpha$.
Now consider the above experiment (for a fixed vertex $v$) repeated $c\log n$ independent times, for a suitably large constant $c$. (In other words, assume that we perform $c\, d(v) \log n$ short walks in total, over all experiments, from each node $v$.)
We say that an experiment is a ``success'' if $N < d(v)$. A success means that we completed all $K$ walks of length $\ell$ in that experiment (and hence utilized an $\alpha$ fraction of the short walks) before any request failed. By the above, the probability of success is at least the constant $\alpha' = 1 - \alpha$. 
Let $X_1^v, X_2^v, \ldots, X_{c\log n}^v$ be 0-1 indicator random variables with $X_i^v = 1$ if the $i$-th experiment is a success and $X_i^v = 0$ otherwise.
Let $X^v = \sum_{i=1}^{c \log n} X_i^v$. Then $E[X^v] = \alpha' c \log n$. Since the variables are independent, by the Chernoff--Hoeffding bound
$ \Pr\left(| X^v - E[X^v] | \geq c' \log n\right) \leq e^{-\frac{2(c')^2\log^2 n}{c \log n}} = n^{-2(c')^2/c} \leq \frac{1}{n^2}$,
for suitable constants satisfying $c \leq (c')^2$.
Thus, for each vertex $v$, at least a constant fraction of the experiments succeed with probability at least $1 - 1/n^2$. By a union bound over all vertices \cite{MU-book-05}, with probability at least $1 - 1/n$, the total number of visits to every vertex $v$ as a connector, over all $c \log n \cdot K$ walks, is at most $O(\alpha\, d(v)\, c \log n)$. This implies that the total number of short walks utilized before a request fails is a constant fraction of the best possible.
\end{proof}
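The experiment underlying the proof can be simulated directly. The sketch below (all function and variable names are assumptions of this illustration, not the paper's algorithm) gives each vertex $v$ a budget of $d(v)$ short walks of length $\lambda$, draws each connector of a stitched $\ell$-length walk from the stationary degree distribution, and reports the fraction of short walks consumed before the first request fails:

```python
import random

def fraction_used_before_failure(degrees, lam, ell, rng):
    """Simulate stitching T = 2m*lam/ell walks of length ell from short
    walks of length lam, with connectors drawn ~ d(v)/2m.  Returns the
    fraction of the 2m short walks consumed before a request fails."""
    two_m = sum(degrees.values())
    budget = dict(degrees)                    # unused short walks per vertex
    endpoints = [v for v, d in degrees.items() for _ in range(d)]
    requests = two_m * lam // ell             # T full-length requests
    connectors_per_walk = ell // lam          # short walks stitched per request
    used = 0
    for _ in range(requests):
        for _ in range(connectors_per_walk):
            v = rng.choice(endpoints)         # stationary connector ~ d(v)/2m
            if budget[v] == 0:                # no short walk left at v: failure
                return used / two_m
            budget[v] -= 1
            used += 1
    return used / two_m
```

On regular graphs the returned fraction is a noticeable constant rather than $o(1)$, consistent with $\kappa = \Omega(1)$; the sketch ignores the $c \log n$ oversampling used in the proof, so it exhibits the single-experiment behavior bounded via Markov's inequality.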

We now present the main theorem of this paper, which follows from the bound on $\kappa$ in Theorem~\ref{thm:kappabound} and the message and round complexity bounds in terms of $\kappa$ in Corollary~\ref{cor:avg-complexity}. Notice that this achieves optimal round and message complexities simultaneously for every walk: $\Omega(\ell + D)$ is a clear lower bound on the number of messages for a single $\ell$-length random walk, and $\Omega(\sqrt{\ell D} + D)$ is a nontrivial lower bound on the number of rounds, as shown in~\cite{NanongkaiDP11}.

\begin{theorem}
Algorithm {\sc Continuous-Random-Walk} satisfies walk requests continuously and indefinitely such that the amortized message complexity per walk is $O(\ell + D)$ and, with high probability, every single walk request completes in $\tilde{O}(\sqrt{\ell D} + D)$ rounds.
\end{theorem}

\subsection{Extensions to different walk lengths}

While our main algorithm {\sc Continuous-Random-Walk} and the associated theorems are stated for a fixed $\ell$, they can all be generalized to handle different walk lengths. We omit the rigorous details for brevity and present a brief explanation of the generalization here; the theorems and experiments go through verbatim for this case as well.

Suppose that {\sc Continuous-Random-Walk} is designed to not only support new source node requests each time but also new length requests for the random walks. 
%Then each request looks like a pair $(s_i, \ell_i)$, instead of $(s_i, \ell)$. 
One can of course store multiple {\sc Pre-Processing} tables, one for each length $\ell_i$ in the allowed range for $\ell$. This way, when a request is presented, the appropriate {\sc Pre-Processing} table is accessed and the corresponding short walks are queried. Then, whenever {\sc Continuous-Random-Walk} fails on a specific single random walk request, 
%lets say $(s_i, \ell_i)$, this means that the pre-processing table corresponding to $\ell_i$ has insufficient short walks. Therefore, 
only the corresponding {\sc Pre-Processing} table is rerun, and answering of random walk requests resumes. 
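A minimal sketch of this per-length dispatch, with a stand-in table abstraction (all class, function, and parameter names below are hypothetical illustrations, not the paper's data structures):

```python
class WalkTable:
    """Stand-in for one pre-processing table of short walks."""
    def __init__(self, length, capacity):
        self.length = length
        self.remaining = capacity      # unused short walks left

    def exhausted(self):
        return self.remaining == 0

    def stitch(self, source):
        self.remaining -= 1            # consume one short walk
        return (source, self.length)   # stand-in for the answered walk

def answer_request(tables, source, ell_i, capacity=4):
    """Serve a (source, ell_i) request from the table for ell_i; when that
    table alone is exhausted, only it is rebuilt, as described above."""
    table = tables.get(ell_i)
    if table is None or table.exhausted():
        tables[ell_i] = table = WalkTable(ell_i, capacity)  # rerun only this table
    return table.stitch(source)
```

The point of the dispatch is the isolation: exhausting the table for one length triggers a rerun of that length's pre-processing only, leaving every other table untouched.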

%It is easy to see that essentially all our bounds remain the same, as we have completely dissociated handling different $\ell$-length requests into essentially different problems. 
While this solves the problem and guarantees identical throughput and efficiency, a practical concern is that performing and storing short walks for so many different lengths can be expensive. There is a simple way to counter this: store short walks in a doubling fashion. In particular, if each $\ell_i$ is in the range $[1, n]$, instead of storing short walks corresponding to each of $\ell_i = 1, 2, 3, \ldots, n-1, n$, we perform short walks only corresponding to $\ell_i = 1, 2, 4, \ldots, n/2, n$. 
%That is, we store walks only in a doubling fashion. 
This reduces the number of pre-processing tables, and hence the number of short walks stored at each node, from $n$ to $O(\log n)$. Now, whenever a walk request for length $\ell_i$ is received, it can be answered by performing a slightly longer walk of length $\tilde{\ell}_i$, where $\tilde{\ell}_i$ is a power of two satisfying $\ell_i \leq \tilde{\ell}_i \leq 2\ell_i$.
% and $\tilde{\ell}_i$ is a power of two. Notice that such a $\tilde{\ell}_i$ always exists, and is only a $2$-factor distortion to the length requested. We omit the formal details here. 
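The rounding step of the doubling scheme is a one-liner; the helper name below is an assumption of this sketch:

```python
def rounded_length(ell_i):
    """Round a requested walk length up to the next power of two, so that
    ell_i <= rounded_length(ell_i) <= 2 * ell_i and only O(log n) distinct
    table lengths are ever needed."""
    return 1 << (ell_i - 1).bit_length()
```

For example, a request of length $5$ is served by the length-$8$ table, at most a factor-two distortion of the requested length.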