\section{Algorithm for Single Random Walk Problem}\label{sec:algo}
\subsection{Description of the Algorithm}
In this section we present the main ideas of this paper by developing an algorithm called {\sc Single-Random-Walk} (cf. Algorithm~\ref{alg:single-random-walk}) for regular dynamic (stationary evolving) networks ($\mathcal{G} = (V, E_t)$). The algorithm performs a random walk of length $\tau$ in order to sample a destination from the uniform distribution on the vertex set $V$. 
%We initiate the analysis with the following observation.
%\begin{observation}\label{obs:observation1}
%From the Theorem~\ref{thm:mixtime}, we have $\tau = O(\frac{1}{1 - \bar{\lambda_2}} \log n)$ which will be assumed throughout. Recall that we consider stationary %evolving graphs, so $\bar{\lambda_2}$ is same for all $G_t$. 
%\end{observation}
The high-level idea of the algorithm is to perform ``many'' short random walks in parallel and later ``stitch'' the short walks together to get the desired walk of length $\tau$. In particular, the algorithm runs in two phases, as follows.
In Phase 1, we perform $\eta$ ``short'' random walks of length $\lambda$ from each node $v$, where $\eta$ and $\lambda$ are parameters whose values will be fixed in the analysis. This is done naively by forwarding, from each node $v$, $\eta$ ``coupons'' carrying the ID of $v$ to random current destinations, as follows.
\begin{quote}
\begin{algorithmic}[1]
\STATE Initially, each node $v$ in $G_1$ creates $\eta$ messages (called coupons) $C_1, C_2, \ldots, C_{\eta}$, writes its ID on them, and sets their counters to $0$.

\FOR{$i = 1$ to $\lambda$}

\STATE  This is the $i$-th iteration. Each node $v$ does the following: Consider each coupon
$C$ held by $v$ that was received in the $(i-1)$-th iteration (for $i=1$, the coupons created by $v$ itself). Now $v$ picks a neighbor $u$ in the graph $G_i$
uniformly at random and forwards $C$ to $u$ after incrementing the counter on the coupon to $i$.
\ENDFOR
\end{algorithmic}
\end{quote}
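The Phase 1 loop above can be sketched as a short sequential simulation. The function name, the toy dynamic graph, and the parameter values below are illustrative assumptions for the sketch, not part of the distributed model:

```python
import random

def phase1(graphs, eta, rng):
    """Sequential sketch of Phase 1: from every node, forward eta coupons
    for lambda = len(graphs) steps, using the snapshot G_i in iteration i.

    graphs : list of adjacency dicts G_1..G_lambda; graphs[i][u] lists the
             neighbors of u in the snapshot used at iteration i+1.
    Returns positions[v] = final destinations of the eta coupons created by v.
    """
    nodes = list(graphs[0])
    positions = {v: [v] * eta for v in nodes}  # each coupon carries the ID of v
    for G in graphs:  # one iteration of the FOR loop, per snapshot
        for v in nodes:
            # every coupon created by v moves to a uniformly random neighbor
            positions[v] = [rng.choice(G[u]) for u in positions[v]]
    return positions

# Toy 2-regular stationary evolving graph on 4 nodes (two alternating 4-cycles).
G_a = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
G_b = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
rng = random.Random(1)
coupons = phase1([G_a, G_b, G_a], eta=3, rng=rng)  # eta = 3, lambda = 3
```

Each list `coupons[v]` records where $v$'s short walks ended; Phase 2 consumes these endpoints.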
In Phase 2, starting at the source $s$, we ``stitch'' (see Figure \ref{fig:connector} in the Appendix) some of the $\lambda$-length walks prepared in Phase 1 together to form a longer walk. The algorithm starts from $s$ and picks uniformly at random one of the coupons distributed from $s$ in Phase 1. Let $C$ be the sampled coupon and $v$ be the destination node of $C$. The source $s$ then sends a ``token'' to $v$, and $v$ deletes coupon $C$ (so that $C$ cannot be sampled again later; otherwise the randomness would be destroyed). The process then repeats: the node $v$ currently holding the token samples one of the coupons it distributed in Phase 1 and forwards the token to the destination of the sampled coupon, say $v'$. The nodes $v, v'$ are called ``connectors'' --- they are the endpoints of the short walks that are stitched. A crucial observation is that the walks of length $\lambda$ used to distribute the corresponding coupons from $s$ to $v$ and from
$v$ to $v'$ are independent random walks. Therefore, we can stitch them to get a random walk of length $2\lambda$, and by repeating this process we generate random walks of length $3\lambda, 4\lambda, \ldots$. We do this until we have completed more than $\tau - \lambda$ steps. Then, we complete the rest of the
walk by running the naive random walk algorithm. The algorithm for Phase 2 is thus the following. 
\begin{quote}
\begin{algorithmic}[1]
\STATE The source node $s$ creates a message called the ``token'', which contains the ID of $s$.

\WHILE{Length of walk completed is at most $\tau - \lambda$}

\STATE  Let $v$ be the node that is currently holding the token.
\STATE $v$ samples one of the coupons distributed by $v$ (in Phase 1) uniformly at random. Let $C$ be the sampled coupon.
\STATE Let $v'$ be the node holding coupon $C$. (The ID of $v'$ is written on $C$.)
\STATE $v$ sends the token to $v'$ and $v'$ deletes $C$ so that $C$ will not be sampled again.
\STATE The length of walk completed has now increased by $\lambda$.
\ENDWHILE
\STATE Walk naively (i.e., forward the token to a random neighbor) until $\tau$ steps are completed.
\STATE The node holding the token outputs the ID of $s$.
\end{algorithmic}
\end{quote}
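The stitching loop above, including the naive tail, can be sketched sequentially. The per-node endpoint lists (standing in for Phase 1 output), the toy graph, and the parameter values are illustrative assumptions:

```python
import random

def phase2(source, coupons, lam, tau, graphs, rng):
    """Sequential sketch of Phase 2: stitch lambda-length short walks until
    more than tau - lam steps are done, then walk naively to exactly tau steps.

    coupons : dict mapping each node to the endpoints of the short walks it
              distributed in Phase 1 (sampled coupons are deleted).
    graphs  : adjacency dicts used for the naive tail of the walk.
    """
    token, done = source, 0
    while done <= tau - lam:
        endpoints = coupons[token]
        # sample one coupon of the current connector uniformly, then delete it
        token = endpoints.pop(rng.randrange(len(endpoints)))
        done += lam
    for G in graphs:  # naive random walk for the remaining tau - done steps
        if done == tau:
            break
        token = rng.choice(G[token])
        done += 1
    return token  # destination of the tau-step walk

# Hypothetical Phase 1 output: every node holds three short-walk endpoints.
coupons = {v: [(v + 1) % 4, (v + 2) % 4, (v + 3) % 4] for v in range(4)}
G = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
rng = random.Random(7)
dest = phase2(source=0, coupons=coupons, lam=3, tau=7, graphs=[G] * 7, rng=rng)
```

With $\tau = 7$ and $\lambda = 3$ the loop stitches two short walks (covering $6$ steps) and the naive tail completes the last step; each sampled coupon is deleted, exactly as in the pseudocode.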
To understand the intuition behind this algorithm, let us analyze its running time. First, we claim that Phase 1 needs $\tilde{O}(\eta \lambda)$ rounds with high probability (see Lemma~\ref{lem:phase1}). Recall that, in Phase 1, each node prepares $\eta$ walks of length $\lambda$. (In fact, we make each node prepare $\eta d$ walks instead.) The reason is that if we send out $d = \deg(v)$ coupons from each node $v$ at the same time, then in expectation each edge of the current graph receives two coupons (one from each endpoint). In other words, there is essentially no congestion (i.e., not too many coupons are sent through the same edge). Therefore, sending out (just) $d$ coupons from each node for $\lambda$ steps takes $O(\lambda)$ rounds in expectation, and the time becomes $O(\eta \lambda)$ for $\eta d$ coupons. This argument can be modified to show that we need $\tilde{O}(\eta \lambda)$ rounds with high probability. Next, by the definition of the dynamic diameter, flooding takes $\Phi$ rounds. We show that sampling a coupon can be done in $O(\Phi)$ rounds, and it follows that Phase 2 needs $O(\Phi \cdot \tau/\lambda)$ rounds. Therefore, the algorithm needs $\tilde{O}(\eta \lambda + \Phi \cdot \tau/\lambda)$ rounds, which is $\tilde{O}(\sqrt{\tau \Phi})$ when we set $\eta = 1$ and $\lambda = \sqrt{\tau \Phi}$. The compact pseudocode (cf. Algorithm \ref{alg:single-random-walk}) of the above algorithm can be found in the Appendix. 
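The choice $\lambda = \sqrt{\tau \Phi}$ is exactly the value that balances the two terms: with $\eta = 1$, the AM--GM inequality gives
\[
\lambda + \frac{\Phi \tau}{\lambda} \;\geq\; 2\sqrt{\lambda \cdot \frac{\Phi \tau}{\lambda}} \;=\; 2\sqrt{\tau \Phi},
\]
with equality precisely when $\lambda = \sqrt{\tau \Phi}$, which yields the stated $\tilde{O}(\sqrt{\tau \Phi})$ bound.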

The reason the above algorithm for Phase 2 is incomplete is that $\eta$ coupons might not be enough: we might forward the token to some node $v$ many times in
Phase 2 until all coupons distributed by $v$ in the first phase have been deleted. In other words, $v$ may be chosen as a connector node many times, exhausting all of its coupons.
If this happens, the stitching process cannot progress. To fix this problem, we will show (in the next section) an important property of the random walk: a random walk of length $O(\tau)$ visits each node $v$ at most $\tilde{O}(\sqrt{\tau}\, d)$ times. With a further modification we then show that each node is visited as a connector only $\tilde{O}(\sqrt{\tau}\, d/\lambda)$ times. This implies that each node does not have to prepare too many short walks. It turns out that this aspect requires quite a bit more work in the dynamic setting and therefore needs new ideas and techniques. 

\subsection{Analysis}
From the above algorithm we see that we perform many short walks of length $\lambda$ in parallel, and then stitch them to obtain the desired length $\tau$. The following result states the main outcome of this section: the algorithm {\sc Single-Random-Walk} correctly samples a node uniformly at random from the stationary evolving graph after a random walk of $\tau$ steps, and the algorithm takes, with high probability, $\tilde{O}(\sqrt{\tau \Phi})$ rounds, where $\Phi$ is the dynamic diameter of the network. 
%\begin{theorem}\label{thm:maintheorem}
%The algorithm {\sc Single-Random-walk} solves the Single random walk problem and with high probability finishes in $\tilde{O}(\sqrt{\tau \Phi})$ rounds. 
%\end{theorem}
We prove the above result (also stated as Theorem~\ref{thm:maintheorem} in Section~\ref{sec:results}) using the following lemmas. 



\begin{lemma}\label{lem:phase1}
Phase 1 finishes in $O(\lambda \eta \log n)$ rounds with high probability. 
\end{lemma} 
 
\begin{lemma}\label{lem:lemma2.3}
{\sc Sample-Coupon} always finishes within $O(\Phi)$ rounds.
\end{lemma} 

We note that the adversary can force the random walk to visit any particular vertex several times; then we would need many short walks from each vertex, which increases the round complexity. We show the following key technical lemma (Lemma~\ref{lem:visit-bound}), which bounds the number of visits to each node in a random walk of length $\ell$.  
In a $d$-regular dynamic graph, we show that no node is visited more than $\tilde{O}(\sqrt{\tau}\, d/\lambda)$ times as a connector node of a $\tau$-length random walk. For this we need a technical result on random walks that bounds the number of times a node is visited in an $\ell$-length (where $\ell = O(\tau)$) random walk. Consider a simple random walk on a connected $d$-regular evolving graph on $n$ vertices. Let $N^x_t(y)$ denote the number of visits to vertex $y$ up to time $t$, given that the walk started at vertex $x$. 
Now, consider $k$ walks, each of length $\ell$, starting from (not necessarily distinct) nodes $x_1, x_2, \ldots, x_k$. 

\begin{lemma}\label{lem:visit-bound}
$(${\sc Random Walk Visits Lemma}$)$. For any nodes $x_1, x_2, \ldots, x_k$, \[\Pr\Bigl(\exists y \text{ s.t. }
\sum_{i=1}^k N^{x_i}_\ell(y) \geq 32\, d \sqrt{k\ell+1}\,\log n + k\Bigr) \leq 1/n\,.\]
\end{lemma}
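As a sanity check of this bound, the visit counts can be simulated directly. The sketch below is an illustration only: it uses a small static $2$-regular graph (a special case of a stationary evolving graph whose snapshots are all equal) with arbitrarily chosen $k$, $\ell$, and start nodes:

```python
import math
import random

def visit_counts(G, starts, ell, rng):
    """Run one ell-step simple random walk from each start node and count
    the total number of visits to every vertex (the start counts as time 0)."""
    counts = {v: 0 for v in G}
    for x in starts:
        pos = x
        counts[pos] += 1
        for _ in range(ell):
            pos = rng.choice(G[pos])
            counts[pos] += 1
    return counts

n, d, k, ell = 8, 2, 4, 20
G = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}  # 8-cycle: 2-regular
rng = random.Random(0)
counts = visit_counts(G, starts=[0, 2, 4, 6], ell=ell, rng=rng)
bound = 32 * d * math.sqrt(k * ell + 1) * math.log(n) + k
# on this tiny instance the bound holds very loosely, since the total number
# of visits across all vertices is only k * (ell + 1)
assert max(counts.values()) <= bound
```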

%The full proof is in Appendix \ref{proof of rw visit lemma}. 
This lemma says that the number of visits to each node can be bounded.
However, for each node we are only interested in the case where it is used as a connector (a stitching point). The lemma below shows that the number of visits as a connector can be bounded as well; i.e., if a node appears $t$ times in the walk, then it is likely to appear roughly $t/\lambda$ times as a connector.

\begin{lemma}\label{lem:connector-bound}
For any vertex $v$, if $v$ appears in the walk at most $t$ times, then it appears as a connector node at most $t(\log n)^2/\lambda$ times with probability at least $1-1/n^2$.
\end{lemma}

%The proof of the above lemmas are in the Appendix. 

\subsection{Generalization to non-regular evolving graphs $\mathcal{G}$}
%It is known that the cover time could be exponential for a simple random walk on undirected dynamic graphs~\cite{AKL08}. 
%If we assume a  lazy random walk, then that guarantees polynomial cover time regardless of the changes made by the adversary. 
By using a {\em lazy} random walk strategy, we can generalize our results to non-regular dynamic graphs as well. The lazy random walk strategy ``converts'' a random walk on a non-regular graph into a slower random walk
on a regular graph. 
\begin{definition}\label{def:lazy-rw}
At each step of the walk, pick a vertex $v$ from $V$ uniformly at random; if there is an edge from the current vertex to $v$, move to $v$, otherwise stay at the current vertex. 
\end{definition} 
This lazy random walk strategy in fact makes the graphs $n$-regular: every edge adjacent to the current vertex is picked with probability $1/n$, and with the remaining probability we stay at the current vertex. 
Using this strategy, we obtain the same results on non-regular graphs as well, but slower by a factor of $n$.
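One step of this lazy walk can be sketched as follows. The function name and the toy non-regular snapshot are illustrative assumptions; the sketch also assumes a node can sample a uniformly random vertex ID from $V$, as the definition requires:

```python
import random

def lazy_step(G, u, rng):
    """One lazy-walk step: pick v uniformly from the vertex set V and move
    only if {u, v} is an edge of the current snapshot G; otherwise stay at u.

    Each neighbor of u is thus reached with probability 1/n, and the walk
    stays put with the remaining probability, exactly as on an n-regular
    graph with self-loops.
    """
    v = rng.choice(list(G))  # uniform over the whole vertex set V
    return v if v in G[u] else u

# Toy non-regular snapshot: a star on {0, 1, 2, 3} plus the edge {1, 2}.
G = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
rng = random.Random(2)
walk = [0]
for _ in range(10):
    walk.append(lazy_step(G, walk[-1], rng))
```

Every step of the resulting trajectory either stays in place or crosses an edge of the current snapshot, which is the invariant the regularization relies on.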
%Another random walk strategy can mix the probability distribution on vertex set slightly faster. Suppose all the node knows an upper bound $d_{max}$ on the maximum degree of the dynamic network. Then at each step of the walk stay at the current vertex $u$ with probability $1 - (d(u)/(d_{max} + 1))$ and with the remaining probability pick a neighbors uniformly at random. \\
%We now move to showing how these techniques can be further extended for performing several random walks efficiently.
