\section{Algorithm for Single Random Walk}\label{sec:algo}
%\vspace{-0.07in}
\subsection{Description of the Algorithm}
We develop an algorithm called {\sc Single-Random-Walk} (cf. Algorithm~\ref{alg:single-random-walk}) for a $d$-regular dynamic graph $\mathcal{G} = (V, E_t)$.  The algorithm performs a random walk of length $\tau$ (the dynamic mixing time of $\mathcal{G}$ --- cf. Section~\ref{sec:rwd}) in order to sample a destination from a distribution (close to) uniform on the vertex set $V$. 
%We initiate the analysis with the following observation.
%\begin{observation}\label{obs:observation1}
%From the Theorem~\ref{thm:mixtime}, we have $\tau = O(\frac{1}{1 - \bar{\lambda_2}} \log n)$ which will be assumed throughout. Recall that we consider stationary %evolving graphs, so $\bar{\lambda_2}$ is same for all $G_t$. 
%\end{observation} 

The high-level idea of the algorithm is to perform ``many'' short random walks in parallel and later ``stitch'' the short walks together to obtain the desired walk of length $\tau$. The algorithm proceeds in two phases, as follows. For simplicity, we call the messages used in Phase~1 ``coupons'' and those used in Phase~2 ``tokens''. 
In Phase 1, we perform $d\log n$ (where $d$ is the degree of the graph) ``short'' independent random walks of length $\mu$ from each node $v$, where $\mu$ is a parameter whose value, $\tilde O(\sqrt{\tau \Phi})$, is fixed in the analysis. (To bound the running time correctly, we show later that we in fact use short walks of length approximately $\mu$, rather than exactly $\mu$.) This is done simply by forwarding $d\log n$ ``coupons'' carrying the ID of $v$ from $v$ (for each node $v$) for $\mu$ steps via random walks. 
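As a minimal illustrative sketch (not the distributed implementation), Phase~1 can be simulated centrally as follows; the function \texttt{neighbors\_at} is a hypothetical stand-in for the dynamic graph sequence $G_1, G_2, \ldots$, returning the neighbor list of a node in $G_t$, and the random extra lengths $r_i$ anticipate the modification described later in this section:

```python
import random

def phase1_distribute(neighbors_at, n, d, log_n, mu, rng=random):
    """Phase 1 sketch: every node launches d*log n coupons carrying its ID,
    a coupon number, and a desired walk length mu + r_i; each coupon takes
    one random-walk step per round on the current graph G_t until its
    desired length is reached.  `neighbors_at(t, v)` is a hypothetical
    stand-in for the dynamic graph sequence: it returns the neighbors of
    node v in graph G_t."""
    coupons = []                                  # [origin, coupon_no, length, current]
    for v in range(n):
        for i in range(d * log_n):
            length = mu + rng.randrange(mu)       # random length in [mu, 2*mu - 1]
            coupons.append([v, i, length, v])
    for t in range(1, 2 * mu):                    # every walk finishes within 2*mu - 1 rounds
        for c in coupons:
            if c[2] >= t:                         # steps remain: take the t-th step on G_t
                c[3] = rng.choice(neighbors_at(t, c[3]))
    # map each coupon (origin, coupon_no) to the node now holding it
    return {(v, i): cur for v, i, _, cur in coupons}
```

In the distributed algorithm each iteration of the outer loop is one communication round; the congestion argument in Lemma~\ref{lem:phase1} is what allows all coupons to advance in parallel.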
%The coupons end up in some destinations which are used in stitching in Phase 2.
\iffalse
\begin{quote}
\begin{algorithmic}[1]
\STATE Initially, each node $v$ in $G_1$creates $\eta$ messages (called coupons) $C_1,C_2, \ldots,C_{\eta}$ and writes its ID on them.

\FOR{$i = 1$ to $\mu$}

\STATE  This is the $i$-th round. Each node $v$ does the following: Consider each coupon
$C$ held by $v$ which is received in the $(i-1)$-th round. Now $v$ picks a neighbor $u$ from the graph $G_i$
uniformly at random and forwards $C$ to $u$ after incrementing the counter on the coupon to $i$.
\ENDFOR
\end{algorithmic}
\end{quote}
\fi

In Phase 2, starting at the source $s$, we ``stitch'' (see Figure~\ref{fig:connector}) some of the short walks prepared in Phase~1 together to form a longer walk. The algorithm starts from $s$ and randomly picks one coupon distributed from $s$ in Phase~1. Sampling such a coupon uniformly at random and moving to its destination vertex can be done as follows. At the beginning of Phase~1, each node $v$ assigns a coupon number to each of its $d\log n$ coupons. At the end of Phase~1, the coupons originating at $s$ (each containing the ID of $s$ plus a coupon number) are distributed throughout the network. When a coupon needs to be sampled, node $s$ chooses a coupon number uniformly at random from its set of unused coupons and informs, through flooding, the destination node holding that coupon $C$ (which will be the next stitching point). 

\begin{figure}[t]
\centering
%\includegraphics[width=0.98\linewidth]{connector.eps}
\includegraphics[width=0.98\linewidth]{connector-21.pdf}
\caption{Illustration of stitching short walks
together (figure taken from \cite{drw-jacm}).} \label{fig:connector}
\end{figure}


Let $C$ be the sampled coupon and $v$ be the destination node of $C$. Then $s$ sends a ``token'' to $v$ (through flooding) and deletes coupon $C$ (so that $C$ will not be sampled again at $s$; otherwise the randomness of the stitched walk would be destroyed). The process then repeats: the node $v$ currently holding the token samples one of the coupons it distributed in Phase~1 and forwards the token to the destination of the sampled coupon, say $v'$. Nodes $v, v'$ are called ``connectors'' --- they are the endpoints of the short walks that are stitched. A crucial observation is that the walks of length $\mu$ used to distribute the corresponding coupons from $s$ to $v$ and from
$v$ to $v'$ are independent random walks. Therefore, we can stitch them to get a random walk of length $2\mu$, and by repeating this process we generate random walks of length $3\mu, 4\mu, \ldots$. We do this until more than $\tau - 2\mu$ steps have been completed. Then, we complete the rest of the
walk by running the naive random walk algorithm. 
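The stitching loop can be sketched as follows; this is a centralized simulation under stated assumptions, where \texttt{coupon\_dest} plays the role of the distributed coupon table produced by Phase~1, \texttt{step} is a hypothetical one-step naive walk on the current graph, and flooding is abstracted away:

```python
import random

def phase2_stitch(coupon_dest, s, tau, mu, d, log_n, step, rng=random):
    """Phase 2 sketch: stitch pre-computed short walks until more than
    tau - 2*mu steps are done, then finish with a naive walk.
    `coupon_dest[(v, i)]` maps v's i-th coupon to (walk_length, destination);
    `step(cur)` performs one naive random-walk step.  Both are hypothetical
    stand-ins for the distributed coupon table and the dynamic graph."""
    unused = {v: list(range(d * log_n)) for v in {v for (v, _) in coupon_dest}}
    cur, done, connectors = s, 0, [s]
    while done <= tau - 2 * mu:
        i = unused[cur].pop(rng.randrange(len(unused[cur])))  # sample & delete a coupon
        length, nxt = coupon_dest[(cur, i)]                   # token flooded to destination
        done += length
        cur = nxt
        connectors.append(cur)
    while done < tau:                                         # at most 2*mu naive steps remain
        cur = step(cur)
        done += 1
    return cur, connectors
```

Deleting the sampled coupon (the `pop`) mirrors the algorithm's requirement that no short walk be reused, which is what keeps the stitched walk a true random walk.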


To understand the intuition behind this algorithm, let us analyze its running time. First, we claim that Phase~1 needs $O(\mu)$ rounds with high probability (see Lemma~\ref{lem:phase1}). Recall that, in Phase~1, each node prepares $d\log n$ independent random walks of length (approximately) $\mu$. Since all nodes start their $d\log n$ coupons at the same time, each edge in the current graph receives, in expectation, at most $2\log n$ coupons per round ($\log n$ from each endpoint). Therefore sending out (just) $d\log n$ coupons from each node for $\mu$ steps takes $O(\mu)$ rounds in expectation in our model. This argument can be modified to show that we need $O(\mu)$ rounds with high probability (see the full proof of Lemma~\ref{lem:phase1}). Now, by the definition of the dynamic diameter, flooding takes $\Phi$ rounds. We show that sampling a coupon can be done in $O(\Phi)$ rounds (cf. Lemma~\ref{lem:lemma2.3}), and it follows that Phase~2 needs $\tilde O(\Phi \cdot \tau/\mu)$ rounds. Therefore, the algorithm needs $\tilde{O}(\mu + \Phi \cdot \tau/\mu)$ rounds, which is $\tilde{O}(\sqrt{\tau \Phi})$ when we set $\mu = \sqrt{\tau \Phi}$. 
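The choice of $\mu$ is the standard balancing of the two phases: ignoring polylogarithmic factors, the total round bound is minimized where its derivative vanishes,

```latex
f(\mu) \;=\; \mu + \frac{\Phi\,\tau}{\mu}, \qquad
f'(\mu) \;=\; 1 - \frac{\Phi\,\tau}{\mu^{2}} \;=\; 0
\;\Longrightarrow\; \mu \;=\; \sqrt{\tau\Phi}, \qquad
f\bigl(\sqrt{\tau\Phi}\bigr) \;=\; 2\sqrt{\tau\Phi}.
```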

The reason the above algorithm for Phase~2 is incomplete is that $d\log n$ coupons per node may not be enough: the token might be forwarded to some node $v$ many times in
Phase~2, so that all coupons distributed by $v$ in the first phase get deleted (in other words, $v$ is chosen as a connector node many times and all its coupons are exhausted). If this happens, the stitching process cannot progress. To fix this problem, we will show (in the next section) an important property of the random walk which
says that a random walk of length $O(\tau)$ visits each node $v$ at most $\tilde{O}(d\sqrt{\tau})$ times w.h.p. (cf. Lemma~\ref{lem:visit-bound}). But this bound alone is not enough to get the desired running time, as it says nothing about the distribution of the connector nodes. We use the following idea to overcome this: instead of performing walks of length exactly $\mu$, the $i$-th walk has length $\mu + r_i$, where $r_i$ is a random number in the range $[0, \mu-1]$. Since these random numbers are independent across walks, each short walk now has a random length in the range $[\mu, 2\mu-1]$. This modification is needed to claim that each node is visited as a connector only $\tilde{O}(d\sqrt{\tau}/\mu)$ times (cf. Lemma~\ref{lem:connector-bound}), which implies that no node has to prepare too many short walks. It turns out that this aspect requires quite a bit more work in the dynamic setting and therefore needs new ideas and techniques. The compact pseudocode is given in Algorithm~\ref{alg:single-random-walk}. 

\subsection{Analysis}
We first show the correctness of the algorithm and then analyze its time complexity.
\subsubsection{Correctness}
\label{sec:correctness}
\begin{lemma}\label{lem:correctness}
The algorithm {\sc Single-Random-Walk}, with high probability, outputs a node sampled from a distribution close to the uniform distribution on the vertex set $V$. 
\end{lemma}


\newcommand{\mindegree}[0]{\delta}
\begin{algorithm}[H]
\caption{\sc Single-Random-Walk}
\label{alg:single-random-walk}
\textbf{Input:} Starting node $s$, desired walk length $\tau$ and parameter $\mu$.\\
\textbf{Output:} Destination node of the walk outputs the ID of $s$.\\

\textbf{Phase 1: (Each node $v$ performs $d\log n$ random walks ($d=\deg(v)$) of length $\mu + r_i$, where $r_i$ (for each $1\leq i \leq d\log n$) is chosen independently at random in the range $[0, \mu - 1]$. At the end of the process, there are $d\log n$ (not necessarily distinct) nodes holding a ``coupon'' containing the ID of $v$.)}
\begin{algorithmic}[1]
\FOR{each node $v$}
\STATE  Generate $d\log n$ random integers in the range $[0, \mu - 1]$, denoted by $r_1, r_2, \ldots,r_{d\log n}$.
\STATE Construct $d\log n$ messages, each containing its ID and a counter number; in addition, the $i$-th message contains the desired walk length $\mu + r_i$. 
We will refer to these messages created by node $v$ as ``coupons created by $v$''.
\ENDFOR

\FOR{$i=1$ to $2 \mu$}

\STATE This is the $i$-th round. Each node $v$ does the following: Consider each coupon $C$ held by $v$ which is received in the $(i - 1)$-th round. If the coupon $C$'s desired walk length is at most $i$, then $v$ keeps this coupon ($v$ is the desired destination). Else, $v$ picks a neighbor $u$ uniformly at random  for each coupon $C$ and forwards  $C$ to $u$.

%\COMMENT{Note that any iteration could require more than 1 round.}

\ENDFOR

\end{algorithmic}


\textbf{Phase 2: (Stitch short walks by token forwarding. Stitch $\Theta (\tau/\mu)$ walks, each of length in $[\mu, 2 \mu -1]$.)}
\begin{algorithmic}[1]
\STATE The source node $s$ creates a message called ``token'' which contains the ID of $s$.

\STATE The algorithm will forward the token around and keep track of a set of connectors, denoted by $\Re$. Initially, $\Re = \{s\}$.

\WHILE {Length of walk completed is at most $\tau-2 \mu$}

  \STATE Let $v$ be the node that is currently holding the token.
  
 \STATE $v$ samples one of the coupons distributed by $v$ uniformly at random (by randomly choosing one counter number from the unused set of coupons). Let $v'$ be the destination node of the sampled coupon, say $C$.

 % \STATE $v$ calls {\sc Sample-Destination($v$)} and let $v'$ be the
  %returned value (which is a destination of an unused random walk starting at $v$
  %of length between $\mu$ and $2\mu-1$.)

  %\IF{$v'$ = {\sc null} (all walks from $v$ have already been used up)}

  %\STATE $v$ calls {\sc Get-More-Walks($v$, $\mu$)} (Perform $\Theta(l/\mu)$ walks
  %of length $\mu$ starting at $v$)

%  \STATE $v$ calls {\sc Sample-Destination($v$)} and let $v'$ be the
  %returned value

  %\ENDIF

  \STATE $v$ sends the token to $v'$ through flooding and deletes the coupon $C$.  

  \STATE $\Re = \Re \cup \{v\}$

\ENDWHILE

\STATE Walk naively until $\tau$ steps are completed (this is at
most another $2 \mu$ steps)

\STATE A node holding the token outputs the ID of $s$

\end{algorithmic}

\end{algorithm}


\begin{proof} 
We know (from Theorem~\ref{thm:mixtime}) that any random walk on a regular dynamic graph reaches ``close'' to the uniform distribution at step $\tau$, regardless of the changes to the graph in each round, as long as every $G_t$ is $d$-regular, non-bipartite and connected. Therefore it is sufficient to show that {\sc Single-Random-Walk} finishes with a node $v$ which is the destination of a true random walk of length $\tau$ on some appropriate dynamic graph from the source node $s$. We show this below in two steps. \\
First we show that each short walk (of length approximately $\mu$) created in Phase~1 is a true random walk on a dynamic graph sequence $G_1, G_2, \ldots, G_{\tilde{\mu}}$ (where $\tilde{\mu}$ is some approximate value of $\mu$). This means that in every step $t$, each walk moves to some random neighbor of the current node on the graph $G_t$, and each walk is independent of the others. The proof of Lemma~\ref{lem:phase1} shows that w.h.p. there are at most $O(\log^3 n)$ bits of congestion on any edge in any round of Phase~1. Since we consider the {\em CONGEST}($\log^3 n$) model, $O(\log^3 n)$ bits can be sent through each edge in each direction per round. Hence there is effectively no delay in Phase~1, and all walks can extend their length from $i$ to $i+1$ in one round. Clearly each walk is independent of the others, as every node sends messages independently in parallel. This proves that each short walk (of a random length in the range $[\mu, 2\mu-1]$) is a true random walk on the graph sequence $G_1, G_2, \ldots, G_{\tilde{\mu}}$. \\
In Phase~2, we stitch short walks to get a long walk of length $\tau$. Therefore, the $\tau$-length random walk is not over the dynamic graph sequence $G_1, G_2, \ldots, G_{\tau}$; rather it is over the sequence\\ $G_1, G_2, \ldots, G_{\tilde{\mu}}, G_1, G_2, \ldots, G_{\tilde{\mu}}, \ldots$ (approximately $\tau/\mu$ times). The stitching itself is done over the graph sequence $G_{\tilde{\mu}+1}, G_{\tilde{\mu}+2}, \ldots$ onwards. This does not affect the probability distribution on the vertex set in each step, since the sequence $G_{\tilde{\mu}+1}, G_{\tilde{\mu}+2}, \ldots$ is used only for communication. 
Also note that, since we define $\tau$ to be the maximum over the mixing times of the static graphs $G_t$, the walk clearly gets close to the uniform distribution after $\tau$ steps
 over the graph sequence
 $G_1, G_2, \ldots, G_{\tilde{\mu}}, G_1, G_2, \ldots, G_{\tilde{\mu}}, \ldots$ (approximately $\tau/\mu$ times).\\
 Finally, when we stitch at a node $v$, we are sampling a coupon (short walk) uniformly at random among many coupons (and therefore, short walks starting at $v$) distributed by $v$. It is easy to see that this stitches  short random walks  independently and hence gives a true random
walk of longer length.
%One can do this easily by the following way. In the beginning of phase~1, each node $v$ assign a number with their $\eta d$ coupons (short walks). Therefore, each node $v$ has $\eta d$ (we choose $\eta = 1$ later in the analysis) coupons containing ID of $v$ plus a coupon number, distributed throughout the network. At each time of sampling a coupon ($C$), node $v$ choose a random coupon number (from the unused set of coupons) and inform the node holding the coupon $C$ as a stitching point through flooding.
Thus it follows that the algorithm {\sc Single-Random-Walk} returns a destination node of a $\tau$-length random walk (starting from $s$) on some dynamic graph.     
\end{proof}

\subsubsection{Time Analysis}
%We show that the algorithm {\sc Single-Random-Walk} runs  with high probability in $\tilde{O}(\sqrt{\tau \Phi})$ rounds where $\tau$ and $\Phi$ are respectively dynamic mixing time and dynamic diameter of the network. 
%\begin{theorem}\label{thm:maintheorem}
%The algorithm {\sc Single-Random-walk} solves the Single random walk problem and with high probability finishes in $\tilde{O}(\sqrt{\tau \Phi})$ rounds. 
%\end{theorem}
We establish the running time of the algorithm {\sc Single-Random-Walk} (cf. Theorem~\ref{thm:maintheorem}) using the following lemmas.  
\begin{lemma}\label{lem:phase1}
Phase 1 finishes in $O(\mu)$ rounds with high probability.  
\end{lemma} 
\begin{proof}
In Phase 1, each node $v$ performs $d\log n$ walks of length (approximately) $\mu$. Initially all the nodes start with $d\log n$ coupons (or messages), and each coupon takes a random walk. We prove that after any given number of steps $j$, the expected number of coupons at any node $v$ is still $d\log n$. At any round, every node has $d$ neighbors, so at each step every node can send (as well as receive) $d$ messages. Now the number of messages started at any node $v$ is proportional to its degree and its stationary distribution (which is uniform). Therefore, in expectation the number of messages at any node remains the same. Thus the expected number of messages, say $X$, that go through an edge in any round is at most $2\log n$ (counting both endpoints). Using a Chernoff bound we get $\Pr[X\geq 4 \log^2 n] \leq 2^{-4\log n} = n^{-4}$. It follows that the number of messages that go through any edge in any round is at most $4 \log^2 n$ with high probability. Hence at most $O(\log^3 n)$ bits cross any edge per round w.h.p. Since we consider the {\em CONGEST}($\log^3 n$) model, there is no delay due to congestion. Hence, Phase 1 finishes in $O(\mu)$ rounds with high probability.     
\end{proof}
\begin{lemma}\label{lem:lemma2.3}
{\sc Sample-Coupon} always finishes within $O(\Phi)$ rounds, where $\Phi$ is the dynamic diameter of the network. 
\end{lemma} 
\begin{proof}
The proof follows directly from the fact that flooding delivers a message to all other nodes in the network, and flooding finishes within the dynamic diameter, i.e., $\Phi$ rounds.  
\end{proof}

We note that the adversary can force the random walk to visit any particular vertex several times; we would then need many short walks from each vertex, which increases the round complexity. We show the following key technical lemma (Lemma~\ref{lem:visit-bound}) that bounds the number of visits to each node in a random walk of length $\ell$.  
In a $d$-regular dynamic graph, we show that no node is visited more than $\tilde{O}(d\sqrt{\tau}/\mu)$ times w.h.p. as a connector node of a $\tau$-length random walk. For this we need a technical result on random walks that bounds the number of times a node is visited in an $\ell$-length random walk (where $\ell = O(\tau)$). Consider a simple random walk on a connected $d$-regular dynamic graph on $n$ vertices. Let $N_t^x(y)$ denote the number of visits to vertex $y$ by time $t$, given that the walk started at vertex $x$. 
Now, consider $k$ walks, each of length $\ell$, starting from (not necessarily distinct) nodes $x_1, x_2, \ldots, x_k$. 

\begin{lemma}\label{lem:visit-bound}
$(${\sc Random Walk Visits Lemma}$)$. For any nodes $x_1, x_2, \ldots, x_k$, \[\Pr\bigl(\exists y \mbox{ s.t. }
\sum_{i=1}^k N_\ell^{x_i}(y) \geq 32\, d \sqrt{k\ell+1}\log n+k\bigr) \leq 1/n\,.\]
\end{lemma}
To prove the above lemma we need some key auxiliary results. We start with a bound on the first moment of the number of visits to each node by each walk.
\begin{proposition}\label{proposition:first-moment} For
any nodes $x$, $y$ and $t = O(\tau)$,
\begin{equation}
\e[N_t^x(y)] \le 8 \ d \sqrt{t+1}
\end{equation}
\end{proposition}

To prove the above proposition, let $P$ denote the transition probability matrix of such a random walk and let $\pi$ denote the stationary distribution of the walk. 

We first prove a general bound for regular dynamic graphs below. This bound follows from Lyons' lemma (see Lemma~3.4 in \cite{Lyons}). The proof is long and technical and is deferred to the Appendix. 
 
\begin{lemma}\label{lem:lyons}
Let $Q$ denote the transition probability matrix of a $d$-regular dynamic graph. Let $c= \min{\{\pi(x) Q(x,y) : x \neq y \mbox{ and }Q(x,y)>0\}} > 0\,$. Note that here $c = \frac{1}{n d}$, as $\pi$ is the uniform distribution. Then for any vertices $x, y$ and for $\hat{k} \leq \rho m^2$, where $\rho$ is a suitably chosen constant,

\begin{equation}\label{one_sided_decay} 
Q^{\hat{k}}(x,y)  \le \frac{4\pi(y)}{c \sqrt{\hat{k}+1}} = \frac{4d}{\sqrt{\hat{k}+1}}\,,
\end{equation}
where $Q^{\hat{k}}(x, y)$ is the probability that a random walk started from node $x$ is at $y$ after $\hat{k}$ steps on the dynamic graph. 
\end{lemma}
 
%$$\bigl|\frac{Q^k(x,x)}{\pi(x)} - 1\bigr| \le
%\min\Bigl\{\frac{1}{\alpha c \sqrt{k+1}}, \frac{1}{2\alpha^2 c^2(k+1)} \Bigr\}\,.$$
 

%%%******Proof was here*****%%%%

%For $k= O(\tau)$ and small $\alpha$, the above  can be simplified to the following bound (see Remark~3 in \cite{Lyons}).
%\begin{equation}
%\label{one_sided_decay} Q^k(x,y)  \le \frac{4\pi(y)}{c \sqrt{k+1}} =
%\frac{4d}{\sqrt{k+1}}\,.
%\end{equation}

Note that given a simple random walk on a graph $G$, with a
corresponding matrix $P$, one can always switch to the lazy version
$Q=(I+P)/2$ and interpret it as a walk on a graph $G'$, obtained by
adding self-loops to vertices in $G$ so as to double the degree of
each vertex. In the following, with an abuse of notation, we assume our
$P$ is such a lazy version of the original.

\begin{proof}[Proof of Proposition~\ref{proposition:first-moment}]
Remember that the dynamic graph is $\mathcal{G} = G_1, G_2, \ldots$. Let $X_0, X_1, \ldots $ describe the random walk, with $X_i$
denoting the position of the walk at time $i\ge 0$ on $G_{i+1}$, and let
$\bone_A$ denote the indicator (0-1) random variable, which takes
the value 1 when the event $A$ is true. In the following we also use
the subscript $x$ to denote the fact that the probability or
expectation is with respect to starting the walk at vertex $x$.
%Let $X_0=x$
We first bound the expectation.
\begin{align*}
\e[N_t^x(y)] =  \e_x[  \sum_{i=0}^t \bone_{\{X_i=y\}}] & = \sum_{i=0}^t Q^i(x,y) \\
& \le  4 d \sum_{i=0}^t \frac{1}{\sqrt{i+1}} , \hspace{0.3in} \text{[using the above inequality  (\ref{one_sided_decay})]} \\
& \le 8 d \sqrt{t+1} 
\end{align*}
The last inequality follows from a Riemann integral approximation, since $\sum_{i=0}^{t} \frac{1}{\sqrt{i+1}} \leq \int_{0}^{t+1}\frac{dx}{\sqrt{x}} = 2\sqrt{t+1}$.
\end{proof}
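As a quick numerical sanity check of the partial-sum bound used above (illustrative only; the function name is ours):

```python
import math

def visit_sum(t):
    """Partial sum sum_{i=0}^{t} 1/sqrt(i+1) appearing in the proof of
    Proposition 'first-moment'; it is bounded above by 2*sqrt(t+1)."""
    return sum(1.0 / math.sqrt(i + 1) for i in range(t + 1))
```

For every checked $t$, `visit_sum(t)` stays below $2\sqrt{t+1}$, matching the Riemann-integral estimate.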

Using the above proposition, we bound the number of visits of each walk at each node, as follows. 

\begin{lemma}\label{lemma:whp one walk one node bound}
For $t = O(\tau)$ and any vertex $y \in V$, the random walk
started at $x$ satisfies:
\begin{equation*}
\Pr\bigl(N^x_t(y) \ge  32  \ d \sqrt{t+1}\log n \bigr) \le \frac{1}{n^2} \,.
\end{equation*}
\end{lemma}
\begin{proof}
First, it follows from Proposition \ref{proposition:first-moment} that
%
\begin{equation} 
\Pr\bigl(N^x_t(y) \ge  4\cdot 8 \ d \sqrt{t+1}\bigr) \le \frac{1}{4} \,.\label{eq:simple bound} \hspace{0.3in} \text{[Using Markov's inequality]}
\end{equation}
%

For any $r$, let $L^x_r(y)$ be the time that the random walk
(started at $x$) visits $y$ for the $r^{th}$ time. Observe that, for
any $r$, $N^x_t(y)\geq r$ if and only if $L^x_r(y)\leq t$.
Therefore,
\begin{equation}
\Pr(N^x_t(y)\geq r)=\Pr(L^x_r(y)\leq t).\label{eq:visits eq length}
\end{equation}

Let $r^*=32  \ d \sqrt{t+1}$. By \eqref{eq:simple bound} and
\eqref{eq:visits eq length}, $\Pr(L^x_{r^*}(y)\leq t)\leq
\frac{1}{4}\,.$ We claim that
\begin{equation}
\Pr(L^x_{r^*\log n}(y)\leq t)\leq \left(\frac{1}{4}\right)^{\log
n}=\frac{1}{n^2}\,.\label{eq:hp length bound}
\end{equation}
To see this, divide the walk into $\log n$ independent subwalks,
each visiting $y$ exactly $r^*$ times. Since the event $L^x_{r^*\log
n}(y)\leq t$ implies that all subwalks have length at most $t$,
\eqref{eq:hp length bound} follows. Note that the bounds above hold for any starting vertex, and in particular for $y$; therefore we can apply them with $x=y$ to the subwalks that start at $y$.  
%
Now, by applying \eqref{eq:visits eq length} again,
\[\Pr(N^x_t(y)\geq r^*\log n) = \Pr(L^x_{r^*\log n}(y)\leq t)\leq
\frac{1}{n^2}\] as desired.
\end{proof}

We now extend the above lemma to bound the number of visits of {\em
all} the walks at each particular node.

\begin{lemma}\label{lemma:k walks one node bound}
For $t = O(\tau)$ and for any vertex $y \in V$, the random walks started at $x_1, x_2, \ldots, x_k$ satisfy:
\begin{equation*}
\Pr\bigl(\sum_{i=1}^k N^{x_i}_t(y) \ge  32  \ d \sqrt{kt+1} \log n+k\bigr) \le \frac{1}{n^2} \,.
\end{equation*}
\end{lemma}
\begin{proof}
First, observe that, for any $r$, $$\Pr\bigl(\sum_{i=1}^k
N^{x_i}_t(y) \geq r-k\bigr)\leq \Pr[N^y_{kt}(y)\geq r].$$ To see
this, we construct a walk $W$ of length $kt$ starting at $y$ in the
following way: For each $i$, denote a walk of length $t$ starting at
$x_i$ by $W_i$. Let $\tau_i$ and $\tau'_i$ be the first and last
time (not later than time $t$) that $W_i$ visits $y$. Let $W'_i$ be
the subwalk of $W_i$ from time $\tau_i$ to $\tau_i'$. We construct a
walk $W$ by stitching $W'_1, W'_2, ..., W'_k$ together and complete
the rest of the walk (to reach the length $kt$) by a normal random
walk. It then follows that the number of visits to $y$ by $W_1, W_2,
\ldots, W_k$ (excluding the starting step) is at most the number of
visits to $y$ by $W$. The first quantity is $\sum_{i=1}^k
N^{x_i}_t(y)-k$. (The term `$-k$' comes from the fact that we do not
count the first visit to $y$ by each $W_i$ which is the starting
step of each $W'_i$.) The second quantity is $N^y_{kt}(y)$. The
observation thus follows.

Therefore,
\begin{align*}
& \Pr\bigl(\sum_{i=1}^k N^{x_i}_t(y)\geq 32 \ d
\sqrt{kt+1}\log n + k\bigr) \\ & \leq \Pr\bigl(N^y_{kt}(y)\geq 32 \ d
\sqrt{kt+1}\log n\bigr) \\ & \leq \frac{1}{n^2}
\end{align*}
%
where the last inequality follows from Lemma~\ref{lemma:whp one walk
one node bound}.
\end{proof}


%Lemma~\ref{lemma:visits bound}
Now the Random Walk Visits Lemma (cf. Lemma~\ref{lem:visit-bound}) follows immediately from
Lemma~\ref{lemma:k walks one node bound} by a union bound over all
nodes. \\
 
The above lemma shows that the number of visits to each node can be bounded.
However, for each node, we are only interested in the case where it is used as a connector (a stitching point). The lemma below shows that the number of visits as a connector can be bounded as well; i.e., if a node appears $t$ times in the walk, then it is likely to appear roughly $t/\mu$ times as a connector.

\begin{lemma}\label{lem:connector-bound}
For any vertex $v$, if $v$ appears in the walk at most $t$ times then it appears as a connector node at most $t(\log n)^2/\mu$ times with probability at least $1-1/n^2$.
\end{lemma}
\begin{proof}
Intuitively, the argument is simple, since the connectors are spread out at intervals of length approximately $\mu$. However, there might be some periodicity that results in the same node being visited multiple times exactly at $\mu$-intervals. To overcome this we crucially use the fact that the algorithm uses short walks of length $\mu + r$ (instead of fixed length $\mu$), where $r$ is chosen uniformly at random from $[0, \mu -1]$. The proof then proceeds by constructing an equivalent process that partitions the $\tau$ steps into intervals of length $\mu$ and samples one point from each interval. The detailed proof follows immediately from the proof of Lemma~2.7 in~\cite{drw-jacm}.
\end{proof}

%The proof of the above lemmas are in the Appendix. 
Now we are ready to prove the main result (Theorem~\ref{thm:maintheorem}) of this section. \\

\noindent\textbf{Proof of the Theorem \ref{thm:maintheorem} (restated below)}
\begin{theorem}
The algorithm {\sc Single-Random-walk} (cf. Algorithm \ref{alg:single-random-walk}) solves the Single Random Walk problem and with high probability finishes in $\tilde{O}(\sqrt{\tau \Phi})$ rounds. 
\end{theorem}
\begin{proof}%[Proof of the Theorem \ref{thm:maintheorem}]
First, we claim, using Lemma \ref{lem:visit-bound} and
\ref{lem:connector-bound}, that each node is used as a connector node
at most $\frac{32 \ d \sqrt{\tau}(\log n)^3}{\mu}$ times with
probability at least $1-2/n$. To see this, observe that the claim
holds if each node $x$ is visited at most
$t(x)=32 \ d \sqrt{\tau+1}\log n$ times and consequently appears as a
connector node at most $t(x)(\log n)^2/\mu$ times. By
Lemma~\ref{lem:visit-bound}, the first condition holds with
probability at least $1-1/n$. By Lemma~\ref{lem:connector-bound} and
the union bound over all nodes, the second condition holds with
probability at least $1-1/n$, provided that the first condition
holds. Therefore, both conditions hold together with probability at
least $1-2/n$ as claimed.

Now, we choose $\mu=32 \sqrt{\tau \Phi}(\log n)^2$.
%
By Lemma~\ref{lem:phase1}, Phase~1 finishes in $O(\mu) = \tilde O(\sqrt{\tau \Phi})$ rounds with high probability.
%
For Phase~2, {\sc Sample-Coupon} is invoked
$O(\frac{\tau}{\mu})$ times (only when we stitch the walks) and
therefore, by Lemma~\ref{lem:lemma2.3}, contributes
$O(\frac{\tau \Phi}{\mu})=\tilde O(\sqrt{\tau \Phi})$ rounds.

Therefore, with probability at least $1-2/n$, the algorithm finishes in $\tilde
O(\sqrt{\tau \Phi})$ rounds, as claimed.
\end{proof}


\subsection{Generalization to Non-regular Dynamic Graphs}
\label{sec:nonregular}
%\vspace{-0.03in}
%It is known that the cover time could be exponential for a simple random walk on undirected dynamic graphs~\cite{AKL08}. 
%If we assume a  lazy random walk, then that guarantees polynomial cover time regardless of the changes made by the adversary. 
By using a {\em lazy} random walk strategy, we can generalize our results to non-regular dynamic graphs as well. The lazy random walk strategy ``converts'' a random walk on a non-regular graph into a slower random walk on a regular graph. 

\begin{definition}\label{def:lazy-rw}
At each step of the walk, pick a vertex $v$ from $V$ uniformly at random; if there is an edge from the current vertex to $v$, move to $v$, otherwise stay at the current vertex. 
\end{definition} 

This lazy random walk strategy in fact makes the graphs $n$-regular: every edge adjacent to the current vertex is picked with probability $1/n$, and with the remaining probability we stay at the current vertex. 
Using this strategy, we obtain the same results on any non-regular graph as well, but slower by a factor of $n$. However, we can do better if nodes know an upper bound $d_{max}$ on the maximum degree of the dynamic network. Modify the lazy walk so that at each step, the walk stays at the current vertex $u$ with probability $1 - d(u)/(d_{max} + 1)$ and with the remaining probability moves to a neighbor chosen uniformly at random. This results in a slowdown by a factor of $d_{max}$ compared to the regular case. Therefore, for a bounded-degree dynamic graph where each node's degree is bounded by a constant $d$, the running time of our algorithm is affected only by a constant factor. 
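The degree-capped lazy step just described can be sketched as follows; \texttt{neighbors} is a hypothetical accessor for the current graph's adjacency, and the function name is ours:

```python
import random

def lazy_step(neighbors, u, d_max, rng=random):
    """One step of the degree-capped lazy walk: stay at the current vertex u
    with probability 1 - deg(u)/(d_max + 1); otherwise move to a neighbor
    chosen uniformly at random.  Each neighbor is thus reached with
    probability exactly 1/(d_max + 1), so the walk behaves like one on a
    (d_max + 1)-regular graph with self-loops."""
    nbrs = neighbors(u)
    if rng.random() < len(nbrs) / (d_max + 1):
        return rng.choice(nbrs)
    return u
```

Since each transition probability is a neighbor-independent $1/(d_{max}+1)$, the stationary distribution stays uniform, at the cost of the $d_{max}$-factor slowdown noted above.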
%We now move to showing how these techniques can be further extended for performing several random walks efficiently.
