\section{Application: Information Dissemination (or $k$-Gossip)}\label{sec:apps}
While the previous sections focused on efficiently performing the fundamental primitive of random walks in a dynamic network, in this section we show that these techniques also directly yield efficient solutions to concrete applications in dynamic networks.
%The following application results are specifically strong and novel for the dynamic setting, and  rely crucially on our round complexity guarantees and strong bounds in this dynamic distributed network framework. Hence previous results on random walks in static networks did not yield the following performance improvements in well-studied problems.
We present a fully distributed algorithm for the {\em $k$-gossip} problem (defined in Section \ref{sec:results}) in $d$-regular dynamic graphs (cf. Algorithm \ref{alg:token-dissemination}). 
Our distributed algorithm is based on the centralized algorithm of \cite{DPRS-arxiv}, which consists of two phases. The first phase sends $f$ copies (the value of the parameter $f$ will be fixed in the analysis) of each of the $k$ tokens to a set of {\em random} nodes. 
%In \cite{DPRS-arxiv}, this is implemented by a centralized algorithm assuming that the algorithm knows the entire sequence of graphs in advance. 
%Here, we show that
%this can be efficiently implemented in a distributed and localized fashion using our ``many" random walks algorithm (cf. Section \ref{sec:k-algo}) --- which shows how to efficiently perform many independent random walks simultaneously.
 %In the first phase we send some $f$ copies of each token $t$ to random nodes. 
 We use algorithm {\sc Many-Random-Walks} (cf. Section~\ref{sec:k-algo}) to do this efficiently. In the second phase we simply broadcast each token $t$ from these random locations so that it reaches all nodes. We show that if every node holding a token $t$ broadcasts it for $O(n\log n/f)$ rounds, then with high probability all the nodes receive the token $t$.
 
\begin{algorithm}[H]
\caption{\sc K-Information-Dissemination}
\label{alg:token-dissemination}
\textbf{Input:} A dynamic graph $\mathcal{G}: G_1, G_2, \ldots$ and $k$ tokens placed at some nodes.\\
\textbf{Output:} All $k$ tokens disseminated to all the nodes.\\

\textbf{Phase 1: (Send $f$ copies of each token to random nodes; $f$ is chosen appropriately in the analysis)}
\begin{algorithmic}[1]

\STATE  Every node holding token $t$ sends $f$ copies of the token to random nodes using the algorithm {\sc Many-Random-Walks}.

\end{algorithmic}

\textbf{Phase 2: (Broadcast each token for $O(n\log n/f)$ rounds)}
\begin{algorithmic}[1]
\FOR{each token $t$}
\STATE  For the next $2 n\log n/f$ rounds, let all the nodes that have token $t$ broadcast the token.
%\STATE When algorithm {\sc Single-Random-walk} terminates, the sampled destination outputs ID of the source $s_j$. 
\ENDFOR
\end{algorithmic}

\end{algorithm}
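To make the two phases concrete, the following is a minimal Python sketch (illustrative only, not part of the paper's formal model): the walk-based sampling of Phase~1 is replaced by direct uniform sampling as a stand-in for {\sc Many-Random-Walks}, and the dynamic adversary is modeled as a fresh random cycle (a connected graph) in every round. The function name and parameters are our own.

```python
import math
import random

def k_information_dissemination(n, tokens, f, rng):
    """Simulate the two phases of K-Information-Dissemination on a
    synthetic dynamic graph: each round the adversary presents a
    fresh random cycle on the n nodes (always connected)."""
    # Phase 1: send f copies of each token to (near-)uniform random
    # nodes.  Direct uniform sampling stands in for Many-Random-Walks.
    holders = {t: set(rng.sample(range(n), f)) for t in tokens}

    # Phase 2: broadcast each token for ~2 n log n / f rounds; every
    # node currently holding the token informs its two cycle neighbors.
    rounds = int(2 * n * math.log(n) / f) + 1
    for t in tokens:
        for _ in range(rounds):
            perm = list(range(n))
            rng.shuffle(perm)  # this round's cycle (dynamic topology)
            informed = set()
            for i, v in enumerate(perm):
                if v in holders[t]:
                    informed.add(perm[(i - 1) % n])
                    informed.add(perm[(i + 1) % n])
            holders[t] |= informed
    return holders
```

Because each round's graph is connected, at least one uninformed node is informed per round; when $f \le 2\log n$ the broadcast therefore runs long enough to guarantee full coverage, and for larger $f$ coverage holds with high probability, as in Lemma \ref{lem:token-broadcast}.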

\subsection{Analysis}
First we prove a lemma which guarantees that our algorithm correctly disseminates the tokens to all the nodes in the network. 
\begin{lemma}\label{lem:token-broadcast}
Let $S \subseteq V$ be a set of random nodes (of size $\geq 2\log n$) chosen from close to the uniform distribution over $V$. Let all nodes in $S$ hold a token $t$. If every node having a token $t$ broadcasts it for $2n\log n/|S|$ rounds, then all the $n$ nodes receive the token $t$ with high probability.  
\end{lemma}
\begin{proof}
Let $|S| = f$. Fix a node $v$. Suppose the token $t$ is broadcast for $2 n \log n/f$ rounds; then there is a set $S_v^t$ of at least $2 n \log n/f$ nodes from which $v$ is reachable within $2 n \log n/f$ rounds. This follows from the fact that in every round at least one uninformed node becomes informed, since the graph is (always) connected. It is now clear that if $S$ intersects $S_v^t$, then $v$ receives token $t$. The elements of $S$ were sampled from the vertex set with probability $1/n \pm 1/n^2$, i.e., close to the uniform distribution, so each sample lands in $S^t_v$ with probability at least $|S^t_v|(\frac{1}{n} - \frac{1}{n^2})$. Hence the probability that a single sampled node $w \in S$ misses $S^t_v$ is at most $1 - |S^t_v|(\frac{1}{n} - \frac{1}{n^2}) \leq 1 - \frac{2n\log n}{f} \cdot \frac{n - 1}{n^2}$, and therefore the probability that all $f$ sampled nodes in $S$ miss $S^t_v$ is at most $\left(1 - \frac{2(n - 1)\log n}{n f}\right)^f \leq \frac{1}{n^{2 - 2/n}}$. A union bound over all $n$ nodes now shows that every node in the network receives the token $t$ with high probability. Note that this argument applies when $|S| \geq 2\log n$. If $|S| < 2\log n$, then the broadcast runs for $2n\log n/|S| > n$ rounds, and since at least one new node is informed in each round, all the nodes receive the token within $O(n)$ rounds.
\end{proof}
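As a numeric sanity check on the calculation above (illustrative only; the function below is not part of the proof), one can verify that the probability that all $f$ samples miss $S_v^t$ is indeed at most $n^{-(2-2/n)}$:

```python
import math

def miss_probability_bound(n, f):
    """Probability, per the proof, that all f near-uniform samples miss
    a fixed set S_v^t of size 2 n log(n)/f, when each sample hits any
    fixed node with probability at least 1/n - 1/n^2."""
    s = 2 * n * math.log(n) / f            # |S_v^t|
    p_hit = s * (1.0 / n - 1.0 / n ** 2)   # per-sample hit probability (lower bound)
    return (1.0 - p_hit) ** f
```

For instance, with $n = 10^4$ and $f = 200$ the bound evaluates to roughly $4 \times 10^{-9}$, below $n^{-(2-2/n)} \approx 10^{-8}$, leaving room for the union bound over all $n$ nodes.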

%Next we analyze the running time of our $k$-gossip algorithm. %in two cases.

%%% REMOVING THE FIRST CASE %%%
\iffalse
\paragraph{Case I} First we consider the case where $k$ tokens are initially situated among nodes (may not be distinct) arbitrarily. For this, we use the trivial version of {\sc Many-Random-Walks} algorithm to send the tokens to random places in phase~1. We show that our proposed $k$-gossip algorithm finishes in $\tilde{O}(k n^{\frac{1}{2}}(\tau \Phi)^{\frac{1}{4}})$ rounds w.h.p. To make sure that the algorithm terminates in $O(k\Phi)$ rounds, each node compares the running time of our algorithm and the naive algorithm (which is simply broadcasting each of the $k$ tokens sequentially; clearly this will take $O(k\Phi)$ rounds in total) and run the faster one. We assume that nodes know the dynamic diameter $\Phi$ and the dynamic mixing time $\tau$. Moreover, nodes can know the number of tokens $k$ in $O(\Phi)$ rounds. Therefore, each node can easily compare the running time. Thus the claimed bound in Theorem \ref{thm:token-bound1} holds. The formal proof is given below. \\

\noindent \textbf{Proof of the Theorem \ref{thm:token-bound1} (restated below)}
\begin{theorem}
The  $k$-gossip problem can be solved  with high probability in $\tilde{O}(\min\{k n^{\frac{1}{2}}(\tau \Phi)^{\frac{1}{4}}, k\Phi \})$ rounds. 
\end{theorem}
\begin{proof}
We  run the faster algorithm between naive and our proposed algorithm (cf. Algorithm \ref{alg:token-dissemination}). Since the naive algorithm finishes in $O(k\Phi)$ rounds, therefore we concentrate here only on the round complexity of our proposed algorithm.\\
In Phase~1, we send $f$ copies of each $k$ token to random nodes which means we are sampling $k f$ random nodes from uniform distribution. We assumed that any node may want to disseminate information (even multiple pieces) to all other nodes. Therefore, we use the trivial {\sc Many-Random-Walks} algorithm which is just repeating the {\sc Single-Random-Walk} algorithm for each source node. The running time of this trivial {\sc Many-Random-Walks} algorithm is $\tilde{O}(\kappa \sqrt{\tau \Phi})$ rounds to perform $\kappa$ random walks. Hence, Phase~1 of {\sc K-Information-Dissemination} takes $\tilde{O}(kf \sqrt{\tau \Phi})$ rounds to sample $k f$ random nodes.  

Now consider a particular token $t$. Let $S$ be the set of nodes which has the token $t$ after phase 1. Suppose the {\sc Many-Random-Walks} algorithm samples nodes with probability $1/n \pm 1/n^2$ which means that each node in $S$ is sampled with probability\footnote{The algorithm {\sc Many-Random-Walks} samples nodes from close to the uniform distribution and this can be made arbitrarily close to uniform by increasing the length of walk by a suitable constant factor.} $1/n \pm 1/n^2$. Now we can apply the above Lemma \ref{lem:token-broadcast} and conclude that in phase~2, every node in the network receives the token $t$ with high probability. Therefore, Phase~2 uses $k n\log n/f$ rounds and sends all the $k$ tokens to all the nodes with high probability. Therefore the algorithm finishes in $\tilde{O}(kf\sqrt{\tau \Phi} + k n/f)$ rounds. Choosing $f = n^{\frac{1}{2}}/(\tau \Phi)^{\frac{1}{4}}$ gives the bound as $\tilde{O}(kn^{\frac{1}{2}}(\tau \Phi)^{\frac{1}{4}})$. Hence, the $k$-gossip problem can be  solved with high probability in $\tilde{O}(\min\{kn^{\frac{1}{2}}(\tau \Phi)^{\frac{1}{4}}, k\Phi \})$ rounds. 
\end{proof} 
\fi

%%% This was actually Case 2. Rephrasing the sentences here, because of removal of Case 1 %%%
Next we analyze the running time of our $k$-gossip algorithm. We assume that the $k$ tokens are initially placed at nodes chosen from a specific distribution: source nodes are chosen randomly with probability proportional to their degrees, which in our $d$-regular model is the uniform distribution. For this case we present a more sophisticated algorithm for the $k$-gossip problem. We use the {\sc Many-Random-Walks} algorithm, which performs $\kappa$ random walks in $\tilde O\left(\min \{\sqrt{\kappa \tau \Phi}, \kappa + \tau\} \right)$ rounds (cf. Theorem \ref{thm:kappabound}). 

We show that our proposed $k$-gossip algorithm finishes in $\tilde{O}(n^{\frac{1}{3}}k^{\frac{2}{3}}(\tau \Phi)^{\frac{1}{3}})$ rounds w.h.p. 
To make sure that the algorithm terminates in $O(k\Phi)$ rounds, each node compares the running time of our algorithm and the naive algorithm (which is simply broadcasting each of the $k$ tokens sequentially; clearly this will take $O(k\Phi)$ rounds in total) and runs the faster one. We assume that nodes know the dynamic diameter $\Phi$ and the dynamic mixing time $\tau$. Moreover, nodes can know the number of tokens $k$ in $O(\Phi)$ rounds. Therefore, each node can easily compare the running time of the two algorithms. Thus the claimed bound in Theorem \ref{thm:token-bound} holds. The formal proof is given below.\\

\noindent \textbf{Proof of the Theorem \ref{thm:token-bound} (restated below)}
\begin{theorem}
The  $k$-gossip problem can be solved  with high probability in $\tilde{O}(\min\{n^{\frac{1}{3}}k^{\frac{2}{3}}(\tau \Phi)^{\frac{1}{3}}, k\Phi \})$ rounds. 
\end{theorem}
%\begin{proof}[Proof of the Theorem \ref{thm:token-bound}]
%begin{theorem}
%The algorithm~(cf. algorithm~\ref{alg:token-dissemination}) solves $k$-gossip problem with high probability\\ in $\tilde{O}(\min\{n^{1/3}k^{2/3}(\tau \Phi)^{1/3}, nk\})$ rounds. 
%\end{theorem}
\begin{proof}%[Proof of the Theorem \ref{thm:token-bound}]
We  run the faster of the two algorithms: our proposed algorithm (cf. Algorithm \ref{alg:token-dissemination}) and the naive algorithm. Since the naive algorithm finishes in $O(k\Phi)$ rounds,  we concentrate here only on the round complexity of our proposed algorithm. Hence we assume that our algorithm has better running time in the following analysis, i.e.,   $n^{\frac{1}{3}}k^{\frac{2}{3}}(\tau \Phi)^{\frac{1}{3}} < k\Phi$.\\
In Phase~1, we send $f$ copies of each of the $k$ tokens to random nodes, which means we sample $k f$ random nodes from the uniform distribution. Since the source nodes are chosen uniformly at random, we can use the faster {\sc Many-Random-Walks} algorithm to do the sampling efficiently. The {\sc Many-Random-Walks} algorithm can sample a constant fraction of $\frac{n^2d^2\Phi}{\tau}$ nodes (cf. Theorem \ref{thm:kappabound}). Therefore, by repeating the {\sc Many-Random-Walks} algorithm at most a constant number of times (before starting Phase~2 of the {\sc K-Information-Dissemination} algorithm), we can sample $\frac{n^2d^2\Phi}{\tau}$ nodes. Hence Phase~1 takes $\tilde{O}(\sqrt{k f \tau \Phi})$ rounds and can send at most $\frac{n^2d^2\Phi}{\tau}$ tokens to random nodes. 

Now fix a token $t$. Let $S$ be the set of nodes that hold the token $t$ after Phase 1. By Lemma \ref{lem:token-broadcast}, in Phase~2 every node in the network receives the token $t$ with high probability. Therefore, Phase~2 uses $k n\log n/f$ rounds and sends all the $k$ tokens to all the nodes with high probability. Hence the algorithm finishes in $\tilde{O}(\sqrt{k f \tau \Phi} + k n/f)$ rounds. Choosing $f = n^{\frac{2}{3}} k^{\frac{1}{3}}/(\tau \Phi)^{\frac{1}{3}}$ balances the two terms and gives the bound $\tilde{O}(n^{\frac{1}{3}} k^{\frac{2}{3}} (\tau \Phi)^{\frac{1}{3}})$. Hence, the $k$-gossip problem can be solved with high probability in $\tilde{O}(\min\{n^{\frac{1}{3}}k^{\frac{2}{3}}(\tau \Phi)^{\frac{1}{3}}, k\Phi \})$ rounds. 

The only thing left is to show that $kf = \tilde O(\frac{n^2d^2\Phi}{\tau})$, since in Phase~1 the {\sc Many-Random-Walks} algorithm can sample at most $\tilde O(\frac{n^2d^2\Phi}{\tau})$ nodes. Therefore $kf$, which is $n^{\frac{2}{3}} k^{\frac{4}{3}}/(\tau \Phi)^{\frac{1}{3}}$, must be less than $\frac{n^2d^2\Phi}{\tau}$; that is, $\tau < \frac{n^2 d^3 \Phi^2}{k^2}$. We show that this holds for any $k$ such that $k < nd$. 
%In fact, it is easy to see that $n^{\frac{2}{3}} k^{\frac{4}{3}}/(\tau \Phi)^{\frac{1}{3}}$ is less than $\frac{n^2d^2\Phi}{\tau}$ if $\tau < d^3 \Phi^2$, assuming $k = n$ (this is also true for larger value of $k$, i.e., for $k$ is at most $n\polylog n$).  
As assumed at the beginning of the proof, $n^{\frac{1}{3}}k^{\frac{2}{3}}(\tau \Phi)^{\frac{1}{3}} < k\Phi$, which gives $\tau < k \Phi^2/n$, and this is always less than $\frac{n^2 d^3 \Phi^2}{k^2}$ whenever $k < nd$. 
\end{proof}
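The choice of $f$ in the proof exactly balances the costs of the two phases; a small numeric check (illustrative only, polylog factors dropped):

```python
def phase_costs(n, k, tau, phi):
    """Leading-order round costs of the two phases and the balancing
    choice of f from the proof (polylog factors dropped)."""
    f = n ** (2 / 3) * k ** (1 / 3) / (tau * phi) ** (1 / 3)
    phase1 = (k * f * tau * phi) ** 0.5  # sampling via Many-Random-Walks
    phase2 = k * n / f                   # broadcast rounds
    return f, phase1, phase2
```

With this $f$, both phase costs equal $n^{1/3} k^{2/3} (\tau \Phi)^{1/3}$ up to floating-point error, matching the claimed bound.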

\begin{remark}
Our proposed algorithm has a better running time than the naive algorithm when $\tau < \tilde O(k \Phi^2/n)$ (where $\tau$ is the dynamic mixing time and $\Phi$ is the dynamic diameter of the network) and $k < nd$. We note that these conditions are quite restrictive and limit the applicability of our algorithm in many settings. Nevertheless, there do exist graphs where our algorithm achieves a better running time: for example, graphs with large expansion and large degree (regular expander graphs with slightly superpolylogarithmic degree) when the number of tokens $k$ is larger than $n$, say $k = n\polylog n$.  
We further note that if we were to use the trivial version of the {\sc Many-Random-Walks} algorithm to solve the $k$-gossip problem, then the naive algorithm (which simply broadcasts each of the $k$ tokens sequentially) would have a better running time.
\end{remark}
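A quick numeric illustration of the remark's conditions (our own sketch; it assumes $\tau = \Phi = \Theta(\log n)$ for expanders, consistent with the discussion above):

```python
import math

def remark_conditions(n, d, k, tau, phi):
    """Check the two conditions from the remark: tau < k * phi^2 / n
    (our algorithm beats the naive one) and k < n * d (Phase-1 walk
    capacity)."""
    return tau < k * phi ** 2 / n and k < n * d

# Illustrative regime from the remark: a regular expander with slightly
# superpolylogarithmic degree, tau = phi = Theta(log n), k = n polylog n.
```

In this regime both conditions hold comfortably, whereas for small $k$ and large $\tau$ the first condition fails and the naive broadcast wins.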


\iffalse
\paragraph{Note:}
1. Our algorithm has better running time than the trivial algorithm gives the condition, $\tau < k \Phi^2/n$. \\
2. Also the length of the short random walk, $\mu$ should be less than $\tau$ in {\sc Many-Random-Walks} algorithm. This gives, $\tau > k \Phi$ (since $\mu$ is proportional to $\sqrt{k\tau\Phi}$). \\
From the above two bound, we get $k\Phi < \tau < k\Phi^2/n$. This implies $\tau = k\Phi$ and $\Phi = n$. \\

Recall that every node (who have information to disseminate) sample $f$ random nodes through {\sc Many-Random-Walks}. $f = n^{2/3} k^{1/3}/(\tau \Phi)^{1/3}$. Putting the above value of $\tau$ and $\Phi$, we get $f$ is constant.
      

%Note that the mixing time $\tau$ of an expander dynamic graph i.e., a dynamic graph process consists of all expander graph, is at most $O(\log n)$. This is follows from the Theorem~\ref{thm:mixtime}, as the second largest eigenvalue $\lambda$ of an expander graph is constant. Putting this in Theorem \ref{thm:token-bound}, yields a better  bound for $k$-gossip problem in expander dynamic graph when $k=n$. This $k=n$ is the most interesting case in $k$-gossip problem where every node disseminate an information to all the nodes in the network.   


%\iffalse

\subsection{Decentralized Estimation of Mixing Time}
\label{sec:mixest}
We focus on estimating the {\em dynamic mixing time} $\tau$ of a $d$-regular connected non-bipartite dynamic graph $\mathcal{G}= G_1, G_2, \ldots$. We discussed in Section \ref{mixing_time} that $\tau$ is the maximum of the mixing times of the graphs in $\{G_t : t \geq 1 \}$. To make this appropriate for our algorithm, we will assume that all graphs $G_t$ in the graph process $\mathcal{G}$ have the same mixing time $\tau_{mix}$; therefore $\tau = \tau_{mix}$. While the definition of $\tau$ (cf. Definition \ref{def:mix-dynamic}) itself is consistent, estimating this value becomes significantly harder in the dynamic context. The intuitive approach of continuously estimating distributions and then adapting a distribution-closeness test works well for static graphs, but each of these steps becomes far more involved and expensive when the network itself changes and evolves continuously. Therefore we need careful analysis and new ideas to obtain the following results. We introduce the related notation and definitions in Section~\ref{mixing_time}. 

The goal is to estimate $\tau^x_{mix}$ (the mixing time for source $x$). Notice that the definitions of $\tau^x_{mix}$ and the dynamic mixing time $\tau$ (cf. Section \ref{mixing_time}) are consistent for a $d$-regular dynamic graph $\mathcal{G} = G_1,G_2, \dots$ due to the monotonicity property (cf. Lemma~\ref{lem:monotonicity}) of distributions.

We now present an algorithm to estimate $\tau$. The main idea is, given a source node, to run many random walks of some length $\ell$ using the approach described in Section \ref{sec:k-algo}, and to use these to estimate the distribution induced by the $\ell$-length random walk. We then compare the distribution at length $\ell$ with the stationary distribution to determine if they are close, and if not, double $\ell$ and retry.     

For the case of a static graph (with diameter $D$), Das Sarma et al.~\cite{DasSarmaNPT10} show that one can approximate the mixing time in $\tilde O(n^{1/4} \sqrt{D\tau^x(\epsilon)})$ rounds. We show here that this bound also holds for approximating the mixing time of $d$-regular dynamic graphs. We use the technique of Batu et al.~\cite{BFFKRW} to determine whether the distribution is $\epsilon$-near to the uniform distribution. Their result is restated in the following theorem. 

\begin{theorem}[\cite{BFFKRW}]\label{thm:batu}
For any $\epsilon$, given $\tilde{O}(n^{1/2}poly(\epsilon^{-1}))$ samples of a distribution $X$
over $[n]$, and a specified distribution $Y$, there is a test that outputs PASS with high probability if $|X-Y|_1\leq \frac{\epsilon^3}{4\sqrt{n}\log n}$, and outputs FAIL with high probability if $|X-Y|_1\geq 6\epsilon$.
\end{theorem}

The distribution $X$ in our context is some distribution on nodes and $Y$ is the stationary distribution, i.e., $Y(v) = 1/n$ (assuming $\vert V \vert = n$ in the network).
We now give a very brief description of the algorithm of Batu et al.~\cite{BFFKRW} to illustrate that it can in fact be simulated efficiently on the distributed network. The algorithm partitions the set of nodes into buckets based on the steady state probabilities. Each of the $\tilde{O}(n^{1/2}poly(\epsilon^{-1}))$ samples from $X$ now falls into one of these buckets. Further, the actual number of nodes in these buckets under distribution $Y$ is counted. The exact counts for $Y$ for at most $\tilde{O}(n^{1/2}poly(\epsilon^{-1}))$ buckets (corresponding to the samples) are compared with the number of samples from $X$ to determine whether $X$ and $Y$ are close. Note that the total number of nodes $n$ and $\epsilon$ can be broadcast to all nodes in $O(\Phi)$ rounds, and each node can determine which bucket it is in in $O(\Phi)$ rounds. We refer the reader to their paper~\cite{BFFKRW} for a precise description.


Our algorithm starts with $\ell=1$ and runs $K=\tilde{O}(\sqrt{n})$ walks of length $\ell$ from the specified source $x$. As long as the test of comparison with the steady state distribution outputs FAIL (for the choice $\epsilon=1/12e$), $\ell$ is doubled. This process is repeated to identify the largest $\ell$ such that the test outputs FAIL with high probability and the smallest $\ell$ such that the test outputs PASS with high probability. These give lower and upper bounds, respectively, on the required $\tau^x_{mix}$. Our resulting theorem is presented below. \\

\noindent \textbf{Proof of the Theorem \ref{thm:complexity_bound_mixing_time} (restated below)}
\begin{theorem}
Given connected $d$-regular dynamic graphs with dynamic diameter $\Phi$, a node $x$ can find, in $\tilde{O}(n^{1/4}\sqrt{\Phi \tau^x(\epsilon)})$ rounds, a time
$\tilde{\tau}^x_{mix}$ such that $\tau^x_{mix}\leq \tilde{\tau}^x_{mix} \leq \tau^x(\epsilon)$, where $\epsilon = \frac{1}{6912e\sqrt{n}\log n}$.
% where $T$ is the smallest time such that $r_x(T)\leq \frac{1}{6912e\sqrt{n}\log n}$.
%This can be done in $\tilde{O}(n^{1/2} + n^{1/4}\sqrt{Dt_{mix}})$ rounds.
%
%that is w.h.p. between the $6\epsilon$-near mixing time and $\frac{\epsilon^3}{4\sqrt{n}\log n}$-near mixing time in $\tilde{O}(n^{1/2}poly(\epsilon^{-1}) + n^{1/4}poly(\epsilon^{-1})\sqrt{Dt_{mix}})$ rounds.
%
%If the degree distribution is unknown to the nodes, a node can find an $\epsilon$-close mixing time in $\tilde{O}(n^{2/3}poly(\epsilon^{-1}) + n^{1/3}poly(\epsilon^{-1})\sqrt{Dt_{mix}})$ rounds.
\end{theorem}
\begin{proof}
Our goal is to detect when the probability distribution (on the vertex set $V$) of the random walk becomes the stationary distribution, which is uniform here. If a source node knows the total number of nodes in the network (which it can learn by flooding in $O(\Phi)$ rounds), we only need
$\tilde{O}(n^{1/2}poly(\epsilon^{-1}))$ samples from a distribution to
compare it to the stationary distribution. This can be achieved by
running {\sc Many-Random-Walks} to obtain $K = \tilde{O}(n^{1/2}poly(\epsilon^{-1}))$ random walks. We choose $\epsilon = 1/12e$.
To find the approximate mixing time, we try out
increasing values of $\ell$ that are powers of $2$.  Once we find the
right consecutive powers of $2$, the monotonicity property admits a
binary search to determine the exact value for the specified $\epsilon$.
%of $\epsilon$-near mixing
%time. Note that we can apply binary search as $\epsilon$-near mixing
%time is a monotonic property.


The result in~\cite{BFFKRW} can also be adapted to compare with the steady state distribution even if the source does not know the entire distribution. As described previously, the source only needs to know the {\em count} of nodes with steady state probability in given buckets. Specifically, at most $\tilde{O}(n^{1/2}poly(\epsilon^{-1}))$ buckets are of interest, as the count is required only for buckets from which a sample is drawn. Since each node knows its own steady state probability (determined just by its degree), the source can broadcast a specific bucket's information and recover, in $O(\Phi)$ rounds, the count of nodes that fall into this bucket. Using the standard upcast technique described previously, the source can obtain the bucket count for each of these at most $\tilde{O}(n^{1/2}poly(\epsilon^{-1}))$ buckets in $\tilde{O}(n^{1/2}poly(\epsilon^{-1}) + \Phi)$ rounds.


We have shown previously that a source node can obtain $K$ samples from $K$ independent random walks of length $\ell$ in $\tilde{O}(\sqrt{K\ell \Phi})$ rounds. Setting $K=\tilde{O}(n^{1/2}poly(\epsilon^{-1}))$ completes the proof.
\end{proof}


If our estimate of $\tau^x_{mix}$ is close to the dynamic mixing time of the network, defined as $\tau = \max_{x}{\tau^x_{mix}}$, then it allows us to estimate several related quantities. Given the dynamic mixing time $\tau$, we can approximate the spectral gap ($1-\lambda$) and the conductance ($\Psi$) via the known relations $\frac{1}{1-\lambda}\leq \tau \leq \frac{\log n}{1-\lambda}$ and $\Theta(1-\lambda)\leq \Psi \leq \Theta(\sqrt{1-\lambda})$, as shown in~\cite{JS89}. %Note that the spectral gap $(1 - \lambda)$ is the smallest spectral gap of all the graphs in $\{ G_t : t \geq 1 \}$ as our second largest eigenvalue $\lambda$ is the maximum of all second largest eigenvalues of those graphs.  

\fi

