\section{Cover time analysis} 
\vspace{-0.15in}
\label{sec:cover}

In this section, we show that the cover time of a $k$-branching random
walk on constant-degree regular expanders is $O(\log^2 n)$, for any $k
\ge 2$.  In particular, we show that once the number of active nodes
reaches $\Omega(n)$ (as established in Section~\ref{sec:fractional}),
the number of subsequent steps needed to cover every node of a
bounded-degree regular expander is $O(\log^2 n)$.  Our analysis
proceeds by considering an alternative process, which we call
$R_{alt}$, whose cover time stochastically dominates that of the
$k$-branching walk starting from a state of $\delta$-coverage.

In $R_{alt}$, we consider each active vertex as possessing a
pebble. The pebbles continue their walks, but no branching occurs.
The walks of the pebbles, though random, are not independent of one
another: the transition probabilities of pebbles are modified whenever
more than one pebble occupies the same vertex at the same time, so as
to mimic collisions of pebbles in the original process. The walk of
one pebble, conditioned on the walk of another, can be viewed as a
time-inhomogeneous Markov process that is only a minor perturbation of
the independent walk of a lone pebble on $G$. Using a well-known
result of Mihail~\cite{MIH} that exploits an expansion-like property
of the chain, called the merging conductance, and that applies to
arbitrary irreducible Markov chains, we show that any given node is
visited by a pebble within $\Theta(\log n)$ steps with constant
probability, which yields full coverage of $G$ in $O(\log^2 n)$ time.

In Section~\ref{sec:cover.alt}, we define $R_{alt}$.  In
Section~\ref{sec:cover.analysis}, we show that $R_{alt}$ has a cover
time of $O(\log^2 n)$ with high probability, implying an $O(\log^2 n)$
bound on the cover time of a $k$-branching random walk for any $k \ge
2$.

\vspace{-0.15in}
\subsection{Description of $R_{alt}$}
\vspace{-0.1in}
\label{sec:cover.alt}
For this process, the underlying graph $G$ is the same as in our
original process. Each of the $\delta n$ marked nodes initially holds
its own pebble. Arbitrarily index and order the pebbles; this ordering
assigns priority when determining the transition probability of a
pebble. At each time step, every pebble chooses a neighbor to move to
according to the following rules: (a) if there is one pebble at a
node, the pebble moves to each neighbor with probability
$\frac{1}{\Delta}$; (b) if there are at least two pebbles at a node,
then the two highest-priority pebbles each choose a neighbor
independently and uniformly at random, i.e., each neighbor with
probability $\frac{1}{\Delta}$, and each remaining pebble
independently follows one of the two already-selected neighbors with
probability $\frac{1}{2}$ each.
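The transition rules above can be sketched in code. The following is a minimal, illustrative implementation of a single synchronous step of $R_{alt}$; the adjacency-list representation, the priority-by-index convention, and the helper name are our own assumptions, not part of the formal process description:

```python
import random

def r_alt_step(positions, adj, rng=random):
    """One synchronous step of R_alt.

    positions[p] is the current vertex of pebble p; a lower pebble
    index means higher priority.  adj is the adjacency list of a
    Delta-regular graph.  Returns the new position list.
    """
    # Group pebbles by vertex; list order preserves priority order.
    by_vertex = {}
    for pid, v in enumerate(positions):
        by_vertex.setdefault(v, []).append(pid)
    new_positions = list(positions)
    for v, pebbles in by_vertex.items():
        leaders = []
        for rank, pid in enumerate(pebbles):
            if rank < 2:
                # Rules (a)/(b): the (up to two) highest-priority
                # pebbles each pick a uniformly random neighbor.
                choice = rng.choice(adj[v])
                leaders.append(choice)
            else:
                # Remaining pebbles copy one leader's edge with
                # probability 1/2 each.
                choice = rng.choice(leaders)
            new_positions[pid] = choice
    return new_positions
```

With a single pebble at a vertex, the step reduces to an ordinary uniform random-walk move, matching rule (a).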

Note that if a node holds a single pebble, the behavior of $R_{alt}$
is locally equivalent to a random walk on $G$. We can couple the
original process with $R_{alt}$ as follows. If a node in $R_{alt}$
holds two or more pebbles, then in the next time step it behaves
exactly like an active node in the original process (i.e., it chooses
two neighbors uniformly and independently at random). If a node in
$R_{alt}$ holds only one pebble, then it does not behave like an
active node in the original process, since it picks only one neighbor
to transmit its pebble to. Under this coupling, the set of active
nodes at the next time step in the original process is a superset of
the set of nodes holding pebbles in $R_{alt}$ at the next time
step. In this sense, the original process stochastically dominates
$R_{alt}$, so any upper bound on the cover time of $R_{alt}$ also
bounds the cover time of the original process.

\vspace{-0.15in}
\subsection{Analysis of $R_{alt}$}
\vspace{-0.1in}
\label{sec:cover.analysis}
\begin{theorem}
Let $G$ be a $\Delta$-regular expander graph with $\Delta$ a
constant. Place $\delta n$ pebbles on $\delta n$ distinct vertices of
$G$. Then the cover time of $R_{alt}$ is $O(\log^2 n)$ with high
probability.
\end{theorem}
\begin{proof} 
If we can show that each vertex of $G$ is covered with constant
probability within $s = \Theta(\log n)$ steps, then the theorem
follows by carrying out $O(\log n)$ phases of $\Theta(\log n)$ steps
each: the probability that a fixed vertex remains uncovered after all
phases is polynomially small, and a union bound over the $n$ vertices
completes the argument. Let $E_i$ be the event that pebble $i$ covers
an arbitrary fixed vertex $v$ within $s$ steps. We are interested in
$\Pr[ \bigcup_i E_i]$, the probability that $v$ is covered by at least
one pebble, because we want to show that it is larger than some
constant. To prove this, we use a truncated inclusion-exclusion
bound:
\begin{equation}
\Pr [\bigcup_i E_i] \ge \sum_i \Pr [E_i] - \sum_{i \neq j} \Pr [E_i \cap E_j] = \sum_i \Pr [E_i] - \sum_{i \ne j} \Pr[E_i]\Pr[E_j|E_i]
\end{equation}
Each term $\Pr[E_i]$ is $\Theta(\frac{1}{n})$: after an initial mixing
phase of $\Theta(\log n)$ steps, the probability that a given pebble
occupies any particular node is essentially the corresponding
component of the stationary distribution, namely
$\frac{1}{n}$. Indeed, from~\cite{AAKKLT}, the probability that an
independent walk of length $O(\log n)$ ends at any particular node
deviates from $\frac{1}{n}$ only by lower-order terms, and in
particular is at most $\frac{2}{n}$. To bound the second-order term,
we need to analyze the conditional walk of pebble $j$ given the walk
of pebble $i$, which we regard as fixed.

To bound $\Pr[E_j|E_i]$, fix the walk of pebble $i$ and consider the
walk of $j$. If the walk of $j$ never intersects the walk of $i$ in
such a way that both pebbles occupy the same node at the same time,
then $j$'s walk is just another independent random walk: since we are
not conditioning on the walks of any other pebbles, we can view the
edge-selection probability of $\frac{1}{\Delta}$ at each step of $j$'s
walk as a marginal probability over all other pebbles' paths, except
for $i$'s. On the other hand, if $i$ and $j$ do intersect in both
space and time, we must consider the priorities of the two pebbles. In
the worst case, $i$ is ranked high and $j$ is ranked so low that $j$
is at best the third pebble at that node. In that case, $j$ picks
$i$'s edge out of the intersection node with probability
$\frac{1}{2}$, and with probability $\frac{1}{2}$ it picks the other
leader's edge. Since we are calculating without knowledge of the path
of any other pebble, we again use the marginal edge-selection
probability of $\frac{1}{\Delta}$ for that other edge. This gives $j$
a probability of $\frac{1}{2} + \frac{1}{2 \Delta}$ of selecting the
same edge as $i$, and $\frac{1}{2 \Delta}$ for every other edge.
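As a sanity check on these marginal probabilities, the following hypothetical helper (our own illustration, not from the analysis) builds pebble $j$'s worst-case transition row at an intersection vertex and verifies, in exact arithmetic, that it is a probability distribution:

```python
from fractions import Fraction

def conditional_row(delta, i_edge):
    """Marginal transition probabilities for pebble j at a vertex it
    shares with pebble i, in the worst case where j is at best the
    third pebble there.

    With probability 1/2, j copies i's edge; with probability 1/2 it
    copies the other leader's edge, which marginally is uniform over
    the delta edges.  So i's edge gets 1/2 + 1/(2*delta) and every
    other edge gets 1/(2*delta).
    """
    row = [Fraction(1, 2 * delta)] * delta
    row[i_edge] += Fraction(1, 2)
    return row
```

For example, with $\Delta = 4$ the row assigns $\frac{5}{8}$ to $i$'s edge and $\frac{1}{8}$ to each of the other three edges, summing to $1$.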

Based on the above discussion, pebble $j$ follows the modified
transition matrix $(M + T_{l(i),t})$, where $M$ is the standard
transition matrix of $G$, and $T_{l(i),t}$ is a perturbation matrix
determined by $l(i)$, the vertex occupied by pebble $i$ at time
$t$. Each row of $(M + T_{l(i),t})$ with index $k \ne l(i)$ is
identical to the corresponding row of $M$. In row $l(i)$, the entry
corresponding to the edge taken by $i$, which equals
$\frac{1}{\Delta}$ in $M$, becomes $\frac{1}{2} + \frac{1}{2\Delta}$,
and the remaining nonzero entries become $\frac{1}{2
\Delta}$. Starting from any probability distribution $z$ over $V(G)$,
after $s = O(\log n)$ steps the probability that pebble $j$ is at
vertex $v$ is the corresponding component of the vector $z
\prod^s_{t=1}(M + T_{l(i),t})$. By Lemma~\ref{lem:componentBound}
below, each component of $z \prod^s_{t=1}(M + T_{l(i),t})$ is less
than $\frac{2}{n}$. Thus,
\begin{eqnarray*}
\Pr\Big[\bigcup_i E_i\Big] &\ge& \sum_i \Pr[E_i] - \sum_{i \ne j} \Pr[E_i]\Pr[E_j \mid E_i] \\
&\ge& \delta n \cdot \frac{1}{n} - \binom{\delta n}{2} \cdot \frac{2}{n^2} \\
&\ge& \delta - \delta^2,
\end{eqnarray*}
which is bounded away from zero for any constant $\delta \in (0,1)$,
and the theorem follows.
\end{proof}
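To complement the proof, the following is a minimal simulation sketch of $R_{alt}$ (our own illustration, not part of the analysis): it runs the pebble process on a small $4$-regular circulant graph, used here only as a convenient stand-in for a bounded-degree expander, and reports the number of steps until every vertex has been visited. The graph, parameter values, and seed are all assumptions chosen for illustration.

```python
import random

def cover_time_r_alt(n=64, num_pebbles=16, seed=1):
    """Steps until R_alt covers a 4-regular circulant graph
    (vertex v adjacent to v +/- 1 and v +/- 2 mod n)."""
    adj = [[(v + d) % n for d in (-2, -1, 1, 2)] for v in range(n)]
    rng = random.Random(seed)
    # One pebble on each of num_pebbles distinct marked vertices.
    positions = rng.sample(range(n), num_pebbles)
    covered = set(positions)
    steps = 0
    while len(covered) < n and steps < 200000:
        # Group pebbles by vertex; pebble index encodes priority.
        at = {}
        for pid, v in enumerate(positions):
            at.setdefault(v, []).append(pid)
        for v, pebbles in at.items():
            leaders = []
            for rank, pid in enumerate(pebbles):
                if rank < 2:
                    choice = rng.choice(adj[v])   # leaders pick uniform edges
                    leaders.append(choice)
                else:
                    choice = rng.choice(leaders)  # others copy a leader w.p. 1/2
                positions[pid] = choice
        covered.update(positions)
        steps += 1
    return steps
```

The circulant graph is not a true expander family, so this only illustrates the dynamics; the measured cover time should nonetheless be modest compared to the single-walk cover time of the graph.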


\begin{lemma} 
\label{lem:componentBound}
Let $z$ be a probability distribution over $V(G)$, and let
$(M+T_{l(i),t})$, $t \in \{1, \dots, s\}$, be the time-inhomogeneous
Markov chain on $V(G)$ determined by a fixed vertex sequence $l(i)$
at times $1, \dots, s$. Then for $s = \Theta(\log n)$, each component
of $z \cdot \prod_{t = 1}^{s} (M+T_{l(i),t})$ is less than
$\frac{2}{n}$.
\end{lemma}
\begin{proof}
Let $z = u + x$, where $u$ is the uniform distribution over $V(G)$ and $x$ is a vector such that $\sum x_i = 0$. Then $z \cdot \prod_{t = 1}^{s} (M+T_{l(i),t}) = x \cdot \prod_{t=1}^s (M+T_{l(i),t}) + u \cdot \prod^s_{t=1} (M+T_{l(i),t})$. 

Lemma~\ref{lem:mihailConductance} shows that the merging conductance of any $M+T_{l(i),t}$ is bounded away from zero by a constant, and that therefore, by Mihail's theorem, each component of $x \cdot \prod_{t=1}^s (M+T_{l(i),t})$ is $o(\frac{1}{n^2})$ for $s = \Theta(\log n)$.

Finally, we show that each component of $u \cdot \prod_{t=1}^s (M+T_{l(i),t})$ is less than $\frac{2}{n}$. Consider the fixed path $v_0 \rightarrow v_1 \rightarrow \ldots \rightarrow v_s$ traversed by pebble $i$, so that at step $t$ the perturbed vertex is $v_{t-1}$ and the favored edge leads to $v_t$. Start a pebble at a vertex drawn from the uniform distribution $u$, and let $p_j(v)$ denote the probability that this pebble is at vertex $v$ at time step $j$. We want to show that the probability of the pebble being at any given vertex of $G$ after $s$ time steps is less than $\frac{2}{n}$. Initially, $p_0(v) = \frac{1}{n}$ for all $v$. We claim that for the vertices along the fixed path, $p_k(v_k) \le \frac{2}{n}$, and that $p_k(v) \le \frac{1}{n}$ for $v \notin \{v_0,\ldots, v_s\}$.

The first claim, $p_k(v_k) \le \frac{2}{n}$, is proved by induction; we show the stronger bound $p_j(v_j) \le \frac{1}{n}(1 + \frac{1}{2} + \frac{1}{2^2} + \dots + \frac{1}{2^j})$. For the base case, $p_1(v_1) = \frac{1}{n}(\frac{1}{2} + \frac{1}{2\Delta}) + \frac{1}{n} \cdot \frac{\Delta-1}{\Delta} = \frac{1}{n}(\frac{3}{2} - \frac{1}{2 \Delta}) \le \frac{1}{n}(1 + \frac{1}{2})$. For the inductive case, we have 
\begin{eqnarray*}
p_{j+1}(v_{j+1}) &\le& p_j(v_j) \left(\frac{1}{2} + \frac{1}{2\Delta}\right) + \frac{1}{n} \cdot \frac{\Delta-1}{\Delta} \\
&\le& \frac{1}{n} \left(1 + \frac{1}{2} + \dots + \frac{1}{2^j}\right)\left(\frac{1}{2} + \frac{1}{2\Delta}\right) + \frac{1}{n} \cdot \frac{\Delta-1}{\Delta} \\
&=& \frac{1}{n} \left(1 + \frac{1}{2} + \dots + \frac{1}{2^{j+1}}\right) - \frac{1}{2^{j+1} \Delta n} \\
&\le& \frac{1}{n} \left(1 + \frac{1}{2} + \dots + \frac{1}{2^{j+1}}\right).
\end{eqnarray*}

The second claim has two cases. First, suppose $v$ is a neighbor of the current path vertex $v_j$. Then $p_{j+1}(v) \le \frac{1}{2\Delta} p_j(v_j) + \frac{\Delta-1}{\Delta} \cdot \frac{1}{n} \le \frac{1}{2\Delta} \cdot \frac{2}{n} + \frac{\Delta-1}{\Delta n} = \frac{1}{n}.$

Finally, if $v$ is not a neighbor of $v_j$, then all of $v$'s neighbors transmit to $v$ with the unperturbed probability $\frac{1}{\Delta}$, so $p_{j+1}(v) \le \Delta \cdot \frac{1}{\Delta} \cdot \frac{1}{n} = \frac{1}{n}$.
\end{proof}
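The inductive bound of this lemma can be checked numerically. The sketch below is our own illustration, with a $4$-regular circulant graph standing in for an expander and the perturbed vertex following its own favored edge at every step; it propagates the uniform distribution through the time-inhomogeneous chain and confirms that no component exceeds $\frac{2}{n}$:

```python
def perturbed_step(p, adj, path_vertex):
    """One step of the chain M + T_{l(i),t}: every vertex spreads mass
    1/Delta to each neighbor, except path_vertex, which sends
    1/2 + 1/(2*Delta) along its first edge and 1/(2*Delta) along the
    rest.  p is the current distribution; returns the next one."""
    n, delta = len(adj), len(adj[0])
    q = [0.0] * n
    for v in range(n):
        if v == path_vertex:
            probs = [1.0 / (2 * delta)] * delta
            probs[0] += 0.5                 # favored edge of pebble i
        else:
            probs = [1.0 / delta] * delta
        for u, pr in zip(adj[v], probs):
            q[u] += p[v] * pr
    return q

n = 32
adj = [[(v + d) % n for d in (-2, -1, 1, 2)] for v in range(n)]
p = [1.0 / n] * n                           # uniform start, as in the lemma
v = 0
for _ in range(20):
    p = perturbed_step(p, adj, v)
    v = adj[v][0]                           # pebble i moves along its favored edge
```

In this run the maximum component climbs toward, but stays below, the $\frac{2}{n}$ ceiling predicted by the induction.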

\begin{lemma}
\label{lem:mihailConductance}
Let $M+T_{l(i),t}$ be the perturbed random walk on the expander $G$ described above, and let $x$ be a vector over $V(G)$ such that $\sum_i x_i = 0$.  Then for $s = \Theta(\log n)$, each component of $x \cdot \prod_{t=1}^s (M+T_{l(i),t})$ is $o(\frac{1}{n^2})$.

\begin{proof}
The proof of this lemma relies heavily on Theorem 3.2 in~\cite{MIH}, which we review and state here. Let $P$ be an irreducible, ergodic Markov process; note that neither time-reversibility nor strong aperiodicity is required. Rather than using just the transition probabilities $p_{ij}$ between vertices $i$ and $j$, consider the weighted transitions $w_{ij} = \pi_i p_{ij}$, where $\pi_i$ is the $i^{th}$ component of the stationary distribution of $P$. For a subset $A \subset V$, define the merging conductance of $A$ as:
\begin{equation}
\Phi^*_P(A) = \frac{  \sum_{j_1 \in A} \sum_{j_2 \in V - A} \sum_i \frac{ w_{j_1 i} w_{j_2 i}}{\pi_i}}{\sum_{i \in A} \pi_i} 
\end{equation}
and define the merging conductance of $P$ to be 
\begin{equation}
\Phi^*_P =  \min_{A \subset V : \sum_{i \in A} \pi_i \leq \frac{1}{2}} \Phi^*_P (A)
\end{equation}
Intuitively, the merging conductance measures the flow coming into
common vertices from both $A$ and $V-A$: the higher the merging
conductance of a chain, the better connected it is and the more
evenly its flow is distributed. If we define $\lVert \vec{x}(t)
\rVert = \sum_i \frac{(p_i(t) - \pi_i)^2}{\pi_i}$ to be a measure of
the distance of a distribution $\vec{p}(t)$ over $V$ from the
stationary distribution, then~\cite{MIH} gives us the following
theorem, which shows that for a chain with merging conductance
bounded away from zero, convergence to the stationary distribution
occurs in logarithmic time.
\begin{theorem}{\cite[Theorem~3.2]{MIH}}
\label{th:Mihail}
\begin{equation}
\lVert \vec{x}(t) \rVert \leq (1 - \frac{1}{2} (\Phi^*_P)^2)^t \lVert \vec{x}(0) \rVert
\end{equation}
\end{theorem}
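For intuition, the merging conductance can be computed by brute force on a tiny chain. The following sketch (our own illustration, not part of~\cite{MIH}) evaluates the definition above for the simple random walk on the complete graph $K_5$ and checks that the resulting contraction factor $1 - \frac{1}{2}(\Phi^*_P)^2$ of Theorem~\ref{th:Mihail} lies in $(0,1)$:

```python
import itertools

def merging_conductance(P, pi):
    """Brute-force merging conductance of a small chain:
    Phi*_P = min over A with pi(A) <= 1/2 of
      (sum_{j1 in A} sum_{j2 not in A} sum_i w[j1][i]*w[j2][i]/pi[i]) / pi(A),
    where w[j][i] = pi[j] * P[j][i]."""
    n = len(pi)
    w = [[pi[j] * P[j][i] for i in range(n)] for j in range(n)]
    best = float("inf")
    for r in range(1, n):
        for A in itertools.combinations(range(n), r):
            pi_A = sum(pi[j] for j in A)
            if pi_A > 0.5:
                continue
            comp = [j for j in range(n) if j not in A]
            flow = sum(w[j1][i] * w[j2][i] / pi[i]
                       for j1 in A for j2 in comp for i in range(n))
            best = min(best, flow / pi_A)
    return best

n = 5
# Simple random walk on K_5: uniform stationary distribution.
P = [[0.0 if i == j else 0.25 for j in range(n)] for i in range(n)]
pi = [0.2] * n
phi = merging_conductance(P, pi)
```

The exhaustive minimization is exponential in $n$, so this is practical only for toy chains; for $K_5$ the minimizing sets are the two-vertex subsets.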

Any bound on the merging conductance of $M+T_{l(i),t}$ requires an understanding of how much the stationary distribution of the perturbed walk differs from that of the original walk, for which $\pi_i = \frac{1}{n}$ for all $i \in V$. Let $u$ be the one vertex of $M+T_{l(i),t}$ whose transition probabilities are perturbed, let $v$ be the neighbor that receives the pebble with probability $\frac{1}{2} + \frac{1}{2\Delta}$, and let $u_1,\ldots,u_{\Delta-1}$ be the other neighbors of $u$, each of which receives the pebble with probability $\frac{1}{2\Delta}$. We claim that the maximum stationary probability $\pi_{max}$ is attained at $u$ or $v$. Suppose not; then some vertex $j \notin \{u,v,u_1,\ldots u_{\Delta-1}\}$ has $\pi_j = \pi_{max}$. Since $\pi_j = \sum_{k \in N^{-1}(j)} \pi_k p_{kj}$ with $p_{kj} = \frac{1}{\Delta}$, the value $\pi_{max}$ equals the mean of the neighboring $\pi_k$'s, which forces $\pi_k = \pi_{max}$ for all $k \in N^{-1}(j)$. Propagating this argument through the graph, we eventually reach one of the $u_i$'s, implying $\pi_{u_i} = \pi_{max}$. This is a contradiction, since $\pi_{u_i} \le \frac{1}{2\Delta} \pi_u + \frac{\Delta-1}{\Delta} \pi_{max} < \pi_{max}$. Thus $\pi_{max}$ must be either $\pi_u$ or $\pi_v$, and w.l.o.g. we can assume that it is $\pi_v$. A similar argument shows that the minimum value $\pi_{min}$ is attained among $u_1,\ldots,u_{\Delta -1}$. 

Now we bound the spread between $\pi_{min}$ and $\pi_{max}$. Note that
$\pi_{u} = \frac{\Delta-1}{\Delta} \pi_{min} + \frac{1}{\Delta}
\pi_{max}$.  Then, by elementary algebra, we obtain $\pi_{max} \leq
(\Delta + 1) \pi_{min}$.
\junk{
\begin{eqnarray*}
\pi_{max} \leq \frac{\Delta+1}{2\Delta} \pi_u + \frac{\Delta -1}{\Delta} \pi_{max} 
 \leq \frac{\Delta+1}{2} \pi_u 
\pi_{max} \leq \frac{\Delta + 1}{2} (\frac{\Delta -1}{\Delta} \pi_{min} + \frac{1}{\Delta} \pi_{max}) \\
\frac{\Delta-1}{2\Delta} \pi_{max} &\leq& \frac{(\Delta + 1)(\Delta -1)}{2\Delta} \pi_{min} \\
\pi_{max} &\leq& (\Delta + 1) \pi_{min}
\end{eqnarray*}}
Note that the mean value of $\pi$ over $V$ is $\frac{1}{n}$, so
$\pi_{min} \leq \frac{1}{n} \leq \pi_{max}$. Combined with the spread
bound $\pi_{max} \leq (\Delta+1)\pi_{min}$, this yields $\pi_{min}
\geq \frac{1}{(\Delta+1)n}$ and $\pi_{max} \leq \frac{\Delta+1}{n}$.
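The spread bound $\pi_{max} \leq (\Delta+1)\pi_{min}$ can also be observed numerically. The sketch below (our own illustration; the circulant graph and the choice of perturbed vertex are assumptions) computes the stationary distribution of a single perturbed matrix $M+T$ by power iteration and checks the spread bound together with the estimates $\pi_{min} \geq \frac{1}{(\Delta+1)n}$ and $\pi_{max} \leq \frac{\Delta+1}{n}$:

```python
def perturbed_walk_matrix(n=16):
    """Simple random walk on a 4-regular circulant (v ~ v +/- 1,
    v +/- 2 mod n), with vertex 0 perturbed as in M + T_{l(i),t}:
    it sends 1/2 + 1/(2*Delta) to neighbor 1 and 1/(2*Delta) to each
    of its other neighbors."""
    delta = 4
    offs = (-2, -1, 1, 2)
    P = [[0.0] * n for _ in range(n)]
    for v in range(n):
        for d in offs:
            P[v][(v + d) % n] = 1.0 / delta
    for d in offs:                      # overwrite the perturbed row
        P[0][d % n] = 1.0 / (2 * delta)
    P[0][1] += 0.5                      # favored edge 0 -> 1
    return P

def stationary(P, iters=3000):
    """Power-iterate a distribution to the stationary distribution of
    an ergodic chain P (rows sum to 1)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[v] * P[v][u] for v in range(n)) for u in range(n)]
    return pi
```

In practice the observed spread is far smaller than $\Delta+1$, consistent with the remark below that these bounds are not optimized.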

We next lower-bound the number of nonzero $\frac{w_{j_1 i} w_{j_2 i}}{\pi_i}$ terms in the calculation of the merging conductance. Let $A$ be the set that produces the minimum merging conductance. Essentially, we need a lower bound on the number of vertices that have one neighbor in $A$ and another neighbor in $V-A$. Using the vertex expansion $\alpha$ of $G$, the neighborhood $N(A)$, consisting of the neighbors $j \in V-A$ of vertices $i \in A$, has size $|N(A)| \geq \alpha |A|$. We do not know how many of the vertices in $N(A)$ also have a neighbor in $V-A$, but we can consider the vertex expansion of the set $N(A) \cup A$: clearly $|N(N(A) \cup A)| \geq \alpha |N(A) \cup A| \geq \alpha^2 |A|$. Since all the neighbors of $A$ in $V-A$ were already captured in $N(A)$, the set $N(N(A) \cup A)$ consists only of vertices in $V-A$, and each of its vertices has an edge to some vertex of $N(A)$, which in turn has an edge back to $A$. Each such vertex of $N(A)$ therefore has a neighbor $j_1 \in A$ and a neighbor $j_2 \in V-A$, so there are at least $\alpha^2 |A|$ nonzero terms in the numerator of $\Phi^*_P$. Plugging in the worst-case values for the $\pi$'s and the transition probabilities, and using $\sum_{i \in A} \pi_i \leq |A|\, \pi_{max}$, we have: 
\begin{eqnarray*}
\Phi^*_P \geq \frac{\alpha^2 |A| \cdot \frac{\pi_{min}^2\, p_{min}^2}{\pi_{max}}}{|A|\, \pi_{max}} = \alpha^2 \left(\frac{\pi_{min}}{\pi_{max}}\right)^2 p_{min}^2 \geq \alpha^2 \left(\frac{1}{\Delta+1}\right)^2 \left(\frac{1}{2 \Delta}\right)^2 = \Omega\left(\frac{1}{\Delta^4}\right),
\end{eqnarray*}
where $p_{min} = \frac{1}{2\Delta}$ is the smallest nonzero transition probability.
As long as $\Delta$ is a constant, the merging conductance $\Phi^*_P$
is safely bounded away from zero. Note that we make no attempt to
optimize this bound; clearly much tighter bounds are achievable. We
can therefore treat $\gamma = 1 - \frac{1}{2} (\Phi^*_P)^2$ as a
constant in $(0,1)$. The largest value of $\lVert \vec{x}(0) \rVert$
occurs when the walk starts at a single vertex. Thus:
\begin{eqnarray*}
\lVert  \vec{x}(0) \rVert &\leq& \frac{(1 - \pi_{min})^2}{\pi_{min}} + (n-1)\, \frac{\pi_{max}^2}{\pi_{min}} \leq (\Delta+1) n + (\Delta+1)^3 = O(n).
\end{eqnarray*}
Since $\lVert \vec{x}(s) \rVert$ is a sum of terms of the form
$\frac{(x \cdot \prod_{t=1}^s (M+T_{l(i),t}))_i^2}{\pi_i}$, each
component of $x \cdot \prod_{t=1}^s (M+T_{l(i),t})$ is at most
$\sqrt{\pi_{max} \lVert \vec{x}(s) \rVert}$; with $\pi_{max} =
O(\frac{1}{n})$, it therefore suffices to have $\lVert \vec{x}(s)
\rVert = o(\frac{1}{n^3})$. Since $\lVert \vec{x}(s) \rVert \leq
\gamma^s \lVert \vec{x}(0) \rVert = \gamma^s \cdot O(n)$ by
Theorem~\ref{th:Mihail}, it suffices to pick $s$ so that $\gamma^s =
o(\frac{1}{n^4})$:
\begin{eqnarray*}
\gamma^s = \frac{c}{n^4} \text{   for some constant } c 
&\Rightarrow& s \log \frac{1}{\gamma} = 4 \log n - \log c \\
&\Rightarrow& s = \frac{4 \log n - \log c}{\log \frac{1}{\gamma}} = \Theta(\log n).
\end{eqnarray*}
Thus for $s = \Theta(\log n)$ we are guaranteed that $\lVert
\vec{x}(s) \rVert = o(\frac{1}{n^3})$, and therefore each component of
$x \cdot \prod_{t=1}^s (M+T_{l(i),t})$ is $o(\frac{1}{n^2})$, as
claimed.

\end{proof}
\end{lemma}
