\section{Partial Cover Time}
\label{sec:fractional}
\vspace{-0.15in}
In this section, we prove the following theorem that bounds the
partial cover time for a branching random walk on graphs with
sufficient expansion.

\begin{theorem}
\label{thm:fractional_cover}
Let $\delta > 0$, $\alpha > 0$, and $\Delta > 2$ be constants. If, for
a constant $k \geq 2$, the inequality
\[ \alpha > \frac{\frac{\Delta^2}{e^k} + (k-1)\Delta - \frac{k^2}{2}}{\frac{\Delta}{e^k} + (k-1)\Delta - \frac{k^2}{2}} \]
holds, then a $k$-branching random walk on any $\Delta$-regular
$(\alpha,\delta)$-expander has a $\delta$-cover time of $O(\log n)$
with high probability.
\end{theorem} 

\paragraph{\bf Remark.} For $2 \leq k \leq \Delta$, the condition of the theorem is met if
\[ \alpha > 1 + \frac{\Delta}{e^k} \left(\frac{\Delta - 1}{\Delta - 2}\right). \]
Since we must assume $\Delta \geq 3$, as one cannot prove fast cover
time for $\Delta = 2$ (the case of a cycle), and since $\frac{\Delta
  - 1}{\Delta - 2} \leq 2$ for $\Delta \geq 3$, the condition of the
theorem will be satisfied if
\[ \alpha > 1 + \frac{2\Delta}{e^k}. \]
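As a quick numerical sanity check (not part of the proof), the
following Python snippet, with illustrative parameter choices,
evaluates the exact threshold of Theorem~\ref{thm:fractional_cover}
alongside the simplified sufficient bound $1 + \frac{2\Delta}{e^k}$;
as expected, the simplified bound dominates the exact threshold for
$2 \leq k \leq \Delta$.
\begin{verbatim}
import math

def alpha_threshold(D, k):
    """Exact expansion threshold from the theorem above."""
    a = (k - 1) * D - k * k / 2.0        # (k-1)*Delta - k^2/2
    return (D * D / math.exp(k) + a) / (D / math.exp(k) + a)

def alpha_sufficient(D, k):
    """Simplified sufficient bound 1 + 2*Delta/e^k from the remark."""
    return 1.0 + 2.0 * D / math.exp(k)

for D in (3, 4, 8):
    for k in sorted({2, 3, D}):
        print("Delta=%d k=%d exact>%.4f sufficient>%.4f"
              % (D, k, alpha_threshold(D, k), alpha_sufficient(D, k)))
\end{verbatim}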

The remark above gives us the following corollaries.

\begin{corollary}
If $\Delta \geq 3$ and $\alpha > 1 + \frac{2\Delta}{e^\Delta}$, then a
$k$-branching random walk on a $\Delta$-regular
$(\alpha,\delta)$-expander has a $\delta$-cover time of $O(\log n)$
with high probability if
\[ k > \ln\left( \frac{2\Delta}{\alpha - 1} \right). \]
\end{corollary}

\begin{corollary}
A $2$-branching random walk on a $\Delta$-regular random graph, where
$\Delta \geq 3$, covers $\Omega(n)$ nodes in time $O(\log n)$ with
high probability.
\end{corollary}

Our proof of Theorem~\ref{thm:fractional_cover} shows the stronger
result that the number of active nodes itself grows to $\delta n$ in
time $O(\log n)$ with high probability. The proof is divided into
three parts: (a) we first show that if the number of active nodes is
less than $\delta n$ at time $t$, then the expected number of active
nodes at time $t+1$ is larger by a factor of at least $(1 + \nu)$ for
some constant $\nu > 0$ (Lemma~\ref{lem:expected_growth}); (b) using
this expectation bound and a martingale argument, we then bound the
probability that the number of active nodes does not grow by a
constant factor greater than $1$ from time $t$ to $t+1$
(Lemma~\ref{lem:prob_growth}); (c) finally, using this bound on the
probability of not growing the number of active nodes by a constant
factor, we show that the number of active nodes grows to $\delta n$
in time $O(\log n)$ with high probability
(Lemma~\ref{lem:fractional_cover}).

We let $S_t$ denote the set of active nodes at time $t$. Also, for any
set $S$ of nodes, we let $N(S)$ denote the set of neighbors of $S$,
i.e., the set of all nodes that have at least one edge to a node in
$S$. Note that $N(S)$ can intersect $S$. First we prove growth in
expectation.

\begin{lemma}
\label{lem:expected_growth}
For any $t \geq 0$, if $|S_t| \leq \delta n$, then $\E[|S_{t+1}|] \ge
(1 + \nu) |S_t|$ for some constant $\nu > 0$.
\end{lemma}

\begin{proof}
Fix a $t$ and suppose $|S_t| \leq \delta n$. Since $S_{t+1} \subseteq
N(S_t)$, it suffices to show that $\E[|N(S_t) - S_{t+1}|] \leq
|N(S_t)| - (1 + \nu)|S_t|$. For each vertex $u \in N(S_t)$, let $X_u$
be the indicator variable that is $1$ if $u \notin S_{t+1}$ and $0$
otherwise. The probability that $X_u = 1$ is $p_u = \left(1 -
\frac{1}{\Delta}\right)^{k d_u}$, where $d_u$ is the number of edges
from $u$ to vertices in $S_t$; thus $\E[X_u] = p_u$. We have
\[ \E[|N(S_t) - S_{t+1}|] = \sum_{u \in N(S_t)} \E[X_u] = \sum_{u \in N(S_t)} \left(1 - \frac{1}{\Delta}\right)^{k d_u} \le \sum_{u \in N(S_t)} e^{-\frac{k d_u}{\Delta}}. \]
For which values of the $d_u$'s is the final sum maximized? We note
that $\sum_{u \in N(S_t)} d_u = \Delta |S_t|$, since the graph is
$\Delta$-regular. Combining this fact with
Lemma~\ref{lem:redistribute}, the sum is maximized when $d_u$ is
either $1$ or $\Delta$ for every $u$, except possibly for one $u$,
which we ignore as it affects the bound by at most an additive
constant. Let $R_1$ be the number of $u$'s for which $d_u = 1$, and
$R_2$ be the number of $u$'s for which $d_u = \Delta$. We have
\begin{eqnarray*}
R_1 + R_2 & = & |N(S_t)|, \\
R_1 + \Delta R_2 & = & \Delta |S_t|.
\end{eqnarray*}
Solving for $R_1$ and $R_2$, we get
\begin{eqnarray*}
R_1 & = & \frac{\Delta}{\Delta - 1} (|N(S_t)| - |S_t|), \\
R_2 & = & \frac{1}{\Delta - 1} (\Delta |S_t| - |N(S_t)|).
\end{eqnarray*}
Thus we have
\begin{eqnarray*}
\E[|N(S_t) - S_{t+1}|] & \le & R_1 e^{-\frac{k}{\Delta}} + R_2 e^{-k} \\
& = & \frac{\Delta}{\Delta - 1} (|N(S_t)| - |S_t|) e^{-\frac{k}{\Delta}} + \frac{1}{\Delta - 1} (\Delta |S_t| - |N(S_t)|) e^{-k},
\end{eqnarray*}
and we want to show that this is at most
\[ |N(S_t)| - (1 + \nu) |S_t|. \]
Rearranging, we want
\[ |N(S_t)| \left (1 - \frac{\Delta}{\Delta-1} e^{-\frac{k}{\Delta}} + \frac{1}{\Delta-1} e^{-k} \right) + |S_t| \left(\frac{\Delta}{\Delta-1} e^{-\frac{k}{\Delta}} - \frac{\Delta}{\Delta-1} e^{-k} - 1 \right) \geq \nu |S_t|. \] 
We know $|N(S_t)| \geq \alpha |S_t|$ since $|S_t| \leq \delta
n$. Since $1 - \frac{\Delta}{\Delta-1} e^{-\frac{k}{\Delta}} +
\frac{1}{\Delta-1} e^{-k} > 0$, the above will be true if
\[ \alpha \left (1 - \frac{\Delta}{\Delta-1} e^{-\frac{k}{\Delta}} + \frac{1}{\Delta-1} e^{-k} \right) + \left(\frac{\Delta}{\Delta-1} e^{-\frac{k}{\Delta}} - \frac{\Delta}{\Delta-1} e^{-k} - 1 \right) > 0. \]
Rearranging, we want
\[ (\alpha - 1) \left (1 - \frac{\Delta}{\Delta-1} e^{-\frac{k}{\Delta}} \right) - \frac{\Delta - \alpha}{\Delta-1} e^{-k} > 0. \]
Since $e^{-\frac{k}{\Delta}} \leq 1 - \frac{k}{\Delta} + \frac{k^2}{2
  \Delta^2}$ for all $k \geq 0$, the above condition will be met if
\[ (\alpha - 1) \left (1 - \frac{\Delta}{\Delta-1} \left(1 - \frac{k}{\Delta} +  \frac{k^2}{2 \Delta^2} \right) \right) - \frac{\Delta - \alpha}{\Delta-1} e^{-k} > 0. \]
Multiplying through by $\Delta(\Delta - 1)$ and collecting terms, this
becomes $(\alpha - 1)\left((k-1)\Delta - \frac{k^2}{2}\right) >
(\Delta - \alpha)\Delta e^{-k}$, which rearranges to
\[ \alpha > \frac{\frac{\Delta^2}{e^k} + (k-1)\Delta - \frac{k^2}{2}}{\frac{\Delta}{e^k} + (k-1)\Delta - \frac{k^2}{2}}. \]
\end{proof}

We used the following easy lemma in the proof above.

\begin{lemma}
\label{lem:redistribute}
Let $c > 0$ and $a, b > 1$ such that $a \leq b$. Then 
\[ e^{-c(a-1)} + e^{-c(b+1)} >  e^{-ca} + e^{-cb}. \]
\end{lemma}

\begin{proof}
\begin{eqnarray*}
e^{-c(a-1)} + e^{-c(b+1)} - (e^{-ca} + e^{-cb}) & = & e^{-ca}(e^c - 1) + e^{-c(b+1)}(1 - e^c) \\ 
& = & (e^c - 1)(e^{-ca} - e^{-c(b+1)}) \\ 
& > & 0,
\end{eqnarray*}
since $c > 0$ and $a < b + 1$.
\end{proof}
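As an aside, one can spot-check numerically the extremal-configuration
claim that this lemma yields: among degree vectors $(d_u)$ with
entries in $\{1, \ldots, \Delta\}$ and fixed sum $\Delta|S_t|$, the
all-$1$-or-$\Delta$ configuration maximizes $\sum_u e^{-k
d_u/\Delta}$. A minimal sketch with arbitrary illustrative parameters,
chosen so that $R_1$ and $R_2$ are integers:
\begin{verbatim}
import math, random

D, K = 5, 2            # degree and branching factor (illustrative)
S, M = 40, 120         # |S_t| and |N(S_t)|; total degree mass is D*S
TOTAL = D * S

def term(du):
    return math.exp(-K * du / D)

# Extremal configuration from the proof: R1 entries equal 1, R2 equal D.
R1 = D * (M - S) // (D - 1)
R2 = M - R1
extremal = R1 * term(1) + R2 * term(D)

# Random walk over feasible vectors: move one unit of degree at a time,
# staying within [1, D] and preserving the total.
rng = random.Random(1)
d = [2] * (TOTAL - M) + [1] * (2 * M - TOTAL)   # feasible start
cur = sum(term(du) for du in d)
best = cur
for _ in range(200000):
    i, j = rng.randrange(M), rng.randrange(M)
    if i != j and d[i] > 1 and d[j] < D:
        cur -= term(d[i]) + term(d[j])
        d[i] -= 1; d[j] += 1
        cur += term(d[i]) + term(d[j])
        best = max(best, cur)

print("extremal = %.4f, best seen = %.4f" % (extremal, best))
\end{verbatim}
The randomly explored configurations never exceed the extremal value,
as the lemma guarantees.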

Next, we show that the number of active nodes is concentrated near the
expectation.

\begin{lemma}
\label{lem:prob_growth}
For any time $t$ and any $\tau > 0$, $\Pr[|S_{t+1}| - \E[|S_{t+1}|]
\leq -\tau |S_t|] \leq \exp\left(-\frac{\tau^2 |S_t|}{2k}\right)$.
\end{lemma}

\begin{proof}
For any time step $t$, we arbitrarily order the active nodes in
$S_t$. Then we define random variables $(Z_i^j)$, $1 \leq i \leq
|S_t|$, $1 \leq j \leq k$, where $Z_i^j$ indicates which vertex was
chosen by the $i$th node in $S_t$ in its $j$th trial to become active at
time $t+1$. (Note that each active node in $S_t$ chooses $k$ neighbors
uniformly at random with replacement.)

Let $A = |S_{t+1}|$. Then $X_i^j = \E[A \mid Z_1^1,\ldots,
Z_1^k, \ldots, Z_{i-1}^1,\ldots, Z_{i-1}^k, Z_i^1, \ldots, Z_i^j]$
is the Doob martingale of $A$, with final value $X_{|S_t|}^k =
|S_{t+1}|$. Since changing any single $Z_i^j$ changes $A$ by at most
$1$, the successive differences of the martingale are bounded by $1$,
and Azuma's inequality over the $k|S_t|$ steps gives us:
\begin{eqnarray*}
\Pr[|S_{t+1}| - \E[|S_{t+1}|] \leq -\tau |S_t|] & \leq & \exp\left(-\frac{\tau^2  |S_t|^2}{ 2k|S_t|}\right) \\
&=& \exp\left(-\frac{\tau^2 |S_t|}{2k}\right).
\end{eqnarray*}
\end{proof}
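As an illustrative check of this bound (not part of the proof), one
can estimate the lower tail of $|S_{t+1}|$ by Monte Carlo for a fixed
active set and compare it against the Azuma bound; the sketch below
again assumes networkx for graph construction, and it uses the
empirical mean as a proxy for $\E[|S_{t+1}|]$.
\begin{verbatim}
import math, random
import networkx as nx

D, K, N, TAU = 3, 2, 5000, 0.5
TRIALS = 2000

rng = random.Random(0)
g = nx.random_regular_graph(D, N, seed=0)
nbrs = {v: list(g[v]) for v in g}
s_t = set(rng.sample(range(N), 200))   # an arbitrary active set S_t

sizes = []
for _ in range(TRIALS):
    s_next = {rng.choice(nbrs[v]) for v in s_t for _ in range(K)}
    sizes.append(len(s_next))

mean = sum(sizes) / TRIALS             # proxy for E[|S_{t+1}|]
tail = sum(sz <= mean - TAU * len(s_t) for sz in sizes) / TRIALS
bound = math.exp(-TAU * TAU * len(s_t) / (2 * K))
print("mean=%.1f empirical tail=%.4f Azuma bound=%.2e" % (mean, tail, bound))
\end{verbatim}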

Lastly, using the bound on the probability of growth obtained above,
we prove an $O(\log n)$ bound for the $\delta$-cover time.

\begin{lemma}
\label{lem:fractional_cover}
With high probability, $|S_t| \geq \delta n$ for some $t = O(\log n)$.

\begin{proof}
We want to analyze the change in the number of active nodes as a
Markov process. For this, consider a Markov process $X_t$ on the state
space $\{1, 2, \ldots, n\}$. Setting $\tau = \nu/2$ in
Lemma~\ref{lem:prob_growth} and combining with
Lemma~\ref{lem:expected_growth}, the active set fails to grow by a
factor $(1 + \frac{\nu}{2})$ with probability at most
$\exp(-\frac{\nu^2 |S_t|}{8k})$ whenever $|S_t| \leq \delta n$. We
would therefore like to define the transitions as follows: with
probability $1 - \exp(-\frac{\nu^2 X_t}{8k})$, $X_{t+1} = (1 +
\frac{\nu}{2})X_t$, and with probability $\exp(-\frac{\nu^2 X_t}{8k})$,
$X_{t+1} = 1$; resetting to $1$ is a conservative over-estimate of the
setback.

But for technical simplicity, we define our random walk slightly
differently. Let $C$ be a sufficiently large constant. It is clear
that with some constant probability we can arrive at $X_t = C$ in a
constant number of steps from $1$. Thus, letting $r =
\frac{\nu^2}{8k}$, we define our transitions as follows. Let $X_t =
C(1+\frac{\nu}{2})^i$. Then, with probability $1 - \exp(-rC(1 +
i\frac{\nu}{2}))$, $X_{t+1} = (1 + \frac{\nu}{2})X_t$, and with
probability $\exp(-rC(1 + i\frac{\nu}{2}))$, $X_{t+1} = C$. Since
$(1+\frac{\nu}{2})^i \geq 1 + i\frac{\nu}{2}$, this failure
probability is only larger than before, so it suffices to prove that
this random walk reaches $\delta n$ in time $O(\log n)$.
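A direct simulation of this auxiliary walk (a sketch with arbitrary
illustrative constants $\nu$, $k$, $C$, and $\delta = 0.1$)
illustrates the $O(\log n)$ hitting time we now prove:
\begin{verbatim}
import math, random

def hitting_time(n, nu=1.0, k=2, C=100.0, delta=0.1, seed=0):
    """Steps for the auxiliary walk to reach delta*n from X = C."""
    rng = random.Random(seed)
    r = nu * nu / (8 * k)
    i = t = 0                         # X_t = C * (1 + nu/2)^i
    while C * (1 + nu / 2) ** i < delta * n:
        t += 1
        if rng.random() < 1 - math.exp(-r * C * (1 + i * nu / 2)):
            i += 1                    # advance by a (1 + nu/2) factor
        else:
            i = 0                     # fall back to C
    return t

for n in (10**4, 10**6, 10**8):
    print(n, hitting_time(n), round(math.log(n), 1))
\end{verbatim}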

We now show that, starting from $C$, our random walk reaches $\delta
n$ without falling back to $C$ with probability greater than
$\frac{1}{2}$. By a union bound over the levels, the probability of
this not happening is at most
\[ e^{-rC} +  e^{-rC(1 + \frac{\nu}{2})} + e^{-rC(1 + 2 \frac{\nu}{2})} + \dots  =  e^{-rC} \left(1 + e ^{-rC \frac{\nu}{2}} + e^{-2rC \frac{\nu}{2}} + \dots\right) 
=  \frac{e^{-rC}}{1 - e^{-rC \frac{\nu}{2}}} 
 \leq  \frac{1}{2}, \]
provided $C$ is chosen sufficiently large, depending on the value of
$r$. From here, we can easily achieve our result in time $O(\log^2 n)$
with high probability by repeating the above $O(\log n)$
times. However, we can actually do better and achieve an $O(\log n)$
bound. For this, we view each segment of the walk in which, starting
from $C$, we either reach $\delta n$ (success) or fall back to $C$
(failure), whichever comes first, as a trial. Since the probability of
a successful trial is at least $\frac{1}{2}$, we know that with high
probability there are at most $b \log n$ failed trials before we have
a successful trial, for some constant $b$. Label a step (within a
trial) where the walk advances as heads in a coin flip and a step
where we fall back to $C$ as tails. Each failed trial then consists of
some number (possibly zero) of heads flips followed by a single tails
flip. Given that there are at most $b \log n$ failed trials before a
successful trial, there are at most $b \log n$ tails before the
successful trial. It thus suffices to show that, with high
probability, there are $O(\log n)$ heads in all the failed trials
combined; since the successful trial itself consists of only $O(\log
n)$ steps, this will prove our lemma.

We need to bound the probability of heads conditioned on being in a
failed trial. The probability that we get a tails in step $i$ of a
failed trial is at least
\begin{eqnarray*}
\frac{e^{-rC(1 + i\frac{\nu}{2})}}{\sum_{l=i}^{\infty} \left[ \prod_{j = i}^{l-1} \left(1-e^{-rC(1+j\frac{\nu}{2})}\right) \right] e^{-rC(1+l \frac{\nu}{2})}} 
& \ge &  \frac{e^{-rC(1 + i\frac{\nu}{2})}}{\sum_{l=i}^{\infty} e^{-rC(1+l \frac{\nu}{2})}} \\ 
& = & \frac{e^{-rC i \frac{\nu}{2}}}{\sum_{l=i}^{\infty} e^{-rC l  \frac{\nu}{2}}} \\
& = & \frac{1}{\sum_{j = 0}^{\infty} e^{-rC j \frac{\nu}{2}}}\\
& = & 1 - e^{-rC \frac{\nu}{2}},
\end{eqnarray*}
where the first inequality follows by dropping the product terms, each
of which is at most $1$, from the denominator. Thus the conditional
probability of heads within a failed trial is at most $e^{-rC
\frac{\nu}{2}}$. Now applying a Chernoff bound, we get that with high
probability there are $O(\log n)$ heads within all failed trials
combined.
\end{proof}   
