%\input{template}
\documentclass[11pt]{article}
%\documentclass{sig-alternate}
\usepackage{algorithm}
\usepackage{algorithmic}

\usepackage{subfigure}
\usepackage{epsfig,amsthm,amsmath,color, amsfonts}
\newcommand{\xxx}[1]{\textcolor{red}{#1}}
\usepackage{fullpage}
\usepackage{framed}
%\usepackage{epsf}
%\usepackage{hyperref}

%\setlength{\textheight}{9.4in} \setlength{\textwidth}{6.55in}
%\setlength{\textheight}{9.2in} \setlength{\textwidth}{6.55in}
%\setlength{\topmargin}{0in}

%\voffset=-0.9in
%\hoffset=-0.8in

\newtheorem{theorem}{Theorem}[section]
%\newtheorem{definition}[theorem]{Definition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{claim}[theorem]{Claim}
%\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\theoremstyle{definition}\newtheorem{example}[theorem]{Example}
\theoremstyle{definition}\newtheorem{definition}[theorem]{Definition}
\theoremstyle{remark}\newtheorem{observation}[theorem]{Observation}

\newcommand{\comment}[1]{}
\newcommand{\QED}{\mbox{}\hfill \rule{3pt}{8pt}\vspace{10pt}\par}
%\newcommand{\eqref}[1]{(\ref{#1})}
\newcommand{\theoremref}[1]{(\ref{#1})}
\newenvironment{proof1}{\noindent \mbox{}{\bf Proof:}}{\QED}
%\newenvironment{observation}{\mbox{}\\[-10pt]{\sc Observation.} }%
%{\mbox{}\\[5pt]}

\def\m{{\rm min}}
%\def\m{\bar{m}}
\def\eps{{\epsilon}}
\def\half{{1\over 2}}
\def\third{{1\over 3}}
\def\quarter{{1\over 4}}
\def\polylog{\operatorname{polylog}}
\newcommand{\ignore}[1]{}
\newcommand{\eat}[1]{}
\newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor}
\newcommand{\ceil}[1]{\left\lceil #1 \right\rceil}

\newcommand{\algorithmsize}[0]{}

%---------------------
%  SPACE SAVERS
%---------------------
%
%\usepackage{times}
%\usepackage[small,compact]{titlesec}
%\usepackage[small,it]{caption}

%\newcommand{\squishlist}{
% \begin{list}{$\bullet$}
%  { \setlength{\itemsep}{0pt}
%     \setlength{\parsep}{3pt}
%     \setlength{\topsep}{3pt}
%     \setlength{\partopsep}{0pt}
%     \setlength{\leftmargin}{1.5em}
%     \setlength{\labelwidth}{1em}
%     \setlength{\labelsep}{0.5em} } }
%\newcommand{\squishend}{
%  \end{list}  }

\newcommand{\squishlist}{
 \begin{itemize}
}
\newcommand{\squishend}{
  \end{itemize}  }

%---------------------------------
% FOR MOVING PROOFS TO APPENDIX
%\usepackage{answers}
%%\usepackage[nosolutionfiles]{answers}
%\Newassociation{movedProof}{MovedProof}{movedProofs}
%\renewenvironment{MovedProof}[1]{\begin{proof}}{\end{proof}}

\def\e{{\rm E}}
\def\var{{\rm Var}}
\def\ent{{\rm Ent}}
\def\lam{{\lambda}}
\def\bone{{\bf 1}}


%First definitions. Use these when you want to read comments.
\def\prasad#1{\marginpar{$\leftarrow$\fbox{P}}\footnote{$\Rightarrow$~{\sf #1 --Prasad}}}
\def\danupon#1{\marginpar{$\leftarrow$\fbox{D}}\footnote{$\Rightarrow$~{\sf #1 --Danupon}}}
\def\gopal#1{\marginpar{$\leftarrow$\fbox{G}}\footnote{$\Rightarrow$~{\sf #1 --Gopal}}}
\def\atish#1{\marginpar{$\leftarrow$\fbox{A}}\footnote{$\Rightarrow$~{\sf #1 --Atish}}}
%
%Second definitions. Use these to remove all comments.
%\def\prasad#1{}
%\def\danupon#1{}
%\def\gopal#1{}
%\def\atish#1{}


\begin{document}


\title{Fast distributed algorithm for generating a random spanning tree}

%\begin{titlepage}
%\author{Atish {Das Sarma} \thanks{College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA.
%\hbox{E-mail}:~{\tt atish@cc.gatech.edu, danupon@cc.gatech.edu}} \and Danupon Nanongkai \addtocounter{footnote}{-1}
%\footnotemark \and  Gopal Pandurangan \thanks{Division of Mathematical
%Sciences, Nanyang Technological University, Singapore 637371 and Department of Computer Science, Brown University, Providence, RI 02912.  \hbox{E-mail}:~{\tt gopalpandurangan@gmail.com}. Supported in part by NSF grant CCF-0830476.}  
%\and  David Peleg}

\maketitle 
%\thispagestyle{empty}

\begin{abstract}
We present a distributed algorithm that generates a uniformly random spanning tree in $\tilde O((nmD^2)^{1/3})$ rounds, where $n$, $m$, and $D$ are the number of nodes, the number of edges, and the diameter of the network, respectively. This improves on the previous $\tilde O(\sqrt{m}D)$-round algorithm [Das Sarma et al., PODC 2010] on all networks where that algorithm does not run in sub-linear time (i.e., where $\sqrt{m}D=\Omega(n)$). 
%(Note, however, that the new algorithm runs in sub-linear time if and only if the previous algorithm does.)
\end{abstract}

%\end{titlepage}


\section{Introduction}

We consider the following problem in the $\mathcal{CONGEST}$ model. 

\paragraph{Generating a random spanning tree problem:}  We are given an arbitrary undirected, unweighted, and connected $n$--node network $G = (V,E)$. The goal is to devise a distributed algorithm such that, in the end, a spanning tree $T$ is generated, i.e., each node knows which of the edges incident to it are in $T$. Moreover, we want $T$ to be generated uniformly at random, i.e., every spanning tree of $G$ is generated with the same probability. \\

The main result is the following theorem. 

\begin{theorem}\label{thm:rst}
A random spanning tree can be generated in $\tilde O\left((nmD^2)^{1/3}+D\right)$ rounds.
\end{theorem}


The key ingredient of this result is a new algorithm for generating a random walk, which runs faster than the previous algorithm in \cite{DNPT10-podc} when the required walk length is long. In particular, we solve the following problem. 

\paragraph{Generating one walk where each node knows its position(s) ($1$-RW-pos) problem:} We are given an arbitrary undirected, unweighted, and connected $n$--node network $G = (V,E)$ and a source node $s \in V$. The goal is to devise a distributed algorithm such that, in the end, a random walk $W=(s=v_0, v_1, v_2, ..., v_\ell)$ of length $\ell$ is generated and every node knows its position(s) in this walk. That is, we want each node $v_i$ to know its position $i$ in the random walk. (Note that it is possible that $v_i=v_j$ for some $i\neq j$.) 


Das Sarma et al.~\cite{DNPT10-podc} show that $1$-RW-pos can be solved in $\tilde{O}(\sqrt{\ell D})$ rounds. We show that this bound can be improved whenever $\ell\geq n^2/D$.

\begin{lemma}\label{lem:rw-pos} For any $\ell\leq (nD)^2$, $1$-RW-pos can be solved in $\tilde O((n\ell D)^{1/3}+D)$ rounds with high probability.
\end{lemma}
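To see where the new bound improves on the previous $\tilde O(\sqrt{\ell D})$ bound, note that (raising both sides to the sixth power)
\[
(n\ell D)^{1/3} \le (\ell D)^{1/2}
\;\Longleftrightarrow\; n^2\ell^2D^2 \le \ell^3D^3
\;\Longleftrightarrow\; \ell \ge n^2/D.
\]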


We prove this lemma in Section~\ref{sec:rw-pos}. We now prove the main theorem assuming the above lemma.

\begin{proof}[Proof of Theorem~\ref{thm:rst}]
Das Sarma et al.~\cite{DNPT10-podc} show that if one can solve $1$-RW-pos with $\ell=\tilde{O}(mD)$ in $\tilde O(f(m, D))$ rounds, then one can also generate a random spanning tree in $\tilde O(f(m, D))$ rounds (by simulating the Aldous--Broder algorithm~\cite{aldous,broder}). Since $mD\leq (nD)^2$, Lemma~\ref{lem:rw-pos} applies with $\ell=\tilde O(mD)$ and gives $f(m, D)=\tilde O\left((nmD^2)^{1/3}+D\right)$.
\end{proof}
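For intuition, the Aldous--Broder process that this reduction simulates can be sketched sequentially as follows (a minimal Python sketch for illustration only, not the distributed algorithm; the function name and adjacency-list representation are our own choices):

```python
import random

def aldous_broder(adj, s, rng):
    # Walk from s until every node has been visited; the edge used to
    # enter each node for the first time is added to the tree.  The
    # resulting tree is a uniformly random spanning tree (Aldous, Broder).
    n = len(adj)
    visited = {s}
    tree = set()
    u = s
    while len(visited) < n:
        v = rng.choice(adj[u])
        if v not in visited:
            visited.add(v)
            tree.add(frozenset((u, v)))
        u = v
    return tree

# Example on a 4-cycle: the first-entry edges of the walk form one of
# the four spanning trees of the cycle.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
tree = aldous_broder(adj, 0, random.Random(1))
```

The walk used here has length equal to the cover time of the graph, which is $\tilde O(mD)$ with high probability; performing it distributively, with every node learning its positions, is exactly the $1$-RW-pos instance above.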

\section{Proof of Lemma~\ref{lem:rw-pos}}\label{sec:rw-pos}

\subsection{Algorithm}

%To generate a random spanning tree, we will perform a random walk of length $\tilde O(mD)$ which will cover the entire graph with high probability. At the end of the random walk, we want every node (except the starting point of the walks) to know the edge that the random walk uses to visit it for the first time. These edges together form a random spanning tree~\cite{aldous, broder}.
%
%Our main task is to perform such a random walk.

Our algorithm builds on the random walk algorithm in \cite{DNPT10-podc}. The main difference is that the walk is stitched at a central node, and this central node collects some walks in advance so that we do not have to spend $D$ rounds every time we stitch the walk. This gives us an advantage when the walk is long enough. The algorithm does the following to perform a walk of length $\ell$.

Let $\beta=\lfloor(\ell D)^{1/3}/n^{2/3}\rfloor$, $\eta=1$ and $\lambda=\lfloor n\beta\rfloor$.

\begin{enumerate}


 \item \label{step:phase1} Each node $v$ generates $\eta\deg(v)$ walks, each of length chosen uniformly at random from $[\lambda, 2\lambda-1]$, as in Phase~1 of the algorithm in \cite{DNPT10-podc}. See Algorithm~\ref{alg:generate-short-walks}.

 \item \label{step:collectwalks} Let $c$ be any node, which will act as a central node that stitches the random walk. Node $c$ collects $\beta$ walks from each node (if a node $v$ generates fewer than $\beta$ walks, then we simply collect all of them). See Algorithm~\ref{alg:collect-short-walks}. 
     
 \item \label{step:stitch} We stitch the walks internally at the central node: central node $c$ simulates the walk by stitching the collected short walks until all $\beta$ short walks starting at some node $v$ are used up. When this happens, $c$ collects another $\beta$ walks starting at node $v$. The algorithm {\em fails} if there are no more walks to collect from $v$. See Algorithm~\ref{alg:stitch}.
     
 \item \label{step:position} The central node informs the stitching nodes of their positions in the walk (simply by broadcasting). Then, each stitching node informs every node on its short walk of its position(s) in the walk.

\end{enumerate}
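As a sanity check, Steps~\ref{step:phase1}--\ref{step:stitch} above can be condensed into a short sequential sketch (hypothetical Python, purely illustrative: the pools of short walks are refilled lazily rather than precomputed, so it models a non-failing execution):

```python
import random

def short_walk(adj, v, lo, hi, rng):
    # One random walk from v whose length is uniform in [lo, hi].
    walk = [v]
    for _ in range(rng.randrange(lo, hi + 1)):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

def stitched_walk(adj, s, ell, lam, beta, rng):
    # Keep a pool of up to beta short walks per node (Steps 1-2);
    # stitch them at the current endpoint t, refilling t's pool when
    # it runs out (the "collect beta more" case of Step 3); finish
    # the last < 2*lam steps naively.
    pools = {}
    walk, t = [s], s
    while len(walk) - 1 <= ell - 2 * lam:
        if not pools.get(t):
            pools[t] = [short_walk(adj, t, lam, 2 * lam - 1, rng)
                        for _ in range(beta)]
        w = pools[t].pop()
        walk.extend(w[1:])   # w starts at t, so drop the duplicate
        t = walk[-1]
    while len(walk) - 1 < ell:
        walk.append(rng.choice(adj[walk[-1]]))
    return walk
```

Because each stitched piece has length in $[\lambda, 2\lambda-1]$, the main loop exits with fewer than $2\lambda$ steps remaining, matching the last step of Algorithm~\ref{alg:stitch}.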


\begin{algorithm}
\caption{\sc Generate-Short-Walks($s$, $\eta$, $\lambda$)}
\label{alg:generate-short-walks}

\begin{algorithmic}[1]

\FOR{each node $v$}

\STATE Generate $\eta\deg(v)$ random integers in the range $[0,
\lambda-1]$, denoted by $r_1, r_2, ..., r_{\eta\deg(v)}$.

\STATE Construct $\eta\deg(v)$ messages containing its ID and, in
addition, the $i$-th message contains the desired walk length of
$\lambda + r_i$. We will refer to these messages created by node $v$
as ``coupons created by $v$''.

\ENDFOR


\FOR{$i=1$ to $2\lambda$}


\STATE This is the $i$-th iteration. Each node $v$ does the
following: Consider each coupon $C$ held by $v$ that was received in
the $(i-1)$-th iteration. If coupon $C$'s desired walk length is
at most $i$, then $v$ keeps this coupon ($v$ is the desired
destination). Otherwise, $v$ picks a neighbor $u$ uniformly at random
and forwards $C$ to $u$.
\ENDFOR
\end{algorithmic}

\end{algorithm}
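To make the round structure of Algorithm~\ref{alg:generate-short-walks} concrete, here is a small sequential simulation (hypothetical Python; a coupon is a plain dictionary, and we count the steps taken explicitly instead of using the keep/forward test). Every coupon completes its desired walk within the $2\lambda$ iterations, since desired lengths are at most $2\lambda-1$:

```python
import random

def generate_short_walks(adj, eta, lam, rng):
    # Each node v creates eta*deg(v) coupons; coupon lengths are
    # uniform in [lam, 2*lam - 1].  For 2*lam iterations, every
    # coupon that still needs steps is forwarded to a random neighbor.
    coupons = [{"owner": v, "need": lam + rng.randrange(lam),
                "pos": v, "steps": 0}
               for v in adj for _ in range(eta * len(adj[v]))]
    for _ in range(2 * lam):
        for c in coupons:
            if c["steps"] < c["need"]:
                c["pos"] = rng.choice(adj[c["pos"]])
                c["steps"] += 1
    return coupons   # each coupon now rests at its walk's endpoint
```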


%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%

\begin{algorithm}
\caption{\sc Collect-Short-Walks($s$, $\eta$, $\lambda$)}
\label{alg:collect-short-walks}

\begin{algorithmic}[1]

\STATE Define any node $c$ as a central node. 

\FOR{each node $v$}

\FOR{$i=1$ to $\beta$}

\STATE Central node $c$ calls {\sc Sample-Coupon($v$)} (cf. Algorithm~\ref{alg:Sample-Coupon}) to sample one of the coupons distributed by $v$ (in Algorithm~\ref{alg:generate-short-walks}) uniformly at random. Let $C_{v,i}$ be the sampled coupon.

\ENDFOR

\ENDFOR

\STATE Let $\mathcal C_v=\{C_{v, 1}, ..., C_{v, \beta}\}$ be the set of coupons distributed by $v$ that are collected by central node $c$. 

\end{algorithmic}

\end{algorithm}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{algorithm}[t]
\caption{\sc Sample-Coupon($v$)} \label{alg:Sample-Coupon}
\textbf{Input:} Starting node $v$.\\
\textbf{Output:} A node sampled uniformly at random from among the
nodes holding coupons of $v$.

\begin{algorithmic}[1]

\STATE Construct a Breadth-First-Search (BFS) tree rooted at the central node $c$.
While constructing it, every node stores its parent's ID. Denote this
tree by $T$.

\STATE We divide $T$ naturally into levels $0$ through $D$ (where
the root node $c$ is in level $0$ and every leaf is in level at most
$D$).

\STATE Every node $u$ that holds some coupons of $v$ picks one of its
coupons uniformly at random. Let $C_0$ denote this coupon and let
$x_0$ denote the number of coupons $u$ holds. Node $u$ writes its ID
on coupon $C_0$.

\FOR{$i=D$ down to $0$}

\STATE Every node $u$ in level $i$ that either receives coupon(s)
from its children or holds coupon(s) itself does the following.

\STATE Let $u$ have $q+1$ coupons in total (including its own). Denote
these coupons by $C_0, C_1, C_2, \ldots, C_q$ and let their counts
be $x_0, x_1, x_2, \ldots, x_q$. Node $u$ samples one of $C_0$
through $C_q$, with probabilities proportional to the respective
counts. That is, for any $0\leq j\leq q$, $C_j$ is sampled with
probability $\frac{x_j}{x_0+x_1+\ldots+x_q}$.

\STATE The sampled coupon is sent to the parent node (unless already
at root) along with a count of $x_0+x_1+\ldots+x_q$ (the count
represents the number of coupons from which this coupon has been
sampled).

\ENDFOR

\STATE The root outputs the ID of the owner of the final sampled
coupon (written on such coupon).

\end{algorithmic}


\end{algorithm}
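The heart of {\sc Sample-Coupon} is the merging rule: as samples travel up the tree, each is kept with probability proportional to the number of coupons it represents, so the root's output is uniform over all coupons. A sequential sketch of this convergecast (hypothetical Python; \texttt{children} maps each node to its children in the BFS tree $T$, and \texttt{counts} gives the number of coupons of $v$ held at each node):

```python
import random

def sample_coupon(children, root, counts, rng):
    # Returns (holder, total): a node holding a coupon, sampled
    # uniformly among all coupons, and the total number of coupons.
    def up(u):
        # Sample within the subtree rooted at u; None if it holds nothing.
        total = counts.get(u, 0)
        holder = u if total > 0 else None
        for child in children.get(u, []):
            sub = up(child)
            if sub is None:
                continue
            sub_holder, sub_count = sub
            total += sub_count
            # Weighted reservoir merge: adopt the child's sample with
            # probability sub_count / total, which keeps the overall
            # sample uniform over all coupons seen so far.
            if rng.random() < sub_count / total:
                holder = sub_holder
        return (holder, total) if total > 0 else None
    return up(root)
```

For example, if node $a$ holds $2$ coupons and node $b$ holds $6$, the root should output $b$ with probability $6/8$.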

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{algorithm}
\caption{{\sc Stitch}}\label{alg:stitch}

\begin{algorithmic}[1]

\STATE Let $s$ be any node that will be the starting point of the walk. Let $t$ and $W$ be the current end vertex of the walk and the current walk length, respectively. Initially, $t=s$ and $W=0$. The algorithm also keeps track of
a set of {\em connectors} (i.e., stitching points), denoted by $\mathbb C$. Initially, ${\mathbb C}
= \{s\}$.

\WHILE {Length of walk completed ($W$) is at most $\ell-2\lambda$}

  \IF{$\mathcal C_t$ is empty}
  
  
  \FOR{$i=1...\beta$} 
  
  \STATE $c$ calls {\sc Sample-Coupon($t$)} to uniformly sample one of the
  coupons distributed by $t$ and stores this coupon in $\mathcal C_t$. 

  \ENDFOR
  
  \STATE If $\mathcal C_t$ is still empty (there are no more coupons to sample), then the algorithm {\em fails} and terminates. 

  \ENDIF

  \STATE $c$ randomly picks one coupon $C$ distributed by $t$ from $\mathcal C_t$. 
  
  \STATE Set $t$ to be the destination of coupon $C$.
  
  \STATE Increase $W$ by the length of the walk corresponding to $C$. 

  \STATE ${\mathbb C} = {\mathbb C} \cup \{t\}$

\ENDWHILE

\STATE Walk naively until $\ell$ steps are completed (this is at
most another $2\lambda$ steps).

\STATE A node holding the token outputs the ID of $s$.

\end{algorithmic}

\end{algorithm}


\subsection{Analysis}

Lemma~\ref{lem:rw-pos} follows immediately from the following lemma. 

\begin{lemma} For any $\ell\leq (nD)^2$, with high probability the algorithm succeeds, and when it succeeds, it finishes in $\tilde O((n\ell D)^{1/3}+D)$ rounds.
\end{lemma}
\begin{proof}

The only step at which the algorithm might fail is Step~\ref{step:stitch}. It is shown in \cite{DNPT10-podc} that if we set $\eta\lambda \geq \sqrt{\ell}$, then the short walks generated in Step~\ref{step:phase1} suffice with high probability; in other words, the algorithm does not fail with high probability. This condition holds here: since $\ell\leq (nD)^2$, we have $\eta\lambda=n\beta=(n\ell D)^{1/3}\geq\sqrt{\ell}$. 

We now analyze the running time when the algorithm does not fail. We first analyze each step separately.

\paragraph{Step~\ref{step:phase1}:} As proved in \cite{DNP09-podc}, generating $\eta\deg(v)$ walks of length at most $2\lambda$ from each node $v$ takes $\tilde O(\eta\lambda)$ rounds in total with high probability.  

\paragraph{Step~\ref{step:collectwalks}:} Observe that each call to Algorithm~\ref{alg:Sample-Coupon} causes $O(1)$ congestion per edge and $O(D)$ dilation. Since we call this algorithm $\beta n$ times in total, the total congestion is $O(n\beta)$; pipelining the calls, the total running time is $O(n\beta+D)$.

\paragraph{Step~\ref{step:stitch}:} 
We analyze the number of rounds used by this step when the algorithm does not fail. The only part that incurs communication rounds is when the central node $c$ collects $\beta$ more coupons sent out from a node $v$. How many times can this happen? Node $c$ collects $\beta$ more coupons from $v$ only when it has used up all $\beta$ walks starting at $v$ (i.e., all walks previously stored in $\mathcal C_v$ have been used for stitching), which means that $c$ has used $v$ as a stitching point $\beta$ more times. Since stitching is done only $\lfloor\ell/\lambda\rfloor$ times in total, $c$ collects more coupons at most $\ell/(\lambda\beta)$ times.

Each time $c$ collects $\beta$ more coupons, it needs $O(\beta+D)$ rounds ($O(D)$ dilation and $O(\beta)$ congestion). Therefore, the total running time of Step~\ref{step:stitch} is $O(\frac{\ell}{\lambda\beta}(\beta+D))$. 

\paragraph{Step~\ref{step:position}:} It takes $O(\ell/\lambda+D)$ rounds to broadcast the positions of the $\ell/\lambda$ stitching points. Then, each stitching point re-simulates the short walks it used to distribute coupons in Step~\ref{step:phase1} in order to inform every node on these walks of its positions. This takes the same time as Step~\ref{step:phase1}, i.e., $\tilde O(\eta\lambda)$ rounds. Therefore, the total running time of this step is $\tilde O(\ell/\lambda+D+\eta\lambda)$.
 
 
\paragraph{Total time:} The combined running time is 
\[\tilde O\left(\eta\lambda + n\beta+ \frac{\ell}{\lambda\beta}(\beta+D)+\ell/\lambda+D\right)\]
Recall that $\beta=\lfloor(\ell D)^{1/3}/n^{2/3}\rfloor$, $\eta=1$, and $\lambda=n\beta$, and observe that $\beta\leq D$ since $\ell\leq n^2D^2$. Therefore, the running time becomes 
\[\tilde O\left(n\beta+\frac{\ell}{n\beta}+\frac{\ell D}{n\beta^2}+D\right) = \tilde O\left(n\beta+\frac{\ell D}{n\beta^2}+D\right) = \tilde O\left((n\ell D)^{1/3}+D\right)\qedhere\]
\end{proof}



  \let\oldthebibliography=\thebibliography
  \let\endoldthebibliography=\endthebibliography
  \renewenvironment{thebibliography}[1]{%
    \begin{oldthebibliography}{#1}%
      \setlength{\parskip}{0ex}%
      \setlength{\itemsep}{0ex}%
  }%
  {%
    \end{oldthebibliography}%
  }
{ \small
\bibliographystyle{abbrv}
\bibliography{Distributed-RST}
}



%\newpage
%\section*{Appendix}
%%\begin{center}
%%\large \textbf{Appendix}
%%\end{center}
%\appendix
%%\section{Moved Proofs}
%%\Readsolutionfile{movedProofs}
%\input{appendix}



\end{document}
