%\input{template}
\documentclass[11pt]{article}
%\documentclass{sig-alternate}
\usepackage{algorithm}
\usepackage{algorithmic}

\usepackage{subfigure}
\usepackage{epsfig,amsthm,amsmath,color, amsfonts}
\newcommand{\xxx}[1]{\textcolor{red}{#1}}
%\usepackage{fullpage}
\usepackage{framed}
%\usepackage{epsf}
%\usepackage{hyperref}

%\setlength{\textheight}{9.4in} \setlength{\textwidth}{6.55in}
\setlength{\textheight}{9.2in} \setlength{\textwidth}{6.55in}
%\setlength{\topmargin}{0in}

\voffset=-0.9in
\hoffset=-0.8in

\newtheorem{theorem}{Theorem}[section]
%\newtheorem{definition}[theorem]{Definition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{claim}[theorem]{Claim}
%\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\theoremstyle{definition}\newtheorem{example}[theorem]{Example}
\theoremstyle{definition}\newtheorem{definition}[theorem]{Definition}
\theoremstyle{remark}\newtheorem{observation}[theorem]{Observation}

\newcommand{\comment}[1]{}
\newcommand{\QED}{\mbox{}\hfill \rule{3pt}{8pt}\vspace{10pt}\par}
%\newcommand{\eqref}[1]{(\ref{#1})}
\newcommand{\theoremref}[1]{(\ref{#1})}
\newenvironment{proof1}{\noindent \mbox{}{\bf Proof:}}{\QED}
%\newenvironment{observation}{\mbox{}\\[-10pt]{\sc Observation.} }%
%{\mbox{}\\[5pt]}

\def\m{{\rm min}}
%\def\m{\bar{m}}
\def\eps{{\epsilon}}
\def\half{{1\over 2}}
\def\third{{1\over 3}}
\def\quarter{{1\over 4}}
\def\polylog{\operatorname{polylog}}
\newcommand{\ignore}[1]{}
\newcommand{\eat}[1]{}
\newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor}
\newcommand{\ceil}[1]{\left\lceil #1 \right\rceil}

\newcommand{\algorithmsize}[0]{}

%---------------------
%  SPACE SAVERS
%---------------------

\usepackage{times}
\usepackage[small,compact]{titlesec}
\usepackage[small,it]{caption}

\newcommand{\squishlist}{
 \begin{list}{$\bullet$}
  { \setlength{\itemsep}{0pt}
     \setlength{\parsep}{3pt}
     \setlength{\topsep}{3pt}
     \setlength{\partopsep}{0pt}
     \setlength{\leftmargin}{1.5em}
     \setlength{\labelwidth}{1em}
     \setlength{\labelsep}{0.5em} } }
\newcommand{\squishend}{
  \end{list}  }

%
%\newcommand{\squishlist}{
% \begin{enumerate}}
%\newcommand{\squishend}{
%  \end{enumerate}  }


%---------------------------------
% FOR MOVING PROOFS TO APPENDIX
%\usepackage{answers}
%%\usepackage[nosolutionfiles]{answers}
%\Newassociation{movedProof}{MovedProof}{movedProofs}
%\renewenvironment{MovedProof}[1]{\begin{proof}}{\end{proof}}

\def\e{{\rm E}}
\def\var{{\rm Var}}
\def\ent{{\rm Ent}}
\def\lam{{\lambda}}
\def\bone{{\bf 1}}


%First definitions. Use these when you want to read comments.
%\def\prasad#1{\marginpar{$\leftarrow$\fbox{P}}\footnote{$\Rightarrow$~{\sf #1 --Prasad}}}
\def\danupon#1{\marginpar{$\leftarrow$\fbox{D}}\footnote{$\Rightarrow$~{\sf #1 --Danupon}}}
%\def\gopal#1{\marginpar{$\leftarrow$\fbox{G}}\footnote{$\Rightarrow$~{\sf #1 --Gopal}}}
%\def\atish#1{\marginpar{$\leftarrow$\fbox{A}}\footnote{$\Rightarrow$~{\sf #1 --Atish}}}
%
%Second definitions. Use these to remove all comments.
\def\prasad#1{}
%\def\danupon#1{}
\def\gopal#1{}
\def\atish#1{}


\begin{document}


\title{Dynamic Analysis of Distributed Random Walks}

\begin{titlepage}
%\author{Atish {Das Sarma} \thanks{Google Research, Google Inc., Mountain View, CA 94041, USA.
%\hbox{E-mail}:~{\tt dassarma@google.com}.} \and Danupon Nanongkai \thanks{College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA. \hbox{E-mail}:~{\tt danupon@cc.gatech.edu}.}
%\footnotemark \and  Gopal Pandurangan \thanks{Division of Mathematical
%Sciences, Nanyang Technological University, Singapore 637371 and Department of Computer Science, Brown University, Providence, RI 02912.  \hbox{E-mail}:~{\tt gopalpandurangan@gmail.com}. Supported in part by NSF grant CCF-0830476.}   \and Prasad Tetali \thanks{School of Mathematics and School of Computer Science,
%Georgia Institute of Technology, Atlanta, GA 30332, USA. \hbox{E-mail}:~{\tt tetali@math.gatech.edu}. Supported in part by NSF DMS 0701023 and NSF CCR 0910584.}}

\date{}

\maketitle \thispagestyle{empty}

\vspace*{.4in}


\begin{abstract}
We study distributed computation in a continuous (dynamic) variant of the CONGEST model, in which the network must serve a stream of computation requests arriving at a given injection rate; our focus is on random walk requests.
\end{abstract}

%\noindent {\bf Keywords:} Random walks, Random sampling, Decentralized
%computation, Distributed algorithms, Random Spanning Tree, Mixing Time. \\

%\noindent {\bf Format:} Regular Presentation.

\end{titlepage}


\vspace{-0.15in}
\section{Introduction}
Random walks play a central role in computer science, spanning a
wide range of areas in both theory and practice. The focus  of this
paper is  random walks in networks, in particular, decentralized
algorithms for performing random walks in arbitrary networks.
 Random walks are used as an integral subroutine in a wide variety of network applications ranging from token management and load
balancing to search, routing, information propagation
and gathering,  network topology construction
and building random spanning
trees (e.g., see \cite{DNP09-podc} and the references therein). Random walks are also very useful in
providing uniform and efficient solutions to distributed control of
dynamic networks \cite{BBSB04, ZS06}. Random walks are local
and lightweight and require little index or state maintenance, which
makes them especially attractive for self-organizing dynamic networks such as
Internet overlays and ad hoc wireless networks.

A key purpose of random walks in many of these network applications
is to perform node sampling. While the sampling requirements vary across
applications, whenever a true sample is required from a random
walk of a certain length, applications typically perform the walk naively
--- by simply passing a token from one node to a neighbor: thus
performing a random walk of length $\ell$ takes time linear in $\ell$.
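For concreteness, the naive approach can be sketched as a simple simulation (a hypothetical Python illustration of token passing; the names are ours, not part of any algorithm discussed here):

```python
import random

def naive_random_walk(adj, source, length, seed=0):
    """Naive token passing: the token moves to a uniformly random
    neighbor in each round, so a walk of length l costs exactly l rounds."""
    rng = random.Random(seed)
    node, rounds = source, 0
    for _ in range(length):
        node = rng.choice(adj[node])  # one hop per communication round
        rounds += 1
    return node, rounds

# Example: a 5-node cycle; a walk of length 12 takes 12 rounds.
cycle = {v: [(v - 1) % 5, (v + 1) % 5] for v in range(5)}
end, rounds = naive_random_walk(cycle, source=0, length=12)
```

The point of the sketch is only that the round complexity is inherently linear in the walk length, which is the baseline the algorithms below improve upon.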

\subsection{Congest Model}
Consider an undirected, unweighted, connected $n$-node graph $G =
(V, E)$.  Suppose that every node (vertex) hosts a processor with
unbounded computational power, but with limited initial knowledge.
Specifically, assume that each node is associated with a distinct identity
number from the set $\{1, 2, \ldots, n\}$. At the beginning of the
computation, each node $v$ accepts as input its own identity number
and the identity numbers of its neighbors in $G$. The node may also
accept some additional inputs as specified by the problem at hand.
The nodes are allowed to communicate through the edges of the graph
$G$. The communication is synchronous, and occurs in discrete
pulses, called {\em rounds}. In particular, all the nodes wake up
simultaneously at the beginning of round 1, and from this point on
the nodes always know the number of the current round. In each round
each node $v$ is allowed to send an arbitrary message of size
$O(\log n)$ through each edge $e = (v, u)$ adjacent to $v$,
and the message arrives at $u$ at the end of the current round.
This is a standard model of distributed computation known as the
{\em CONGEST model} \cite{peleg}, which has attracted considerable
research attention over the last two decades
(e.g., see \cite{peleg} and the references therein).%This is a  widely used  standard model
% to study distributed algorithms and captures the realistic notion that
%there is a bound on the amount of messages that can be sent through
%an edge in one time step  and hence captures the bandwidth
%constraints inherent
% in  real-world computer  networks \cite{peleg, PK09}.
 % (We note that if unbounded-size messages were allowed through every
%edge in each time step, then the problems addressed here can be
%trivially solved in $O(D)$ time by collecting all  the topological information %at
%one node, solving the problem locally, and then broadcasting the
%results back to all the nodes \cite{peleg}.)
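The synchronous round structure just described can be captured by a minimal round-based simulator (a hypothetical Python sketch; the function names are ours, and the code does not enforce the $O(\log n)$ message-size bound):

```python
def run_congest(adj, state, step, num_rounds):
    """Synchronous rounds: in each round every node reads the messages
    delivered at the end of the previous round, updates its local state,
    and sends at most one (small) message over each incident edge."""
    inbox = {v: {} for v in adj}
    for _ in range(num_rounds):
        # All nodes compute simultaneously on this round's inboxes.
        results = {v: step(v, state[v], inbox[v]) for v in adj}
        inbox = {v: {} for v in adj}
        for v, (new_state, out) in results.items():
            state[v] = new_state
            for u in adj[v]:
                if u in out:
                    inbox[u][v] = out[u]  # delivered at the end of the round
    return state

# Example: flooding the minimum identity number on a path of diameter 3.
# A node incorporates a message only in the round after delivery,
# so D + 1 = 4 rounds suffice here for every state to hold the minimum.
def min_id_step(v, my_min, inbox):
    new_min = min([my_min] + list(inbox.values()))
    return new_min, {u: new_min for u in adj[v]}

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
final = run_congest(adj, {v: v for v in adj}, min_id_step, num_rounds=4)
```

The simulator is only meant to make the round/message accounting concrete: local computation is free, and the cost measure is the number of synchronous rounds.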

There are several measures of efficiency of distributed algorithms,
but we will concentrate on one of them, specifically, {\em the
running time}, that is, the number of rounds of distributed
communication. (Note that the computation that is performed by the
nodes locally is ``free'', i.e., it does not affect the number of rounds.)
%\atish{Should we mention here explicitly that we do not consider message complexity - the total number of messages exchanged. And admit that our algorithm is expensive in this aspect?}
Many
fundamental network problems such as minimum spanning tree, shortest
paths, etc.\ have been addressed in this model (e.g., see
\cite{lynch, peleg, PK09}). In particular, there has been much
research into designing very fast distributed approximation
algorithms (which gain speed at the cost of producing sub-optimal
solutions) for many of these problems (see e.g.,
\cite{elkin-survey,dubhashi, khan-disc,khan-podc}). Such algorithms
can be useful for large-scale, resource-constrained, and
dynamic networks where running time is crucial. \\
%This work addresses the
%problem of computing random walks in a time-efficient manner.

\subsection{Dynamic Congest Model}
%\noindent {\bf The New Model: }

The dynamic congest model is a generalization of the traditional congest model of distributed computing. In the congest model, any two neighboring nodes may exchange a message of up to $O(\log n)$ bits per round, and the goal is to minimize the number of rounds taken for a specific computation.

In the dynamic congest model, the network must serve a series of computation requests, as opposed to a single one. Specifically, we consider an {\em injection rate}, denoted by $r$, which specifies the rate at which the network may receive new requests for the same or similar computation. The motivation for this model is twofold. First, networks rarely perform just a single one-off computation: dynamic peer-to-peer networks run as a continuous process and must serve many sequential and parallel computations. Second (and somewhat related), in the traditional congest model the entire network may spawn a large number of messages for a single computation, since the only goal is to minimize the number of rounds; such message flooding may be undesirable. Therefore, given an injection rate, algorithms need to be more mindful of message complexity.

The injection rate $r$ specifies the rate at which the network receives computation requests: a rate of $r = 1/10$ means that the network may receive a new request only after $10$ rounds have elapsed since the previous request, while $r = 5$ means that the network may receive $5$ distinct requests every round. We may later want to consider the case where $r$ is only the {\em expected} injection rate. \\
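As a concrete illustration of the injection rate (a hypothetical helper of our own, assuming requests arrive at a deterministic, steady rate), the earliest round at which each request may be injected is:

```python
from fractions import Fraction

def arrival_rounds(r, N):
    """Earliest round at which request i (0-indexed) may be injected
    when requests arrive at a steady rate of r per round.
    Exact rational arithmetic avoids floating-point drift."""
    return [int(Fraction(i) / r) for i in range(N)]

# r = 1/10: at most one new request every 10 rounds.
slow = arrival_rounds(Fraction(1, 10), 4)   # [0, 10, 20, 30]
# r = 5: up to 5 distinct requests may arrive in each round.
fast = arrival_rounds(Fraction(5), 11)      # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2]
```

This deterministic schedule is only the simplest reading of $r$; the expected-rate variant mentioned above would replace it with a stochastic arrival process.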

\noindent {\bf The New Model: } The network receives new computation requests at an injection rate of $r$ and must handle a sequence of $N$ such requests. The network still operates under the congest model, which enforces an $O(\log n)$ bandwidth restriction on every edge. The goal is to minimize the total (or average) number of rounds required for the overall computation of the $N$ requests. All $N$ requests are for the same problem (e.g., distinct distance-computation requests, distinct random walk requests, or even something as simple as repeatedly requesting the diameter of the network).

The traditional one-computation framework can be thought of as the dynamic model's extreme case of $N=1$ and $r=0$. Here we are interested in $r > 0$, possibly even $r > 1$. Further, one may consider the case where $N\rightarrow \infty$. 

\subsection{Problem Statement, Motivation, and Related Work}

We specifically study random walks. Here, a request consists of a source node and a desired walk length: we write a request as a pair $(s, l)$, signifying that node $s$ requires a sample from the distribution induced by a random walk of length $l$ starting at $s$.

(Note: Do we get better results if only the destination needs to be notified of the source?)

%We consider Random Walks. What frequency of requests can one handle?

\subsection{Our Results}

Just putting down a candidate theorem here; something of this form would be great. 

\begin{theorem}
There is an algorithm that, in the dynamic congest model with an injection rate of $r\leq 1$ and for any request sequence length $N \geq 1$, can address random walk requests $(s_i, l_i)$, where for all $i\leq N$ the source $s_i$ is chosen uniformly at random from $\{1, \ldots, n\}$ and $l_i\leq n$, with an average round complexity of $\tilde{O}\big(\frac{1}{N}\sum_{i=1}^{N}\sqrt{D \cdot l_i}\big)$ w.h.p. Further, the total number of messages required is $\tilde{O}(\max\{\sum_{i=1}^{N} l_i,\ m\sqrt{l_{max}}\})$, where $l_{max} = \max_{i\leq N} l_i$.
\end{theorem}

Can we say something stronger? Perhaps we can handle an even larger $r$? If we state the theorem in terms of $N\geq n$, or something even larger, then the message complexity perhaps looks even better...

The above is a CONJECTURE, but the below is already a THEOREM. It follows quite easily from our PODC 2010 paper but is still not immediate, and I think it is already a much stronger statement from a practical standpoint than what our PODC paper suggests.

\begin{theorem}
There is an algorithm that, in the dynamic congest model with an injection rate of $r\leq O(1/\sqrt{D \cdot l_{max}})$ and for any request sequence length $N \geq 1$, can address random walk requests $(s_i, l_i)$, where $l_{max} = \max_{i\leq N} l_i$, with an average round complexity of $\tilde{O}\big(\frac{1}{N}\sum_{i=1}^{N}\sqrt{D \cdot l_i}\big)$ w.h.p. Further, the total number of messages required is $\tilde{O}(\max\{\sum_{i=1}^{N} l_i,\ m\sqrt{l_{max}}\})$.
\end{theorem}
\begin{proof}
To be written up; the argument is quite simple and follows easily from our previous work.
\end{proof}

Notice that we have $r = O(1/\sqrt{D \cdot l_{max}})$, so no new request arrives before the previous request is completed. Still, the contribution of this theorem is that we obtain OPTIMAL message complexity for large $N$, which provides increased practicality; the optimal message bound is the first level of the contribution. Another reason this result builds (albeit incrementally) on our previous work is that the lengths of the walks can vary. Even for this theorem, we need to state the algorithms more carefully (to handle requests of varying lengths, and to specify how to re-use walks generated during a previous epoch, i.e., for a previous request).

THIS THEOREM IS VERY SIMPLE BUT GOOD TO FORMALIZE FULLY - ALSO, GOPAL'S SUGGESTION IS TO EXPRESS MESSAGES ON A PER-REQUEST BASIS AS WELL. I'LL FORMALIZE THESE SOON - ATISH. What is the best $r$ we can handle for this theorem? Is it somewhat better if the sources are random?

IMPORTANT POINT: We do not even have to restrict ourselves to proving a bound of $\tilde{O}\big(\frac{1}{N}\sum_{i=1}^{N}\sqrt{D \cdot l_i}\big)$ rounds. We may even be able to prove $\tilde{O}(1)$ rounds w.h.p., depending on the injection rate, $l_{max}$, $N$, and whether the request sources are random. So proving CONSTANT ROUNDS and OPTIMAL MESSAGE COMPLEXITY is also possible now, though under restrictive settings. We should work this out, as well as the more nontrivial $\sqrt{l}$-type bounds...

\section{Future Work}

What happens for distance computation in this model? What frequency of requests can one handle?

What about this model with failures?

  \let\oldthebibliography=\thebibliography
  \let\endoldthebibliography=\endthebibliography
  \renewenvironment{thebibliography}[1]{%
    \begin{oldthebibliography}{#1}%
      \setlength{\parskip}{0ex}%
      \setlength{\itemsep}{0ex}%
  }%
  {%
    \end{oldthebibliography}%
  }
{ \small
\bibliographystyle{abbrv}
\bibliography{Distributed-RW}
}

\end{document}
