\section{Diffusion under organic dynamics}
\label{sec:intro.discovery}

Many large-scale, real-world networks such as peer-to-peer networks,
the Web, and social networks are highly dynamic with continuously
changing topology. The evolution of the network as a whole is
typically determined by the decentralized behavior of nodes, i.e., the
local topological changes made by the individual nodes (e.g., adding
edges between neighbors). These dynamics can be captured as diffusion
processes on self-altering networks. Understanding the dynamics of such
diffusion processes is critical for both analyzing the underlying
stochastic phenomena, e.g., in evolution of social networks, the Web
and other real-world networks \cite{b1,b2,b3}, and designing practical
algorithms for associated algorithmic problems, e.g., in resource
discovery in distributed networks \cite{leighton,law-siu} or in the
analysis of algorithms for the Web \cite{frieze1, frieze2}. In this
thesis, we study the dynamics of network evolution that result from
{\em local} gossip-style processes. Gossip-based processes have
recently received significant attention because of their simplicity of
implementation, scalability to large networks, and robustness to
frequent network topology changes; see, e.g., \cite{demers, kempe1,
  kempe2, chen-spaa, kempe, karp, shah, boyd} and the references
therein.  In a local gossip-based algorithm (e.g., \cite{chen-spaa}),
each node exchanges information with a small number of randomly chosen
neighbors in each round.\footnote{Gossip, in some contexts (see e.g.,
  \cite{karp,kempe}), has been used to denote communication with a
  random node in the network, as opposed to only a directly connected
  neighbor.  The former model essentially assumes that the underlying
  graph is complete, whereas the latter (as assumed here) is more
  general and applies even to arbitrary graphs. The local gossip
  process is typically more difficult to analyze due to the
  dependencies that arise as the network evolves.}  The randomness
inherent in the gossip-based protocols naturally provides robustness,
simplicity, and scalability.

We present two illustrative applications for our study.  First,
consider a P2P network, where nodes (computers or end-hosts with
IDs/IP addresses) can communicate only with nodes whose IP addresses are
known to them.  A basic building block of such a dynamic distributed
network is to efficiently discover the IP addresses of all nodes that
currently exist in the network.  This task, called {\em resource
  discovery} \cite{leighton}, is a vital mechanism in a dynamic
distributed network with many applications~\cite{leighton,ittai}: when
many nodes in the system want to interact and cooperate, they need a
mechanism to discover the existence of one another.  Resource
discovery is typically done using a local mechanism \cite{leighton};
in each {\em round} nodes discover other nodes and this changes the
resulting network --- new edges are added between the nodes that
discovered each other.  As the process proceeds, the graph becomes
denser and eventually becomes complete.  Such a process was first
studied in \cite{leighton}, which showed that a simple randomized
process suffices to guarantee an almost-optimal bound on the time
taken for the entire graph to become complete (i.e., for all nodes to
discover all other nodes). Their randomized
{\em Name Dropper} algorithm operates as follows: in each round, each
node chooses a random neighbor and sends {\em all} the IP addresses it
knows.  Note that while this process is also gossip-based, the
information sent by a node to its neighbor in a single round can be
extremely large (i.e., of size $\Omega(n)$).
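As an illustration, the round structure of {\em Name Dropper} described above can be simulated in a few lines of Python. This is our own sketch, not the exact algorithm of \cite{leighton}: the representation of each node's knowledge as a set of integer IDs, and the synchronous application of updates at the end of a round, are modeling assumptions.

```python
import random

def name_dropper_round(known):
    """One synchronous round: every node sends its entire contact list
    (potentially Theta(n) IDs) to one uniformly random node it knows."""
    updates = {u: set() for u in known}
    for u, contacts in known.items():
        if contacts:
            v = random.choice(sorted(contacts))
            # v learns everything u knows, including u itself.
            updates[v] |= contacts | {u}
    for v, new in updates.items():
        known[v] |= new - {v}

def name_dropper(known):
    """Run rounds until every node knows all others; return the round count."""
    n = len(known)
    rounds = 0
    while any(len(c) < n - 1 for c in known.values()):
        name_dropper_round(known)
        rounds += 1
    return rounds
```

Note that each message in this sketch carries a full contact list, which is exactly the $\Omega(n)$-size communication that distinguishes {\em Name Dropper} from the $O(\log n)$-bit-per-message processes studied in this thesis.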

Second, in social networks, nodes (people) discover new nodes through
exchanging contacts with their neighbors (friends). Discovery of new
nodes changes the underlying network --- new edges are added to the
network --- and the process continues in the changed network.  For
example, consider the {\em LinkedIn}
network\footnote{\url{http://www.linkedin.com}.}, a large social
network of professionals on the Web. The nodes of the network
represent people and edges are added between people who directly know
each other --- between direct contacts.  Edges are generally
undirected, but LinkedIn also allows directed edges, where one node
appears in the other's contact list but not vice versa.  LinkedIn
offers two
mechanisms to discover new contacts.  The first can be thought of as a
{\em triangulation} process (see Figure~\ref{fig:discovery.intro}(a)):
A person can introduce two of his friends who could benefit from
knowing each other, mutually introducing them by exchanging their
contact information. The second can be thought of as a {\em two-hop} process (see
Figure~\ref{fig:discovery.intro}(b)): If {\em you} want to acquire a
new contact then you can use a shared (mutual) neighbor to introduce
yourself to this contact; i.e., the new contact has to be a two-hop
neighbor of yours.  Both the processes can be modeled via gossip in a
natural way and the resulting evolution of the network can be studied.
This yields insight on the evolution of the social network over time.

\begin{figure}[ht]
\begin{center}
  \includegraphics[width=3.5in]{./figures/model.jpg}
 \caption{(a) Push discovery or triangulation process. (b) Pull
   discovery or two-hop walk process. (c) Non-monotonicity of the
   triangulation process --- the expected convergence time for the
   4-edge graph exceeds that for the 3-edge
   subgraph.\label{fig:discovery.intro}}
\end{center}
\end{figure}

\paragraph{Gossip-based discovery.}   
Motivated by the above applications, we analyze two natural
gossip-based discovery processes (also diffusion processes).  We
assume that we start with an arbitrary undirected connected graph and
the process proceeds in synchronous rounds.  Communication among nodes
occurs only through edges in the network. We further assume that the
size of each message sent by a node in a round is at most $O(\log n)$
bits, i.e., the size of an ID.
\begin{enumerate}
\item {\sf Push discovery (triangulation)}: In each round, each node
  chooses two random neighbors and connects them by ``pushing'' each
  one's ID to the other. In other words, each node adds an
  undirected edge between two of its random neighbors; if the two
  neighbors are already connected, then this does not create any new
  edge.  Note that this process, which is illustrated in
  Figure~\ref{fig:discovery.intro}(a), is completely local.  To
  execute the process, a node only needs to know its neighbors; in
  particular, no two-hop information is needed.

 \item {\sf Pull discovery (two-hop walk)}: In each round, each node
   connects itself to a random neighbor of one of its randomly chosen
   neighbors, by ``pulling'' a random neighboring ID from a random
   neighbor.  Alternatively, one can think of each node doing a
   two-hop random walk and connecting to its destination.  This
   process, illustrated in Figure~\ref{fig:discovery.intro}(b), can also be
   executed locally: a node simply asks one of its neighbors $v$ for
   an ID of one of $v$'s neighbors and then adds an undirected edge to
   the received contact.
 \end{enumerate}
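Both rounds are simple enough to simulate directly. The following Python sketch is our own illustration, not part of the formal model: the adjacency-set representation, the assumption of integer node IDs, and the termination test (the graph is complete) are our choices.

```python
import random

def push_round(adj):
    """Push discovery (triangulation): every node of degree >= 2 picks
    two distinct random neighbors and connects them by an undirected edge."""
    new_edges = []
    for u, nbrs in adj.items():
        if len(nbrs) >= 2:
            a, b = random.sample(sorted(nbrs), 2)
            new_edges.append((a, b))
    # Apply all edges at the end, so the round is synchronous.
    for a, b in new_edges:
        adj[a].add(b)
        adj[b].add(a)

def pull_round(adj):
    """Pull discovery (two-hop walk): every node takes a two-hop random
    walk and connects itself to the endpoint (assumes no isolated nodes)."""
    new_edges = []
    for u, nbrs in adj.items():
        v = random.choice(sorted(nbrs))
        w = random.choice(sorted(adj[v]))
        if w != u:  # the walk may return to u; then no edge is added
            new_edges.append((u, w))
    for u, w in new_edges:
        adj[u].add(w)
        adj[w].add(u)

def rounds_to_complete(adj, round_fn):
    """Rounds until the (connected) graph becomes the complete graph."""
    n = len(adj)
    rounds = 0
    while any(len(nbrs) < n - 1 for nbrs in adj.values()):
        round_fn(adj)
        rounds += 1
    return rounds
```

For example, `rounds_to_complete(path_graph, push_round)` counts how long the triangulation process takes to turn a path into a complete graph; both processes demonstrably reach the complete graph on any connected undirected input.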
  
Both the above processes are local in the sense that each node only
communicates with its neighbors in any round, and lightweight in the
sense that the amortized work done per node is only a constant per
round.  Both processes are also easy to implement and largely
oblivious to the current topology and to changes or failures.  It is
also interesting to consider variants of the above processes in
directed graphs. In particular, we study the two-hop walk process,
which generalizes naturally to directed graphs: each node does a
two-hop directed random walk and adds a {\em directed} edge to its
destination.  We are mainly interested in the time taken by the
process to converge to the transitive closure of the initial graph,
i.e., until no new edges can be added.
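To make the convergence criterion concrete, the following Python sketch (again our own illustration, with the same representational assumptions as above) runs the directed two-hop walk until the edge set equals the transitive closure of the initial graph, computed separately as a fixed point:

```python
import random

def transitive_closure(out_adj):
    """Transitive closure (without self-loops) of a directed graph,
    computed as a simple fixed point; does not modify its argument."""
    closure = {u: set(outs) for u, outs in out_adj.items()}
    changed = True
    while changed:
        changed = False
        for u in closure:
            reach = set().union(*(closure[v] for v in closure[u]))
            new = reach - closure[u] - {u}
            if new:
                closure[u] |= new
                changed = True
    return closure

def directed_two_hop_round(out_adj):
    """Each node takes a two-hop directed random walk u -> v -> w and
    adds the directed edge u -> w (unless w = u or it already exists)."""
    new_edges = []
    for u, outs in out_adj.items():
        if outs:
            v = random.choice(sorted(outs))
            if out_adj[v]:
                w = random.choice(sorted(out_adj[v]))
                if w != u and w not in outs:
                    new_edges.append((u, w))
    for u, w in new_edges:
        out_adj[u].add(w)

def rounds_to_closure(out_adj):
    """Rounds until no further edges can be added, i.e., until the
    graph equals the transitive closure of the initial graph."""
    target = transitive_closure(out_adj)
    rounds = 0
    while out_adj != target:
        directed_two_hop_round(out_adj)
        rounds += 1
    return rounds
```

The fixed-point computation of the closure is only a reference for the termination check; the process itself reaches the closure because any still-missing closure edge lies on a directed path whose first two hops form an addable shortcut.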

\paragraph{Our results.}   
We present almost-tight bounds on the number of rounds it takes for
the push and pull discovery processes to converge.
\begin{itemize}
\item {\bf Undirected graphs:} In
  Sections~\ref{sec:discovery.triangulation} and
  \ref{sec:discovery.2hop}, we show that for {\em any} undirected
  $n$-node graph, both the push and the pull discovery processes
  converge in $O(n\log^2 n)$ rounds with high probability.  We also
  show that $\Omega(n \log n)$ is a lower bound on the number of
  rounds needed for almost any $n$-node graph. Hence our analysis is
  tight to within a logarithmic factor.
  
 \item {\bf Directed graphs:} In Section \ref{sec:discovery.directed},
   we show that the pull process takes $O(n^2 \log n)$ time for any
   $n$-node directed graph, with high probability.  We show a matching
   lower bound for weakly connected graphs, and an $\Omega(n^2)$ lower
   bound for strongly connected directed graphs.  Our analysis
   indicates that the directionality of edges can greatly impede the
   resource discovery process.
\end{itemize}  

\paragraph{Applications.}
The gossip-based discovery processes we study are directly motivated
by the two scenarios outlined above, namely algorithms for resource
discovery in distributed networks and analyzing how discovery
processes affect the evolution of social networks. Since our processes are
simple, lightweight, and easy to implement, they can be used for
resource discovery in distributed networks. The original resource
discovery algorithm of \cite{leighton} was helpful in developing
systems like Akamai.  Unlike prior algorithms for the discovery
problem \cite{leighton, law-siu, kutten, ittai}, the amortized work
done per node in our processes is only constant per round and hence
this can be efficiently implemented in bandwidth and
resource-constrained networks (e.g., peer-to-peer or sensor
networks). In contrast, the {\em Name Dropper} algorithm of
\cite{leighton} can transfer up to $\Theta(n)$ IDs per edge per round
and hence may not scale to large networks. However, because it places
essentially no restriction on bandwidth, {\em Name Dropper} takes only
$O(\log^2 n)$ rounds, whereas in our model $\Omega(n)$ rounds is a
trivial lower bound. Our analyses can also give insight into the
growth of real social networks such as LinkedIn, Twitter, or Facebook,
which grow in a decentralized way through the local actions of
individual nodes. For example, they can help in predicting the size of
a node's immediate neighborhood as well as the sizes of its second-
and third-degree neighborhoods (e.g., these are listed for every node
in LinkedIn).  Estimates of these quantities can help in designing efficient
algorithms and data structures to search and navigate the social
network.

\paragraph{Technical contributions.} 
Our main technical contribution is a probabilistic analysis of
localized gossip-based discovery in arbitrary networks.  While our
processes can be viewed as graph-based coupon collection processes,
one significant distinction with past work in this
area~\cite{adler+hkv:p2p,alon:combinatorics,dimitriov+p:coupon} is
that the graphs in our processes are constantly changing.  The
dynamics and locality inherent in our process introduces nontrivial
dependencies, which makes it difficult to characterize the network as
it evolves.  A further challenge is posed by the fact that the
expected convergence time for the two processes is {\em not
  monotonic}; that is, the processes may {\em take longer} to converge
starting from a graph $G$ than starting from a subgraph $H$ of $G$.
Figure~\ref{fig:discovery.intro}(c) presents a small example
illustrating this phenomenon.  This seemingly counterintuitive
behavior is, however, not surprising, given that the cover time of
random walks exhibits a similar non-monotonicity.  One
consequence of these hurdles is that analyzing the convergence time
for even highly specialized or regular graphs is challenging since the
probability distributions of the intermediate graphs are hard to
specify.  Our lower bound analysis for a specific strongly connected
directed graph in Theorem~\ref{thm:discovery.directed.lower}
illustrates some of the challenges.  In our main upper bound results,
we overcome these technical difficulties by presenting a uniform
analysis for all graphs, in which we study different local
neighborhood structures and show how each leads to rapid growth in the
minimum degree of the graph.
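Non-monotonicity claims of this kind are easy to probe empirically, even where they are hard to prove. The following Python sketch is our own illustration: it estimates the expected convergence time of the triangulation process by Monte Carlo simulation, so that a graph can be compared against one of its subgraphs. The example graphs used in the test are placeholders, not the specific pair from Figure~\ref{fig:discovery.intro}(c).

```python
import random

def estimate_push_rounds(edges, n, trials=200):
    """Monte Carlo estimate of the expected number of rounds for the
    triangulation (push) process to make an n-node graph complete.
    `edges` lists the undirected edges of the initial connected graph."""
    total = 0
    for _ in range(trials):
        # Rebuild the initial adjacency structure for each trial.
        adj = {u: set() for u in range(n)}
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        rounds = 0
        while any(len(nbrs) < n - 1 for nbrs in adj.values()):
            new_edges = []
            for u, nbrs in adj.items():
                if len(nbrs) >= 2:
                    a, b = random.sample(sorted(nbrs), 2)
                    new_edges.append((a, b))
            for a, b in new_edges:
                adj[a].add(b)
                adj[b].add(a)
            rounds += 1
        total += rounds
    return total / trials
```

Comparing the estimate for a graph against the estimate for a connected subgraph then indicates empirically whether the extra edges sped up or slowed down convergence, which is exactly the comparison behind the figure.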



