
\section{Introduction}
Many large-scale, real-world networks such as peer-to-peer networks,
the Web, and social networks are highly dynamic with continuously
changing topologies. The evolution of the network as a whole is
typically determined by the decentralized behavior of nodes, i.e., the
local topological changes made by the individual nodes (e.g., adding
edges between neighbors).  Understanding the dynamics of such local
processes is critical for both analyzing the underlying stochastic
phenomena, e.g., in the emergence of structures in social networks,
the Web and other real-world networks \cite{b1,b2,b3}, and for
designing practical algorithms for associated algorithmic problems,
e.g., in resource discovery in distributed networks
\cite{leighton,law-siu} or in the analysis of algorithms for the Web
\cite{frieze1, frieze2}.  In this paper, we study the dynamics of
network evolution that result from {\em local} gossip-style
processes. Gossip-based processes have recently received significant
attention because of their simplicity of implementation, scalability
to large network size, and robustness to frequent network topology
changes; see, e.g., \cite{demers, kempe1, kempe2, chen-spaa, kempe,
  karp, shah, boyd, ozalp1, ozalp2} and the references therein.  In
particular, gossip-based protocols have been used to efficiently and
robustly construct various overlay topologies dynamically in a fully
decentralized manner \cite{ozalp1}.  In a local gossip-based algorithm
(e.g., \cite{chen-spaa}), each node exchanges information with a small
number of randomly chosen neighbors in each round.\footnote{Gossip, in
  some contexts (see e.g., \cite{karp,kempe}), has been used to denote
  communication with a random node in the network, as opposed to only
  a directly connected neighbor.  The former model essentially assumes
  that the underlying graph is complete, whereas the latter (as
  assumed here) is more general and applies even to arbitrary
  graphs. The local gossip process is typically more difficult to
  analyze due to the dependences that arise as the network evolves.}
The randomness inherent in the gossip-based protocols naturally
provides robustness, simplicity, and scalability. While much of the
theoretical work on gossip-based protocols (including work on rumor
spreading), especially {\em push-pull} type algorithms \cite{karp,
kempe, chen-spaa, doerr, flavio, giakkoupis}, focuses on analyzing
various gossip-based tasks (e.g., computing aggregates or spreading a
rumor) on {\em static} graphs, a key feature of this work is
rigorously analyzing a gossip-based process in a {\em dynamically
changing} graph, where the graph topology changes in a probabilistic
fashion due to the behavior of the gossip-style process itself. We
also note that
there have been recent works (see e.g., \cite{chen,
  disc12,podc12,podc08,icalp14,soda12,podc13,spaa13,clementi+cdfipps:evolving}
and the references therein) that have analyzed gossip-style processes,
random walks, and flooding processes in dynamic networks where the
topological changes are determined by an adversary (usually, but not
always, independent of the processes that occur over these networks).
     
We present two illustrative application domains for our study.  First,
consider a P2P network, where nodes (computers or end-hosts with
IDs/IP addresses) can communicate only with nodes whose IP address are
known to them.  A basic building block of such a dynamic distributed
network is to efficiently discover the IP addresses of all nodes that
currently exist in the network.  This task, called {\em resource
  discovery} \cite{leighton}, is a vital mechanism in a dynamic
distributed network with many applications~\cite{leighton,ittai}: when
many nodes in the system want to interact and cooperate they need a
mechanism to discover the existence of one another.  Resource
discovery is typically done using a local mechanism \cite{leighton};
in each {\em round}\/ nodes discover other nodes and this changes the
resulting network---new edges are added between the nodes that
discovered each other.  As the process proceeds, the graph becomes
denser and denser and will finally result in a complete graph.  Such a
process was first studied in \cite{leighton} which showed that a
simple randomized process is enough to guarantee almost-optimal time
bounds for the time taken for the entire graph to become complete
(i.e., for all nodes to discover all other nodes). Their randomized
{\em Name Dropper} algorithm operates as follows: in each round, each
node chooses a random neighbor and sends {\em all} the IP addresses it
knows.  Note that while this process is also gossip-based, the
information sent by a node to its neighbor can be extremely large
(i.e., of size $\Omega(n)$).  

More recently, self-stabilization protocols have been designed for
constructing and maintaining P2P overlay networks, e.g.,
\cite{berns,jacob}. These protocols guarantee convergence to a desired
overlay topology (e.g., the SKIP+ graph) starting from any arbitrary
topology via local checking and repair.  For example, the
self-stabilizing protocol of \cite{berns} proceeds by continuously
discovering new neighbors (via transitive closure) until a complete
graph is formed. Then the repair process is initiated. This can also
be considered as a local gossip-based process in an underlying virtual
graph with changing (added) edges. \junk{However, the process is not
  lightweight as information sent by a node to its neighbor can be
  extremely large (i.e., of size $\Omega(n)$).}  In both the above
examples, the assumption is that the starting graph is arbitrary but
(at least) weakly connected.  The gossip-based processes that we study
also have the same goal---starting from an arbitrary connected
graph, each node discovers all nodes as quickly as possible---in a
setting where individual message sizes are small ($O(\log n)$ bits).

Second, in social networks, nodes (people) discover new nodes through
exchanging contacts with their neighbors (friends). Discovery of new
nodes changes the underlying network---new edges are added to the
network---and the process continues in the changed network.  For
example, consider the {\em LinkedIn} network, a large social network
of professionals on the Web.\footnote{\url{http://www.linkedin.com}.}
The nodes of the network represent people and edges are added between
people who directly know each other (i.e., between direct contacts).
Edges are generally undirected, but LinkedIn also allows directed
edges, where only one node is in the contact list of another node.
LinkedIn allows two mechanisms to discover new contacts.  The first
can be thought of as a {\em triangulation} process (see
Figure~\ref{fig:intro}(a)): A person can introduce two of his friends
that could benefit from knowing each other---he can mutually
introduce them by giving their contacts. The second can be thought of
as a {\em two-hop} process (see Figure~\ref{fig:intro}(b)): If {\em
  you} want to acquire a new contact then you can use a shared
(mutual) neighbor to introduce yourself to this contact; i.e., the new
contact has to be a two-hop neighbor of yours.  Both the processes can
be modeled via gossip in a natural way (as we do shortly below) and
the resulting evolution of the network can be studied: e.g., how and
when do clusters emerge?  How does the diameter change with time?  In
the social network context, our study focuses on the following
question: how long does it take for all the nodes in a connected
induced subgraph of the network to discover all the nodes in the
subgraph?  This is useful in scenarios where members of a social
group, e.g., alumni of a school, members of a club, discover all
members of the group through local gossip operations.

\begin{figure}[ht]
\begin{center}
  \includegraphics[width=6in]{./figures/model.jpg}
 \caption{(a) Push discovery or triangulation process. (b) Pull
   discovery or two-hop walk process. (c) Non-monotonicity of the
   triangulation (push) process---the expected convergence time for the
   4-edge graph exceeds that for the 3-edge subgraph; the left graph
   makes progress towards the complete graph faster than the right
   graph.\label{fig:intro}}
\end{center}
\end{figure}

\BfPara{Gossip-based discovery}  Motivated directly by the above
applications, we analyze two lightweight, randomized gossip-based
discovery processes.  We assume that we start with an arbitrary
undirected connected graph and the process proceeds in {\em synchronous
rounds}.  Communication among nodes occurs only through edges in the
network. We further assume that the size of each message sent by a
node in a round is at most $O(\log n)$ bits, i.e., the size of an ID.
  \begin{enumerate}
\item {\sf Push discovery (triangulation)}: In each round, each
  node chooses two random neighbors and connects them by ``pushing''
  their mutual information to each other. In other words, each node
  adds an undirected edge between two of its random neighbors; if the
  two neighbors are already connected, then this does not create any
  new edge.  Note that this process, which is illustrated in
  Figure~\ref{fig:intro}(a), is completely local.  To execute the
  process, a node only needs to know its neighbors; in particular, no
  two-hop information is needed. (Note that all nodes select the edges to add simultaneously in each round.) \junk{Note that this is similar in spirit to the {\em triangulation} procedure of Linkedin described earlier, i.e., a node completes a triangle with two of its chosen neighbors.\footnote{However, we note that in our process the two neighbors are chosen randomly, unlike in LinkedIn.}  }

 \item {\sf Pull discovery (two-hop walk)}: In each round, each node
   connects itself to a random neighbor of a neighbor chosen uniformly
   at random, by ``pulling'' a random neighboring ID from a random
   neighbor.  Alternatively, one can think of each node doing a
   two-hop random walk and connecting to its destination.  This
   process, illustrated in Figure~\ref{fig:intro}(b), can also be
   executed locally: a node (say $u$) simply asks one of its neighbors $v$ for
   an ID of one of $v$'s neighbors  and then adds an undirected edge to
   the received contact (say $w$). (This can be thought of as a two-step process: first, $u$ learns of $w$'s address from $v$; second, $u$ also
notifies $w$ of its address.)  \junk{Note that this is similar in spirit to the
   {\em two-hop} procedure of LinkedIn described earlier.
   \footnote{Again, one difference is that in the process we analyze
     the particular each node in the two-hop walk is chosen uniformly
     at random from the appropriate neighborhood.}}
 \end{enumerate}
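As an illustration (not part of the formal model), the two rounds above can be simulated in a few lines of Python; the function names, the adjacency-set representation, and the convergence driver below are ours:

```python
import random

def push_round(adj):
    """One synchronous round of push discovery (triangulation): every node
    with at least two neighbors picks two of them at random and connects
    them.  Selections are collected first and applied together, modeling
    the simultaneous choices made in a round."""
    new_edges = []
    for u, nbrs in adj.items():
        if len(nbrs) >= 2:
            v, w = random.sample(sorted(nbrs), 2)
            new_edges.append((v, w))
    for v, w in new_edges:
        adj[v].add(w)  # adding an existing edge is a no-op on sets
        adj[w].add(v)

def pull_round(adj):
    """One synchronous round of pull discovery (two-hop walk): every node
    takes a two-hop random walk and connects to its destination."""
    new_edges = []
    for u, nbrs in adj.items():
        if nbrs:
            v = random.choice(sorted(nbrs))
            w = random.choice(sorted(adj[v]))
            if w != u:  # the walk may return to u; then nothing is added
                new_edges.append((u, w))
    for u, w in new_edges:
        adj[u].add(w)
        adj[w].add(u)

def rounds_to_complete(adj, round_fn):
    """Run one of the round functions until the graph is complete;
    return the number of rounds taken."""
    n, t = len(adj), 0
    while any(len(nbrs) < n - 1 for nbrs in adj.values()):
        round_fn(adj)
        t += 1
    return t

# Example starting graph: a 6-node path.
path = {i: set() for i in range(6)}
for i in range(5):
    path[i].add(i + 1)
    path[i + 1].add(i)
```

For instance, `rounds_to_complete(path, push_round)` runs the triangulation process on the path until it becomes the complete graph on 6 nodes.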
  
  Both the above processes are local in the sense that each node only
  communicates with its neighbors (or its two-hop neighbors, as in
  the case of pull discovery) in any round, and lightweight in the
  sense that the average work done per node is only a constant per
  round.  Both processes are also easy to implement and generally
  oblivious to the current topology structure, changes, or failures.
  It is interesting also to consider variants of the above processes
  in directed graphs. In particular, we study the two-hop walk process,
  which naturally generalizes to directed graphs: each node does a
  two-hop directed random walk and adds a {\em directed}\/ edge to its
  destination.\footnote{One can also study the push process in directed
    graphs: each node chooses two random outgoing neighbors and adds
    a directed edge in one or both directions. It will be interesting
    to analyze this process and compare with the corresponding results
    of the push process on undirected graphs.}  We are mainly
  interested in the time taken by the process to converge to the {\em
    transitive closure} of the initial graph, i.e., till no more new
  edges can be added.  \junk{In an undirected graph, the processes
    will converge to a complete graph, while that may not necessarily
    be the case in directed graphs.}
  
%  Shall we mention about triangulation in directed graphs?

\smallskip  
\BfPara{Our results}   
Our main contribution is an analysis of the above gossip-based
discovery processes in both undirected and directed graphs.  In
particular, we show the following results (the precise theorem
statements are in the respective sections).

\begin{itemize}
\item {\bf Undirected graphs:} In Sections~\ref{sec:triangulation-10p}
  and \ref{sec:2hop-10p}, we show that for {\em any} undirected
  $n$-node graph, both the push and the pull discovery processes
  converge to the transitive closure of the graph in $O(n\log^2 n)$
  rounds with high probability.  We also show that $\Omega(n \log n)$
  is a lower bound on the number of rounds needed, with high
  probability, for any $n$-node connected graph that is missing
  $\Omega(n)$ edges. Hence our analysis is tight within a
  logarithmic factor.

Our results also apply when we require only a subset of nodes to
converge.  In particular, consider a subset of $k$ nodes that induce a
connected subgraph and run the gossip-based process {\em restricted to
  this subgraph}.  Then by just applying our results to this subgraph,
we immediately obtain that it will take $O(k\log^2 k)$ rounds, with
high probability (in terms of $k$), for all the nodes in the subset to
converge to a complete subgraph.  As discussed above, such a result is
applicable in social network scenarios where all nodes in a subset of
network nodes discover one another through gossip-based processes.
    
 \item {\bf Directed graphs:} In Section \ref{sec:directed-10p}, we show
   that the pull process takes $O(n^2 \log n)$ time for any $n$-node
   directed graph, with high probability.  We show a matching lower
   bound for weakly connected graphs, and an $\Omega(n^2)$ lower bound
   for strongly connected directed graphs.  Our analysis indicates
   that the directionality of edges can greatly impede the resource
   discovery process.  \junk{
  \item Can we talk about the message (communication) complexity 
  and the bit complexity of our algorithms (e.g., these are done
  in prior works in resource discovery, see e.g., the paper by Abraham
  and Dolev---available in our kdissemination website.)
   
  \item Other results to add ?---e.g., robustness to failures, only subset of nodes participating etc.
}
\end{itemize}  

\BfPara{Applications} The gossip-based discovery processes we study
are directly motivated by the two scenarios outlined above, namely
algorithms for resource discovery in distributed networks and
analyzing how discovery processes affect the evolution of social
networks. Since our processes are simple, lightweight, and easy to
implement, they can be used for resource discovery in distributed
networks.  The {\em Name Dropper}\/ discovery algorithm has been
applied to content delivery systems~\cite{leighton}.  As mentioned
earlier, {\em Name Dropper}\/ and other prior algorithms for the
discovery problem \cite{leighton, law-siu, kutten, ittai} complete in a
polylogarithmic number of rounds ($O(\log^2 n)$ or $O(\log n)$), but
may transfer $\Theta(n)$ bits per edge per round.  As a result, they
may not be scalable for bandwidth and resource-constrained networks
(e.g., peer-to-peer, mobile, or sensor networks).  One approach to using
these algorithms in a bandwidth-limited setting ($O(\log n)$ bits per
message) is to spread the transfer of long messages over a linear
number of rounds, but this requires coordination and maintaining
state.  In contrast, the ``stateless'' nature of the gossip processes
we study and the fact that the results apply to any initial graph make
the process attractive in unpredictable environments.  \junk{ In
  contrast, the {\em Name Dropper} algorithm of \cite{leighton}, . We
  note that, however, because there is essentially no restriction on
  the bandwidth, the number of rounds taken by the {\em Name Dropper}
  algorithm is $O(\log^2 n)$. (We note that in our model, $\Omega(n)$
  is a trivial lower bound).} Our analyses can also give insight into
the growth of social networks\junk{ such as LinkedIn, Twitter, or
  Facebook} that grow in a decentralized way by the local actions of
the individual nodes. Although convergence to the complete graph as a
whole is unrealistic for large social networks, as mentioned earlier,
the result is more relevant to discovering all members of a (smaller)
subgraph.  In addition to the application of discovering
all members of a group, analyses of the processes such as the ones we
study can help analyze both short-term and long-term evolution of
social networks.  In particular, it can help in predicting the sizes
of the immediate neighborhoods as well as those of the second- and
third-degree neighborhoods (these are listed for every node in LinkedIn).
An estimate of these can help in designing efficient algorithms and
data structures to search and navigate the social network.

\smallskip
\BfPara{Technical contributions} Our main technical contribution is a
probabilistic analysis of localized gossip-based discovery in
arbitrary networks.  While our processes can be viewed as graph-based
coupon collection processes, one significant distinction with past
work in this
area~\cite{adler+hkv:p2p,alon:combinatorics,dimitriov+p:coupon} is
that the graphs in our processes are constantly changing.  The
dynamics and locality inherent in our processes introduce nontrivial
dependencies, which make it difficult to characterize the network as
it evolves.  

A further challenge is posed by the fact that the expected convergence
time for the two processes is {\em not monotonic}; that is, the
processes may {\em take longer}\/ to converge starting from a graph
$G$ than starting from a subgraph $H$ of $G$.
Figure~\ref{fig:intro}(c) presents a small example illustrating this
phenomenon.  This seemingly counterintuitive phenomenon is, however,
not surprising considering that the cover time of random
walks shares a similar property.  One consequence of these hurdles
is that analyzing the convergence time for even highly specialized or
regular graphs is challenging since the probability distributions of
the intermediate graphs are hard to specify.  Our lower bound analysis
for a specific strongly connected directed graph in
Theorem~\ref{thm:directed.lower-10p} illustrates some of the
challenges.  In our main upper bound results
(Theorems~\ref{thm:triangulation-10p}
and~\ref{thm:graph+randwalk-10p}), we overcome these technical
difficulties by presenting a uniform analysis for all graphs, in which
we study different local neighborhood structures and show how each
leads to rapid growth in the minimum degree of the graph.

\junk{
\paragraph{Other related work.} ?


\paragraph{Organization of the paper.} Giving a road map of the sections here
can be useful...
 }
