% !TEX root = sparsecut.tex
\section{Introduction}\label{sec:intro}
Developing distributed algorithms
for computing key metrics of a communication network is an important research goal with
various applications.
Network properties
--- which depend on the collective behavior of nodes
and links --- characterize global network performance
for tasks such as routing, sampling, and information dissemination. These in turn depend on topological properties of the network such as high connectivity, low diameter, high conductance, and good spectral properties \cite{mihail}.
%For example, overlay P2P  networks, which are virtual networks built over the Internet, are used for file sharing and content distribution applications.
%A P2P  network's topology should have good connectivity properties  for providing
%good quality of service at the virtual network
%layer, as well as for providing load balancing at the underlying network layer.
%Furthermore,
%it might be desirable that network is highly connected which enables robust communication
%even under failures. 
The above properties, all of which are critical, need to be measured periodically. A highly connected network is good for fault tolerance and reliable routing,
since a packet can be routed via many disjoint paths. Low diameter ensures
that packets can be routed quickly with short delay. Conductance (formally defined in Section \ref{sec:def}) measures how
``well-knit" the network is; it determines how fast a random walk
converges to the stationary distribution --- a quantity known as the {\em mixing time}.
Conductance is related to the {\em expansion}, {\em spectral gap}, and mixing time of a graph. High expansion and a large spectral gap imply that the graph
has a fast mixing time. Such a network supports fast random sampling (which
has many applications \cite{drw-jacm}) and low-congestion routing \cite{mihail}.
%The spectral properties (the graph spectrum --- the set of eigenvalues of the adjacency matrix) tell a great deal of the network structure. 


Sparse cuts are cuts with low conductance; they can be used
to identify well-connected clusters\footnote{A cut $(S,V-S)$ is a partition
of the set of nodes $V$ into $S$ (assume $|S| \leq |V|/2$) and $V-S$. A low-conductance cut has many more edges within $S$ than going outside $S$, and hence $S$ is relatively well (intra)connected.} and thus also potential ``bottlenecks" in the network. In particular, the edges crossing the cut
can be considered {\em critical} edges; they have been used in designing algorithms to
improve searching, maintain a well-connected topology, and reduce routing congestion in networks \cite{mihail}.
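For concreteness, we state here the standard form of the definitions that are made formal in Section \ref{sec:def} (this is only a reading aid). For a cut $(S, V\setminus S)$ with volume $\mathrm{vol}(S) = \sum_{v \in S} \deg(v)$,
\[
\phi(S) \;=\; \frac{|E(S, V\setminus S)|}{\min\{\mathrm{vol}(S),\, \mathrm{vol}(V\setminus S)\}},
\qquad
\phi \;=\; \phi(G) \;=\; \min_{\emptyset \neq S \subsetneq V} \phi(S).
\]
A sparse cut is then a cut $(S, V\setminus S)$ whose conductance $\phi(S)$ is close to this minimum.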
%Such algorithms are useful in the design, analysis, and maintenance of {\em topology aware} networks \cite{mihail}.
  
In this paper, we focus on developing  fast distributed algorithms for computing sparse cuts  in networks. 
Given an undirected $n$-node network $G$ with conductance $\phi$ (a quantity less than 1), the goal is to find a cut set whose conductance is close to $\phi$. (We note that computing the minimum conductance cut --- the one with conductance $\phi$, that of the network --- is NP-hard \cite{MatulaS90}.) Our main result is a fast distributed algorithm that finds a cut set with conductance $\tilde O(\sqrt{\phi})$. Our algorithm uses small-sized messages and works in the CONGEST distributed computing model. It builds on previous work \cite{LovaszS90,SpielmanT04} on classical (centralized) algorithms for sparse cuts. In particular, we adapt a key technical result (cf. Theorem~\ref{thm:conductance-estimate}) that follows from \cite{LovaszS90,SpielmanT04} to our distributed setting.
Our algorithm outputs a cut of conductance at most $\tilde O(\sqrt{\phi})$ with high probability, in $O(\frac{1}{b}( \frac{1}{\phi} + n)\log^2 n)$ rounds, where $b$ is the balance of the cut of the given conductance (cf. Section \ref{sec:def}). In particular, to find a cut of constant balance (i.e., the two sides of the cut are of approximately equal size), the algorithm takes $O((\frac{1}{\phi} + n)\log^2 n)$ rounds and finds such a cut (if it exists) with the same approximation guarantee.
%The second algorithm is a variant of the first one and can  be used to output a sparse {\em local} cluster, i.e., a cut that is near a given source node, and whose conductance is within a quadratic factor of the optimal local cut; the time required for this is $O(\frac{1}{\phi} + n)$ rounds, where $\phi$ is the conductance of the local cut.
Our algorithm can also be used to output a well-connected {\em local} cluster (cf. Section \ref{sec:def}), i.e., a subset $S$ of vertices containing a given source node such that the number of internal edges in $S$ is significantly higher than the number of edges leaving $S$.
Our distributed algorithm works without knowledge of the optimal value $\phi$, albeit at the cost of a $\log n$-factor slowdown in the running time. Hence our algorithm can be used
to find approximate conductance values both globally and locally with respect to a given source node.

Our approach crucially uses random walks. Random walks
are very local and lightweight and require little index or state maintenance,
which makes them attractive for self-organizing networks \cite{BBSB04,ZS06}.
Our approach, at a high level, is based on efficiently implementing
the method of Lov{\'a}sz and Simonovits \cite{LovaszS90,SpielmanT04}.
This method uses random walks to
estimate the probability distribution of such walks terminating at the various nodes.
This probability distribution can then be used to identify sparse cuts.
Our algorithm is fully decentralized and uses lightweight local computations (i.e., the computation within a node uses relatively simple operations; in particular, they are polynomially bounded).
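To make the underlying sweep procedure concrete, the following is a small centralized sketch (not our distributed algorithm) of a Lov{\'a}sz--Simonovits-style sweep cut: run a lazy random walk for a given number of steps, order the nodes by probability mass normalized by degree, and return the prefix cut of minimum conductance. The graph representation and function names are illustrative.

```python
def sweep_cut(adj, source, walk_length):
    """Centralized sketch of a Lovasz-Simonovits-style sweep cut.

    adj: dict mapping node -> list of neighbours (undirected graph).
    Runs `walk_length` steps of the lazy random walk from `source`,
    then sweeps prefixes of nodes ordered by probability/degree and
    returns the prefix cut of smallest conductance.
    """
    deg = {v: len(adj[v]) for v in adj}
    vol_total = sum(deg.values())

    # Lazy random-walk distribution: stay put with prob 1/2,
    # otherwise move to a uniformly random neighbour.
    p = {v: 0.0 for v in adj}
    p[source] = 1.0
    for _ in range(walk_length):
        q = {v: 0.5 * p[v] for v in adj}
        for v in adj:
            share = 0.5 * p[v] / deg[v]
            for u in adj[v]:
                q[u] += share
        p = q

    # Sweep: order nodes by normalized probability, test each prefix cut.
    order = sorted(adj, key=lambda v: p[v] / deg[v], reverse=True)
    best_set, best_phi = None, float("inf")
    prefix, vol, crossing = set(), 0, 0
    for v in order[:-1]:                      # skip the full vertex set
        prefix.add(v)
        vol += deg[v]
        # Adding v flips the crossing status of its incident edges.
        crossing += sum(1 if u not in prefix else -1 for u in adj[v])
        phi = crossing / min(vol, vol_total - vol)
        if phi < best_phi:
            best_phi, best_set = phi, set(prefix)
    return best_set, best_phi
```

On a "barbell" of two triangles joined by a single edge, the sweep started inside one triangle recovers that triangle as the sparse side of the cut.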
 
We also present an alternate approach to computing sparse cuts that is based on graph sparsification.
This approach uses a distributed algorithm to compute a {\em cut sparsifier} containing $\tilde{O}(n)$ edges (cf. Section \ref{sec:different-approach}).
(A cut sparsifier $G' = (V,E')$ of a graph $G = (V,E)$ is a (weighted) graph that preserves the weights of {\em all} cuts in $G$ to within a factor of $1 \pm \epsilon$
and is sparse, typically with $\tilde{O}(n)$ edges --- cf. Section \ref{sec:different-approach}.)
Once we have such a sparsifier, all its edges (i.e., the entire topology of the sparse graph) can be collected at one node, which locally computes a sparse cut. This approach gives a running time of $\tilde{O}(n)$ (more precisely, $O(n\log^6 n/\epsilon^2)$) in the CONGEST model and can compute, in principle, a $(1 \pm \epsilon)$-approximation to the sparsest cut (assuming that we allow exponential-time computations within a node); we discuss this further in Section \ref{sec:different-approach}.
The running time of this approach is independent of the conductance $\phi$ and the balance $b$.
This approach can take fewer distributed rounds than the random-walk-based approach, especially if $1/\phi$ is large. However, it has the drawback of computing the result at a single (central) node, unlike the random-walk-based approach.
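As a toy illustration of the exponential-time local step mentioned above: once the (sparsified) topology resides at a single node, that node can in principle find the minimum-conductance cut by brute-force enumeration of all vertex subsets. The sketch below (unweighted for simplicity; names are illustrative) does exactly that, and is feasible only for very small $n$.

```python
from itertools import combinations

def min_conductance_cut(adj):
    """Exact minimum-conductance cut by exhaustive enumeration.

    adj: dict mapping node -> list of neighbours (undirected graph).
    This is the exponential-time local computation a single node could
    run after collecting the (sparsified) topology; it enumerates all
    2^n - 2 cuts and so is usable only for very small graphs.
    """
    nodes = sorted(adj)
    deg = {v: len(adj[v]) for v in nodes}
    vol_total = sum(deg.values())
    best_set, best_phi = None, float("inf")
    for r in range(1, len(nodes)):
        for subset in combinations(nodes, r):
            s = set(subset)
            vol = sum(deg[v] for v in s)
            if vol > vol_total - vol:
                continue  # the complement is (or was) enumerated too
            crossing = sum(1 for v in s for u in adj[v] if u not in s)
            phi = crossing / vol
            if phi < best_phi:
                best_phi, best_set = phi, s
    return best_set, best_phi
```

On the same two-triangle barbell graph, the enumeration finds one triangle as the minimum-conductance side, with conductance $1/7$.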
 %Anisur: We should mention the 1/\eps multiplication with the running time of sparsification approach. The running time of sparsification based approach is better than random walk based approach when $1/\phi = \Omega(n \log^6 n/\eps^2)$.  
 
Finally, we show a lower bound
on the time needed for any distributed algorithm to compute any non-trivial sparse cut. In particular, we show that there is a graph in which any distributed approximation algorithm for the sparsest cut (for any non-trivial approximation ratio, not just a quadratic approximation) takes $\tilde \Omega(\sqrt{n} + D)$ rounds, where $D$ is the diameter of the graph.



%It is known  (see Footnote \ref{foot:cheeger}) that $\tilde O(\sqrt{M}) \leq \frac{1}{\phi} \leq \tilde O(M)$ and hence the second algorithm in general   
%is at least as fast as the first one and can be up to quadratic times faster.

%Our algorithm can be useful in efficiently finding sparse cuts 
%(and their conductance values) and critical edges (the edges crossing sparse cuts) in distributed networks.
%In particular, the work of \cite{mihail} shows how  critical edges can be used to design algorithms to  improve search, reduce congestion in routing, and for keeping the graph well-connected (topology maintenance).
%Such  algorithms can be useful in the design and deployment
%of {\em reconfigurable networks} (whose topology can be changed by rewiring edges) such as  peer-to-peer networks and  wireless mesh networks. The paper \cite{KerenS12} study information spreading where they used a generalized notion of conductance as a key tool. In fact, the conductance helps to identify bottlenecks in the network and thus achieves fast information spreading.   

%, which in turn can be helpful in the design, analysis, and maintenance
%of {\em topologically-(self)aware} networks, i.e.,  networks that can monitor and regulate themselves in a decentralized fashion \cite{mihail}.

Distributed computation of the spectral properties that we are interested in here, in particular conductance and sparse cuts, is a relatively new direction.
The work of \cite{drw-jacm} presented a fast decentralized algorithm for estimating the mixing time, conductance, and spectral gap of the network.
% In
%particular, they show that given a starting point $x$, the mixing time with respect to $x$, called $\tau^x_{mix}$, can be
%estimated in $\tilde{O}(n^{1/2} + n^{1/4}\sqrt{D\tau^x_{mix}})$ rounds, where $D$ is the network diameter. 
%If the estimate of $\tau^x_{mix}$ is close to the mixing time of the network defined as $\tau_{mix} = \max_{x}{\tau^x_{mix}}$, then this allows one to estimate also the conductance $\phi$ (upto a quadratic factor)
%and spectral gap of the graph\footnote{\label{foot:cheeger} The spectral gap is the $1-\lambda_2$ where $\lambda_2$ is the second eigenvalue of the connected transition matrix. It is known that conductance, mixing time, and spectral gap are related to each other \cite{JS89}:  $\frac{1}{1-\lambda_2}\leq \tau_{mix}\leq \frac{\log n}{1-\lambda_2}$ and $\Theta(1-\lambda_2)\leq \Phi\leq \Theta(\sqrt{1-\lambda_2})$.}. 
The work of Kempe and McSherry \cite{kempe} gives a decentralized algorithm
for computing the top eigenvectors of a weighted adjacency matrix
that runs in $O(\tau_{mix}\log^2 n)$ rounds, where $\tau_{mix}$ is the mixing time of the network\footnote{Estimating the mixing time also allows one to estimate the conductance $\phi$ (up to a quadratic factor) and the spectral gap of the graph. The spectral gap is $1-\lambda_2$, where $\lambda_2$ is the second eigenvalue of the transition matrix of the (connected) graph. It is known that conductance, mixing time, and spectral gap are related to each other \cite{JS89}:  $\frac{1}{1-\lambda_2}\leq \tau_{mix}\leq \frac{\log n}{1-\lambda_2}$ and $\Theta(1-\lambda_2)\leq \phi\leq \Theta(\sqrt{1-\lambda_2})$.}.

While the above works give distributed algorithms to estimate the conductance $\phi$, they {\em do not}
give an efficient distributed algorithm to compute sparse cuts.
%Sparse cuts have low conductance (i.e., close to $\phi$) and, in particular, the sparsest cut
%is a cut that achieves the network conductance.
Since there is an exponential number of cuts in the network, it is significantly more challenging to efficiently find
the sparsest cut, or approximate it, in a distributed fashion. Hence computing sparse cuts needs
a different approach from computing the conductance and mixing time as in the works of \cite{drw-jacm,kempe}.
 
 
 %Our second algorithm uses random walks with {\em reset} to a given source node, in other words, it computes {\em personalized PageRank} (cf. Section \ref{sec:pagerank-algo}). 
% Our algorithms can be used to estimate the  conductance of the network.
% The second algorithm, in particula
%as well as ``local" conductance, i.e., conductance of a sparse set
%containing a given source node.

\iffalse

 A subroutine for efficiently sampling from several random walks naturally leads to a technique
for estimating the spectral gap, or the mixing time of the network graph in a distributed manner.
Samples from walks of length $\ell$ are samples from the distribution induced at length $\ell$. These algorithms
can be extended to sample from walks of length $1, 2, 4, 8,\dots$. One can then find an efficient way to
compare the distance between distributions and estimate $t$ such that the distribution at $t$ and $2t$ are
close. This gives an estimate of the mixing time and thereby the spectral gap. 
\fi


