%\input{template}
\documentclass[11pt]{article}
%\documentclass{sig-alternate}
\usepackage{algorithm}
\usepackage{algorithmic}

\usepackage{subfigure}
\usepackage{epsfig,amsthm,amsmath,color, amsfonts}
\newcommand{\xxx}[1]{\textcolor{red}{#1}}
%\usepackage{fullpage}
\usepackage{framed}
%\usepackage{epsf}
%\usepackage{hyperref}
\usepackage{sidecap}

%\setlength{\textheight}{9.4in} \setlength{\textwidth}{6.55in}
\setlength{\textheight}{9.2in} \setlength{\textwidth}{6.55in}
%\setlength{\topmargin}{0in}

\voffset=-0.9in
\hoffset=-0.8in


\newtheorem{theorem}{Theorem}[section]
%\newtheorem{definition}[theorem]{Definition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{claim}[theorem]{Claim}
%\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\theoremstyle{definition}\newtheorem{example}[theorem]{Example}
\theoremstyle{definition}\newtheorem{definition}[theorem]{Definition}
\theoremstyle{remark}\newtheorem{observation}[theorem]{Observation}

\newcommand{\comment}[1]{}
\newcommand{\QED}{\mbox{}\hfill \rule{3pt}{8pt}\vspace{10pt}\par}
%\newcommand{\eqref}[1]{(\ref{#1})}
\newcommand{\theoremref}[1]{(\ref{#1})}
\newenvironment{proof1}{\noindent \mbox{}{\bf Proof:}}{\QED}
%\newenvironment{observation}{\mbox{}\\[-10pt]{\sc Observation.} }%
%{\mbox{}\\[5pt]}

\def\m{{\rm min}}
%\def\m{\bar{m}}
\def\eps{{\epsilon}}
\def\half{{1\over 2}}
\def\third{{1\over 3}}
\def\quarter{{1\over 4}}
\def\polylog{\operatorname{polylog}}
\newcommand{\ignore}[1]{}
\newcommand{\eat}[1]{}
\newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor}
\newcommand{\ceil}[1]{\left\lceil #1 \right\rceil}

\newcommand{\algorithmsize}[0]{}

%---------------------
%  SPACE SAVERS
%---------------------

\usepackage{times}
\usepackage[small,compact]{titlesec}
\usepackage[small,it]{caption}

\newcommand{\squishlist}{
 \begin{list}{$\bullet$}
  { \setlength{\itemsep}{0pt}
     \setlength{\parsep}{3pt}
     \setlength{\topsep}{3pt}
     \setlength{\partopsep}{0pt}
     \setlength{\leftmargin}{1.5em}
     \setlength{\labelwidth}{1em}
     \setlength{\labelsep}{0.5em} } }
\newcommand{\squishend}{
  \end{list}  }

%
%\newcommand{\squishlist}{
% \begin{enumerate}}
%\newcommand{\squishend}{
%  \end{enumerate}  }


%---------------------------------
% FOR MOVING PROOFS TO APPENDIX
%\usepackage{answers}
%%\usepackage[nosolutionfiles]{answers}
%\Newassociation{movedProof}{MovedProof}{movedProofs}
%\renewenvironment{MovedProof}[1]{\begin{proof}}{\end{proof}}

\def\e{{\rm E}}
\def\var{{\rm Var}}
\def\ent{{\rm Ent}}
\def\lam{{\lambda}}
\def\bone{{\bf 1}}


\begin{document}

\title{Distributed Computation of Sparse Cuts via Random Walks}

\begin{titlepage}
\author{Atish {Das Sarma} \thanks{eBay Research Labs, eBay Inc., CA, USA.
\hbox{E-mail}:~{\tt atish.dassarma@gmail.com}} \and  Anisur Rahaman Molla \thanks{Division of Mathematical
Sciences, Nanyang Technological University, Singapore 637371. \hbox{E-mail}:~{\tt anisurpm@gmail.com}} \and Gopal Pandurangan \thanks{Division of Mathematical
Sciences, Nanyang Technological University, Singapore 637371 and Department of Computer Science and ICERM, Brown University, Providence, RI 02912, USA. \hbox{E-mail}:~{\tt gopalpandurangan@gmail.com}. Supported in part by the following research grants: Nanyang Technological University grant M58110000, Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 2 grant MOE2010-T2-2-082, Singapore MOE AcRF Tier 1 grant MOE2012-T1-001-094, and a grant from the US-Israel Binational Science Foundation (BSF).}}

\date{}

\maketitle \thispagestyle{empty}

%\vspace*{-0.2in}

\begin{abstract}
Finding sparse cuts is an important tool in analyzing large-scale distributed networks, such as the Internet and peer-to-peer networks, as well as large-scale graphs such as the web graph, online social communities, and VLSI circuits. Sparse cuts are useful in graph clustering and partitioning, among numerous other applications. In distributed communication networks, they are useful for topology maintenance and for designing better search and routing algorithms.

In this paper, we focus on developing a fast distributed algorithm for computing sparse cuts in networks. Given an undirected $n$-node network $G$ with conductance $\phi$, the goal is to find a cut set whose conductance is close to $\phi$. We present a distributed algorithm that finds a cut set of conductance $\tilde O(\sqrt{\phi})$ (where $\tilde{O}$ hides $\polylog(n)$ factors). Our algorithm works in the CONGEST distributed computing model and outputs a cut of conductance at most $\tilde O(\sqrt{\phi})$ with high probability, in $O(\frac{1}{b}(\frac{1}{\phi} + n)\log^2 n)$ rounds, where $b$ is the balance of the cut of the given conductance. In particular, to find a sparse cut of constant balance, our algorithm takes $O((\frac{1}{\phi} + n)\log^2 n)$ rounds. Our algorithm can also be used to output a {\em local} cluster, i.e., a subset of vertices near a given source node whose conductance is within a quadratic factor of that of the best possible cluster around the specified node. Our distributed algorithm works without knowledge of the optimal value of $\phi$ and hence can be used to find approximate conductance values both globally and with respect to a given source node. The algorithm uses random walks as a key subroutine, is fully decentralized, and relies only on lightweight local computations.

We also give a lower bound on the time needed by any distributed algorithm to compute any non-trivial sparse cut: any distributed approximation algorithm for the sparsest cut problem (with any non-trivial approximation ratio) requires $\tilde \Omega(\sqrt{n} + D)$ rounds, where $D$ is the diameter of the graph.

Our algorithm can be used to find sparse cuts (and their conductance values) and to identify well-connected clusters and critical edges in distributed networks. This, in turn, can be helpful in the design, analysis, and maintenance of topologically-aware networks.

\end{abstract}
\vspace{1.0cm}
\noindent {\bf Keywords:} Distributed Algorithm, Sparse Cut, Conductance, Random Walks, Graph Sparsification

%\noindent {\bf Format:} Regular Presentation.

\end{titlepage}

\input{introduction}

\input{model}

\input{results}

\input{related-work}

\input{algo-sparsity}

%\input{pagerank-algo}

\input{diff_approach}

\input{lower-bound}

\input{conclusion}

\newpage

\let\oldbibliography\thebibliography
\renewcommand{\thebibliography}[1]{%
  \oldbibliography{#1}%
  \setlength{\itemsep}{0pt}%
}
\bibliographystyle{abbrv}
\bibliography{Distributed-RW}

\newpage
\section*{Appendix}
%\begin{center}
%\large \textbf{Appendix}
%\end{center}
\appendix
%\section{Moved Proofs}
%\Readsolutionfile{movedProofs}
\input{appendix}

\end{document}
