% !TEX root = sparsecut.tex

%\input{template}
\documentclass[11pt]{article}
%\documentclass{sig-alternate}
\usepackage{algorithm}
%\usepackage{algpseudocode}
\usepackage{algorithmic}

%\renewcommand{\baselinestretch}{.98}

\usepackage[pdftex]{graphicx}
\usepackage{subfigure}
\usepackage{sidecap}
\usepackage{epsfig,amsthm,amsmath,color,amsfonts}
\newcommand{\xxx}[1]{\textcolor{red}{#1}}
%\usepackage{fullpage}
\usepackage{framed}
%\usepackage{epsf}
%\usepackage{hyperref}

%\setlength{\textheight}{9.4in} \setlength{\textwidth}{6.55in}
\setlength{\textheight}{9.2in} \setlength{\textwidth}{6.55in}
%\setlength{\topmargin}{0in}

\voffset=-0.9in
\hoffset=-0.8in

\newtheorem{theorem}{Theorem}[section]
%\newtheorem{definition}[theorem]{Definition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{claim}[theorem]{Claim}
%\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\theoremstyle{definition}\newtheorem{example}[theorem]{Example}
\theoremstyle{definition}\newtheorem{definition}[theorem]{Definition}
\theoremstyle{remark}\newtheorem{observation}[theorem]{Observation}

\newcommand{\comment}[1]{}
\newcommand{\QED}{\mbox{}\hfill \rule{3pt}{8pt}\vspace{10pt}\par}
%\newcommand{\eqref}[1]{(\ref{#1})}
\newcommand{\theoremref}[1]{(\ref{#1})}
\newenvironment{proof1}{\noindent \mbox{}{\bf Proof:}}{\QED}
%\newenvironment{observation}{\mbox{}\\[-10pt]{\sc Observation.} }%
%{\mbox{}\\[5pt]}

\def\m{{\rm min}}
%\def\m{\bar{m}}
\def\eps{{\epsilon}}
\def\half{{1\over 2}}
\def\third{{1\over 3}}
\def\quarter{{1\over 4}}
\def\polylog{\operatorname{polylog}}
\newcommand{\ignore}[1]{}
\newcommand{\eat}[1]{}
\newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor}
\newcommand{\ceil}[1]{\left\lceil #1 \right\rceil}

\newcommand{\algorithmsize}[0]{}

%---------------------
%  SPACE SAVERS
%---------------------

\usepackage{times}
\usepackage[small,compact]{titlesec}
\usepackage[small,it]{caption}

\newcommand{\squishlist}{
 \begin{list}{$\bullet$}
  { \setlength{\itemsep}{0pt}
     \setlength{\parsep}{3pt}
     \setlength{\topsep}{3pt}
     \setlength{\partopsep}{0pt}
     \setlength{\leftmargin}{1.5em}
     \setlength{\labelwidth}{1em}
     \setlength{\labelsep}{0.5em} } }
\newcommand{\squishend}{
  \end{list}  }

%
%\newcommand{\squishlist}{
% \begin{enumerate}}
%\newcommand{\squishend}{
%  \end{enumerate}  }


%---------------------------------
% FOR MOVING PROOFS TO APPENDIX
%\usepackage{answers}
%%\usepackage[nosolutionfiles]{answers}
%\Newassociation{movedProof}{MovedProof}{movedProofs}
%\renewenvironment{MovedProof}[1]{\begin{proof}}{\end{proof}}

\def\e{{\rm E}}
\def\var{{\rm Var}}
\def\ent{{\rm Ent}}
\def\lam{{\lambda}}
\def\bone{{\bf 1}}

\date{}

\begin{document}
\title{Distributed Computation of Sparse Cuts}
\begin{titlepage}
\author{Atish {Das Sarma} \thanks{eBay Research Labs, eBay Inc., CA, USA.
\hbox{E-mail}:~{\tt atish.dassarma@gmail.com}.}\and  Anisur Rahaman Molla \thanks{Division of Mathematical
Sciences, Nanyang Technological University, Singapore 637371. \hbox{E-mail}:~{\tt anisurpm@gmail.com}.} \and Gopal Pandurangan \thanks{Division of Mathematical
Sciences, Nanyang Technological University, Singapore 637371 and Department of Computer Science, Brown University, Providence, RI 02912, USA. \hbox{E-mail}:~{\tt gopalpandurangan@gmail.com}.  Supported in part by the following research grants: Nanyang Technological University grant M58110000, Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 2 grant MOE2010-T2-2-082, MOE  AcRF Tier 1 grant MOE2012-T1-001-094, and a grant from the US-Israel Binational Science Foundation (BSF).}}

\date{}

\maketitle \thispagestyle{empty}

\begin{abstract}
Sparse cuts are an important tool in analyzing large-scale distributed networks, such as the Internet and peer-to-peer networks, as well as large-scale graphs such as the web graph, online social communities, and VLSI circuits. Among numerous other applications, sparse cuts are useful in graph clustering and partitioning. In distributed communication networks, they are useful for topology maintenance and for designing better search and routing algorithms.

In this paper, we focus on developing fast distributed algorithms for computing sparse cuts in networks. Given an undirected $n$-node network $G$ with conductance $\phi$, the goal is to find a cut set whose conductance is close to $\phi$. We present two distributed algorithms that find a cut set of conductance $\tilde O(\sqrt{\phi})$ (here $\tilde{O}$ hides $\polylog n$ factors). Both algorithms work in the CONGEST distributed computing model and, with high probability, output a cut of conductance at most $\tilde O(\sqrt{\phi})$ in $\tilde O(\frac{1}{b}(\frac{1}{\phi} + n))$ rounds, where $b$ is the balance of the cut of the given conductance. In particular, to find a sparse cut of constant balance, our algorithms take $\tilde O(\frac{1}{\phi} + n)$ rounds.
% and finds a cut with similar approximation. 
Our second algorithm can also be used to output a sparse {\em local} cut, i.e., a cut that is local to a given source node and whose conductance is within a quadratic factor of that of the optimal local cut. Both of our distributed algorithms can work without knowledge of the optimal value of $\phi$, and hence can be used to find approximate conductance values both globally and with respect to a given source node.

Our algorithms can be useful for efficiently finding global and local sparse cuts
(and their conductance values) and for identifying well-connected clusters and critical edges in distributed networks. This in turn can be helpful in the design, analysis, and maintenance
of topologically-aware networks.


\end{abstract}

\noindent {\bf Keywords:} Distributed Algorithm, Sparse Cut, Conductance, Random Walks, PageRank

\medskip

\noindent {\bf Format:} Regular Presentation.

\medskip

\noindent {\bf Eligible for Best Student Paper Award:} Yes; student recommended for the award: Anisur Rahaman Molla.

\end{titlepage}

\input{introduction}

\input{model}

\input{results}

\input{related-work}

\input{algo-sparsity}

\input{pagerank-algo}

\input{lower-bound}

\input{conclusion}

\newpage

  \let\oldthebibliography=\thebibliography
  \let\endoldthebibliography=\endthebibliography
  \renewenvironment{thebibliography}[1]{%
    \begin{oldthebibliography}{#1}%
      \setlength{\parskip}{0ex}%
      \setlength{\itemsep}{0ex}%
  }%
  {%
    \end{oldthebibliography}%
  }
%{ \small
{%\tiny
\bibliographystyle{abbrv}
\bibliography{Distributed-RW}
}


\newpage
\section*{Appendix}
%\begin{center}
%\large \textbf{Appendix}
%\end{center}
\appendix
%\section{Moved Proofs}
%\Readsolutionfile{movedProofs}
\input{appendix}


\end{document}


