%\section{Preliminaries}
\subsection{Model} \label{sec:model}

 
 
 

%Our distributed computing model,  henceforth  called the {\em ``data center"}  model, is as follows.
We consider a {\em network} of $k>1$ (distinct) {\em machines} $N = \{p_1,\dots,p_k\}$ 
that are pairwise interconnected by bidirectional point-to-point communication {\em links} --- henceforth called the {\em $k$-machine} model\footnote{Our results can also be generalized to the case where the communication network is a sparse topology, assuming an underlying routing mechanism; details are omitted here.}.
Each machine executes an instance of a distributed algorithm $A$.
The computation advances in {\em synchronous} rounds where, in each round,
machines can exchange messages over their communication links.
Each link is assumed to have a bandwidth of $W$, i.e., $W$ bits can be transmitted over the link in one round.  
(Note that machines have no other means of communication and do not share any
memory.) %We assume that communication is the costly operation and local computation is free  (though we will restrict it to be polynomial in the size
%of the input).  
%We note that the above model assumes that the communication topology is a complete network (among the $k$ machines). 
There is an alternate --- but equivalent --- way to view our communication restriction: instead of putting a bandwidth restriction on the links, we can put a restriction on the amount of information that each {\em machine} can communicate (i.e.\ send/receive) in a round.  The results that we obtain in the bandwidth-restricted model also apply to the latter model (cf. Section \ref{sec:conversion}).
%In each round, each machine some (local) computation in parallel which depends on its  current state
 %and the messages that it received in the previous round; it can then ``send" messages to other machine (that will be received at the next round), modify the state of its vertex and its incident edges. 
 %Messages are typically sent along outgoing edges, but a message may be sent to any vertex whose identifier is known (note that this is easy to accomplish since the identifier tells which machine a particular vertex is hashed to --- cf. Section \ref{sec:model}). 
% We note that the  computation and communication associated with a vertex is actually performed by the {\em machine} that is responsible for processing the vertex  (though it is easier to design algorithms  by thinking that the  vertices are the ones performing computation \cite{pregel,giraph}). 
 Local computation within a machine is considered free, while
 communicating messages between the machines is the costly operation\footnote{This assumption is reasonable in the context of large-scale data; a similar assumption has been made in the theoretical analysis of MapReduce, see e.g., \cite{ullman-book} for a justification. Indeed, in practice, even if the links have a bandwidth on the order of gigabytes of data per second, the amount of data that has to be communicated can be on the order of terabytes or petabytes, which generally dominates the overall computation cost \cite{ullman-book}.}.



%\footnote{Our algorithms can be generalized to  work if the communication network is a sparse topology as well if one assumes an underlying routing mechanism; details are omitted here.}.
%and show that our results can be extended to this case as well. Hence, unless otherwise stated, we will assume that the topology is fully-connected.

%%Our model is essentially the same as the well-studied message passing distributed computing model 

We are interested in solving graph problems where we are given an \emph{input graph} $G$  of $n$ {\em vertices} (assume that each vertex has a unique label) and $m$ {\em edges} from some \emph{input domain} $\cG$. To avoid trivialities, we will assume that  $n \geq k$ (typically $n \gg k$).
 Unless otherwise stated, we will
consider $G$ to be undirected, although all our results can be made to apply in a straightforward fashion to directed graphs as well.
 Initially, the entire graph $G$ is not known to any single machine; rather, it is partitioned among the $k$ machines in a {\em ``balanced"} fashion, i.e., the nodes
and/or edges of $G$ are partitioned approximately evenly among the machines.  We will assume a {\em vertex-partition} model, where vertices (and their incident edges) are partitioned across machines.
%\footnote{This is the assumption in systems such as Pregel and Giraph \cite{pregel, giraph} --- this is referred to as the ``vertex-centric" model which is a natural
%model  to simulate a message-passing distributed system.}.  
 One type of partition that we will
assume throughout is the {\em random (vertex)} partition, i.e., the vertices (and their incident edges) of the input graph are assigned randomly to machines. (This is
the typical way that many real systems (e.g., Pregel) partition the input graph among the machines; it is simple
and easy to accomplish, e.g., via hashing\footnote{Partitioning based on the structure of the graph --- with the goal
of minimizing the amount of communication between the machines --- is non-trivial; finding such a ``good" partition itself might be prohibitively expensive
and can be problem dependent. Some papers address this issue, see e.g., \cite{stanton,cloud,1212.1121v1}.}.)
Our upper bounds  will also hold (with slight modifications) without this assumption;  only a {\em``balanced"} partition of the input graph among the machines is needed. On the other hand, our lower bounds apply even under random partitioning, hence they apply to worst-case partition as well.
 % (as mentioned earlier, this is typical assumption in many real-world systems). 
%(We will show later, that many of our upper bounds (i.e., algorithms) can be extended to work {\em without} the random partition assumption; it is enough to have just any (arbitrary), but, balanced partition.)
It can be shown that\onlyLong{ (cf. Lemma \ref{lem:mapping})}\onlyShort{ (cf. full paper in Appendix)} a random partition gives rise to an (approximately) balanced partition.

Formally, in the {\em random vertex partition (RVP)} model,  each vertex of $G$  is assigned independently and randomly to one of the $k$ machines. If a vertex $v$ is assigned to machine $p_i$ we call $p_i$ the {\em home} machine of $v$.  Note that when a vertex is assigned to a machine, {\em all its incident edges} are assigned to that machine as well; i.e., the home machine will know the labels 
of neighbors of that vertex as well as  the identity of the home machines of the neighboring vertices.
 A convenient way to implement  the above assignment is
via {\em hashing}: each vertex (label) is hashed to one of the $k$ machines.
Hence, if a machine knows a vertex label, it also knows where it is hashed to. 
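As an illustration, the hashing-based assignment just described can be sketched in a few lines of Python; the function names and the choice of SHA-256 are ours, purely for exposition --- any fixed hash function known to all machines works:

```python
import hashlib
from collections import defaultdict

def home_machine(vertex_label: str, k: int) -> int:
    """Map a vertex label to one of the k machines (its 'home' machine).

    Any fixed hash function known to all machines works; SHA-256 is used
    here purely for illustration."""
    digest = hashlib.sha256(vertex_label.encode()).hexdigest()
    return int(digest, 16) % k

def random_vertex_partition(edges, k):
    """Store each edge at the home machines of both endpoints, so a home
    machine knows all edges incident on the vertices it hosts."""
    local_edges = defaultdict(list)  # machine id -> incident edges
    for u, v in edges:
        local_edges[home_machine(u, k)].append((u, v))
        local_edges[home_machine(v, k)].append((u, v))
    return local_edges
```

Since every machine can evaluate the same hash, a machine that knows a vertex label also knows that vertex's home machine, which is exactly the property used above.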
% We will also later discuss the {\em random edge partition} model, where each edge of $G$  is assigned independently and randomly to one of the $k$ machines and show how the results in the random vertex partition model can be related to the random
% edge partition model. 

Depending on the problem $\cP$, the vertices and/or edges of $G$ have labels chosen from a set of polynomial (in $n$) size.
Eventually, each {\em machine} $p_i$ ($1 \leq i \leq k$) must (irrevocably) set a designated local output variable $o_i$ (which will
depend on the set of vertices assigned to machine $p_i$) and the \emph{output configuration} $o=\langle o_1,\dots,o_k\rangle$ must satisfy certain feasibility conditions w.r.t.\ problem $\cP$.
For example, when considering the minimum spanning tree (\mst) problem, each $o_i$ corresponds to a set of edges (which will be a subset of the edges
incident on vertices mapped to machine $p_i$), and the edges in the union of the sets $o_i$ must form an MST of the input graph $G$; in other words, each machine $p_i$ will know all the MST edges incident on vertices mapped to $p_i$.
(Note that
this is a natural generalization of the analogous assumption in the standard distributed message passing model, where each vertex knows which of its incident edges belong to the MST \cite{peleg}.) 
We say that \emph{algorithm $A$ solves problem $\cP$} if $A$ maps each $G\in \cG$ to an output configuration that is feasible for $\cP$.
The \emph{time complexity of $A$} is the maximum number of rounds until termination, over all graphs in $\cG$.
%We are interested in studying the time complexity of solving various graph problems in the $k$-machine model.
In stating our time bounds, for convenience,  we will assume that bandwidth $W = 1$; in any case, it is easy to rewrite our upper bounds to scale in terms of parameter $W$ (cf. Theorem \ref{thm:translation}).
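For example, an upper bound stated for $W=1$ rescales with the bandwidth as
\[
\tilde{O}\!\left(\frac{n}{k}\right) \;\longrightarrow\; \tilde{O}\!\left(\frac{n}{Wk}\right),
\]
and our lower bounds scale similarly.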
%, e.g., an $\tilde{O}(n/k)$ time bound implies a time bound of $\tilde{O}(n/Wk)$ for a general $W$ (similarly for our lower bounds as well).



%Since we are interested in scalable algorithms, we limit the communication between any two machines to $O(\log n)$ bits per round round, which corresponds to the standard CONGEST model (cf.\ \cite{peleg}).
\noindent {\bf Notation.}
For any $0\leq \epsilon\leq 1$, we say that a protocol has {\em $\epsilon$-error} if, for any input graph $G$, it outputs the correct answer with probability at least $1-\epsilon$, where the probability is over the random partition and the random bit strings used by the algorithm (in case it is randomized). 

For any $n>0$ and function $T(n)$, we say that an algorithm $\cA$ {\em terminates in $O(T(n))$ rounds} if, for any $n$-node graph $G$, $\cA$ always terminates in $O(T(n))$ rounds, regardless of the choice of the (random) input partition.
For any $n$ and problem $\cP$ on $n$-node graphs, we let the {\em time complexity of solving $\cP$ with $\epsilon$ error probability} in the $k$-machine model, denoted by $\cT^k_\epsilon(\cP)$, be the minimum $T(n)$ such that there exists an $\epsilon$-error protocol that solves $\cP$ and terminates in $T(n)$ rounds.
%
For any $0\leq \epsilon\leq 1$, graph problem $\cP$ and function $T:\mathbb{Z}_+\rightarrow \mathbb{Z}_+$, we say that $\cT^k_\epsilon(\cP)=O(T(n))$ if there exist an integer $n_0$ and a constant $c>0$ such that for all $n\geq n_0$, $\cT^k_\epsilon(\cP)\leq cT(n)$. Similarly, we say that $\cT^k_\epsilon(\cP)=\Omega(T(n))$ if there exist an integer $n_0$ and a real $c>0$ such that for all $n\geq n_0$, $\cT^k_\epsilon(\cP)\geq cT(n)$. For our upper bounds, we will usually use $\epsilon = 1/n$, which yields high-probability algorithms, i.e., algorithms succeeding with probability at least $1 - 1/n$. In this case, we will sometimes omit $\epsilon$ and simply say that
the time bound applies ``with high probability".
We use $\Delta$ to denote the maximum degree of any node in the input graph, and $D$ to denote the diameter of the input graph.

\subsection{Our Results and Techniques}
\label{sec:contri}

Our main goal is to investigate the {\em time} complexity, i.e., the number of distributed ``rounds", of solving various fundamental graph problems. The time complexity not only captures the (potential) speedup possible for a problem, but also implicitly captures the communication cost
of the algorithm, since links can transmit only a limited number of bits per round; equivalently, one can view our model as one in which each {\em machine}, rather than each link, can send/receive only a limited number of bits per round.  We develop techniques to obtain non-trivial lower and upper bounds on the time
complexity of various graph problems.
 


%We study bounds on solving various fundamental graph problems. 


\medskip

\noindent{\bf Lower Bounds.} 
Our lower bounds quantify the fundamental time limitations of distributively solving graph  problems. They apply essentially to distributed data computations in all point-to-point communication models, since they apply even to a synchronous complete  network model where  the graph is partitioned {\em randomly} (unlike some previous results, e.g., \cite{woodruff},  which apply only under some worst-case  partition).

We first give a tight lower bound on the complexity of computing a spanning tree (cf. Section~\ref{sec:lower bound computation}).  The proof shows that
$\Omega(n/k)$ rounds of communication are needed even for unweighted and undirected graphs of diameter 2, and even for sparse graphs; we give an information-theoretic argument for this result. The same bound consequently holds for other fundamental problems
such as computing a minimum spanning tree, breadth-first tree, and shortest paths tree. This bound shows that one cannot hope to obtain
a run time that scales (asymptotically) faster than $1/k$.
In conjunction with our upper bound
of $\tilde O(n/k)$ for computing an MST, it shows that this lower bound is essentially tight.

We then show an
$\Omega(n/k^2)$ lower bound for  connectivity, spanning tree (ST) verification and other related verification problems (cf. Section \ref{sec:lower bound verification}).
To analyze the complexity of verification problems, we give reductions from problems in the 2-player communication complexity model using random partitions of the input variables: as opposed to the standard fixed-partition model, here all input bits are {\em randomly} assigned to Alice and Bob.
We give a tight lower bound for randomized protocols for the well-studied {\em disjointness} problem in this setting. In particular, we show a lower bound on the randomized {\em average partition} communication complexity of the  disjointness problem  which might be of independent interest.
Random-partition communication complexity has also been studied by Chakrabarti et al. \cite{chakrabarti}, but their results apply to the promise disjointness problem in the multiparty number-in-hand model, and only for a sufficiently large number of players.
In our proof we apply the rectangle-based arguments of Razborov \cite{Razborov92}, but we need to take care of several issues that arise.  A core ingredient of Razborov's proof is a conditioning that turns the input distribution into a product distribution. With randomly assigned inputs, we need to recover the necessary product properties by conditioning on badly assigned input bits. Even then, the sizes of the sets in the input are no longer exactly as in Razborov's proof. Furthermore, there is a large probability that the set intersection is visible to a single player right from the start; still, for a small enough error probability, the communication needs to be large.

\medskip

%\mst = Minimum Spanning Tree. 
%\conn = Connectivity. 
%\mis = Maximal Independent Set. 
\noindent{\bf Algorithms and Upper Bounds.}
We introduce techniques to obtain fast graph algorithms in the $k$-machine model (cf. Section \ref{sec:model}).
We first present a general result, called the {\em Conversion Theorem} (cf. Theorem \ref{thm:translation}) that, given a graph problem ${\cal P}$,
shows how fast algorithms for solving ${\cal P}$ in the $k$-machine model can be designed by leveraging distributed algorithms for ${\cal P}$ in the standard CONGEST message-passing distributed computing model (see e.g., \cite{peleg}).
%This theorem brings the vast research in distributed graph algorithms (in the standard model) immediately applicable to those in the data center model. 
%However, as mentioned in Section \ref{sec:intro}, 
We note that fast distributed algorithms in the standard model {\em do not} directly imply fast algorithms in the $k$-machine model. 
To achieve this, we consider distributed algorithms in an intermediate {\em clique} model (cf. Section  \ref{sec:conversion}) and then
show two ways --- parts (a) and (b) respectively of the Conversion Theorem --- to efficiently convert algorithms in the clique model
to the $k$-machine model.  Part (b) applies to converting distributed algorithms (in the clique model) that use only broadcast, while part (a) applies to any algorithm.  Part (a) will sometimes give better time bounds than part (b) and vice versa --- this depends on the problem at hand, the type of distributed algorithm considered, and the graph parameters. (The dependence on graph parameters can be especially useful in applications where we have some information on the graph parameters/topology, as explained below.)
Using this theorem, we design algorithms for various fundamental graph problems, e.g., PageRank, minimum spanning tree (MST), connectivity, spanning tree (ST) verification, shortest paths, cuts, spanners, covering problems, densest subgraph, subgraph isomorphism, and triangle finding (cf. Table \ref{tab:results}). We show that problems such as PageRank, MST, connectivity, and graph covering can be solved in $\tilde{O}(n/k)$ time; that is, one can achieve an almost {\em linear} (in $k$) speedup.
For graph connectivity, BFS tree construction, and ST verification, we show an $\tilde O(\min(n/k,\, m/k^2 + D\Delta /k ))$ bound. Note that
the second term of this bound may be better in some cases: e.g., if the graph is sparse (i.e., $m = O(n)$) and $D$ and $\Delta$ are small (say, bounded by $O(\log n)$), then we get a bound of $\tilde{O}(n/k^2)$.
For single-source shortest paths, another classic and important problem, we show a bound of $\tilde{O}(n/\sqrt{k})$ for a $(1+\epsilon)$-factor approximation and a bound of $\tilde{O}(n/k)$ for an $O(\log n)$-factor approximation. We note that computing {\em exact} shortest paths
might take significantly longer (e.g., using Bellman-Ford --- cf. Section \ref{sec:applications}).   For graph covering problems such as maximal independent set (MIS) and (approximate) minimum vertex cover (MVC), we show a bound of $\tilde O(\min(n/k,\, m/k^2 + \Delta/k))$; note that this implies a bound of $\tilde{O}(n/k^2)$ for graphs of (constant) bounded degree, i.e., we can get a speedup that scales {\em superlinearly} in $k$.
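To see concretely when the second term of these $\min(\cdot,\cdot)$ bounds wins, one can simply evaluate both terms, ignoring polylogarithmic factors; a small sketch (the helper name is ours, for illustration only):

```python
def connectivity_bound(n, m, D, Delta, k):
    """Evaluate the O~(min(n/k, m/k^2 + D*Delta/k)) bound for
    connectivity / ST verification, ignoring polylog factors."""
    return min(n / k, m / k**2 + D * Delta / k)

# A sparse graph (m = O(n)) with small D and Delta: the second term is
# roughly n/k^2, which beats the generic n/k term.
n, m, D, Delta, k = 10**6, 2 * 10**6, 10, 10, 1000
assert connectivity_bound(n, m, D, Delta, k) < n / k
```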


We finally note that our results also directly apply to an alternate (but equivalent) model, where instead of having a restriction on the number 
of bits individual links can transmit in a round, we restrict the number of bits a machine can send/receive (in total) per round (cf. Section \ref{sec:conversion}).

\onlyShort{
  For lack of space, most of the proofs  and  related work are deferred to the full paper (in Appendix).}

%If we restrict that each machine can send/receive only $k$ bits per round, then the same bounds apply to this model as well. 
 
 %We also present another algorithmic technique called {\em degree thinning} that
%leads to an even improved bound of $\tilde{O}(n/k^2)$ time  for connectivity  in {\em sparse} graphs. 
%We also show that our results can be extended under weaker variants of our model, e.g., when the data center network is sparsely connected and/or when the input graph is partitioned among the machines in an arbitrary (but balanced) fashion, i.e., the nodes
%and edges of $G$ must be partitioned approximately equal among the machines.




%\subsubsection{Upper Bounds}

%\subsubsection{Lower Bounds}

\begin{figure*}
  \centering
\begin{threeparttable}
%  \onlyShort{\scriptsize}
%  \onlyLong{\footnotesize}
  \begin{tabular}{l l l }
  \toprule
  \textsc{Problem}  & \textsc{Upper Bound} & \textsc{Lower Bound} \\ %\phantom{----------------------}\\
  \midrule
%  \multicolumn{2}{l}{\bf Lower Bounds:} \\
Minimum Spanning Tree (\mst) & $\tilde O(n/k)$  & $\tilde\Omega(n/k)^*$ \\
Connectivity, Spanning Tree Verification (\conn,\st) & $\tilde O(\min(n/k,m/k^2 + D\Delta /k ))$  & $\tilde\Omega(n/k^2)$ \\
Breadth First Search Tree (\bfs) & $\tilde O(\min(n/k + D,m/k^2+D\Delta /k))$ & $\tilde\Omega(n/k)$\\
Single-Source Shortest-Paths Distances (\sssp) & $\tilde O(n/\sqrt{k})^\dagger$,\ $\tilde O(n/k)^\$$  \\
Single-Source Shortest-Paths Tree (\spt) & $\tilde O(n/\sqrt{k})^\dagger$,\ $\tilde O(n/k)^\$$  & $\tilde\Omega(n/k)^*$\\
All-Pairs Shortest-Paths Distances (\apsp) & $\tilde O(n\sqrt{n}/k)^\#$,\ $\tilde O(n/k)^\$$  \\ % (\spann) & $\tilde O(n/k)$  & $\Omega(n/k^2)$ \\
  \pagerank with reset prob. $\gamma$ (\pagerank) & $\tilde O(n/\gamma k)$ \\  
  Graph Covering Problems (\mis, \mvc) & $\tilde O(\min(n/k,m/k^2 + \Delta/k))$ \\ %$\tilde O(\min(m/k^2,n/k))$  \\
  Maximal Ind.\ Set on Hypergraphs (\hmis) & $\tilde O(n/k)$ \\
$(2\delta-1)$-Spanner (\spanner) $(\delta \in O(\log n))$ & $\tilde O(n/k)$ \\
  Densest Subgraph (\dgraph) & $\tilde O(n/k)$ {\footnotesize (for $(2+\epsilon)$-approx.)}\\ %$\tilde O(\min(\frac{m}{k^2}, \frac{n}{k}))$ {\footnotesize (for $(2+\epsilon)$-approx.)}\\
  Triangle Verification (\tri) & $\tilde O(\min(n\Delta^2/ k^2 + \frac{\Delta^2}{k},n^{7/3}/k^2 + \frac{n}{k}))$\\
  Subgraph Isomorphism (\subiso) ($d$-vertex subgraph) & $\tilde O(n^{2 + (d-2)/d}/ k^2 + n/k)$\\
  \bottomrule
\end{tabular}
\begin{tablenotes}
    \item $^\dagger$ {\footnotesize $(1+\epsilon)$-approximation.}\quad
    $^\#$ {\footnotesize $(2+\epsilon)$-approximation.}\quad
    $^\$$ {\footnotesize $O(\log n)$-approximation.} \quad
    $^*$ {\footnotesize For any approx.\ ratio.}
\end{tablenotes}
\caption{ Complexity bounds in the $k$-machine  model for an $n$-node input graph with $m$ edges, max degree $\Delta$, and diameter $D$. $\epsilon >0$ is any small constant. The notation $\tilde O$ hides $\text{polylog}(n)$ factors and an additive $\text{polylog}(n)$ term. For clarity of presentation, we assume a bandwidth of $\Theta(\log n)$ bits.}
\label{tab:results}
\end{threeparttable}
\end{figure*}



\subsection{Related Work} \label{sec:related}

\onlyShort{

%The theoretical study of (large-scale) graph processing in distributed systems is relatively recent. 
%This is partly motivated
%by the rise of systems such as Google's Pregel \cite{pregel} (and its open source equivalent Giraph\cite{giraph}), Microsoft's Trinity \cite{trinity},
%GPS \cite{gps}, GraphLab\cite{graphlab} etc. 
% The above systems were specifically developed for graph processing, partly due to the fact that MapReduce \cite{DBLP:conf/osdi/DeanG04} --- a established platform to do large-scale data processing --- has some drawbacks when it comes to processing graph-structured data
 %\cite{beyond-hadoop-cacm, pregel}.   However, 
% Several works have been devoted to developing MapReduce graph algorithms (e.g., see \cite{lin-book,
% ullman-book} and the references therein).  
Several recent theoretical papers analyze MapReduce algorithms in general, including MapReduce graph algorithms; see e.g., \cite{filtering-spaa, ullman-book, soda-mapreduce} and the references therein.
%We note that  the flavor of theory developed for MapReduce is somewhat different compared to the distributed complexity
%results of this paper.
Minimizing communication (which in turn minimizes the number of communication rounds) is also a key motivation (as in our paper) in MapReduce algorithms (e.g., see \cite{ullman-book}); however, this is
generally achieved by quickly (i.e., in a small number of MapReduce rounds) making the data small enough to fit into the {\em memory} of a single machine. (The full paper discusses MapReduce algorithms in more detail.)

The work that is closest in spirit to ours is the recent work of \cite{woodruff}.
That work considers a number of basic statistical and graph problems in the message-passing model (where the data is distributed across a set of machines) and analyzes
their communication complexity, i.e., the total number of bits exchanged in all messages across the machines during a computation. Their main result is that {\em exact} computation of many statistical and graph problems in the distributed setting is very expensive, and that often one cannot do better than simply having all machines send their data to a centralized server.
%The graph problems considered are computing the degree of a vertex, testing cycle-freeness, testing connectivity, computing the number of connected components, testing bipartiteness, and testing triangle-freeness. 
The strong lower bounds shown there
assume a {\em worst-case} distribution of the input (unlike ours, which assume a random distribution).
They posit that in order to obtain communication-efficient protocols, one has to allow approximation or investigate the distribution or layout of the data sets, and they leave these directions as open problems for future work.
Our work, on the other hand, addresses time (round) complexity (which differs from the notion of round complexity defined
in \cite{woodruff}) and shows that non-trivial speedup is possible for many graph problems. As posited above, for some problems,
such as shortest paths and densest subgraph, our model assumes a {\em random partition} of the input graph and also allows {\em approximation} to obtain a good speedup, while for problems such as MST
we obtain good speedups for exact algorithms as well.
%For spanning tree problems we show tight lower bounds as well.
%A lower bound for computing a rooting spanning tree was shown in \cite{taskalloc}.

The $k$-machine model  is closely related to the well-studied (standard) distributed message-passing CONGEST model \cite{peleg}, in particular to the CONGEST {\em clique} model (cf. Section \ref{sec:upperbounds}). The main difference is that while
many vertices of the input graph are mapped to the same machine in the $k$-machine model, in the standard model each vertex corresponds to a dedicated machine.  
More ``local knowledge" is available
per vertex in the $k$-machine model than in the standard model, since a vertex can access, for free, information about the other vertices hosted on the same machine. On the other hand, all nodes
assigned to a machine have to communicate through the links incident on this machine, which can limit the bandwidth.
These differences manifest in the time complexity --- certain problems have a faster time complexity in one model compared to the other (cf. Section \ref{sec:upperbounds}). 
% In particular, the fastest known distributed algorithm in the standard model for a given problem, may not give rise to the fastest algorithm in the data center model. 
%Furthermore, the techniques for showing the complexity bounds (both upper and lower) in the $k$-machine model are different compared to the standard model.
  The recently developed communication complexity techniques (in particular, those based on  the {\em Simulation theorem} of \cite{sicomp12, podc11,podc14}) used to prove lower bounds in the standard CONGEST model do not apply here.
}


\onlyLong{
\input{related}
}
\endinput
