
%Related Work

The theoretical study of (large-scale) graph processing in distributed systems is relatively recent. 
%This is partly motivated
%by the rise of systems such as Google's Pregel \cite{pregel} (and its open source equivalent Giraph\cite{giraph}), Microsoft's Trinity \cite{trinity},
%GPS \cite{gps}, GraphLab\cite{graphlab} etc. 
% The above systems were specifically developed for graph processing, partly due to the fact that MapReduce \cite{DBLP:conf/osdi/DeanG04} --- a established platform to do large-scale data processing --- has some drawbacks when it comes to processing graph-structured data
 %\cite{beyond-hadoop-cacm, pregel}.   However, 
 Several works have been devoted to developing MapReduce graph algorithms (e.g., see \cite{lin-book,
 ullman-book} and the references therein).  
There have also been several recent theoretical papers analyzing MapReduce algorithms in general, including MapReduce graph algorithms (see, e.g., \cite{filtering-spaa, ullman-book, soda-mapreduce} and the references therein). 
We note that the flavor of theory developed for MapReduce is quite different from the distributed complexity
results of this paper.
Minimizing communication (which in turn leads to minimizing the number of communication rounds) is a key motivation in MapReduce algorithms (e.g., see \cite{ullman-book}); however, this is
generally achieved by quickly (in a small number of MapReduce rounds) making the data small enough to fit into the {\em memory} of a single machine. An example of this idea is the filtering technique of \cite{filtering-spaa} applied to graph problems. The main idea behind filtering is to reduce the size of the input in a distributed fashion, so that the resulting, much smaller, problem instance can be solved on a single machine. Filtering allows for a tradeoff between the number of rounds and the available memory. Specifically, the work of \cite{filtering-spaa} shows that a graph with at most $n^{1+c}$ edges can be processed on machines with memory at least $n^{1+\eps}$ in $O(c/\eps)$ (MapReduce) rounds.  
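To illustrate this tradeoff with concrete (hypothetical) parameters not taken from \cite{filtering-spaa}: for a graph with $m = n^{1.5}$ edges (i.e., $c = 0.5$) and machines with memory $n^{1.25}$ (i.e., $\eps = 0.25$), the bound gives
\[
O\!\left(\frac{c}{\eps}\right) \;=\; O\!\left(\frac{0.5}{0.25}\right) \;=\; O(1)
\]
rounds, i.e., a constant number of MapReduce rounds; the round complexity grows only as the memory per machine shrinks relative to the size of the graph.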

The work closest in spirit to ours is the recent work of \cite{woodruff}.
That work considers a number of basic statistical and graph problems in the message-passing model (where the data is distributed across a set of machines) and analyzes
their communication complexity, i.e., the total number of bits exchanged in all messages across the machines during a computation. Their main result is that {\em exact} computation of many statistical and graph problems in the distributed setting is very expensive, and often one cannot do better than simply having all machines send their data to a centralized server. The graph problems considered are computing the degree of a vertex, testing cycle-freeness, testing connectivity, computing the number of connected components, testing bipartiteness, and testing triangle-freeness. The strong lower bounds shown for these problems
assume a {\em worst-case} distribution of the input (unlike ours, which assumes a random distribution).
They posit that, in order to obtain communication-efficient protocols, one has to either allow approximation or exploit the distribution or layout of the data sets, and they leave these directions as open problems for future work.
Our work, on the other hand, addresses time (round) complexity (which differs from the notion of round complexity defined
in \cite{woodruff}) and shows that non-trivial speedup is possible for many graph problems. As noted above, for some problems,
such as shortest paths and densest subgraph, our model assumes a {\em random partition} of the input graph and also allows {\em approximation} to obtain good speedup, while for problems such as MST
we obtain good speedups even for exact algorithms. For spanning tree problems we also show tight lower bounds.
%A lower bound for computing a rooting spanning tree was shown in \cite{taskalloc}.

The $k$-machine model is closely related to the well-studied (standard) message-passing CONGEST model \cite{peleg}, and in particular to the CONGEST clique model (cf.\ Section \ref{sec:upperbounds}). The main difference is that while
many vertices of the input graph are mapped to the same machine in the $k$-machine model, in the standard model each vertex corresponds to a dedicated machine. More ``local knowledge'' is available
per vertex in the $k$-machine model (since a vertex can access, for free, information about the other vertices on the same machine) compared to the standard model. On the other hand, all nodes
assigned to a machine have to communicate through the links incident on that machine, which can limit the bandwidth (unlike the standard model, where each vertex has a dedicated processor). These differences manifest in the time complexity: certain problems have a faster time complexity in one model than in the other (cf.\ Section \ref{sec:upperbounds}).
In particular, the fastest known distributed algorithm for a given problem in the standard model may not yield the fastest algorithm in the $k$-machine model.
Furthermore, the techniques for showing complexity bounds (both upper and lower) in the $k$-machine model differ from those used in the standard model. For instance, the recently developed communication complexity techniques (see, e.g., \cite{sicomp12, podc11, podc14}) used to prove lower bounds in the standard CONGEST model are not applicable here.


\endinput

