\section{Introduction} \label{sec:intro}
%\vspace{-0.1in}
The emergence of ``Big Data'' over the last decade or so has led to new computing platforms for distributed processing of large-scale data, exemplified by MapReduce \cite{DBLP:conf/osdi/DeanG04} and, more recently, systems such as Pregel \cite{pregel} and Giraph \cite{giraph}.
In these platforms, the data --- which is simply too large to fit on a single machine --- is distributed across a group of machines
that are connected via a communication network, and the machines jointly process the data in a distributed fashion.
The focus of this paper is the distributed processing of large-scale {\em graphs}, which is increasingly important
with the rise of massive graphs such as the Web graph, social networks, biological networks, and other graph-structured data,
and the consequent need for
fast graph algorithms on such large-scale graph data.
%(For example, Facebook recently announced running graph algorithms on graphs with trillion edges \cite{trillion}.)
Indeed, there has been a recent proliferation of systems designed specifically for large-scale graph processing, e.g., Pregel \cite{pregel}, Giraph \cite{giraph}, GraphLab \cite{graphlab}, and GPS \cite{gps}.
MapReduce (developed at Google \cite{DBLP:conf/osdi/DeanG04}) has become a very successful distributed computing platform for a wide variety of large-scale computing applications and has also been used for processing graphs \cite{lin-book}. However, as pointed out by the developers of Pregel (which was also developed at Google), MapReduce may sometimes be ill-suited for implementing graph algorithms; it can lead to ``sub-optimal performance and usability issues''
\cite{pregel}. On the other hand, they note that graph algorithms seem better suited to a {\em message-passing} distributed computing model \cite{peleg, lynch}, and this is the main design principle \cite{pregel} behind Pregel (and the systems that followed it, such as Giraph \cite{giraph} and GPS \cite{gps}). While there is a rich theory for the message-passing distributed computing model \cite{peleg, lynch}, such a theory is still in its infancy for distributed graph processing systems.
%\onlyShort{In the full paper (cf.\ appendix) we provide an in-depth discussion of related work.
%}

%In this work, we study a  message-passing distributed computing  model for  graph processing and present algorithms and lower bounds for several graph problems. 
%Our work, on a high level, is motivated by distributed graph processing systems such as Pregel,  Giraph and GPS whose  design is based  on the standard message-passing model of distributed computing \cite{peleg, lynch}. Such systems are now increasingly used to process  a (large-scale) graph distributed over a set of machines, where the number of machines is typically {\em much smaller} than the size of the graph. The computation model of the above systems is  ``vertex-centric", i.e., vertices of the input graph do  local computation as well as  communicate  with each other via message passing.  
%This  model  differs from MapReduce in several aspects (cf. Section~\ref{sec:model}).  While a theory has been recently developed for the MapReduce computing paradigm (see e.g., \cite{soda-mapreduce}),
In this work, our goal is to investigate a theory of large-scale graph computation
based on a distributed message-passing model.
A fundamental issue that we would like to investigate is the amount of ``speedup''
possible in such a model vis-\`a-vis the number of machines used: more precisely, if we use $k$ machines, does the running time scale linearly (or even super-linearly) in $k$? And what are the fundamental time bounds for various graph problems?
 %Such a theoretical framework can be potentially helpful in gaining insight into the possibilities and limitations  of emerging distributed graph processing systems. 
 
\iffalse
% In the standard message passing (synchronous) distributed computing model, we have a distributed network represented
%by some (arbitrary) graph $G$, where the vertices (representing the processors --- the standard assumption is that each processor is represented by a {\em unique} vertex)  communicate via
% message-passing via the edges (representing the communication links which are typically bandwidth-restricted) \cite{peleg}. Computation proceeds in a sequence of rounds:  In each ``round", each vertex can do some local
%computation (based on its local state and the messages that it has received till the previous round) and can send out messages to its neighbors;  these messages are received by the start of the next round. Local computation is generally considered free  and {\em communication} between vertices is the costly operation. 
%A key goal is to minimize the distributed  time complexity, i.e., the number of ``rounds" needed to solve a problem.    
%Distributed time complexity for various graph problems    have been studied extensively in the message-passing model over the last three decades \cite{lynch,peleg}. 
%We consider a distributed model for graph processing  that adopts a   ``vertex-centric" message-passing model (similar to Pregel and other systems) to process  a (large-scale) graph distributed over a set of machines (the number of machines is typically much
% smaller than the size of the graph).  
Our model (described in detail in Section \ref{sec:model}) consists of  a  point-to-point communication network of $k$ machines 
%\footnote{In practice, $k$ can be typically in order of thousands. While our upper bounds will hold for all $k$, our lower bounds will hold for all $k$ greater than some fixed constant.}  
interconnected by bandwidth-restricted links; the machines communicate by message passing over the links. 
%Communicating data between the machines is the costly operation (as opposed to local computation).
The network  is used to process an arbitrary $n$-node input graph $G$ (typically $n \gg k > 1$).  
Vertices of $G$ are partitioned across the machines in an (approximately) balanced manner; in particular, we assume that the vertices are partitioned in a {\em random} fashion, which is a common implementation in many real systems \cite{pregel,stanton,1212.1121v1}.
% --- this is assumed to be done in a random fashion\footnote{Many of our results will also hold (with slight modifications) without this assumption;  only a {\em``balanced"} partition of the input graph among the machines is needed --- cf. Section \ref{sec:mapping}.}, i.e., the vertices (and their incident edges) are assigned independently and randomly to the $k$ machines. (This is
%the typical way that many real systems (e.g., Pregel) partition the input graph among the machines\footnote{Partitioning based on the structure of the graph  --- with the goal
%of minimizing the amount of communication between the machines  --- is non-trivial; finding such a ``good" partition itself might be prohibitively expensive
%and  can be problem dependent. Some papers  address this issue, see e.g., \cite{stanton,cloud,1212.1121v1}.}.)
 The distributed computation proceeds in a sequence of {\em rounds}.
 %\footnote{In Pregel, these are called as {\em supersteps}. The high-level organization of Pregel  is inspired by Valiant's BSP model and has synchronicity built into it  \cite{pregel}.}.
In each round, each vertex ``performs'' some (local) computation in parallel, which depends on the current state
 of the vertex and the messages that it received in the previous round; it can then ``send'' messages to other vertices (to be received at the start of the next round) and modify its own state and the states of its incident edges.
 %Messages are typically sent along outgoing edges, but a message may be sent to any vertex whose identifier is known (note that this is easy to accomplish since the identifier tells which machine a particular vertex is hashed to --- cf. Section \ref{sec:model}). 
 We note that the computation and communication associated with a vertex is actually performed by the {\em machine} that is responsible for processing that vertex (though it is easier to design algorithms by thinking of the vertices as the ones performing the computation \cite{pregel,giraph}). Local computation within a machine is considered free, while
 communicating messages between the machines is the costly operation\footnote{This assumption is reasonable in the context of large-scale data; e.g., it has been made in the theoretical analysis of MapReduce, see e.g., \cite{ullman-book} for a justification. Indeed, in practice, even assuming the links have a bandwidth on the order of gigabytes per second, the amount of data that has to be communicated can be on the order of terabytes or petabytes, which generally dominates the overall computation cost \cite{ullman-book}. Note that, alternatively, one can restrict the amount
 of data that a machine can process per round/timestep; our results apply to this setting as well --- cf. Section \ref{sec:model}.}.
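The cost accounting above can be made concrete with a toy simulation. The following is a minimal sketch (not from the paper, and all function names are illustrative): vertices are assigned independently and uniformly at random to $k$ machines, and in a round we charge only for messages whose endpoints reside on different machines, since intra-machine delivery is treated as free local computation.

```python
import random


def random_vertex_partition(vertices, k, seed=0):
    """Assign each vertex independently and uniformly at random to one
    of k machines (the random-partition assumption of the model)."""
    rng = random.Random(seed)
    return {v: rng.randrange(k) for v in vertices}


def inter_machine_messages(edges, home):
    """Count messages that cross machine boundaries in one round,
    assuming each vertex sends one message to each neighbour.
    Messages between co-located vertices are free."""
    crossing = 0
    for u, v in edges:
        if home[u] != home[v]:
            crossing += 2  # one message in each direction
    return crossing


# Toy example: a 4-cycle processed by k = 2 machines.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
home = random_vertex_partition(vertices, k=2, seed=1)
cost = inter_machine_messages(edges, home)
```

Dividing such a per-round cost by the link bandwidth gives the (order of) time a round takes, which is why bounding the number of rounds also bounds communication.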
 
 \iffalse
 We note that although the data center model
is related to the standard distributed computing model, there are significant differences stemming from the fact
that many vertices of the input graph are mapped to the same machine in the data center model. This means that more ``local knowledge'' is available
per vertex (since it can access, for free, information about other vertices in the same machine) in the data center model compared to the standard model. On the other hand, all nodes
assigned to a machine have to communicate through the links incident on that machine, which can limit the bandwidth (unlike
the standard model, where each vertex has a dedicated processor). These differences manifest in the time complexity --- certain problems have a faster time complexity in one model than in the other (cf. Section \ref{sec:upperbounds}). In particular, the fastest known distributed algorithm in the standard model for a given problem may not give rise to the fastest algorithm in the data center model. Furthermore, the techniques for showing the complexity bounds (both upper and lower) in the data center model differ from those in the standard model.
\fi
 
 Our main goal is to investigate the {\em time} complexity, i.e., the number of distributed ``rounds'', needed to solve various fundamental graph problems. The time complexity not only captures the (potential) speedup possible for a problem, but also implicitly captures the communication cost
 of the algorithm, since links can transmit only a limited number of bits per round; equivalently, our model can be viewed as one where, instead of links, {\em machines} can send/receive only a limited number of bits per round (cf. Section \ref{sec:model}). We present techniques for obtaining non-trivial lower bounds on the distributed time complexity. Our bounds apply even in a synchronous setting and even when the input is partitioned in a random fashion among the machines (cf. Section \ref{sec:contri}).
 We show an almost {\em tight} (up to polylogarithmic factors) lower bound of
$\Omega(n/k)$ rounds for computing a spanning tree (ST), which also implies the same bound for other fundamental graph problems such as minimum spanning tree (MST), breadth-first search tree, and shortest paths. We also show an
$\Omega(n/k^2)$ lower bound for connectivity, ST verification, and other related problems.
 Our lower bounds develop and use new bounds in {\em random-partition} communication complexity and quantify the fundamental time limitations of distributively solving graph problems. We then develop algorithmic techniques for obtaining fast algorithms for various graph problems in the $k$-machine model. We show that for many graph problems, such as MST, connectivity, and PageRank, we can obtain a running time of $\tilde{O}(n/k)$ rounds (i.e., the running time scales linearly in $k$), whereas
 for shortest paths, we present algorithms that run in $\tilde{O}(n/\sqrt{k})$ rounds (for a $(1+\epsilon)$-factor approximation) and in $\tilde{O}(n/k)$ rounds (for an $O(\log n)$-factor approximation), respectively. Our bounds are (almost) tight for problems such as computing an ST or an MST, while for other problems, such as connectivity and shortest paths, there is a non-trivial gap between the upper and lower bounds. Understanding these bounds, and investigating the best
 possible ones,
 can provide insight into the complexity of distributed graph processing.
 %Hence, it serves as a simple and  reasonable measure
  %to quantify the distributed complexity of large-scale data processing. 
\fi

