\section{Algorithms and Techniques} \label{sec:upperbounds}
\input{balanced}

\onlyLong{\subsection{The Conversion Theorem} }
\label{sec:conversion}
We now present a general conversion theorem that enables us to leverage results from the standard message-passing model \cite{peleg}.
Our conversion theorem\onlyShort{ (cf.\ full paper in the appendix for the proof)} allows us to use distributed algorithms that leverage direct communication between nodes, even when the corresponding edge is not part of the input graph.
More specifically, we can translate any distributed algorithm that works in the following clique model to the $k$-machine model.

\paragraph{The Clique Model}
Consider a complete $n$-node network $C$ and a spanning subgraph $G$ of $C$ determined by a set of (possibly weighted) edges $E(G)$.
The nodes of $C$ execute a distributed algorithm and each node $u$ is aware of the edges that are incident to $u$ in $G$.
Each node can send a message of at most $W\ge 1$ bits over each incident link per round.
For a graph problem $P$, we are interested in distributed algorithms that run on the network $C$ and, given input graph $G$, compute a feasible solution of $P$. In addition to the {\em time complexity} (the worst-case number of rounds), we are interested in the {\em message complexity} of an algorithm in this model, which is the worst-case number of messages sent over all links. We are also interested in the {\em communication degree complexity}, which is the maximum number of messages sent by any node in any round; i.e., it is the minimum integer $M'$ such that every node sends a message to at most $M'$ other nodes in each round.
Note that we can simulate any ``classic'' distributed algorithm running on a network $G$ of arbitrary topology that uses messages of size $O(\log n)$ in the clique model by simply restricting the communication to the edges in $E(G) \subseteq E(C)$ and by splitting messages into packets of size $W$.
In this case, the time and message complexities remain the same (up to logarithmic factors), while the communication degree complexity is bounded by the maximum degree of $G$.
We say that an algorithm is a {\em broadcast} algorithm if, in every round, every node $u$ either sends the same message to all other nodes or remains silent. We define the {\em broadcast complexity} of an algorithm as the total number of broadcasts performed by the nodes over the course of the algorithm.
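To make these measures concrete, the following small sketch (our own illustration; the trace encoding is hypothetical and not part of the model) computes the message complexity, communication degree complexity, and broadcast complexity of a given execution trace:

```python
# Illustrative sketch (ours, not from the paper): computing the three
# complexity measures of a clique-model execution from a trace.
# trace[i][u] is a dict {receiver: message} sent by node u in round i.

def message_complexity(trace):
    # Total number of messages sent over all links and all rounds.
    return sum(len(msgs) for rnd in trace for msgs in rnd.values())

def communication_degree(trace):
    # Maximum number of messages sent by any node in any round (M').
    return max((len(msgs) for rnd in trace for msgs in rnd.values()),
               default=0)

def broadcast_complexity(trace):
    # Number of (round, node) pairs in which the node sends the *same*
    # message to all of its recipients, i.e., performs a broadcast.
    return sum(1 for rnd in trace for msgs in rnd.values()
               if msgs and len(set(msgs.values())) == 1)

# Round 0: node 1 broadcasts "a" to nodes 2 and 3; node 2 sends two
# distinct messages, so only node 1 counts toward broadcast complexity.
trace = [{1: {2: "a", 3: "a"}, 2: {1: "x", 3: "y"}}]
```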




\begin{theorem}[Conversion Theorem] \label{thm:translation}
Suppose that there is an $\eps$-error algorithm $A_C$ that solves problem $P$ in time $\TC$ in the clique model, for any $n$-node input graph.
Then there exists an $\eps$-error algorithm $A$ that solves $P$ in the  $k$-machine model with bandwidth $W$ satisfying the following time complexity bounds with high probability:
\begin{compactdesc}
\item[(a)] If $A_C$ uses point-to-point communication with message complexity $M$ and communication degree complexity $\Delta'$, then $A$ runs in $\tilde O\left(\frac{M}{k^2 W} + \frac{\TC\Delta'}{k W}\right)$ time.
\item[(b)] If $A_C$ is a broadcast algorithm with broadcast complexity $B$, then $A$ takes $\tilde O(\frac{B}{k W} + \TC)$ time.
\end{compactdesc}
\end{theorem}
\onlyShort{
\begin{proof}[Proof Sketch]
We present the main ideas of the proof of Theorem~\ref{thm:translation} and defer the details to the full paper.
To obtain algorithm $A$ for the $k$-machine model, each machine locally simulates the execution of $A_C$ at each hosted vertex.
If algorithm $A_C$ requires a message to be sent from a node $u_1\in C$ hosted at machine $p_1$ to some node $u_2\in C$ hosted at $p_2$, then $p_1$ sends this message directly to $p_2$ via the links of the network $N$.
We will now bound the necessary number of rounds for simulating one round of algorithm $A_C$ in the $k$-machine model:
We observe that we can bound the number of messages sent in a round of $A_C$ through each machine link using \Cref{lem:mapping}(2). Let $G_i$ be the graph that captures the communication happening in round $i$ of $A_C$, i.e., there is an edge $(u,v) \in E(G_i)$ if and only if $u$ and $v$ communicated in round $i$.
By \Cref{lem:mapping}(2), each communication link of $N$ is mapped to at most $\tilde O(|E(G_i)|/k^2+\Delta_i/k)$ edges of $G_i$ (whp), where $\Delta_i$ is the maximum degree of $G_i$.
Summing up over all $T_C(n)$ rounds yields Part (a).

For (b), we modify the previous simulation to simulate a broadcast algorithm $A_C$ in our $k$-machine model:
Suppose that in the $i^{th}$ round, a node $u$ on machine $p_1$ broadcasts a message to nodes $v_1, \ldots, v_j$ on machine $p_2$.
We can simulate this round of $A_C$ by letting machine $p_1$ send only one message to $p_2$; machine $p_2$ then pretends that this message was sent from $u$ to {\em all nodes} belonging to $p_2$.
Recalling \Cref{lem:mapping}(a), the number of the $B_i$ broadcasting nodes that are assigned to a single machine is $\tilde O(\lceil B_i / k \rceil)$ w.h.p.; appropriately summing up over all $T_C(n)$ rounds (see full paper) yields the result.
\end{proof}
}
\onlyLong{
\begin{proof}
  Consider any $n$-node input graph $G$ with $m$ edges and suppose that the nodes of $G$ are assigned to the $k$ machines of the network $N$ according to the vertex partitioning process (cf.\ \Cref{sec:model}).

We now describe how to obtain algorithm $A$ for the $k$-machine model from the clique model algorithm $A_C$:
Each machine locally simulates the execution of $A_C$ at each hosted vertex.
First of all, we only need to consider inter-machine communication, since local computation at each machine happens instantaneously at zero cost.
If algorithm $A_C$ requires a message to be sent from a node $u_1\in C$ hosted at machine $p_1$ to some node $u_2\in C$ hosted at $p_2$, then $p_1$ sends this message directly to $p_2$ via the links of the network $N$.
(Recall that a machine $p_1$ knows the hosting machines of all endpoints of all edges (in $G$) that are incident to a node hosted at $p_1$.)
Moreover, $p_1$ adds a header containing the IDs of $u_1$ and $u_2$ to ensure that $p_2$ can correctly deliver the message to the simulation of $A_C$ at $u_2$.
Each message is split into packets of size $W$, which means that sending all packets that correspond to such a message requires $\lceil O(\log n) / W\rceil$ rounds.
In the worst case (i.e., $W=1$), this requires $O(\log n)$ rounds per message, which does not change our complexity bounds.
Thus, for the remainder of the proof, we assume that $W$ is large enough such that any message generated by $A_C$ can be sent in $1$ round in the $k$-machine model.

\smallskip\noindent{\em Proof of (a):} We will bound the number of messages sent in each round through each link using \Cref{lem:mapping}(2). Let $G_i$ be the graph whose node set is that of the input graph (and of the clique model), and in which there is an edge between nodes $u$ and $v$ if and only if a message is sent between $u$ and $v$ in round $i$ of the algorithm; in other words, $G_i$ captures the communication happening in round $i$. From \Cref{lem:mapping}(2), we know that (w.h.p.) each communication link of $N$ is mapped to at most $\tilde O(|E(G_i)|/k^2+\Delta_i/k)$ edges of $G_i$, where $\Delta_i$ is the maximum degree of $G_i$. This means that each machine needs to send at most $\tilde O(|E(G_i)|/k^2+\Delta_i/k)$ messages over a specific communication link with high probability.
In other words, the $i^{th}$ round of $A_C$ can be simulated in $\tilde O(|E(G_i)|/k^2 W+\Delta_i/k W)$ rounds. By summing up over all rounds of $A_C$, we conclude that the number of rounds needed to simulate $A_C$ is 
\begin{align*}
\tilde O\left(\sum_{i=1}^{\TC} \left(\frac{|E(G_i)|}{k^2 W}+\frac{\Delta_i}{k W}\right)\right) &= \tilde O\left( \frac{M}{k^2 W}+\frac{\TC\Delta'}{k W}\right)
\end{align*}
where the equality is because of the following facts: (1) $\sum_{i=1}^{\TC} |E(G_i)| = O(M)$ since $|E(G_i)|$ is at most two times the number of messages sent by all nodes in the $i^{th}$ round, and (2) $\Delta_i\leq \Delta'$. This proves (a). 
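For intuition, the per-link load that this argument bounds can be estimated with the following sequential sketch (our own illustration; the helper names are hypothetical), which partitions vertices uniformly at random over $k$ machines and counts how many messages of a single round must cross each machine link:

```python
import random

# Illustrative sketch (ours, not from the paper): per-link load when one
# round of a clique-model algorithm is simulated in the k-machine model
# under a random vertex partition. Intra-machine messages are free; only
# messages crossing a machine link count toward that link's load.

def random_partition(n, k, seed=0):
    rng = random.Random(seed)
    return {v: rng.randrange(k) for v in range(n)}

def link_loads(edges, machine_of):
    loads = {}
    for (u, v) in edges:
        p, q = machine_of[u], machine_of[v]
        if p != q:
            key = (min(p, q), max(p, q))
            loads[key] = loads.get(key, 0) + 1
    return loads

# Round i of A_C: a star centered at node 0, so Delta_i = n - 1 and
# |E(G_i)| = n - 1; the heaviest link load is roughly Delta_i / k.
n, k = 100, 5
machine_of = random_partition(n, k)
edges = [(0, v) for v in range(1, n)]
loads = link_loads(edges, machine_of)
max_load = max(loads.values())
```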
 



\smallskip\noindent{\em Proof of (b):} We first slightly modify the previous simulation to handle broadcast algorithms: Note that if $A_C$ is a broadcast algorithm, then for the $i^{th}$ round ($i\ge 1$) of algorithm $A_C$, if a node $u$ belonging to machine $p_1$ sends messages to nodes $v_1, \ldots, v_j$ ($j\ge 1$) belonging to machine $p_2$, we know that $u$ sends {\em the same message} to $v_1, \ldots, v_j$. Thus, when we simulate this round of $A_C$, we let machine $p_1$ send only one message to $p_2$, instead of $j$ messages. Then, machine $p_2$ pretends that this message was sent from $u$ to {\em all nodes} belonging to $p_2$ that have an edge to node $u$. (We cannot specify the destination nodes $v_1, \ldots, v_j$ in this message, as this might increase the length of the message significantly.) 

We now analyze this new simulation and show that it finishes in $\tilde O(\frac{B}{k W}+ \TC)$ rounds. Let $B_i$ be the number of nodes that perform a broadcast in round $i$ of the run of $A_C$ in the clique model, and note that $B = \sum_{i=1}^{\TC} B_i$.
According to \Cref{lem:mapping}(a), the number of these $B_i$ broadcasting nodes that are assigned to a single machine is $\tilde O(\lceil B_i / k \rceil)$ w.h.p.; in other words, w.h.p., each machine contains $\ell_i = \tilde O(\lceil B_i / k \rceil )$ of the $B_i$ nodes. Thus, for every $i$, we instruct algorithm $A$ to simulate these $B_i$ broadcasts in the $k$-machine model in $\lceil \ell_i / W \rceil$ rounds. Since $A_C$ takes at most $\TC$ rounds, it follows that algorithm $A$ takes $\tilde O(\frac{B}{k W} + \TC)$ rounds in the $k$-machine model.
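The saving obtained by sending one message per destination machine, rather than one per destination node, can be illustrated with a small sketch (our own illustration, with hypothetical names):

```python
# Illustrative sketch (ours): message cost of simulating one round of a
# broadcast algorithm in the k-machine model. A broadcasting node's
# machine forwards the message once per other machine (deduplicated),
# rather than once per destination node.

def simulation_messages(num_broadcasters, k, n, dedup=True):
    per_broadcaster = (k - 1) if dedup else (n - 1)
    return num_broadcasters * per_broadcaster

n, k, b = 20, 4, 5          # 20 nodes, 4 machines, 5 broadcasters
naive = simulation_messages(b, k, n, dedup=False)  # 5 * 19 = 95
dedup = simulation_messages(b, k, n, dedup=True)   # 5 * 3  = 15
```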
\end{proof}


It is easy to see that a simulation similar to the one employed in the proof of Theorem~\ref{thm:translation} provides the same complexity bounds, if we limit the
total communication of each machine (i.e.\ bits sent/received) to at most $k W$ bits per round, instead of restricting the bandwidth of individual inter-machine links to $W$ bits.
To see why this is true, observe that throughout the above simulation, each machine is required to send/receive at most $k W$ bits in total per simulated round with high probability. 
}





\subsection{Algorithms} \label{sec:applications}

We now consider various important graph problems in the $k$-machine model. 
For the sake of readability, we assume a bandwidth of $\Theta(\log n)$ bits, i.e., parameter $W=\Theta(\log n)$.
Observe that the simple solution of aggregating the entire information about the input graph $G$ at a single machine takes $O(m/k)$ rounds; thus we are only interested in algorithms that beat this trivial upper bound.%
\onlyLong{ Our results are summarized in Table~\ref{tab:results}.}%
\onlyShort{ Our results are summarized in Table~\ref{tab:results} and described in more detail in the full paper.}

\medskip

\noindent {\bf Breadth-First Search Tree (\bfs).}
To get an intuition for the different bounds obtained by applying either Theorem~\ref{thm:translation}(a) or Theorem~\ref{thm:translation}(b) to an algorithm in the clique model, consider the problem of computing a breadth-first search (\bfs) tree rooted at a fixed source node.
If we use Theorem~\ref{thm:translation}(a), we get a bound of $\tilde O(m/k^2 + D \Delta /k )$ rounds.
In contrast, recalling that each node performs $O(1)$ broadcasts, Theorem~\ref{thm:translation}(b) yields $\cT_{1/n}^k(\bfs) \in \tilde{O}(n/k +D)$.
\onlyLong{We will leverage these bounds when considering graph connectivity and spanning tree verification below.}

\medskip

\noindent {\bf Minimum Spanning Tree (\mst), Spanning Tree Verification (\st) and Graph Connectivity (\conn).}
An efficient algorithm for computing an \mst of the input graph was given in \cite{GallagerHS83}; it proceeds by merging ``MST fragments'' in parallel, where initially each vertex forms a fragment by itself.
In each of the $O(\log n)$ phases, each fragment computes the minimum outgoing
edge (pointing to another fragment) and tries to merge with the respective
fragment.
Since any MST has $n-1$ edges, at most $n-1$ edges need to be added in total.
This yields a total broadcast complexity of $\tilde O(n)$ and thus
Theorem~\ref{thm:translation}(b) readily implies the bound of $\tilde
O(n/k)$.
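The fragment-merging idea can be sketched sequentially as follows (a Bor\r{u}vka-style sketch of ours, not the distributed implementation of \cite{GallagerHS83}; it assumes edge weights are totally ordered, e.g., by tie-breaking on endpoint IDs):

```python
# Sequential sketch (ours) of MST-fragment merging in the style of
# Gallager-Humblet-Spira / Boruvka: in each phase, every fragment picks
# its minimum-weight outgoing edge and merges along it.

def mst_by_fragment_merging(n, weighted_edges):
    # weighted_edges: list of (w, u, v); returns the set of MST edges.
    parent = list(range(n))          # union-find over fragments

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = set()
    while True:
        best = {}                    # fragment -> min outgoing edge
        for (w, u, v) in weighted_edges:
            fu, fv = find(u), find(v)
            if fu == fv:
                continue
            for f in (fu, fv):
                if f not in best or (w, u, v) < best[f]:
                    best[f] = (w, u, v)
        if not best:                 # single fragment remains
            break
        for (w, u, v) in best.values():
            fu, fv = find(u), find(v)
            if fu != fv:
                parent[fu] = fv      # merge the two fragments
                mst.add((w, u, v))
    return mst

# Path-like example: MST is {(1,0,1), (2,1,2), (4,2,3)}, total weight 7.
edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
tree = mst_by_fragment_merging(4, edges)
```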
We can use an \mst algorithm to verify \emph{graph connectivity}, which in turn can be used for \st\ verification.
\onlyShort{The details are in the full paper.}%
\onlyLong{%
We assign weight $1$ to all edges of the input graph $G$ and then add an edge with infinite weight between any pair of nodes $u$, $v$ where $(u,v) \notin E(G)$, yielding a modified graph $G'$.
Clearly, $G$ is disconnected iff an MST of $G'$ contains an edge with infinite weight.
This yields the first part of the upper bound for graph connectivity stated in \Cref{tab:results}.
We now describe how to verify whether an edge set $S$ is an \st, by employing a given algorithm $A$ for \conn.
Note that, for \emph{\st\ verification}, each machine $p$ initially knows, for each edge incident to one of its nodes, whether that edge is assumed to be part of the \st, and eventually $p$ has to output either \textsc{yes} or \textsc{no}.
First, we run $A$ on the graph induced by $S$, and then we compute the size of $S$ as follows: 
Each machine $p$ adds $1$ to a local count for every edge $(u,v)\in S$ whose endpoints are both hosted at $p$, and $1/2$ for every edge of $S$ with exactly one endpoint hosted at $p$.
Then, all machines exchange their counts via broadcast, which takes one round (since each count is at most $n$ and $W\in\Theta(\log n)$), and determine the final count by summing up all received counts, including their own.
Each machine outputs \textsc{yes} iff (1) the \conn\ algorithm $A$ returned \textsc{yes}, and (2) the final count is $n-1$.
Thus we get the same bounds for \st verification as for graph connectivity.
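The counting step can be sketched as follows (our own illustration with hypothetical names); summing the local counts over all machines counts every edge of $S$ exactly once:

```python
# Illustrative sketch (ours) of the counting step in spanning-tree
# verification: a machine adds 1 for an edge of S with both endpoints
# local and 1/2 for an edge with exactly one local endpoint, so the
# machine counts sum to |S| over all machines.

def local_count(S, machine_of, p):
    c = 0.0
    for (u, v) in S:
        local_endpoints = (machine_of[u] == p) + (machine_of[v] == p)
        if local_endpoints == 2:
            c += 1.0
        elif local_endpoints == 1:
            c += 0.5
    return c

# 4 nodes on 2 machines; S is a path 0-1-2-3, so |S| = n - 1 = 3.
machine_of = {0: 0, 1: 0, 2: 1, 3: 1}
S = [(0, 1), (1, 2), (2, 3)]
total = sum(local_count(S, machine_of, p) for p in range(2))
is_spanning_tree_size = (total == len(machine_of) - 1)
```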

Recalling that we can compute a \bfs in $\tilde O(m/k^2 + D \Delta /k )$
rounds, it is straightforward to see that the same bound holds for \conn\ (and thus also \st verification):
First, we run a leader election algorithm among the $k$ machines.
This can be done in $O(1)$ rounds (and with $\tilde O(\sqrt{k})$ messages) w.h.p.\ by using the algorithm of \cite{icdcn13}.
The designated leader machine then chooses an arbitrary node $s$ as the source node and executes a \bfs\ algorithm.
Once this algorithm has terminated, each machine locally computes the number of its vertices that are part of the \bfs\ and then computes the total number of vertices in the \bfs\ by exchanging its count (similarly to the \st\ verification above).
The input graph is connected iff the \bfs contains all vertices.
}

\medskip

\noindent {\bf PageRank.}
The PageRank problem is to compute the PageRank distribution of a given graph (which may be directed or undirected).
A distributed PageRank algorithm, based on distributed random walks, was presented in \cite{DBLP:conf/icdcn/SarmaMPU13}:
Initially, each node generates $\Theta(\log n)$ random walk tokens.
A node forwards each token with probability $1-\delta$ and terminates the token with probability $\delta$ (called the {\em reset}
probability).
Clearly, every token will take at most $O(\log n / \delta)$ steps with high probability before being terminated.
From Lemma~2.2 of \cite{DBLP:conf/podc/SarmaNP09} we know that these steps can be implemented in $O(\log^2 n)$ rounds in the clique model,
and since this requires $O(n\log^2 n/\delta)$ messages to be sent in total, Theorem~\ref{thm:translation}(a) yields that,
for any $\delta > 0$, there is a randomized algorithm for computing \pagerank\ in the $k$-machine  model such that $\cT_{1/n}^k(\pagerank) \in \tilde O( \frac{n}{\delta k})$.
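The token-forwarding process underlying this algorithm can be sketched sequentially as follows (our own simplified illustration, not the distributed implementation of \cite{DBLP:conf/icdcn/SarmaMPU13}):

```python
import random

# Sequential sketch (ours) of the random-walk-based PageRank estimate:
# every node starts several tokens; a token moves to a uniformly random
# neighbor with probability 1 - delta and terminates with the reset
# probability delta. A node's PageRank is estimated as its share of all
# token visits.

def estimate_pagerank(adj, delta=0.15, tokens_per_node=50, seed=1):
    rng = random.Random(seed)
    visits = {u: 0 for u in adj}
    for u in adj:
        for _ in range(tokens_per_node):
            cur = u
            while True:
                visits[cur] += 1
                if rng.random() < delta or not adj[cur]:
                    break                      # token terminates (reset)
                cur = rng.choice(adj[cur])     # token takes one step
    total = sum(visits.values())
    return {u: visits[u] / total for u in adj}

# On a 4-cycle every node should receive rank close to 1/4 by symmetry.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
pr = estimate_pagerank(adj)
```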


\onlyLong{

\onlyLong{
\paragraph{Computing a $(2\delta-1)$-Spanner}
The algorithm of \cite{baswana} computes a $(2\delta-1)$-spanner of the input graph $G$, for some $\delta \in O(\log n)$, in $O(\delta^2)$ rounds (using messages of size $O(\log n)$); the spanner has an expected number of $O(\delta n^{1+1/\delta})$ edges (cf.\ Theorem~5.1 in \cite{baswana}).
That is, each node broadcasts $O(\delta^2)$ times, for a total broadcast complexity of $O(n\delta^2)$.
Applying Theorem~\ref{thm:translation}(b) yields a bound of $\tilde O(n/k)$ rounds in the $k$-machine model.
}


\paragraph{Single-Source Shortest Paths (\sssp, \spt) and All-Pairs Shortest Paths (\apsp)} 
We show that, in the $k$-machine model, \sssp\ can be $(1+\epsilon)$-approximated in $\tilde O(n/\sqrt{k})$ time and \apsp\ can be $(2+\epsilon)$-approximated in $\tilde O(n\sqrt{n}/k)$ time. 

Recall that, for \sssp, we need to compute the distance between each node and a designated source node.
Nanongkai \cite{Nanongkai13-ShortestPaths} presented a $\tilde O(\sqrt{n}D^{1/4})$-time algorithm for \sssp, which implies a $\tilde O(\sqrt{n})$-time algorithm in the clique model. We show that the ideas in \cite{Nanongkai13-ShortestPaths}, along with Theorem~\ref{thm:translation}(b), leads to a $\tilde O(n/\sqrt{k})$-time $(1+\epsilon)$-approximation algorithm in the $k$-machine model. We sketch the algorithm in \cite{Nanongkai13-ShortestPaths} here\footnote{The algorithm is in fact a simplification of the algorithm in \cite{Nanongkai13-ShortestPaths} since we only have to deal with the clique model}. First, every node broadcasts $\rho$ edges incident to it of minimum weight (breaking tie arbitrarily), for some parameter $\rho$ which will be fixed later. Using this information, every node internally compute $\tilde O(1)$ integral weight functions (without communication). For each of these weight functions, we compute a BFS tree of depth $n/\rho$ from the source node, treating an edge of weight $w$ as a path of length $w$. Using two techniques called {\em light-weight \sssp} and {\em shortest-path diameter reduction}, the algorithm of \cite{Nanongkai13-ShortestPaths}  gives a $(1+\epsilon)$-approximation solution. Observe that this algorithm uses broadcast communication. Its time complexity is clearly $\TC=\tilde O(\rho+n/\rho)$. (Thus, by setting $\rho=\sqrt{n}$, we have the running time of $\tilde O(\sqrt{n})$ in the clique model.) Its broadcast complexity is $B=\tilde O(n\rho)$ since every node has to broadcast $\rho$ edges in the first step and the BFS tree algorithm has  $O(n)$ broadcast complexity. Its message complexity is $M=\tilde O(n^2\rho)$ (the BFS tree algorithm has $O(n^2)$ message complexity since a message will be sent through each edge once). 
%
%
By Theorem~\ref{thm:translation}(b), the time needed to solve \sssp\ in the $k$-machine model is
$\tilde O(\frac{n\rho}{k}+\rho+n/\rho).$ 
%
Setting $\rho=\sqrt{k}$, we obtain a running time of $\tilde O(\sqrt{k}+n/\sqrt{k})=\tilde O(n/\sqrt{k})$, where the equality holds because $k\leq n$.  
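The choice $\rho=\sqrt{k}$ balances the two dominant terms, as the following short calculation (included for intuition) shows:

```latex
\[
  \frac{d}{d\rho}\left(\frac{n\rho}{k}+\frac{n}{\rho}\right)
  = \frac{n}{k}-\frac{n}{\rho^{2}} = 0
  \quad\Longleftrightarrow\quad \rho^{2} = k
  \quad\Longleftrightarrow\quad \rho = \sqrt{k}.
\]
```

At this balancing point both terms equal $n/\sqrt{k}$, which dominates the remaining additive term $\rho=\sqrt{k}$ whenever $k\leq n$.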

In \cite{Nanongkai13-ShortestPaths}, a similar idea was also used to obtain a $(2+\epsilon)$-approximation $\tilde O(\sqrt{n})$-time algorithm for \apsp\ in the clique model. This algorithm is almost identical to the above algorithm, except that it creates BFS trees of depth $n/\rho$ from $n/\rho$ centers instead of just the source. With this modification, it can be shown that the running time remains $\TC=\tilde O(\rho+n/\rho)$. (Thus, setting $\rho=\sqrt{n}$ gives a running time of $\tilde O(\sqrt{n})$ in the clique model.) The broadcast complexity becomes $B=\tilde O(n\rho+n^2/\rho)$, since each BFS tree algorithm has $O(n)$ broadcast complexity. The message complexity becomes $M=\tilde O(n^2\rho+n^3/\rho)$, since each BFS tree algorithm has $O(n^2)$ message complexity. By Theorem~\ref{thm:translation}(b), the time needed to solve \apsp\ in the $k$-machine model is
$\tilde O(\frac{n\rho+n^2/\rho}{k}+\rho+n/\rho).$
%
Setting $\rho=\sqrt{n}$, we obtain a running time of $\tilde O(\frac{n\sqrt{n}}{k})$.
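The choice $\rho=\sqrt{n}$ again balances the dominant terms (a short calculation, included for intuition):

```latex
\[
  \frac{d}{d\rho}\left(\frac{n\rho}{k}+\frac{n^{2}}{k\rho}\right)
  = \frac{n}{k}-\frac{n^{2}}{k\rho^{2}} = 0
  \quad\Longleftrightarrow\quad \rho^{2} = n
  \quad\Longleftrightarrow\quad \rho = \sqrt{n}.
\]
```

At this point both terms equal $n\sqrt{n}/k$, which dominates the additive term $\rho+n/\rho = O(\sqrt{n})$ whenever $k\leq n$.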

Since the algorithm of \cite{Nanongkai13-ShortestPaths} also constructs a shortest path tree while computing the shortest path distances, we get analogous bounds for the \spt problem, which requires each machine to know which of its edges are part of the shortest path tree to the designated source.

We can leverage the technique of computing a $(2\delta-1)$-spanner with an expected number of $\tilde O(n^{1+1/\delta})$ edges in $\tilde O(n/k)$ rounds\onlyShort{ (described in the full paper)}.
We can simply collect all spanner edges at one designated machine $p$, which takes $\tilde O(n^{1+1/\delta} /k )$ time, and then locally compute a $(2\delta-1)$-approximation for the shortest path problems at machine $p$.
In particular, for $\delta=\Theta(\log n)$, we obtain a spanner with an expected number of $O(n)$ edges, and thus a $O(\log n)$-approximation for the shortest path problems in expected $\tilde O(n/k)$ rounds.

For computing exact \apsp (resp.\ \sssp), we can use the distributed Bellman-Ford algorithm \cite{peleg,lynch}.
This algorithm takes $S$ rounds, where $S$ is the shortest path diameter, and thus its broadcast complexity is $O(nS)$.
By Theorem~\ref{thm:translation}(b), we get a round complexity of $\tilde O(nS/k+S)$ in the $k$-machine model.
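A round-by-round simulation of distributed Bellman-Ford is sketched below (a centralized Python sketch; in the clique model each node would broadcast its current estimate to its $G$-neighbors in every round, and the round count matches the shortest path diameter $S$).

```python
def distributed_bellman_ford(adj, src):
    """Simulate distributed Bellman-Ford: in each round every node sends its
    current distance estimate to all neighbors, and each node keeps the best
    estimate seen so far.  Converges after at most S rounds, where S is the
    shortest path diameter.  adj maps u -> [(v, w), ...]."""
    INF = float("inf")
    dist = {u: (0 if u == src else INF) for u in adj}
    rounds = 0
    while True:
        new_dist = dict(dist)
        for u in adj:                      # u broadcasts dist[u] ...
            if dist[u] == INF:
                continue
            for v, w in adj[u]:            # ... and each neighbor v relaxes
                new_dist[v] = min(new_dist[v], dist[u] + w)
        if new_dist == dist:               # no estimate changed: converged
            return dist, rounds
        dist = new_dist
        rounds += 1
```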


\onlyLong{
  \paragraph{Densest Subgraph} We show that Theorem~\ref{thm:translation} implies that the densest subgraph problem can be approximated in $\tilde O(\min(\frac{m}{k^2}, \frac{n}{k}))$ time in the $k$-machine model. In \cite{densest}, Das Sarma et al.\ presented a $(2+\epsilon)$-approximation algorithm for the densest subgraph problem. The idea is very simple: in every round, compute the average degree of the current graph and delete all nodes whose degree is less than $(1+\epsilon)$ times the average degree. This process generates a sequence of subgraphs of the original network, and the algorithm outputs the densest subgraph among them. It was shown in \cite{densest} that this algorithm produces a $(2+\epsilon)$-approximate solution.
%
They also proved that this algorithm stops after $O(\log_{1+\epsilon} n)$ rounds, implying a time complexity of $\TC=O(\log n)$ in the clique model. The message complexity is $M=\tilde O(m)$ since every node has to announce to its neighbors in the input graph when it is deleted. For the same reason, the broadcast complexity is $B=\tilde O(n)$. Note that this algorithm uses only broadcasts, so by Theorem~\ref{thm:translation}(b), the time needed to solve this problem in the $k$-machine model is $\tilde O(\frac{n}{k})$, as desired.
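A centralized sketch of the peeling process follows (the exact thresholding and termination details in \cite{densest} differ slightly; this is only illustrative).

```python
def densest_subgraph_peel(edges, eps=0.1):
    """(2+eps)-approximate densest subgraph via peeling: repeatedly delete
    all nodes of degree below (1+eps) times the current average degree,
    remembering the densest intermediate subgraph seen along the way."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = set(adj)
    best_density, best_nodes = 0.0, set(nodes)
    while nodes:
        m = sum(len(adj[u] & nodes) for u in nodes) // 2
        if m == 0:
            break                          # no edges left: peeling is done
        density = m / len(nodes)
        if density > best_density:
            best_density, best_nodes = density, set(nodes)
        avg = 2 * m / len(nodes)
        # at least one node has degree <= avg, so each pass removes something
        nodes -= {u for u in nodes if len(adj[u] & nodes) < (1 + eps) * avg}
    return best_density, best_nodes
```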
%
}

\onlyLong{
\paragraph{Cut Sparsifier, Min Cut, Sparsest Cut, etc.} An $\epsilon$-cut-sparsification of a graph $G$ is a graph $G'$ on the same set of nodes such that every cut in $G'$ is within a factor of $(1\pm \epsilon)$ of the corresponding cut in $G$. We show that we can use the {\em refinement sampling} technique of Goel et al.\ \cite{GoelKK10} to compute an $\epsilon$-sparsification with $\tilde O(n)$ edges in $\tilde O(n/k)$ time in the $k$-machine model. 
%
By aggregating this sparsification at a single machine, we can approximately solve cut-related problems, e.g., obtain $(1\pm\epsilon)$-approximate solutions to the minimum cut and sparsest cut problems. 
%
The main component of the algorithm of Goel et al.\ is to repeatedly sparsify the graph by keeping each edge with probability $2^{-\ell}$ for some $\ell$, and to compute the connected components after each sparsification step. Repeating this process $\tilde O(1)$ times yields a probability $z(e)$ for each edge $e$. Goel et al.\ showed that we can (locally) sample edges with these probabilities and assign appropriate weights to the sampled edges to obtain an $\epsilon$-sparsification. 
%
%obtain an $\epsilon$-sparsification by sampling each edge with probability $z(e)$ and 
%
It thus suffices to compute connected components quickly in the $k$-machine model. This can be done by simply invoking the \mst algorithm, which takes $\tilde O(n/k)$ rounds. Since we need to run this algorithm $\tilde O(1)$ times, the total running time is $\tilde O(n/k)$. 
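The skeleton of this procedure can be sketched as follows. This is a simplified, centralized sketch: the exact rule by which Goel et al.\ derive $z(e)$ from the sampling levels is omitted, and the connected-components step is what the \mst algorithm provides in the $k$-machine model.

```python
import random

def connected_components(nodes, edges):
    """Union-find connected components (in the k-machine model this step
    would be realized by the O~(n/k)-round MST algorithm)."""
    parent = {u: u for u in nodes}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return {u: find(u) for u in nodes}

def refinement_levels(nodes, edges, levels=8, seed=0):
    """For each edge, record the highest level l at which its endpoints are
    still connected when each edge survives every halving step with
    probability 1/2.  These levels are the raw material from which the
    sampling probabilities z(e) are derived."""
    rng = random.Random(seed)
    level = {e: 0 for e in edges}
    sample = list(edges)
    for l in range(1, levels + 1):
        sample = [e for e in sample if rng.random() < 0.5]
        comp = connected_components(nodes, sample)
        for u, v in edges:
            if comp[u] == comp[v]:
                level[(u, v)] = l
    return level
```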
}

\onlyLong{
\paragraph{Covering Problems on Graphs}
We now describe how to solve covering problems like maximal independent set (MIS) in our model.
We first consider MIS and related covering problems on simple graphs, and then describe how to obtain an MIS on an input hypergraph.
A well known distributed algorithm for computing a maximal independent set (\mis) is due to \cite{luby}:
The algorithm proceeds in phases and in each phase, every \emph{active} node $v$---initially every node is active---marks itself with probability $1/(2 d_v)$, where $d_v$ is the degree of $v$.
If $v$ turns out to be the only marked node in its neighborhood, $v$ enters the \mis, notifies all of its neighbors who no longer participate (i.e.\ become \emph{inactive}) in future phases and terminates.
When two neighboring nodes both mark themselves in the same phase, the lower-degree node unmarks itself.
Nodes that were not deactivated proceed to the next phase and so forth.
It was shown in \cite{luby} that this algorithm terminates in $O(\log n)$ rounds with high probability.
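The phase structure above can be simulated as follows (a centralized Python sketch; \texttt{adj} maps each node to its neighbor set, and conflicts between marked neighbors are broken by degree, with node ids as an illustrative tie-breaker).

```python
import random

def luby_mis(adj, seed=0):
    """Round-by-round simulation of Luby's algorithm: every active node
    marks itself with probability 1/(2*d), d being its active degree;
    a marked node unmarks if a marked neighbor beats it by (degree, id);
    sole survivors join the MIS and deactivate their neighborhood."""
    rng = random.Random(seed)
    active = set(adj)
    mis = set()
    while active:
        deg = {u: len(adj[u] & active) for u in active}
        # isolated active nodes can join immediately
        marked = {u for u in active
                  if deg[u] == 0 or rng.random() < 1.0 / (2 * deg[u])}
        # conflict resolution: the lower (degree, id) endpoint unmarks
        winners = {u for u in marked
                   if all(v not in marked or (deg[u], u) > (deg[v], v)
                          for v in adj[u] & active)}
        mis |= winners
        active -= winners
        active -= {v for u in winners for v in adj[u]}  # neighbors deactivate
    return mis
```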
%, and the total number of messages sent is $O(m\log n)$.
%Applying \Cref{thm:translation}(a) yields an $\tilde O(m/k^2 + n/k)$ round distributed algorithm in the $k$-machine model.
%
Since each node sends the same messages to all neighbors, we can analyze the communication in terms of broadcasts, yielding a broadcast complexity of $O(n\log n)$ (whp).
Applying Theorem~\ref{thm:translation}(b) yields a round complexity of $\tilde O(n/k)$.
Alternatively, for bounded degree graphs, applying Theorem~\ref{thm:translation}(a) gives us a running time of $\tilde O(m/k^2 + \Delta/k)$, which is faster when $\Delta \ll k$.
Considering the locality preserving reductions (cf.\ \cite{KuhnMW10}) between \mis, maximal matching (\maximalm), minimal dominating set (\minimalds), and computing a $2$-approximation of the minimum vertex cover (\mvc),  we get that
%\begin{corollary}
 $\cT_{1/n}^k(\mis)$,
$\cT_{1/n}^k(\maximalm)$, $\cT_{1/n}^k(\mvc)$, $\cT_{1/n}^k(\minimalds)$ are
$\tilde O(\min(n/k,m/k^2 + \Delta/k))$.

We now describe how to obtain an $\tilde O(n/k)$ time algorithm for all graph covering problems directly in the $k$-machine model, without resorting to Theorem~\ref{thm:translation}, assuming that $k\le \tilde O(\sqrt{n})$.
In particular, this yields an $\tilde O(n/k)$ algorithm for the problem of finding a maximal independent set in a hypergraph (\hmis), which has been studied extensively in the PRAM model of computation (cf.\ \cite{kelsen,beame}).
Note that, for the hypergraph setting, the input graph $G$ is a hypergraph and if some node $u$ has home machine $p_1$, then $p_1$ knows all hyperedges (and the corresponding machines) that contain $u$.
To the best of our knowledge, there is no efficient distributed algorithm known for \hmis.
First assume that there is an ordering of the machines with ids $1,\dots,k$.
Such an ordering can be obtained by running the $O(1)$-time leader election algorithm of \cite{icdcn13}.
The elected leader (machine) then arbitrarily assigns unique ids to all the other machines.
Then, we sequentially process the nodes at each machine and proceed in $k$ phases.
In the first phase, machine $1$ locally determines, for all of its nodes, their membership status in the \hmis.
Next, machine $1$ computes an arbitrary enumeration of its nodes and sends the status (either $0$ or $1$) and node id of the first $k$ nodes over its $k$ links (i.e., a single status is sent over each link).
When a machine receives this message from machine $1$, it simply broadcasts this message to all machines in the next round.
Thus, after $2$ rounds, all machines know the status of the first $k$ nodes of machine $1$.
Then, machine $1$ sends the status of the next $k$ nodes and so forth.
By \Cref{lem:mapping}, each machine holds $\tilde O(n/k)$ nodes with high probability, and therefore every machine will know the status of all the nodes of machine $1$ after $\tilde O(n/k^2)$ rounds.
After machine $1$ has completed sending out all statuses, all other machines locally use this information to compute the statuses of their nodes (if possible).
For example, if some node $u$ at machine $1$ is in the \hmis (i.e.\ has status $1$) and adjacent to some node $v$ at machine $2$, then machine $2$ sets the status of $v$ to $0$.
Then, machine $2$ locally computes the status of its remaining undetermined nodes such that they are consistent with the previously received statuses, and starts sending this information to all other machines in the same fashion.
Repeating the same process for each of the $k$ machines yields a total running time of $\tilde O(n/k)$.
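A centralized simulation of these $k$ phases is sketched below; the broadcast pipelining is abstracted away, and nodes are processed in an arbitrary fixed order within each machine. Recall that a set is independent in a hypergraph if no hyperedge lies fully inside it.

```python
def hypergraph_mis(machine_nodes, hyperedges):
    """Simulate the k phases: machine i fixes the status of its own nodes,
    consistent with all statuses fixed (and broadcast) in earlier phases,
    so that no hyperedge ever lies fully inside the independent set."""
    in_set = set()
    for nodes in machine_nodes:            # phase i: machine i's nodes
        for u in nodes:
            # u joins unless that would place some whole hyperedge inside
            blocked = any(set(e) <= in_set | {u}
                          for e in hyperedges if u in e)
            if not blocked:
                in_set.add(u)
    return in_set
```

Since \texttt{in\_set} only grows, any node that was blocked in its own phase stays blocked afterwards, so the result is a maximal independent set of the hypergraph.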

}

\onlyLong{
\paragraph{Finding Triangles and Subgraphs}
For the subgraph isomorphism problem $\subiso_d$, we are given two input graphs: the usual $n$-node graph $G$ and
a $d$-vertex graph $H$, for $d \in O(1)$.
We want to answer the question whether $H \subseteq G$.
A distributed algorithm for the clique model that runs in $O(n^{(d-2)/d})$ rounds was given by \cite{DBLP:conf/wdag/DolevLP12}.
Since the total number of messages sent per round is $O(n^2)$, the message complexity is $O(n^{2+(d-2)/d})$, and
Theorem~\ref{thm:translation}(a) yields an algorithm for the $k$-machine model that runs in $\tilde O(n^{2 + (d-2)/d} / k^2 + n/k)$ rounds.
%For the special case where $H$ is a triangle, we get an $\tilde O(n^{7/3} / k^2 + n/k)$ round algorithm in the $k$-machine model.

We use $\tri$ to denote the restriction of $\subiso_3$ to the case where $H$ is a triangle.
The following is a simple algorithm for the clique model: Each node locally collects its $2$-neighborhood information, checks for triangles and then either outputs \textsc{yes} or \textsc{no}.
This requires each node to send a message for each of its at most $\Delta$ neighbors. 
The total number of messages sent is $O(n\Delta^2)$, and the algorithm takes $O(\Delta)$ rounds to send all these messages.
Applying Theorem~\ref{thm:translation}(a), we get a distributed algorithm in the $k$-machine model with round complexity of $\tilde O(n\Delta^2 / k^2 + \Delta^2 / k)$, which is better than the above bound when $\Delta$ is sufficiently small.
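The local check can be sketched as follows: each node intersects its own neighbor set with each neighbor's, which is exactly the information gathered from its $2$-neighborhood.

```python
def has_triangle(adj):
    """Each node u learns the adjacency set of every neighbor v (i.e. its
    2-neighborhood) and reports a triangle if some neighbor of v is also
    a neighbor of u.  adj maps each node to its neighbor set."""
    for u, nbrs in adj.items():
        for v in nbrs:
            if (adj[v] & nbrs) - {u, v}:   # common neighbor w: triangle u,v,w
                return True
    return False
```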
%Alternatively,
%We can think of the communication performed by this algorithm as each node broadcasting each incident edge exactly once.
%For the broadcast case, \Cref{thm:translation}(b) gives us a round complexity of $\tilde O(n\Delta / k)$.
%\begin{corollary} \label{cor:triangle}
Thus, we have $\cT_{1/n}^k(\tri) \in \tilde
O(\min(n\Delta^2 / k^2 + \Delta^2 / k,n^{7/3}/k^2 + n/k))$.
%Moreover, for problem \subiso, it holds that $\cT_{1/n,1/n}^k(\subiso) \in
%\tilde O(n^{2 + (d-2)/d} / k^2 + n/k)$.
%\end{corollary}
}


%We can use the algorithm of Ghaffari and Kuhn \cite{GhaffariKuhn13} to 

%To do. Either use Kuhn et al's algorithm or refinement sampling. Question: Where should we talk about cut sparsification?


%\paragraph{Spanner?}
}
\endinput


