\section{Tight Lower Bounds for Spanning Tree Computation}
\label{sec:lower bound computation}


In this section, we show that any $\epsilon$-error algorithm takes $\Omega(n/k)$ rounds to \emph{compute a spanning tree} (\stcomp), and thus also to compute an \mst.
%
Recall that in the \stcomp problem, for every edge $uv$, the home machines of $u$ and $v$ must know whether $uv$ is in the \st or not.

\begin{theorem}[Lower bound for \stcomp]\label{theorem:st_lower_bound}
Every public-coin $\epsilon$-error randomized protocol on a $k$-machine  network that computes a spanning tree of an $n$-node input graph has an expected round complexity of $\Omega\left(\frac{n}{k}\right)$.
More specifically, there exists a constant $\epsilon>0$ such that, for any
$k\ge 3$ and large enough $n$, $ \cT^k_{\epsilon}(\stcomp), \cT^k_{\epsilon}(\mst),\cT^k_{\epsilon}(\bfs), \cT^k_{\epsilon}(\spt) \in \Omega\left(\frac{n}{k}\right).$
\end{theorem}

\begin{proof}
We first show the theorem for $\cT^k_{\epsilon}(\stcomp)$ using an information theoretic argument.
Assume for a contradiction that there exists a distributed algorithm $\cR$ in the $k$-machine model that violates Theorem~\ref{theorem:st_lower_bound}; that is, $\cR$ solves \stcomp correctly with probability at least $1-\epsilon$ and always terminates in $\frac{\delta n}{k}$ rounds, for some $\delta \in o(1)$.
We will show that the information flow to at least one machine must be large.



\paragraph{Graph $G_b(X, Y)$} Let $b=n-2$. For any $X,Y\subseteq [b]$, we construct the following graph, denoted by $G_b(X, Y)$.
Each vertex of $G_b(X,Y)$ (together with its incident edges) will be assigned to a machine chosen uniformly at random.


%
$G_b(X, Y)$ consists of $b+2$ nodes, denoted by $v_1, \ldots, v_b$ and $u, w$. For each $1\leq i\leq b$, we add edge $uv_i$ to $G_b(X, Y)$ if $i\in X$, and we add edge $v_iw$ to $G_b(X, Y)$ if $i\in Y$.
%

The random strings $X,Y$ will be drawn from a distribution under which $G_b(X,Y)$ is connected with overwhelming probability. Furthermore, the graph will contain roughly $4b/3$ edges with high probability (roughly $2b/3$ incident to each of $u$ and $w$); since a spanning tree has exactly $b+1$ edges, roughly $b/3$ edges must be omitted to obtain one.

To produce the correct output in $\cR$, the machine that receives $u$ must know which edges $uv_i$ are included in the spanning tree, and similarly the machine that receives $w$ must know which edges $v_iw$ are included.
Since $X,Y$ are encoded in the graph, the machine $p_1$ that receives $u$ knows $X$ from the start via the edges $uv_i$, but initially has limited information about $Y$, unless it also receives $w$, which happens with probability $1/k$. With probability $1-1/k$, the initial information of $p_1$ about $Y$ comes only from the set of vertices $v_i$ held by $p_1$, which has size $\approx b/k$ with high probability, giving $p_1$ at most $\approx b/k$ bits of information about $Y$. Hence, almost all information needed to decide which edges not to include must reach $p_1$ via communication with the other $k-1$ machines. We show that this communication is $\Omega(b)$ whenever $p_1$ outputs at most $b/2$ edges of the spanning tree (a symmetric argument applies to the machine $p_2$ that holds $w$). Hence the number of rounds is $\Omega(b/k) = \Omega(n/k)$.

In order to give a clean condition on which edges are necessary for a spanning tree, we use the following distribution on $X,Y$:
viewed as characteristic vectors, $(X,Y)$ is chosen uniformly from $\{0,1\}^b\times\{0,1\}^b$ subject to the condition that $X_i+Y_i\geq 1$ for every $i\in[b]$. Hence there are exactly $3^b$ possible values for $(X,Y)$.
The following simple observation is crucial to our proof.

\begin{observation}\label{observation:STinfo}
For any $X, Y$ chosen as described above with $X\cap Y\neq\emptyset$, any spanning tree of $G_b(X,Y)$ contains, for each $1\leq i\leq b$ with $X_i=Y_i=1$, exactly one of the edges $uv_i$ and $v_iw$, except for exactly one such $i$, for which both edges are in the spanning tree. For every other $i$, the one edge among $uv_i$ and $v_iw$ that is present in the graph must also be in the spanning tree.
\end{observation}
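As a sanity check (not part of the proof), the counting facts underlying Observation~\ref{observation:STinfo} can be verified by brute force for small $b$: under the above distribution, $G_b(X,Y)$ has exactly $b+|X\cap Y|$ edges and is connected precisely when $X\cap Y\neq\emptyset$, so a spanning tree (with its $b+1$ edges) must drop one of the two edges at all but one index in $X\cap Y$. The following sketch (illustrative only) represents $X,Y$ as subsets of $[b]$:

```python
from itertools import product

def edges(b, X, Y):
    """Edge list of G_b(X, Y): u-v_i for i in X, v_i-w for i in Y."""
    return [("u", i) for i in range(1, b + 1) if i in X] + \
           [(i, "w") for i in range(1, b + 1) if i in Y]

def connected(b, E):
    """Union-find connectivity test on the nodes {u, w, v_1, ..., v_b}."""
    parent = {v: v for v in ["u", "w"] + list(range(1, b + 1))}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for a, c in E:
        parent[find(a)] = find(c)
    return len({find(v) for v in parent}) == 1

b, count = 5, 0
# Enumerate all (X, Y) with X_i + Y_i >= 1 for every i: exactly 3^b pairs.
for bits in product([(1, 0), (0, 1), (1, 1)], repeat=b):
    X = {i + 1 for i, (xi, _) in enumerate(bits) if xi}
    Y = {i + 1 for i, (_, yi) in enumerate(bits) if yi}
    s = len(X & Y)
    E = edges(b, X, Y)
    assert len(E) == b + s              # b + |X ∩ Y| edges in total
    assert connected(b, E) == (s >= 1)  # connected iff X ∩ Y is nonempty
    count += 1
assert count == 3**b
```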

%If we denote $s=|X\cap Y|$, then the graph contains exactly $b+s$ edges. A spanning tree, however, must contain exactly $b+1$ edges. This means that for all $i$ with $X_i=Y_i=1$ one of the two edges adjacent to $v_i$ must be deleted, the other included. We call such $i$ and the edges involved {\em contested}. Who outputs how many and which edges can depend on the randomness used by the algorithm, as well as on the input.

Since all edges are incident to either $u$ or $w$, and any spanning tree must have exactly $b+1$ edges, one of $p_1$ or $p_2$ must output at most $b/2$ edges (up to rounding).
Before the first round of communication the entropy $H(Y|X)$ is $2b/3$ by the following calculation:\onlyLong{
\begin{eqnarray*}
H(Y|X)&=&\sum_x\mbox{Pr}(X=x)\cdot H(Y|X=x)\\
&=&3^{-b}\sum_{\ell=0}^b{b\choose \ell}2^\ell\cdot\log 2^\ell\\
&=&3^{-b}b\sum_{\ell=0}^{b-1}{b-1\choose \ell}2^{\ell+1}\\
&=&2b/3.
\end{eqnarray*}
}
\onlyShort{
%\begin{eqnarray*}
$H(Y|X)=\sum_x\mbox{Pr}(X=x)\cdot H(Y|X=x)
=3^{-b}\sum_{\ell=0}^b{b\choose \ell}2^\ell\cdot\log 2^\ell
=3^{-b}b\sum_{\ell=0}^{b-1}{b-1\choose \ell}2^{\ell+1}
=2b/3.$
%\end{eqnarray*}
}
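The closed form $H(Y|X)=2b/3$ can also be confirmed numerically by evaluating the sum $3^{-b}\sum_{\ell}\binom{b}{\ell}2^{\ell}\,\ell$ directly; the following quick check (illustrative only) does so for a few values of $b$:

```python
from math import comb

def H_Y_given_X(b):
    # H(Y|X) = 3^{-b} * sum_l C(b, l) * 2^l * l :
    # an X with l ones occurs with probability C(b, l) 2^l / 3^b and leaves
    # Y uniform over 2^l consistent strings, contributing l bits of entropy.
    return sum(comb(b, l) * 2**l * l for l in range(b + 1)) / 3**b

for b in (3, 9, 30, 60):
    assert abs(H_Y_given_X(b) - 2 * b / 3) < 1e-9
```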

Besides $X$ the machine $p_1$ also knows some vertices $v_i$ and their edges, giving it access to some bits of $Y$.
It is easy to see via a Chernoff bound that, with very high probability, $p_1$ initially knows at most $(1+\zeta)b/k$ bits of $Y$, where $\zeta=0.01$; conditioning on these bits lowers the entropy of $Y$ to no less than $2b/3-(1+\zeta) b/k$. The event that $p_1$ knows more bits can influence the entropy by at most $2^{-\zeta^2 b/(3k)}\cdot b=o(1)$ (for large enough $b$).
Hence the entropy of $Y$ given the initial information of $p_1$, which we denote by the random variable $A$, satisfies $H(Y|A)\geq 2b/3-(1+\zeta)b/k-o(1)$.

Assume that $p_1$ outputs at most $b/2$ edges. Either this event or the corresponding event for $p_2$ must happen with probability at least $1-\epsilon$, given that the failure probability is at most $\epsilon$.
  Conditioned on the event that $p_1$ outputs $b/2$ or fewer edges, we can bound the entropy $H(Y|A,T_0)$, where $T_0$ denotes the random variable containing the transcript of all messages received by $p_1$.
  We have $H(Y|A,T_0)\leq H(Y|X,E)$, where $E$ is the random variable of the edges in the output of $p_1$ (here we use that $X$ and $E$ can be computed from $A,T_0$). Given $X=x$, there are $|x|$ possible edges for $E$.
  With probability $1-o(1)$ we have $|Y|<2b/3+\zeta b$.



   Given that $E$ contains at most $b/2$ edges, there are at most
   \onlyLong{ \[\sum_{\ell<b/6+\zeta b} {b/2\choose \ell}\leq b\cdot{b/2\choose b/6+\zeta b} \]
   }
   \onlyShort{
$\sum_{\ell<b/6+\zeta b} {b/2\choose \ell}\leq b\cdot{b/2\choose b/6+\zeta b}$
   }
   possibilities for $Y=y$ with $|y|\leq 2b/3+\zeta b$.
   Hence we can estimate the remaining entropy of $Y$ as follows:
   \onlyLong{
   \begin{eqnarray*}
   H(Y|X,E)&\leq&\mbox{Pr}(|Y|<2b/3+\zeta b)(\log {b/2\choose b/6+\zeta b}\\ &&+\log b)+o(1)\\
   &\leq& H(1/3+2\zeta)b/2+o(b)
      \end{eqnarray*}
    }
    \onlyShort{
   $ H(Y|X,E)\leq\mbox{Pr}(|Y|<2b/3+\zeta b)(\log {b/2\choose b/6+\zeta b}+\log b)+o(1)
   \leq H(1/3+2\zeta)b/2+o(b)$
 }
   for the binary entropy function $H$. For $k\geq 7$ we can simply use the upper bound $b/2$ for the above quantity and
   conclude that $I(T_0:Y|A)=H(Y|A)-H(Y|A,T_0)\geq 2b/3-(1+\zeta)b/k-o(1)-b/2=
   \Omega(b)$, and hence $p_1$ must have received messages of total length $\Omega(b)$.
   This happens with some probability $\gamma$ for $p_1$; with probability at least $(1-\epsilon)-\gamma$, the corresponding event happens for $p_2$. Hence, denoting by $T_1$ the transcript of all messages received by $p_2$, we get
   $|T_0|+|T_1|\geq \Omega(b)$ in expectation.


The above analysis assumes that different machines hold $u$ and $w$, which happens with probability $1-1/k$.
Without this assumption, the expected information flow is still at least $(1-1/k)\cdot \Omega(b)=\Omega(b)$.
\onlyLong{

For $k<7$ a more careful analysis is needed. For instance, $p_1$ actually gains only around $2b/(3k)$ bits of information from knowing $b/k$ bits of $Y$, and the estimate on $H(Y|X,E)$ needs to be made more precise. We omit the details.
}%
This completes the proof of $\cT^k_{\epsilon}(\stcomp) = \Omega\left(\frac{n}{k}\right)$.

To see that this also implies $\cT^k_{\epsilon}(\mst),\cT^k_{\epsilon}(\bfs),\cT^k_{\epsilon}(\spt) \in
\Omega\left(\frac{n}{k}\right)$, it is sufficient to observe that any BFS tree
(resp.\ \mst, and \spt, for any approximation ratio) is also a spanning tree.
\end{proof}


\section{Lower Bounds for Verification Problems}\label{sec:lower bound verification}

In this section, we show lower bounds for the spanning tree (\st) and connectivity (\conn) verification problems.
An algorithm solves the \conn verification problem in our model if the machines output $1$ if and only if the input graph $G$ is connected; the \st problem is defined similarly.
We note that our lower bounds hold even when we allow {\em shared randomness}, i.e. even when all machines can read the same random string.
For a problem $\cP$ where a two-sided error is possible, we define the {\em time complexity of solving $\cP$ with $(\epsilon_0,\epsilon_1)$ error probabilities}, denoted by $\cT^k_{\epsilon_0,\epsilon_1}(\cP)$, to be the minimum $T(n)$ such that there exists a protocol that solves $\cP$, terminates in $T(n)$ rounds, and errs on $0$-inputs with probability at most $\epsilon_0$ and on $1$-inputs with probability at most $\epsilon_1$.

\begin{theorem}[\st verification and \conn]\label{theorem:conn_lower_bound}
There exists a constant $\epsilon>0$ such that, for any $k\geq 2$ and large enough $n$,
$\cT^k_{0, \epsilon}(\st)=\tilde \Omega\left(\frac{n}{k^2}\right) ~~~\mbox{and}~~~ \cT^k_{\epsilon, \epsilon}(\conn)=\tilde\Omega\left(\frac{n}{k^2}\right)\,.$
In other words, there is no public-coin $(0, \epsilon)$-error randomized protocol in the $k$-machine model that, on any $n$-node input graph, solves \st correctly in $o(\frac{n}{k^2\log n})$ expected rounds, and no $(\epsilon, \epsilon)$-error protocol that solves \conn correctly in $o(\frac{n}{k^2\log n})$ rounds.
\end{theorem}


To prove Theorem~\ref{theorem:conn_lower_bound}, we introduce a new model called {\em random-partition (two-party) communication complexity} and prove some lower bounds in this model. This is done in \Cref{sec:communication complexity}. We then use these lower bounds to show lower bounds for the $k$-machine model in \Cref{sec:lower bound for ST,sec:lower bound for CONN}.



\subsection{Random-Partition Communication Complexity}\label{sec:communication complexity}

We first recall the standard communication complexity model, which we will call {\em worst-partition} model, to distinguish it from our random-partition model. (For a comprehensive review of the subject, we refer the reader to \cite{KNbook}.)

In the {\bf worst-partition} communication complexity model, there are two players called Alice and Bob. Each player receives a $b$-bit binary string, for some integer $b\geq 1$. We denote the string received by Alice and Bob by  $x$ and $y$ respectively. Together, they both want to compute $f(x, y)$ for a Boolean function $f: \{0,1\}^b\times \{0,1\}^b \rightarrow \{0, 1\}$. At the end of the process, we want both Alice and Bob to know the value of $f(x, y)$.
%
We are interested in the number of bits exchanged between Alice and Bob in order to compute $f$. We say that a protocol $\cR$ has {\em complexity $t$} if it always uses at most $t$ bits in total\footnote{We emphasize that $\cR$ may incur at most $t$ bits of communication regardless of the input and the random choices made by the protocol. We note the standard fact that one can alternatively define the complexity $t$ as the {\em expected} number of bits; the two notions are equivalent up to a constant factor.}. For any function $f$, the {\em worst-partition communication complexity} of computing $f$ with $(\epsilon_0, \epsilon_1)$-error, denoted by $R_{\epsilon_0, \epsilon_1}^{cc-pub}(f)$, is the minimum $t$ such that there is an $(\epsilon_0, \epsilon_1)$-error protocol with complexity $t$. (A protocol is $(\epsilon_0, \epsilon_1)$-error if it outputs $0$ with probability at least $1-\epsilon_0$ when $f(x, y)=0$ and outputs $1$ with probability at least $1-\epsilon_1$ when $f(x, y)=1$.)


The {\bf random-partition} model differs from the worst-partition model in that, instead of giving every bit of $x$ to Alice and every bit of $y$ to Bob, each of these bits is sent to one of the players {\em at random}. More precisely, let $x_i$ be the $i^{th}$ bit of $x$ and $y_i$ the $i^{th}$ bit of $y$. For any pair of input strings $(x, y)$, the input is partitioned by revealing the value of each $x_i$ and each $y_i$ (via a message of the form ``$x_i=0$'' or ``$x_i=1$'') to a player chosen uniformly at random.
%
As before, we say that a protocol $\cR$ has {\em complexity $t$} if it always uses at most $t$ bits in total, {\em regardless of the input and its partition}. We note that the error probability of a protocol $\cR$ is calculated over all random choices made by the algorithm {\em and all possible random partitions}; e.g., it is possible that an $(\epsilon_0, \epsilon_1)$-error protocol never outputs the correct answer for some particular input partition. Also note that, while the input partition is chosen randomly, the input pair itself is chosen {\em adversarially}.
%
In other words, an $(\epsilon_0, \epsilon_1)$-error protocol must, for any input $(x, y)$, output $0$ with probability at least $1-\epsilon_0$ when $f(x, y)=0$ and output $1$ with probability at least $1-\epsilon_1$ when $f(x, y)=1$, where the probability is over all possible random strings given to the protocol and the random partition.
%
For any function $f$, the {\em random-partition communication complexity} for computing $f$ with $(\epsilon_0, \epsilon_1)$-error, denoted by $R_{\epsilon_0, \epsilon_1}^{rcc-pub}(f)$, is the minimum $t$ such that there is an  $(\epsilon_0, \epsilon_1)$-error protocol with complexity $t$.

The problems of interest to us are {\em equality} and {\em disjointness}. In the rest of \Cref{sec:communication complexity}, we show that the communication complexity of each of these problems is essentially the same in the worst-partition and random-partition models. The techniques used to prove these results differ between the two problems and might be of independent interest.

\subsubsection{\st Verification and Random-Partition Communication Complexity of Eq.}\label{sec:lower bound for ST}

The equality function, denoted by $\eq$, is defined as $\eq(x, y)=1$ if $x=y$ and $\eq(x, y)=0$ otherwise. Note that this problem can be solved by the fingerprinting technique which makes a small error only when $x\neq y$, i.e. $R_{\epsilon,0}^{cc-pub}(\eq)=O(\log b)$ (see, e.g. \cite{KNbook}). Interestingly, if we ``switch'' the error side, the problem becomes hard: $R_{0,\epsilon}^{cc-pub}(\eq)=\Omega(b)$. We show that this phenomenon remains true in the random-partition setting.
%\onlyLong{\Cref{lem:complexity of EQ} is proved in \Cref{sec:proof of EQ}.}
\onlyShort{Lemma~\ref{lem:complexity of EQ} is proved in the full paper.}
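For intuition, the easy direction $R_{\epsilon,0}^{cc-pub}(\eq)$ can be sketched as a public-coin inner-product protocol: Alice sends $t$ parity bits $\langle x, r_j\rangle \bmod 2$ for shared random vectors $r_j$, and Bob compares them against $\langle y, r_j\rangle$; the protocol never errs when $x=y$ and errs with probability $2^{-t}$ otherwise. A minimal sketch (function names are illustrative, not from the paper):

```python
import random

def eq_protocol(x, y, t=40, seed=0):
    """Public-coin EQ: Alice sends t inner-product parity bits; Bob compares."""
    b = len(x)
    rng = random.Random(seed)  # stands in for the shared random string
    rs = [[rng.randrange(2) for _ in range(b)] for _ in range(t)]
    alice = [sum(xi & ri for xi, ri in zip(x, r)) % 2 for r in rs]  # t bits sent
    bob = [sum(yi & ri for yi, ri in zip(y, r)) % 2 for r in rs]
    return 1 if alice == bob else 0  # output 1 means "x = y"

# Never errs on 1-inputs (x = y); errs with probability 2^-t on 0-inputs.
assert eq_protocol([1, 0, 1, 1], [1, 0, 1, 1]) == 1
```

Note the one-sided error lands on $0$-inputs, matching $R_{\epsilon,0}$; Lemma~\ref{lem:complexity of EQ} shows that the opposite error side is exponentially more expensive.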

\begin{lemma}[Random-Partition Equality]\label{lem:complexity of EQ}
For some $\epsilon>0$, $R_{0,\epsilon}^{rcc-pub}(\eq)=\Omega(b)$. This lower bound holds even when Alice knows $x$ and Bob knows $y$.
\end{lemma}
\onlyLong{\input{proof_of_eq}}


\paragraph{Lower bound for $\st$ verification}
We now show a lower bound of $\tilde \Omega(n/k^2)$ on $(0, \epsilon)$-error algorithms for \st verification. For the $b$-bit string inputs $x$ and $y$ of the equality problem, we construct the following graph $G(x,y)$: The nodes are $u_0, \ldots, u_b$ and $v_0, \ldots, v_b$. For any $i$, there is an edge between $u_0$ and $u_i$ if and only if $x_i=1$, and there is an edge between $v_0$ and $v_i$ if and only if $y_i=0$. Additionally, there is always an edge between $u_j$ and $v_j$, for $0 \le j \le b$. Observe that $G(x,y)$ is a spanning tree if and only if $x=y$.
Also note that $G(x, y)$ has $n=\Theta(b)$ nodes.
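The equivalence ``$G(x,y)$ is a spanning tree iff $x=y$'' is easy to verify exhaustively for small $b$; a brief sketch (illustrative only, using the fact that an acyclic graph with $|V|-1$ edges is a spanning tree) follows:

```python
from itertools import product

def st_graph(x, y):
    """G(x, y): rungs u_j-v_j always; u_0-u_i iff x_i = 1; v_0-v_i iff y_i = 0."""
    b = len(x)
    E = [(("u", j), ("v", j)) for j in range(b + 1)]
    E += [(("u", 0), ("u", i)) for i in range(1, b + 1) if x[i - 1] == 1]
    E += [(("v", 0), ("v", i)) for i in range(1, b + 1) if y[i - 1] == 0]
    return E

def is_spanning_tree(nodes, E):
    if len(E) != len(nodes) - 1:
        return False
    parent = {v: v for v in nodes}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, c in E:
        ra, rc = find(a), find(c)
        if ra == rc:
            return False        # edge closes a cycle
        parent[ra] = rc
    return True                 # acyclic with |V| - 1 edges: a spanning tree

b = 4
nodes = [(s, j) for s in ("u", "v") for j in range(b + 1)]
for x in product([0, 1], repeat=b):
    for y in product([0, 1], repeat=b):
        assert is_spanning_tree(nodes, st_graph(x, y)) == (x == y)
```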


Now assume that there is a $(0, \epsilon)$-error algorithm $\cR$ in the $k$-machine model that finishes in $\tilde o(n/k^2)$ rounds. Alice and Bob simulate $\cR$ as follows. Let $p_1, \ldots, p_k$ be machines in the $k$-machine model, and assume that $k$ is even. First, Alice and Bob generate a random partition of nodes in $G(x, y)$ using the random partition of input $(x, y)$ and shared randomness. Using shared randomness, they decide which machine the nodes $u_0$ and $v_0$ should belong to. Without loss of generality, we can assume that $u_0$ belongs to $p_1$. Moreover, with probability $1-1/k$, $v_0$ is not in $p_1$, and we can assume that $v_0$ is in $p_2$ without loss of generality. (If $u_0$ and $v_0$ are in the same machine then Alice and Bob stop the simulation and output $0$ (i.e. $x\neq y$).)
%
Alice will simulate the machines in $P_A=\{p_1, p_3, p_5, \ldots, p_{k-1}\}$ and Bob will simulate the machines in $P_B=\{p_2, p_4, \ldots, p_k\}$. This means that Alice (respectively Bob) can read and write any information on the machines in $P_A$ (respectively $P_B$) at no cost. At this point, Alice assigns node $u_0$ to $p_1$; i.e., she tells $p_1$ whether $u_i$ has an edge to $u_0$ or not, for all $i$ (this can be done since she knows $x$). Similarly, Bob assigns node $v_0$ to $p_2$. (Note that this requires Alice to know $x$ and Bob to know $y$ in addition to the random partition of $x$ and $y$; Lemma~\ref{lem:complexity of EQ} is stated to allow exactly this.)
%
Next, they put every remaining node on a random machine. For any $i$, if Alice receives $x_i$, then she assigns $u_i$ to a random machine in $P_A$, i.e., she tells that machine whether $u_i$ has an edge to $u_0$ or not. Otherwise, Bob assigns $u_i$ to a random machine in $P_B$ in the same way. Since each $x_i$ and $y_i$ belongs to Alice with probability $1/2$, each $u_i$ and $v_i$ is thus assigned to a uniformly random machine.
%
Similarly, node $v_i$ is assigned to a random machine depending on who receives $y_i$. Note that both Alice and Bob know which machine each node is assigned to, since they use shared randomness.
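The claim that each node lands on a uniformly random machine can be checked exactly: a bit goes to Alice or Bob with probability $1/2$, and the owner places the node uniformly among its $k/2$ machines, so each machine is hit with probability $\frac{1}{2}\cdot\frac{2}{k}=\frac{1}{k}$. A small exact-arithmetic sketch (assuming $k$ even, with the $P_A,P_B$ split above):

```python
from fractions import Fraction

k = 8
P_A = list(range(1, k, 2))      # p_1, p_3, ..., p_{k-1}: simulated by Alice
P_B = list(range(2, k + 1, 2))  # p_2, p_4, ..., p_k:     simulated by Bob

# A bit x_i lands at Alice or Bob with probability 1/2; the owner then
# places u_i on a uniformly chosen machine of its own half.
prob = {m: Fraction(0) for m in range(1, k + 1)}
for half in (P_A, P_B):
    for m in half:
        prob[m] += Fraction(1, 2) / len(half)

assert all(p == Fraction(1, k) for p in prob.values())  # uniform over machines
```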

Now Alice and Bob simulate $\cR$, where Alice simulates $\cR$ on the machines in $P_A$ and Bob on the machines in $P_B$. To keep the simulation going, they send a message to each other whenever a machine in $P_A$ communicates with a machine in $P_B$. This requires communicating $\tilde O(k^2)$ bits per round of $\cR$. Since $\cR$ finishes in $\tilde o(n/k^2)$ rounds, Alice and Bob communicate $\tilde o(n)=\tilde o(b)$ bits in total. Once they know whether $G(x,y)$ is an \st or not, they can answer whether $x=y$. Since $\cR$ is $(0, \epsilon)$-error, Alice and Bob's protocol is $(0, \epsilon+1/k)$-error, where the extra $1/k$ term accounts for answering $0$ when $u_0$ and $v_0$ land on the same machine. For large enough $k$, this error is smaller than the error in Lemma~\ref{lem:complexity of EQ}, contradicting that lemma. Hence no such algorithm $\cR$ exists.

\subsubsection{\conn and Random-Partition Communication Complexity of Disjointness}\label{sec:disj}\label{sec:lower bound for CONN}

The disjointness function, denoted by $\disj$, is defined as $\disj(x, y)=1$ if there is no index $i$ such that $x_i=y_i=1$ (i.e., the sets whose characteristic vectors are $x$ and $y$ are disjoint), and $\disj(x, y)=0$ otherwise. In the worst-partition model, this is a fundamental problem in communication complexity with numerous applications (see, e.g., \cite{setdisj-survey}). Through a series of results (e.g. \cite{BabaiFS86,Bar-YossefJKS04,BravermanGPW13,KalyanasundaramS92,Razborov92}), it is known that $R_{\epsilon, \epsilon}^{cc-pub}(\disj)=\Omega(b)$. By adapting Razborov's proof \cite{Razborov92}, we show that this lower bound remains true in the random-partition setting.
\onlyLong{Lemma~\ref{lem:complexity of DISJ} is proved in \Cref{sec:proof of DISJ}.}
\begin{lemma}[Random-Partition Disjointness]\label{lem:complexity of DISJ}
For some $\epsilon>0$, $R_{\epsilon, \epsilon}^{rcc-pub}(\disj)=\Omega(b)$. This lower bound holds even when Alice knows $x$ and Bob knows $y$.
\end{lemma}

\paragraph{Lower bound for $\conn$ verification}
We now show a lower bound of $\tilde \Omega(n/k^2)$ on $(\epsilon, \epsilon)$-error algorithms for $\conn$ verification, for a small enough constant $\epsilon>0$. For the $b$-bit string inputs $x$ and $y$ of the disjointness problem, we construct the following graph $G(x,y)$: The nodes are $u_0, \ldots, u_b$ and $v_0, \ldots, v_b$. For any $i$, there is an edge between $u_0$ and $u_i$ if and only if $x_i=0$, and there is an edge between $v_0$ and $v_i$ if and only if $y_i=0$. Additionally, there is always an edge between $u_j$ and $v_j$, for $0 \le j \le b$. Observe that $G(x,y)$ is connected if and only if $x$ and $y$ are disjoint.
Also note that $G(x, y)$ has $n=\Theta(b)$ nodes.
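As with \st verification, the equivalence ``$G(x,y)$ is connected iff $x$ and $y$ are disjoint'' can be verified exhaustively for small $b$; a brief illustrative sketch:

```python
from itertools import product

def conn_graph(x, y):
    """G(x, y): rungs u_j-v_j always; u_0-u_i iff x_i = 0; v_0-v_i iff y_i = 0."""
    b = len(x)
    E = [(("u", j), ("v", j)) for j in range(b + 1)]
    E += [(("u", 0), ("u", i)) for i in range(1, b + 1) if x[i - 1] == 0]
    E += [(("v", 0), ("v", i)) for i in range(1, b + 1) if y[i - 1] == 0]
    return E

def is_connected(nodes, E):
    parent = {v: v for v in nodes}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, c in E:
        parent[find(a)] = find(c)
    return len({find(v) for v in nodes}) == 1

b = 4
nodes = [(s, j) for s in ("u", "v") for j in range(b + 1)]
for x in product([0, 1], repeat=b):
    for y in product([0, 1], repeat=b):
        disjoint = all(not (xi and yi) for xi, yi in zip(x, y))
        assert is_connected(nodes, conn_graph(x, y)) == disjoint
```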
%
Assume that there is an $(\epsilon, \epsilon)$-error algorithm $\cR$ in the $k$-machine model that finishes in $\tilde o(n/k^2)$ rounds. Alice and Bob simulate $\cR$ as follows. Let $p_1, \ldots, p_k$ be machines in the $k$-machine model, and assume that $k$ is even. First, Alice and Bob generate a random partition of nodes in $G(x, y)$ using the random partition of input $(x, y)$ and shared randomness. Using shared randomness, they decide which machine the nodes $u_0$ and $v_0$ should belong to. Without loss of generality, we can assume that $u_0$ belongs to $p_1$. Moreover, with probability $1-1/k$, $v_0$ is not in $p_1$, and we can assume that $v_0$ is in $p_2$ without loss of generality. (If $u_0$ and $v_0$ are in the same machine then Alice and Bob stop the simulation and output $0$.)
%
Alice will simulate the machines in $P_A=\{p_1, p_3, p_5, \ldots, p_{k-1}\}$ and Bob will simulate the machines in $P_B=\{p_2, p_4, \ldots, p_k\}$. This means that Alice (respectively Bob) can read and write any information on the machines in $P_A$ (respectively $P_B$) at no cost. At this point, Alice assigns node $u_0$ to $p_1$; i.e., she tells $p_1$ whether $u_i$ has an edge to $u_0$ or not, for all $i$ (this can be done since she knows $x$). Similarly, Bob assigns node $v_0$ to $p_2$.
%
Next, they put every remaining node on a random machine. For any $i$, if Alice receives $x_i$, then she assigns $u_i$ to a random machine in $P_A$, i.e., she tells that machine whether $u_i$ has an edge to $u_0$ or not. Otherwise, Bob assigns $u_i$ to a random machine in $P_B$ in the same way. Since each $x_i$ and $y_i$ belongs to Alice with probability $1/2$, each $u_i$ and $v_i$ is thus assigned to a uniformly random machine.
%
Similarly, node $v_i$ is assigned to a random machine depending on who receives $y_i$. Note that both Alice and Bob know which machine each node is assigned to, since they use shared randomness.

Now Alice and Bob simulate $\cR$, where Alice simulates $\cR$ on the machines in $P_A$ and Bob on the machines in $P_B$. To keep the simulation going, they send a message to each other whenever a machine in $P_A$ communicates with a machine in $P_B$. This requires communicating $\tilde O(k^2)$ bits per round of $\cR$. Since $\cR$ finishes in $\tilde o(n/k^2)$ rounds, Alice and Bob communicate $\tilde o(n)=\tilde o(b)$ bits in total. Once they know whether $G(x,y)$ is connected or not, they can answer whether $x$ and $y$ are disjoint. Since $\cR$ is $(\epsilon, \epsilon)$-error, Alice and Bob's protocol is $(\epsilon, \epsilon+1/k)$-error, where the extra $1/k$ term accounts for answering $0$ when $u_0$ and $v_0$ land on the same machine. For large enough $k$, this error is smaller than the error in Lemma~\ref{lem:complexity of DISJ}, contradicting that lemma. Hence no such algorithm $\cR$ exists.














\endinput


\begin{definition}[Communication Complexity of Unique Disjointness; $R_\epsilon(\udisj_b)$] There are two parties, called by Alice and Bob.
%
%For a parameter $b$, Alice and Bob are given a $b$-bit binary string, denoted by $x$ and $y$, respectively. We let $x[i]$ and $y[i]$ be the $i^{th}$ It is promised that $\langle x, y\rangle$ is either $0$ or $1$, where $\langle x, y\rangle = \sum_{i=1}^b x[i]y[i]$
%
For a parameter $b$, Alice and Bob are given subset $X$ and $Y$ of $[b]$, respectively, where $[b]=\{1, \ldots, b\}$. It is promised that $|X\cap Y|$ is either $0$ or $1$. The goal of this problem is to find out the value of $|X\cap Y|$. We let the {\em communication complexity} of this problem, denoted by $R_\epsilon(\udisj_b)$ be the message complexity for solving this problem by an $\epsilon$-error protocol. \danupon{Make sure to define message complexity.}
%
%TO DO: Define $R(\udisj_b)$. Note that we always use $b$ to denote the number of bits in the input strings of this problem. Let bits of the input binary string $x$ be $x[1]\ldots x[n]$.
\end{definition}
%
Note that the above definition of communication complexity is not, but equivalent to, a standard one. For the standard definition, see \cite{KNbook}.

%When there are two parties involved in the problem, the communication complexity is equivalent to the message complexity. For a traditional definition of communication complexity, see \cite{KNbook}.}


%\begin{theorem}[\cite{KalyanasundaramS92,Razborov92,Bar-YossefJKS04}]\label{theorem:disj_lower_bound}
%There exists constants $\epsilon'>0$ and $\delta'>0$ such that, for any large enough $N$, there is no public-coin randomized protocol on the two-party communication complexity model that, with probability at least $1-\epsilon$, communicates at most $\delta' N$ bits and solves {\sc Disj} on $N$-bit input strings correctly.
%\end{theorem}


%*********************************************************************************************************
% Note: From Shertov's ``THE MULTIPARTY COMMUNICATION COMPLEXITY OF SET DISJOINTNESS'': All three proofs ()cited below) of the linear lower bound apply to unique set disjointness.
%*********************************************************************************************************

\begin{theorem}[Lower bound for $R_\epsilon(\udisj_b)$; \cite{KalyanasundaramS92,Razborov92,Bar-YossefJKS04}]\label{theorem:disj_lower_bound}
There are constants $\epsilon'>0$ and $\delta'>0$ such that, for any large enough $n$, $$R_{\epsilon'}(\udisj_b)\geq \delta' b.$$
In other words, there is no public-coin randomized protocol on the two-party communication complexity that, on any input sets $X, Y\subseteq [b]$, solves \udisj correctly in $\delta' b$ rounds with probability at least $1-\epsilon'$ (the probability above is over all shared random bits used by Alice and Bob).
\end{theorem}
%
%We note that by the very recent result of Braverman et al. \cite{BravermanGPW12}, the value of $\delta'$ in Theorem~\ref{theorem:disj_lower_bound} is a constant around $0.4827$.

Let $\epsilon=100\epsilon'$ and $\delta=\delta'/4$.\danupon{To figure out the correct value}
%
Assume for a contradiction that there exists a distributed algorithm in the $k$-machine model, denoted by $\cR$, that violates Theorem~\ref{theorem:conn_lower_bound}. In other words, $\cR$ solves \conn correctly with probability at least $1-\epsilon$ and always terminates in $\frac{\delta n}{k^2\log n}$ rounds. We now show that Alice and Bob can use $\cR$ to solve $\udisj_b$ efficiently; i.e., there is a protocol $\cR'$ that solves $\udisj_b$ correctly with probability at least $1-\epsilon'/n$ and always needs at most $\delta' b$ bits.




\paragraph{Graph $G_b(X, Y)$} For any $X,Y\subseteq [b]$, we construct the following graph, denoted by $G_b(X, Y)$. (For now, we construct $G_b(X, Y)$ by assuming that we know both $X$ and $Y$. We will show how Alice and Bob can construct $G_b(X,Y)$ based on their individual input $X$ and $Y$ later.)
%
$G_b(X, Y)$ consists of $2b+2$ nodes, denoted by $u_0, \ldots, u_b$ and $v_0, \ldots, v_b$. For each $1\leq i\leq b$, we add edge $u_0u_i$ to $G_b(X, Y)$ if $i\notin X$, and we add edge $v_0v_i$ to $G_b(X, Y)$ if $i\notin Y$. There is always an edge $u_iv_i$, for all $1\leq i\leq b$.
%
The following simple observation is crucial to our proof.

\begin{observation}\label{observation:conn_and_disj}
For any $X, Y\subseteq [b]$, $G_b(X, Y)$ is connected if and only if $|X\cap Y|=0$.
\end{observation}
%\begin{proof}TO DO. This should be in appendix or we can simply omit it.
%\end{proof}
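As a quick sanity check of the observation (this code is our own illustration, not part of the proof), one can enumerate all pairs of small sets and verify that $G_b(X,Y)$ is connected exactly when $X\cap Y=\emptyset$. Note that the hub edge $u_0v_0$ is included below; without it the equivalence fails on, e.g., $b=2$, $X=\{1\}$, $Y=\{2\}$:

```python
# Exhaustive check of Observation: G_b(X, Y) is connected iff X and Y
# are disjoint. Nodes are u0..ub and v0..vb.
from collections import deque
from itertools import combinations

def build_gb(b, X, Y):
    """Adjacency sets of G_b(X, Y)."""
    adj = {f"u{i}": set() for i in range(b + 1)}
    adj.update({f"v{i}": set() for i in range(b + 1)})
    def add(a, c):
        adj[a].add(c); adj[c].add(a)
    add("u0", "v0")                    # hub edge, always present
    for i in range(1, b + 1):
        add(f"u{i}", f"v{i}")          # u_i v_i always present
        if i not in X:
            add("u0", f"u{i}")         # u0 u_i iff i not in X
        if i not in Y:
            add("v0", f"v{i}")         # v0 v_i iff i not in Y
    return adj

def connected(adj):
    """BFS from an arbitrary node; connected iff all nodes are reached."""
    start = next(iter(adj))
    seen, q = {start}, deque([start])
    while q:
        for w in adj[q.popleft()] - seen:
            seen.add(w); q.append(w)
    return len(seen) == len(adj)

b = 4
subsets = [set(c) for r in range(b + 1)
           for c in combinations(range(1, b + 1), r)]
for X in subsets:
    for Y in subsets:
        assert connected(build_gb(b, X, Y)) == (len(X & Y) == 0)
```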



%*************************************************
% Below is the intuition of the proof through the worst-case partition case.
% I don't think it's necessary though.
%*************************************************
%\paragraph{Proof for Worst-Partition Case} To get an intuition behind our proof, we first explain the lower bound proof for the worst-partition case, which has been shown in, e.g. \cite{BabaiFS86,DasSarmaHKKNPPW11,MyQuantum2012}. First, observe that the lower bound for $\udisj_b$ in Theorem~\ref{theorem:disj_lower_bound} together with Observation~\ref{observation:conn_and_disj} immediately implies the lower bound for \conn for $k=2$ in the {\em worst-partition} case: Consider a partition where nodes $u_0, \ldots, u_b$ belong to one machine, say $p_1$, and the rest belong to the other, say $p_2$. Then, Alice and Bob can simulate $\cA$ on $p_1$ and $p_2$, respectively, as follows. Observe that, initially, Alice can generate edges incident to $u_0, \ldots, u_b$ since the existence of these edges depend solely on the value of $x$. Thus, Alice can generate the input for $p_1$ without communicating with Bob. Similarly, Bob can generate the input for $p_2$ without communicating with Alice. After generate the input, Alice and Bob can simulate $\cA$ and communicate with each other whenever $p_1$ and $p_2$ want to do so. When $\cA$ terminates, Alice and Bob can check the result of $\cA$ to answer whether $x$ and $y$ are disjoint (using Observation~\ref{observation:conn_and_disj}). Since Alice and Bob communicates exactly what $\cA$ needs, the lower bound of $\udisj_b$ applies to $\cA$ as well; i.e., $\cA$ needs to communicate $\Omega(n)$ bits. This translates into $\Omega(n/\log n)$-time lower bound since in each round $p_1$ and $p_2$ can exchange only $O(\log n)$ bits.

%To extend the above argument to the case of $k$ machines, we divide machines into two groups of size $k/2$ each, say $A$ and $B$. Alice and Bob will simulate machines in group $A$ and $B$, respectively. Nodes $u_0, \ldots, u_b$ (respectively, $v_0, \ldots, v_b$) are arbitrarily partitioned among machines in $A$ (respectively, $B$). Alice and Bob then simulate $\cA$ and communicate only when a machine in one group wants to send a message to a machine in another group. Using the same argument as above,

%*************************************************
% End of intuition
%*************************************************






%*************************************************
% New proof
%*************************************************

\paragraph{Simulation} Upon receiving input $(X, Y)$ for $\udisj_b$, Alice and Bob construct another input $(X', Y')$ by picking a subset $U'\subseteq [b]$ of size $\delta' b/4$ uniformly at random and setting $X'=X\cap U'$ and $Y'=Y\cap U'$. They can do this without communication, since they can use the shared randomness to pick $U'$. Alice and Bob then simulate $\cR$ on $k$ machines, using $G_b(X', Y')$ as the input.



The first thing they have to do is agree on how to partition the nodes of $G_b(X', Y')$ among the $k$ machines. They will use the following partition. Let $A=\{p_1, \ldots, p_{k/2}\}$ and $B=\{p_{1+k/2}, \ldots, p_{k}\}$. Let $\cS$ be the set of partitions such that
\begin{itemize}
\item $u_0$ (respectively, $v_0$) is assigned to a machine in $A$ (respectively, $B$), and
\item there are at most $\delta' b/4$ nodes in $\{u_1, \ldots, u_b\}$ (respectively, $\{v_1, \ldots, v_b\}$) assigned to machines in $B$ (respectively $A$).
\end{itemize}

Let $P$ be a random partition in $\cS$.
%
Note that Alice and Bob can agree on the random partition $P$ without communication by using the shared randomness, as follows.  Using the shared randomness, they separately create a random partition and separately check whether this partition is in $\cS$. If not, they again generate a new random partition. They repeat this process until a partition in $\cS$ is found.
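The rejection-sampling step above can be sketched as follows (our own illustration, with made-up toy parameters rather than the $\delta' b/4$ bound of the proof): both parties run the same deterministic sampler on the shared random bits, so they obtain the same partition $P\in\cS$ with zero communication.

```python
# Alice and Bob agree on a partition P in S using shared randomness only:
# both run this sampler with the same seed, rejecting partitions not in S.
import random

def sample_partition(seed, b, k, budget):
    rng = random.Random(seed)          # stands in for the shared random bits
    nodes = ["u0", "v0"] + [f"u{i}" for i in range(1, b + 1)] \
                         + [f"v{i}" for i in range(1, b + 1)]
    while True:
        P = {v: rng.randrange(k) for v in nodes}   # machine of each node
        in_A = lambda v: P[v] < k // 2             # machines p_1..p_{k/2}
        if not in_A("u0") or in_A("v0"):
            continue                               # need u0 in A, v0 in B
        u_wrong = sum(1 for i in range(1, b + 1) if not in_A(f"u{i}"))
        v_wrong = sum(1 for i in range(1, b + 1) if in_A(f"v{i}"))
        if u_wrong <= budget and v_wrong <= budget:
            return P                               # partition is in S

# Same seed on both sides => identical partitions, no bits exchanged.
PA = sample_partition(42, b=20, k=4, budget=12)
PB = sample_partition(42, b=20, k=4, budget=12)
assert PA == PB
```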

Next, Alice and Bob assign $G_b(X', Y')$ to the $k$ machines and start simulating $\cR$. In particular, Alice and Bob will simulate the machines in $A$ and $B$, respectively. To do this, Alice needs to know, for every edge incident to a node belonging to a machine in $A$, whether that edge is in $G_b(X', Y')$ or not. For all edges of the form $u_0u_i$, Alice knows whether the edge is in $G_b(X', Y')$, since she knows $X'$ (recall that $u_0u_i$ is in $G_b(X', Y')$ if and only if $i\notin X'$). She also knows whether an edge of the form $u_iv_j$ is in $G_b(X', Y')$ (such an edge exists only for $i=j$). Thus, the only thing she might not know is whether an edge of the form $v_0v_i$ is in $G_b(X', Y')$, for each $v_i$ belonging to a machine in $A$. Hence, for each node $v_i$ belonging to a machine in $A$, Bob has to tell Alice whether $i\in Y'$. Similarly, for each node $u_i$ belonging to a machine in $B$, Alice has to tell Bob whether $i\in X'$.

After Alice and Bob send the required input to the machines, they can simulate $\cR$ on these machines. Alice and Bob will simulate the machines in $A$ and $B$, respectively. Whenever a machine in $A$ (respectively, $B$) wants to send a message to a machine in $B$ (respectively, $A$), Alice (respectively, Bob) sends that message to Bob (respectively, Alice). After $\cR$ terminates, Alice and Bob know the answer of $\cR$ (i.e., whether $G_b(X', Y')$ is connected or not) by reading it from the machines belonging to them. Then, they declare that $|X'\cap Y'|=0$ if and only if $\cR$ says that $G_b(X', Y')$ is connected.
%
We now analyze this protocol.

\begin{lemma}
Alice and Bob exchange at most $\delta' b$ bits in total.
%$\delta' b/5+\delta(2b+2)\log n$ bits in total.
\danupon{Make sure that we allow exactly $\log n$ bits in each round (not $O(\log n)$).}
\end{lemma}
\begin{proof}
First, we bound the communication needed by Alice and Bob before they start simulating $\cR$, where Alice (respectively, Bob) has to send whether $i\in X'$ (respectively, $i\in Y'$) to Bob (respectively, Alice) for every node $u_i$ (respectively, $v_i$) belonging to a machine in $B$ (respectively, $A$). Since $P$ is a partition with at most $\delta' b/4$ such nodes $u_i$ (respectively, $v_i$), Alice and Bob have to communicate at most $\delta' b/2$ bits in this step. % (e.g., Alice sends a sequence of binary bits where the $j^{th}$ bit indicates whether the $j^{th}$ such node $u_i$ is in $X'$).
\danupon{Some obvious details are left out here.}

Secondly, we bound the communication needed by Alice and Bob to simulate $\cR$. Recall that, by assumption, $\cR$ terminates in $\frac{\delta n}{k^2\log n}$ rounds. To simulate one round of $\cR$, Alice has to forward the messages sent by machines in $A$ to machines in $B$; there are at most $(k/2)^2$ such machine pairs and each message has $\log n$ bits, so Alice sends at most $(k/2)^2\log n$ bits per round. The same bound holds for Bob. Thus, Alice and Bob send at most
%
$\frac{k^2 \log n}{2}\cdot \frac{\delta n}{k^2\log n} = \frac{\delta n}{2}\leq \frac{\delta' b}{2} $
%
bits in total in order to simulate $\cR$, where the last inequality uses the facts that $n=2b+2\leq 4b$ and $\delta=\delta'/4$, so that $\delta n \leq \frac{\delta'}{4}\cdot 4b = \delta' b$.

The lemma follows by combining the above two bounds together.
\end{proof}
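The arithmetic in the lemma can be checked numerically (our own sanity check, using $\log_2$ for concreteness; the logarithms cancel, so the base is immaterial): with $\delta=\delta'/4$ and $n=2b+2$, the simulation cost $\frac{k^2\log n}{2}\cdot\frac{\delta n}{k^2\log n}=\frac{\delta n}{2}$ plus the $\frac{\delta' b}{2}$ setup bits stays within $\delta' b$.

```python
# Numeric check of the total bit count in the lemma.
import math

def total_bits(b, k, delta_prime):
    n = 2 * b + 2
    delta = delta_prime / 4
    rounds = delta * n / (k ** 2 * math.log2(n))
    sim_bits = (k ** 2 * math.log2(n) / 2) * rounds   # = delta * n / 2
    setup_bits = delta_prime * b / 2                  # pre-simulation bits
    return sim_bits + setup_bits

for b in [100, 10_000]:
    for k in [4, 16]:
        dp = 0.4
        assert total_bits(b, k, dp) <= dp * b         # at most delta' * b
```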



\begin{lemma}
For any $X, Y\subseteq [b]$, Alice and Bob answer $\udisj_b$ on $(X,Y)$ correctly with probability at least $1-\epsilon'$.
\end{lemma}
\begin{proof}
Recall that, by Observation~\ref{observation:conn_and_disj}, Alice and Bob answer $\udisj_b$ on $(X, Y)$ correctly if and only if $\cR$ on a random partition $P\in \cS$ gives a correct answer to \conn on $G_b(X, Y)$. Denote the probability that the latter event happens by $p$. We claim that $p\geq 1-XXX\epsilon =1-\epsilon'$.

To see this, let $P'$ be a partition randomly picked from all possible partitions (not only partitions in $\cS$) and $p'$ be the probability that $\cR$ on $P'$ gives a correct answer to \conn on $G_b(X, Y)$. (The randomness comes from two sources: the randomness of the random partition $P'$ and the fact that $\cR$ is a randomized protocol.) By definition of $\cR$, $p'\geq 1-\epsilon$.
\begin{claim}
$\Pr[P'\in \cS] \geq 1/4$.
\end{claim}
\begin{proof}
We bound the probability that $P'$ satisfies the properties of partitions in $\cS$. Note that each node $u_i$ and $v_i$ is assigned to a machine in $A$ with probability exactly $1/2$. It follows immediately that the probability that $P'$ assigns $u_0$ and $v_0$ to machines in $A$ and $B$, respectively, is $1/4$. Moreover, by the Chernoff bound (see, e.g., \cite[Chapter~4.2.2]{MitzenmacherUpfalBook}), the probability that more than $b/2+\sqrt{6b\ln b}/2$ nodes from $\{u_1, \ldots, u_b\}$ are assigned to

Now we bound the probability that
\end{proof}

\end{proof}
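The first part of the claim's proof, that $u_0$ lands in $A$ and $v_0$ in $B$ with probability $1/4$, can be estimated empirically (our own Monte Carlo sketch; it checks only this hub condition, not the crossing-node conditions):

```python
# Monte Carlo estimate of Pr[u0 in A and v0 in B] under uniform
# assignment of each node to one of k machines, with |A| = k/2.
import random

rng = random.Random(0)
k, trials = 4, 100_000
hits = 0
for _ in range(trials):
    u0, v0 = rng.randrange(k), rng.randrange(k)
    if u0 < k // 2 and v0 >= k // 2:    # u0 in A, v0 in B
        hits += 1
estimate = hits / trials
assert abs(estimate - 0.25) < 0.01      # concentrates around 1/4
```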

\textbf{*** Peter: ignore the remainder of this sec for now.}
We now show that any $\eps$-error $k$-machine algorithm takes $\Omega(n /k )$ rounds to solve $\st$ (and thus also $\mst$) in the worst case.

\begin{theorem}[$\tilde \Omega(n/k)$ lower bound for \st]\label{theorem:st_lower_bound_alt}
Every public-coin $(\epsilon, 0)$-error randomized protocol on a $k$-machine  network that computes a spanning tree of an $n$-node input graph has an expected round complexity of $\tilde\Omega\left(\frac{n}{k}\right)$.
More specifically, there exists a constant $\epsilon>0$ such that, for any
$k\geq 3$ and large enough $n$, $\cT^k_{\epsilon, 0}(\st)=\tilde
\Omega\left(\frac{n}{k}\right).$
\end{theorem}

In the remainder of this section, we prove \Cref{theorem:st_lower_bound_alt}.
Our proof uses a reduction from the following communication complexity problem:

\begin{definition}[Promise-XOR; $R_\epsilon(\pxor_{n,k})$] There are three parties, Alice, Bob, and Carole.
We consider two $n$-bit vectors $x$ and $y$ and write $x_i$ for the $i$-th bit of $x$.
It is promised that there exist disjoint sets of indices $I_0, I_1 \subset
\{1,\dots,n\}$ of $n/4$ indices each, such that $x_i=y_i=0$ for all $i \in I_0$ and
$x_j=y_j=1$ for all $j \in I_1$.
For each of the remaining $n/2$ indices $i$, it is promised that $x_i=0$ and $y_i=1$.
Initially, Alice knows all of $x$ and a uniformly at random chosen $1/k$ fraction of the bits of $y$, whereas Bob knows all of $y$ and a uniformly at random chosen $1/k$ fraction of the bits of $x$.
Carole knows both $x$ and $y$.
Alice and Bob can only communicate with Carole.
Let $Z$ be the set of indices $i$ for which $x_i \oplus y_i=1$;
observe that $|Z|=n/2$.
The goal is for either Alice or Bob to output $n/4$ indices of $Z$.
We define the {\em communication complexity} of this problem, denoted by
$R_\epsilon(\pxor_{n,k})$, to be the minimum number of bits communicated by an
$\epsilon$-error protocol that solves it.
\end{definition}
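A small generator for Promise-XOR instances (our own illustration of the promise, not part of any proof) makes the structure concrete: $n/4$ indices with $x_i=y_i=0$, $n/4$ with $x_i=y_i=1$, and the remaining $n/2$ with $x_i=0$, $y_i=1$, so that $|Z|=n/2$ exactly.

```python
# Generate a random Promise-XOR instance satisfying the promise.
import random

def make_pxor_instance(n, seed=0):
    assert n % 4 == 0
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    I0 = set(idx[:n // 4])             # x_i = y_i = 0
    I1 = set(idx[n // 4: n // 2])      # x_i = y_i = 1
    # Remaining n/2 indices: x_i = 0, y_i = 1.
    x = [1 if i in I1 else 0 for i in range(n)]
    y = [0 if i in I0 else 1 for i in range(n)]
    return x, y, I0, I1

x, y, I0, I1 = make_pxor_instance(16)
Z = {i for i in range(16) if x[i] ^ y[i] == 1}
assert len(Z) == 8                     # |Z| = n/2
assert all(x[i] == y[i] for i in I0 | I1)
```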

\begin{theorem}[Lower bound for $R_\epsilon(\pxor_{n,k})$] \label{theorem:pxor_lower_bound}
There are constants $\epsilon'>0$ and $\delta'>0$ such that, for any large enough $n$ and $k\ge 3$, $$R_{\epsilon'}(\pxor_{n,k})\geq \delta' n.$$
In other words, there is no public-coin randomized protocol in the server
model of communication complexity that, on $n$-bit input strings $x, y$, solves
\pxor correctly with probability at least $1-\epsilon'$ while communicating fewer than $\delta' n$ bits.
(The probability above is over all shared random bits used by Alice, Bob, and Carole.)
\end{theorem}
\begin{proof}
\end{proof}


Let $A$ be an $\eps$-error algorithm that computes a spanning tree in the $k$-machine model.
We show how to use $A$ to solve $\pxor$ in the server model, which implies that the expected round complexity of $A$ is $\tilde\Omega(n / k)$.
We first describe the simulation at Carole:
Recall that Carole knows the entire input vectors $x$ and $y$.




\endinput

