\onlyShort{\vspace{-0.2cm}}
\section{Applications of Hypergraph MIS algorithms to standard graph problems}
\onlyShort{\vspace{-0.2cm}}
\label{sec:applications}
In this section we show that our distributed hypergraph algorithms have direct applications in the standard graph setting.
\onlyLong{As a first application of our MIS algorithm, we show how to solve the restricted minimal dominated set (\rmds) problem in \Cref{sec:rmds}.
We will use this \rmds-algorithm to obtain a distributed algorithm for solving the balanced minimal dominating set (\bmds) problem, thereby resolving an open problem of \cite{balanced-minimal}.}

\onlyShort{\vspace{-0.1cm}}
\onlyLong{\subsection{Restricted Minimal Dominating Set (\rmds)} \label{sec:rmds}}
\onlyShort{\paragraph{Restricted Minimal Dominating Set (\rmds)} \label{sec:rmds}}
%\onlyShort{\vspace{-0.2cm}}

We are given a (standard) graph $G = (V,E)$ and a subset of nodes $R \subseteq V$, such
that $R$ forms a dominating set in $G$ (i.e., every node $v \in V$ either belongs to $R$ or is adjacent to a node in $R$).
We are required to find a {\em minimal} dominating set that is a subset of $R$ and dominates $V$.
Since a minimal vertex cover is the complement of a maximal independent set, we can leverage our \mis algorithm (cf.\ \Cref{sec:hyper}).
To this end, we show that the \rmds problem can be solved by finding a minimal hitting set (or minimal vertex cover) on a specific hypergraph $H$.
The server client representation of $H$ is determined by $G$ and $R$ as follows:
For every vertex in $V$ we add a client (i.e.\ hyperedge) and, for every vertex in $R$, we also add a server. 
Thus, for every vertex $u \in V$, we have a client $e_u$ and, if $u \in R$, we also have a server $s_u$.
We then connect a server $s_u$ to a client $e_v$, iff either $u$ and $v$ are adjacent in $G$, or $u=v$.
\onlyLong{\Cref{algo:rmds} contains the complete pseudo code of this construction.}
\onlyShort{See the full paper for the complete pseudo code of this construction.}
Note that we can simulate this server client network on the given graph with constant overhead in the CONGEST model.
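To make the reduction concrete, the following centralized Python sketch builds the server client hypergraph $H$ from $G$ and $R$ and extracts an \rmds as the complement of a maximal independent set on the servers. The function name is ours, and a greedy sequential MIS computation stands in for our distributed \mis algorithm; this is an illustration only, not the distributed procedure itself.

```python
def rmds_via_hypergraph(adj, R):
    """Centralized sketch of the RMDS reduction.

    adj: dict mapping each vertex of G to its set of neighbours.
    R:   a dominating set of G; the returned set is a minimal
         dominating set contained in R.
    """
    # One hyperedge (client) e_v per vertex v, containing the servers
    # s_u with u in R and u in N[v] (u adjacent to v, or u == v).
    edges = {v: {u for u in adj[v] | {v} if u in R} for v in adj}
    assert all(edges.values()), "R must dominate G"
    # Greedy maximal independent set on the servers (stand-in for the
    # distributed MIS algorithm): add s_u unless doing so would make
    # some hyperedge fully contained in the independent set.
    mis = set()
    for u in sorted(R):
        if all(not e <= mis | {u} for e in edges.values()):
            mis.add(u)
    # The complement of the MIS within the servers is a minimal vertex
    # cover of the hypergraph, i.e. a minimal dominating set inside R.
    return set(R) - mis
```

On a path $1$--$2$--$3$ with $R=\{1,2,3\}$, the sketch returns $\{2\}$, the unique minimum (and minimal) dominating set inside $R$.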
%Since the number of clients, servers, and the maximum degree are all bounded by $O(n)$, 
We have the following result by virtue of \Cref{thm:mis}:

\begin{theorem} \label{thm:rmds}
  \rmds can be solved in expected time $\tilde O(\min\{\Delta^{\eps}, n^{o(1)}\})$ (for any const.\ $\eps > 0$) on a graph $G$ in the CONGEST model, and in time $O(\log^2 n)$ in the LOCAL model, where $\Delta$ is the maximum degree of $G$.
\end{theorem}
\onlyLong{
\begin{algorithm}[t]
  \begin{algorithmic}[1]
\item[] Let $R$ be the set of restricted nodes (which are part of the MDS).
\item[] Simulate a server client network $H$. Every node (locally) adds vertices to the clients $C$ resp.\ servers $S$, and simulates the edges in $H$.
\FOR{every node $u$}
  \STATE Node $u$ adds a client $e_u$ to $C$.
  \IF{$u \in R$}
  \STATE Node $u$ adds a server $s_u$ to $S$, and an edge $(s_u,e_u)$ to $E(H)$.
  \ENDIF
\ENDFOR
\FOR{all nodes $u$, $v$ where $(u,v) \in E(G)$}
  \STATE If server $s_u$ exists in $H$, add edge $(s_u,e_v)$ to $H$.
\ENDFOR
\item[]
\STATE Find an MIS on $H$ and let $O_{MIS} \subseteq S$ be the servers that are in the output set.
\FOR{every node $u$ where $s_u$ exists}
  \STATE If $s_u \notin O_{MIS}$, then node $u$ adds itself to the \rmds.
\ENDFOR
\end{algorithmic} 
  \caption{An \rmds-algorithm: Finding a minimal dominating set on a graph $G$ that is a subset of a given dominating set $R$.}
  \label{algo:rmds}
\end{algorithm}
}

\onlyShort{\vspace{-0.2cm}}
\onlyLong{\subsection{Balanced Minimal Dominating Set} \label{sec:bmds}}
\onlyShort{\paragraph{Balanced Minimal Dominating Set} \label{sec:bmds}}
\onlyShort{\vspace{-0.2cm}}
We define the \emph{average degree} of a (standard) graph $G$,
 denoted by $\delta$, as the sum of the degrees of its vertices (where the degree of a vertex is its degree in $G$) divided by the number of vertices in $G$.
A {\em balanced minimal dominating set (BMDS)} (cf.\ \cite{balanced-minimal})
is a minimal dominating set  $D$ in $G$ that minimizes the ratio of
the average degree of $D$ to that of the graph itself (the average degree of the set of nodes $D$ is defined as the average degree of the subgraph induced by $D$).
\onlyLong{The \bmds problem is motivated by applications in fault-tolerance and load balancing (see \cite{balanced-minimal} and the references therein).
For example, in a typical  application, an MDS
can be used to form  clusters with low diameter, with the nodes in the MDS being the ``clusterheads'' \cite{moscibroda-survey}. Each clusterhead is  responsible for monitoring the nodes that are adjacent to it.
Having an MDS with low degree is useful
in a resource/energy-constrained setting since the number of
nodes monitored \emph{per} node in the MDS will be low (on average). This
can lead to  better load balancing, and consequently less resource or
energy consumption per node, which is crucial for ad hoc and sensor
networks, and help in extending the lifetime of such networks while also leading to better fault-tolerance. 
 For example,
in an $n$-node star graph, the above requirements imply that it is better
for  the leaf nodes to form the MDS rather than the central node alone.   In
fact,  the average degree of the MDS formed by the leaf  nodes --- which is
1 --- is within a constant factor of   the average degree of a star (which is
close to 2), whereas the average degree, $n-1$, of the MDS consisting of the
central node alone is much larger.}
A {\em centralized} polynomial time algorithm for computing a \bmds with (the best possible in general\footnote{That is,
there exist graphs with average degree $\delta$ for which this bound is essentially optimal.}) average degree $O(\frac{\delta \log \delta}{\log \log \delta})$ was given in \cite{balanced-minimal}. A distributed algorithm achieving the same bound was left as a key open problem.
We now present a distributed variant of this algorithm (cf.\ Algorithm~\ref{algo:bmds}) that uses our hypergraph \mis-algorithm as a subroutine.
\onlyLong{Note that since the \bmds problem is defined on standard graphs, we assume that \Cref{algo:bmds} executes on a standard synchronous network adhering to the CONGEST model of communication.
}
\begin{algorithm}[t]
  \onlyShort{\scriptsize}
  \begin{algorithmic}
\item[]
 \STATE Nodes compute the average network degree $\delta$.
  \STATE Every node $u$ of degree $ > 2\delta$ marks itself with probability $\frac{\log t}{t}$ where $t = \frac{2 \delta \log \delta}{\log \log \delta }$.

\STATE Every node of degree $\leq 2 \delta$ marks itself.

\STATE If a node $v$ is not marked, and none of the neighbors of $v$ are marked, then $v$ marks itself.
\STATE Let $\textsc{marked}$ be the set of nodes that are marked. 
Invoke the RMDS algorithm (cf.\ \Cref{sec:rmds}) on $G$ where the restricted set is given by \textsc{marked}.
%\STATE Let $M$ denote the set of marked vertices at this point. $M$ forms a dominating set of $G$, but is not necessarily minimal. Using any algorithm, select a minimal dominating set $M' \subseteq M$.
\STATE Every node that is in the solution set of the RMDS algorithm remains in the final output set.
  \end{algorithmic}
  \caption{A distributed \bmds-algorithm.}
  \label{algo:bmds}
\end{algorithm}
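The marking phase of \Cref{algo:bmds} can be sketched as the following centralized simulation. The function name, the clamping of $t$ for very small $\delta$, and the sequential fix-up loop are our illustration choices; the actual algorithm performs these steps distributedly and then invokes the \rmds-algorithm on the marked set.

```python
import math
import random

def bmds_marking(adj, seed=0):
    """Centralized sketch of the marking phase of the BMDS algorithm.
    adj: dict mapping each vertex to its set of neighbours.
    Returns a dominating set, to be minimalized via the RMDS step."""
    rng = random.Random(seed)
    n = len(adj)
    delta = sum(len(adj[v]) for v in adj) / n            # average degree
    # Guard the formula against very small delta (our illustration
    # choice; the analysis assumes delta is sufficiently large).
    t = max(2.0, 2 * delta * math.log(max(delta, 2))
            / math.log(max(math.log(max(delta, 2)), 2)))
    marked = set()
    for v in adj:
        if len(adj[v]) <= 2 * delta:                     # low degree: always mark
            marked.add(v)
        elif rng.random() < math.log(t) / t:             # high degree: mark w.p. log(t)/t
            marked.add(v)
    # Fix-up: an unmarked node with no marked neighbour marks itself,
    # so the marked set always dominates G.
    for v in adj:
        if v not in marked and not (adj[v] & marked):
            marked.add(v)
    return marked
```

By the fix-up step, the returned set dominates $G$ regardless of the random choices, so it is a valid restricted set for the \rmds call.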

\begin{theorem} \label{thm:bmds}
Let $\delta$ be the average degree of a graph $G$.
There is a CONGEST model algorithm that finds a \bmds with average degree $O(\frac{\delta \log \delta}{\log \log \delta })$ in expected $\tilde O(D + \min\{\Delta^\eps, n^{o(1)}\})$ rounds, where $D$ is the diameter, $\Delta$ is the maximum node degree of $G$, and $\eps > 0$ is
any constant.
\end{theorem}
\onlyLong{
\begin{proof}
Computing the average degree in the first step of \Cref{algo:bmds} can be done by first electing a leader, then building a BFS tree rooted at the leader, and finally computing the average degree by convergecast.

It was shown in \cite{balanced-minimal} that marking the nodes according to \Cref{algo:bmds} yields an average degree of $O(\frac{\delta \log \delta}{\log \log \delta })$.
The runtime bound follows since the first part of the algorithm takes $O(D)$ rounds and the remainder is dominated by the running time of the \rmds-algorithm (cf.\ \Cref{thm:rmds}).
\end{proof}
}


%\onlyShort{\vspace{-0.2cm}}
\onlyLong{\subsection{Minimal Connected Dominating Sets (\mcds)} \label{sec:mcds}}
\onlyShort{\paragraph{Minimal Connected Dominating Sets (\mcds)} \label{sec:mcds}}
\onlyShort{\vspace{-0.2cm}}

Given a graph $G$, the \mcds problem requires us to find a minimal dominating set $M$ that is connected in $G$.
We now describe our distributed algorithm for solving \mcds in the CONGEST model (\onlyLong{see \Cref{algo:mcds} for the complete pseudo code}\onlyShort{see the full paper for the complete pseudo code}) and argue its correctness.
We first elect a node $u$ as the leader using a $O(D)$ time algorithm of \cite{KPPRT13:PODC}.
Node $u$ initiates the construction of a BFS tree $B$, which has $k\le D$ levels, after which every node knows its level (i.e.\ distance from the leader $u$) in the tree $B$.
Starting at the leaf nodes (at level $k$), we convergecast the maximum level to the root $u$, which then broadcasts the overall maximum tree level to all nodes in $B$ along the edges of $B$.

We then proceed in iterations processing two adjacent tree levels at a time, starting with nodes at the maximum level $k$.
For simplicity, we assume that all leaves of $B$ are on level $k$; this is without loss of generality, since every node knows $k$ and its own level, and hence knows after how many iterations it needs to become active.
We now describe a single iteration concerning levels $i$ and $i-1$:
First, consider the set $L_i$ of level $i$ nodes that have already been added to the output set $M$ in some previous iteration;
initially, for $i=k$, set $L_i$ will be empty.
We run the $O(D + \sqrt{n})$ time algorithm of \cite{thurimella} to find maximal connected components among the nodes in $L_i$ in the graph $G$; let $\cC=\{C_1,\dots,C_\alpha\}$ be the set of these components and let $\ell_j$ be the designated component leader of component $C_j \in \cC$.

%For the node set of $H$, we consider each component in $C$ as a super-node and the set %$L$ of the remaining nodes on levels $i$ and $i-1$, i.e., $V(H) = C \cup L$.
%The edges of $H$ are given by the induced inter-level edges of $G$ among the nodes in $L$ and, in addition, we add an edge between $s \in L$ and $C_j \in C$ iff there exists an $v \in C_j$ such that $(v,s) \in E(G)$.
%We can think of these edges incident to $C_j$ as pointing to the component leader node $\ell_j$.


We now simulate a hypergraph that is defined as the following bipartite server client graph $H$:
Consider each component in $\cC$ as a \emph{super-node}; we call the other nodes on level $i$ \emph{non-super-nodes}.
The set $C$ of clients contains all super-nodes in $\cC$ and all nodes on level $i$ that are neither adjacent to any super-node nor have been added to the output set $M$.
The set $S$ of servers contains all nodes on level $i-1$.
The edges of $H$ are the induced inter-level edges of $G$ between servers and non-super-node clients. In addition, we add an edge between a server $s \in S$ and a (super-node) client $C_j \in \cC$, iff there exists a $v \in C_j$ such that $(v,s) \in E(G)$.
Conceptually, we can think of the edges incident to $C_j$ as pointing to the component leader node $\ell_j$.
%  We use $H$ denote the hypergraph identified by this server client network.
Next, we find a MIS (cf.\ \Cref{sec:hyper}) on the (virtual) hypergraph $H$. \onlyShort{We refer to the full
paper for details.}
\onlyLong{
We sketch how we simulate the run of the MIS algorithm on $H$ in $G$:
If a node $v \in C_j$ receives a message from a node in $S$, then $v$ forwards this message to the component leader $\ell_j$. (If a node receives multiple messages at the same time, it simply forwards all messages sequentially by pipelining.) 
After waiting for $\tilde O(D)$ rounds, the component leader $\ell_j$ locally simulates the execution of the MIS algorithm at the super-node $C_j$ by using the received (forwarded) messages.
Any messages produced by the simulation at $\ell_j$ are then sent back through the same paths to the neighbors of $C_j$.
Let $O_i$ be the set of nodes (on level $i-1$) that are not in the MIS; note that $O_i$ forms a minimal vertex cover on the hypergraph given by $H$.
At the end of this iteration, we add $O_i$ to the output set $M$ and then proceed to process levels $i-1$ and $i-2$.
}
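To make the construction of $H$ concrete, the following centralized Python sketch builds the server client graph for a single iteration. The function name and the dictionary encoding of hyperedges are our illustration choices; the distributed algorithm never materializes $H$ explicitly but simulates it on $G$.

```python
def build_H(adj, level, i, comps, M):
    """Sketch of the server client graph H for one MCDS iteration
    (processing levels i and i-1).

    adj:   dict mapping each vertex of G to its set of neighbours.
    level: dict mapping each vertex to its BFS-tree level.
    comps: connected components of the level-i nodes already in M.
    M:     the current output set.
    """
    # Servers: all nodes on level i-1.
    servers = {v for v in adj if level[v] == i - 1}
    # Clients: one per super-node (component), plus every level-i node
    # that is neither in M nor adjacent to any super-node.
    supernodes = [frozenset(c) for c in comps]
    singles = [frozenset({v}) for v in adj
               if level[v] == i and v not in M
               and not any(adj[v] & c for c in comps)]
    # Hyperedge of a client: its neighbouring servers in G (edges of a
    # super-node conceptually point to its component leader).
    H = {cl: {s for v in cl for s in adj[v] if s in servers}
         for cl in supernodes + singles}
    return servers, H
```

For example, on a BFS tree with root $r$, level-$1$ nodes $a,b$ and level-$2$ nodes $c,d$ (edges $ra$, $rb$, $ac$, $bd$), processing levels $2$ and $1$ with the single component $\{c\}$ yields servers $\{a,b\}$ and two clients, the super-node $\{c\}$ with hyperedge $\{a\}$ and the singleton $\{d\}$ with hyperedge $\{b\}$.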


%to the output set $M$ of the MCDS by starting at the leafs at maximum level $k$.




%The lemma belows state that it is sufficient to solve hypergraph MIS on graphs of dimension at most $3\log (m+n)$. 

\begin{theorem} \label{thm:mcds}
  \mcds can be solved in the CONGEST model in expected time $\tilde O(D (D\min\{\Delta^{\eps}, n^{o(1)}\} +\sqrt{n}) )$ for any constant $\eps > 0$.
\end{theorem}
\onlyLong{
\begin{proof}
We first argue the correctness of the algorithm.
It is straightforward to see that, after $k$ iterations, the \emph{solution set} $M= \bigcup_{i=1}^k O_i$ forms a dominating set of $G$.
For connectivity, note that since $O_i$ is a minimal vertex cover on the hypergraph $H$, it follows that every super-node in the client set has a neighboring node in $O_i$.
This guarantees that $M$ remains connected (in $G$) after adding $O_i$.

Next, we consider minimality.
Suppose that there exists a node $w$ in the solution set $M$ that is \emph{redundant} in the sense that it can be removed from $M$ such that $M\setminus \{w\}$ is still a connected dominating set of $G$.
Assume that $w$ became redundant in the iteration when processing levels $j$ and $j-1$.
Note that, by the properties of the BFS tree, $w$ must be on level $j$ or $j-1$, since, in this iteration, we only add new nodes to $M$ that are themselves on level $j-1$.
By the correctness of the MIS algorithm, $w$ does not become redundant in the same iteration that it is added to $M$, thus $w$ can only be on level $j$.
Moreover, observing that $w$ can only have been added to $M$ in the preceding iteration 
to dominate some node $x$ on level $j+1$, it follows that $w$ cannot be made redundant by adding some node $z$ on level $j-1$, since $z$ cannot dominate $x$.
This shows that the set $M$ is minimal as required.

We now argue the running time bound.
The pre-processing steps of electing a leader and constructing a BFS tree can be completed in $O(D)$ rounds.
The for-loop of the MCDS algorithm has $O(D)$ iterations, thus it suffices to show that a single iteration (including finding a MIS on the constructed hypergraph $H$) can be simulated in $\tilde O(\sqrt{n} + D\min\{\Delta^{\eps},n^{o(1)}\})$ rounds.
Consider the iteration processing levels $i$ and $i-1$, which determines the status of the nodes on level $i-1$, i.e., the nodes on level $i-1$ form the set of servers as defined in the MCDS algorithm.
First, we run the algorithm of \cite{thurimella}, which, given a graph $G$ and a subgraph $G'$, yields maximal connected components (w.r.t.\ $G'$) in time $\tilde O(D + \sqrt{n})$, where $D$ is the diameter of $G$.
Then, we simulate the MIS algorithm (cf.\ \Cref{sec:hyper}) on the hypergraph $H$ given by the set of servers and the clients (some of which are super-nodes).
Consider a super-node $C_j$.
We can simulate a step of the MIS algorithm by forwarding all messages that nodes in $C_j$ receive (from servers on level $i-1$) to the component leader node $\ell_j$ by sequentially pipelining simultaneously received messages.
The following lemma shows that we can assume that each client has at most $O(\log n)$ incident servers, i.e., the dimension of the hypergraph $H$ is bounded by $O(\log n)$:
\begin{lemma} \label{lem:logn}
If there is an algorithm $\cA$ that solves MIS on $n$-node $m$-edge hypergraphs of dimension up to $3\log (m+n)$ in $T(n)$ rounds for some function $T$, then there is an algorithm $\cA'$ that solves hypergraph MIS on any $n$-node $m$-edge hypergraph with any dimension  in $\tilde O(T(n))$ rounds. 
\end{lemma}
\onlyLong{
\begin{proof}[Proof of \Cref{lem:logn}]
$\cA'$ works as follows. Let $\cH$ be the input hypergraph. We use $M$ to denote the MIS that $\cA'$ outputs; initially, $M=\emptyset$. First, we mark every hypernode independently with probability $1/2$. Let $\cH'$ be the subgraph of $\cH$ induced by the marked nodes (i.e.\ $\cH'$ consists of every hyperedge all of whose nodes are marked). Observe that, with probability at least $1-1/m^2$, every hyperedge in $\cH'$ has dimension at most $3\log m$, because a hyperedge that contains more than $3\log m$ nodes has all of its nodes marked with probability at most $1/m^3$. We now run $\cA$ to solve hypergraph MIS on $\cH'$. We add all nodes of the resulting MIS to $M$ and remove them from $\cH$. Additionally, we remove from $\cH$ all other nodes of $\cH'$ (those not in the MIS of $\cH'$) and the hyperedges containing them; these nodes can never be added to the MIS of $\cH$, so they are safe to remove. We then repeat this procedure to find the MIS of the remaining hypergraph. Observe that each repetition removes half of the remaining nodes in expectation, so $O(\log n)$ repetitions suffice in expectation. Each repetition takes $O(T(n))$ time, yielding a total running time of $\tilde O(T(n))$. 
\end{proof}
}
It follows from \Cref{lem:logn} that forwarding messages towards the component leader can incur a delay of at most $O(\log n)$ additional rounds due to congestion.
This means that one step of the MIS algorithm can be implemented in $\tilde O(D)$ rounds, and thus the total time complexity of a single iteration of the for-loop takes time $\tilde O(D\min\{\Delta^{\eps}, n^{o(1)}\} + \sqrt{n})$, as required.
\end{proof}
}
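The random-marking reduction to logarithmic dimension used in the analysis can be sketched as the following centralized simulation. The function names are ours, and a greedy sequential solver stands in for the assumed low-dimension MIS algorithm; this illustrates the peeling structure of the reduction, not the distributed implementation.

```python
import random

def greedy_mis(edges, nodes):
    """Stand-in for the low-dimension MIS algorithm: add a node unless
    doing so would make some hyperedge fully contained in the set."""
    I = set()
    for v in sorted(nodes):
        if all(not e <= I | {v} for e in edges):
            I.add(v)
    return I

def mis_via_dimension_reduction(hyperedges, seed=0):
    """Sketch of the reduction: mark each remaining node w.p. 1/2,
    solve MIS on the induced (whp low-dimension) sub-hypergraph, keep
    that MIS, and discard the remaining marked nodes together with
    every hyperedge containing a discarded node."""
    rng = random.Random(seed)
    nodes = set().union(*hyperedges)
    edges = [set(e) for e in hyperedges]
    M = set()
    while nodes:
        marked = {v for v in nodes if rng.random() < 0.5}
        # Induced sub-hypergraph: hyperedges all of whose remaining
        # nodes are marked.
        sub = [e for e in edges if e and e <= marked]
        I = greedy_mis(sub, marked)
        discarded = marked - I
        M |= I
        nodes -= marked
        # Shrink surviving hyperedges by the new MIS nodes; a hyperedge
        # with a discarded node can never be fully contained in M.
        edges = [e - I for e in edges if not (e & discarded)]
    return M
```

Regardless of the random marks, the returned set $M$ contains no hyperedge entirely (independence), and every discarded node has a witnessing hyperedge inside $M \cup \{v\}$ (maximality), mirroring the argument in the analysis.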




\onlyLong{
\begin{algorithm}[t]
  \begin{algorithmic}[1]
\item[]
  \STATE Let $M$ be the final output set; initially $M=\emptyset$.
  \STATE We perform leader election using an $O(D)$ time algorithm of \cite{KPPRT13:PODC}, yielding some leader $\ell$.
  \STATE Node $\ell$ initiates the construction of a breadth-first-search tree $B$ of  $k\le D$ levels.
  \STATE The leaves of $B$ report their level (i.e.\ distance from the root) to $\ell$ by convergecast and the leader $\ell$ then rebroadcasts the maximum level to all nodes along the tree edges.
  At the end of this step, every node knows its level  in $B$ and the maximum tree level.
  \item[]
  \FOR{tree level $i=k,\dots,1$}
  \STATE Let $L_i \subseteq M$ denote the nodes on level $i$ that have been added to $M$. (Note that $L_k$ is empty initially.)
  Find a set of maximal connected components $\cC=\{C_1,\dots, C_\alpha\}$ of the nodes in $L_i$ using the $O(D+\sqrt{n})$ time algorithm of \cite{thurimella}; let $\ell_1,\dots,\ell_\alpha$ denote the designated leaders of the respective components.
\item[]
  \item[] \textsl{Solving MIS on the hypergraph induced by levels $i$ and $i-1$:}
  \STATE We construct the following bipartite server client graph $H$.
  Consider each component in $\cC$ as a ``super-node''.
  The set $C$ of clients contains all super-nodes in $\cC$ and all nodes on level $i$ that are neither adjacent to any super-node nor have been added to the output set $M$.
  The set $S$ of servers contains all nodes on level $i-1$.
  The edges of $H$ are the induced inter-level edges of $G$ between servers and non-super-node clients. In addition, add an edge between $s \in S$ and $C_j \in \cC$, iff there exists a $v \in C_j$ such that $(v,s) \in E(G)$.
  Conceptually, we can think of the edges incident to $C_j$ as pointing to the component leader node $\ell_j$.
%  Let $H$ be the hypergraph identified by this server client network.
  \STATE Find a MIS (cf.\ \Cref{sec:hyper}) on the virtual hypergraph $H$:
 \item[] We sketch how we simulate the run of the MIS algorithm on $H$ in $G$:
  If a node $v \in C_j$ receives a message from a node in $S$, $v$ forwards this message to the component leader $\ell_j$. (If a node receives multiple messages at the same time, it simply forwards all messages sequentially by pipelining.) 
  After waiting for $\tilde O(D)$ rounds, the component leader $\ell_j$ locally simulates the execution of the MIS algorithm by using the received (forwarded) messages.
  Any messages produced by the simulation at $\ell_j$ are then sent back through the same paths to the neighbors of $C_j$.
  \STATE Add every node on level $i-1$ that is not in the MIS to the output set $M$.
  \ENDFOR
\item[]
  \end{algorithmic}
  \caption{A distributed \mcds-algorithm.}
  \label{algo:mcds}
\end{algorithm}
}




%\danupon{To Peter: I'm not sure where you'll use this lemma so I'll state it separately here.}


\endinput
