\section{Efficient Fanout Heuristics}\label{sec:heuristics}

 In this section we provide the basic intuition for why the structure of the
 overlay affects the latency of the system, and then prove that the optimal
 fanout depends on a small number of features of the aggregation, using a
 selection of the proofs we have constructed for different cases. We then
 explain our final heuristics in light of the proofs. All of the notation used
 is listed in Table~\ref{tab:notation}.

\begin{table}
  \begin{tabular}{l|p{.75\linewidth}}
  Symbol & Meaning \\ \hline
  $n$ & Number of computation nodes. \\
  $d$ & Fanout of the overlay.\\
  $h$ & Height of the overlay (equal to $\log_{d} n$).\\
  $g^{s}\left( x\right)$ & The size of the output of one level of
  aggregation for an input of size $\left| x\right|$.\\
  $g^{c}\left( x\right)$ & The time taken by one level of aggregation
  (including communication) for an input of size $\left| x\right|$.\\
  $c_{c}$ & Cost of aggregation per unit of data when $g^{c}\left(
  x\right)$ is linear and $g^{c}\left( 0\right)=0$.\\
  $y$ & Ratio of output sizes of adjacent levels of aggregation when
  $g^{s}\left(d\, g^{s}\left( x\right) \right)\neq g^{s}\left( x\right)$.\\
  $x_{0}$ & Output from a computation node.\\
  $\left| i_{x}\right|$ & Size of the input retrieved from one child at the
  $x^{th}$ level of aggregation; $i_{1}=x_{0}$.
  \end{tabular}
  \caption{Notation used in the intuition and proofs.}
  \label{tab:notation}
\end{table}

\subsection{Intuition}

Our objective is to minimize the total cost of the system. Since the aggregation
phase is separate from the local computation time, we can optimize them
separately. The output from the computation phase is the output from each local
node, and it is the job of the aggregation overlay to aggregate these outputs
into a single output. With this setup, the number of leaf nodes for the tree
defining the aggregation overlay is given because it is the number of local
nodes involved in the computation phase. The functions to aggregate multiple
inputs into a single output are also given, so the only variable left to change
is the fanout, $d$.

The aggregation time at a single level, composed of the time to receive input
from the level just beneath it and the time to compute the output for the level,
depends on the size of the input $x$. We use the function $g^{c}(x)$, which
accounts for both the communication time and the computation time of
aggregation, to denote the time cost of a single level of aggregation. It
should be noted that aggregation taking place at the same level in the overlay
happens in parallel, so the costs of sibling branches overlap, and only the
cost of a single branch must be considered.

The output of an aggregation node is also defined by the function which
aggregates results. The size of this output is important to the analysis because
it is used as input at higher levels, so we denote it with $g^{s}(x)$. Depending
on the aggregation function, the output size may be greater than, less than, or
equal to the size of the output at the level below. All output sizes are
related in some manner to the size of $i_{1}$, which is the output from the
computation phase and the input to the leaf nodes of the aggregation
overlay tree.

There is a tradeoff between fanout and tree height. Decreasing the fanout
increases the height of the tree. This increases the amount of computation done
in parallel, but it also increases the total amount of computation to be done,
since results must filter up through more levels. Depending on the exact
relationship between $g^{s}\left( x\right)$ and $g^{c}\left( x\right)$, the time
saved by the parallelism may or may not offset the time required by the extra
levels. We can model the system mathematically to make this determination.
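To make the tradeoff concrete, the following sketch evaluates the total
aggregation cost for several fanouts under an assumed toy model (a linear
$g^{c}$ with unit cost and no output growth; the function and parameter names
are illustrative, not part of the system):

```python
import math

def total_cost(n, d, y=1.0, c=1.0, i1=1.0):
    """Toy model of total aggregation cost for fanout d over n leaves.

    There are log_d(n) levels; each level gathers d inputs and pays a
    linear cost c * d * size, and the input size changes by the factor
    y at each level (y = 1 means no growth or shrinkage).
    """
    levels = round(math.log(n, d))
    cost, size = 0.0, i1
    for _ in range(levels):
        cost += c * d * size
        size *= y
    return cost

# n chosen so that every fanout below yields a whole number of levels.
n = 4096
costs = {d: total_cost(n, d) for d in (2, 4, 16, 64, n)}
# Small fanouts (deep trees) beat one flat level when outputs do not grow.
```

With $y=1$ the sweep reproduces the $\frac{d}{\log d}$ shape analyzed below:
deep trees with small fanout dominate a single flat level.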

In order for the analysis to be complete, we have to consider every combination
of the cases in the following two lists and create a heuristic for each
combination:
\begin{enumerate}
  \item $g^{s}\left(d\, g^{s}\left( x\right) \right) < g^{s}\left( x\right) $ -
  The output size at each level is less than that of the preceding level.
  \item $g^{s}\left(d\, g^{s}\left( x\right) \right) = g^{s}\left( x\right) $ -
  The output size at each level is equal to that of the preceding level.
  \item $g^{s}\left(d\, g^{s}\left( x\right) \right) > g^{s}\left( x\right) $ -
  The output size at each level is greater than that of the preceding level.
\end{enumerate}
-- and --
\begin{enumerate}
  \item $g^{c}\left( x\right)$ is sublinear - The computation time at each level
  is sublinear with respect to the size of the input.
  \item $g^{c}\left( x\right)$ is linear - The computation time at each level
  is linear with respect to the size of the input.
  \item $g^{c}\left( x\right)$ is superlinear - The computation time at each
  level is superlinear with respect to the size of the input.
\end{enumerate}

Because $g^{c}\left( x\right)$ includes
communication time to retrieve the inputs, there will always be a linear
component, but that component may or may not dominate the time taken to compute
the aggregation.

\subsection{Selected Proofs}

Here we prove the (near-)optimality of our heuristics.
For space reasons, we cannot present the exhaustive proofs for all cases.
The subset we show is straightforward and covers a significant portion of the
cases. For all proofs, we assume communication time is linear in the size of the
total input, which is the case when aggregators do not share nodes.

\begin{thm}\label{thm:middlecell}
The optimal fanout is $e$ when
$g^{s}\left(d\, g^{s}\left( x\right) \right) = g^{s}\left( x\right) $ and
$g^{c}\left( x \right) $ is linear.
\end{thm}

\begin{proof}\label{proof:linernogrowth}
 The total aggregation cost when fanout is $d$ equals 
 $\sum_{x=1}^{\log _{d}n}g^{c}\left(d\, i_{x}\right)$. With our
 conditions, $g^{c}\left(d\, i_{x}\right)$ equals a constant $c_{c}$
 times $d\, \left| i_{x}\right|$.
 The output sizes at each level are equal,
 $\left| x_{0}\right|=\left| i_{1}\right|=\cdots=\left|i_{\log_{d}n}\right|$.
 The cost thus equals $\sum_{x=1}^{\log _{d}n}c_{c}\,d\, \left| x_{0}\right| = 
 c_{c}\,d\, \left| x_{0}\right|\sum_{x=1}^{\log _{d}n}1  = 
 c_{c}\,d\, \left| x_{0}\right| \log_{d}n$.
 $\log _{d}n = \frac{\log n}{\log d}$; $\log n$,
 $\left| x_{0}\right|$, and $c_{c}$ are not affected by the fanout.
 $\therefore$ The part affected by the fanout is $\frac{d}{\log
 d}$, which is minimal at $d=e$.
\end{proof}
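A quick numerical check of the final step (a sketch; the grid granularity is an
arbitrary choice of ours) confirms that $\frac{d}{\log d}$ is minimized at
$d=e$, with 2 and 3 the nearest whole-number candidates:

```python
import math

# The fanout-dependent factor from the proof is f(d) = d / ln(d).
def f(d):
    return d / math.log(d)

# Minimize over a fine grid of fanouts between 1.1 and 6.1.
grid = [1.1 + 0.001 * k for k in range(5001)]
best = min(grid, key=f)
# best lands within grid resolution of e; f(3) is slightly below f(2),
# but both are close to the continuous optimum.
```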

Next we consider the cases when $g^{s}\left(d\, g^{s}\left( x\right) \right)
\neq g^{s}\left( x\right) $ and $g^{c}\left( x \right) $ is linear. We introduce
Lemma~\ref{lem:growthfactor} which is used in Theorem~\ref{thm:bottomcell} and
Theorem~\ref{thm:topcell}. Note the lemma requires at least two levels of
aggregation.

\begin{lem}\label{lem:growthfactor}
Total aggregation cost is
$d\left| i_{1}\right| \left( c_{c} \right)
\frac{\left(y^{\log_{d}n} - 1 \right)}{y-1}$ when $y=
\frac{\left|i_{x+1}\right|}{\left|i_{x}\right|}\neq 1$ and $g^{c}\left( x \right) $ is linear.
\end{lem}

\begin{proof}
Assume the output size at each level differs from the input size by a
proportional factor $y$, so that
$g^{s}\left(d\,g^{s}\left( x\right) \right)$ $=$ $y \, g^{s}\left( x\right) $.
The input size therefore changes by a factor of $y$ at every level \emph{after
the first}, so the input retrieved at level $x$ has size
$y^{x-1}\left| i_{1}\right|$. The total aggregation cost is
$\sum _{x=1}^{\log_{d}n} c_{c}\, d\, \left| i_{1}\right| y^{x-1}$ $=$
$d\, \left| i_{1}\right| c_{c} \sum_{x=1}^{\log_{d}n} y^{x-1}$,
a geometric series.
$\therefore$ The aggregation cost equals
$d\, \left| i_{1}\right| \, \left( c_{c} \right)
\frac{\left(y^{\log_{d}n} - 1 \right)}{y-1}$.
\end{proof}
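As a sanity check on the lemma (a sketch with illustrative parameters of our
choosing), the closed form can be compared against level-by-level summation:

```python
import math

def direct_cost(n, d, y, c=1.0, i1=1.0):
    # Level-by-level: level x receives inputs of size |i_1| * y**(x-1)
    # and pays the linear cost c * d * size.
    h = round(math.log(n, d))
    return sum(c * d * i1 * y ** (x - 1) for x in range(1, h + 1))

def closed_form(n, d, y, c=1.0, i1=1.0):
    # Lemma: d * |i_1| * c_c * (y**log_d(n) - 1) / (y - 1), for y != 1.
    h = round(math.log(n, d))
    return d * i1 * c * (y ** h - 1) / (y - 1)
```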
 
\begin{thm}\label{thm:bottomcell}
For $g^{s}\left(d\, g^{s}\left( x\right) \right) > g^{s}\left( x\right) $ and
linear or sublinear $g^{c}\left( x \right)$, there exists some $y_{i}$ such that
for $y > y_{i}$ the optimal fanout is $n$.
\end{thm}

\begin{proof}
   A single aggregation level provides minimal cost when the aggregation cost
   in Lemma~\ref{lem:growthfactor} for $y>1$ and $d<n$ is greater than the
   aggregation cost when $d=n$, which is 
   $c_{c} \, n \, \left| i_{1}\right|$.
   $d\, \left| i_{1}\right| \, c_{c} \frac{\left(y^{\log_{d}n} - 1
   \right)}{y-1} > c_{c} \, n \, \left| i_{1}\right|$.
   We substitute $\sqrt[h]{n}$ for $d$, which is the case for a
   full tree, and $y^{\log_{d}n}$ becomes $y^{h}$.
   This simplifies the problem to
   $\sqrt[h]{n}\, \frac{\left(y^{h} - 1 \right)}{y-1} > n $, or 
   $\frac{\left(y^{h} - 1 \right)}{y-1} > n^{1-\frac{1}{h}} $.
   For $y>1$,
   $\frac{y^{h} - 1}{y-1}$ $>$ $\frac{y^{h} - y}{y-1}$ $=$
   $\frac{y\left(y^{h-1} - 1 \right)}{y-1}$ $>$ $y^{h-1} - 1$,
   so, up to lower-order terms, the inequality holds when
   $y^{h}$ $>$ $n^{1-\frac{1}{h}}$, that is, when $y$ $>$ $n^{\left(
   1-\frac{1}{h}\right)\left( \frac{1}{h}\right)}$ $=$ $n^{\frac{h-1}{h^{2}}}$.
   In our experience, growth is more likely to be
   proportional to $d$ than to $n$. Substituting $n=d^{h}$ into $n^{\frac{h-1}{h^{2}}}$ gives 
   $\left(d^{h}\right)^{\frac{h-1}{h^{2}}}$ $=$ $d^{\frac{h-1}{h}}$ $=$ 
   $d^{\frac{\log_{d}n-1}{\log_{d}n}}$ for $2 \leq d \leq n$.
   To find the extreme case, we take the derivative and set it to 0:
   $\frac{\partial}{\partial d}\, d^{\frac{\log_{d}n-1}{\log_{d}n}}$ $=$ 
   $\frac{d^{-\log_{n}d}\left(\log n \, - \, 2\log{d}\right)}{\log n} = 0$.
   This is true when $\log n = 2\log{d}$, or $n=d^{2}$.
   The second derivative is $\frac{-2 \, d^{-\log_{n}d \, - \, 1}\left(\log d \,
   \log n - 2\log^{2}d + \log n\right)}{\log^{2}n}$, which for $2 \leq d \leq n$
   is negative at $n=d^{2}$, making this a maximum.
   Evaluating $d^{\frac{\log_{d}n-1}{\log_{d}n}}$ at $n=d^{2}$
   yields $\sqrt{d}$.
   $\therefore$ For $y \geq \sqrt{d}$, $d=n$ is optimal.
   
   We assumed
   $g^{c}\left( x_{0}\right)+\ldots +g^{c}\left( x_z\right)=g^{c}\left( x_{0} +
   \ldots + x_z\right)$.
   With sublinear $g^{c}\left( x\right)$, applying
   $g^{c}\left( x\right)$ to many smaller inputs costs more than applying it
   to one input of the same combined size, which favors a single level even
   further.
   $\therefore$ This result holds for sublinear $g^{c}\left( x\right)$.
\end{proof}
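The theorem can be spot-checked numerically (a sketch; $n$ and $y$ are
arbitrary choices of ours satisfying $y \geq \sqrt{d}$ for every candidate
fanout):

```python
import math

def cost(n, d, y):
    # Linear per-level cost with growth factor y (the form in the lemma),
    # taking c_c = |i_1| = 1.
    h = round(math.log(n, d))
    return sum(d * y ** (x - 1) for x in range(1, h + 1))

n = 4096
y = 70.0  # y > sqrt(n) = 64, hence y >= sqrt(d) for every d <= n
costs = {d: cost(n, d, y) for d in (2, 4, 16, 64, n)}
# A single level of aggregation (d = n) should be cheapest.
```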

\begin{thm}\label{thm:topcell}
For $g^{s}\left(d\, g^{s}\left( x\right) \right) < g^{s}\left( x\right) $ and
linear or superlinear $g^{c}\left( x \right)$, the optimal value of $d$ is 2.
\end{thm}

\begin{proof}
  The aggregation cost is given by Lemma~\ref{lem:growthfactor} using $y<1$.
  $\left| i_{1}\right|$ and $c_{c}$ are given, so the goal is to
  find the minimum for  $d\, \frac{\left(y^{\log_{d}n} - 1 \right)}{y-1}$ for $2 \leq d \leq n$.
  The only global extreme value in the range $2 \leq d \leq n$ was shown in the
  proof for Theorem~\ref{thm:bottomcell} to be a maximum, so the minimum value
  appears at one of the endpoints.
  $d\, \frac{\left(y^{\log_{d}n} - 1 \right)}{y-1}$ at 2 is 
  $2\, \frac{\left(y^{\log_{2}n} - 1 \right)}{y-1}$, and at $n$ is 
  $n\, \frac{\left(y^{\log_{n}n} - 1 \right)}{y-1}$ $=$
  $n\, \frac{\left(y - 1\right)}{y-1}$  $=$ $n$.
  For $y<1$,
  $\frac{y^{\log_{2}n} - 1}{y-1} = \sum_{k=0}^{\log_{2}n - 1}y^{k} <
  \frac{1}{1-y}$, so 
  $2\, \frac{\left(y^{\log_{2}n} - 1 \right)}{y-1} < \frac{2}{1-y}$, a bound
  independent of $n$, which is less than $n$ whenever $n > \frac{2}{1-y}$.
  $\therefore$ The optimal fanout is 2.
  
  We assumed 
  $g^{c}\left( x_{0}\right) + \ldots + g^{c}\left(x_z\right)$ $=$
  $g^{c}\left( x_{0} + \ldots + x_z\right)$. With superlinear $g^{c}\left(
  x\right)$, the larger per-level inputs that come with a higher $d$, and the
  fewer levels over which the input size shrinks, only increase the computation
  cost.
  $\therefore$ This result holds for superlinear $g^{c}\left( x\right)$.
\end{proof}
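Again a numerical spot check (a sketch with arbitrary $n$ and a shrink factor
$y<1$ of our choosing):

```python
import math

def cost(n, d, y):
    # Linear per-level cost with shrink factor y < 1, c_c = |i_1| = 1.
    h = round(math.log(n, d))
    return sum(d * y ** (x - 1) for x in range(1, h + 1))

n = 4096
y = 0.5  # outputs halve at every level
costs = {d: cost(n, d, y) for d in (2, 4, 16, 64, n)}
# The deepest tree (d = 2) should be cheapest, and its cost is bounded
# by 2 / (1 - y) regardless of n.
```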

\subsection{Heuristic Results}

\begin{table*}
  \centering
  \begin{tabular}{l|c|c|c}
  \multicolumn{1}{r|}{$g^{c}\left( x \right) \rightarrow $} & sublinear & linear & superlinear\\
  $g^{s}\left(d\, g^{s}\left( x\right) \right) \downarrow $ & & & \\ \hline
  ~ 						& 2 & 2 & 2  \\ 
  $< g^{s}\left( x\right) $	& \emph{(unproven)} & \emph{(optimal)} &
  \emph{(optimal)} \\
  ~							& & & \\ \hline
  ~							& 2 & 2 & 2 \\
  $= g^{s}\left( x\right) $ & \emph{(unproven)} & \emph{(near-optimal)} &
  \emph{(near-optimal -- proof not shown)} \\
  ~ 						& & top-k match &  \\ \hline
  ~							& $n$ & $n$ & $n$ \\
  $> g^{s}\left( x\right) $ & \emph{(optimal for $y\geq \sqrt{d}$)} & \emph{(optimal for $y\geq
  \sqrt{d}$)} & \emph{(optimal for $y\geq d$ -- proof not shown)} \\
   ~							& grep & sort & some matrix
  operations
  \end{tabular}
  \caption{The heuristic value for $d$ and some common distributed problems.}
  \label{tab:heuristicsummary}
\end{table*}

When $g^{s}\left(d\, g^{s}\left( x\right) \right) = g^{s}\left( x\right) $ and
$g^{c}\left( x \right)$ is linear, we have proven the optimal fanout to be $e$.
However, it is unclear what it means to have a fanout which is not a whole
number. Since the function which produced this optimum is monotonic on either
side of the minimum, the whole number with the best latency must be 2 or 3.
For simplicity, we choose 2 to reduce the number of configurations in the
system.

When $1 < y < \sqrt{d} $, the ideal fanout has not been
definitively proven in the general case. However, we note that the maximum
occurs close to the intuitive case of a single level. It therefore
makes sense to use a relatively high fanout when the size of the output grows
proportionally at each level. When $y$ is known to be very small, a case can be
made for a smaller $d$, but we use $d=n$ to simplify the heuristics.

Both Theorems~\ref{thm:bottomcell} and~\ref{thm:topcell} assume that $y$ is a
constant ratio. If the output grows or shrinks inconsistently with
this assumption, the heuristics are no longer provably ideal. However, we
believe that the results are strong enough to apply the heuristics in those
scenarios.

In addition to the proofs shown, we have constructed proofs for two other cases.
When $g^{s}\left(d\, g^{s}\left( x\right) \right) = g^{s}\left( x\right) $ and
$g^{c}\left( x \right)$ is superlinear, we have proven that the ideal value for
$d$ is in the range of $\left[ 2, e\right)$ depending on the degree of
superlinearity. For simplicity, we use 2 for the heuristic.

We have also proven that $d=n$ is the optimal fanout when $g^{s}\left(d\,
g^{s}\left( x\right) \right) > g^{s}\left( x\right) $ and $g^{c}\left( x
\right)$ is superlinear, for values of $y>d$. This is a relatively large growth
factor in our experience, but a tighter bound is not possible without knowing
the degree of superlinearity of $g^{c}\left( x\right)$.

There are two cells for which the optimal fanouts remain unproven. These are
the cases where $g^{c}\left( x\right)$ is sublinear and
$g^{s}\left(d\,g^{s}\left( x\right) \right) < g^{s}\left( x\right) $ or
$g^{s}\left(d\,g^{s}\left( x\right) \right) = g^{s}\left( x\right) $. In both of
these cases the degree of sublinearity is necessary for meaningful analysis.
However, we note that the communication time is expected to be linear
with respect to the input size. We therefore use the value proven
optimal for a linear $g^{c}\left( x\right)$ within the same row to
set the heuristic.

Table~\ref{tab:heuristicsummary} displays our final heuristics, the state of how
optimal the heuristics are known to be, and where some common distributed
problems fall in the analysis.
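The table collapses to a rule that depends only on the growth row, since all
three $g^{c}\left( x\right)$ columns in a row share the same heuristic value of
$d$. A sketch of the selection logic (the function and argument names are our
own, not part of any system API):

```python
def heuristic_fanout(n, growth):
    """Fanout suggested by the heuristic-summary table.

    growth encodes how g^s(d * g^s(x)) compares with g^s(x):
    '<' (shrinking), '=' (constant), or '>' (growing).
    """
    if growth in ('<', '='):
        return 2   # shrinking or constant outputs: a deep binary tree
    if growth == '>':
        return n   # growing outputs: a single level of aggregation
    raise ValueError("growth must be '<', '=', or '>'")
```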



