\section{Efficient Fanout Heuristics}\label{sec:heuristics}

Here we provide the basic intuition for why the overlay structure
 affects the latency of the system. We prove the optimal fanout
 for several cases using only a few knowable features of aggregations,
 and then summarize the results of the proofs. All of our
 notation is shown in Table~\ref{tab:notation}.

\begin{table}
  \caption{Notation used in the intuition and proofs.}
  \label{tab:notation}
  \begin{tabular}{p{.10\linewidth}|p{.82\linewidth}}
  Token & Meaning \\ \hline
  $n$ & Number of computation/leaf nodes. \\
  $d$ & Fanout of the overlay, making the height $\log_{d} n$.\\
  $g\left(\overline{x}\right)$ & The aggregation function for a set of inputs
  $\overline{x}$.\\
  $g^{c}\left(\overline{x}\right)$ & This function returns the amount of time
  taken for $g\left(\overline{x}\right)$ (including communication).\\
  $c_{c}$ & Cost of aggregation per unit of data when $g^{c}\left(
 \overline{x}\right)$ is linear and $g^{c}\left( \emptyset\right)=0$.\\
  $x_{0}$ & Output from a computation node.\\
  $y$ & Ratio of output sizes of consecutive levels.\\
  $y_{0}$ & Ratio of the final aggregate output size to $\left| x_{0}\right|$.
  \end{tabular}
\end{table}

\subsection{Intuition}

Our objective is to minimize the total cost of the system. Since the aggregation
phase is separate from the local computation phase, we can optimize them
separately. The aggregation overlay simply aggregates the outputs from all the
computation nodes, so the number of leaf nodes for the tree
defining the aggregation overlay is given by the number of local
nodes involved in the computation phase. The functions that aggregate multiple
inputs into a single output are also given by the problem, so the only variable
left to change is the fanout, $d$.

The aggregation time at a single level, composed of the time to receive input
from the level just beneath it and the time to create the output for the level,
depends on the size of the input $\overline{x}$. We use the function $g^{c}\left(\overline{x}\right)$,
which accounts for both the communication time and the
computation time of aggregation, to denote the time cost of a single level of
aggregation. Aggregation on the same level
of the overlay happens in parallel, so only the time of a single branch must
be considered.

We distinguish between sublinear, linear, and superlinear $g^{c}\left(\overline{x}\right)$
for the sake of the proofs. It is important to note that the order of
the function is fixed for instances of a problem, not just for
the problem in general. For instance, square matrix multiplication takes time
superlinear in the number of cells of the matrix, but for equivalent
trees being compared the size of the matrices is fixed. An increase in
$d$ of 1 results in one additional matrix, which results in a linear increase in
time.

Fanout and tree height are inversely related. Increased height
increases the amount of work done in parallel, but it may also increase
the total amount of work to be done when results have to filter up through more
levels. The values of $y_{0}$ and $g^{c}\left( \overline{x}\right)$ determine whether the time
saved by the parallelism offsets the time required by the extra
levels.

The ratio of
the size of the output of a node to the size of the input, which is the output
of one child, is $y$. The ratio for the entire tree, i.e. the size of the final
aggregate to the output from a single leaf, is $y_{0}$. $y_{0}$ is the result of
repeatedly applying $y$, so $y^{\log_{d} n} = y_{0}$. $y_{0}$ is a
knowable feature of many aggregation methods.
Table~\ref{tab:y0examples} shows some problems and their
ranges of $y_{0}$ values.

\begin{table}
  \centering
  \caption{Some aggregation functions grouped by their $y_{0}$.}
  \label{tab:y0examples}
  \begin{tabular}{p{.13\columnwidth}|p{.75\columnwidth}}
    {\large\textbf{$\mathbf{y_{0}}$ }} & {\large \textbf{Common Problems}} \\
    \hline
    $y_{0} < 1$ & The average MapReduce job at Google~\cite{MapReduce}, the
    average ``Aggregate'' job at Facebook and Yahoo!~\cite{YahooFBStat}
    \\
    $y_{0} = 1$ & Min, Max, Average, Top-$k$ match, Word count with a fixed
    dictionary, Multiplying square matrices\\
    $y_{0} > 1$ & Sort, Concatenate, Word count with
    mismatched dictionaries
  \end{tabular}
\end{table}
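The relation $y^{\log_{d} n} = y_{0}$ can be inverted to recover the per-level ratio from the end-to-end ratio. The following Python sketch is our own illustration (the function name and parameters are assumptions, not part of any system described here):

```python
import math

def per_level_ratio(y0, n, d):
    """Per-level output ratio y satisfying y^(log_d n) = y0."""
    height = math.log(n, d)          # number of aggregation levels
    return y0 ** (1.0 / height)

# With n = 64 leaves and fanout d = 4, the tree has log_4 64 = 3 levels,
# so an overall ratio y0 = 1/8 corresponds to y = 1/2 per level.
y = per_level_ratio(0.125, 64, 4)
```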

\subsection{Proofs}\label{sec:proofs}

Here we prove the (near-)optimality of fanouts for a significant portion of
cases. For all proofs, we assume communication time is linear in the size of the
total input, which is the case when aggregators do not share nodes. We assume
that $y_{0}$ is a feature of the aggregation and $n$ is chosen by the
system, so all proofs find the optimal value for $d$ as the remaining
configurable variable.

\begin{lem}\label{lem:growthfactor}
The total aggregation time of the system with linear $g^{c}\left(\overline{x} \right)$ and
$y_{0} \neq 1$ is $f\left(d,n,y_{0}\right) = \frac{c_{c}\left|
x_{0}\right|d\left(y_{0}- 1\right)}{\sqrt[\log_{d}n]{y_{0}}-1}$, which has a first derivative with respect
to $d$ of\\
$c_{c}\left| x_{0}\right|\left(y_{0}-1\right)\frac{y_{0}^{\log_{n}d}
- \left(1+\frac{\log y_{0}}{\log
n}y_{0}^{\log_n d}\right)}{\left(y_{0}^{\log_{n}d}-1\right)^{2}}$.
\end{lem}

\begin{proof}
Assuming that aggregation takes linear amounts of time relative to the total
input of the level, the cost of a single level can be represented as some
constant $c_{c} \times$ the input size at that level.
The input size at the first level of aggregation is $\left| x_{0}\right|$, and
the input size for every level thereafter is changed by a factor of $y$ for
each of the $\log_{d} n$ levels.
Thus the time taken by the entire system can be represented as
$\sum_{z=1}^{\log_d n} c_{c}d y^{z-1}\left| x_{0}\right|$.

Pulling the variables which are constant at each level outside the
summation gives $c_{c}d \left| x_{0}\right|
\sum_{z=1}^{\log_d n} y^{z-1}$, which simplifies to $\frac{c_{c}\left|
x_{0}\right| d\left(y^{\log_{d}n} - 1\right)}{y-1}$. Recalling that $y =
\sqrt[\log_{d}n]{y_{0}}$ and substituting it gives us
$\frac{c_{c}\left| x_{0}\right| d\left(y_{0}-
1\right)}{\sqrt[\log_{d}n]{y_{0}}-1}$.
\end{proof}
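The closed form can be checked numerically against the level-by-level sum it replaces. The following Python sketch is our own illustration, assuming linear $g^{c}$ with constant per-unit cost $c_{c}$:

```python
import math

def agg_time_sum(cc, x0, n, d, y0):
    """Level-by-level cost: sum over the log_d n levels of cc*d*|x0|*y^(z-1)."""
    levels = round(math.log(n, d))
    y = y0 ** (1.0 / levels)             # per-level ratio, y^levels = y0
    return sum(cc * d * x0 * y ** (z - 1) for z in range(1, levels + 1))

def agg_time_closed(cc, x0, n, d, y0):
    """Closed form from the lemma: cc*|x0|*d*(y0-1)/(y0^(log_n d)-1)."""
    return cc * x0 * d * (y0 - 1) / (y0 ** math.log(d, n) - 1)

# n = 64 leaves with fanout 4 (3 levels) and outputs growing 8x overall:
# both forms give 4*(1 + 2 + 4) = 28.
```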

\begin{lem}\label{lem:nogrowthfactor}
The total aggregation time of the system with
$y_{0} = 1$ is $g^{c}\left( d\left| x_{0}\right|\right)\log_{d} n$.
\end{lem}

\begin{proof}
Reusing the logic from Lemma~\ref{lem:growthfactor} with the omission of the
non-applicable growth factor gives us a time for the system of
$\sum_{z=1}^{\log_d n} g^{c}\left( d\left| x_{0}\right|\right)$. Since $z$ does
not appear in the equation inside the summation, this is equivalent to
$g^{c}\left( d\left| x_{0}\right|\right)\sum_{z=1}^{\log_d n} 1$, which
simplifies to $g^{c}\left( d\left| x_{0}\right|\right) \log_{d} n$.
\end{proof}


\begin{thm}\label{thm:ylt1}
For $y_{0}<1$ and linear or superlinear $g^{c}\left(\overline{x} \right)$, the fanout to
optimize the time of the system is $2$.
\end{thm}

\begin{proof}
By Lemma~\ref{lem:growthfactor}, the time taken by the system is $f(d,n,y_{0}) =
\frac{c_{c}d\left| x_{0}\right|\left(y_{0}-
1\right)}{\sqrt[\log_{d}n]{y_{0}}-1}$ and the derivative with respect to $d$ is
$c_{c}\left| x_{0}\right|\left(y_{0}-1\right)\frac{y_{0}^{\log_{n}d}
- \left(1+\frac{\log y_{0}}{\log
n}y_{0}^{\log_n d}\right)}{\left(y_{0}^{\log_{n}d}-1\right)^{2}}$.
Because $2\leq d\leq n$, $0 < \log_{n} d\leq 1$, and $y_{0}<1$, the derivative
with respect to $d$ is always positive for $0 < y_{0} < 1$.
\\$\therefore$ The minimum cost is for minimal $d$, which is 2.

  We assumed 
  $g^{c}\left(\overline{x}_{0}\right) + \ldots + g^{c}\left(\overline{x}_{z}\right)$ $=$
  $g^{c}\left(\overline{x}_{0} + \ldots + \overline{x}_{z}\right)$, which holds for
  linear $g^{c}\left(\overline{x}\right)$. With superlinear $g^{c}\left(
  \overline{x}\right)$, a larger $d$ increases the input size at each level, and
  the fewer levels give the input less opportunity to shrink, so the computation
  cost only increases.\\
  $\therefore$ This result holds for superlinear $g^{c}\left( \overline{x}\right)$.
\end{proof}
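A numerical instance is consistent with the theorem. The following Python sketch (our own illustration; the parameter values are chosen for exposition) brute-forces the closed form from Lemma~\ref{lem:growthfactor} over all integer fanouts:

```python
import math

def agg_time(d, n, y0, cc=1.0, x0=1.0):
    """Closed form from Lemma growthfactor (linear g^c assumed)."""
    return cc * x0 * d * (y0 - 1) / (y0 ** math.log(d, n) - 1)

# Shrinking aggregation: n = 1024 leaves, outputs shrink 1000x overall.
n, y0 = 1024, 0.001
best = min(range(2, n + 1), key=lambda d: agg_time(d, n, y0))
# the minimal fanout, d = 2, gives the lowest total time
```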

\begin{thm}\label{thm:ye1}
The optimal fanout is $e$ when
$y_{0}=1$ and
$g^{c}\left(\overline{x} \right) $ is linear.
\end{thm}

\begin{proof}\label{proof:linearnogrowth}
 Using the equation $g^{c}\left( d\left| x_{0}\right|\right) \log_{d} n$
 from Lemma~\ref{lem:nogrowthfactor} and a linear assumption on $g^{c}\left(
 \overline{x}\right)$ gives us
 $d \left| x_{0}\right| c_{c} \log_{d}n$.
 The first derivative with respect to $d$ is $\frac{\left| x_{0}\right|
 c_{c}\left(\log d - 1\right) \log n}{\log^{2} d}$, which has a single $0$
 at $d=e$. The second derivative with respect to $d$ is $-\frac{\left| x_{0}\right|
 c_{c}\left(\log d - 2\right) \log n}{d \log^{3} d}$. At $d=e$ the value
 of the second derivative is positive, so the extreme point is a minimum.
 \\$\therefore$ The fanout to optimize the time of this system is $e$.
\end{proof}
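Since fanouts are whole numbers in practice, the useful consequence is that $d/\log d$ is minimized near $e \approx 2.718$. A small Python sketch (our own illustration) confirms that among integers the minimum of $c_{c}d\left|x_{0}\right|\log_{d}n$ falls at $d = 3$:

```python
import math

def agg_time_y1(d, n, cc=1.0, x0=1.0):
    """y0 = 1 with linear g^c: total time cc*d*|x0|*log_d n."""
    return cc * d * x0 * math.log(n, d)

n = 4096
best = min(range(2, n + 1), key=lambda d: agg_time_y1(d, n))
# d/log d is minimized at d = e; among whole numbers, d = 3 wins
```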

\begin{thm}\label{thm:ye1gc}
The optimal fanout is $[2, e)$ when
$y_{0}=1$ and
$g^{c}\left(\overline{x} \right) $ is superlinear.
\end{thm}

\begin{proof}\label{proof:linearnogrowthgc}
Lemma~\ref{lem:nogrowthfactor} gives us the time of the system as $g^{c}\left(
d\left| x_{0}\right|\right) \log_{d} n$. As shown in Theorem~\ref{thm:ye1},
in the limit as $g^{c}\left(\overline{x}\right)$ approaches linear the optimal
fanout is $e$. Because
$g^{c}\left(\overline{x}\right)$ is superlinear, the
derivative with respect to $d$ is greater than in the linear case, i.e. greater
than $\frac{\left| x_{0}\right|
 c_{c}\left(\log d - 1\right) \log n}{\log^{2} d}$. Since that expression is already
 greater than or equal to 0 for $d \geq e$, the derivative is always
 positive in that range. Thus any minimum occurs at $d < e$, and $d \geq 2$ by
 definition.\\
 $\therefore$ The optimal value of $d$ is in the range $[2, e)$.
\end{proof}
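A concrete superlinear instance illustrates the constrained range. Here we assume, purely for illustration, $g^{c}(s) = s^{p}$ with $p = 1.5$; for this family the unconstrained minimizer of $(d\left|x_{0}\right|)^{p}\log_{d}n$ is $e^{1/p} < e$, so the integer optimum falls at the low end of $[2, e)$:

```python
import math

def total_time(d, n, x0=1.0, p=1.5):
    """y0 = 1 with an assumed superlinear g^c(s) = s^p: g^c(d*|x0|)*log_d n."""
    return (d * x0) ** p * math.log(n, d)

n = 4096
best = min(range(2, n + 1), key=lambda d: total_time(d, n))
# the minimum lands at d = 2, inside the predicted range [2, e)
```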

\begin{thm}\label{thm:y1ton}
The optimal fanout when $1 < y_{0} < n$ and $g^{c}\left( \overline{x}\right)$ is
linear is $\min\left( n, \left( 1 -
\frac{\log y_{0}}{\log n}\right)^{-\frac{\log n}{\log y_{0}}}\right)$.
\end{thm}

\begin{proof}
From Lemma~\ref{lem:growthfactor}, the amount of time taken to aggregate is
$f(d,n,y_{0}) = \frac{c_{c}d\left| x_{0}\right|\left(y_{0}-
1\right)}{\sqrt[\log_{d}n]{y_{0}}-1}$, and the derivative with respect to $d$ is
$c_{c}\left| x_{0}\right|\left(y_{0}-1\right)\frac{y_{0}^{\log_{n}d}
- \left(1+\frac{\log y_{0}}{\log
n}y_{0}^{\log_n d}\right)}{\left(y_{0}^{\log_{n}d}-1\right)^{2}}$.

This derivative can be rewritten as \\$\frac{c_{c}\left|
x_{0}\right|\left(y_{0}-1\right)}{\left(y_{0}^{\log_{n}d}-1\right)^{2}} \times
\left(y_{0}^{\log_{n}d} - \left(1+\frac{\log y_{0}}{\log n}y_{0}^{\log_n
d}\right)\right)$.
For\\$y_{0} > 1$, $\frac{c_{c}\left|
x_{0}\right|\left(y_{0}-1\right)}{\left(y_{0}^{\log_{n}d}-1\right)^{2}} > 0$, so
the entire expression is 0 iff $y_{0}^{\log_{n}d} = 1+\frac{\log y_{0}}{\log
n}y_{0}^{\log_n d}$. Solved for $d$ this is $e^{-\frac{\log n
\log\left( 1 - \log_{n} y_{0}\right)}{\log y_{0}}}$, or equivalently $\left( 1 -
\frac{\log y_{0}}{\log n}\right)^{-\frac{\log n}{\log y_{0}}}$.

The second derivative with respect to $d$
is\\$\frac{c_{c}\left| x_{0}\right|\left( y_{0}-1\right)y_{0}^{\log_{n} d}\log
y_{0}\left(\log n + \log y_{0} - \left(\log n - \log y_{0}\right)
y_{0}^{\log_{n}d}\right)}{d \log^{2}n\left(y_{0}^{\log_{n}d}-1\right)^{3}}$.
Because $\frac{c_{c}\left| x_{0}\right|\left( y_{0}-1\right) y_{0}^{\log_{n}
d}\log y_{0}}{d \log^{2}n\left(y_{0}^{\log_{n}d}-1\right)^{3}}$ is always
positive, this equation has the same sign as\\$\left(\log n + \log y_{0} -
\left(\log n - \log y_{0}\right) y_{0}^{\log_{n}d}\right)$. In order for the
extremum to be a minimum this portion must be greater than 0. If we plug in the
extremum value for $d$ and simplify we get $\frac{y_{0}^{-\log_{y_{0}}\left(
1-\log_{n}y_{0}\right)} - 1}{y_{0}^{-\log_{y_{0}}\left(
1-\log_{n}y_{0}\right)} + 1} - \log_{n}y_{0} < 0$. To prove this inequality we
need to find the maximum value of the left side and compare it to 0. To do this
we assume that $n$ is fixed and find the value of $y_{0}$ which maximizes the
expression. The derivative with respect to $y_{0}$ is $\frac{2\log^{2}n +
\log^{2}y_{0} - 4\log n \log y_{0}}{y_{0}\log n \left( \log y_{0} -2\log
n\right)^{2}}$, which is always positive for $1 < y_{0} < n$. Thus the maximum
value for the left side of the inequality occurs at $\lim_{y_{0}\to n}$, and the
value at that point is negative, so the inequality is satisfied. The second
derivative of the original equation is therefore positive, and the extremum
is a minimum.

If $\left( 1 -
\frac{\log y_{0}}{\log n}\right)^{-\frac{\log n}{\log y_{0}}} > n$ the computed
fanout is greater than the number of leaves, which doesn't make sense. In this
case we note that there is only a single 0 in the first derivative, and that point
is a minimum. That means that for $d \in \left[2, \left( 1 -
\frac{\log y_{0}}{\log n}\right)^{-\frac{\log n}{\log y_{0}}}\right]$ the
function monotonically decreases, so the optimal fanout is the largest value
that makes sense, i.e. $n$.

$\therefore$ The minimal value of $f(d,n,y_{0}) = \frac{c_{c}d\left|
x_{0}\right|\left(y_{0}- 1\right)}{\sqrt[\log_{d}n]{y_{0}}-1}$ occurs at $d = \min\left( n,
\left( 1 - \frac{\log y_{0}}{\log n}\right)^{-\frac{\log n}{\log y_{0}}}
\right)$.
\end{proof}
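The closed-form optimum can be cross-checked against a brute-force search over integer fanouts. The following Python sketch is our own illustration (parameter values chosen for exposition), using the closed form from Lemma~\ref{lem:growthfactor}:

```python
import math

def agg_time(d, n, y0, cc=1.0, x0=1.0):
    """Closed form from Lemma growthfactor (linear g^c assumed)."""
    return cc * x0 * d * (y0 - 1) / (y0 ** math.log(d, n) - 1)

def optimal_fanout(n, y0):
    """Theorem's optimum for 1 < y0 < n, capped at n."""
    d = (1 - math.log(y0) / math.log(n)) ** (-math.log(n) / math.log(y0))
    return min(n, d)

n, y0 = 4096, 8.0
d_star = optimal_fanout(n, y0)                       # ~3.16
best = min(range(2, n + 1), key=lambda d: agg_time(d, n, y0))
# the brute-force integer optimum agrees with rounding d_star
```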

\begin{thm}\label{thm:ygtn}
The optimal fanout when $y_{0} \geq n$ is $n$ for all
$g^{c}\left( \overline{x}\right)$.
\end{thm}

\begin{proof}
The amount of time taken at the
root node is $g^{c}\left( y_{0}^{\log_{d} n}\left|x_{0}\right|\right)$. Since
$y_{0} \geq n$ and $\log_{d} n \geq 1$, this is clearly minimal when $\log_{d} n = 1$,
i.e. $d = n$. In addition, the time for the rest of the tree in this case is $0$, as there is no
rest of the tree; with any other fanout, this additional time is non-zero.
\\$\therefore$ The minimum cost is for $d = n$.
%
% By Lemma~\ref{lem:growthfactor} the amount of time taken by
% the system is $f(d,n,y) = \frac{c_{c}d\left| x_{0}\right|\left(y_{0}-
% 1\right)}{\sqrt[\log_{d}n]{y_{0}}-1}$, which has a derivative with respect to
% $d$ of $\frac{\left(y_{0} - 1\right)\left(\log n \left( y_{0}^{\log_{n} d} - 1
% \right) - y_{0}^{\log_{n} d}\log y_{0}\right)}{\log n \left(y_{0}^{\log_{n} d} -
% 1\right)^{2}}$.
% For $2\leq d\leq n$, $0 < \log_{n} d\leq 1$, so for $y_{0} \geq n \geq 2$, the
% derivative is always negative.
% \\$\therefore$ The minimum cost is for maximal $d$, which is $n$.
%  
%    We assumed
%    $g^{c}\left(\overline{x}_{0}\right)+\ldots +g^{c}\left(\overline{x}_z\right)=g^{c}\left(\overline{x}_{0} +
%    \ldots + \overline{x}_{z}\right)$.
%    With sublinear $g^{c}\left( \overline{x}\right)$, applying
%    $g^{c}\left( \overline{x}\right)$ to many smaller inputs is higher
%    than the application to one input of the same combined size.
%    \\$\therefore$ This result holds for sublinear $g^{c}\left( \overline{x}\right)$.
\end{proof}
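A brute-force check on a small instance agrees with the theorem. In this Python sketch (our own illustration, again assuming linear $g^{c}$) note that for $y_{0} = n$ the closed form reduces to $d(n-1)/(d-1)$, which is decreasing in $d$:

```python
import math

def agg_time(d, n, y0, cc=1.0, x0=1.0):
    """Closed form from Lemma growthfactor (linear g^c assumed)."""
    return cc * x0 * d * (y0 - 1) / (y0 ** math.log(d, n) - 1)

n, y0 = 16, 16.0          # y0 >= n: the aggregate grows at least n-fold
best = min(range(2, n + 1), key=lambda d: agg_time(d, n, y0))
# a flat, single-level tree (d = n) minimizes the total time
```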

\subsection{Optimality Results Summary}

\begin{table*}
  \centering
  \caption{The heuristic values for $d$ and the status of their proofs of
  optimality.}
  \label{tab:heuristicsummary}
  \begin{tabular}{l|c|c|c|c}
  {\textbf{$\mathbf{y_{0}}$ }} & 
  {\textbf{Optimal Fanout}} &
  {\textbf{Sublinear $\mathbf{g^{c}\left(\right)}$}} & 
  {\textbf{Linear $\mathbf{g^{c}\left(\right)}$}} &
  {\textbf{Superlinear $\mathbf{g^{c}\left(\right)}$ }}\\ \hline
  %$y \downarrow $ & & & \\ \hline
  $y_{0} < 1 $	& 2 & \textit{unproven} & {Theorem~\ref{thm:ylt1}} &
  {Theorem~\ref{thm:ylt1}} \\
  %\hline
  $y_{0} = 1 $ & $e$ & \textit{unproven} & {Theorem~\ref{thm:ye1}} &
  {Theorem~\ref{thm:ye1gc}} (near optimal) \\
  %\hline
  $1 < y_{0} < n $ & $\min\left( n, \left( 1 -
\frac{\log y_{0}}{\log n}\right)^{-\frac{\log n}{\log y_{0}}}
\right)$ & \textit{unproven}
  & {Theorem~\ref{thm:y1ton}} & \textit{unproven} \\
  %\hline
  $n \leq y_{0} $ & $n$ & {Theorem~\ref{thm:ygtn}} &
  {Theorem~\ref{thm:ygtn}} & {Theorem~\ref{thm:ygtn}}
  \end{tabular}
\end{table*}

Table~\ref{tab:heuristicsummary} summarizes the known optimal fanouts and which
theorems prove them, if any. For $y_{0} = 1$ and many cases of $1 < y_{0} < n$ the
optimal fanout is not a whole number. It is unclear what it means to have a
fractional fanout, so we suggest using the whole number closest to the
calculated optimum, as the models are monotonic on either side of the minimums.

We assume that $y$ is fixed for the entire tree and is a known attribute of the
aggregation function. Given the predictable trends, we are reasonably satisfied
that a good estimate of $y$ results in a good, if not optimal, fanout.

Optimal fanouts remain unproven in the cases where the degree
of sublinearity or superlinearity is necessary for meaningful analysis.
Communication time is linear with respect to the input size, so
there is always an aspect of linearity to $g^{c}\left(\overline{x} \right)$. Thus it
makes sense to use the heuristics from the linear cases on the sublinear cases.
In practice we also use the heuristics from the linear cases on the superlinear
cases, as they are the best available analyses.
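The heuristics in Table~\ref{tab:heuristicsummary}, together with the rounding advice above, can be collected into a single selection routine. The Python sketch below is our own illustration (the function name and the clamping choice are assumptions, not part of any system described in this paper), applying the linear-$g^{c}$ heuristics throughout:

```python
import math

def heuristic_fanout(n, y0):
    """Fanout suggested by the summary table, using the linear-g^c heuristics."""
    if y0 < 1:
        return 2                         # shrinking outputs: minimal fanout
    if y0 == 1:
        return 3                         # whole number closest to e
    if y0 >= n:
        return n                         # growth of at least n-fold: flat tree
    # 1 < y0 < n: round the closed-form optimum and clamp to [2, n]
    d = (1 - math.log(y0) / math.log(n)) ** (-math.log(n) / math.log(y0))
    return max(2, min(n, round(d)))
```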
