\section{Experiments}\label{sec:experiments}

\paragraph{Setup.}
We ran our experiments on m1.small instances in an Amazon EC2 datacenter.
Each vertex of the overlay, leaf or aggregator, runs on a distinct node. We
show fanouts that produce full trees to avoid the noise of straying from the
model, though our system handles all fanouts. Each data point is the average
of 5 runs.

We first test the system with microbenchmarks that vary $y_{0}$ using
linear aggregation functions. Each leaf node
generates a random list of integers. Aggregators generate a series of random
numbers proportional to the size of their inputs, then prune the list down to
the size dictated by $y_{0}$. All microbenchmarks
run on 16 leaf nodes.
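As a minimal sketch, the linear aggregation step of the microbenchmarks can
be written as follows. The exact output-size scaling (here assumed to be
$y_{0}$ times a leaf's input size) and the helper names are our assumptions,
not the paper's implementation:

```python
import random

def leaf_output(size):
    # Each leaf generates a random list of integers.
    return [random.randint(0, 1 << 30) for _ in range(size)]

def linear_aggregate(inputs, y0, leaf_size):
    # Generate a series of random numbers proportional to the total
    # size of the inputs...
    produced = [random.randint(0, 1 << 30)
                for _ in range(sum(len(x) for x in inputs))]
    # ...then prune the list down to the size dictated by y0
    # (assumed here to mean y0 * leaf_size output elements).
    return produced[:max(1, int(y0 * leaf_size))]
```

Because the work done is proportional to the input size, this aggregation
function is linear, matching the model's assumption.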

We then evaluate the system on two common aggregation tasks
using a dataset consisting of log files from Yahoo's Hadoop clusters. Each leaf node
holds $830$\,MB of input data, in the range of what is
studied for RDDs~\cite{rdds}. We run both tasks on 16 and 64 leaves:
\begin{itemize}
  \tightitem Top-$k$ match -- a simple equation
  ``scores'' how well each line of the log matches a filter. Each leaf returns
  its $k$ highest-scoring lines in sorted order. Aggregator nodes
  forward the $k$ highest scores across their inputs.
  The final aggregate result is the $k$ highest-scoring matches from all log files.
  We use $k=100000$.
  \tightitem Word count -- counts the occurrences of each word in the log files at
  the leaves and sums the results. The final aggregate result maps
  each word in the logs to the number of times it appears across all logs. Log
  entries are not very disparate: a word appearing in the log at one
  leaf has a high probability of appearing in the logs at every other leaf, so
  if $y_{0}$ exceeds 1, it does so only negligibly.
\end{itemize}
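The two aggregation functions above can be sketched as follows. This is a
minimal illustration; the function names and the use of \texttt{heapq} and
\texttt{Counter} are our choices, not the paper's implementation:

```python
import heapq
from collections import Counter
from itertools import islice

def topk_aggregate(sorted_inputs, k):
    # Each input arrives sorted in descending score order, so a d-way
    # heap merge visits elements in globally sorted order; keeping the
    # first k yields the k highest scores across all inputs.
    return list(islice(heapq.merge(*sorted_inputs, reverse=True), k))

def wordcount_aggregate(counts):
    # Sum per-leaf word-occurrence maps into one combined map.
    return sum(counts, Counter())
```

For example, `topk_aggregate([[9, 5, 1], [8, 7, 2]], 3)` returns
`[9, 8, 7]`. Note the heap merge over $d$ inputs costs a factor of
$\log d$ per element, which matters in the results below.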

For all tests the leaves complete their computation and inform
the controller. The controller then starts a timer, sends a ``go'' signal to
begin aggregation, and stops the timer when it receives the final aggregate
result from the root of the overlay. The ``Predicted Values'' lines are
anchored at each experiment's measured $d=2$ time and extended using the
model with the associated value of $y_{0}$.
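Concretely, the predicted curves can be reproduced by anchoring the model at
the measured $d=2$ time. The sketch below mirrors the expressions used to
draw the ``Predicted Values'' curves in the plots; treat the closed forms as
our reconstruction from those expressions:

```python
from math import log

def predicted_time(d, y0, n, t_at_2):
    # Relative cost of fanout d under the model; the two branches mirror
    # the plotted "Predicted Values" expressions.
    def f(d):
        if y0 == 1:
            return d * log(n) / log(d)            # d * log_d(n)
        return d / (y0 ** (log(d) / log(n)) - 1)  # d / (y0^(log_n d) - 1)
    # Anchor the curve at the measured fanout-2 time.
    return t_at_2 * f(d) / f(2)
```

For example, with $y_{0}=1/n$, $n=16$, and the measured $12.09$\,s at
$d=2$, the model predicts about $51.6$\,s at $d=16$.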


\paragraph{Results.}
\begin{figure}[!th]
  \centering
  \subfigure[Series as size ratio] {
    \label{fig:AOResults1}
    \resizebox{0.22\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\LARGE Aggregation tree fanout},
	    ylabel={\LARGE Aggregation time (s)}]
        \addplot+[smooth] coordinates {
	      (2,   12.0864)
	      (4,   12.8416)
	      (16,   42.2188)
        };
       \addplot+[smooth] coordinates {
	      (2,    23.0726)
	      (4,   20.9926)
	      (16,   41.4852)
       };
       \addplot+[smooth] coordinates {
	      (2,   38.3106)
	      (4,   31.9206)
	      (16,   42.2638)
       };
       \addplot+[smooth] coordinates {
	      (2,   76.167)
	      (4,   51.5392)
	      (16,   43.502)
      };
      \legend{$\mathlarger{\mathlarger{y_{0}=1/n}}$,
      $\mathlarger{\mathlarger{y_{0}=1}}$,
      $\mathlarger{\mathlarger{y_{0}=\sqrt{n}}}$,
      $\mathlarger{\mathlarger{y_{0}=n}}$}
      \end{axis}
    \end{tikzpicture}
    }
    }
  \subfigure[Series as fanout] {
    \label{fig:AOResults2}
    \resizebox{0.22\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{semilogyaxis}[
	    xlabel={\LARGE Size ratio $y_{0}$},
	    ylabel={\LARGE Aggregation time (s, log scale)},
	      legend pos= south east]
        \addplot+[smooth] coordinates {
	      (1/16,   12.0864)
	      (1,   23.0726)
	      (4,   38.3106)
	      (16,   76.167)
        };
        \addplot+[smooth] coordinates {
	      (1/16,   12.8416)
	      (1,   20.9926)
	      (4,   31.9206)
	      (16,   51.5392)
        };
        \addplot+[smooth] coordinates {
	      (1/16,   42.2188)
	      (1,   41.4852)
	      (4,   42.2638)
	      (16,   43.502)
        };
        \legend{{\Large fanout 2},{\Large fanout 4},{\Large fanout 16}}
      \end{semilogyaxis}
    \end{tikzpicture}
    }
    }
    \caption{Microbenchmark results}
  \label{fig:micros}
\end{figure}
\begin{figure}[!th]
  \centering
  \footnotesize
  \subfigure[Practice vs. model, $y_{0}=\frac{1}{n}$] {
    \label{fig:micro1on}
    \resizebox{0.22\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\LARGE Aggregation tree fanout},
	    ylabel={\LARGE Aggregation time (s)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   12.0864)
	      (4,   12.8416)
	      (16,   42.2188)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {12.0864*((x/((1/16)^(ln(x)/ln(16))-1))/(2/((1/16)^(ln(2)/ln(16))-1)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
  \subfigure[Practice vs. model, $y_{0}=1$]{
    \label{fig:micro1}
    \resizebox{0.22\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\LARGE Aggregation tree fanout},
	    ylabel={\LARGE Aggregation time (s)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   23.0726)
	      (4,   20.9926)
	      (16,   41.4852)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {23.0726*((x*ln(16)/ln(x))/(2*ln(16)/ln(2)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }\\
  \subfigure[Practice vs. model, $y_{0}=\sqrt{n}$] {
    \label{fig:microsqn}
    \resizebox{0.22\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\LARGE Aggregation tree fanout},
	    ylabel={\LARGE Aggregation time (s)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   38.3106)
	      (4,   31.9206)
	      (16,   42.2638)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {38.3106*((x/((4)^(ln(x)/ln(16))-1))/(2/((4)^(ln(2)/ln(16))-1)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
  \subfigure[Practice vs. model, $y_{0}=n$] {
    \label{fig:micron}
    \resizebox{0.22\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\LARGE Aggregation tree fanout},
	    ylabel={\LARGE Aggregation time (s)}]
        \addplot coordinates {
	      (2,   76.167)
	      (4,   51.5392)
	      (16,   43.502)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {76.167*((x/((16)^(ln(x)/ln(16))-1))/(2/((16)^(ln(2)/ln(16))-1)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
    \caption{Microbenchmarks versus model}
  \label{fig:microspredicted}
\end{figure}
Figure~\ref{fig:micros} shows the results of the microbenchmarks. In
Figure~\ref{fig:AOResults1} each line groups the data by $y_{0}$;
Figure~\ref{fig:AOResults2} draws the same data, with each line
representing an aggregation tree of a given fanout. The fanout
significantly and predictably affects the aggregation time: when $y_{0}$ is
small, the smaller fanouts outperform; as $y_{0}$ grows, the performance of
those overlays degrades until a larger fanout is faster.

Figure~\ref{fig:microspredicted} shows the performance of each $y_{0}$ for
varying fanouts against that predicted by the model. In all four cases
the trends match. The only minimum not in the predicted place is for
$y_{0}=1$: fanout 4 slightly outperforms fanout 2, where the model predicts
identical performance.

For $y_{0}=\frac{1}{n}$, the time taken for $d=16$ is 350\% of that for
$d=2$. For $y_{0}=n$, the time for $d=2$ is 175\% of that for
$d=16$. That is a significant penalty for choosing the wrong fanout, and
the right fanouts are opposite in these two examples.
Even choosing between 2 and $n$ is not a good heuristic for all cases: for
$y_{0}=\sqrt{n}$, the faster of the two still takes 120\% of the time of $d=4$.
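These ratios follow directly from the plotted microbenchmark times; a quick
check with the values read from the plots:

```python
t = {  # aggregation time (s) by (y0 label, fanout), read from the plots
    ("1/n", 2): 12.0864, ("1/n", 16): 42.2188,
    ("n", 2): 76.167, ("n", 16): 43.502,
    ("sqrt(n)", 2): 38.3106, ("sqrt(n)", 4): 31.9206, ("sqrt(n)", 16): 42.2638,
}
print(round(t[("1/n", 16)] / t[("1/n", 2)], 2))   # d=16 vs d=2, ~3.49
print(round(t[("n", 2)] / t[("n", 16)], 2))       # d=2 vs d=16, ~1.75
best_of_2_and_n = min(t[("sqrt(n)", 2)], t[("sqrt(n)", 16)])
print(round(best_of_2_and_n / t[("sqrt(n)", 4)], 2))  # vs d=4, ~1.20
```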

\begin{figure}[!th]
\centering
  \subfigure[Top-$k$ match, $n=16$] {\label{fig:topk16}
    \resizebox{0.22\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\LARGE Aggregation tree fanout},
	    ylabel={\LARGE Aggregation time (s)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   3.280)
	      (4,   2.850)
	      (16,   5.193)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {3.280*((x*ln(64)/ln(x))/(2*ln(64)/ln(2)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
  \subfigure[Top-$k$ match, $n=64$] {\label{fig:topk64}
    \resizebox{0.22\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\LARGE Aggregation tree fanout},
	    ylabel={\LARGE Aggregation time (s)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   4.804)
	      (4,   3.986)
	      (8,   4.650)
	      (64,   21.589)
        };
       \addplot [domain=2:64, samples=100, loosely dashed, very thick, red]
       {4.804*((x*ln(64)/ln(x))/(2*ln(64)/ln(2)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }\\
    \subfigure[Word count, $n=16$] {\label{fig:wordcount16}
    \resizebox{0.22\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\LARGE Aggregation tree fanout},
	    ylabel={\LARGE Aggregation time (s)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   3.543)
	      (4,   3.198)
	      (16,   6.868)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {3.543*((x*ln(64)/ln(x))/(2*ln(64)/ln(2)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
    \subfigure[Word count, $n=64$] {\label{fig:wordcount64}
    \resizebox{0.22\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\LARGE Aggregation tree fanout},
	    ylabel={\LARGE Aggregation time (s)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   2.800)
	      (4,   2.145)
	      (8,   3.228)
	      (64,   13.492)
        };
       \addplot [domain=2:64, samples=100, loosely dashed, very thick, red]
       {2.800*((x*ln(64)/ln(x))/(2*ln(64)/ln(2)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
    \caption{Common problem results}\label{fig:realexp}
\end{figure}

Figure~\ref{fig:realexp} shows the results from our common aggregation tasks
on real-world data. For the 16-leaf top-$k$ matching experiment in
Figure~\ref{fig:topk16} the values diverge from the predicted
values as $d$ grows, but the trend matches the model and the microbenchmarks.
The deviation may arise because the aggregation function depends on $d$
as well as the input size ($O\left( d\left| x_{0}\right|\log{d}\right)$), not just
the input size.
At the larger scale of 64 leaves, seen in Figure~\ref{fig:topk64},
the divergence is overshadowed by the overall trend.

Figures~\ref{fig:wordcount16} and~\ref{fig:wordcount64} show the results from
the word count test for 16 and 64 leaves respectively. They show the same
trends as the microbenchmarks and top-$k$ matching experiments and
deviate even less from the predicted values. This is likely because the sets of words at
each node were the same, so summing occurrences is an aggregation function truly
linear in the size of the input.

Predictably, the relative impact of choosing the right fanout increases as more
leaves are added. For the top-$k$ matching and word count
applications with 16 leaves, the worst-case fanouts take 182\% and 215\% of the
time of their best-case fanouts, respectively. When $n$ increases to 64, those
figures jump to 542\% and 629\% respectively.
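The same worst-to-best ratios can be read off the plotted application times:

```python
best_worst = {  # (task, n): (best-case time, worst-case time) in seconds
    ("top-k", 16): (2.850, 5.193),
    ("word count", 16): (3.198, 6.868),
    ("top-k", 64): (3.986, 21.589),
    ("word count", 64): (2.145, 13.492),
}
for key, (best, worst) in best_worst.items():
    # Ratios: 1.82, 2.15, 5.42, 6.29 respectively.
    print(key, round(worst / best, 2))
```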
