\section{Experiments}\label{sec:experiments}

\subsection{Experimental Setup}
%No reuse of machines in subsequent levels.

For our experiments we use nodes in Amazon's EC2 US East (Virginia) region. Each
leaf and aggregator runs on its own node. We consider only fanouts that result
in full trees, so as to validate the model without the added complexity of
straying from it. Using 16 leaf nodes allows fanouts of 2,
4, and 16. Each experiment is run 5 times, and the average is reported.

Because our analysis found $y_{0}$ to be the major variable affecting the
optimal fanout, we target it with microbenchmarks that simulate linear
aggregation methods with configurable output-to-input ratios. Leaf nodes
generate a random list of integers. Aggregators generate a series of random
numbers proportional to the size of their inputs, then prune the list down to
the size dictated by $y_{0}$. All microbenchmarks are performed with 16
leaf nodes.
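The microbenchmark's two roles can be sketched as follows. This is an
illustrative sketch, not our harness: the function names and the integer range
are our own, and we treat the ratio passed to the aggregator as its
per-node output-to-input ratio, collapsing the generate-then-prune step into
generating the final output size directly.

```python
import random

def leaf_compute(size):
    """Leaf: generate a random list of integers as the microbenchmark input."""
    return [random.randint(0, 1 << 30) for _ in range(size)]

def aggregate(inputs, ratio):
    """Aggregator: produce output whose size is the given output-to-input
    ratio times the total input size (generation and pruning collapsed)."""
    total = sum(len(x) for x in inputs)
    return [random.randint(0, 1 << 30) for _ in range(int(total * ratio))]
```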

We also evaluate the performance of the system on two common aggregation tasks
using a dataset consisting of log files from Yahoo's Hadoop clusters. Each leaf
node holds $830$MB of input data. On both 16 and 64 leaves, we perform:
\begin{itemize}
  \item Top-$k$ match -- A simple equation
  ``scores'' how well each line of the log matches a filter. The $k$ lines with
  the highest score are returned in sorted order from each leaf. Aggregator nodes
  forward the $k$ highest scores across their inputs. The
  final aggregate result is the $k$ highest scoring matches from all log files.
  We use $k=100000$.
  \item Word Count -- Counts the occurrences of each word in the log files at
  the leaves, then sums the results. The final aggregate result is a map from
  each word in the logs to the number of times it appears across all logs. Log
  entries are not very disparate, so any word that appears in the logs at one
  leaf has a high probability of appearing in the logs at every other leaf; if
  $y$ is greater than 1, it is only negligibly so.
\end{itemize}
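As a concrete sketch of the two aggregation functions (the names and data
representations are our own; children are assumed to emit descending-sorted
score lists and per-word count maps, respectively):

```python
import heapq
from collections import Counter

def topk_aggregate(children, k):
    """Forward the k highest scores across the children's inputs.
    Each child's list is assumed sorted in descending order."""
    merged = heapq.merge(*children, reverse=True)  # lazy d-way merge
    return [score for _, score in zip(range(k), merged)]

def wordcount_aggregate(children):
    """Sum per-word occurrence counts across the children's inputs."""
    total = Counter()
    for counts in children:
        total.update(counts)
    return total
```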

For all tests we wait for the leaves to complete their computation and inform
the controller. The controller then sends a ``go'' signal to begin the
aggregation and measures the time from issuing that signal to receiving
the final aggregate result from the root of the overlay. Most graphs include a
``Predicted Values'' line, created from the $d=2$ result of that
experiment and the model for the associated value of $y_{0}$.
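Concretely, the predicted curves scale the measured $d=2$ point by the model's
relative cost. A sketch of this normalization, matching the expressions used
to draw the dashed curves (with $n$ the number of leaves):
\[
  T_{\mathrm{pred}}(d) = T(2)\,\frac{f(d)}{f(2)},
  \qquad
  f(d) =
  \begin{cases}
    d\,\frac{\ln n}{\ln d}, & y_{0} = 1,\\[6pt]
    \frac{d}{\,y_{0}^{\log_{n} d} - 1\,}, & y_{0} \neq 1.
  \end{cases}
\]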
% We do not show the
% time for the compute phase, which does not impact the aggregation phase. In some
% tests aggregation dominated total time, while in others computation dominated. This
% depended on the number of leaves as well as the problem being explored. It would
% also depend on the size of the data at each leaf, but we did not vary that
% parameter.

\subsection{Results}

\begin{figure*}[!th]
  \centering
  \subfigure[Series as Size Ratio] {
    \label{fig:AOResults1}
    \resizebox{0.30\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\Large Aggregation Tree Fanout},
	    ylabel={\Large Aggregation Time (seconds)}]
        \addplot+[smooth] coordinates {
	      (2,   12.0864)
	      (4,   12.8416)
	      (16,   42.2188)
        };
       \addplot+[smooth] coordinates {
	      (2,    23.0726)
	      (4,   20.9926)
	      (16,   41.4852)
       };
       \addplot+[smooth] coordinates {
	      (2,   38.3106)
	      (4,   31.9206)
	      (16,   42.2638)
       };
       \addplot+[smooth] coordinates {
	      (2,   76.167)
	      (4,   51.5392)
	      (16,   43.502)
      };
      \legend{$y_{0}=\frac{1}{n}$,$y_{0}=1$,$y_{0}=\sqrt{n}$,$y_{0}=n$}
      \end{axis}
    \end{tikzpicture}
    }
    }
  \subfigure[Series as Fanout] {
    \label{fig:AOResults2}
    \resizebox{0.30\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{semilogyaxis}[
	    xlabel={\Large Output-to-Input Ratio ($y_{0}$)},
	    ylabel={\Large Aggregation Time (seconds, log scale)},
	      legend pos= north west]
        \addplot+[smooth] coordinates {
	      (1/16,   12.0864)
	      (1,   23.0726)
	      (4,   38.3106)
	      (16,   76.167)
        };
        \addplot+[smooth] coordinates {
	      (1/16,   12.8416)
	      (1,   20.9926)
	      (4,   31.9206)
	      (16,   51.5392)
        };
        \addplot+[smooth] coordinates {
	      (1/16,   42.2188)
	      (1,   41.4852)
	      (4,   42.2638)
	      (16,   43.502)
        };
        \legend{$d=2$,$d=4$,$d=16$}
      \end{semilogyaxis}
    \end{tikzpicture}
    }
    }
  \subfigure[Experiment vs. Predicted, $y_{0}=\frac{1}{n}$] {
    \label{fig:micro1on}
    \resizebox{0.30\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\Large Aggregation Tree Fanout},
	    ylabel={\Large Aggregation Time (seconds)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   12.0864)
	      (4,   12.8416)
	      (16,   42.2188)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {12.0864*((x/((1/16)^(ln(x)/ln(16))-1))/(2/((1/16)^(ln(2)/ln(16))-1)))};
      \legend{{Experimental Values},{Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }\\
  \subfigure[Experiment vs. Predicted, $y_{0}=1$]{
    \label{fig:micro1}
    \resizebox{0.30\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\Large Aggregation Tree Fanout},
	    ylabel={\Large Aggregation Time (seconds)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   23.0726)
	      (4,   20.9926)
	      (16,   41.4852)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {23.0726*((x*ln(16)/ln(x))/(2*ln(16)/ln(2)))};
      \legend{{Experimental Values},{Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
  \subfigure[Experiment vs. Predicted, $y_{0}=\sqrt{n}$] {
    \label{fig:microsqn}
    \resizebox{0.30\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\Large Aggregation Tree Fanout},
	    ylabel={\Large Aggregation Time (seconds)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   38.3106)
	      (4,   31.9206)
	      (16,   42.2638)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {38.3106*((x/((4)^(ln(x)/ln(16))-1))/(2/((4)^(ln(2)/ln(16))-1)))};
      \legend{{Experimental Values},{Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
  \subfigure[Experiment vs. Predicted, $y_{0}=n$] {
    \label{fig:micron}
    \resizebox{0.30\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\Large Aggregation Tree Fanout},
	    ylabel={\Large Aggregation Time (seconds)}]
        \addplot coordinates {
	      (2,   76.167)
	      (4,   51.5392)
	      (16,   43.502)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {76.167*((x/((16)^(ln(x)/ln(16))-1))/(2/((16)^(ln(2)/ln(16))-1)))};
      \legend{{Experimental Values},{Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
    \caption{Microbenchmark Results}\label{fig:micros}
\end{figure*}

Figure~\ref{fig:micros} shows the results from the microbenchmarks. In
Figure~\ref{fig:AOResults1} each line groups the data by the output-to-input
ratio $y_{0}$. Figure~\ref{fig:AOResults2} draws the same data, but each line
now represents a fixed aggregation tree with a given fanout. As expected, the
fanout of the aggregation tree overlay significantly affects the aggregation
time in a predictable manner. When $y_{0}$ is small, smaller fanouts outperform
larger ones. As $y_{0}$ grows, the performance of those overlays degrades until
a larger fanout is preferable. For the most part the transitions happen in the
expected ranges.

Figures~\ref{fig:micro1on},~\ref{fig:micro1},~\ref{fig:microsqn},
and~\ref{fig:micron} show, for each $y_{0}$, the performance across varying
fanouts alongside the performance predicted by the model. In all four cases
the trends match. The only minimum not at the predicted place is for
$y_{0}=1$: fanout 4 outperforms fanout 2 where the model predicts identical
performance. The deviation is very minor.

Of particular note are the sizes and locations of some of the performance gains.
For $y_{0}=\frac{1}{n}$, the time taken for $d=16$ is 350\% of that for
$d=2$. For $y_{0} = n$, the time for $d=2$ is 175\% of that for
$d=16$. That is a very significant penalty for choosing the wrong fanout, and
the right fanouts are opposite in these two examples.
Even choosing between 2 and $n$ is not a good heuristic for all cases, as the
faster of the two still takes 120\% of the time of $d=4$ for $y_{0}=\sqrt{n}$.
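These ratios follow directly from the measured points, for example
\[
\frac{T(16)}{T(2)} = \frac{42.22}{12.09} \approx 3.5 \;\; \left(y_{0}=\tfrac{1}{n}\right),
\qquad
\frac{T(2)}{T(16)} = \frac{76.17}{43.50} \approx 1.75 \;\; \left(y_{0}=n\right).
\]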

\begin{figure*}[!th]
\centering
  \subfigure[Top-$k$ match, $n=16$] {\label{fig:topk16}
    \resizebox{0.23\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\Large Aggregation Tree Fanout},
	    ylabel={\Large Aggregation Time (seconds)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   3.280)
	      (4,   2.850)
	      (16,   5.193)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {3.280*((x*ln(64)/ln(x))/(2*ln(64)/ln(2)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
  \subfigure[Top-$k$ match, $n=64$] {\label{fig:topk64}
    \resizebox{0.23\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\Large Aggregation Tree Fanout},
	    ylabel={\Large Aggregation Time (seconds)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   4.804)
	      (4,   3.986)
	      (8,   4.650)
	      (64,   21.589)
        };
       \addplot [domain=2:64, samples=100, loosely dashed, very thick, red]
       {4.804*((x*ln(64)/ln(x))/(2*ln(64)/ln(2)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
    \subfigure[Word Count, $n=16$] {\label{fig:wordcount16}
    \resizebox{0.23\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\Large Aggregation Tree Fanout},
	    ylabel={\Large Aggregation Time (seconds)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   3.543)
	      (4,   3.198)
	      (16,   6.868)
        };
       \addplot [domain=2:16, samples=100, loosely dashed, very thick, red]
       {3.543*((x*ln(64)/ln(x))/(2*ln(64)/ln(2)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
    \subfigure[Word Count, $n=64$] {\label{fig:wordcount64}
    \resizebox{0.23\textwidth}{!} {%
    \begin{tikzpicture}
      \begin{axis}[
	    xlabel={\Large Aggregation Tree Fanout},
	    ylabel={\Large Aggregation Time (seconds)},
	      legend pos= north west]
        \addplot coordinates {
	      (2,   2.800)
	      (4,   2.145)
	      (8,   3.228)
	      (64,   13.492)
        };
       \addplot [domain=2:64, samples=100, loosely dashed, very thick, red]
       {2.800*((x*ln(64)/ln(x))/(2*ln(64)/ln(2)))};
      \legend{{\large Experimental Values},{\large Predicted Values}}
      \end{axis}
    \end{tikzpicture}
    }
    }
    \caption{Common Problem Results}\label{fig:realexp}
\end{figure*}

Figure~\ref{fig:realexp} shows the results from our common aggregation problems
on real-world data. For the 16-leaf top-$k$ matching experiment in
Figure~\ref{fig:topk16}, the values seem to diverge from the predicted
values as $d$ grows, but the trend matches the model and the microbenchmarks.
The deviation may be explained by the aggregation function depending on $d$ as
well as the input size ($O\left( d\left| x_{0}\right|\log d\right)$), rather
than on the input size alone.
Expanding the test to 64 leaves, as seen in Figure~\ref{fig:topk64},
makes the divergence less notable.

Figures~\ref{fig:wordcount16} and~\ref{fig:wordcount64} show the results from
the word count test for 16 and 64 leaves respectively. They show the same
trends seen in the microbenchmarks and top-$k$ matching experiments and
deviate even less from the predicted values. This is likely because the sets of
words at each node were the same, so summing occurrences is an aggregation
function truly linear in the size of its input.

Predictably, the relative impact of choosing the right fanout increases as more
computation nodes are added. For the top-$k$ matching and word count
applications with 16 nodes, the worst-case fanouts take 182\% and 215\% as much
time as the best-case fanouts, respectively. When we increase $n$ to 64, those
numbers jump to 542\% and 629\% respectively.
