\section{Introduction}

Processing big data is a core challenge of our era. The
need to extract and distill more concise information from large datasets is
strong across scientific research (e.g., astrophysics, chemistry), the
commercial sector (e.g., advertising, recommendation), and governmental institutions (e.g.,
intelligence services, population demographics).

Inherently, \emph{aggregation} of information --- broadly construed ---
underlies the distillation process.  In commonly
employed toolkits for parallelized big data batch processing such as the
ubiquitous MapReduce~\cite{MapReduce} framework, aggregation happens in a
straightforward manner. Take the popular word-count example in MapReduce:
mappers produce intermediate key--value pairs where the key is a word and
the value its number of occurrences in a slice of the input data; in the
common straightforward variant where mappers emit a pair per individual word
occurrence, this value is trivially 1. Aggregation happens on reducers, which
sum all values sharing the same key, i.e., word.
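This scheme can be illustrated with a minimal, single-process Python sketch (the input slices and function names are illustrative, not part of any actual MapReduce API):

```python
from collections import defaultdict

# Hypothetical input slices; in a real MapReduce job these would be
# partitions of a large corpus, processed by separate mappers.
slices = ["the cat sat", "the dog sat", "the cat ran"]

def map_slice(text):
    # Naive scheme: emit one (word, 1) pair per word occurrence.
    return [(word, 1) for word in text.split()]

def reduce_pairs(pairs):
    # Sum all values sharing the same key (word).
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

intermediate = [pair for s in slices for pair in map_slice(s)]
counts = reduce_pairs(intermediate)
# e.g. counts["the"] == 3, counts["cat"] == 2
```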

As demonstrated by Yu, Gunda, and
Isard~\cite{ComputeAggregate}, performance can be significantly improved by
considering that partial word counts can be computed for larger subsets of data,
and subsequently aggregated via summation. The
intuition is that certain ``functions'' such as word count can be performed in a
\emph{distributed associative}~\cite{ComputeAggregate} manner by expressing them
in two parts: a first part that can be applied to data subsets individually,
and a second part that aggregates the results of the first stage, possibly in
several steps.

The question of \emph{how exactly} to perform such aggregation has gained
relevance with recent work that suggests storing datasets in main memory to
increase the expressiveness of computations, for instance by supporting
iterative or incremental computations, somewhat departing from pipelined batch
processing towards more online processing (e.g., distributed arrays for
R~\cite{distribarrays}, Resilient Distributed Datasets~\cite{rdds}). By striping
a dataset across the main memory of a large number $n$ of nodes, heavy disk I/O
is avoided between subsequent computation phases, within iterations of a
phase, upon incremental computations triggered by updates, or upon
continuous, interactive computations. This shifts the bottleneck from disk I/O
to communication, and especially to aggregation across nodes.

As long as the aggregation method is cumulative, so that results are the
combination of results on subsets, commutative, so that subsets can be
aggregated in any order, and associative, so that subsets can be aggregated in
any grouping, there is a degree of freedom in the construction of an overlay
for the aggregation. Aggregation can for instance happen in a single step by
aggregating all sub-results on one node; conversely, aggregation can happen
between two sub-results at a time over $\log_2 n$ stages.
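The two extremes, and every fanout in between, can be captured by a simple level-by-level tree aggregation. The following Python sketch is a minimal illustration of this degree of freedom (not \name's implementation); summation stands in for any cumulative, commutative, and associative aggregation function:

```python
def tree_aggregate(sub_results, fanout, combine):
    """Aggregate sub-results level by level in a tree of the given fanout.

    Returns the final result and the number of aggregation stages used.
    """
    stages = 0
    level = list(sub_results)
    while len(level) > 1:
        # Each node in the next level combines up to `fanout` sub-results.
        level = [combine(level[i:i + fanout])
                 for i in range(0, len(level), fanout)]
        stages += 1
    return level[0], stages

n = 16
partials = list(range(n))  # stand-ins for per-node sub-results

# Single step: all sub-results aggregated on one node (fanout n).
total_flat, stages_flat = tree_aggregate(partials, fanout=n, combine=sum)
# Binary tree: two sub-results at a time, over log2(n) stages.
total_bin, stages_bin = tree_aggregate(partials, fanout=2, combine=sum)
# Both yield the same total (120); stages_flat == 1 while stages_bin == 4.
```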

Resources spent on finding the optimal overlay are not spent on solving the
problem itself, so we want to find the optimal overlay efficiently.
This paper thus presents \fullname~(\name), a system to efficiently find and
implement provably optimal aggregation overlays to weave together results in
\emph{compute-aggregate} operations.
Based on the characteristics of the aggregation function and the nodes
taking part, \name\ \emph{automatically} determines via heuristics the
provably ideal or near-ideal fanout for an aggregation tree consolidating
sub-results across $n$ nodes which store parts of datasets and perform the initial
computation phase individually. As we show, depending on the considered
parameters, performance between a fixed aggregation tree and the tree
identified by \name\ can vary by more than $600\%$ in our environment, and we
expect greater variation in more extreme environments.

In summary, our contributions are as follows. After explaining
the model of compute-aggregate tasks, we:
\begin{enumerate}
\item introduce a way to model the performance of an in-memory aggregation
phase which considers (a) the complexity of the aggregation computation on a
given node and (b) the corresponding aggregation degree (ratio of input to
output data size), distinguishing among sublinear, linear, and superlinear
behavior.
\item use the model to find provably optimal heuristics for the fanout of an
aggregation tree given the parameters of (a) and (b).
\item discuss the architecture and implementation of \name, a system that uses
the optimality heuristics to create an optimal aggregation tree when (a)
and (b) are provided for well-known functions used in compute-aggregate
problems, or can otherwise be obtained via sampling or synthesized in some
cases.
\item design \name\ to run on a third-party cloud system, making it easy to
deploy in environments commonly used for big data problems without requiring
in-depth knowledge of the networking architecture, as any two nodes are able
to communicate.
\item empirically demonstrate via microbenchmarks and several typical
compute-aggregate tasks that the overlay determined by \name\ matches the
ideal one and leads to significant time savings.
\end{enumerate} 

The rest of the paper is organized as follows. Section~\ref{sec:related}
presents the prior art. Section~\ref{sec:model} presents our
model. The heuristics and optimality proofs are in
Section~\ref{sec:heuristics}.
The system is detailed in Section~\ref{sec:system}, and
experimental results are in Section~\ref{sec:experiments}. We conclude with
Section~\ref{sec:conclusions}.
