\section{Introduction}

Processing big data is a core challenge of our era. Extracting and distilling
concise information from large datasets is vital to scientific research
(e.g., astrophysics, chemistry), the commercial sector (e.g., advertising,
recommendation), and governmental institutions (e.g., intelligence services,
population demographics).

\emph{Aggregation} of information --- broadly construed ---
inherently underlies this distillation process. In commonly
employed toolkits for parallelized big data batch processing such as
MapReduce~\cite{MapReduce}, aggregation is performed straightforwardly.
In the popular word count example,
mappers produce intermediate key-value pairs whose keys are words and whose
values are the number of occurrences of those words in a slice of the input
data; in a common simple approach, a mapper emits one pair per individual
word occurrence, so every value is trivially 1. Aggregation happens when
reducers sum all values sharing the same key, i.e., the same word.
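The naïve scheme can be sketched as follows (a minimal Python sketch; the input text and variable names are illustrative, not part of any system described here):

```python
from collections import defaultdict

text = "to be or not to be"

# Map phase: emit one (word, 1) pair per word occurrence.
pairs = [(word, 1) for word in text.split()]

# Reduce phase: sum all values sharing the same key, i.e., word.
counts = defaultdict(int)
for word, value in pairs:
    counts[word] += value
```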

As demonstrated by Yu, Gunda, and
Isard~\cite{ComputeAggregate}, performance can be improved significantly
by first computing partial word counts for larger subsets of the data
and subsequently aggregating these. The
intuition is that functions such as word count can be evaluated in a
\emph{distributed associative}~\cite{ComputeAggregate} manner by expressing them
as two parts -- a first part that is applied to data subsets individually,
and a second part that aggregates the results of the first, possibly in
several steps.
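This two-part decomposition can be illustrated in Python (a sketch with hypothetical data slices; \texttt{Counter} addition stands in for the aggregation function):

```python
from collections import Counter
from functools import reduce

# Hypothetical slices of the input data, e.g., one per worker node.
slices = [
    "to be or not to be".split(),
    "to strive".split(),
    "or not".split(),
]

# Part one: compute a partial word count on each subset individually.
partials = [Counter(s) for s in slices]

# Part two: aggregate the partial results, possibly in several steps.
# Counter addition is commutative and associative, so any order and
# grouping of the merges yields the same total.
total = reduce(lambda a, b: a + b, partials)
```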

The relevance of \emph{how exactly} to perform such aggregation is underlined
by recent work that stores datasets in main memory to increase the
expressiveness of computations, for instance by supporting iterative or
incremental computations, departing somewhat from pipelined batch processing
towards more online processing (e.g., distributed arrays
in Presto~\cite{distribarrays,presto}, resilient distributed datasets in Shark~\cite{rdds,shark}). By striping
a dataset across the main memory of a large number $n$ of nodes, heavy disk I/O
is avoided between subsequent computation phases, within iterations of a
phase, upon incremental computations triggered by updates, or upon
continuous, interactive computations. Bypassing disk I/O can improve performance by an order of magnitude, yet it shifts the bottleneck to communication and aggregation.

If the function used for aggregation is cumulative (so results on subsets can
be combined into a result on their union), commutative (so sub-results can be
aggregated in any order), and associative (so sub-results can be aggregated in
any grouping), there
is some configurability in the structure of the overlay network along which such aggregation can take place:
aggregation can happen in one step, with a single node aggregating all
sub-results; between two
sub-results at a time over $\log_2 n$ steps; or anywhere in between.
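The trade-off can be made concrete by counting aggregation steps as a function of the tree fanout (a Python sketch; \texttt{aggregation\_rounds} is an illustrative helper, not part of the system described below):

```python
import math

def aggregation_rounds(n, fanout):
    """Number of levels of an aggregation tree over n sub-results
    when each node merges up to `fanout` inputs per step."""
    rounds = 0
    while n > 1:
        n = math.ceil(n / fanout)
        rounds += 1
    return rounds

# For n = 64 sub-results: a single step when one node merges all
# of them, log2(64) = 6 steps with pairwise merging, and
# intermediate step counts for fanouts in between.
one_step = aggregation_rounds(64, 64)   # 1
pairwise = aggregation_rounds(64, 2)    # 6
in_between = aggregation_rounds(64, 4)  # 3
```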

This paper thus presents \fullname~(\name), a system to efficiently determine and
implement provably optimal aggregation overlays to weave together results in
\emph{compute-aggregate} operations.
Based on traits of the aggregation function and of the $n$ nodes
taking part, \name\ \emph{automatically} determines, via heuristics,
a provably (near-)optimal fanout for an aggregation tree consolidating
sub-results from these nodes after they have individually performed the
initial computation phase. Depending on the
parameters, the performance gap between the overlay
identified by \name\ and a na\"{i}ve one exceeds $600\%$ in our
experiments, and we expect greater variation in more extreme environments.

Our contributions are as follows. After introducing
the model of compute-aggregate tasks considered, we
\begin{enumerate}
	
\tightitem present provably optimal heuristics for determining the fanout of an
aggregation tree, given knowledge of the aggregation method
(Section~\ref{sec:model}).

\tightitem discuss the architecture of \name, a system that uses
the heuristics to create optimal aggregation trees in the case of
well-defined or sampled aggregation functions used in compute-aggregate
problems (Section~\ref{sec:arch}).

\tightitem empirically show, via microbenchmarks and typical
compute-aggregate tasks, that the overlay determined by \name\ matches the
ideal case in practice and yields significant time savings (Section~\ref{sec:experiments}).

\end{enumerate} 

Section~\ref{sec:related} discusses prior art, and we draw conclusions in Section~\ref{sec:conclusions}. An extended version of this report, including proofs for some of the heuristics, can be found in~\cite{loom}.
