\section{Related Work}\label{sec:related}

\subsection{Big Data Aggregation}

The idea of splitting a problem into distinct phases is not new. The
MapReduce~\cite{MapReduce} framework for distributed computation requires
problems to be broken into two phases, called Map and Reduce. In fact, the
model is so enticing that its implementation has been modified to fit some
aggregation problems which cannot be recast efficiently as strictly MapReduce
problems. MapReduceMerge~\cite{MapReduceMerge} adds a third
step called Merge, which is a synonym for aggregation. The paper explains that
several operations, including full sorts, joins, set unions, and Cartesian
products, are very useful to applications that might run on MapReduce but are
not supported directly by the original framework.
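As a rough illustration (a minimal Python sketch, not the actual Map-Reduce-Merge API; all function names here are hypothetical), the two MapReduce phases plus a merge step can be written as follows, where the merge joins the keyed outputs of two separate MapReduce jobs:

```python
from collections import defaultdict

def map_reduce(records, map_fn, reduce_fn):
    # Map phase: each record emits (key, value) pairs.
    groups = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            groups[key].append(value)
    # Reduce phase: combine all values sharing a key.
    return {key: reduce_fn(values) for key, values in groups.items()}

def merge(left, right):
    # Merge phase (in the spirit of Map-Reduce-Merge): join two
    # reduced datasets on their shared keys -- the kind of relational
    # operation plain MapReduce does not support directly.
    return {k: (left[k], right[k]) for k in left.keys() & right.keys()}

# Hypothetical usage: per-region sales totals and order counts,
# produced by two independent MapReduce jobs, then joined.
sales = [("east", 10), ("west", 5), ("east", 7)]
orders = [("east", 1), ("west", 1), ("west", 1)]
totals = map_reduce(sales, lambda r: [r], sum)
counts = map_reduce(orders, lambda r: [r], sum)
print(sorted(merge(totals, counts).items()))
# -> [('east', (17, 1)), ('west', (5, 2))]
```

The merge here is an equi-join, but the same slot can host sorts, set unions, or Cartesian products over the two reduced inputs.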

Yu et al.~\cite{ComputeAggregate} also consider an extension to MapReduce,
which they call aggregate, to combine results when their extended model allows
it. The aggregation is applied automatically between specified computations,
using information provided by the programmer, rather than as a separate phase
after a MapReduce job completes.

These approaches show the usefulness of an aggregation subsystem,
but they stop short of optimizing the overlay on which such a mechanism runs.

RDDs, put forth by Zaharia et al.~\cite{rdds}, show that in-memory computation
is feasible in distributed systems and can improve performance by an order
of magnitude over requiring disk access. This is especially important for
classes of problems which require low latency. The result was shown for
nodes which perform computation, but it extends naturally to aggregating
nodes.

Venkataraman et al.~\cite{distribarrays} explain that many distributed
frameworks are poorly suited to problems requiring iterative calculation
because of the way data is handled. In systems which must be responsive,
keeping data in memory on persistent nodes, as the RDD approach does,
rather than rereading and rewriting it from disk on potentially
different nodes, makes a substantial difference. For iterative computations
which restart using the results of the previous pass, that difference is
magnified.

Based on these results, we choose a model which stores everything in memory to
reduce latency.

\subsection{Optimizing Aggregation}

A line of research that considers an overlay specific to aggregation is
Astrolabe~\cite{Astrolabe}. Astrolabe is introduced as a hierarchical
summarizing mechanism of bounded size. The aggregation is meant to monitor
system state and facilitate scalability of dynamic systems. Summary
calculations are made and aggregated on the fly using gossip protocols. To a
limited extent the authors consider the effect the overlay has on the
aggregation, but the work is not geared toward minimizing the total time of
aggregation as a separate phase of the computation.

SDIMS~\cite{SDIMS} presents a rigid overlay for a
distributed system used primarily for aggregation. Shruti~\cite{Shruti} adds
flexibility by allowing the aggregation mechanisms to be tuned across this
structure without altering the overlay itself. The overlay is hierarchical
with known fanout. It is built on top of a
distributed hash table for reasons including fault tolerance, rather than to
find an overlay that is optimal for total system latency. STAR~\cite{STAR}
extended this line of work by using a consistency metric~\cite{NI} to
adaptively set the precision constraints for aggregation processing.

Morozov and Weber~\cite{distribmergetrees} consider distributed computations
which result in merge trees, an abstraction for combining subsets of large
structured datasets. They describe a system that monitors data attributes in
different branches and recomputes a better tree.

Most of the aggregation optimizations described to this point rely on reactive
mechanisms: they take measurements and adapt. Assuming that timing is
not a complex function with misleading local minima, this works very well when
a stream of useful data is available, but we want to
use information available during setup to achieve optimality faster by making
certain assumptions about our environment.
We also do not plan to reallocate work to offset underperforming resources
on the network, so we do not need reactive mechanisms.

There has also been work on prescribing overlays for distributed systems.
The CamCube project is an alternative architecture for orienting and
connecting servers~\cite{CamCube}. The system uses a coordinate system to
bypass traditional routing, while servers connect to a limited number of
adjacent machines in a prescribed manner; this is very efficient when full
control over the environment is possible. Camdoop~\cite{Camdoop} is MapReduce
implemented on CamCube, showing that the overlay can support existing
distributed problems even though it limits the overlay's configurability.
Camdoop applies aggregation within the MapReduce implementation rather than
only at the edges of the calculation, as some uses of MapReduce are restricted
to doing by the original framework. While this approach yields performance
gains in some cases, nodes communicate directly with at most six neighbors, so
the architecture caps the achievable fanout.
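To make the fanout cap concrete: the smaller the fanout, the deeper any aggregation tree over a fixed set of nodes must be. The following sketch computes that minimum depth (the node counts and the assumption that one of the six links is reserved for the parent are illustrative, not taken from the CamCube paper):

```python
def min_tree_depth(n_nodes, fanout):
    # Minimum depth of an aggregation tree over n_nodes leaves
    # when each internal node aggregates from at most `fanout`
    # children, i.e. ceil(log base fanout of n_nodes), computed
    # with integers to avoid floating-point edge cases.
    depth, reach = 0, 1
    while reach < n_nodes:
        reach *= fanout
        depth += 1
    return depth

# Illustrative comparison for 4096 nodes: if one of CamCube's six
# links feeds the parent, fanout is at most 5; an unconstrained
# overlay could pick a larger fanout and finish in fewer levels.
for fanout in (5, 16, 64):
    print(fanout, min_tree_depth(4096, fanout))
# -> 5 6
#    16 3
#    64 2
```

Each extra level adds a round of communication and aggregation latency, which is why a hard cap on fanout matters when minimizing total aggregation time.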

Optimal computation trees have also been explored. Cheng and
Robertazzi~\cite{FullTreeAnalysis} tackled the problem of optimally
distributing load on processors connected by a tree network. Their solution,
and the extension by Kim et al.~\cite{OptimalTree}, balance computation across
a set of processors for problems in which the greatest
parallelization yields the fastest completion time because no
computation is needed to aggregate results from each processor. Unfortunately,
these methods do not translate to aggregation.

