\section{Related Work}\label{sec:related}

\paragraph{Big data aggregation.}
The idea of decomposing a problem into distinct phases is not new. The popular
MapReduce~\cite{MapReduce} framework requires a problem to be broken into two
phases, as its name suggests. MapReduce is so enticing that its Hadoop
implementation has been modified to accommodate aggregation problems that
cannot be recast efficiently as strict MapReduce jobs.
MapReduceMerge~\cite{MapReduceMerge} extends the framework with a third step
called merge, a synonym for aggregation; it is motivated by operations common
in MapReduce applications, such as full sorts, joins, set unions, and Cartesian
products, which the original framework does not support efficiently. Yu et
al.~\cite{ComputeAggregate} extend MapReduce to aggregate data efficiently
between the phases of a job.
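The two-phase decomposition can be sketched as a single-process word count, the canonical MapReduce example. This is a minimal illustration of the programming model only, not the implementation of any cited system; the function names are ours.

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply the user's map function to every input record,
    emitting intermediate (key, value) pairs."""
    for record in records:
        yield from map_fn(record)

def reduce_phase(pairs, reduce_fn):
    """Group intermediate pairs by key (the shuffle), then apply
    the user's reduce function to each group of values."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Word count: map emits (word, 1), reduce sums the ones.
def count_map(line):
    for word in line.split():
        yield word, 1

def count_reduce(word, counts):
    return sum(counts)

lines = ["the quick fox", "the lazy dog"]
counts = reduce_phase(map_phase(lines, count_map), count_reduce)
```

Operations such as joins or full sorts do not fit this shape cleanly, which motivates the merge extensions discussed above.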


\paragraph{Optimizing aggregation.}

Astrolabe~\cite{Astrolabe} is a summarizing mechanism of bounded size built on
hierarchical overlays; its gossip protocols use aggregation to monitor system
state and support dynamic scalability. The overlay's impact on aggregation is
considered, but minimizing aggregation time is not the system's primary
concern.

STAR~\cite{STAR} extends a line of research, started by SDIMS~\cite{SDIMS}, on
building information management systems on top of distributed hash tables.
STAR adaptively sets precision constraints for aggregation.


Cheng and Robertazzi~\cite{FullTreeAnalysis} tackle the problem of optimally
distributing load across processors connected by a tree network. Their work,
and its extension by Kim et al.~\cite{OptimalTree}, balances computation for
problems in which the greatest parallelization leads to the fastest completion
because no computation is needed to aggregate results from the processors.
Morozov and Weber~\cite{distribmergetrees} consider distributed computations
that produce merge trees, an abstraction for combining subsets of large
structured datasets. Their system monitors data attributes in different
branches and recomputes a more efficient tree.
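To illustrate why tree shape matters for aggregation time, consider combining values up a complete tree in which every internal node must wait for all of its children. The sketch below is our own simplification, not an algorithm from the cited works: it counts only sequential combine rounds and ignores per-node combining cost and network delay.

```python
def aggregate_rounds(leaves, fanout):
    """Number of sequential combine rounds needed to aggregate
    `leaves` values up a complete tree with the given fanout."""
    rounds = 0
    nodes = leaves
    while nodes > 1:
        # Each round, groups of up to `fanout` children
        # are combined at their parent.
        nodes = -(-nodes // fanout)  # ceiling division
        rounds += 1
    return rounds

# A wider tree is shallower: 64 leaves need 6 binary rounds
# but only 3 quaternary rounds.
wide = aggregate_rounds(64, 4)
narrow = aggregate_rounds(64, 2)
```

In practice a larger fanout also increases the work done at each node, so the best tree balances depth against per-node combining cost rather than simply maximizing width.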

None of these works, however, optimizes for aggregation itself or uses
extrapolation to find a (potentially local) optimum.

