\section{System Model}\label{sec:model}

\subsection{High Level Overview}

We consider systems in which data is distributed across several computation
nodes prior to beginning computation. After the distributed computation, the
system applies some aggregation to the results to create the final output.

The compute and aggregate phases are independent of each other except for one
level of communication, so each can be optimized separately to improve
the system as a whole. We consider problems where
the time taken by the aggregation phase is significant enough for optimization
to noticeably impact the total time. Whereas optimizing computation requires
knowledge of the data, data structures, and computation, optimizing the
overlay only requires very basic information about the aggregation
function. \name\ alters the fanout of the aggregation overlay tree -- a simple
configuration parameter that provably affects the aggregation time using limited
information about the problem.

\begin{figure}
\resizebox{\linewidth}{!}{%
\begin{tikzpicture}[-,auto,
  lines/.style={transparent}]
  \node [circle, draw] (n7) at (6,4) {};
  \node [circle, draw] (n6) at (7,3) {};
  \node [circle, draw] (n5) at (5,3) {};
  \node [circle, draw] (n4) at (8,2) {};
  \node [circle, draw] (n3) at (4,2) {};
  \node [rectangle, draw] (n01) at (3,1) {};
  \node [rectangle, draw] (n02) at (5,1) {};
  \node [rectangle, draw] (n03) at (7,1) {};
  \node [rectangle, draw] (n04) at (9,1) {};
  
  \node [] (agglabel) at (0,3) {Aggregation Overlay};
  \node [] (complabel) at (0,1) {Computation at Local Nodes};

  \path[every node/.style={font=\sffamily\small}]
    (n7)edge [] node[left]{} (n6)
        edge [] node[left]{} (n5)
    (n6)edge [line width=1.25pt, line cap=round, dash pattern=on 0pt off 6\pgflinewidth ] node[left]{} (n4)
    (n5)edge [line width=1.25pt, line cap=round, dash pattern=on 0pt off 6\pgflinewidth] node[left]{} (n3)
    (n3)edge [] node[left]{} (n01)
        edge [] node[left]{} (n02)
    (n4)edge [] node[left]{} (n03)
        edge [] node[left]{} (n04);

\path (n5) -- (n6) node [midway] {$\cdots$};
\path (n3) -- (n4) node [midway] {$\cdots$};
\path (n01) -- (n02) node [midway] {$\cdots$};
\path (n02) -- (n03) node [midway] {$\cdots$};
\path (n03) -- (n04) node [midway] {$\cdots$};
\draw [-,decorate,decoration=snake] (0,1.5) -- (9.5,1.5);
\end{tikzpicture}
}
\caption{Visual representation of the separation of the computation and aggregation phases.}
\label{fig:highlevel}
\end{figure}

The two-phase model is shown in Figure~\ref{fig:highlevel}.
The separation between the phases is clear despite the links between
the computation nodes and the aggregation leaf nodes.

\subsection{Computation Phase}

The computation phase happens only at the leaf nodes, prior to aggregation. At
each such node, data $z$ is held in memory before computation, and
computation applies some function $f$ such that $x = f(z)$, where $x$ is data
ready for aggregation.
We assume that the data is used in multiple instances of compute-aggregate
queries, so we do not consider the time taken to read the data from disk and
structure it, which only affects the setup time.

This model supports both the case in which computation is done just prior to
aggregation, such as a top-$k$ system that matches an incoming message and
aggregates the results from the leaves as a response, and the case in which
computation at the leaves is ongoing but aggregation is run on demand
for the current state of computation.
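As an illustrative sketch (the class and helper names are ours, not part of \name), the computation phase can be modeled as a leaf node holding its data $z$ in memory across queries and applying a function $f$ on demand:

```python
import heapq

# Illustrative sketch (names are ours): a leaf node holds its data z in
# memory across queries and applies a function f on demand, producing an
# output x = f(z) ready for aggregation.
class LeafNode:
    def __init__(self, data):
        self.data = data  # z: loaded and structured once at setup time

    def compute(self, f):
        return f(self.data)  # x = f(z)

# Example f: top-k selection, as in a top-k matching system.
def top_k(k):
    return lambda z: heapq.nlargest(k, z)

leaf = LeafNode([5, 1, 9, 3, 7])
x = leaf.compute(top_k(3))  # → [9, 7, 5]
```

Reusing the in-memory data across queries reflects the assumption above that setup time is excluded from the model.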

% There are actually two distinct uses for this computation model. The first is
% when computation is run just prior to aggregation. An example of this is a
% top-$k$ match algorithm matching the persistent data to incoming messages. When
% each message enters the system, the top-$k$ matches must be found at each node,
% and then the aggregation overlay prunes the results to the top matches in the
% system.
% 
% The second use is when computation itself is ongoing, but the aggregation is
% done on demand. This could be used in something like a system status report,
% where the status is constantly updated, but the report is only generated by
% aggregating upon demand.

% In either case, the computation phase reuses the same data and data structures,
% although there may be changes between instances or the computation parameters
% may change. Computation is required to be fast after the point that a
% compute-aggregate query is initiated to provide the information as quickly as
% possible.

% Another reason for increasing the number of nodes is increased parallelism. Take
% the example of sorting. Standard comparison based sorting techniques are
% superlinear, so linear increases in the input size require even bigger increases
% in the amount of time taken to sort. Merging sorted lists is relatively fast.
% Therefore, it might make sense to run computation on lots of smaller datasets
% and shift the load to the aggregation phase.

Because there are different reasons for choosing the number of nodes, and
choosing that number may require intimate knowledge of the application, we do
not address it in this paper. The programmer is responsible for choosing the
number of nodes and evenly distributing data across them. The number of nodes
may change between any two instances of a compute-aggregate query so long as
the system is made aware of the change.

\subsection{Aggregation Overlay}

\begin{figure}
\centering
\subfigure[{Each aggregator is on a separate node.}]{
\label{fig:perceptionvariantsa}
\resizebox{.98\linewidth}{!}{%
\begin{tikzpicture}[<-,shorten >=1pt,auto,
  thick,main node/.style={circle,draw,font=\sffamily\Large\bfseries}]
  [scale=.40,auto=right,every node/.style={circle}]
  \node (n7) at (6,1) {7};
  \node (n6) at (4,1) {6};
  \node (n5) at (3,1)  {5};
  \node (n4) at (1,1)  {4};
  \node (n3) at (5,1.5) {3};
  \node (n2) at (2,1.5)  {2};
  \node (n1) at (3.5,2)  {1};
        
    \path[every node/.style={font=\sffamily\small}]
    (n1)edge [] node[left]{} (n2)
        edge [] node[left]{} (n3)
    (n2)edge [] node[left]{} (n4)
        edge [] node[left]{} (n5)
    (n3)edge [] node[left]{} (n6)
        edge [] node[left]{} (n7);
\end{tikzpicture}
}
}
\qquad
\subfigure[{Reusing nodes for subsequent levels.}]{
\label{fig:perceptionvariantsb}
\resizebox{.98\linewidth}{!}{%
\begin{tikzpicture}[<-,shorten >=1pt,auto,
  thick,main node/.style={circle,draw,font=\sffamily\Large\bfseries}]
  [scale=.40,auto=right,every node/.style={circle}]
  \node (n7) at (6,1) {4};
  \node (n6) at (4,1) {2};
  \node (n5) at (3,1)  {3};
  \node (n4) at (1,1)  {1};
  \node (n3) at (5,1.5) {2};
  \node (n2) at (2,1.5)  {1};
  \node (n1) at (3.5,2)  {1};
  \path [every node/.style={font=\sffamily\small}]
    (n1)edge [gray, dashed] node[left]{} (n2)
        edge node[left]{} (n3)
    (n2)edge [gray, dashed] node[left]{} (n4)
        edge node[left]{} (n5)
    (n3)edge [gray, dashed] node[left]{} (n6)
        edge node[left]{} (n7);
\end{tikzpicture}
}
}
\caption{Two projections of an aggregation overlay on a set of nodes.}
\label{fig:perceptionvariants}
\end{figure}

Once the outputs from every node in the computation phase are available, the
system has to aggregate the results. Figure~\ref{fig:perceptionvariants} shows
how to construct an overlay for this purpose.
Figure~\ref{fig:perceptionvariantsa} shows the overlay when each node is on its own machine, with the node identifier shown at each node of the
tree.
Alternatively, nodes can be reused. Because the only time overlap in aggregation at two connected nodes in the overlay is
communication, a node may be used at consecutive levels in the same branch. This
reduces the number of separate nodes required. As a node is not required to
send its results through the network to itself, it also reduces the
communication cost. This view is shown
in Figure~\ref{fig:perceptionvariantsb}, with the dashed lines indicating the
links where communication is not required.

When each output occurs exactly once, aggregation takes
the form of a tree in which each non-leaf node represents the aggregation
point of the outputs of the nodes directly below it, whether those nodes performed
local computation or aggregation of other outputs.
Figure~\ref{fig:fanoutvariants} shows how three different fanouts result in different overlays for 16 leaf nodes.
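The number of aggregation levels in a full, balanced overlay follows directly from the fanout; a small illustrative sketch (the helper name is ours) reproduces the three cases for 16 leaves:

```python
# Illustrative: the number of aggregation levels in a full, balanced
# overlay, computed by repeated ceiling division of the leaf count by
# the fanout.
def tree_depth(leaves, fanout):
    depth = 0
    while leaves > 1:
        leaves = -(-leaves // fanout)  # ceiling division
        depth += 1
    return depth

print([tree_depth(16, f) for f in (2, 4, 16)])  # → [4, 2, 1]
```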

\begin{figure}
\subfigure[Fanout = 2]{
\resizebox{.98\linewidth}{!}{%
\begin{tikzpicture}[-,shorten >=1pt,auto,
  thick,main node/.style={circle,draw,font=\sffamily\Large\bfseries}]
  [scale=.25,auto=right,every node/.style={circle}]
  \node (n31) at (16,5) {};
  \node (n30) at (24,4) {};
  \node (n29) at (8,4) {};
  \node (n28) at (28,3) {};
  \node (n27) at (20,3) {};
  \node (n26) at (12,3) {};
  \node (n25) at (4,3) {};
  \node (n24) at (30,2) {};
  \node (n23) at (26,2) {};
  \node (n22) at (22,2) {};
  \node (n21) at (18,2) {};
  \node (n20) at (14,2) {};
  \node (n19) at (10,2) {};
  \node (n18) at (6,2) {};
  \node (n17) at (2,2) {};
  \node (n16) at (31,1) {};
  \node (n15) at (29,1) {};
  \node (n14) at (27,1) {};
  \node (n13) at (25,1) {};
  \node (n12) at (23,1) {};
  \node (n11) at (21,1) {};
  \node (n10) at (19,1) {};
  \node (n9) at (17,1) {};
  \node (n8) at (15,1) {};
  \node (n7) at (13,1) {};
  \node (n6) at (11,1) {};
  \node (n5) at (9,1) {};
  \node (n4) at (7,1) {};
  \node (n3) at (5,1) {};
  \node (n2) at (3,1) {};
  \node (n1) at (1,1) {};
        
    \path[every node/.style={font=\sffamily\small}]
    (n31)edge [] node[left]{} (n30)
        edge [] node[left]{} (n29)
    (n30)edge [] node[left]{} (n28)
        edge [] node[left]{} (n27)
    (n29)edge [] node[left]{} (n26)
        edge [] node[left]{} (n25)
    (n28)edge [] node[left]{} (n24)
        edge [] node[left]{} (n23)
    (n27)edge [] node[left]{} (n22)
        edge [] node[left]{} (n21)
    (n26)edge [] node[left]{} (n20)
        edge [] node[left]{} (n19)
    (n25)edge [] node[left]{} (n18)
        edge [] node[left]{} (n17)
    (n24)edge [] node[left]{} (n16)
        edge [] node[left]{} (n15)
    (n23)edge [] node[left]{} (n14)
        edge [] node[left]{} (n13)
    (n22)edge [] node[left]{} (n12)
        edge [] node[left]{} (n11)
    (n21)edge [] node[left]{} (n10)
        edge [] node[left]{} (n9)
    (n20)edge [] node[left]{} (n8)
        edge [] node[left]{} (n7)
    (n19)edge [] node[left]{} (n6)
        edge [] node[left]{} (n5)
    (n18)edge [] node[left]{} (n4)
        edge [] node[left]{} (n3)
    (n17)edge [] node[left]{} (n2)
        edge [] node[left]{} (n1);
\end{tikzpicture}
}
}

\subfigure[Fanout = 4]{
\resizebox{.98\linewidth}{!}{%
\begin{tikzpicture}[-,shorten >=1pt,auto,
  thick,main node/.style={circle,draw,font=\sffamily\Large\bfseries}]
  [scale=.25,auto=right,every node/.style={circle}]
  \node (n21) at (16,3) {};
  \node (n20) at (28,2) {};
  \node (n19) at (20,2) {};
  \node (n18) at (12,2) {};
  \node (n17) at (4,2) {};
  \node (n16) at (31,1) {};
  \node (n15) at (29,1) {};
  \node (n14) at (27,1) {};
  \node (n13) at (25,1) {};
  \node (n12) at (23,1) {};
  \node (n11) at (21,1) {};
  \node (n10) at (19,1) {};
  \node (n9) at (17,1) {};
  \node (n8) at (15,1) {};
  \node (n7) at (13,1) {};
  \node (n6) at (11,1) {};
  \node (n5) at (9,1) {};
  \node (n4) at (7,1) {};
  \node (n3) at (5,1) {};
  \node (n2) at (3,1) {};
  \node (n1) at (1,1) {};
        
    \path[every node/.style={font=\sffamily\small}]
    (n21)edge [] node[left]{} (n20)
        edge [] node[left]{} (n19)
		edge [] node[left]{} (n18)
        edge [] node[left]{} (n17)
    (n20)edge [] node[left]{} (n16)
        edge [] node[left]{} (n15)
		edge [] node[left]{} (n14)
        edge [] node[left]{} (n13)
    (n19)edge [] node[left]{} (n12)
        edge [] node[left]{} (n11)
		edge [] node[left]{} (n10)
        edge [] node[left]{} (n9)
    (n18)edge [] node[left]{} (n8)
        edge [] node[left]{} (n7)
		edge [] node[left]{} (n6)
        edge [] node[left]{} (n5)
    (n17)edge [] node[left]{} (n4)
        edge [] node[left]{} (n3)
		edge [] node[left]{} (n2)
        edge [] node[left]{} (n1);
\end{tikzpicture}
}
}
\subfigure[Fanout = 16]{
\resizebox{.98\linewidth}{!}{%
\begin{tikzpicture}[-,shorten >=1pt,auto,
  thick,main node/.style={circle,draw,font=\sffamily\Large\bfseries}]
  [scale=.25,auto=right,every node/.style={circle}]
  \node (n17) at (16,2.75) {};
  \node (n16) at (31,.5) {};
  \node (n15) at (29,.5) {};
  \node (n14) at (27,.5) {};
  \node (n13) at (25,.5) {};
  \node (n12) at (23,.5) {};
  \node (n11) at (21,.5) {};
  \node (n10) at (19,.5) {};
  \node (n9) at (17,.5) {};
  \node (n8) at (15,.5) {};
  \node (n7) at (13,.5) {};
  \node (n6) at (11,.5) {};
  \node (n5) at (9,.5) {};
  \node (n4) at (7,.5) {};
  \node (n3) at (5,.5) {};
  \node (n2) at (3,.5) {};
  \node (n1) at (1,.5) {};
        
    \path[every node/.style={font=\sffamily\small}]
    (n17)edge [] node[left]{} (n16)
        edge [] node[left]{} (n15)
		edge [] node[left]{} (n14)
        edge [] node[left]{} (n13)
		edge [] node[left]{} (n12)
        edge [] node[left]{} (n11)
		edge [] node[left]{} (n10)
        edge [] node[left]{} (n9)
		edge [] node[left]{} (n8)
        edge [] node[left]{} (n7)
		edge [] node[left]{} (n6)
        edge [] node[left]{} (n5)
		edge [] node[left]{} (n4)
        edge [] node[left]{} (n3)
		edge [] node[left]{} (n2)
        edge [] node[left]{} (n1);
\end{tikzpicture}
}
}
\caption{Three aggregation overlays with 16 leaves.}
\label{fig:fanoutvariants}
\end{figure}

\subsection{Aggregation Function}

We consider aggregation functions that take $x_{1} \ldots x_{i}$ and output the
aggregate $x^{1..i}$, i.e., $x^{1..i} = g(\overline{x})$. Functions are cumulative,
commutative, and associative. The precise mathematical definitions of these
properties are in Table~\ref{tab:properties}, but they essentially mean that
outputs can be aggregated in any order with any other outputs or aggregations of
outputs. As long as each individual output is included exactly once, the final
aggregation provides a correct result.
The results must be equivalent ($\equiv$), not necessarily identical.
For example, if you want the top-$k$ results and there are $k+1$ results with the same score, different aggregation
orders may return different sets of $k$ results, but each set is a correct
answer.
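As a toy illustration of these properties (our own example, with a fixed $k$), a top-$k$ merge used as the aggregation function $g$ yields equivalent results under any grouping and order of inputs:

```python
import heapq

# Toy illustration (our own example): a top-k merge as the aggregation
# function g. Any nesting of calls that includes each output exactly
# once yields an equivalent result.
def g(*inputs):
    k = 2
    return heapq.nlargest(k, [v for xs in inputs for v in xs])

x1, x2, x3 = [9, 4], [7, 6], [8, 2]
# Cumulative, commutative, and associative up to equivalence:
assert g(g(x1, x2), x3) == g(x1, g(x2, x3)) == g(x3, x2, x1)  # all [9, 8]
```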

\begin{table}[b]
  \centering
  \setlength{\tabcolsep}{.4em}
\vspace{2mm}
  \caption{Mathematical definitions of the requirements for $g\left(
  \overline{x}\right)$.
%$\overline{x}^{\prime}$ and $\overline{x}^{\prime\prime}$ are defined in the associative property.
}
  \label{tab:properties}
  \begin{tabular}{p{.26\columnwidth}|p{.64\columnwidth}}
  \textbf{Property} & \textbf{Definition}\\ \hline
  Cumulative & $g\left( g\left( \overline{x}\right), g\left(
  \overline{x}^{\prime}\right)\right) \equiv$
  $g\left(\overline{x},\overline{x}^{\prime}\right)$ \\
  Commutative & $g\left(\overline{x}^{\prime}, \overline{x}\right)
  \equiv$ $g\left(\overline{x}, \overline{x}^{\prime}\right) $\\
  Associative & $ g\left(g\left(\overline{x},\overline{x}^{\prime}\right),
  \overline{x}^{\prime\prime}\right) \equiv
  g\left(\overline{x},
  g\left(\overline{x}^{\prime},\overline{x}^{\prime\prime}\right)\right) $\\
  \end{tabular}
\end{table}

The function must work for a variable number of inputs. The fanout of the
overlay corresponds to the number of inputs to the aggregation function at a
node. If the function itself only handles, for instance, two inputs, there is no
advantage to increasing the fanout. Even if the same aggregation function can be
run on two inputs and the result run with a third input, that is not the same
as running the aggregation function on three inputs simultaneously.
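To illustrate the distinction (the helper names are ours, for this sketch only): a two-input function can be folded over any number of inputs, but the fold is a cascade of two-input aggregations, so a larger fanout brings no advantage; only a truly $n$-ary aggregation function can exploit it.

```python
from functools import reduce

# Illustration (helper names are ours): folding a binary-only
# aggregation g2 over many inputs is a cascade of 2-input aggregations,
# not an n-ary aggregation, so increasing the fanout gains nothing.
def g2(a, b):
    return sorted(a + b, reverse=True)[:3]  # pairwise top-3 merge

def g_folded(inputs):
    return reduce(g2, inputs)

print(g_folded([[9, 1], [7, 5], [8, 3]]))  # → [9, 8, 7]
```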

The total size of the inputs, and thus the size of the output, are also limited
by our model of everything being in memory. If the total size of the input
exceeds the available space, disk swapping leads to unmodeled behavior. This
means that if the computation nodes do not sufficiently decrease the size of the
total data entering the aggregation phase, the initial aggregation levels must
do so.

The aggregation time includes the time for
communication of results from the child to the parent. A node has only a single
network connection, so a linear increase in the number of equally sized inputs
results in a linear increase in the time for communication. This is modeled by
the aggregation function without needing to separate the times
for communication and the actual merging.
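A minimal sketch of this cost model, with per-item constants that are illustrative assumptions of ours rather than measured values:

```python
# Minimal sketch of the cost model described above; the per-item
# constants are illustrative assumptions, not measured values.
def node_time(fanout, avg_input_size, per_item_comm=1e-6, per_item_merge=2e-6):
    total_input = fanout * avg_input_size  # all inputs share one connection
    return total_input * (per_item_comm + per_item_merge)

# Doubling the number of equally sized inputs doubles the node time, so
# communication need not be modeled separately from merging.
assert node_time(8, 100) == 2 * node_time(4, 100)
```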

\subsection{Assumptions}

Our calculations in Section~\ref{sec:proofs} to find the optimal overlay rely on
assumptions about system behavior. Unmet assumptions mean the
optimality is not guaranteed but do not affect the
correctness of the output. Here we explain each assumption, our
reasoning, and its impact.

\begin{figure}
  \begin{tikzpicture}[scale=1.2]
   %horizontal plot
   \draw [-] (0,3.2) -- (6.4,3.2);
   \draw [-] (0,1.7) -- (6.4,1.7);
   
   %timescale
   \draw [loosely dotted, gray!125, thick] (.25,4.3) -- (.25,.2);
   \draw [loosely dotted, gray!125, thick] (2.25,3.1) -- (2.25,.2);
   \draw [loosely dotted, gray!125, thick] (3.25,1.6) -- (3.25,.2);
   \draw [loosely dotted, gray!125, thick] (4.25,3.1) -- (4.25,.2);
   \draw [loosely dotted, gray!125, thick] (6.25,4.3) -- (6.25,.2);
   \node (t0) at (.25,0) {$t_{0}$};
   \node (t1) at (2.25,0) {$t_{1}$};
   \node (t2) at (3.25,0) {$t_{2}$};
   \node (t3) at (4.25,0) {$t_{3}$};
   \node (t4) at (6.25,0) {$t_{4}$};
   
   %completely synchronous
   \node (n1) at (0,4.2) {$n1$};
   \node (n2) at (0,3.5) {$n2$};
   \draw [|-|, thick] (.25,4.2) -- (6.25,4.2);
   \draw [|-|, thick] (.25,3.5) -- (6.25,3.5);
   \path (.25,4.4) -- (6.25,4.4) node [midway] {$bandwidth/2$};
   \path (.25,3.7) -- (6.25,3.7) node [midway] {$bandwidth/2$};
   
   %not quite synchronous
   \node (n3) at (0,2.7) {$n1$};
   \node (n4) at (0,2) {$n2$};
   \draw [|-|, ultra thick] (.25,2.7) -- (2.25,2.7);
   \draw [-|, thick] (2.25,2.7) -- (4.25,2.7);
   \draw [|-, thick] (2.25,2) -- (4.25,2);
   \draw [|-|, ultra thick] (4.25,2) -- (6.25,2);
   \path (.25,2.9) -- (2.25,2.9) node [midway] {$full\, bandwidth$};
   \path (2.25,2.9) -- (4.25,2.9) node [midway] {$bandwidth/2$};
   \path (2.25,2.2) -- (4.25,2.2) node [midway] {$bandwidth/2$};
   \path (4.25,2.2) -- (6.25,2.2) node [midway] {$full\, bandwidth$};
   
   %even less synchronous
   \node (n5) at (0,1.2) {$n1$};
   \node (n6) at (0,0.5) {$n2$};
   \draw [|-|, ultra thick] (.25,1.2) -- (3.25,1.2);
   \draw [|-|, ultra thick] (3.25,0.5) -- (6.25,.5);
   \path (.25,1.4) -- (3.25,1.4) node [midway] {$full\, bandwidth$};
   \path (3.25,.7) -- (6.25,.7) node [midway] {$full\, bandwidth$};
  
  \end{tikzpicture}
  \caption{Communication time for two nodes sharing bandwidth with equal-size
  messages.}
  \label{fig:commsync}
\end{figure}

We model data as evenly distributed across the computation nodes.
Because we use in-memory data, we are limited in the amount of data that can be
stored and used at any one node. For large amounts of data it is
necessary to add more nodes to avoid accessing the disk, as pointed out in
prior work~\cite{rdds}. Even distribution is a fair assumption to maximize
the parallelism of computation.

We assume that the cost of the aggregation function depends on the total size of
the input, that is, the fanout multiplied by the average size of the input set
from the children. In practice this may be a slight simplification. For
instance, when merging ordered lists, the most efficient implementation keeps
the lists in a heap ordered by the first element of each list. The
aggregation function removes the first list, removes its first item, then
returns the rest of the list to the heap. This algorithm
depends on the fanout and the total combined size of the input separately. This
additional factor depends on the exact implementation of the aggregation.
In the cases we have seen, it has been a small factor (on the order of
$\log fanout$) and has not changed the optimal overlay from the calculated
optimum. The dominant factor in aggregation time is the size of the inputs.
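The heap-based merge just described can be sketched as follows (Python's standard library also provides `heapq.merge` for the same task; this explicit version shows where the $\log fanout$ factor comes from):

```python
import heapq

# Sketch of the heap-based merge described above: lists are kept in a
# min-heap keyed by their first element; each step pops the list with
# the smallest head, emits the head, and returns the remainder to the
# heap. Heap operations cost O(log fanout), separate from total input
# size.
def merge_sorted(lists):
    heap = [(lst[0], i, lst) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        head, i, lst = heapq.heappop(heap)
        out.append(head)
        if lst[1:]:
            heapq.heappush(heap, (lst[1], i, lst[1:]))
    return out

print(merge_sorted([[1, 4, 9], [2, 3], [5, 8]]))  # → [1, 2, 3, 4, 5, 8, 9]
```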

We assume communication time to be linear in the size of the input. This
may lead one to assume that we require all input to be available at the same
time, and thus that siblings must complete synchronously. However, there is a
degree of looseness in the synchrony requirement. Figure~\ref{fig:commsync}
shows that contemporaneous streams to the same node share bandwidth, so the
time required to complete the communication at a node is the same as long as no
group of nodes fails to start sending its results before its parent finishes
receiving the results from a sibling group of nodes. We do not consider partial
aggregation when this assumption fails, as it requires
higher programming complexity due to additional locking and race
conditions.

% If siblings are not even this close to synchrony, the
% parent node is required to wait for the final child to complete to
% communicate results before aggregating, and that waiting time
% represents wasted time that could be used to progress the aggregation. The delay
% is likely to propagate up the branch of the aggregation overlay. We do
% consider partial aggregation of received results while waiting for the final
% input, but this requires the results from the partial aggregation to be
% considered again once the final input has been received. As a result we have not
% seen significant advantages, and the programming complexity is higher due to
% additional locking and race conditions, which is likely to lead to more bugs.

The near-synchrony requirement implies that we expect homogeneity from nodes at
the same level. This applies to input %and output
 as much as to hardware.
Aggregation time depends on input size, so significantly different inputs can lead to
significantly different latency. Moreover, if the aggregation complexity
depends on the order of the input (as in some merge algorithms), the order of the
input data matters.
%The output of each level is used as input for the next, so
%these requirements apply to output as well. 

Homogeneous input follows from our model's even distribution of data. For some
aggregation and computation functions this expands the requirement to include
the distribution of data attributes. For instance, a grep query should return an
equal number of results from each leaf node. In other cases simply distributing
the size is sufficient. Ensuring this requirement may require a modicum of
analysis. The leeway in the synchrony requirement provided by the communication
behavior masks some heterogeneity, which we have found to be good enough in
practice.


% The homogeneity assumption works particularly well for cloud services where
% one can rent nodes simulating identical hardware and communication topology is a
% black box. If a system includes known heterogeneity there are
% optimizations which are not considered. This includes placing nodes with low
% mutual communication costs adjacent in the overlay or moving more
% powerful nodes to more impactful positions.

Homogeneous hardware is a reasonable assumption because 1) many
datacenters mass-order standard commercial hardware and 2) cloud services charge
different rates for different levels of service and provide performance as close
to identical as possible across instances of the same level. The variation
in service can nevertheless be unpredictable.

The homogeneity requirement applies only to nodes at the same
level. When access to the network topology is possible, it is possible to aggregate
within racks first, then within datacenters, and so on, as suggested by Yu,
Gunda, and Isard~\cite{ComputeAggregate}. Even when networking is a
black box, a user may purchase nodes with different computing power or RAM within
a cloud, perhaps to use at levels in the aggregation overlay where more data is
being processed.% or at the computation nodes while saving money with lower
% priced nodes at other levels.

When the sizes of the output and the input differ,
we model the growth as the same ratio at each level. This is, or can be made
to be, true for many applications. It is not necessarily true, e.g., in the case of
word count where leaves contain different sets of words. In
this case there may be greater proportional growth at the lower levels, where
each input contains a smaller set of disjoint words; at later levels the amount of
disjointness can be expected to be smaller relative to the amount of overlap.
Modeling this is data dependent and cannot be applied to the general
case.

Aggregation costs, including communication, are modeled as
monotonically increasing in input size and zero for zero input size.
There may
be non-linear setup overheads associated with some aggregation methods. We find
that modeling these complicates the analysis and that their impact
is very small in practice.

Communication at a node can be affected by other
nodes when many machines share network infrastructure or multiple machines
communicate with one.
This is especially true when TCP incast is present~\cite{ZhangA13}.
We assume either that TCP incast is resolved at the TCP level as
in~\cite{TCPincastsolution} or that communication time is
a minor factor. Our experiments appear to confirm this, so linear
modeling is sufficient.

Our final assumption %for the purposes of mathematical modeling 
is a full and balanced aggregation tree. Thus every node at the same
level is close to synchronized when all levels below it are. %If two
%branches are not equal height, the node which joins the two unbalanced sides
%needs to wait one aggregation level longer for one side.
Balancing is easy and is accomplished by our system. Fullness is impossible
unless the number of leaves is a power of the fanout (in the extreme, when the
fanout equals the number of leaves).
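A quick illustrative check (the helper name is ours) of which fanouts admit a full tree for a given leaf count:

```python
# Illustrative check (helper name is ours): a full overlay exists only
# when the leaf count is a power of the fanout; fanout equal to the
# leaf count is the trivial single-level case.
def admits_full_tree(leaves, fanout):
    if fanout < 2:
        return False
    while leaves % fanout == 0:
        leaves //= fanout
    return leaves == 1

print([f for f in range(2, 17) if admits_full_tree(16, f)])  # → [2, 4, 16]
```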
