% please textwrap! It helps svn not have conflicts across a multitude of lines.
%
% vim:set textwidth=78:

\begin{abstract}
With the emergence of large-scale data processing frameworks, a variety of
distributed applications are being hosted in today's data centers. Each of
these applications has its own networking requirements in terms of
bandwidth, degree of redundancy, isolation needs, and so on. A
single built-in network allocation mechanism may not be the best
approach for all applications. Allowing each application to custom
configure its network properties has gained some attention recently.
Approaches to enabling a custom-configurable network range from software
abstractions that implement the desired allocation schemes above the
network, to topology-aware frameworks that enforce such allocation policies
at the physical network layer.

In this report, we compare the performance of these two approaches. We use
\emph{Hadoop MapReduce} as the application of concern and measure the
performance of different MapReduce jobs in the context of two custom network
allocation frameworks. The first of these frameworks is \emph{Orchestra},
which implements network allocation in terms of the number of open
connections per job. The second system is \emph{Topology Switching}, which
allows an
job. The second system is \emph{Topology Switching}, which allows an
application to choose from amongst different optimization metrics, such as
bandwidth, isolation, redundancy or rate limiting, and configures the
network with desired parameters. For the purposes of our project, we focus
on rate limiting as the metric of optimization.

We found that Hadoop on top of Topology Switching performs approximately
the same as Hadoop on top of Orchestra. We believe that this may be due to
the overhead of re-allocating the rates across all transfers every time a
reducer is ready to fetch data from a mapper.
\end{abstract}

