% please textwrap! It helps svn not have conflicts across a multitude of
% lines.
%
% vim:set textwidth=78:

\section{Introduction}
With the rapid growth of large-scale data processing systems, data centers
today are fast evolving toward the ``cloud computing'' model. Many
different applications run in a multi-tenant fashion on the same cluster
of nodes, and a number of resource provisioning policies have been
deployed to manage the allocation of resources such as CPU, memory and
network amongst them. Unfortunately, the needs of these applications can
differ drastically, and uniform cluster-wide policies may not be the best
approach to resource allocation in such a context.

One such resource that is not efficiently managed by such
inflexible policies is the network. In distributed, data-intensive
applications, such as those deployed in data centers, the network is a key
shared resource amongst different applications, and also one of the main
bottlenecks. As such, managing the network efficiently is important for
improved job throughputs and increased efficiency of overall resource
utilization. In \cite{orchestra}, Chowdhury et al.\ argue convincingly
that optimizing data transfers is crucial to achieving improved job
performance. To this end, they implement \emph{Orchestra}, which manages
the allocation of network bandwidth amongst data transfers within each job,
as well as across different jobs in the cluster. It does so by allocating a
number of open connections to each \emph{flow} in a transfer proportional
to the amount of data to be transmitted through the flow. We adopt their
definition of a \emph{flow}: a point-to-point transfer of data between a
single source and destination within a data transfer operation. The
advantage of Orchestra is that it can be deployed at the application
level, which allows it to manage network resources while remaining
agnostic to the underlying network topology.
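Orchestra's proportional allocation scheme can be sketched as follows. This is a minimal illustration of the idea described above, not Orchestra's actual implementation; the function and flow names are ours, and the tie-breaking details (rounding, a minimum of one connection per non-empty flow) are illustrative assumptions:

```python
def allocate_connections(flow_sizes, total_connections):
    """Split a transfer's connection budget across its flows.

    Each flow receives a number of open connections proportional to the
    bytes it has to transmit, so larger flows get more of the budget.

    flow_sizes: dict mapping flow id -> bytes to transmit
    total_connections: connection budget for the whole transfer
    Returns: dict mapping flow id -> number of connections
    """
    total_bytes = sum(flow_sizes.values())
    if total_bytes == 0:
        return {flow: 0 for flow in flow_sizes}
    alloc = {}
    for flow, size in flow_sizes.items():
        share = round(total_connections * size / total_bytes)
        # Assumption: give every non-empty flow at least one connection
        # so that small flows are not starved entirely.
        alloc[flow] = max(1, share) if size > 0 else 0
    return alloc

# Example: a shuffle stage with three flows sharing 10 connections.
flows = {"m1->r1": 400_000_000, "m2->r1": 100_000_000, "m3->r1": 0}
print(allocate_connections(flows, 10))
```

With this weighting, the 400\,MB flow receives four times as many connections as the 100\,MB flow, which (under TCP's per-connection fair sharing) skews bandwidth toward the flows carrying the most data.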

In contrast, the authors of \cite{topology} propose a topology-aware
framework, \emph{Topology Switching}, for efficiently reconfiguring the
underlying network topology to suit the needs of the applications deployed
on top of it. Topology Switching lets each application select the
optimization metric that best suits the nature of its network workloads
and define its own topologies, routing schemes and network attributes,
such as imposing a different rate limit on each flow in a transfer.
Awareness of the underlying physical network allows the framework to make
decisions at a much finer granularity and in a more flexible manner, and
to balance the tradeoffs between different desirable properties of the
network.

In this report, we compare these two approaches to managing network
resources, using \emph{Hadoop MapReduce} as the benchmark for our
experiments. Hadoop provides a distributed platform for large-scale data
processing applications and is ideal for our purposes because of the
frequent data transfers incurred by each job. The rest of this report is
organized as follows. Section
\ref{sec:orchestra} gives an overview of the architecture of Orchestra and
its different components. Section \ref{sec:topology} describes the Topology
Switching framework. Section \ref{sec:hadoop} describes the Hadoop
instrumentation needed to enable it to use Orchestra. Section
\ref{sec:performance} presents our evaluation and the results. Finally, Section
\ref{sec:conclusion} concludes this report.
