% taken from hs's mary.tex, and tracing back to knuth.
\def\dash---{\kern.16667em---\penalty\exhyphenpenalty\hskip.16667em\relax}

\section{Hadoop MapReduce}
\label{sec:hadoop}

We chose to evaluate our implementation against Hadoop due to its
popularity as a data center application, as well as its relevance to the
issue of network resource allocation. Hadoop is a framework for running
large-scale distributed, data-intensive applications. Each job submitted
to Hadoop is divided into two data processing stages: a Map phase and a
Reduce phase. Data is transmitted from the Map phase to the
Reduce phase over the network by an all-to-all \emph{shuffle} data transfer
operation. Although a normal Hadoop workflow involves several other data
transfer operations, we focus only on the shuffle
operation in this report. The details of the Hadoop instrumentation we
performed for our evaluation are described in the following subsection.
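To make the shuffle concrete, the Map $\rightarrow$ shuffle $\rightarrow$ Reduce data path can be sketched as a small simulation. This is illustrative Python, not Hadoop code; the function names are our own, and the hash partitioner mirrors Hadoop's default behavior of routing each key to exactly one reducer:

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Run the user-provided map function over one mapper's input records."""
    out = []
    for rec in records:
        out.extend(map_fn(rec))
    return out

def partition(pairs, num_reducers):
    """Split one mapper's output into per-reducer partitions by key hash,
    so every occurrence of a key is routed to the same reducer."""
    parts = defaultdict(list)
    for key, value in pairs:
        parts[hash(key) % num_reducers].append((key, value))
    return parts

def shuffle(all_partitions, num_reducers):
    """All-to-all transfer: reducer r collects partition r from every mapper."""
    return [
        [kv for parts in all_partitions for kv in parts.get(r, [])]
        for r in range(num_reducers)
    ]
```

Because every mapper sends one partition to every reducer, the shuffle induces an all-to-all communication pattern over the network, which is the transfer operation we study.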

\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{hadoop.eps}
\caption{\figtitle{Hadoop Control Flow}
}
\label{fig:hadoop_flow}
\end{figure}

\paragraph{Hadoop Control Flow.}
The primary Hadoop component is the JobTracker, which accepts submitted
jobs. Hadoop maintains its own priority queue and schedules jobs for
execution based on their priorities. Map tasks (mappers) are spawned to
perform the Map phase, one per block of the input; the number of Reduce
tasks (reducers) that carry out the Reduce phase is configurable
via the JobConf object. Each map or reduce task invokes user-provided code
to perform the required data processing. The JobTracker assigns each mapper
or reducer to a TaskTracker. Each TaskTracker
in turn creates an instance of a TaskRunner object, which eventually launches a
child process for the map/reduce task. Each map or reduce task executes in
its own process to protect the TaskTracker against bugs in the user's code.
The TaskTracker periodically sends a heartbeat message to the
JobTracker to signal its liveness and report the progress of its
allocated tasks. The output of each MapTask is partitioned according to
the reducers, pre-sorted, and written to the local disk of its TaskTracker,
which notifies the JobTracker when the MapTask completes. The TaskTracker
of a reducer periodically polls the JobTracker to find out whether any map
output is available for copying. The reducer spawns Fetcher objects, which
copy output data from the mappers to the host on which the reducer resides.
The Hadoop control flow is shown in Figure~\ref{fig:hadoop_flow}.
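The reducer-side polling loop described above can be sketched as follows. This is an illustrative simulation in Python; the names are ours and do not match Hadoop's internal API, and where a real Fetcher would open an HTTP connection to the mapper's TaskTracker and copy the reducer's partition, we merely record the source host:

```python
def poll_for_map_outputs(completed_maps, fetched):
    """One polling round on a reducer's TaskTracker: determine which map
    outputs completed since the last round and fetch each new one.

    completed_maps: {map_task_id: host} as reported by the JobTracker.
    fetched: {map_task_id: host} already copied; updated in place.
    """
    copied = {}
    for map_id, host in completed_maps.items():
        if map_id not in fetched:
            # A real Fetcher would copy this reducer's partition of the
            # map output over HTTP; here we only record its origin.
            copied[map_id] = host
    fetched.update(copied)
    return copied
```

Repeating this round until all map outputs have been copied reproduces the incremental, pull-based nature of the shuffle: reducers begin fetching as soon as individual mappers finish, rather than waiting for the whole Map phase.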

\paragraph{Instrumentation.}
We instrumented the Hadoop code to invoke an ITC API each time a new
job is submitted to the JobTracker. The JobTracker registers the job with
the ITC and supplies its priority as part of the registration. As each
mapper of the job completes its task, it notifies the ITC of the amount
of data destined for each reducer. When all mappers have completed and the
job is ready to be scheduled for a shuffle data transfer, the ITC adds the
job to a priority queue. Once the job is ready to be scheduled, that is,
once no jobs of higher priority remain in the queue, the ITC invokes a TC
instance and assigns it to manage the transfer. The TC then allocates a
share of the network to each of the flows of its assigned transfer.
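The registration, notification, and scheduling steps above can be summarized in a short sketch. The class and method names below are illustrative only, not the actual ITC API; we assume, as is conventional, that a lower priority number means higher priority:

```python
import heapq
import itertools

class ITC:
    """Sketch of the Inter-Transfer Controller behavior described above."""

    def __init__(self):
        self._priority = {}      # job -> priority (lower number = higher)
        self._bytes = {}         # job -> {reducer: bytes of shuffle data}
        self._mappers_left = {}  # job -> mappers still running
        self._ready = []         # min-heap of jobs awaiting a shuffle
        self._tie = itertools.count()  # FIFO order within equal priority

    def register_job(self, job, priority, num_mappers):
        """Called when the JobTracker registers a newly submitted job."""
        self._priority[job] = priority
        self._bytes[job] = {}
        self._mappers_left[job] = num_mappers

    def mapper_done(self, job, bytes_per_reducer):
        """Called as each mapper finishes, reporting the amount of data
        it produced for every reducer. Once all mappers are done, the
        job joins the priority queue of pending shuffles."""
        for reducer, n in bytes_per_reducer.items():
            self._bytes[job][reducer] = self._bytes[job].get(reducer, 0) + n
        self._mappers_left[job] -= 1
        if self._mappers_left[job] == 0:
            heapq.heappush(self._ready,
                           (self._priority[job], next(self._tie), job))

    def schedule_next(self):
        """Invoked when a TC instance is available: hand it the
        highest-priority pending shuffle, or None if none is waiting."""
        if not self._ready:
            return None
        _, _, job = heapq.heappop(self._ready)
        return job, self._bytes[job]
```

The per-reducer byte counts accumulated during `mapper_done` are exactly what the assigned TC needs in order to size each flow of the shuffle, as described next.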

When Orchestra is used as the sole arbiter of network resources, the TC
accomplishes this by assigning a number of connections to each flow in
proportion to the amount of data to be sent over that flow. When Topology
Switching is employed as well, the number of connections per flow is fixed
at $1$, and a rate limit proportional to the amount of data to be sent is
set on each flow.
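The two allocation policies can be sketched as follows. This is an illustrative simplification under assumed parameters (a fixed connection pool in Orchestra mode, a single shared link capacity in Topology Switching mode); the real TC operates on its assigned transfer's measured per-flow byte counts:

```python
def orchestra_connections(flow_bytes, total_connections):
    """Orchestra mode: divide a pool of connections among the flows of a
    transfer in proportion to each flow's data volume (at least one
    connection per flow)."""
    total = sum(flow_bytes.values())
    return {
        flow: max(1, round(total_connections * b / total))
        for flow, b in flow_bytes.items()
    }

def topology_switching_rates(flow_bytes, link_capacity):
    """Topology Switching mode: one connection per flow, with a rate
    limit proportional to the flow's share of the transfer's data."""
    total = sum(flow_bytes.values())
    return {flow: link_capacity * b / total for flow, b in flow_bytes.items()}
```

In both policies a flow carrying three times as much data receives roughly three times the network share; the policies differ only in the mechanism (connection count versus explicit rate limit) used to enforce that share.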
