% please textwrap! It helps svn not have conflicts across a multitude of
% lines.
%
% vim:set textwidth=78:

\section{Orchestra: Overview and Architecture}
\label{sec:orchestra}

\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{tcanditc.eps}
\caption{\figtitle{Architecture Overview}
}
\label{fig:tcitcts}
\end{figure}

As part of this project, we implemented a three-tier framework. As shown in
Figure \ref{fig:tcitcts}, the highest tier is the application, which, in our
deployment, is Hadoop. The middle tier is Orchestra, which provides a layer
of abstraction between the application and the network. The bottom tier is
the Topology Switching framework, which customizes the network according to
the needs of the application. Section \ref{sec:topology} describes the
Topology Switching framework. Here, we present our implementation of
Orchestra.

Orchestra, as described in \cite{orchestra}, consists of a hierarchical
control structure comprising an \emph{Inter Transfer Controller} and a
\emph{Transfer Controller}. The Inter Transfer Controller, or ITC, manages
the network across the different transfers in the cluster. It is
responsible for making scheduling decisions amongst the different
applications running on the cluster. The Transfer Controller, or TC, is
responsible for managing a single transfer within the cluster. The ITC
allocates a share of the network to each transfer by means of certain
resource allocation policies, and the TC responsible for each transfer
arbitrates the division of that share amongst the different flows in the
transfer. The roles of the ITC and the TC are explained in more detail in
the following subsections. Since Hadoop is the application we have
instrumented Orchestra for, we use Hadoop terminology to describe
operations; a job in this paper refers to a Hadoop job. Orchestra can
easily be extended to work with other applications. Below are the
descriptions of each of these components and how they interact to provide
the necessary division of resources.

\subsection{Inter Transfer Controller}
\label{sub:itc}

In a real production environment, multiple simultaneous jobs may be running
within a cluster. As such, multiple data transfers may take place
concurrently in the cluster. The Inter Transfer Controller (ITC) implements
scheduling policies in order to allocate the network amongst the different
jobs in a supervised manner. Two different policies considered in
\cite{orchestra} are:

\paragraph{Weighted Flow Assignment with FIFO scheduling.}
With Weighted Flow Assignment, the ITC assigns a percentage of total
bandwidth available to each job based on the priority of the given job. 

\paragraph{Priority scheduling with FIFO.}
With Priority scheduling, transfers are strictly scheduled in order of the
priorities of their containing jobs. At any time, only a single transfer is
scheduled, and the total available bandwidth is divided between its flows
in the ratio of the data being sent over each flow.
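The difference between the two policies can be sketched as follows. This is
a hypothetical illustration, not code from the Orchestra implementation;
the function and field names are our own.

```python
# Hypothetical sketch of the two ITC allocation policies described above.
# Job records carry an illustrative name, priority, and arrival order.

def weighted_flow_assignment(total_bw, jobs):
    """Split the total bandwidth among all active jobs by priority weight."""
    total_weight = sum(job["priority"] for job in jobs)
    return {job["name"]: total_bw * job["priority"] / total_weight
            for job in jobs}

def priority_fifo_schedule(jobs):
    """Pick the single job to schedule: highest priority, FIFO among ties."""
    # Sort key: descending priority, then ascending arrival order.
    return min(jobs, key=lambda j: (-j["priority"], j["arrival"]))
```

Under weighted flow assignment every job makes progress at once, whereas
under priority scheduling exactly one transfer holds the network at a time.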

Chowdhury et al.\ have shown the effect of employing different scheduling
policies on job completion times in Hadoop in \cite{orchestra}. As their
results suggest, priority-based scheduling, with a FIFO policy for breaking
ties, achieves higher overall job throughput than weighted flow assignment
of the available network between the pending jobs.

In accordance with this, we adopt the same policy in our implementation.
Each job is assigned a priority, and the network transfers of each job are
scheduled in order of their priority. Within each priority level, FIFO
scheduling is used to select the next transfer to be scheduled. However,
for ease of implementation, we have slightly modified the flow of actions
that take place when a transfer is ready to be scheduled, as compared to
the design presented in \cite{orchestra}. When a job needs to transfer data
over the network, it contacts the ITC with its assigned priority, and the
ITC adds the job to a priority queue. When there are no jobs of higher
priority available, the ITC schedules the job for a transfer. At this
point, the ITC invokes a TC instance to handle the transfer for the given
job. When the transfer for the given job is completed, the ITC receives a
notification to that effect from the TC and accordingly prepares the next
job to be scheduled.
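The ITC control loop described above can be sketched as a priority queue
with a FIFO tie-breaker and a single active transfer. This is a minimal
sketch under our own naming assumptions, not the actual implementation; in
particular, the point where a TC instance would be invoked is reduced to a
comment.

```python
import heapq
import itertools

# Minimal sketch of the ITC scheduling loop: jobs wait in a priority queue,
# at most one transfer is active, and completion notifications from the TC
# trigger the next scheduling decision. Names are illustrative.
class InterTransferController:
    def __init__(self):
        self._queue = []               # min-heap of (-priority, seq, job)
        self._seq = itertools.count()  # FIFO tie-breaker within a priority
        self._active = None            # at most one scheduled transfer

    def submit(self, job, priority):
        """A job contacts the ITC with its assigned priority."""
        heapq.heappush(self._queue, (-priority, next(self._seq), job))
        self._maybe_schedule()

    def transfer_done(self, job):
        """Notification from the TC that the job's transfer completed."""
        self._active = None
        self._maybe_schedule()

    def _maybe_schedule(self):
        if self._active is None and self._queue:
            _, _, job = heapq.heappop(self._queue)
            self._active = job  # here the ITC would invoke a TC instance
```

Negating the priority turns Python's min-heap into a max-priority queue,
and the monotonically increasing sequence number enforces FIFO order among
jobs of equal priority.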

\subsection{Transfer Controller}
\label{sub:tc}

Orchestra's Transfer Controller (TC) is responsible for supervising the
sharing of the network amongst the different flows in a given transfer.
Orchestra chooses to allocate the network in terms of the number of open
connections permitted per job. The TC is responsible for distributing these
connections between the different flows based on the amount of data that
needs to be transferred over each flow. In other words, if, within a
transfer, two different sources send different amounts of data to the same
destination, the overall completion time of the transfer can be reduced by
assigning a higher share of the network to the source with the larger
amount of data to transfer. This follows directly from the well-known rule
that any parallel operation is bounded by its slowest participant.

We implement the TC in a manner identical to the design presented in
\cite{orchestra}. When the ITC invokes a TC for a given transfer, the
TC receives a list of different flows in the transfer in the form of
source-destination pairs. For each flow within the transfer, the TC also
receives the amount of data to be transmitted over the flow. The TC employs
weighted fair share scheduling between the different flows based on this
information. When Orchestra is run without the support of the Topology
Switching framework, this weighted fair division is done via the number of
permitted open connections. In the presence of Topology Switching, rate
limiting is employed to achieve the same optimization. Section
\ref{sec:topology} describes this in more detail.
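A weighted fair division of this kind, with each flow's share proportional
to its data size, could be sketched as follows. The formulas follow the
proportional-share rule described above; the function names, the scaling
against the smallest and largest flows, and the link capacity constant are
our own illustrative assumptions.

```python
# Sketch of the TC's weighted fair share across the flows of one transfer,
# keyed by (source, destination) pairs with byte counts as weights.
MAX_BANDWIDTH = 1000  # illustrative link capacity, in Mbps

def connections_per_flow(flow_bytes):
    """Without Topology Switching: open connections proportional to data,
    normalized so the smallest flow gets one connection."""
    smallest = min(flow_bytes.values())
    return {f: max(1, round(b / smallest)) for f, b in flow_bytes.items()}

def rate_limits(flow_bytes):
    """With Topology Switching: per-flow rate limits proportional to data,
    normalized so the largest flow gets the full link capacity."""
    largest = max(flow_bytes.values())
    return {f: MAX_BANDWIDTH * b / largest for f, b in flow_bytes.items()}
```

In both variants a flow carrying three times the data receives three times
the share, so all flows of the transfer tend to finish together.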

\ignore{
The number of connections is calculated as
Data\_per\_mapper / Minimum\_Data\_To\_Be\_Sent.

The rate-limit is calculated as
MAXIMUM\_BANDWIDTH * Data\_per\_mapper / Maximum\_Data\_To\_Be\_Sent.
}

\ignore{
2. The ITC maintains a barrier and does not allow transfer of data to start
from the mappers to the reducers till all mappers are ready.

3. When the ITC has received a green signal from all the mappers, it
contacts the TC with the data each mapper will be sending to each reducer.

4. In the case of Orchestra only, the number of connections from each
mapper to each reducer is maintained, and when a reducer contacts the TC,
the TC returns the number of connections the reducer needs to create to the
mapper to receive data. We have made changes in Hadoop for the reducer to
create that many TCP connections to the mapper. The mapper divides the data
into chunks and sends each chunk over a connection.

5. In the case of Topology Switching, each time a reducer contacts the TC
informing it that it wants to start pulling data, the TC recalculates the
rate-limits of all mappers to that reducer. The TC then sends a JSON file
to Topology Switching to set up the new rate-limits at the mappers.

6. As each reducer finishes, it informs the TC, and once all transfers are
completed, the TC informs the ITC. The ITC then schedules the next job.
}

