% taken from hs's mary.tex, and tracing back to knuth.
\def\dash---{\kern.16667em---\penalty\exhyphenpenalty\hskip.16667em\relax}

\section{Performance}
\label{sec:performance}

\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{orchestra_vs_topology.eps}
\caption{\figtitle{Job completion time: plain Orchestra vs.\ Orchestra with topology switching}
}
\label{fig:perf_graph}
\end{figure}


Our tests used the word count application that ships with Hadoop's
MapReduce example programs. Our Hadoop cluster had one NameNode and
three DataNodes, i.e., one JobTracker node and three TaskTracker
nodes. Each node was a Linux machine on the sysnet cluster. We
submitted multiple versions of the word count program to the
JobTracker; each version set a different number of reducers and a
different priority via the API of the \texttt{Job} class. We ran all
of these versions first against plain Orchestra and then against
Orchestra with topology switching, and compared their performance. We
found that jobs submitted to Orchestra with topology switching
completed in nearly the same time as jobs submitted to plain
Orchestra: three jobs took 39 seconds on plain Orchestra, while the
same three jobs took 41 seconds on Orchestra with topology switching.
We suspect that plain Orchestra is slightly faster because, with
topology switching, each time a reducer starts a new transfer the
rate limits of all mappers connected to that reducer must be
readjusted. This process can be cumbersome when the number of mappers
is large, and we speculate that each rate-limit readjustment adds to
the delay. Figure~\ref{fig:perf_graph} illustrates this observation.
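For concreteness, the following sketch shows how a word count variant's reducer count and priority might be set (a sketch assuming the \texttt{org.apache.hadoop.mapreduce} API; the class name, the specific values, and the omitted mapper/reducer wiring are illustrative, not our exact code):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobPriority;

public class SubmitVariant {
    public static void main(String[] args) throws Exception {
        // One word count variant: reducer count and job priority are
        // the two knobs varied across the submitted versions.
        Job job = Job.getInstance(new Configuration(), "wordcount-variant");
        job.setNumReduceTasks(3);          // varied per version
        job.setPriority(JobPriority.HIGH); // varied per version
        // Mapper/reducer classes and input/output paths omitted.
        job.submit();
    }
}
```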

One very useful property of the rate-limiting approach was that it
required only minimal changes to Hadoop. The Orchestra-only changes,
by contrast, required us to add a substantial amount of code to
Hadoop to create new connections and to merge the data from the
different connections correctly. This observation is also reflected
in Figure~\ref{fig:perf_graph}.
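Our conjecture about readjustment cost can be made concrete with a toy model (our own sketch, not Orchestra's actual implementation): when a reducer opens a new transfer, its inbound capacity is re-split evenly across all of its mapper connections, so every existing connection's limit must be rewritten, making each readjustment linear in the number of mappers.

```java
import java.util.ArrayList;
import java.util.List;

public class RateLimitSketch {
    // Toy model of per-connection rate limits at one reducer. Each
    // new transfer forces a pass over ALL existing connections, so
    // the cost of a readjustment grows with the number of mappers.
    static List<Double> addTransfer(List<Double> limits, double capacityMbps) {
        List<Double> updated = new ArrayList<>(limits);
        updated.add(0.0);                  // the new mapper connection
        double share = capacityMbps / updated.size();
        for (int i = 0; i < updated.size(); i++) {
            updated.set(i, share);         // rewrite every limit: O(m)
        }
        return updated;
    }

    public static void main(String[] args) {
        List<Double> limits = new ArrayList<>();
        double capacity = 100.0; // reducer inbound capacity in Mbps (illustrative)
        for (int m = 1; m <= 3; m++) {
            limits = addTransfer(limits, capacity);
        }
        System.out.println(limits); // each of the 3 connections gets capacity/3
    }
}
```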



%%% Local Variables:
%%% TeX-master: "main"
%%% End:
