\section{Performance}
\subsection{Expectations}
The BLAST algorithm compares each input sequence against all
sequences in a database, and the comparisons for one input sequence
are independent of those for every other sequence.  Because the
comparisons are decoupled in this way, it is trivial to distribute
sequences to different computers for execution.  Runtime is expected
to grow linearly with the number of input sequences, setting aside job
startup time; reducing the number of sequences in a job should
therefore produce a roughly proportional reduction in runtime.
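The sequence-wise division described above can be sketched as follows.
This is a hypothetical illustration; the function name and chunking
details are assumptions, not BioCompute's actual implementation.

```python
def split_queries(fasta_text, per_job):
    """Split multi-FASTA text into chunks of at most per_job sequences.

    Each chunk becomes the input of one independent job; since the
    comparisons are decoupled, the chunks can run on different machines.
    """
    # Each query record starts at a '>' header line.
    records, current = [], []
    for line in fasta_text.splitlines():
        if line.startswith(">") and current:
            records.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        records.append("\n".join(current))
    # Group records into batches of per_job sequences, one batch per job.
    return ["\n".join(records[i:i + per_job])
            for i in range(0, len(records), per_job)]
```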


%todo:figure illustrating trivial job division

\subsection{Preliminary Results}

To test these expectations, an incoming query file was divided into
parts of roughly equal size for submission to Condor, a distributed
computing platform. A 2.6GB version of the {\it nr} database was fully
replicated onto 16 machines using the Chirp file system to avoid
multiple processes accessing the same database over the network. To
measure the performance gain, query files containing
1, 2, 3, \ldots, 239, 240 queries were submitted to {\it Condor
Blast}, and the total processing time of each query set was measured
and recorded.
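A minimal HTCondor submit description for the per-batch jobs might
look like the following. The executable name, flags, and file layout
here are assumptions for illustration (the legacy {\tt blastall}
front-end with its {\tt -p}/{\tt -d}/{\tt -i}/{\tt -o} options), not
the exact configuration used in these tests:

```
# Illustrative HTCondor submit description; paths and flags are assumed.
universe   = vanilla
executable = blastall
arguments  = -p blastn -d /chirp/nr/nr -i query_$(Process).fa -o result_$(Process).out
output     = job_$(Process).stdout
error      = job_$(Process).stderr
log        = blast.log
queue 8
```

Each queued job receives a distinct batch of queries via the
{\tt \$(Process)} macro, so the batches execute independently on
whichever nodes Condor matches.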

Three configurations are compared in Figure~\ref{prelimgraph}:


\begin{figure}
\centering
%\includegraphics[width=.9\columnwidth]{perf_graph.pdf}
\includegraphics[scale=.7,angle=90]{perf_graph.pdf}
\caption{Preliminary Performance of BLAST distributed sequence-wise
  over Condor}
\label{prelimgraph}
\end{figure}



\begin{itemize}
\item {\it Condor Blast} shows runtimes for queries distributed as
described above;
\item {\it Blast} runs the $n$-query job against a single instance
of the database residing elsewhere on the network;
\item {\it Parrot Blast} runs a single, undivided $n$-query job against
a locally stored copy of the database.
\end{itemize}

This test shows that {\it Condor Blast} does indeed offer performance
benefits over normal BLAST: the runtime for {\it Condor Blast}
remained relatively constant as the query set grew, while the runtimes
of both {\it Parrot Blast} and {\it Blast} increased linearly with
query set size.  These benefits scale as long as the number of Condor
jobs submitted is smaller than the number of available nodes. The
largest query set in this test was 240 queries, and incoming query
files were split into batches of 30 queries per job, so at most 8
machines were utilized at a time. On a 16-machine cluster, the trend
for the {\it Condor Blast} line should in theory hold until 480
queries, after which it would step upward because multiple jobs would
have to run on the same machine.
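The node-utilization reasoning above can be made explicit with a small
model, assuming each job occupies one node at a time (a simplification
of Condor's actual scheduling):

```python
import math

def machines_used(total_queries, queries_per_job, cluster_size):
    """Nodes occupied when each job takes one node for its whole run."""
    jobs = math.ceil(total_queries / queries_per_job)
    # Once jobs outnumber nodes, some nodes must run multiple jobs
    # and the flat runtime trend steps upward.
    return min(jobs, cluster_size)
```

With 30 queries per job, 240 queries occupy 8 of the 16 nodes, and the
cluster saturates at 480 queries, matching the prediction above.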





\subsection{Characterizing Job Executions}
Both the input-division and result-merging steps of a BioCompute job
are currently implemented as algorithms that run in linear time with
respect to the number of queries submitted.  With a small number of
queries, the time taken by these steps is negligible, and it has not
been observed to be significant in any job submitted so far.  If
required in the future, these steps can be implemented as additional
Condor jobs and executed in a remote, distributed fashion.
%todo: steps: {divide O(n), process, reduce O(n)}
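The linear-time reduce/merge step amounts to concatenating the per-job
outputs in submission order. The sketch below assumes output files
named {\tt result\_<jobindex>.out} (a hypothetical convention, not
BioCompute's actual naming):

```python
import re

def merge_results(output_paths):
    """Reduce step: concatenate per-job outputs in submission order.

    Runs in linear time in the number of outputs; assumes files named
    result_<jobindex>.out and plain-text BLAST output.
    """
    def job_index(path):
        m = re.search(r"result_(\d+)\.out$", path)
        return int(m.group(1)) if m else 0

    merged = []
    for path in sorted(output_paths, key=job_index):
        with open(path) as f:
            merged.append(f.read())
    return "".join(merged)
```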




\subsection{Task Granularity}


The number of sequences, N, in each division of sequences is a
tunable parameter.  Assuming a low startup overhead per job, a
lower N is desirable.  In a real-world environment, however, many
factors could affect the optimal N, including the current job load on
Condor and the amount of churn among Condor jobs: a job executing on a
node where the needed sequence database is already loaded in memory
from a previous job might read less from disk and thus incur a lower
startup penalty.



A 580-sequence input file was tested on BioCompute with values of 10,
25, 50, and 100 for N, spawning 58, 24, 12, and 6 individual Condor
jobs, respectively.  As shown in Table~\ref{granularity-580}, the jobs
with N = 25 ran much faster than the jobs with higher N values, while
the jobs with N = 50 and N = 100 had similar execution times.



Runtime improves dramatically when N is small enough to utilize the
entire cluster of BioCompute machines effectively. These results
indicate that N should be chosen so that enough jobs are spawned to
saturate the available nodes.



Additionally, a 2126-sequence input file was tested with values of 10,
25, 50, and 200 for N. Table~\ref{granularity-2126} shows a large
speed improvement from N = 200 to N = 25: from 2891 seconds to 1058
seconds.  When N = 10, however, 213 jobs were spawned and execution
took 1282 seconds, indicating the approximate location of the lower
limit on the optimal value of N.
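The rule suggested by these measurements can be sketched as a simple
heuristic: pick N so that roughly one job lands on each available
node, with a floor below which per-job startup overhead dominates. The
function name and the floor of 10 are assumptions drawn from the
observations above, not a formula used by BioCompute:

```python
import math

def choose_batch_size(total_queries, num_nodes, min_batch=10):
    """Heuristic N: saturate the nodes, but avoid jobs so small that
    startup overhead dominates (the lower limit observed near N = 10)."""
    n = math.ceil(total_queries / num_nodes)
    return max(n, min_batch)
```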


Choosing a smaller N also yields more granular status reporting:
Condor maintains a logfile in the master directory indicating how many
jobs have been spawned and their statuses, so lower values of N allow
finer-grained progress to be presented to the user using only Condor
job statuses.  On unreliable systems, the finer granularity of smaller
jobs also means less computation is wasted when an executing node
fails.


\begin{table}
\begin{center}
\begin{tabular}{|c|r|r|r|}
\hline
N & CPU Time (s) & Wall Clock Time (s) & Jobs Spawned \\
\hline
10  & 6,523 & 266 & 58 \\
25  & 5,230 & 327 & 24 \\
50  & 5,000 & 723 & 12 \\
100 & 4,368 & 790 & 6 \\
\hline
\end{tabular}

\caption{Execution time and job granularity for 580 input sequences.}
\label{granularity-580}


\end{center}
\end{table}

\begin{table}
\begin{center}
\begin{tabular}{|c|r|r|r|}
\hline
N & CPU Time (s) & Wall Clock Time (s) & Jobs Spawned \\
\hline
10  & 29,855 & 1282 & 213 \\
25  & 26,046 & 1058 & 86 \\
50  & 24,291 & 1151 & 43 \\
200 & 23,036 & 2891 & 11 \\

\hline
\end{tabular}

\caption{Execution time and job granularity for 2126 input sequences.}
\label{granularity-2126}


\end{center}
\end{table}

\subsection{Future Work}
Many variables potentially relevant to distributed BLAST performance
have not been examined in this context.  The size of the database
remained constant across tests, but changing it may alter the
scalability characteristics of job execution~\cite{bioportal}.  In
addition, the BLAST output formats differ widely in size and data
layout, which may affect the total job execution time and the
execution profile of the division and reduce/merge stages.
Furthermore, these tests were conducted sequentially and in isolation
from one another, albeit in a shared computing environment with other
jobs.  This is not a typical usage pattern; users of BioCompute are
expected to submit long jobs that execute concurrently.  Testing two
jobs executing in tandem could show whether multiple BioCompute jobs
with small input query sets can effectively harness BioCompute by
selecting different nodes on which to work.  It could also reveal
inefficiencies when nodes alternate between processing two different
BioCompute jobs on different databases, potentially causing a higher
cache miss rate in memory.  Finally, BioCompute has processed jobs
orders of magnitude larger than the test cases presented here, and
tests should be conducted on jobs of that scale to determine their
execution profile.



%output format
%Investigate Distributing the BLAST executable
