\section{Abstract}
%Providing a Distributed Computing Model for Searching Genome Datasets


In recent years, the size of bioinformatics datasets has grown
exponentially, a trend that shows no signs of abating.  Established
genome search algorithms exist, most notably BLAST, which identifies
similarities between a genomic sequence and a database of other
sequences; however, the growth of computer storage and datasets has
outstripped the growth in processing power.


Analysis of the problem domain and performance measurements indicate
that a distributed computing model can provide significant and
scalable performance benefits to BLAST.  Genome search is
data-intensive, with sequence databases frequently reaching several
gigabytes in size; established distributed computing models designed
for compute-bound workloads do not suffice.


\singlespace
The aim of the project is threefold:

\begin{enumerate}
\item to build a framework for executing batch bioinformatics 
jobs using existing distributed computing resources, specifically the
Condor pool at Notre Dame;
\item to provide a user-friendly tool for bioinformatics researchers to process
BLAST search queries;
\item to provide users with a useful and unobtrusive record of the
queries they have conducted.
\end{enumerate}
\doublespace
A system called BioCompute was developed atop Condor to meet these
goals.  Condor is a distributed batch computing system used by many
institutions around the world.  Notre Dame's Condor pool comprises
several hundred machines across campus.



Using BLAST as a reference application for the first goal, a model was
developed in which input queries are distributed across a static pool
of hosts pre-seeded with multi-gigabyte datasets.  Currently,
BioCompute runs on 32 nodes.  The interface and framework were
constructed with modularity in mind in order to facilitate adding new
applications in the future.  To satisfy the second goal, a web
interface was developed to submit, monitor, view, and interpret BLAST
jobs and results.  This interface accommodates users with a wide
range of familiarity with the original BLAST tool.  In satisfaction of
the third goal, the interface records the parameters and details of
each job submission to ease reproducing results and to provide a paper
trail of conducted queries, along with comments on the query results.
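The scatter model described above can be sketched briefly: a batch of
FASTA-formatted query sequences is split round-robin across the static
pool of pre-seeded hosts.  This is an illustrative sketch only; the
host names and helper functions are hypothetical and do not reflect
BioCompute's actual interfaces.

```python
def parse_fasta(text):
    """Yield (header, sequence) records from FASTA-formatted text."""
    header, seq = None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:].strip(), []
        else:
            seq.append(line.strip())
    if header is not None:
        yield header, "".join(seq)

def scatter_queries(records, hosts):
    """Assign each query record to a host in round-robin order."""
    assignment = {h: [] for h in hosts}
    for i, rec in enumerate(records):
        assignment[hosts[i % len(hosts)]].append(rec)
    return assignment

# Hypothetical static 32-node pool, mirroring the deployment size above.
hosts = ["node%02d" % i for i in range(32)]
queries = ">q1\nACGT\n>q2\nTTGA\n>q3\nGGCC\n"
plan = scatter_queries(list(parse_fasta(queries)), hosts)
```

Each host then runs BLAST locally against its pre-seeded copy of the
database, so only the small query partitions travel over the network.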



The functional goal of this tool is to empower biologists to conduct
large BLAST searches in less time by hiding the implementation
details of the underlying computing model.
