\section{Handling Sequence Databases}

\subsection{Distributing Databases}


Distributing BLAST across a set of nodes differs from the MapReduce
model in its dependency on large databases.  In MapReduce, all inputs
to the program are expressed as splits of a list, whereas BLAST has
the additional dependency of a database.  It is not feasible to
distribute the databases on demand for each job instance: the
databases are large, and network bandwidth is both relatively scarce
and a frequent bottleneck in distributed systems.



Complete copies of the required databases are stored locally at each
node using the Chirp distributed file-system.  Databases are added by
first staging the requisite files in a master copy that is not used
for production job execution.  To enable reproducibility of results,
once a database is added to the system under a specific name and
release date, it is never updated in place.  Rather, the updated
version of the database is distributed under a different release date.
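This versioning policy amounts to an append-only registry keyed by
name and release date.  A minimal sketch, in which the class, method
names, and example paths are illustrative rather than part of
BioCompute:

```python
class DatabaseCatalog:
    """Append-only registry of BLAST databases.

    Each database is identified by (name, release_date); once
    registered, an entry is never modified.  An updated database is
    registered under a new release date instead.
    """

    def __init__(self):
        self._entries = {}

    def add(self, name, release_date, path):
        key = (name, release_date)
        if key in self._entries:
            # Refuse in-place updates to preserve reproducibility.
            raise ValueError(
                "%s (%s) already registered; distribute the update "
                "under a new release date" % (name, release_date))
        self._entries[key] = path

    def lookup(self, name, release_date):
        return self._entries[(name, release_date)]
```

Under this scheme, a job that records the name and release date of the
database it queried can always be re-run against the identical data.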




Adding a new database to the system is handled by a set of scripts
that first push the new database onto a single node in the cluster
and then distribute the data across the cluster by calling the
{\it chirp\_distribute} function provided by the Chirp file-system.
This function efficiently distributes a file or directory from one
node to multiple specified Chirp nodes.
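The push-then-fan-out step might be driven as follows.  This is a
sketch only: it assumes {\it chirp\_distribute} is invoked as a
command taking a source host, a source path, and a list of target
hosts; the exact invocation should be checked against the Chirp
documentation, and the host names and paths are hypothetical.

```python
import subprocess

def build_distribute_command(source_host, db_path, target_hosts):
    """Assemble the argv that copies db_path from source_host to
    every host in target_hosts (assumed chirp_distribute syntax)."""
    return ["chirp_distribute", source_host, db_path] + list(target_hosts)

def distribute_database(source_host, db_path, target_hosts):
    # Push the staged database from the single seed node out to
    # the rest of the cluster in one call.
    cmd = build_distribute_command(source_host, db_path, target_hosts)
    subprocess.run(cmd, check=True)
```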



There may be a significant period of time during which a newly added
database is stored locally on only a fraction of the BioCompute
nodes.  Upon arriving at a node for execution, each BioCompute job
checks for the presence of its required database.  If the database is
not stored on the machine, the job exits with an exit code indicating
that the database could not be found there.  The Condor job handler
will then reschedule the job at another node.  In this way, new
databases can be added to the production system and queried against
while existing on only a portion of the cluster.
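The check-and-retry behavior can be sketched as a small job wrapper.
The exit code value, directory layout, and function names below are
assumptions for illustration, not BioCompute's actual conventions.

```python
import os
import subprocess
import sys

# Exit code telling the scheduler that the database was absent on
# this node, so the job should be rescheduled elsewhere
# (the specific value is illustrative).
EXIT_DB_NOT_FOUND = 42

def database_present(db_dir):
    """True if a local copy of the required database exists here."""
    return os.path.isdir(db_dir)

def run_blast_job(db_dir, blast_cmd):
    if not database_present(db_dir):
        # Signal Condor to reschedule this job on another node.
        sys.exit(EXIT_DB_NOT_FOUND)
    # Database is local; run the actual comparison and propagate
    # its exit status.
    result = subprocess.run(blast_cmd)
    sys.exit(result.returncode)
```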


\subsection{Treating BLAST Databases Atomically}
\label{sec:atomicdb}


In BioCompute, the BLAST databases are treated as atomic; a complete
copy of each database is stored locally on each node, and the split
of the original input sequence is compared against the whole copy of the
database.  Strictly speaking, each BLAST database is not
atomic.  It is possible to split a database along sequence boundaries
and compare input splits against database splits.  Treating databases
as atomic elements, however, greatly reduces the complexity of the
``reduce'' step for the BLAST abstraction: the original BLAST
algorithms use domain-specific heuristics to determine the relevance
of results from the comparison between an input sequence and the
database, and order results accordingly.  By maintaining the BLAST
databases intact and whole, BioCompute avoids the necessity of
reimplementing these heuristics.  Each input sequence will have been
compared against the whole database and the result set will already be
ordered according to domain relevance.  In addition, treating
databases as atomic elements simplifies other processes including
adding and distributing databases, distributing jobs among nodes, and
adding nodes to the BioCompute pool.
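Because every input split is compared against the complete database,
combining per-split outputs requires no cross-fragment merging or
re-scoring: the reduce step reduces to concatenation in original
input order.  A minimal sketch of that step, in which the result
records and split structure are illustrative:

```python
def reduce_blast_outputs(split_outputs):
    """Combine per-split BLAST outputs into one result set.

    Each element of split_outputs holds the result records for one
    input split, already ordered by BLAST's own relevance heuristics
    because the split was run against the whole database.  No
    re-ranking across database fragments is needed, so the reduce
    step is a simple concatenation in input order.
    """
    combined = []
    for output in split_outputs:
        combined.extend(output)
    return combined
```

Had the database itself been split, this step would instead have to
merge partial hit lists per query and reproduce BLAST's relevance
ordering, which is precisely the complexity BioCompute avoids.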





