\section{Related Work}

\subsection{Distributed Systems}
\subsubsection{MapReduce}



MapReduce~\cite{map-reduce} is a distributed programming model and
computing system developed at Google for data-intensive applications.
It is modeled and named after the map and reduce primitives in LISP: a
map function is applied to each item in a list to produce a list of
resulting items, and a reduce function is executed on lists of the
resultant items to produce a scalar result.  By restricting users to
this functional programming model, MapReduce can use a common
distributed execution model for a variety of real-world applications.



In their programs, MapReduce users specify a map function and a reduce
function.  The input data is split into pieces, and each piece is fed
to an instance of the map function, which emits intermediate key-value
pairs that are grouped by key and fed to instances of the reduce
function.  By forcing users to write their programs in a manner which
emphasizes modular and independent computations, MapReduce can
efficiently distribute execution of a program across clusters of
thousands of machines.
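The map/reduce flow described above can be illustrated with a minimal
in-memory sketch in plain Python (the classic word-count example;
function names are illustrative, and this is of course not Google's
distributed C++ library):

```python
from itertools import groupby
from operator import itemgetter

def map_fn(document):
    # Emit an intermediate (key, value) pair for each word.
    for word in document.split():
        yield (word, 1)

def reduce_fn(key, values):
    # Combine all values emitted for one key into a scalar result.
    return (key, sum(values))

def mapreduce(inputs, map_fn, reduce_fn):
    # Map phase: apply map_fn to each input piece.
    intermediate = [pair for doc in inputs for pair in map_fn(doc)]
    # Shuffle phase: group the intermediate pairs by key.
    intermediate.sort(key=itemgetter(0))
    grouped = groupby(intermediate, key=itemgetter(0))
    # Reduce phase: apply reduce_fn to each key's list of values.
    return dict(reduce_fn(k, [v for _, v in g]) for k, g in grouped)

counts = mapreduce(["a b a", "b c"], map_fn, reduce_fn)
# counts == {'a': 2, 'b': 2, 'c': 1}
```

In the real system, the map and shuffle phases run on many machines;
the sketch only shows the programming model the user is restricted to.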



MapReduce seeks to make it easy for developers to harness distributed
computing resources to execute very large data-intensive jobs while
hiding logistical concerns typical of distributed computing,
e.g. fault tolerance and load balancing.  It is implemented as a C++
library against which users must write, link, and compile their
programs.


While BioCompute's input division strategy closely resembles that of
MapReduce, BioCompute would have to undergo many changes
to be implemented on MapReduce.  First, the binary BLAST application
would have to be modified in some fashion in order to hook into the
MapReduce structure, where no such modification has taken place in the
current implementation.  Also, BLAST jobs require querying large
sequence database files, which are neither trivially divided nor
trivially distributed.  MapReduce's programming model does not
accommodate such large, static resources, so the database would have
to be divided among jobs, which would introduce domain-level
algorithmic complications and further modifications to the binary
application.
In short, BioCompute as-is could not run on MapReduce, but similar
computations could be accomplished after non-trivial modifications to
the execution framework.

\subsubsection{Condor}
Condor~\cite{condorinpractice} is a distributed computing system
developed at the University of Wisconsin-Madison.  Observing that the
computing power of many personal workstations often sat unused, its
developers originally built it as a tool to pool and harness idle
workstation CPU cycles.  As such, it handles highly heterogeneous
Unix cluster environments.  It was also built to respect the rights
of machine owners, who can specify the level of Condor usage on their
machines and remove a Condor job from their machine at any time.


Jobs submitted for execution in the Condor pool are described by a
Condor submit file.  This file details the inputs, executable,
outputs, and parameters of the job, as well as the requirements of
the job's execution environment.  Condor has a classification system
called ``ClassAds'', which describes the attributes of a machine's
execution environment as name-value pairs.  When a job is submitted
to the Condor system, it specifies required attribute values, and
machines whose ClassAds match those specifications are considered for
executing the job.
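A minimal Condor submit file of the kind described might look as
follows (the executable, file names, and requirement values here are
illustrative, not taken from BioCompute):

```
universe     = vanilla
executable   = blastall
arguments    = -p blastn -d nt -i query.fasta -o results.txt
transfer_input_files = query.fasta
# ClassAd expression: only match 64-bit Linux machines with
# at least 2 GB of memory.
requirements = (Arch == "X86_64") && (OpSys == "LINUX") && (Memory >= 2048)
output       = job.out
error        = job.err
log          = job.log
queue
```

The `requirements` line is a ClassAd expression; Condor evaluates it
against each machine's advertised attributes when matching the job.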



Condor is a flexible environment.  Programs typically do not require
recompilation or relinking to run in the Condor environment, nor are
they constrained to a particular programming model or language.
Unless otherwise specified, Condor filters nodes using ClassAds to
find an execution environment matching the submitting computer
(e.g. 32- vs. 64-bit word size, hardware architecture, etc.).



Reliability and fault tolerance are major concerns in distributed
systems.  In the event of an expected shutdown of a machine running a
Condor job, Condor can package up the execution and resume the
execution on another machine.  In the event of an unexpected shutdown,
e.g. someone accidentally unplugging a machine, Condor can restart the
job on another eligible node.


There is a Condor pool at the University of Notre Dame composed of
hundreds of machines of different varieties.  Computer lab
workstations, faculty workstations, and various computing clusters are
included.  Condor is the underlying distributed system upon which
BioCompute relies for distributed execution.


\subsection{The Chirp Filesystem}

Alongside Condor, Notre Dame also runs a distributed filesystem named
Chirp~\cite{chirp}.  The filesystem was designed to facilitate data
transfer within local-network computational grids.



Chirp aims to provide unprivileged deployment, simple interfaces,
familiar access controls, and flexibility to accommodate different
types of use cases.  Use of Chirp is commonly facilitated through
Parrot~\cite{parrot}, an {\it interposition agent} which presents
files on various nodes as a directory tree in a Unix system and
accordingly rewrites system calls to those resources.



Chirp has two main parts: the {\it chirp\_server} and the catalog
server.  The {\it chirp\_server} is a user-space daemon which runs on
a node, serves that node's files for distributed access, and
periodically informs the central catalog server of its presence and
status.  The catalog server maintains and reports information about
all the Chirp nodes.  By maintaining a periodically-updated central
list and caching it on nodes, Chirp can provide fast, local inquiries
into the top level of the filesystem.



BioCompute employs Chirp alongside Condor to aid in efficient and
simple batch distribution of BLAST sequence databases across the Notre
Dame network.  Chirp provides a simple interface for a peer-to-peer
file-distribution mechanism which reduces total distribution time.



%\subsection{SETI@Home and Folding@Home}
%Two distributed computing projects to emerge into the mainstream
%consciousness in recent years have been SETI@Home and
%Folding@Home. SETI@Home processes slices of the night sky in search
%for specific radiographic patterns, while Folding@Home processes
%protein descriptions and iterates through potential 3d looks for valid
%protein fold

%These projects distribute tasks to thousands of machines across the
%Internet and globe. These tasks are very compute-intensive and light
%on data; SETI@Home estimated a 350 kilobyte payload to generate ten
%hours of work on the client side.  



\subsection{Distributing BLAST}
\subsubsection{University of Iowa BLAST Cluster}


Braun et al. detail three possible approaches to distributing BLAST
queries among a cluster and discuss the implications of the design
choices they made in building a cluster for batch processing of BLAST
queries~\cite{threecomp}.  The first and most fine-grained approach is
to parallelize to the level of comparing two individual sequences
against one another, one source and one target.  The next approach to
dividing BLAST queries they mention is to partition the genomic
databases into ``chunks'' and distribute these chunks among several
nodes.  When a query is submitted for processing against a database,
the query is submitted against all chunks of the database, and the
results are returned and merged.  The third and most coarsely-grained
approach entails storing complete databases on nodes and splitting up
the set of incoming query sequences across many nodes, each with a
complete copy of the database against which to compare results.


At the time the paper was written, only the coarse-grained
parallelization of BLAST queries had been implemented, and work was in
progress on the medium-grained approach involving partitioning the
databases.


The cluster accepted job submissions from two sources: a daily batch
input from an array of sequencing machines and a web interface.  At
the time of writing, the vast majority of system use came from batch
processing, and the researchers noted that 90\% of use cases involved
querying a single sequence against a database.  In this case,
coarse-grained distribution of BLAST yields no performance
improvement, because there is only one sequence to run against one
database.



The paper noted that an implementation of the medium-grained database
distribution method was nearing completion.  Because the sequence
databases are divided in this implementation, merging the results
from each sequence-database comparison in order to maintain output
compatibility with normal, sequential BLAST is non-trivial.  Once all
jobs have completed and returned results, the E-values, which estimate
the number of equally significant alignments expected by chance, must
be recomputed to reflect the size of the whole database rather than
the size of each individual partition.  The results must then be
sorted by each match's score, which is a function of the length of the
sequence alignment.
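The merging step can be sketched as follows.  Because the BLAST
E-value is linear in the database length ($E = Kmn\,e^{-\lambda S}$,
where $n$ is the database size), a hit's E-value computed against one
partition can be rescaled by the ratio of the full database size to
the partition size.  This is a simplified illustration (real BLAST
statistics also involve edge-effect corrections), and the function
and tuple layout are assumptions, not the Iowa cluster's actual code:

```python
def merge_partition_results(partition_hits, partition_sizes, full_size):
    """Merge BLAST hits from database partitions into one ranked list.

    partition_hits: one list per partition of (query, subject, score,
    e_value) tuples; partition_sizes: residue count of each partition;
    full_size: residue count of the whole database.
    """
    merged = []
    for hits, part_size in zip(partition_hits, partition_sizes):
        for query, subject, score, e_value in hits:
            # E-values scale linearly with database length, so rescale
            # each partition's E-value to the full database size.
            merged.append((query, subject, score,
                           e_value * full_size / part_size))
    # Sort by alignment score, best hits first, as sequential BLAST does.
    merged.sort(key=lambda hit: hit[2], reverse=True)
    return merged

hits = merge_partition_results(
    [[("q", "s1", 50, 0.001)], [("q", "s2", 40, 0.002)]],
    partition_sizes=[100, 100], full_size=200)
```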



The coarse-grained implementation described in the paper closely
resembles BioCompute's division strategy.  BioCompute's only source of
input sequences is a web-based interface, but the character of its
input differs from what the authors expect in their system.  In their
system, queries sourced from the web interface are treated as
time-critical and have operational preference over batch jobs.  In
BioCompute, web-based jobs are treated as batch queries and are
expected to have a potentially lengthy runtime.  While BioCompute
emphasizes immediate feedback to the user and the system is built to
deliver processing-time improvements to BLAST, the user interface
neither expects nor encourages the user to expect immediate results.


It should also be noted that BioCompute's BLAST distribution strategy
assumes sequence input files will contain multiple queries, often many
thousands of queries.  This has been the observed standard practice of
bioinformatics researchers at Notre Dame.
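Coarse-grained division of a multi-query input file can be sketched as
follows (a simplified illustration of the strategy, not BioCompute's
actual implementation):

```python
def split_fasta(text, queries_per_job):
    """Split a multi-query FASTA string into job-sized chunks.

    Each record starts with a '>' header line; chunks preserve whole
    records so every job receives complete query sequences.
    """
    records, current = [], []
    for line in text.splitlines():
        if line.startswith(">") and current:
            records.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        records.append("\n".join(current))
    # Group whole records into chunks of queries_per_job queries each.
    return ["\n".join(records[i:i + queries_per_job])
            for i in range(0, len(records), queries_per_job)]

chunks = split_fasta(">q1\nACGT\n>q2\nGGCC\n>q3\nTTAA", queries_per_job=2)
# Each chunk can then be submitted as an independent job against a
# complete copy of the database.
```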



At full capacity, the sequencing systems feeding the batch processes
produce 2880 sequences per day.  Typically, eighty percent of the
sequences pass verification steps and are fed to BLAST.  The paper
reports that a daily load of 2310 sequences is processed by the
cluster in 20 hours, keeping up with the daily production of sequences
by the sequencers.  By contrast, a single node at the time would have
taken more than three weeks to process the same dataset.


%ref: iowa parallelization


%\subsection{Electronic Lab Notebook}


\subsection{Web Portal}
\subsubsection{RIKEN Bio Portal}


The Advanced Center for Computing and Communication at the RIKEN
Institute in Japan saw a need for a user-friendly system through which
life sciences researchers could access the Institute's distributed
computing resources~\cite{bioportal}.  They noted that many life
sciences researchers were unfamiliar or uncomfortable with traditional
computer user interfaces, especially the command line.  They developed
Bio Portal, a web interface written in Java using the Commodity Grid
Toolkit, to streamline and simplify using the RIKEN Super Combined
Cluster (RSCC) to process BLAST and ClustalW jobs.
%ref: CoG toolkit



Using Bio Portal, life sciences researchers (``wet'' researchers) can
easily upload genomic or protein sequences for processing, view and
download results, stop currently-running jobs, and delete jobs.
Researchers also frequently needed to use BLAST results as input for a
ClustalW job; because doing so manually through Bio Portal was slow
and cumbersome, a BLAST+ClustalW processing option was added which
executes this chain of processes automatically.
%ref:ClustalW
%todo: expand on portal aspects


The RSCC uses Hi-Per BLAST, a parallelized version of BLAST which
maintains result compatibility.  While large jobs were expected to
use the computing resources of the RSCC effectively, it was
postulated that smaller jobs would not see runtime benefits from the
cluster because of the RSCC's higher startup overhead.  A separate
server was therefore added to Bio Portal to process jobs with fewer
input sequences.  After conducting trials, the team concluded that
jobs against large databases such as ``nt'' should be run on the RSCC
cluster, while jobs against smaller databases such as ``sts'' should
be run on the separate server.  Only for databases of intermediate
size, such as ``patnt'', should the number of sequences be used as a
factor in estimating job runtime and selecting which computing
resource to use; overall, database size proved more influential in
resource selection than the number of sequences.
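The selection policy the team arrived at can be summarized as a simple
decision rule.  In the sketch below, the numeric thresholds are
hypothetical illustrations (the paper names example databases such as
``sts'', ``patnt'', and ``nt'' but these cutoffs are assumptions):

```python
# Hypothetical residue-count thresholds separating "small",
# "intermediate", and "large" databases, plus an illustrative
# sequence-count cutoff; none of these figures come from the paper.
SMALL_DB = 10**8
LARGE_DB = 10**9
MANY_SEQUENCES = 100

def select_resource(db_size, num_sequences):
    """Choose between the RSCC cluster and the small-job server."""
    if db_size >= LARGE_DB:
        return "RSCC"    # large databases always go to the cluster
    if db_size <= SMALL_DB:
        return "server"  # small databases always go to the server
    # Intermediate databases: only here does sequence count matter.
    return "RSCC" if num_sequences >= MANY_SEQUENCES else "server"
```

The structure of the rule reflects the paper's conclusion: database
size dominates the decision, and the number of sequences only breaks
ties for intermediate-sized databases.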
%ref:  hi-per blast.

A separate group at RIKEN hosts mirrors of public databases; Bio
Portal synchronizes its databases daily against these mirrors and
provides status indicators and timestamps for each database.
