﻿\chapter{BIOCOMPUTE: Principles and Architecture of a Bioinformatics Web Portal}

\section{Introduction}

Bioinformatics analysis is by nature a collaborative process. To support
the necessary collaborations, navigate differing expertise, and improve the
reproducibility of research, we have developed a bioinformatics web portal:
Biocompute. We initially developed Biocompute to facilitate these
collaborations, and focused on its interfaces: the DAQ paradigm providing an
interface between application developers and end users, and the Biocompute
module API providing a simple way for application developers to leverage the
expertise of system programmers. The day-to-day operation of Biocompute
suggested several structural improvements, most notably the separation of job
execution code from the web interface and the decentralization of computing
provisioning and coordination tools.

In this chapter, we describe the goals and interface of Biocompute, its
architecture and performance, and the insights gained in its development. We
introduce a backend crawler that improves the responsiveness of the site and
permits alternative interfaces to the tools developed for Biocompute. We also
discuss the impact of two new CCL tools--the catalog\_server and the
work\_queue\_pool--on the scalability, performance and resource sharing
capacities of Biocompute.


\section{System Goals}
Bioinformatics portals range from broad-based resources, such as
NCBI~\cite{johnson2008ncbi}, to more specific community-level resources, such as
VectorBase~\cite{lawson2007vectorbase}, down to organism- or even dataset-level
web portals. While these portals all rely, at least in part, on the scientific
communities they support for data and analysis, they share the characteristics
of centralized computation and curation. Additionally, many existing portals
suffer from imperfect data transparency or job parameter storage, reducing the
rigor and reproducibility of the results generated. As increasing numbers of
organisms are sequenced, smaller and less well funded biological communities
are acquiring and attempting to analyze their own data. These communities
rarely have the resources to support the development of specialized community
web portals, and find portals such as NCBI insufficient for their computation,
customization, or collaboration needs.

It seems natural, then, to turn to a more rigorous, reproducible, and
collaborative way to do data-intensive science. We believe the following
characteristics to be vital to these goals.

\begin{enumerate}
\item User data must be able to integrate with the system just as well as curator-provided data. 
\item Data management should be simple for owners as well as site administrators. 
\item Sharing of data and results should be straightforward.
\item Persistent records of job parameters and metadata need to be kept. 
\item Jobs should be easily resubmitted in order to reproduce results. 
\item System resources should be shared fairly, productively and transparently.
\item The resources providing computation and data storage must be scalable and shareable.
\end{enumerate}

A system exhibiting these characteristics should permit a user community to
develop and improve a shared resource that is capable of meeting their
computational needs and contains the data they require. Further, it will allow
users to maintain a clear, traceable record of the precise sources of datasets
and results. 

Biocompute facilitates a variety of collaborations by providing interfaces
between collaborators.  Biologists need not know the details of tool
implementation.  Tool developers need not worry about user interface or
web development, and they are provided convenient abstractions on the
distributed computing side.

\begin{figure}
\includegraphics[width=5.5in]{figures/Biocompute_Cooperation_Model.pdf}
\caption{Biocompute Cooperation Model}
\end{figure}


\section{Data-Action-Queue}

Having the described functionality is insufficient if users cannot effectively
use the resource. To provide the requisite interface, we employ the
Data-Action-Queue (DAQ) interface metaphor. Like
Model-View-Controller~\cite{krasner1988description}, this suggests a useful
structure for organizing a program. However, DAQ describes an interface,
rather than an implementation.

The DAQ metaphor rests on the idea that users of a scientific computing web
portal will be interested in three things: their data, the tools by which they
can analyze that data, and the record of previous and ongoing analyses. This
also suggests a modular design for the implementing system. If tool developers
need only specify the interface for the initial execution of their tool, it
greatly simplifies the addition of new actions to the system. The Queue view
documents job parameters and meta-information, and permits users to drill down
to a single job in order to resubmit it or retrieve its results. Since the
Queue also shows the ongoing work in the system, it gives users a simple way to
observe the current level of resource contention.

\begin{figure}
\includegraphics[width=5.5in]{figures/Biocompute_Interface_Model.pdf}
\caption{Biocompute Interface Model}
\end{figure}

\section{System Interface Implementation} 

In accordance with the DAQ model, users have three separate views of
Biocompute: a filesystem-like interface to their data, a set of dialogues
permitting users to utilize actions provided by the system, and a queue storing
the status, inputs, and outputs of past and current jobs. Figure 1 shows our
implementation of this model, and the following sections describe the
interaction and sharing characteristics of its components.

Recently, we have moved to a more mature implementation of the
Data-Action-Queue paradigm that relies heavily on the insights of web design.
It features an extremely flat interface---the deepest functionality is two
clicks from the front page---and a RESTful~\cite{fielding2000architectural}
URL strategy that greatly facilitates natural collaboration by permitting
users to share URLs. The site also provides users with periodically updated
statistics regarding disk and CPU usage, and warnings when needed. We have
also implemented an approximation of the cost of performing equivalent
operations through Amazon's EC2 service.
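The EC2 cost approximation is a simple rate calculation; the sketch below
illustrates the idea, with rates and usage figures that are purely
hypothetical placeholders rather than Amazon's actual pricing:

```python
# Rough EC2-equivalent cost estimate, as displayed to Biocompute users.
# The rates below are illustrative assumptions, not Amazon's real pricing.
HOURLY_RATE_USD = 0.10      # assumed on-demand cost per CPU-hour
STORAGE_RATE_USD = 0.10     # assumed cost per GB-month of storage

def ec2_equivalent_cost(cpu_hours, stored_gb, months=1):
    """Approximate what a user's Biocompute usage would cost on EC2."""
    compute = cpu_hours * HOURLY_RATE_USD
    storage = stored_gb * STORAGE_RATE_USD * months
    return round(compute + storage, 2)

# e.g. 1000 CPU-hours of searches plus 50 GB stored for one month
estimate = ec2_equivalent_cost(1000, 50)
```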

\subsection{Data}

Users interface with their uploaded data much as they would in a regular
filesystem: they can move files between folders, delete them, and perform other
simple operations. 

The same interface can be used to promote files to datasets. To perform such a
promotion, a user enters into a dialogue customizable by Biocompute tool
developers. This screen permits users to set metadata as well as any required
parameters. Once the selected file has been processed by the appropriate tool,
the resulting data are distributed to a subset of the Biocompute
Chirp~\cite{thain2009chirp} stores. The meta-information provided by the users
is stored in a relational database, along with other system information such
as known file locations. Our BLAST module uses this information to improve
performance.

The data stored in Biocompute and the datasets generated from it are often
elements of collaborative projects. User files and datasets may be marked
public or private. A publicly available dataset is no different from an
administrator-created one, allowing users to respond to community needs
directly. This primarily facilitates collaboration between biologists.

\subsection{Action}

Tools provide the core functionality of Biocompute. As Biocompute grew from a
distributed BLAST web portal to a bioinformatics computing environment, we
quickly realized a need for a flexible and low maintenance method for
integrating new tools into the system. Specifically, we needed to provide
application developers with simple hooks into the main site while providing
enough flexibility for including diverse applications. Further, it was
important that application developers be provided with a simple and flexible
tool for describing and exploiting parallelism provided by the underlying
distributed system.

From these requirements, modules emerged. Conceptually, each module consists of
an interface and an execution unit. The interface provides a set of PHP
functions to display desired information to users via the web portal. The
execution component is a template for the job directory used by Biocompute to
manage batch jobs. So far, each Biocompute module utilizes a local script to
generate and run a \textit{Makeflow}~\cite{albrecht2012makeflow} for execution on the
distributed system. Most importantly, we have set up the system to allow
developers to create modules without detailed knowledge of Biocompute. In fact,
two of our available modules have been developed by computer science graduate
students not otherwise involved with Biocompute.


\subsection{Queue}

While the distributed system beneath Biocompute may not be of general interest
to its users, the details of job submission and progress are important.
Biocompute provides methods for users to recall past jobs, track the progress
of ongoing jobs, and perform simple management tasks such as pausing or
deleting jobs. Users may view currently running jobs or look through their
past jobs. Each job has drill-down functionality, permitting a user to look
through or download the input, output, and error of the job, and to see
pertinent metadata such as job size, time to completion (or estimated time to
completion if the job is still running), and parameters.


As with user data, queue jobs may be marked public or private. Making a job
public exposes the source data, parameters, and results to inspection and
replication by other users. Further, queue detail pages provide curious users
with a way to evaluate the current resource contention in the system. 


\subsection{System Description}

Biocompute has evolved beyond a website. Recently, we have separated it into
distinct components: a web interface, a database backend documenting the Queue,
a process responsible for executing enqueued tasks, and a resource manager to
provision computation to executing tasks. Any tool with database access
(admittedly a limited set) can add to the queue. Likewise, anyone may submit
Work Queue workers to supplement their own jobs, or contribute resources
broadly to the entire project.

\begin{figure}
\includegraphics[width=5.5in]{figures/Biocompute_Architecture.pdf}
\caption{Early Biocompute framework}
\end{figure}

\begin{figure}
\includegraphics[width=5.5in]{figures/Fragmented_Biocompute_Architecture.pdf}
\caption{Extended Biocompute framework}
\end{figure}

\subsection{Biocompute Modules}

Modules provide the core of Biocompute's functionality. These tools cover a
wide range of complexity, from SSAHA~\cite{ning2001ssaha}, a single executable,
to SNPEXP~\cite{holm2010snpexp}, a complex set of Java programs. To support
this variety we developed a module structure which leverages encapsulation,
interface consistency, record keeping, and data redundancy.


Namespacing provides much of the advantage of our modular structure. All of
the elements required to run a module's job must be contained in a job
template folder and follow a prescribed naming convention. In addition, each
module must implement a common interface API in PHP. Essentially, modules
inherit their characteristics from a generic model module (in fact, we have
implemented such a module to facilitate rapid development). A module is first
instantiated by the Biocompute architecture, which copies the template folder
to a globally unique job-number folder. The instance is initialized with the
job's input and parameters, and is then executed through the standard wrapper
executable provided in the job template, which in turn creates a Makeflow
specification of the necessary work and submits it to the appropriate batch
system. As we have gained experience developing these modules, we have
constructed a shared Python library to provide common functionality
generically, though only a few modules use this shared library.
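The instantiation sequence can be sketched as follows; the directory layout,
file names, and wrapper name (wrapper.sh) are hypothetical stand-ins for the
real template contents:

```python
import shutil
import subprocess
from pathlib import Path

def instantiate_module(template_dir, jobs_root, job_id, inputs, params):
    """Instantiate a module: copy its job template into a globally unique
    job-number folder, then stage the job's inputs and parameters.
    Names here are illustrative, not Biocompute's actual layout."""
    job_dir = Path(jobs_root) / str(job_id)
    shutil.copytree(template_dir, job_dir)       # template -> job instance
    for name, data in inputs.items():            # stage input files
        (job_dir / name).write_bytes(data)
    (job_dir / "params.txt").write_text(         # record job parameters
        "\n".join(f"{k}={v}" for k, v in params.items()))
    return job_dir

def run_module(job_dir):
    """Execute the standard wrapper shipped in the template, which builds
    the Makeflow and submits it to the chosen batch system."""
    return subprocess.run(["./wrapper.sh"], cwd=job_dir)
```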


Makeflow~\cite{albrecht2012makeflow} provides application developers with the
simplicity and flexibility needed to develop distributed applications. A
developer uses Makeflow's make-like syntax to describe the dependencies and
execution requirements of their tool. The fault-tolerant Makeflow engine
converts this workflow into the necessary submissions and management of
distributed jobs on diverse batch systems.
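As a concrete illustration of this make-like syntax, a minimal Makeflow for a
two-way parallel BLAST might look like the following (the file, script, and
database names are hypothetical):

```makefile
# Split the query, BLAST each piece, and concatenate the results.
part.0 part.1: query.fa
	./split_fasta query.fa 2

out.0: part.0 nr.db
	blastall -p blastp -d nr.db -i part.0 -o out.0

out.1: part.1 nr.db
	blastall -p blastp -d nr.db -i part.1 -o out.1

result.txt: out.0 out.1
	cat out.0 out.1 > result.txt
```

Makeflow infers the dependency graph from these rules and can dispatch the two
BLAST steps in parallel on whichever batch system is selected.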

%\begin{figure}
%\centering\includegraphics[width=5.5in]{spec\_and_workflow.eps}
%\caption{An example Makeflow file and corresponding graph of its execution. The distributed acyclic graph of this job is typical of Biocompute jobs .}
%\end{figure}

While Biocompute's queue uses a relational database to store important job
metadata--including job status, time and space utilization statistics, and
input parameters--we are not satisfied with a single point of loss for this
data. To manage the potential damage incurred by database loss or data
corruption, we ensure that all job statistics can be recreated by inspecting
the job directory. Each job records the execution plan specified by a makeflow
file, a log of the timing and return values of the substeps in that execution,
and detailed logs output by each individual executable and by the batch system
selected for that job. It also freezes the input and output of the job so that
future manipulations will not impact our ability to recover initial inputs and
results.


\subsection{Sharing Work Across Machines} 

Our hope for Biocompute has always been to enable broader, more democratic
access to computing resources and a more collaborative way to share data and
software. In service of these goals, two recent changes have been made to
Biocompute: an asynchronous work execution service, and the integration of
the Work Queue catalog\_server and work\_queue\_pool.

\subsubsection{Crawler}

Originally, Biocompute's web code responded to user commands by making direct
changes to the filesystem and forking tasks on the user's behalf. This commonly
resulted in long wait times and poor web performance, as synchronous data
transfers and long-running program execution blocked the website. The
submission of new jobs--which involves the transfer of all relevant input
files--would often take minutes.


To reduce this inconvenience, the website was modified to simply enqueue all
desired actions in the database. A crawler process was developed to examine the
database state, perform and monitor the desired actions, and appropriately
update any relevant fields.
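A minimal sketch of such a crawler loop follows; the queue table and its
columns (id, action, job_dir, status) are illustrative assumptions, and SQLite
stands in here for the production relational database:

```python
import time
import sqlite3  # stand-in for the production database backend

def perform(action, job_dir):
    """Dispatch to the real handler (submit a makeflow, move data, ...).
    Stubbed here for illustration."""
    pass

def crawl_once(db):
    """Claim one pending action from the queue, execute it, and record
    the outcome. Returns False when no work is pending."""
    row = db.execute(
        "SELECT id, action, job_dir FROM queue "
        "WHERE status = 'pending' ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return False
    job_id, action, job_dir = row
    db.execute("UPDATE queue SET status = 'running' WHERE id = ?", (job_id,))
    db.commit()
    try:
        perform(action, job_dir)
        status = 'complete'
    except Exception:
        status = 'failed'
    db.execute("UPDATE queue SET status = ? WHERE id = ?", (status, job_id))
    db.commit()
    return True

def crawl_forever(db, poll_interval=5):
    while True:
        if not crawl_once(db):
            time.sleep(poll_interval)   # idle until new work is enqueued
```

Because the website only writes rows and the crawler only polls them, the two
processes never block one another and can run on different machines.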


This separation provides benefits beyond reduced website latency. First, it
allows any process with access to Biocompute's database to enqueue tasks for
execution. Second, the crawler may be executed on a machine other than the
web server, so the overhead incurred by file transfers and other
resource-intensive operations need not slow the operation of the website.
Finally, it simplifies the process of deploying a Biocompute-like project in
other contexts, as the crawler does not require a web server to run.

The crawler supports a single-command mode, which effectively permits
Biocompute to be used as a command-line tool, and allows Biocompute modules to
be included in a pipeline. This functionality is limited, because the results
of such executions are stored in the website's hierarchy. However, future work
to provide tools for automated data retrieval would permit Biocompute jobs to
be leveraged in a variety of new and powerful ways.


\subsubsection{Using Work-Queue's Catalog Server}

Recent developments by the Cooperative Computing Lab have permitted us to
expand the ways in which resources can be contributed and shared in
Biocompute. Previous iterations have supported sharing of user data and
collaboration between tool developers and biologists. However, the computing
resources supporting Biocompute were previously fixed, requiring fairly expert
intervention for resource holders to contribute to the pool (by modifying
ClassAds on machines running Condor). With the addition of the resource
management tools available through work\_queue\_pool~\cite{yu2012resource},
it is now substantially simpler for users to donate resources to the
Biocompute pool, or even to specific projects--their own, for instance--within
that pool. The new work\_queue\_pool also provides mechanisms to avoid
overprovisioning of resources, and allows for decentralized worker submission,
reducing load on the web portal host.


\subsection{Data Management for BLAST}
BLAST~\cite{johnson2008ncbi}, the Basic Local Alignment Search Tool, is a
commonly used bioinformatics tool that implements a fast heuristic to align a
set of one or more query sequences against a set of reference sequences.
Because biologists often wish to compare hundreds of thousands of sequences
against databases containing gigabytes of sequence data, these jobs can take
prohibitively long if executed sequentially. However, BLAST jobs are
conveniently parallel: the output of a single BLAST is identical (absent
formatting restrictions) to the concatenation of the outputs of BLASTs of all
disjoint subsets of the desired query sequences against the same reference
database.
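This split-and-concatenate property is what the BLAST module exploits. A
minimal sketch of the query-splitting step (the splitting granularity is an
assumption; the real module's splitter may differ):

```python
def split_fasta(text, pieces):
    """Split a multi-sequence FASTA string into at most `pieces` disjoint
    subsets of whole sequences. Concatenating the BLAST outputs of the
    pieces, in order, reproduces the output of the unsplit job."""
    seqs = [">" + s for s in text.split(">") if s.strip()]
    per = -(-len(seqs) // pieces)   # ceiling division: sequences per piece
    return ["".join(seqs[i:i + per]) for i in range(0, len(seqs), per)]

# Three query sequences split into two disjoint subsets.
parts = split_fasta(">a\nACGT\n>b\nGGCC\n>c\nTTAA\n", 2)
```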


We found that, using the dataset metadata stored in our relational database,
we could rank machines by the number of bytes already pre-staged to each
machine~\cite{carmichael2010biocompute}. This rank function schedules jobs to
the machine requiring the least dynamic data transfer. While this approach
naturally ties our BLAST module to Condor~\cite{litzkow1988condor}, similar
concepts could be used to provide job placement preferences in other batch
systems.
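The ranking logic can be sketched as follows, assuming a metadata table
mapping (machine, database) pairs to pre-staged byte counts; in production the
preference is expressed through Condor's Rank mechanism rather than in Python:

```python
def rank_machines(machines, required_dbs, prestaged):
    """Order candidate machines by the bytes of the required databases
    already pre-staged there, most first, so the scheduler minimizes
    dynamic data transfer. `prestaged` maps (machine, db) -> bytes."""
    def bytes_present(machine):
        return sum(prestaged.get((machine, db), 0) for db in required_dbs)
    return sorted(machines, key=bytes_present, reverse=True)

# m1 already holds the full NR database; m2 holds none of it.
prestaged = {("m1", "nr"): 8_000_000_000, ("m2", "nr"): 0, ("m2", "est"): 500}
order = rank_machines(["m2", "m1"], ["nr"], prestaged)
```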


This technique requires that the list of database locations be correct. To
maintain correct lists, we use the following method. Any successful job must
have run on a machine containing all required databases, while jobs returning
a ``database not found'' error are known to lack one of the needed databases.
Therefore, we parse job results and update the MySQL database with data
locations accordingly. For jobs that run against only one database, this is
sufficient information to update the database; in the general case, however,
we cannot determine the appropriate action and therefore make no modifications
to the database.
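The update rule can be sketched as follows; the error string and the
location-set representation are illustrative assumptions, not Biocompute's
actual schema:

```python
DB_NOT_FOUND = "database not found"  # error string assumed for illustration

def update_locations(locations, machine, job_dbs, stderr_text):
    """Update the known-location set from one job's outcome. A success
    proves the machine held every required database; a 'database not
    found' error against a single database proves that database is
    missing. Multi-database failures are ambiguous, so the set is left
    untouched. `locations` is a set of (machine, db) pairs."""
    if DB_NOT_FOUND in stderr_text:
        if len(job_dbs) == 1:
            locations.discard((machine, job_dbs[0]))
    else:
        for db in job_dbs:
            locations.add((machine, db))
```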


To balance load, we transfer databases from a randomly selected machine in the
primary cluster. If the chosen machine lacks the appropriate database, the job
fails and is rescheduled. 


Even with random source selection for load balancing, the potential network
traffic is significant. For example, the largest BLAST database hosted in
Biocompute is over 8 GB. Further, many Biocompute jobs run in hundreds or even
thousands of pieces, so a single job could request 8 GB transfers to dozens or
even hundreds of machines simultaneously. To mitigate this unreasonable demand
we limit the number of running jobs to 300. Additionally, we cap transfer time
at 10 minutes, which effectively prevents us from transferring files too large
for the available bandwidth.



%\subsection{Semantic Comparison of Manual and Dynamic Distribution Models}
%
%The transition from manual to dynamic distribution required a shift in dataset
%storage semantics within Biocompute. This shift was brought about by the new
%authentication requirements introduced by node to node copying, and by the
%automatic updating characteristics provided by our logfile inspection
%technique. In Table 1 we document the characteristics of datasets before and
%after dynamic distribution.  
%
%\begin{table*}
%\centering
%\caption{Object Characteristics Before and After Implementing Dynamic Distribution}
%{\small
%\begin{tabular}{|l|p{1.0cm}|p{2.5cm}|c|p{2.5cm}|} \hline
%Object&\multicolumn{2}{|c|}{Before Dynamic}&\multicolumn{2}{|c|}{After Dynamic}\\ \hline
%&record&permissions&record&permissions\\ \hline
%New Primary Machine Replica&local fs&Biocompute, local&global&Biocompute, local, nd\\ \hline
%Audited Primary Machine Replica&global&Biocompute, local&global&Biocompute, local, nd\\ \hline
%New Dynamic Replica&n/a&n/a&global&Biocompute, local\\ \hline
%\end{tabular}
%}
%\end{table*}
%
\subsection{Performance of Data Management Schemes}

In this section we explore the performance characteristics of Biocompute, and
evaluate the impact of the data distribution model and of data availability on
these characteristics.

Figure 3 illustrates the cost of the current timeout policy for the dynamic
distribution model, as compared to the original static distribution model.
Figure 4 shows the runtime of the dynamic BLAST query, which was executed
against the NCBI Non-Redundant (NR) BLAST database. This dataset is 7.5 GB,
uncomfortably large for multiple transfers on our network. Its size
effectively causes any job assigned to a machine without the database to
immediately reschedule itself (in the static case) or to wait for some time
and then reschedule (in the dynamic case). Though one might expect this
behavior to significantly increase the overall runtime, we observe only an
18\% increase in runtime for the dynamic distribution worst-case run. The
characteristics of the rank function ensure that dynamic jobs are only
assigned to database-less machines when all of the usable machines are already
occupied. In the static case, these jobs would simply wait in the queue until
a usable machine became available. Essentially, the change to dynamic
distribution forces a transition to a busy-wait. While this increases badput,
and is suboptimal from the perspective of users competing with Biocompute for
the underlying distributed resources, it has a limited impact on Biocompute
users.

%\figcap{2.25in}
%\begin{figure}
%\begin{minipage}[t]{0.4\linewidth}
%\centering
%{\tiny
%\begin{tabular}{|l|p{1.5cm}|p{1.5cm}|}\hline
%Distribution Method&Execution Time (hours)&Badput (hours)\\ \hline
%dynamic&17.09&517.3\\ \hline
%static&14.49&193.7\\ \hline
%\end{tabular}
%}
%\caption{Worst case cost of dynamic distribution. While the dynamic case shows a 167\% increase in badput, it only suffers an 18\% increase in runtime}
%\label{tab:worstcase}
%\end{minipage}
%\begin{minipage}{0.7\linewidth}
%\centering
%\includegraphics[width=2.5in]{timeline.eps}
%\caption{Worst case dynamic distribution run. The high variation in number of jobs running is a result of failed data transfer attempts.}
%\label{fig:examplerun}
%\end{minipage}
%\end{figure}


%\figcap{2.25in}
%\begin{figure}
%\begin{minipage}[b]{0.5\linewidth}
%\centering
%\includegraphics[width=2.25in]{replication.eps}
%\caption{Runtime of BLAST versus a medium sized (491 MB) database for varying initial dataset replication level}
%\label{fig:replication}
%\end{minipage}
%\hspace{0.5cm}
%\begin{minipage}[b]{0.5\linewidth}
%\centering
%\includegraphics[width=2.25in]{timeout.eps}
%\caption{Runtime of BLAST versus a medium sized (491 MB) database for varying timeout lengths.}
%\label{fig:timeout}
%\end{minipage}
%\end{figure}


Figure 5 shows the impact of the initial replication level on the runtime of
a BLAST job. It is important to note that the rank function remains unmodified
throughout the runtime of a particular job, so the number of jobs arriving at
machines which have acquired the required databases during runtime is
statically determined. The naivety of our source selection algorithm ensures
that many transfer attempts will fail if the primary cluster is saturated,
reducing our ability to grow the level of parallelism as the job progresses.
However, because 300 subjobs may be submitted at any one time, it is likely
that each wave of execution will propagate at least a few databases. The
results demonstrate that over the course of a job the distribution task is
accomplished quickly.


Figure 6 shows the impact of varying timeout lengths on the runtime of a
BLAST job. For this test, each database was initially distributed to the 32
core nodes and, depending on the timeout, potentially distributed to any of
the nodes open to Biocompute jobs. The lowest timeout never transfers the
target database, the middle one sometimes succeeds and sometimes fails, and
the longest always succeeds. As expected, the increased parallelism generated
by successful distribution reduces runtime.


In both of our experiments, long-tail runtimes for some sub-jobs limited the
benefits of increased parallelism. A more sensitive input splitting scheme
might mitigate this effect. Alternatively, the fast-abort strategies available
through work\_queue\_pool have been shown to reduce long-tail effects driven
by differences in runtime environments.


Our final timeout value was set high in order to maximize the transferable
database size, as the impact on performance was acceptable even for
untransferable databases. Our final replication value was set to 32, the size
of our dedicated cluster. Since the introduction of dynamic distribution, some
datasets have been spread to as many as 90 machines, tripling the parallelism
available for those data.

\subsection{Social Challenges of Biocompute}

Up to this point, we have only addressed how Biocompute meets its technical
challenges. However, as a shared resource, Biocompute requires several
mechanisms to balance the needs of all of its stakeholders. We believe that
this is best achieved through fair policies and transparency. In this section
we discuss the policies and tools we have used to facilitate the management of
our two most limited resources: data and compute time.


The sheer size and number of available biological datasets require careful
management of Biocompute's disk space costs. The two simplest ways of
achieving this are motivating users to control their disk consumption, and
convincing our users to provide or solicit funding to increase the available
space. In either case, it is necessary to provide users with both comparative
and absolute measures of their disk consumption. To this end we track disk
space consumption and CPU utilization for all users, and publish a
``scoreboard'' within Biocompute. So far, pie charts have provided heavy users
with sufficient motivation to delete the jobs and data they no longer need,
and have given us insights into common use cases and overall system demand.


Resource contention also comes into play with regard to the available grid
computing resources. Utilization of Biocompute's computational resources tends
to be bursty, generally coinciding with common grant deadlines and with the
beginnings and ends of breaks, when users have extra free time. These
characteristics have produced catastrophic competition for resources during
peaks. Without a scheduling policy, job portions naturally interleaved,
causing the wall clock time of each job to converge on the wall clock time
necessary to complete all the jobs. To combat this problem we implemented a
first-in-first-out policy for large jobs, and a fast-jobs-first policy for
small (fewer than five underlying components) jobs. This has allowed quick
searches to preempt long-running ones, and prevents the activities of users
from seriously impacting the completion timeline of previously started jobs.
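One reading of this policy can be sketched in Python, with jobs represented as
hypothetical (submission id, component count) pairs; the real scheduler's data
structures may differ:

```python
SMALL_JOB_LIMIT = 5  # jobs with fewer than five components count as "small"

def schedule_order(jobs):
    """Order jobs per the policy sketched above: small jobs (quick
    searches) run before large ones, and within each class jobs run
    first-in-first-out by submission id. Each job is (id, components)."""
    def key(job):
        job_id, components = job
        is_large = components >= SMALL_JOB_LIMIT
        return (is_large, job_id)   # False (small) sorts before True (large)
    return sorted(jobs, key=key)
```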



\section{Related Work}

The provenance system at work in Biocompute bears similarity to the body of
work generated by The First Provenance Challenge~\cite{moreau2008special}. For
biologists, the queue and detail views provide extremely high-level provenance
data. At the user interface level, we were most interested in giving the user
an easily communicable summary of the process required to produce their
results from their inputs. This model hides the complexity of the underlying
execution, and gives our users a command that they can share with colleagues
working on entirely different systems. Obviously, such information would be
insufficient for debugging purposes, and Makeflow records a significantly more
detailed record of the job execution, including the time, location, and
identity of any failed subjobs, and a detailed description of the entire plan
of execution. Formal provenance systems are expected to have a mechanism by
which queries can be readily answered~\cite{moreau2008special}. We selected
Makeflow as our workflow engine for its simplicity and its ability to work
with a wide variety of batch systems. A similarly broad workflow and
provenance tool could replace it without modifying the architecture of
Biocompute.



BLAST has had many parallel
implementations~\cite{darling2003design, dumontier2002nblast}. While the BLAST
module of Biocompute is essentially an implementation of parallel BLAST using
Makeflow to describe the parallelism, we do not mean to introduce Biocompute
as a competitor in this space. Rather, we use BLAST to highlight common
problems with the management of persistent datasets, and to show a generic way
to ease these problems without resorting to database segmentation or other
complex and application-specific techniques. The BLAST module, like all of
Biocompute's modules, uses an unmodified serial executable for each subtask.


\section{Conclusions}

We stated that our system should integrate user data and make its management
simple; that job parameters and meta-information should be kept, and results
easily shared and resubmitted; and that system resources should be scalable,
and shared fairly, productively, and transparently.


Our system has, at least in pilot form, achieved these goals. However, our
solutions are for the most part initial steps. By implementing the current
version of Biocompute we have exposed the need for a much more sophisticated
dataset management tool. With the addition of such a tool, Biocompute's
ability to usefully classify user data and vary its storage and presentation
policies based on these classifications could be expanded to any kind of data.
Our first attempt at implementing a Data-Action-Queue interface proved
effective, but complex. At Notre Dame, in-house biological data generation is
commonplace, and an effective system to pipeline these data directly into
Biocompute would be very useful to our user community. Likewise, mechanisms to
efficiently import data from online resources would be welcomed.




Biocompute addresses a broad spectrum of challenges in scientific computing
web portal development. It illustrates some of the technical solutions to the
social challenges presented by collaborative and contended resources,
implements techniques for mitigating the performance impact of large
semi-persistent datasets on distributed computations, and provides a useful
framework for exploring and addressing open problems in cluster storage,
computation, and socially sensitive resource management. Regular users range
from faculty to support staff to students, cover the full range of
computational expertise, and span 10 research groups and 3 universities. In a
single year we provided our users with more than eighteen years of CPU time,
and enabled them to perform research using custom datasets.


%\acks
We would like to acknowledge the hard work of Joey Rich, Ryan Jansen, and
Brian Kachmark, who greatly assisted in the implementation of Biocompute.
Thomas Potthas implemented the crawler, and Brian DuSell contributed
additional modules and an improved module API. We would also like to thank
Andrew Thrasher, Irena Lanc, and Li Yu for their development of the SSAHA and
SHRIMP modules. This work is supported by the University of Notre Dame's
strategic investment in Global Health, Genomics and Bioinformatics, and by NSF
grant CNS-0643229.

