\documentclass[final,noinfo]{nddiss2e}
\usepackage{subfigure}

\begin{document}
\frontmatter

\title{Implementing Friendly Bioinformatics on Distributed Systems}
\author{Rory Carmichael}
\work{Thesis}
\degaward{Master of Science\\in\\Computer Science and Engineering}
%\degprior{B. S.}
\advisor{Scott Emrich}
\department{Computer Science and Engineering}
\degdate{December 2012}

\maketitle

\begin{abstract}
Bioinformatics problems are important, large in scale, amenable to
parallelization, and of interest primarily to researchers who are not
computing experts. Making computational resources accessible and usable,
building tools that can be readily distributed, and inventing novel
algorithms that leverage large-scale computation are therefore important
services to that community.

\end{abstract}

\tableofcontents
\listoffigures

\mainmatter


\chapter{INTRODUCTION}
%This is in need of much more citation
\section{Overview}

In this document, we present research in bioinformatics ranging from the
presentation and organization of data and tools, to the utilization of
large-scale computational resources, to the implementation of novel
algorithms.  We first describe a general framework to support bioinformatics
collaborations. We then discuss principles and pitfalls in distributed
bioinformatics in the context of two example pipelines.  Finally, we present
a novel algorithm for quantifying selective pressure for rare codon clusters
in orthologs across the tree of life, leveraging principles described in the earlier chapters.

\section{Bioinformatics Web Portals}

Bioinformatics has long relied on web portals such as
NCBI~\cite{johnson2008ncbi} and VectorBase~\cite{lawson2007vectorbase} to provide
biologists and other non-technical users with access to computing resources, tools, and data.
Here we present Biocompute, a web portal enabling large-scale bioinformatics computation through:
\begin{enumerate}
\item an interface designed to enable non-expert users,
\item distributed provisioning of computational resources,
\item a shared capacity to incorporate novel data, and
\item consistent interfaces between application developers.
\end{enumerate}

\section{Bioinformatics Pipelines}

The proliferation of biological tools and formats has led to the common
practice of bioinformatics pipeline development.  In this chapter we discuss
the challenges involved in bioinformatics pipeline development in general, and
development of distributed pipelines in particular.  We present case studies of
pipelines in varying levels of complexity, and show the challenges involved in
scaling up using existing tools.  We document common pitfalls and workarounds
to address them.

\section{Rare Codon Analysis}

Finally, we turn to rare codons. We review the relevant biological
background, show how large-scale computation can establish the extent and
importance of rare codon effects, and describe the pipelines we assembled,
using the techniques of the preceding chapters, to answer this question.


\chapter{BIOCOMPUTE: Principles and Architecture of a Bioinformatics Web Portal}

%More like an abstract right now
\section{Introduction}

Bioinformatics analysis is by nature a collaborative process. To support
necessary collaborations, navigate differing expertise, and improve the
reproducibility of research, we have developed a bioinformatics web portal:
Biocompute. We initially developed Biocompute to facilitate these
collaborations, focusing on its interfaces: the DAQ paradigm, which provides
an interface between application developers and end users, and the Biocompute
module API, which provides a simple way for application developers to leverage
the expertise of systems programmers. The day-to-day operation of Biocompute
suggested several structural improvements, most notably the separation of
job-execution code from the web interface and the decentralization of
computing provisioning and coordination tools.

In this chapter, we describe the goals and interface of Biocompute, its
architecture and performance, and the insights gained in its development. We
introduce a backend crawler that improves the responsiveness of the site and
permits alternative interfaces to the tools developed for Biocompute. We also
discuss the impact of two new Cooperative Computing Lab (CCL) tools--the
catalog\_server and the work\_queue\_pool--on the scalability, performance,
and resource-sharing capacities of Biocompute.

%Comments below are more or less addressed
%Problems the crawler solves:
%Problems the catalog\_server/work_queue_pool solve:
%shadow overhead, scalability, decentralization, resource sharing

\section{System Goals}
%might include some of this stuff into the web-portal introduction
Bioinformatics portals range from broad-base resources, such as
NCBI~\cite{johnson2008ncbi}, to more specific community-level resources, such
as VectorBase~\cite{lawson2007vectorbase}, down to organism- or even
dataset-level web portals. While these portals all rely, at least in part, on
the scientific communities they support for data and analysis, they share the
characteristics of centralized computation and curation. Additionally, many
existing portals suffer from imperfect data transparency or job-parameter
storage, reducing the rigor and reproducibility of the results generated. As
increasing numbers of organisms are sequenced, smaller and less well funded
biological communities are acquiring and attempting to analyze their own data.
These communities rarely have the resources to support the development of
specialized community web portals, and find portals such as NCBI insufficient
for their computation, customization, or collaboration needs.

%This bit seems a little heavy-handed for the thesis format
It seems natural, then, to turn to a more rigorous, reproducible, and
collaborative way to do data-intensive science. We believe the following
characteristics to be vital to these goals.

\begin{enumerate}
\item User data must be able to integrate with the system just as well as curator provided data. 
\item Data management should be simple for owners as well as site administrators. 
\item Sharing of data and results should be straightforward.
\item Persistent records of job parameters and metadata need to be kept. 
\item Jobs should be easily resubmitted in order to reproduce results. 
\item System resources should be shared fairly, productively and transparently.
\item The resources providing computation and data storage must be scalable and shareable.
\end{enumerate}

A system exhibiting these characteristics should permit a user community to
develop and improve a shared resource that is capable of meeting their
computational needs and contains the data they require. Further, it will allow
users to maintain a clear, traceable record of the precise sources of datasets
and results. 

%Here we want to introduce a little bit about the model of cooperation at work in Biocompute (maybe some other place instead?)
Biocompute facilitates a variety of collaborations by providing interfaces
between collaborators.  Biologists need not know the details of tool
implementation.  Tool developers need not worry about user-interface or
web development, and they are provided convenient abstractions on the
distributed computing side.

\begin{figure}
\centering
\includegraphics[width=5.5in]{Biocompute_Cooperation_Model.pdf}
\caption{Biocompute Cooperation Model}
\label{fig:cooperation}
\end{figure}


\section{Data-Action-Queue}

Having the described functionality is insufficient if users cannot effectively
use the resource. To provide the requisite interface, we employ the
Data-Action-Queue (DAQ) interface metaphor. Like
Model-View-Controller~\cite{krasner1988description}, DAQ suggests a useful
structure for organizing a program; however, it describes an interface rather
than an implementation.

The DAQ metaphor rests on the idea that users of a scientific computing web
portal will be interested in three things: their data, the tools by which they
can analyze that data, and the record of previous and ongoing analyses. This
also suggests a modular design for the implementing system. If tool developers
need only specify the interface for the initial execution of their tool, it
greatly simplifies the addition of new actions to the system. The Queue view
documents job parameters and meta-information, and permits users to drill down
to a single job in order to resubmit it or retrieve its results. Since the
Queue also shows the ongoing work in the system, it gives users a simple way to
observe the current level of resource contention.

\begin{figure}
\centering
\includegraphics[width=5.5in]{Biocompute_Interface_Model.pdf}
\caption{Biocompute Interface Model}
\label{fig:interface}
\end{figure}

\section{System Interface Implementation} 

In accordance with the DAQ model, users have three separate views of
Biocompute: a filesystem-like interface to their data, a set of dialogues
permitting users to utilize actions provided by the system, and a queue storing
the status, inputs, and outputs of past and current jobs. Figure~\ref{fig:interface}
shows our implementation of this model, and the following sections describe the
interaction and sharing characteristics of its components.

Recently, we have moved to a more mature implementation of the
Data-Action-Queue paradigm, relying heavily on the insights of web design. It
features an extremely flat interface---the deepest functionality is two clicks
from the front page---and a REST-ful~\cite{fielding2000architectural} strategy
that greatly facilitates natural collaboration by permitting users to share
URLs. The site also provides users with periodically updated statistics
regarding disk and CPU usage, and warnings when needed. We have also
implemented an approximation of the cost of performing equivalent operations
through Amazon's EC2 service.

\subsection{Data}

Users interface with their uploaded data much as they would in a regular
filesystem: they can move files between folders, delete them, and perform other
simple operations. 

The same interface can be used to promote files to datasets. To perform such a
promotion, a user enters into a dialogue customizable by Biocompute tool
developers. This screen permits users to set metadata as well as any required
parameters. Once the selected file has been processed by the appropriate tool,
the resulting data are distributed to a subset of the Biocompute
Chirp~\cite{thain2009chirp} stores. The meta-information provided by the users is stored in
a relational database, along with other system information such as known file
locations. Our BLAST module uses this information to improve performance.
%extent of performance improvements. Is it worth doing at all? 

The data stored in Biocompute and the datasets generated from it are often
elements of collaborative projects. User files and datasets may be marked
public or private. A publicly available dataset is no different from an
administrator-created one, allowing users to respond to community needs
directly. This primarily facilitates collaboration between biologists.

\subsection{Action}

Tools provide the core functionality of Biocompute. As Biocompute grew from a
distributed BLAST web portal to a bioinformatics computing environment, we
quickly realized a need for a flexible and low maintenance method for
integrating new tools into the system. Specifically, we needed to provide
application developers with simple hooks into the main site while providing
enough flexibility for including diverse applications. Further, it was
important that application developers be provided with a simple and flexible
tool for describing and exploiting parallelism provided by the underlying
distributed system.

From these requirements, modules emerged. Conceptually, each module consists of
an interface and an execution unit. The interface provides a set of PHP
functions to display desired information to users via the web portal. The
execution component is a template for the job directory used by Biocompute to
manage batch jobs. So far, each Biocompute module utilizes a local script to
generate and run a \textit{Makeflow}~\cite{albrecht2012makeflow} for execution on the
distributed system. Most importantly, we have set up the system to allow
developers to create modules without detailed knowledge of Biocompute. In fact,
two of our available modules have been developed by computer science graduate
students not otherwise involved with Biocompute.

%Might want to add some notes about Brian DuSell's python API
%Though it isn't done... we'll do this in future work instead

\subsection{Queue}

While the distributed system beneath Biocompute may not be of general interest
to its users, the details of job submissions and progress are important.
Biocompute provides methods for users to recall past jobs, track progress of
ongoing jobs, and perform simple management such as pausing or deleting jobs.
Users may view currently running jobs or look through their past jobs. Each job
has drill down functionality, permitting a user to look through or download the
input, output, and error of the job, and to see pertinent meta-data such as job
size, time to completion (or estimated time to completion if the job is still
running), and parameters. 


As with user data, queue jobs may be marked public or private. Making a job
public exposes the source data, parameters, and results to inspection and
replication by other users. Further, queue detail pages provide curious users
with a way to evaluate the current resource contention in the system. 


\subsection{System Description}
%Biocompute is arranged into three primary components. A single server hosts the website, submits batch jobs, and stores data. A relational database stores metadata for the system such as user data, job status, runtime and disk usage statistics. Each dataset is stored on a Chirp~\cite{chirp} cluster of 32 machines that has been integrated into the Condor distributed computing environment. These machines serve as a primary runtime environment for batch jobs and are supplemented by an extended set of machines running Chirp that advertise availability for Biocompute jobs using the Condor classad system. 

Biocompute has evolved beyond a website. Recently, we have separated it into
distinct components: a web interface, a database backend documenting the Queue,
a process responsible for executing enqueued tasks, and a resource manager to
provision computation to executing tasks. Any tool with database access
(admittedly a limited set) can add to the queue. Likewise, anyone may submit
work-queue workers to supplement their own job, or contribute resources broadly
to the entire project.

\begin{figure}
\centering
\includegraphics[width=5.5in]{Biocompute_Architecture.pdf}
\caption{Early Biocompute framework}
\label{fig:earlyarch}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=5.5in]{Fragmented_Biocompute_Architecture.pdf}
\caption{Extended Biocompute framework}
\label{fig:extendedarch}
\end{figure}

\subsection{Biocompute Modules}

Modules provide the core of Biocompute's functionality. These tools cover a
wide range of complexity, from SSAHA~\cite{ning2001ssaha}, a single executable,
to SNPEXP~\cite{holm2010snpexp}, a complex set of Java programs. To support
this variety we developed a module structure which leverages encapsulation,
interface consistency, record keeping, and data redundancy.


Namespacing provides much of the advantage of our modular structure. All of the
elements required to run a module's job must be contained in a job template
folder and follow a prescribed naming convention. In addition, each module must
implement a common interface API in PHP. Essentially, modules inherit their
characteristics from a generic model module (in fact, we have implemented such
a module to facilitate rapid development). A module is first instantiated by
the Biocompute architecture: the template folder is copied to a folder named by
a globally unique job number. This instance is initialized with the job's input
and parameters. It is then executed through the standard wrapper executable
provided in the job template, which in turn creates a Makeflow specification of
the necessary work and submits it to the appropriate batch system. As we have
gained experience in developing these modules, we have constructed a shared
Python library to provide common functionality generically, though only a few
modules use this shared library.
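As an illustration, the instantiation sequence can be sketched in Python. This
is only a sketch: the file names \texttt{params.json} and \texttt{run.sh} are
hypothetical stand-ins for the template's actual contents, not Biocompute's
real conventions.

```python
import json
import shutil
import subprocess
from pathlib import Path

def instantiate_module(template_dir, jobs_root, job_id, input_files, params):
    """Copy a module's job-template folder to a globally unique job-number
    folder and initialize it with the job's inputs and parameters."""
    job_dir = Path(jobs_root) / str(job_id)
    shutil.copytree(template_dir, job_dir)      # fails loudly on job-id reuse
    for src in input_files:                     # stage the input data
        shutil.copy(src, job_dir)
    # record parameters so the job can later be inspected or resubmitted
    (job_dir / "params.json").write_text(json.dumps(params, indent=2))
    return job_dir

def launch(job_dir):
    """Run the standard wrapper, which generates and submits a Makeflow."""
    return subprocess.Popen(["./run.sh"], cwd=job_dir)
```

Keeping the job folder as the unit of state is what makes the record-keeping
and recovery properties described below possible.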


Makeflow~\cite{albrecht2012makeflow} provides application developers with the
simplicity and flexibility needed to develop distributed applications. A
developer uses Makeflow's make-like syntax to describe the dependencies and
execution requirements of their tool (see figure 3 for an example makeflow).
The fault-tolerant Makeflow engine converts this workflow into the necessary
submissions and management of distributed jobs on diverse batch systems.
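For illustration, a local script of the kind each module uses might generate
the make-like Makeflow rules for a split BLAST job as follows. This is a
sketch: it assumes the legacy NCBI \texttt{blastall} command line, and the
file names are arbitrary.

```python
def blast_makeflow(query_parts, database, program="blastp"):
    """Emit Makeflow's make-like rules: one BLAST per query piece, then a
    final rule concatenating the partial outputs into a single result."""
    rules = []
    outputs = []
    for part in query_parts:
        out = f"{part}.out"
        outputs.append(out)
        # rule format: "targets : sources", then a tab-indented command
        rules.append(f"{out}: {part} {database}\n"
                     f"\tblastall -p {program} -d {database} -i {part} -o {out}\n")
    joined = " ".join(outputs)
    rules.append(f"result.out: {joined}\n\tcat {joined} > result.out\n")
    return "\n".join(rules)
```

Because each rule names its inputs and outputs explicitly, the Makeflow engine
can dispatch the independent BLAST rules in parallel and retry any that fail.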

%\begin{figure}
%\centering\includegraphics[width=5.5in]{spec\_and_workflow.eps}
%\caption{An example Makeflow file and corresponding graph of its execution. The distributed acyclic graph of this job is typical of Biocompute jobs .}
%\end{figure}

While Biocompute's queue uses a relational database to store important job
metadata--including job status, time and space utilization statistics, and
input parameters--we are not satisfied with a single point of loss for this
data. To manage the potential damage incurred by database loss or data
corruption, we ensure that all job statistics can be recreated by inspecting
the job directory. Each job records the execution plan specified by a makeflow
file, a log of the timing and return values of the substeps in that execution,
and detailed logs output by each individual executable and by the batch system
selected for that job. It also freezes the input and output of the job so that
future manipulations will not impact our ability to recover initial inputs and
results.
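The recovery property can be illustrated with a sketch that rebuilds summary
statistics from per-step log lines. The four-column log format used here is a
simplified stand-in for the actual Makeflow and batch-system logs, not their
real layout.

```python
def recover_job_stats(log_text):
    """Rebuild job statistics from a per-step execution log.
    Each line: <step name> <return code> <start time> <end time>."""
    steps = []
    for line in log_text.splitlines():
        if not line.strip() or line.startswith("#"):
            continue  # skip blanks and comments
        name, rc, start, end = line.split()
        steps.append((name, int(rc), float(start), float(end)))
    return {
        "substeps": len(steps),
        "failures": sum(1 for s in steps if s[1] != 0),
        # wall clock spans the earliest start to the latest finish
        "wall_time": max(s[3] for s in steps) - min(s[2] for s in steps),
        # total compute is the sum of the individual step durations
        "total_step_time": sum(s[3] - s[2] for s in steps),
    }
```

Since every job directory retains such logs alongside its frozen inputs and
outputs, the database is a cache of this information rather than its sole copy.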


\subsection{Sharing Work Across Machines} 

Our hope for Biocompute has always been to enable broader, more democratic
access to computing resources and a more collaborative way to share data and
software. In service of these goals, two recent changes have been made to
Biocompute: an asynchronous work-execution service, and the integration of the
Work-Queue catalog\_server and work\_queue\_pool.

%Maybe deserves a section to itself?
\subsubsection{Crawler}

Originally, Biocompute's web code responded to user commands by making direct
changes to the filesystem and forking tasks on the user's behalf. This commonly
resulted in long wait times and poor web performance, as synchronous data
transfers and long-running program execution blocked the website. The
submission of new jobs--which involves the transfer of all relevant input
files--would often take minutes.


To reduce this inconvenience, the website was modified to simply enqueue all
desired actions in the database. A crawler process was developed to examine the
database state, perform and monitor the desired actions, and appropriately
update any relevant fields.


The production of this process provides benefits beyond reduced website
latency. First, it allows any process with access to Biocompute's database to
enqueue tasks for execution. At present, the crawler provides this
functionality directly through a single-command mode, with which users may
perform any action allowable by Biocompute from the command line. Second, the
crawler may be executed on a machine other than the web server. This means that
the overhead incurred by file transfers and other resource-intensive operations
need not slow the operation of the website. Finally, it simplifies the process
of deploying a Biocompute-like project in other contexts, as the crawler does
not require a web server to run.

The crawler's single-command mode effectively permits Biocompute to be used as
a command-line tool, and Biocompute modules to be included in pipelines. This
functionality is limited, because the results of such executions are stored in
the website's hierarchy. However, future work to provide tools for automated
data retrieval would permit Biocompute jobs to be leveraged in a variety of new
and powerful ways.

%We need an example of the crawler related database modifications
%We need an example of the crawler being used to in command line mode

%Not sure how much detail to put into this: do we need a rigorous description of how the crawler works?

%Maybe deserves a section to itself?
\subsubsection{Using Work-Queue's Catalog Server}

Recent developments by the Cooperative Computing Lab have permitted us to
expand the ways in which resources can be contributed and shared in Biocompute.
Previous iterations have supported sharing of user data and collaboration
between tool developers and biologists. However, the computing resources
supporting Biocompute were previously fixed, requiring fairly expert
interventions for resource holders to contribute to the pool (by modifying
ClassAds on machines running Condor). With the addition of the resource
management tools available through work\_queue\_pool~\cite{yu2012resource},
it is now substantially simpler for users to donate resources to the
Biocompute pool, or even to specific projects--their own, for instance--within
that pool. The new work\_queue\_pool also provides mechanisms to avoid
overprovisioning of resources, and allows for decentralized worker
submission, reducing load on the web portal host.

%We want an illustration here of the load from condor shadows

%Talk about capacity.

%Talk about automatic resource management with the pool

%Talk about scaling.

\subsection{Data Management for BLAST}
%This needs substantial revision to reflect the evolution over time... I wish
%that I could throw WQ in there
BLAST~\cite{johnson2008ncbi}, the Basic Local Alignment Search Tool, is a
commonly used bioinformatics tool that implements a fast heuristic to align a
set of one or more query sequences against a set of reference sequences.
Because biologists often wish to compare hundreds of thousands of sequences
against databases containing gigabytes of sequence data, these jobs can take
prohibitively long if executed sequentially. However, BLAST jobs are
conveniently parallel: the output of a single BLAST is identical (absent
formatting restrictions) to the concatenation of the outputs of BLASTs of all
disjoint subsets of the desired query sequences against the same reference
database.
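This convenient parallelism amounts to splitting the FASTA query file on
record boundaries and concatenating the per-piece outputs. A sketch of the
splitting step (the round-robin assignment is one simple choice, not
necessarily Biocompute's):

```python
def split_fasta(text, pieces):
    """Split FASTA-formatted query text into disjoint subsets on record
    boundaries ('>' header lines), assigned round-robin across pieces."""
    records, current = [], []
    for line in text.splitlines(keepends=True):
        if line.startswith(">") and current:
            records.append("".join(current))   # close the previous record
            current = []
        current.append(line)
    if current:
        records.append("".join(current))
    chunks = [[] for _ in range(pieces)]
    for i, rec in enumerate(records):
        chunks[i % pieces].append(rec)         # never split a record
    return ["".join(c) for c in chunks]
```

Each chunk is BLASTed independently against the same reference database, and
the outputs are concatenated to recover the sequential result.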


We found that, using the dataset metadata stored in our relational database,
we could rank machines by the number of bytes already pre-staged to each
machine~\cite{carmichael2010biocompute}. This rank function schedules jobs to
the machine requiring the least dynamic data transfer. While this approach
naturally ties our BLAST module to Condor~\cite{litzkow1988condor}, similar
concepts could be used to provide job-placement preferences in other batch
systems.
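In Condor this preference is expressed as a Rank expression in the job's
submit description, but the idea itself is simple enough to sketch in Python
(the metadata shapes here are illustrative):

```python
def rank_machines(machines, required_dbs, locations, db_sizes):
    """Order candidate machines so that the one requiring the least
    dynamic data transfer comes first.  `locations` maps each machine
    to the set of databases already staged on it."""
    def prestaged_bytes(machine):
        staged = locations.get(machine, set())
        return sum(db_sizes[db] for db in required_dbs if db in staged)
    # highest pre-staged byte count first == least transfer required
    return sorted(machines, key=prestaged_bytes, reverse=True)
```

A job is matched to the head of this ordering, falling back to database-less
machines only when all better candidates are occupied.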


This technique requires that the list of database locations be correct. To
maintain correct lists, we use the following method. Any successful job must
have run on a machine containing all required databases, while a job returning
a ``database not found'' error is known to lack one of the needed databases.
We therefore parse job results and update the MySQL database with data
locations accordingly. For jobs that run against only one database, this is
sufficient information to update the database; in the general case, however,
we cannot determine the appropriate action and therefore make no
modifications to the database.
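The audit rule can be sketched as follows; the in-memory mapping stands in
for the MySQL location table, and the record shapes are illustrative.

```python
def update_locations(locations, machine, databases, succeeded):
    """Apply the log-inspection rule to a finished job."""
    if succeeded:
        # a successful job ran where every required database was present
        locations.setdefault(machine, set()).update(databases)
    elif len(databases) == 1:
        # a "database not found" failure with one candidate pinpoints it
        locations.setdefault(machine, set()).discard(databases[0])
    # a failure against several databases is ambiguous: change nothing
    return locations
```

Because the rule only acts on unambiguous evidence, the location table can
drift toward correctness without ever being actively probed.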


To balance load, we transfer databases from a randomly selected machine in the
primary cluster. If the chosen machine lacks the appropriate database, the job
fails and is rescheduled. 


Even with random source selection for load balancing, the potential network
traffic is significant. For example, the largest BLAST database hosted in
Biocompute is over 8 GB. Further, many Biocompute jobs run in hundreds or even
thousands of pieces, so a single job could request 8 GB transfers to dozens or
even hundreds of machines simultaneously. To mitigate this unreasonable
demand, we limit the number of running jobs to 300. Additionally, we cap
transfer time at 10 minutes, which effectively prevents us from transferring
files too large for the available bandwidth.
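The transfer throttle can be sketched as follows. This is only a sketch: the
real transfers are Chirp/Condor file movements, not the pluggable command
shown here, and the job-count cap is enforced by the queue rather than by
this function.

```python
import random
import subprocess

MAX_RUNNING = 300        # cap on simultaneously running job pieces
TRANSFER_TIMEOUT = 600   # ten-minute cap on any single transfer, in seconds

def transfer_from_random_source(sources, make_command,
                                timeout=TRANSFER_TIMEOUT):
    """Attempt a database transfer from a randomly chosen primary-cluster
    machine.  A non-zero exit (the source lacks the database) or a timeout
    (the file is too large for the available bandwidth) reports failure,
    and the caller simply reschedules the job."""
    source = random.choice(sources)   # random choice spreads network load
    try:
        subprocess.run(make_command(source), check=True, timeout=timeout)
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return False
```

Treating both failure modes identically keeps the policy simple: the
rescheduled job will draw a new random source on its next attempt.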


%As Biocompute transitions to rely more and more on the work\_queue system,
%the requirement that we store Blast databases has become increasingly frustrating.
%Here are some ideas currently in test in order to make this work better:
%1. tar up large databases and run Blasts with wq

%\subsection{Semantic Comparison of Manual and Dynamic Distribution Models}
%
%The transition from manual to dynamic distribution required a shift in dataset
%storage semantics within Biocompute. This shift was brought about by the new
%authentication requirements introduced by node to node copying, and by the
%automatic updating characteristics provided by our logfile inspection
%technique. In Table 1 we document the characteristics of datasets before and
%after dynamic distribution.  
%
%\begin{table*}
%\centering
%\caption{Object Characteristics Before and After Implementing Dynamic Distribution}
%{\small
%\begin{tabular}{|l|p{1.0cm}|p{2.5cm}|c|p{2.5cm}|} \hline
%Object&\multicolumn{2}{|c|}{Before Dynamic}&\multicolumn{2}{|c|}{After Dynamic}\\ \hline
%&record&permissions&record&permissions\\ \hline
%New Primary Machine Replica&local fs&Biocompute, local&global&Biocompute, local, nd\\ \hline
%Audited Primary Machine Replica&global&Biocompute, local&global&Biocompute, local, nd\\ \hline
%New Dynamic Replica&n/a&n/a&global&Biocompute, local\\ \hline
%\end{tabular}
%}
%\end{table*}
%
\subsection{Performance of Data Management Schemes}
%will revisit this section with new tests... I'm not sure these numbers are comprehensive enough

In this section we explore the performance characteristics of Biocompute and
evaluate the impact of the data distribution model and of data availability on
these characteristics.

Figure 3 illustrates the cost of the current timeout policy for the dynamic
distribution model, as compared to the original static distribution model.
Figure 4 shows the runtime of the dynamic BLAST query, which was executed
against the NCBI Non-Redundant (NR) BLAST database. This dataset is 7.5 GB,
uncomfortably large for multiple transfers on our network. This effectively
causes any job assigned to a machine without the database to immediately
reschedule itself (in the static case) or to wait for some time and then
reschedule (in the dynamic case). Though one might expect this behavior to
significantly increase the overall runtime, we observe only an 18\% increase
in runtime for the dynamic distribution worst-case run. The characteristics of
the rank function ensure that dynamic jobs are only assigned to database-less
machines when all of the usable machines are already occupied. In the static
case, these jobs would simply wait in the queue until a usable machine became
available. Essentially, the change to dynamic distribution forces a transition
to a busy-wait. While this increases badput, and is suboptimal from the
perspective of users competing with Biocompute for the underlying distributed
resources, it has a limited impact on Biocompute users.

%\figcap{2.25in}
%\begin{figure}
%\begin{minipage}[t]{0.4\linewidth}
%\centering
%{\tiny
%\begin{tabular}{|l|p{1.5cm}|p{1.5cm}|}\hline
%Distribution Method&Execution Time (hours)&Badput (hours)\\ \hline
%dynamic&17.09&517.3\\ \hline
%static&14.49&193.7\\ \hline
%\end{tabular}
%}
%\caption{Worst case cost of dynamic distribution. While the dynamic case shows a 167\% increase in badput, it only suffers an 18\% increase in runtime}
%\label{tab:worstcase}
%\end{minipage}
%\begin{minipage}{0.7\linewidth}
%\centering
%\includegraphics[width=2.5in]{timeline.eps}
%\caption{Worst case dynamic distribution run. The high variation in number of jobs running is a result of failed data transfer attempts.}
%\label{fig:examplerun}
%\end{minipage}
%\end{figure}


%\figcap{2.25in}
%\begin{figure}
%\begin{minipage}[b]{0.5\linewidth}
%\centering
%\includegraphics[width=2.25in]{replication.eps}
%\caption{Runtime of BLAST versus a medium sized (491 MB) database for varying initial dataset replication level}
%\label{fig:replication}
%\end{minipage}
%\hspace{0.5cm}
%\begin{minipage}[b]{0.5\linewidth}
%\centering
%\includegraphics[width=2.25in]{timeout.eps}
%\caption{Runtime of BLAST versus a medium sized (491 MB) database for varying timeout lengths.}
%\label{fig:timeout}
%\end{minipage}
%\end{figure}


%NEED TO MAKE THIS FIGURE
Figure 5 shows the impact of initial replication level on the runtime of a
BLAST job. It is important to note that the rank function remains unmodified
throughout the runtime of a particular job, so the number of jobs arriving at
machines which have acquired the required databases during runtime is
statically determined. The naivety of our source-selection algorithm ensures
that many transfer attempts will fail if the primary cluster is saturated,
which reduces our ability to grow the level of parallelism as the job
progresses. However, because 300 subjobs may be submitted at any one time, it
is likely that each wave of execution will propagate at least a few databases.
The results demonstrate that, over the course of a job, the distribution task
is accomplished quickly.


%NEED TO MAKE THIS FIGURE
Figure 6 shows the impact of varying timeout lengths on the runtime of a BLAST
job. For this test, each database was initially distributed to the 32 core
nodes and, depending on the timeout, potentially distributed to any of the
nodes open to Biocompute jobs. The shortest timeout never allows the target
database to finish transferring, the middle one sometimes succeeds and
sometimes fails, and the longest one always succeeds. As expected, the
increased parallelism generated by successful distribution reduces runtime.


In both of our experiments, long-tail runtimes for some subjobs limited the
benefits of increased parallelism. A more sensitive input-splitting scheme
might mitigate this effect. Alternatively, the fast-abort strategies available
through work\_queue\_pool have been shown to reduce long-tail effects driven
by differences in runtime environments.


Our final timeout value was set high in order to maximize the transferable
database size, as the impact on performance was acceptable even for
untransferable databases. Our final replication value was set to 32, the size
of our dedicated cluster. Since the introduction of dynamic distribution, some
datasets have spread to as many as 90 machines, tripling the parallelism
available for those data.

\subsection{Social Challenges of Biocompute}

Up to this point, we have addressed only how Biocompute meets its technical
challenges. However, as a shared resource, Biocompute requires several
mechanisms to balance the needs of all of its stakeholders. We believe this is
best achieved through fair policies and transparency. In this section we
discuss the policies and tools we have used to facilitate the management of
our two most limited resources: data and compute time.


The sheer size and number of available biological datasets require us to
consider the disk-space costs of Biocompute. The two simplest ways of
controlling these costs are motivating users to limit their disk consumption,
and convincing our users to provide or solicit funding to increase the
available space. In either case, it is necessary to provide users with both
comparative and absolute measures of their disk consumption. To this end we
track disk-space consumption and CPU utilization for all users, and publish a
``scoreboard'' within Biocompute. So far, these pie charts have provided heavy
users with sufficient motivation to delete the jobs and data they no longer
need, and have given us insight into common use cases and overall system
demand.


Resource contention also arises with regard to the available grid
computing resources. Utilization of Biocompute's computational resources tends
to be bursty, generally coinciding with common grant deadlines and with the
beginnings and ends of breaks, when users have extra free time. These
characteristics have produced catastrophic competition for resources during
peaks. Without a scheduling policy, job portions naturally interleaved, causing
the wall clock time of each job to converge toward the wall clock time
necessary to complete all the jobs. To combat this problem we implemented a
first-in-first-out policy for large jobs, and a fast-jobs-first policy for
small jobs (those with fewer than five underlying components). This has allowed
quick searches to preempt long-running ones, and prevents user activity
from seriously impacting the completion timeline for previously started jobs.
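The two-tier policy can be sketched as a small model (illustrative Python; the class and method names are hypothetical, and the production scheduler is more involved):

```python
from collections import deque

SMALL_JOB_LIMIT = 5  # jobs with fewer underlying components are "small"

class PortalScheduler:
    """Toy model of the policy: small jobs (fewer than five components)
    are served first; large jobs run strictly first-in-first-out."""

    def __init__(self):
        self.small = deque()  # fast-jobs-first queue
        self.large = deque()  # FIFO queue for large searches

    def submit(self, job_id, n_components):
        # Classify by the number of underlying components.
        if n_components < SMALL_JOB_LIMIT:
            self.small.append(job_id)
        else:
            self.large.append(job_id)

    def next_job(self):
        # Quick searches preempt long-running ones; large jobs keep
        # their original submission order.
        if self.small:
            return self.small.popleft()
        if self.large:
            return self.large.popleft()
        return None
```

Because large jobs never reorder among themselves, newly submitted work cannot push back the completion timeline of a previously started large job.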


%Add stuff about the pools here

\section{Related Work}

The provenance system at work in Biocompute bears similarity to the body of
work generated by The First Provenance Challenge~\cite{moreau2008special}. For
biologists, the queue and detail views provide extremely high-level provenance
data. At the user interface level, we were most interested in giving the user
an easily communicable summary of the process required to produce their results
from their inputs.  This model hides the complexity of the underlying
execution, and gives our users a command that they can share with colleagues
working on entirely different systems. Obviously, such information would be
insufficient for debugging purposes, and Makeflow records a significantly more
detailed record of the job execution, including the time, location and identity
of any failed subjobs, and a detailed description of the entire plan of
execution. Formal provenance systems are expected to have a mechanism by which
queries can be readily answered~\cite{moreau2008special}. We selected Makeflow
as our workflow engine for its simplicity and its ability to work with a wide
variety of batch systems. A similarly broad workflow and provenance tool could
replace it without modifying the architecture of Biocompute.


%This might go to the workflow section instead. Or perhaps a tie-in?

BLAST has had many parallel implementations~\cite{darling2003design, dumontier2002nblast}. While the
BLAST module of Biocompute is essentially an implementation of parallel BLAST
using Makeflow to describe the parallelism, we do not mean to introduce
Biocompute as a competitor in this space. Rather, we use BLAST to highlight
common problems with the management of persistent datasets, and show a generic
way to ease the problems generated without resorting to database segmentation
or other complex and application-specific techniques. The BLAST module, like
all of Biocompute's modules, uses an unmodified serial executable for each
subtask. 


\section{Conclusions}

We stated that our system should integrate user data, and make its management
simple. Job parameters and meta-information should be kept and results easily
shared and resubmitted. System resources should be scalable, and fairly,
productively, and transparently shared.


Our system has, at least in pilot form, achieved these goals. However, our
solutions are for the most part initial steps. By implementing the current
version of Biocompute we have exposed a need for a much more complex dataset
management tool. With the addition of such a tool, Biocompute's ability to
usefully classify user data and vary its storage and presentation policies
based on these classifications could be expanded to any kind of data. Our first
attempt at implementing a Data-Action-Queue interface proved effective, but
complex.  At Notre Dame, in-house biological data generation is commonplace,
and an effective system to pipeline this data directly into Biocompute would be
very useful to our user community. Likewise, mechanisms to efficiently import
data from online resources would be welcomed.


%The complexities of the data management challenge


Biocompute addresses a broad spectrum of challenges in scientific computing web
portal development. It illustrates some of the technical solutions to social
challenges presented by collaborative and contended resources, implements
techniques for mitigating the performance impact of large semi-persistent
datasets on distributed computations, and provides a useful framework for
exploring and addressing open problems in cluster storage, computation, and
socially sensitive resource management. Regular users range from faculty to
support staff to students, covering the full range of computational expertise
from faculty to undergraduate biology majors; they span 10 research groups and
3 universities. In a year we provided our users with more than eighteen years
of CPU time, and enabled them to perform research using custom datasets.


%\acks
We would like to acknowledge the hard work of Joey Rich, Ryan Jansen, and Brian
Kachmark who greatly assisted in the implementation of Biocompute. Thomas
Potthas implemented the crawler, and Brian Dusell contributed additional
modules and an improved module API. We would also like to thank Andrew
Thrasher, Irena Lanc and Li Yu for their development of the SSAHA and SHRIMP
modules. This work is supported by the University of Notre Dame's strategic
investment in Global Health, Genomics and Bioinformatics and by NSF grant
CNS0643229.


\chapter{Pipelines for Bioinformatics Computing}

\section{Introduction}

In the development of Biocompute, and in ongoing collaborations with
biologists as part of the activities of the Notre Dame Bioinformatics Core
Facility, we have developed a broad spectrum of distributed bioinformatics
pipelines.  Several of these are briefly presented here, along with lessons
learned regarding the effective development and deployment of such pipelines.

\section{Microsatellite Pipeline}
\subsection{Introduction}

Microsatellites--also known as tandem repeats or simple sequence repeats
(SSRs)--are repeated strings of a 1--10 nucleotide sequence of DNA.
%TODO: figure here showing msats
They are found in surprisingly high proportions in most
genomes~\cite{tautz1984simple}, making them good markers for genome
mapping~\cite{jeffreys1985hypervariable, jeffreys1985individual}.  Their repetitive nature makes them particularly
likely to undergo length-changing mutations through ``slippage''.  The
resulting variability makes SSRs particularly useful for fine-grained
phylogenetic and population analyses.

Typically, microsatellites are only useful if they can be reliably extracted
from the genomes of interest.  Primers--sequences that uniquely or
near-uniquely occur flanking the region of interest--are generally developed
to target and extract microsatellites.  The program
Primer3~\cite{koressaar2007enhancements} is often applied to design these
features appropriately.  The usual deliverables of a microsatellite discovery project are a
set of summary statistics regarding the SSRs found, primers for those SSRs
with sufficient unique flanking sequence to be targeted, and spreadsheets for
further analysis and filtering.

\subsection{Pipeline Description}

The goal of this pipeline was to stitch together a chain of pre-existing
programs into a single end-to-end analysis suite with inbuilt parallelism and
appropriate final deliverables. MSATPIPEFIGURE shows the components of
the final workflow.  The constituent programs are Phobos/Sputnik\cite{phobos,sputnik} and Primer3.
Final outputs are produced in tab-separated-values files for easy parsing and
analysis on a variety of platforms.  The structure of the output is provided
in MSATOUTFIGURE.

\subsection{Performance}

The pipeline works end to end, but because the individual steps are
computationally inexpensive, it exposes little exploitable parallelism (though
all steps are data-parallel).

\section{Ka/Ks pipeline}
\subsection{Introduction}
The Ka/Ks ratio is an important measure for detecting evidence of selection\cite{nekrutenko2002ka}.

\subsection{Pipeline Description}
%Here's how the generic pipeline works: alignment of orthologs, kaks on orthologs, all parallel and such hurray, pictures
        
\subsection{Transcriptomics Challenges} 

In the genomic context, the Ka/Ks pipeline can be straightforwardly applied:
complete genomes provide accurate reading frame information and can therefore
be reliably and usefully aligned to orthologs from other genomes.  However,
many non-model organisms have genes characterized only in the context of
transcriptome sequencing projects.  No popular assembler is sensitive
to the requirements of open reading frames within transcripts. This commonly
results in transcriptome assemblies that have substantial sequence similarity
to known genes, but exhibit multiple frameshifts and internal stop codons.
Such errors have little impact on functional annotation, but make Ka/Ks and
other important comparative tasks difficult or impossible.
 
\subsubsection{Existing Solutions}

Several tools to correct such frameshifts have appeared in the literature,
including ESTScan\cite{iseli1999estscan} and prot4EST\cite{wasmuth2004prot4est}.  ESTScan trains Hidden Markov
Models on the genes of a related model organism and then applies those models
to correct frameshifts.  However, its efficacy is limited when no closely
related model is available.  prot4EST applies an ensemble of correction tools,
including ESTScan and the difficult-to-acquire RIKEN decoder, to
achieve the same task. It shows moderate performance improvements over ESTScan,
and is less dependent on the existence of a closely related model.  Both tools
suffer from difficult workflows, and produce only moderate improvements.

\subsubsection{Novel Solution}

We note that the primary target of most transcriptome projects is the
construction of contigs representing the genes expressed by the organism of
interest.  While non-coding regions, such as UTRs, are captured by cDNA library
construction, we posit that coding regions are the primary target of transcript
assembly.  To aid in remaining in-frame, we extract open reading frames from
cDNA reads, and assign them overlaps based on protein-space alignments.  We
evaluate the frame-shift errors (as proxied by internal stop-codons) of
transcripts assembled in this manner as compared to default celera\cite{miller2008aggressive} assemblies
and to Newbler 2.6 assemblies.

\subsubsection{Design}

Our approach involves 6 steps:
\begin{enumerate}
\item Find longest open reading frame for each read
\item Translate reads into protein space, trim nucleotide reads to correspond to ORF
\item Pair reads with shared 25mers (in nucleotide space)
\item Align translated paired reads to one another using water (EMBOSS)\cite{rice2000emboss}
\item Construct OVL records for each alignment
\item Pass OVL file to Celera
\end{enumerate}
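Step 1 above can be sketched as follows (a simplified Python illustration; it considers only forward-strand frames and requires an ATG start, neither of which need hold in the production pipeline):

```python
STOPS = {"TAA", "TAG", "TGA"}

def longest_orf(seq):
    """Return the longest ATG-initiated, stop-free run of codons found in
    any forward reading frame (reverse strand omitted for brevity)."""
    seq = seq.upper()
    best = ""
    for frame in range(3):
        # Split the read into complete codons for this frame.
        codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
        start = None
        for i, codon in enumerate(codons + ["TAA"]):  # sentinel flushes final ORF
            if codon in STOPS:
                if start is not None:
                    candidate = "".join(codons[start:i])
                    if len(candidate) > len(best):
                        best = candidate
                    start = None
            elif codon == "ATG" and start is None:
                start = i
    return best
```

Once each read is trimmed to its longest ORF, steps 2--4 operate entirely on the corresponding protein-space translations.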

%probably want a figure of the workflow and a pipeline .dot or something

\subsubsection{Results}

Our comparison of assemblers is, as always, complicated by substantial
differences between the fundamental output of Newbler and Celera.  

On our small test dataset, Newbler produces only 12 contigs, each considered to
be a separate isotig.  By contrast, neither our modified Celera nor the
standard Celera 7.0 pipeline produced any proper contigs, though each produced
several unitigs (essentially lower-confidence contigs in Celera terminology).
The standard Celera produced 433 unitigs; the modified version produced 157.

However, our modified pipeline produces a higher proportion of in-frame bases
(0.9583) than the default Celera pipeline (0.9483), which, surprisingly,
outperforms the frame-sensitive Newbler 2.6 (0.9277).
For some further context, we provide histograms showing the distribution of
longest ORF length for each assembly.

%histograms as per final project

\subsection{Performance}


%I'm dropping the transcriptome analysis I think.  It'd basically just be a rehash of the paper I wrote with Andrew
%But that pipeline is largely defunct, and the most interesting transcriptome stuff has to do with the 
%Ka/Ks pipeline--namely adjusting the sequences to be amenable for in-frame alignments.  Since that can include 
%The stuff with Lauren, I think I'm just going to do it that way.
%\section{Transcriptome Analysis}
%\subsection{Introduction}
%\subsection{Pipeline Description}
%\subsection{Performance}

\section{Practical Difficulties}
Scaling pipelines up, and porting them to different environments, are persistently difficult in practice.
\subsection{Evil Magic Numbers}
\subsection{Encapsulation}
\subsection{Lessons Learned}

\chapter{Rare Codon Cluster Conservation}

\section{Abstract}

Biologists commonly consider synonymous mutations--those mutations which alter
a DNA sequence, but not the amino acid sequence in the corresponding protein--
to be functionally neutral. The rareness of particular codon synonyms varies
between organisms\cite{duret2002evolution}. Previous work has shown that common codons are
translated more quickly than their rarer synonyms, implying that protein
production can be made more efficient by increasing the usage of common
codons\cite{duret2002evolution}.  Due to this selective pressure, rare codons were assumed
to be randomly distributed consequences of genetic drift\cite{smith2001translationally}. However,
by quantifying the relative rarity of codons, our collaborators were able to
show that rare codons cluster more than is expected by random chance across a
diverse section of the tree of life\cite{clarke2008rare}.  Our biological collaborators
hypothesize that rare codon clusters have functional effects. If this were the
case, we would expect to find evidence of rare codon cluster conservation in
orthologous genes across organisms. In this project, we develop a pipeline to
quantify codon rarity and the conservation thereof in alignments of orthologous
genes.

\section{Introduction}
%How do we test this hypothesis?
%Find orthologs from diverse genomes
%align them with one another
%calculate %min-max for each sequence in alignemtn
%calculate a significance threshold and splice locations of significant rare codon clusters into
%multiple alignment
%test to evaluate the likelihood of significantly rare codon clusters aligning by chance

%Let's talk about biological background
%1. Nucleotide sequence is degenerate
%2. Different Codons appear at different frequencies
%3. Codon rareness is a proxy for translation frequency (Translation figure)

Each amino acid is coded by a nucleotide triplet known as a codon.  However, there are only
20 amino acids and 64 possible codons, 3 of which are commonly reserved as ``stop'' codons indicating
the end of a coding sequence.  This degeneracy allows for synonymous coding for certain amino acids.
The most common coding table is shown in Figure \ref{fig:codontable}.

\begin{figure}
\includegraphics[width=5.5in]{codon_table.pdf}
\caption{Standard codon table}
\label{fig:codontable}
\end{figure}

Previously, researchers have shown that rare codons are translated more slowly
than their more common synonyms \cite{research}. Though the correlation is
imperfect, this allows us to use codon rarity as a proxy for translation rate.
Translation is achieved in the ribosome by binding tRNA to mRNA to produce
amino acid chains as conceptualized in Figure \ref{fig:translationdiagram}.
Rare codons tend to have fewer matching tRNAs, slowing the incorporation of
amino acids into the protein, and increasing the rate of error.

\begin{figure}
\includegraphics[width=5.5in]{Translation_diagram.pdf}
\caption{The ribosome binds tRNA to mRNA to produce amino acid chains--a process known as translation.  Rare codons have fewer tRNAs, slowing the incorporation of amino acids into the protein.}
\label{fig:translationdiagram}
\end{figure}

Our collaborators developed an algorithm to characterize the rarity of
given stretches of sequence while controlling for amino acid content \cite{minmax}.
Figure \ref{fig:minmaxalgorithm} diagrams their algorithm for performing this
calculation.

\begin{figure}
\includegraphics[width=5.5in]{min-max-algorithm.pdf}
\caption{The \%min-max algorithm for determining the amino acid constrained
codon rarity of sequence windows}
\label{fig:minmaxalgorithm}
\end{figure}
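A minimal sketch of the \%min-max calculation, under the assumption that it averages the actual, maximum, minimum, and mean synonymous codon usage over each window (the toy usage table and function names below are illustrative only; real analyses use the organism's full 64-codon table):

```python
# Toy usage table {codon: (amino_acid, frequency)} for two amino acids.
USAGE = {
    "AAA": ("K", 0.70), "AAG": ("K", 0.30),
    "GGC": ("G", 0.50), "GGA": ("G", 0.30),
    "GGG": ("G", 0.15), "GGT": ("G", 0.05),
}

def window_minmax(codons):
    """%min-max for one window: +100 if every codon is its amino acid's
    most common synonym, -100 if every codon is the rarest, 0 at average."""
    by_aa = {}
    for codon, (aa, freq) in USAGE.items():
        by_aa.setdefault(aa, []).append(freq)
    actual = avg = hi = lo = 0.0
    for codon in codons:
        aa, freq = USAGE[codon]
        freqs = by_aa[aa]
        actual += freq                      # observed usage
        avg += sum(freqs) / len(freqs)      # mean over synonyms
        hi += max(freqs)                    # most common synonym
        lo += min(freqs)                    # rarest synonym
    if actual >= avg:
        return 0.0 if hi == avg else 100.0 * (actual - avg) / (hi - avg)
    return -100.0 * (avg - actual) / (avg - lo)

def percent_minmax(codons, window=17):
    """Sliding-window %min-max profile along a coding sequence."""
    return [window_minmax(codons[i:i + window])
            for i in range(len(codons) - window + 1)]
```

Normalizing against the window's own maximum and minimum possible usage is what controls for amino acid content: two windows with identical protein sequence but different synonym choices receive directly comparable scores.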

Using this method, they established strikingly non-random clustering of codon
rarity across many different organisms, as shown in Figure
\ref{fig:rarecodonscluster}. 

If these clusterings have functional effects--e.g., through cotranslational
folding--then we would expect them to be conserved across orthologs.  We find
aligned regions of rareness at far greater frequency than would be expected by
random chance.  These results hold after controls for amino acid sequence,
GC-content, and codon-pair bias. %I hope
The software produced provides a framework to evaluate the significance of 
alignments of regions of interest in orthologs, as well as many useful tools
to produce appropriately constrained random control sequences on distributed
computing resources.

\begin{figure}
\includegraphics[width=5.5in]{rare-codons-cluster.pdf}
\caption{Rare codons cluster}
\label{fig:rarecodonscluster}
\end{figure}

\section{Methods}
%this bit applies to both
\subsection{pre-processing}
We identify orthologs using the OrthoMCL tool and align those orthologs with
the MUSCLE multiple alignment tool\cite{thing5,thing6}.  We then use our
implementation of the \%min-max algorithm to identify rare codon clusters in
each aligned sequence.

%original methods
We establish a significance threshold for \%min by comparing gene results to 200
random reverse translations of the gene's amino acids using the organism's
global codon usage.
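The random reverse translations can be sketched as follows (illustrative Python; CODON_USAGE is a stand-in for the organism's full global codon-usage table):

```python
import random

# Hypothetical per-amino-acid codon usage (fractions sum to 1 per residue);
# a real run uses the organism's full global codon-usage table.
CODON_USAGE = {
    "K": {"AAA": 0.7, "AAG": 0.3},
    "M": {"ATG": 1.0},
}

def random_reverse_translation(protein, rng=random):
    """One random reverse translation: each amino acid is replaced by a
    synonymous codon drawn in proportion to its global usage frequency."""
    out = []
    for aa in protein:
        codons = list(CODON_USAGE[aa])
        weights = [CODON_USAGE[aa][c] for c in codons]
        out.append(rng.choices(codons, weights=weights, k=1)[0])
    return "".join(out)
```

Repeating this 200 times per gene yields a null distribution of \%min values against which the observed gene is compared.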

\subsection{Peak Method}
Our initial implementation focused on identifying aligned windows of rareness
on a per-position basis.  In this method, codons with \%min below the
threshold that are also minimal within a 17-codon window were identified as
``peaks''. By splicing the distance from the nearest peak into each sequence's
position in the multiple alignment, we enable exact calculation of the
probability of the level of peak correspondence at each position. Figure
\ref{fig:peakfile} shows an example file in this format.  The processing
workflow for this method is shown in Figure \ref{fig:rccarchitecture}.
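The peak-identification rule can be sketched as follows (hypothetical function name; boundary handling and threshold choice in the production code may differ):

```python
def find_peaks(minmax, threshold, window=17):
    """Indices whose %min-max value falls below `threshold` AND is the
    minimum within the surrounding `window`-codon neighborhood."""
    half = window // 2
    peaks = []
    for i, value in enumerate(minmax):
        if value >= threshold:
            continue  # not rare enough to be a candidate
        lo, hi = max(0, i - half), min(len(minmax), i + half + 1)
        if value == min(minmax[lo:hi]):  # local minimum within the window
            peaks.append(i)
    return peaks
```

Each sequence's per-position distance to its nearest peak is then spliced into the multiple alignment for the correspondence calculation.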

%original workflow

\begin{figure}
\includegraphics[width=5.5in]{RC-Clust_Architecture.pdf}
\caption{Original pipeline to evaluate significance of rare codon cluster alignment}
\label{fig:rccarchitecture}
\end{figure}

\subsection{Poisson Sampling}
%modify to talk about peaks, rather than positions

Our initial probability calculation makes a few simplifying assumptions--most
importantly that each rare codon peak in a given sequence is an independent
random event, and is representative of a rare codon cluster. Using this
assumption enables us to utilize Poisson Sampling as shown in Figure
\ref{fig:poissonsampling}.  This calculation is exponential in the number of
aligned sequences, but can be performed in parallel for each position in the
alignment (rows in the provided figure).
%move (rows in the provided figure) to caption perhaps?

\begin{figure}
\includegraphics[width=5.5in]{Bernoulli_Sampling.pdf}
\caption{Poisson Sampling}
\label{fig:poissonsampling}
\end{figure}

\subsection{Limitation of Poisson}

Though the Poisson Sampling strategy produced multiple positions with low
p-values, it suffered from several important conceptual limitations. The method
splits clusters of codons into multiple ``independent'' parts, though in
practice this does not reflect the biological linkage of the split clusters.
This splitting increases the total number of interesting events in each
alignment, diluting the p-values through false proliferation.  Furthermore, it
is difficult to determine the appropriate null model to correct
multiple-sampling problems with the calculated p-values. Standard estimates of
the expected number of significant events predicted far more correspondences
than were present in the dataset.  

Finally, as the distributed computing strategy developed only exploits
parallelism at the granularity of alignment columns, large ortholog groups can
still exceed practical computational limits.  Approximations of the method
could be used to work around these limits at the price of precision.  

\subsection{Holistic Evaluation of Rare Codon Co-Occurrence}

The challenges with the ``peak'' strategy led us to consider metrics and methods
for directly evaluating the likelihood that a given orthology group would show
the observed amount of rare-codon co-occurrence by chance alone.  To do so, we
established or borrowed a variety of measures for comparing distributions and
applied them to samples and randomized populations.  We calculate significance
by comparing to two random controls: one holding rare codon cluster presence
constant, and the other controlling for protein sequence, GC-enrichment,
and codon pair bias.

\subsection{Comparison Measures} 

Intuitively, we are interested in orthology groups exhibiting more overlap of
predominantly rare regions than can be explained by random chance. We characterize
the degree of overlap through a histogram of position counts with overlaps of varying
depth. We compute several measures on these histograms to summarize the extent
of overlap in a given group as compared to random shuffles of the rare codons
in the same alignment.

All of the measures operate by comparison of a sample histogram S to a random
histogram R, where both S and R bin positions by overlap depth (overlap depth
on x axis, \# positions with that depth on y axis).  In addition to the base bin
histograms, the algorithm can consider histograms linearly weighted by depth,
and cumulative histograms--where each bin contains all values greater than or
equal to the given depth.  Figure \ref{fig:ovlhistexample} shows examples of 
such histograms.
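The histogram construction can be sketched as follows (illustrative Python; each mask marks the rare positions of one aligned sequence):

```python
from collections import Counter

def overlap_histograms(masks):
    """From aligned boolean rare-codon masks (one list per sequence),
    build the base, depth-weighted, and cumulative overlap histograms."""
    depths = [sum(col) for col in zip(*masks)]   # overlap depth per column
    base = Counter(depths)                       # depth -> # positions
    max_d = max(base)
    # Linearly weighted variant: each bin scaled by its depth.
    weighted = {d: d * base.get(d, 0) for d in range(max_d + 1)}
    # Cumulative variant: each bin holds all positions of depth >= d.
    cumulative = {d: sum(base.get(e, 0) for e in range(d, max_d + 1))
                  for d in range(max_d + 1)}
    return base, weighted, cumulative
```

The same construction is applied to the sample alignment and to each random shuffle, so the three histogram variants can be compared measure by measure.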

\begin{figure}
\includegraphics[width=5.5in]{ovlhistexample.png}
\caption{Overlap Histogram Construction.  All histograms constructed from shown alignment.}
\label{fig:ovlhistexample}
\end{figure}

To structure and automate our search for overlaps of interest, we use the following metrics:
\subsubsection{n99}
n99($h$) is the lowest depth $x$ such that the bins for all depths up to and including $x$ in $h$ contain at least 99 percent of all positions.
If $\mathrm{n99}(R) \geq \mathrm{n99}(S)$, we consider $R$ a similarly significant example and add one to our fail count.  The p-value is fail-count/trial-count.
%[example image]

\subsubsection{skew}
skew($h$) is the mathematical skew of $h$.
If $\mathrm{skew}(R) \geq \mathrm{skew}(S)$, we consider $R$ a similarly significant example and add one to our fail count.  The p-value is fail-count/trial-count.
%[example image]

\subsubsection{solomon}
solomon($h$, $n$) is the total number of positions with depth greater than $n$ in $h$.
If $\mathrm{solomon}(R) \geq \mathrm{solomon}(S)$, we consider $R$ a similarly significant example and add one to our fail count.  The p-value is fail-count/trial-count.
%[example image]

\subsubsection{kurtosis}
We include kurtosis not as a direct estimate of the right-tail weight of the distributions, but rather to inform our interpretation of results showing significantly different skew.
kurtosis($h$) is the excess kurtosis of $h$.
If $\mathrm{kurtosis}(R) \geq \mathrm{kurtosis}(S)$, we consider $R$ a similarly significant example and add one to our fail count.  The p-value is fail-count/trial-count.
%[example image]
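The fail-count scheme shared by these measures can be sketched as follows (illustrative Python re-implementations of n99 and solomon, not the production code):

```python
def n99(hist):
    """Smallest depth x such that bins 0..x hold >= 99% of all positions.
    `hist` maps overlap depth -> number of positions at that depth."""
    total = sum(hist.values())
    running = 0
    for x in sorted(hist):
        running += hist[x]
        if running >= 0.99 * total:
            return x
    return max(hist)

def solomon(hist, n):
    """Total number of positions with overlap depth greater than n."""
    return sum(count for depth, count in hist.items() if depth > n)

def permutation_p_value(measure, sample_hist, random_hists):
    """p-value = fraction of random histograms scoring at least as high
    as the sample under `measure` (fail count over trial count)."""
    fails = sum(1 for r in random_hists if measure(r) >= measure(sample_hist))
    return fails / len(random_hists)
```

A measure taking extra arguments, such as solomon, is passed through a closure fixing the depth cutoff before the permutation test.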

\subsubsection{Correlation of Comparison Measures}
We expected the comparison measures to be correlated with one another, and they
were; we also expected them to differ in sensitivity, and they did.  In short,
the measures are similar enough to capture the same underlying phenomenon, yet
different enough to be informative in aggregate.
%figure of the correlation matrix

\subsection{Controlling for GC-content}

Though the random shuffles establish the significance of the degree of overlap,
there are competing explanations for it. We wish to distinguish
selective pressure favoring rare codons from rareness dictated by
GC content and the constraints of the amino acid sequence. To do this, we produce
a set of negative controls--random reverse translations maintaining GC content
similar to the sample's. Sequences for which these controls produce similarly aligned rare
codon clusters are excluded in our search for candidates for co-translational folding.

%Obviously, we need to describe our methods for GC-content and codon-pair bias control

\subsection{Visualization} 

To present the information provided by these measures, we constructed a
multipart visualization as seen in Figure \ref{fig:visualizationexample}. The top
badge shows the inverse logs of the measured p-values for non-cumulative
histograms, and the bottom badge shows cumulatively calculated measures.  The blue
bars in the histogram represent the sample values, the candlesticks show the
minimum, first, second, and third quartiles, and maximum of the random
permutation histograms. The bottom chart describes the alignment of orthologous
genes, with areas of rare codons denoted by thickened lines.

\begin{figure}
\includegraphics[width=5.5in]{example_vis.png}
\caption{Example Visualization}
\label{fig:visualizationexample}
\end{figure}

\section{Results}

Here we discuss the computational performance of our tools, and the preliminary
biological results generated by their execution.  Our tools sustain substantial
parallelism and run in diverse environments.

\subsection{Performance}


\subsection{Biological Results}


\chapter{CONCLUSION AND FUTURE WORK}

\section{Future Work}
\subsection{Improvements to Biocompute}

Switching BLAST to a fully (or mostly) Work Queue-based implementation would be
valuable.  Other desirable improvements include tarring databases, longer-term
caching capabilities on work\_queue\_workers, and Galaxy-compatible modules.

\subsection{Future Rare Codon Work}

Several extensions are straightforward: per-position probabilities (a modest
modification to the existing permutation code), an implementation in C for
performance improvements, use of a better proxy for translation rate than
\%min-max, and inclusion of other factors that might affect translation rate,
such as specific sequence motifs.

\section{Conclusions}

We hope to have provided a comprehensive overview of the common challenges in bioinformatics, ranging from collaboration and resource sharing to 
computation to novel algorithm development.  We describe general principles aiding in the design of bioinformatics web portals and distributed
pipelines.  We show implementations exemplifying these principles, and describe solutions to commonly encountered challenges.  Finally, we present
novel bioinformatics research applying these principles to achieve meaningful biological results from large scale computations.

\backmatter
\bibliographystyle{nddiss2e}
\bibliography{rcarmich-thesis}

\end{document}
