\chapter{Pipelines for Bioinformatics Computing}

\section{Introduction}

In the development of Biocompute, and in ongoing collaborations with
biologists as part of the activities of the Notre Dame Bioinformatics Core
Facility, we have developed a broad spectrum of distributed bioinformatics
pipelines.  Several of these are briefly presented here, along with lessons
learned regarding the effective development and deployment of such pipelines.

\section{Microsatellite Pipeline}
\subsection{Introduction}

Microsatellites--also known as tandem repeats or simple sequence repeats
(SSRs)--are stretches of DNA in which a short motif of 1--10 nucleotides is
repeated in tandem.
%figure here showing msats
They are found in surprisingly high proportions in most
genomes~\cite{tautz1984simple}, making them good markers for genome
mapping~\cite{jeffreys1985hypervariable, jeffreys1985individual}.  Their
repetitive nature makes them particularly likely to undergo length-changing
mutations through ``slippage''.  The resulting variability makes SSRs
particularly useful for fine-grained phylogenetic and population analyses.
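To illustrate the kind of pattern involved, a naive exact-repeat scan can be
written with backreferencing regular expressions.  This is only a sketch:
dedicated detectors such as Phobos and Sputnik tolerate imperfect repeats and
use far more efficient search strategies.

```python
import re

def find_microsatellites(seq, min_motif=1, max_motif=10, min_repeats=4):
    """Scan a DNA string for exact tandem repeats of short motifs.

    Returns (start, motif, copy_number) tuples.  Pure-Python sketch;
    motif lengths of 1--10 nt match the SSR definition above, and the
    min_repeats threshold is an illustrative choice, not a standard.
    """
    hits = []
    for k in range(min_motif, max_motif + 1):
        # (.{k}) captures a candidate motif; \1{n,} demands n further
        # tandem copies immediately following it.
        pattern = re.compile(r"(.{%d})\1{%d,}" % (k, min_repeats - 1))
        for m in pattern.finditer(seq):
            hits.append((m.start(), m.group(1), len(m.group(0)) // k))
    return hits
```

For example, `find_microsatellites("GGATATATATATGG")` reports a single
dinucleotide repeat: the motif `AT` starting at position 2 with five copies.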

Microsatellites are typically only useful if they can be reliably extracted
from the genomes of interest.  Primers--sequences that occur uniquely or
near-uniquely in the regions flanking the repeat of interest--are generally
developed to target and amplify microsatellites.  The program
Primer3~\cite{koressaar2007enhancements} is often applied to design these
primers.  The usual deliverables of a microsatellite discovery project are a
set of summary statistics regarding the SSRs found, primers for those SSRs
with sufficient unique flanking sequence to be targeted, and spreadsheets for
further analysis and filtering.

\subsection{Pipeline Description}

The goal of this pipeline was to stitch together a chain of pre-existing
programs into a single end-to-end analysis suite with built-in parallelism and
appropriate final deliverables. MSATPIPEFIGURE shows the components of
the final workflow.  The constituent programs are
Phobos/Sputnik~\cite{phobos,sputnik} and Primer3.  Final outputs are produced
as tab-separated-values files for easy parsing and analysis on a variety of
platforms.  The structure of the output is provided in MSATOUTFIGURE.

\subsection{Performance}

The pipeline works correctly, but because the individual steps are already
quite efficient, it exposes little exploitable parallelism in practice (though
all steps are data-parallel).

\section{Ka/Ks Pipeline}
\subsection{Introduction}
The Ka/Ks ratio--the ratio of nonsynonymous to synonymous substitution
rates--is an important measure for finding evidence of
selection~\cite{nekrutenko2002ka}.
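Concretely, the statistic compares substitution rates normalized by the number
of sites at which each kind of substitution could occur:
\[
\frac{K_a}{K_s} = \frac{\mbox{nonsynonymous substitutions per nonsynonymous site}}
                       {\mbox{synonymous substitutions per synonymous site}},
\]
with values above one suggesting positive selection, values near one neutral
evolution, and values below one purifying selection.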

\subsection{Pipeline Description}
%Here's how the generic pipeline works: alignment of orthologs, kaks on orthologs, all parallel and such hurray, pictures
        
\subsection{Transcriptomics Challenges} 

In the genomic context, the Ka/Ks pipeline can be applied straightforwardly:
complete genomes provide accurate reading-frame information and can therefore
be reliably and usefully aligned to orthologs from other genomes.  However,
many non-model organisms have genes characterized only in the context of
transcriptome sequencing projects.  No popular assembler is sensitive
to the requirements of open reading frames within transcripts.  This commonly
results in transcriptome assemblies that have substantial sequence similarity
to known genes, but exhibit multiple frameshifts and internal stop codons.
Such errors have little impact on functional annotation, but make Ka/Ks and
other important comparative tasks difficult or impossible.
 
\subsubsection{Existing Solutions}

Several tools to correct such frameshifts have appeared in the literature,
including ESTScan~\cite{iseli1999estscan} and, notoriously,
prot4EST~\cite{wasmuth2004prot4est}.  ESTScan trains Hidden Markov Models on
the genes of a related model organism and then applies those models to correct
frameshifts.  However, its efficacy is limited when no closely related model
is available.  prot4EST applies an ensemble of correction tools, including
ESTScan and the infamously difficult-to-acquire RIKEN decoder, to achieve the
same task.  It shows moderate performance improvements over ESTScan, and is
less dependent on the existence of a closely related model.  Both tools suffer
from difficult workflows, and produce only moderate improvements.

\subsubsection{Novel Solution}

We note that the primary target of most transcriptome projects is the
construction of contigs representing the genes expressed by the organism of
interest.  While non-coding regions, such as UTRs, are captured by cDNA library
construction, we posit that coding regions are the primary target of transcript
assembly.  To aid in remaining in-frame, we extract open reading frames from
cDNA reads, and assign them overlaps based on protein-space alignments.  We
evaluate the frameshift errors (as proxied by internal stop codons) of
transcripts assembled in this manner as compared to default
Celera~\cite{miller2008aggressive} assemblies and to Newbler 2.6 assemblies.

\subsubsection{Design}

Our approach involves six steps:
\begin{enumerate}
\item Find the longest open reading frame (ORF) for each read
\item Translate reads into protein space, trimming nucleotide reads to correspond to the ORF
\item Pair reads that share 25-mers (in nucleotide space)
\item Align translated paired reads to one another using water (EMBOSS)~\cite{rice2000emboss}
\item Construct OVL records for each alignment
\item Pass the OVL file to Celera
\end{enumerate}
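The ORF-trimming of steps 1 and 2 can be sketched as follows.  This is a
minimal illustration, assuming an ORF is a stop-free run of codons beginning
at an ATG, searched over all six frames; the production pipeline's exact ORF
definition may differ.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def reverse_complement(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def longest_orf(read):
    """Return the longest ORF in a read as a trimmed nucleotide string.

    Scans both strands in all three frames; an ORF runs from an ATG to
    the next stop codon (or the end of the read).  Sketch only.
    """
    best = ""
    for seq in (read, reverse_complement(read)):
        for frame in range(3):
            codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
            start = None
            # The trailing None sentinel flushes an ORF that reaches the
            # end of the read without hitting a stop codon.
            for idx, codon in enumerate(codons + [None]):
                if codon == "ATG" and start is None:
                    start = idx
                elif (codon is None or codon in STOP_CODONS) and start is not None:
                    orf = "".join(codons[start:idx])
                    if len(orf) > len(best):
                        best = orf
                    start = None
    return best
```

For instance, `longest_orf("CCATGAAATTTTAGCC")` trims the read to
`"ATGAAATTT"`, the nine bases between the ATG and the TAG stop in frame 2.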

%probably want a figure of the workflow and a pipeline .dot or something
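Step 3, pairing reads on shared 25-mers, amounts to an exact k-mer index.  A
sketch follows; note that this hypothetical version does no filtering of
low-complexity seeds, which a production overlapper would need.

```python
from collections import defaultdict
from itertools import combinations

def pair_reads_by_kmers(reads, k=25):
    """Pair reads sharing at least one exact k-mer.

    `reads` maps read id -> nucleotide string.  Builds an inverted
    index from each k-mer to the reads containing it, then emits the
    unique unordered pairs of read ids.  Sketch only.
    """
    index = defaultdict(set)
    for rid, seq in reads.items():
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].add(rid)
    pairs = set()
    for rids in index.values():
        for a, b in combinations(sorted(rids), 2):
            pairs.add((a, b))
    return pairs
```

Each resulting pair is then a candidate for the protein-space alignment of
step 4; only pairs whose alignments succeed become OVL records.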

\subsubsection{Results}

Our comparison of assemblers is, as always, complicated by substantial
differences between the fundamental output of Newbler and Celera.  

On our small test dataset, Newbler produces only 12 contigs, each considered to
be a separate isotig.  By contrast, neither our modified Celera nor the
standard Celera 7.0 pipeline produced any proper contigs, though each produced
several unitigs (roughly, lower-confidence contigs in Celera's terminology).
The standard Celera pipeline produced 433 unitigs; the modified version
produced 157.

However, our modified pipeline produces a higher proportion of in-frame bases
(0.9583) than the default Celera pipeline (0.9483), which, surprisingly,
itself outperforms the frame-sensitive Newbler 2.6 (0.9277).
For further context, we provide histograms showing the distribution of
longest ORF length for each assembly.
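The internal-stop-codon proxy for frameshift errors can be computed with a
sketch like the following.  For simplicity this hypothetical version scores
only frame 0, assuming the contig is expected to be a single in-frame coding
sequence; the evaluation above considers each contig's best frame.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def internal_stop_fraction(cds):
    """Fraction of non-terminal codons that are stops, read in frame 0.

    A frameshift-free coding sequence should contain no internal stops,
    so a higher fraction suggests more assembly frame errors.  Sketch.
    """
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    internal = codons[:-1]  # the terminal codon may legitimately be a stop
    if not internal:
        return 0.0
    return sum(c in STOP_CODONS for c in internal) / len(internal)
```

For example, `"ATGAAATAGAAATAA"` contains one stop (TAG) among its four
internal codons, giving a fraction of 0.25.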

%histograms as per final project

\subsection{Performance}


%I'm dropping the transcriptome analysis I think.  It'd basically just be a rehash of the paper I wrote with Andrew
%But that pipeline is largely defunct, and the most interesting transcriptome stuff has to do with the 
%Ka/Ks pipeline--namely adjusting the sequences to be amenable for in-frame alignments.  Since that can include 
%The stuff with Lauren, I think I'm just going to do it that way.
%\section{Transcriptome Analysis}
%\subsection{Introduction}
%\subsection{Pipeline Description}
%\subsection{Performance}

\section{Practical Difficulties}
Pipelines of this kind are difficult to scale up and to port to different
execution environments.
\subsection{Evil Magic Numbers}
\subsection{Encapsulation}
\subsection{Lessons Learned}
