\documentclass[12pt,titlepage]{article}
%\usepackage[spanish]{babel}
%\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
%\usepackage{caratula}
\usepackage{float}
\usepackage{subfigure}
\usepackage{wrapfig}
\usepackage{listings}
%\usepackage{float}
\usepackage{xunicode,xltxtra,url,parskip}
\usepackage{fontspec}
\defaultfontfeatures{Mapping=tex-text}
\setmainfont[SmallCapsFont = Fontin SmallCaps]{Fontin}
\lstset{language=C,basicstyle=\small\tt,keywordstyle=\bf,tabsize=3,breaklines=true,linewidth=16cm,postbreak={\mbox{$\rightsquigarrow$}},prebreak={\mbox{$\rightsquigarrow$}}}

%\usepackage{a4wide}
%\usepackage{amssymb}
%\usepackage{amsmath}
 \usepackage{enumerate}
 \parindent = 12 pt
 \parskip = 12 pt
%\usepackage[width=15.5cm, left=3cm, top=2.5cm, height= 24.5cm]{geometry}

\usepackage{color}
\usepackage{url}
\definecolor{lnk}{rgb}{0,0,0.4}
\usepackage[colorlinks=true,linkcolor=lnk,citecolor=blue,urlcolor=blue]{hyperref}

\newcommand{\func}[2]{\texttt{#1}(#2) :}
\newcommand{\tab}{\hspace*{2em}}
\newcommand{\FOR}{\textbf{for }}
\newcommand{\TO}{\textbf{ to }}
\newcommand{\IF}{\textbf{if }}
\newcommand{\WHILE}{\textbf{while }}
\newcommand{\THEN}{\textbf{then }}
\newcommand{\ELSE}{\textbf{else }}
\newcommand{\RET}{\textbf{return }}
\newcommand{\MOD}{\textbf{ \% }}
\newcommand{\OR}{\textbf{ or }}
\newcommand{\NOT}{\textbf{ not }}
\newcommand{\tOde}[1]{\tab {\small\ensuremath{\mathcal{O}(#1)}}}
\newcommand{\Ode}[1]{{\small\ensuremath{\mathcal{O}\left(#1\right)}}}
\newcommand{\VSP}{\vspace*{3em}}
\newcommand{\Pa}{\vspace{5mm}}
\newenvironment{pseudo}{\noindent\begin{tabular}{ll}}{\end{tabular}\VSP}

\newenvironment{while}{\WHILE \\ \setlength{\leftmargin}{0em} }{}

\newcommand{\iif}{\Leftrightarrow}
\newcommand{\gra}[1]{\noindent\includegraphics[scale=.70]{#1}\\}
\newcommand{\gras}[2]{\noindent\includegraphics[scale=#2]{#1}\\}
\newcommand{\grasize}[2]{\noindent\includegraphics[width=#2]{#1}\\}
\newcommand{\gram}[1]{\noindent\includegraphics[scale=.50]{#1}}
\newcommand{\dirmail}[1]{\normalsize{\texttt{#1}}}
\newenvironment{usection}[1]{\newpage\begin{section}*{#1}	\addcontentsline{toc}{section}{#1}}{\end{section}}
\newenvironment{ucsection}[1]{\newpage\begin{section}*{#1}	\addcontentsline{toc}{section}{#1}}{\end{section}}
\newenvironment{usubsection}[1]{\begin{subsection}*{#1}	\addcontentsline{toc}{subsection}{#1}}{\end{subsection}}

\newcommand{\superref}[1]{\textsuperscript{\ref{#1}}}

\begin{document}

%\materia{Algorithms in Bioinformatics}
%\titulo{PSSM vs ANN}
%\autor{Kevin Allekotte}{kevinalle@gmail.com}

%\abstracto{
%	sdh sldkfj slkdfj lsdhf lsdfh lsjghljsg ldjkj hfkjd hgljdfhlg.
%}

%\title{sdg}
%\author{yo\and tu\and el}
%\date{Summer 2011}
%\maketitle
\begin{titlepage}
\begin{center}
{\textbf{Universidad de Buenos Aires}\\Facultad de Ciencias Exactas y Naturales}

\vspace{1.5cm}

\begin{tabular}{r}
{\Large \bfseries Algorithms in Bioinformatics}\\
\hline
\textsc{\small Organizer: Morten Nielsen}\\
\end{tabular}
\vspace{2cm}

\begin{tabular}{l}
{\large Project}\\
\textbf{\Huge PSSM vs ANN}\\
\end{tabular}

\vspace{1cm}

\begin{minipage}[b]{0.7\linewidth}
%\begin{tabular}{l}
\textbf{Abstract}\\
{In this document we analyze the differences between the methods PSSM (Position-Specific Scoring Matrix) and ANN (Artificial Neural Network) for peptide-MHC binding prediction.}\\
%\end{tabular}
\end{minipage}

\vspace{1cm}

\begin{tabular}{lr}
%Carla Livorno & \texttt{livornocarla@gmail.com}\\
%Daniel Grosso & \texttt{dgrosso@gmail.com}\\
Kevin Allekotte & \texttt{kevinalle@gmail.com}\\
Mariano Semelman & \texttt{noinflection@gmail.com}\\
Thomas Fischer & \texttt{puedovolar@gmail.com}\\
\end{tabular}

\vspace{2cm}

{\large Summer 2011}

\end{center}
\end{titlepage}
\protect\setcounter{tocdepth}{1}
\tableofcontents
%\newpage

	\begin{ucsection}{Introduction}
		During the course we were presented with different algorithms for peptide-MHC binding prediction. Our task is to compare the following methods:
		\begin{itemize}
		\item \texttt{PSSM}: Position-Specific Scoring Matrix\\
		
			A PSSM is a weight matrix that stores information about a
			given motif in a sequence, in this case that of a peptide
			(an amino acid sequence).
			
			For a given peptide, the matrix assigns a ``binding score''
			indicating how likely the peptide is to bind to the MHC
			receptor represented by the matrix.
			
			The overall score for a sequence is defined as a sum of
			independent position-specific scores, which loses valuable
			information: in peptides, neighbouring amino acids are
			hardly independent.
		
		\item \texttt{ANN}: Artificial Neural Network\\
		
			An ANN is a dynamic adaptive system that is constructed
			during a learning phase, in which we teach the network to
			respond to a certain motif using a data set with known
			outputs.
			
			In our case, amino acid sequences of equal length are given
			to the network, and during the learning phase we train it
			by comparing its output for each sequence to the expected
			one and adjusting the network accordingly with a
			backpropagation algorithm.
			
			Given a correctly trained network and a peptide as input,
			the network should be able to assign a ``binding score''
			to that peptide for the motif the network was trained on.
		
		\end{itemize}
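		The sum of independent position-specific scores described above can be sketched in a few lines of \texttt{Python}; the matrix values and the three-residue motif below are invented purely for illustration:

```python
# Score a peptide with a position-specific scoring matrix (PSSM).
# One {amino acid: score} mapping per peptide position; these values
# are made up for illustration, not taken from real binding data.
pssm = [
    {"A": 1.2, "L": -0.3, "K": 0.1},
    {"A": -0.5, "L": 0.9, "K": 0.4},
    {"A": 0.2, "L": 0.1, "K": -1.1},
]

def score(peptide, matrix):
    # The overall score is the sum of independent per-position scores,
    # so any coupling between neighbouring residues is ignored.
    return sum(matrix[i][aa] for i, aa in enumerate(peptide))

print(round(score("ALK", pssm), 2))  # 1.2 + 0.9 - 1.1 -> 1.0
```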

		We analyzed two variants of the PSSM algorithm:
		one constructed using a simple sequence-weighting algorithm,
		and another using a pseudocount correction method. We expect
		more reliable results from the latter.
		
		We also analyzed four variants of ANNs. Two networks take
		sparse-encoded sequences as input; the others use
		Blosum-encoded sequences, which makes the algorithm slower but
		hopefully more reliable.
		
		Each of these pairs of ANNs was constructed with one hidden
		layer, the difference being the number of neurons in that
		layer: we tested networks with 5 and 10 neurons for each pair.
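		Such a network can be sketched in pure \texttt{Python} as follows. This is only a toy illustration of the architecture we describe (one hidden layer of configurable size, trained by backpropagation on made-up data); the course program \texttt{nnbackprop} is certainly more elaborate:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class Net:
    """One-hidden-layer network; bias terms are modelled as extra
    weights whose input is fixed to 1.0."""
    def __init__(self, n_in, n_hidden):
        self.w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]

    def forward(self, x):
        self.xb = list(x) + [1.0]
        self.h = [sigmoid(sum(w * v for w, v in zip(row, self.xb)))
                  for row in self.w1]
        return sigmoid(sum(w * v for w, v in zip(self.w2, self.h + [1.0])))

    def backprop(self, x, target, lr=0.5):
        o = self.forward(x)
        d_o = (o - target) * o * (1 - o)  # squared error, sigmoid output
        for j, h in enumerate(self.h):
            d_h = d_o * self.w2[j] * h * (1 - h)
            for i, v in enumerate(self.xb):
                self.w1[j][i] -= lr * d_h * v
        for j, v in enumerate(self.h + [1.0]):
            self.w2[j] -= lr * d_o * v

# Train on a tiny made-up "binder"/"non-binder" data set.
random.seed(0)
net = Net(3, 5)  # 5 hidden neurons, as in one of our tested variants
data = [([1, 0, 0], 1.0), ([0, 1, 0], 0.0), ([0, 0, 1], 1.0)]
for _ in range(2000):
    for x, t in data:
        net.backprop(x, t)
```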
		
		In total we will therefore be comparing six different
		algorithms that solve the same problem.
		
		\medskip
		
		To test our results we will use experimental and predicted binding data (see Reference \ref{ref:data}) and compare the predictions.
	\end{ucsection}
	
	\begin{ucsection}{Materials and method}
		The first thing we had to do was convert all the input data to the format our algorithms need. We used a combination of \texttt{Python} and \texttt{Bash} scripting to strip out unnecessary information and generate the files as we need them.
		We also used the program completed in class that converts the input to sparse and Blosum encoding.
		
		As a result we get different test cases, each consisting of a training set and a testing set. We chose a random subset of all the test cases to run the algorithms on, so that the plot is clearer.
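		The preprocessing step can be sketched as follows; the whitespace-separated column layout and the half-and-half split here are assumptions for illustration, not the actual benchmark format:

```python
import random

# Keep only the peptide and its measured affinity, dropping any
# extra columns, then split into a training and a testing set.
raw = """ALAKAAAAM 0.85 extra-field
ALAKAAAAN 0.34 extra-field
GLAKAAAAL 0.12 extra-field
ALAKAAAAV 0.91 extra-field"""

pairs = []
for line in raw.splitlines():
    peptide, affinity = line.split()[:2]  # strip unneeded columns
    pairs.append((peptide, float(affinity)))

random.seed(0)
random.shuffle(pairs)         # random selection, as in our scripts
half = len(pairs) // 2
train, test = pairs[:half], pairs[half:]
print(len(train), len(test))  # -> 2 2
```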
		
		\medskip
		
		For the main algorithms we used the source code given in class and we modified some details where necessary.
		
		\begin{itemize}
		\item \textbf{PSSM}\\
			We first generate the scoring matrix from the training set using \texttt{pep2mat}. We then score our test set with \texttt{pep2score}.
		\item \textbf{PSSM with sequence weighting}\\
			It is almost the same as the case above, but we generate the scoring matrix with the \texttt{-sw} option, which creates the sequence-weighting matrix instead.
		\item \textbf{ANN with sparse encoding}\\
			We first need to convert the training and testing sets to sparse encoding, for which we use \texttt{seq2inp}. We then use \texttt{nnbackprop} to train our network on the training set and then test it with the testing set.
		\item \textbf{ANN with Blosum encoding}\\
			In this case, when we encode the data sets, we use Blosum encoding (option \texttt{-bl}).
		\end{itemize}
		
		In all cases we then calculated the predictive performance (in terms of the Pearson correlation) using the \texttt{xycorr} script given in class. We then used these scores to compare the methods.
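		The Pearson correlation itself is straightforward to write out; this is a plain re-implementation for illustration, not the \texttt{xycorr} script:

```python
import math

def pearson(xs, ys):
    # Covariance divided by the product of the standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly correlated prediction/measurement pair.
print(round(pearson([1, 2, 3], [2, 4, 6]), 3))  # -> 1.0
```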
		
		\begin{usubsection}{Compiling and running}
		
		To compile, there is a Makefile in each source folder. Depending on whether you have a 32- or 64-bit operating system, you may need to edit the Makefiles.
		
		Then copy the resulting binaries to the \texttt{bin} directory and add that directory to your \texttt{PATH} environment variable.
		
		There are several scripts that run the tests. To process the benchmark data, run \texttt{convert\_all}; the script that chooses a random subset is named \texttt{choose\_random.py}.
		
		After that there are two tests you may want to run: \texttt{pssm\_vs\_ann}, which reports the Pearson correlation for each test and each method, and \texttt{pssm\_vs\_ann\_time}, which measures running times using the Unix tool \texttt{time}. To run them, pass the directory containing the tests as an argument.
		\end{usubsection}
	\end{ucsection}
	
	\newpage
	\begin{ucsection}{Results}
	
		Below are some graphics describing various aspects of the
		performance of our different algorithms on the given test
		data. \\
		
		First we present a measure of the predictive performance in
		terms of the Pearson correlation between our output and the
		expected one given by the test data sets.
	
		\begin{figure}[ht!]
			\grasize{results.pdf}{16cm}
			\caption{The graphic shows the Pearson correlation achieved by our different MHC-binding prediction algorithms on various datasets.}
			\label{fig:results}
		\end{figure}
		
		In Figure \ref{fig:results} we plotted our results. Each color is one of the methods, and the plotted value is the predictive performance, in terms of the Pearson correlation, for the corresponding test set.
		
		The more positive the value, the better the algorithm's prediction.
		
		\noindent\begin{tabular}{|r|l|l|l|l|l|l|}
		\hline
								& PSSM	&	PSSM SW	&	ANN		&	ANN BL	&	ANN 5H	&	ANN BL 5H\\
		\hline
		\textbf{Average}	& 0.1551	&	0.1607	&	0.0496	&	0.0564	&	0.0460	&	0.0501\\
		\textbf{Std dev}	& 0.1758	&	0.1883	&	0.2033	&	0.2213	&	0.1890	&	0.2071\\
		\hline
		\end{tabular}
		
		In this table we present the average Pearson correlation and standard deviation for each of the methods. We see that the PSSMs perform better, especially the PSSM with sequence weighting.
		
		
		

		
		\noindent\begin{tabular}{|r|l|l|l|l|}
		\hline
								&	PSSM		&	PSSM SW	&	ANN			&	ANN BL\\
		\hline
		\textbf{Avg Time}	&	0.032s	&	0.026s	&	12.096s	&	11.359s\\
		\hline
		\end{tabular}
		

		This table shows the average time each algorithm takes to solve the problem. Note that for the ANN methods the training time is included.
	\end{ucsection}
	
	\begin{ucsection}{Discussion}
		After observing the results we wondered whether we had made a mistake, because none of the methods performed very well in general.
		
		Nevertheless, one can observe that the PSSMs perform much better. We would expect the PSSM approach, which only uses the position-specific probability of occurrence of each amino acid, to be less powerful than a well-trained neural network; the opposite result may have occurred because we did not train the neural networks correctly, which was indeed a difficult task. It is clear from the results, however, that training PSSMs is much faster than training neural networks.
		
		It is also observable that the ANN with Blosum encoding generally performs better than the one with sparse encoding, though not always.
	\end{ucsection}
	
	
	\begin{ucsection}{References}
		\begin{enumerate}
		\item \label{ref:home} \texttt{\url{http://www.cbs.dtu.dk/courses/BAcourse/}}\\
			Course Homepage
		\item \label{ref:data} \texttt{\url{http://mhcbindingpredictions.immuneepitope.org/dataset.html}}\\
			Experimental and predicted binding data we use to test our algorithms.
		\end{enumerate}
	\end{ucsection}

\end{document}
