% Please use the skeleton file you have received in the 
% invitation-to-submit email, where your data are already
% filled in. Otherwise please make sure you insert your 
% data according to the instructions in PoSauthmanual.pdf
\documentclass{PoS}
\usepackage{listings}

\title{The BioVeL Project: Robust phylogenetic workflows running on the GRID}

\ShortTitle{The BioVeL Project: Robust phylogenetic workflows running on the GRID}

\author{\speaker{Giacinto DONVITO}\\
       % \thanks{A footnote may follow.}\\
       INFN-Bari\\
       E-mail: \email{giacinto.donvito@ba.infn.it}}
\author{Saverio VICARIO\\
       CNR-ITB\\
       E-mail: \email{saverio.vicario@gmail.com}}
\author{Pasquale NOTARANGELO\\
       INFN-Bari\\
       E-mail: \email{pasquale.notarangelo@ba.infn.it}}
       
\abstract{

\textbf{Overview}
Altered species distributions, the changing nature of ecosystems and increased risks of extinction all have an impact in important areas of societal concern. Biologists and environmental scientists are asked to provide decision support for managing the biodiversity component of our environment at multiple scales (genomic, organism, habitat, ecosystem, landscape) to prevent and mitigate such losses.
BioVeL aims to address these needs by offering a series of robust and reliable web services that can be managed with the suite of tools of the myGRID project. The project will propose workflows for these services that ensure best practice and efficiency of usage.
The first round of services produced by the project includes phylogenetic inference workflows.
These workflows give the end user the capability to execute applications that can easily exploit several kinds of resources, such as the EGI grid infrastructure, local batch farms or dedicated servers.
}

\FullConference{EGI Community Forum 2012 / EMI Second Technical Conference,\\
		26-30 March, 2012\\
		Munich, Germany}


\begin{document}

\section{Introduction to the BioVeL project}

The project addresses e-solutions for the management of biodiversity in the 21st century. Scientists are pressed to provide convincing evidence of changes to contemporary biodiversity, to identify the factors causing the decline in biodiversity, to predict their impact, and to suggest ways of combating biodiversity loss. Altered species distributions, the changing nature of ecosystems and increased risks of extinction all have an impact in important areas of societal concern. Biologists and environmental scientists are asked to provide decision support for managing the biodiversity component of our environment at multiple scales (genomic, organism, habitat, ecosystem, landscape) to prevent and mitigate such losses. Generating such evidence and providing decision support rely, increasingly, on large collections of data held in digital formats and on the application of substantial computational capability and capacity to analyse, model and simulate in association with such data.
The aim of the BioVeL~\cite{ref:biovel} project is to provide a seamlessly connected environment that makes it easier for biodiversity scientists to carry out in-silico analysis of relevant biodiversity data and to pursue in-silico experimentation based on composing and executing sequences of complex digital data manipulations and modelling tasks. In BioVeL, scientists and technologists will work together to meet the needs and demands of in-silico or \emph{e-Science} and to create a production-quality infrastructure that enables the pipelining of data and analysis into efficient integrated workflows. Workflows represent a way of speeding up scientific advance when that advance is based on the manipulation of digital data (Gil, Deelman, Ellisman, Fahringer \& Fox, 2007).


\section{Biological goal of the Phylogenetic Services}
Phylogenetic inference is at the base of the archiving system of biodiversity, namely Systematics. All classification levels of systematics except species are based exclusively on phylogenetic inference, and in recent times mainly on molecular phylogenetic inference. The term phylogenetic inference encompasses all the procedures necessary to reconstruct the historical relationships among organisms and their features. Such reconstructions deal both with the topological shape of the relationship and with the quantification of the divergence among organisms. As such, a phylogenetic reconstruction is an exhaustive summary of the evolution that produced the diversity of the organisms under study and is therefore a basic tool to interpret (i.e. qualify and quantify) biodiversity. Phylogenetic methods are used to reconstruct the biogeographic history of species and populations, annotate genes, reconstruct the evolution of species-specific features, or estimate divergence events.
However, phylogenetic approaches are still not fully integrated in biodiversity research, because of difficulties in handling large data sets, complex data and analyses, and old standards. Phylogenetic inference, like all parametric statistical inference, is a delicate procedure that requires good practices to be followed. In fact, phylogenetic inference is sensitive to model misspecification and can even produce erroneous results with the correct model (see the "long branch attraction" literature). For approaches that use phylogenetic information from previous analyses, it is often difficult to use the phylogenetic output of one program as input for a further analysis, because phylogenetic uncertainty may not be correctly reported. Furthermore, phylogenetic inference depends on the call of homology performed on the raw data, which for molecular data is framed as the sequence alignment problem. In summary, the basic procedure of sample selection, alignment, phylogenetic inference and downstream analysis needs to be complemented with a quality-control step that checks the stability and biological consistency of the alignment, a step that controls the convergence of the phylogenetic inference to a unique solution, and a post-hoc validation of the model or models used.
Special attention should be paid to the capability to deal with large data sets. Especially since the emergence of high-throughput sequencing methods, access to computing power is one of the central problems in phylogenetics. In fact, since phylogenetic inference is an NP-complete problem and the number of possible tree topologies grows factorially with the number of sequences, only the enumeration of very large numbers of possible solutions guarantees an exact solution. Nowadays, different approximate (i.e. heuristic) solutions are available for large data sets, based on maximum likelihood (e.g. RAxML, Garli), Bayesian Markov Chain Monte Carlo (e.g. MrBayes, BEAST) or Approximate Bayesian Computation (e.g. ABCtoolbox, SPLATCHE). Still, all these approaches require a large number of evaluations of the likelihood of different phylogenetic hypotheses (of the order of $10^5$) or of coalescent histories (of the order of $10^7$) for an analysis of a few hundred sequences. Apart from the complexities residing in the tree shape itself, phylogenetic inference needs to deal with the complexity of the model of the characters that evolve on the tree. At deep-to-medium divergence, a correct representation of the data needs complex substitution models; phylogenomics data sets, for example, need partitioned models with potentially different substitution matrices (i.e. single base, doublet, amino acid, codon), although with the simplifying assumption that demographic fluctuations are erased by coalescence events. Phylogeographic models, on the contrary, concentrate their complexity on the interaction between demography and tree shape (topology and branch lengths).

Moreover, a related problem is the complexity of the initial data mining procedures. To reconstruct the phylogeny of each gene family, a complex substitution model needs to be implemented. Today we obtain millions of sequences as raw data, but we have few tools to overview and process (i.e. data mine) high-throughput data, such as Next Generation Sequencing data.
In light of these challenges, the phylogenetic service set of BioVeL aims to produce four types of workflows: a molecular phylogenetic inference set, a phylogenetic diversity set, a phylogenetic molecular evolution set and a phylogenetic character mapping set. The work presented here is the first instance of a workflow of the first type, which deals with producing a phylogenetic inference from a set of unaligned protein-coding nucleotide sequences. First, the workflow finds the most likely protein domain common to all the nucleotide sequences, trying all coding frames and all user-selected genetic codes, using the Hidden Markov Model implementation of HMMer 3.0 and the PFAM database. Then it performs the alignment based on that profile, filtering out sites with a low fit to the protein profile.
The file is then sent to a phylogenetic inference performed with the program MrBayes. To do so, the file is formatted in Nexus format, and a user-specified model of evolution, together with the details of the Markovian integration, is attached to the file. The MrBayes service automatically sets up the best MPI configuration for the user's requests on the Markovian integration, and the job is sent to a number of CPUs that is optimal in terms of efficiency of use.

\section{Technical description of the frontend implementation}
Before describing the architecture of the services and the application, it is necessary to mention the technologies used:
\begin{itemize}
\item{Java Language}
\item{Framework: Java EE 6.0 SDK}
\item{Framework: Jersey JAX-RS}
\item{Web server: Apache Tomcat}
\item{DBMS: MySQL 5}
\end{itemize}

As shown in Figure~\ref{Fig:fig1}, the system is made up of three macro-elements:

\begin{itemize}
\item{Database: job storage and configuration parameters}
\item{FrontEnd: REST services accessible from the outside}
\item{BackEnd: application daemon that runs in the background for job submission and scheduling}
\end{itemize}

\begin{figure}
\centering
\includegraphics[width=\textwidth]{inter}
\caption{Interaction between elements}
\label{Fig:fig1}
\end{figure}

\textbf{Architecture of the application}
Using the MVC (Model-View-Controller) design pattern, three distinct layers were obtained:
\begin{itemize}
\item{View}
\item{Business logic}
\item{Access to data}
\end{itemize}

Therefore, a series of libraries were written, opportunely referenced and integrated with each other. The main packages written are:
\begin{itemize}
\item{Service: library, in the FrontEnd package, containing the methods made available by the service}
\item{XmlEntityLayer: dll to serialize/deserialize the XML response format of the service (mapping of the responses to XML)}
\item{EntityLayer: dll file that maps the database tables}
\item{BusinessLayer: dll file containing the business logic for the management of the db and the methods of the service}
\item{DataLayer: dll containing classes and methods for accessing data in the db}
\item{Core: dll containing various utilities and functionality}
\end{itemize}

\begin{figure}
\centering
\includegraphics[width=\textwidth]{Archi}
\caption{DLL Schema}
\label{Fig:fig2}
\end{figure}

As shown in Figure~\ref{Fig:fig2}, the FrontEnd, the BackEnd and the DataBase share some of the dlls written, such as Core, EntityLayer, DataLayer, BusinessLayer, etc.

\textbf{Database}
The MySQL RDBMS database has two tables in particular:
\begin{itemize}
\item{TaskList: contains the jobs performed or to be performed}
\item{ConfigTask: contains the configuration parameters related to the jobs in the TaskList table}
\end{itemize}
\begin{table}
\centering
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|c|}{TaskList} \\
\hline
ID  & INTEGER \\
NAME  & CHAR \\
PRIOR & INTEGER \\
STATUS & CHAR \\
PROVENANCE &  CHAR \\ 
LINK &  CHAR \\ 
DATE\_NOW &  DATE \\ 
FAILURE & CHAR \\
arguments &  LONG\_TEXT \\ 
COMMENT & LONG\_TEXT \\ 
JOB\_ID & CHAR \\
MAIL & CHAR \\ 
OUTPUT & CHAR \\
OUTPUT\_TYPE & CHAR \\ 
FLAG & CHAR \\ 
\hline
\end{tabular}
\end{table}

\begin{table}
\centering
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|c|}{ConfigTask} \\
\hline
NAME  & CHAR \\
LINK &  CHAR \\ 
DATE\_NOW &  DATE \\ 
CPUs & CHAR \\
COMMENT & LONG\_TEXT \\ 
STATUS & CHAR \\
\hline
\end{tabular}
\end{table}

\textbf{FRONT-END}
The application called "FrontEnd" was written in Java using the Jersey components, in particular the JAX-RS framework and the Java 6 SDK.
The FrontEnd is a RESTful web service: a service built to work best on the web.
Representational State Transfer (REST) is an architectural style that specifies constraints, such as the uniform interface, that, if applied to a web service, induce desirable properties, such as performance, scalability, and modifiability, that enable services to work best on the Web. In the REST architectural style, data and functionality are considered resources, and these resources are accessed using Uniform Resource Identifiers (URIs), typically links on the web. The resources are acted upon by using a set of simple, well-defined operations. The REST architectural style constrains an architecture to a client-server architecture, and is designed to use a stateless communication protocol, typically HTTP. In the REST architectural style, clients and servers exchange representations of resources using a standardized interface and protocol. These principles encourage RESTful applications to be simple, lightweight, and high-performing. One of the advantages of REST is also the independence from the clients, ensuring high portability and scalability.

Authentication is done against a temporary user database, requiring only a "username" and a "password", but it will be possible in the future to switch to a new authentication mechanism without changing the interface exposed to the users.
The configuration file also specifies whether authentication is required. The methods currently present in the FrontEnd service, and therefore callable, are the following:

\emph{InsertJob}: this method allows the user to create a single task in the database.
\begin{lstlisting}[breaklines=true]
http://localhost:8080/INFN.Grid.FrontEnd/services/QueryJob/InsertJob?USERNAME={usernameValue}&PASSWORD={passwordValue}&NAME={nameValue}&arguments={argumentsValue}	\end{lstlisting}
This method returns an XML document of type XmlInsertJobs, with information on the job added to the TaskList table.
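To make the call above concrete, here is a minimal client-side sketch, using only JDK classes, that builds the InsertJob request URL with percent-encoded parameter values; the class name and the sample values are illustrative, not part of the service.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class InsertJobUrlBuilder {
    // Base URL of the FrontEnd service, as in the examples in the text.
    static final String BASE =
        "http://localhost:8080/INFN.Grid.FrontEnd/services/QueryJob/InsertJob";

    // Builds the InsertJob request URL, percent-encoding each parameter value
    // so that spaces and special characters survive the HTTP query string.
    public static String build(String user, String pass, String name, String args)
            throws UnsupportedEncodingException {
        return BASE
            + "?USERNAME=" + URLEncoder.encode(user, "UTF-8")
            + "&PASSWORD=" + URLEncoder.encode(pass, "UTF-8")
            + "&NAME=" + URLEncoder.encode(name, "UTF-8")
            + "&arguments=" + URLEncoder.encode(args, "UTF-8");
    }

    public static void main(String[] a) throws Exception {
        System.out.println(build("alice", "secret", "mrbayes", "run1.nex ArgOne"));
    }
}
```

The same pattern applies to the other three methods, since they differ only in the method name and query parameters.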

\emph{InsertJobs}: this method allows the user to create multiple tasks in the database; each task is identified by a different argument. Arguments are separated by ";".
\begin{lstlisting}[breaklines=true]
http://localhost:8080/INFN.Grid.FrontEnd/services/QueryJob/InsertJobs?USERNAME={usernameValue}&PASSWORD={passwordValue}&NAME={nameValue}&arguments={http://webtest.ba.infn.it/vicario/FinalFusariumDB_2.nex ArgOne; http://webtest.ba.infn.it/vicario/FinalFusariumDB_1.nex ArgTwo;}
\end{lstlisting}
This method returns an XML document of type XmlInsertJobs, with information on the jobs added to the TaskList table.

\emph{SelectJob}: this method allows the user to get the status of a given task in the database, identified by the "JobID" value.
\begin{lstlisting}[breaklines=true]
http://localhost:8080/INFN.Grid.FrontEnd/services/QueryJob/SelectJob?USERNAME={usernameValue}&PASSWORD={passwordValue}&IdJob={IdJobValue}
\end{lstlisting}
This method returns an XML document of type XmlSelectJobs, with information on the jobs found in the TaskList table.


\emph{SelectJobs}: this method allows the user to get the status of a list of tasks in the database, identified by the "FLAG" value.
\begin{lstlisting}[breaklines=true]
http://localhost:8080/INFN.Grid.FrontEnd/services/QueryJob/SelectJobs?USERNAME={usernameValue}&PASSWORD={passwordValue}&FLAG={FlagValue}
\end{lstlisting}
This method returns an XML document of type XmlSelectJobs, with information on the jobs found in the TaskList table.

Here is an example of the structure of the XML that the FRONT-END service returns to the user:

\begin{lstlisting}[breaklines=true]

XmlSelectJobs:
	<Jobs>
		<Job>
			<Arguments>Arg4</Arguments>
			<Comment>local</Comment>
			<CPUs>1</CPUs>
			<Flag>001</Flag>
			<Id>4</Id>
			<LastCheck>2012-02-14 10:54:58.0</LastCheck>
			<Name>blast4</Name>
			<Output/>
			<Provenance/>
			<Status>done</Status>
		</Job>
	</Jobs>
	
	XmlInsertJobs:
	<Job>
		<Name>NOME</Name>
		<Flag>001</Flag>
  		<JobsID> 
  			<JobId>12</JobId>
  			<JobId>13</JobId>
  		</JobsID>
  	</Job>
\end{lstlisting}
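A client consuming these responses has to pull individual fields out of the XML. The following is a minimal sketch using the JDK DOM parser; the helper name is hypothetical, and only the element names shown in the listing above are assumed.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class SelectJobsParser {
    // Extracts the <Status> of the first <Job> from an XmlSelectJobs response,
    // or returns null if no <Status> element is present.
    public static String firstJobStatus(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList statuses = doc.getElementsByTagName("Status");
        return statuses.getLength() > 0 ? statuses.item(0).getTextContent() : null;
    }

    public static void main(String[] a) throws Exception {
        String xml = "<Jobs><Job><Id>4</Id><Status>done</Status></Job></Jobs>";
        System.out.println(firstJobStatus(xml)); // prints "done"
    }
}
```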

\textbf{BACK-END}

The application called "BackEnd", written in Java, is a daemon constantly running.
Based on a multithreaded strategy, it interfaces with the DataBase to retrieve the jobs to be executed on the Grid, a local farm or dedicated servers.

The TaskList table is updated once the job has been submitted.

The BackEnd was built on the MVC architecture, and some dll files are shared with the FrontEnd (see Figure~\ref{Fig:fig2}).

The main packages that BackEnd uses are:
\begin{itemize}
\item{EntityLayer: dll file that maps the database tables}
\item{BusinessLayer: dll file containing the business logic, primarily for the management of the db and the methods of the job submission}
\item{DataLayer: dll containing classes and methods for accessing data in the db}
\item{MultiThread: dll for the management and scheduling of the threads that execute the jobs}
\item{Core: dll containing various utilities and functionality}
\end{itemize}

\section{Description of the Job Submission Tool and the job management}

In order to submit jobs to the grid and local computing infrastructures, we used a Job Submission Tool (JST) that was developed years ago to allow the submission of large numbers of jobs and to keep track of all of them in an unattended way.
After the interaction with the frontend web service, all the atomic tasks are inserted into a monitoring DB (MDB) server that controls the assignment of tasks to jobs and monitors the job to which each specific task has been assigned (Figure~\ref{Fig:fig3}).
For the proper monitoring of the task assignment and job termination, several parameters are associated to each task:
\begin{itemize}
\item{The status of the task: it can be "Free" (not assigned), "Running" (assigned) or "Done" (job terminated). If the status is "Free", the task can be assigned to a job. Also, if the status has been "Running" for more than a fixed time interval, meaning that the job has probably failed, the task can be reassigned to a new job. If the status is "Done", or "Running" for less than the fixed time interval, the task is ignored during the task assignment process.}
\item{The dependencies of each task: it is possible to flag the tasks that need the execution of a different task before their own execution (dependency); only tasks with no dependencies, or whose dependencies are all in the "Done" status, can be executed.}
\item{Priority. It is possible to assign an arbitrary priority to each task. Priority is used to select the task that has to be executed first.}
\item{Job provenance. It is possible to know which job has actually performed which task.}
\item{The task description: it can be a string or a link to a specific script to execute; in this way it is possible to better control what is being executed on the WN. As a matter of fact, it is possible to change the execution (e.g. if a bug is discovered, or there is a need for a new optimization) even after the submission of the jobs.}
\item{Number of failures. When a task fails, the system logs the event in order to avoid the resubmission of always failing tasks. It is possible to set a maximum number of resubmissions.}
\item{Date and time of execution. }
\item{The username of the user submitting the task, together with his/her mail address}
\item{The information on the infrastructure where the task should be executed (grid, local farm, interactive server)}
\end{itemize}
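The assignment rules above can be sketched in plain Java over an in-memory task list; the field names, the staleness timeout and the failure cap are illustrative stand-ins for the actual JST schema and configuration, not the real implementation.

```java
import java.util.*;

public class TaskScheduler {
    static final long STALE_MS = 3_600_000; // reassignment timeout (illustrative: 1 hour)
    static final int MAX_FAILURES = 3;      // illustrative resubmission cap

    static class Task {
        int id, priority, failures;
        String status;              // "Free", "Running" or "Done"
        long lastCheck;             // last status update, epoch millis
        List<Integer> deps = new ArrayList<>();
        Task(int id, String status, int priority) {
            this.id = id; this.status = status; this.priority = priority;
        }
    }

    // A task may be (re)assigned if it is "Free", or if it has been "Running"
    // longer than STALE_MS (the job probably died on the WN), and it has not
    // exceeded the maximum number of failures.
    static boolean assignable(Task t, long now) {
        if (t.failures >= MAX_FAILURES) return false;
        if ("Free".equals(t.status)) return true;
        return "Running".equals(t.status) && now - t.lastCheck > STALE_MS;
    }

    // All the tasks a task depends on must be "Done" before it can run.
    static boolean depsDone(Task t, Map<Integer, Task> all) {
        for (int d : t.deps)
            if (!"Done".equals(all.get(d).status)) return false;
        return true;
    }

    // Returns the highest-priority assignable task, or null if none exists.
    static Task next(Collection<Task> tasks, Map<Integer, Task> all, long now) {
        Task best = null;
        for (Task t : tasks)
            if (assignable(t, now) && depsDone(t, all)
                    && (best == null || t.priority > best.priority))
                best = t;
        return best;
    }

    public static void main(String[] a) {
        long now = System.currentTimeMillis();
        Task t1 = new Task(1, "Done", 5);
        Task t2 = new Task(2, "Free", 1);
        Task t3 = new Task(3, "Free", 9);
        t3.deps.add(1);              // t3 depends on t1, which is already Done
        Map<Integer, Task> all = new HashMap<>();
        for (Task t : List.of(t1, t2, t3)) all.put(t.id, t);
        System.out.println(next(all.values(), all, now).id); // prints 3
    }
}
```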

In the submission phase all the jobs are identical: as a matter of fact, when a job is submitted it does not know which task(s) it has to execute. A background daemon is used to submit, at a given tunable rate, always the same JDL to the Grid. Several daemons are used in parallel, each one pointing at its own gLite WMS, so that the failure of a single WMS cannot stop the submission procedure. The daemons automatically stop the job submission when no more unassigned tasks are found in the MDB.
The jobs submitted to the grid are enclosed in a job wrapper. The job wrapper and the script that contains the executable can send to the MDB server any kind of information that can be used to control which jobs are running, which task they are performing and which operation they are performing on the WN. This DB server can be separated from the one hosting the task list, the MDB, if the scalability of the DB itself becomes an issue.
The schema of the MDB allows the user to send any kind of information such as: 
\begin{itemize}
\item{The date and time on the WN that sends the monitoring information}
\item{The host that is sending the information}
\item{The name of the variable to be monitored}
\item{The value of the variable}
\item{The JOBID of the job that is sending the information}
\item{The date (and time), on the server, when the information is stored}
\end{itemize}

This general schema leaves the user the possibility to monitor any job operation by adding new variables as needed, without changing the DB schema. If the job wrapper is configured to regularly send monitoring information at fixed time intervals, this feature also provides the information on how many jobs are running at a given time.
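As a small sketch of this open key/value schema, one monitoring row can be assembled as a plain map before being sent to the MDB; the field names here are illustrative, not the actual MDB column names.

```java
import java.time.LocalDateTime;
import java.util.LinkedHashMap;
import java.util.Map;

public class MonitoringRecord {
    // Builds one key/value monitoring row: any variable name can be reported
    // without changing the DB schema, which is the point of the design above.
    public static Map<String, String> row(String host, String jobId,
                                          String variable, String value) {
        Map<String, String> r = new LinkedHashMap<>();
        r.put("wn_time", LocalDateTime.now().toString()); // time on the WN
        r.put("host", host);                              // host sending the info
        r.put("variable", variable);                      // monitored variable name
        r.put("value", value);                            // its current value
        r.put("job_id", jobId);                           // JOBID of the sender
        return r;
    }

    public static void main(String[] a) {
        System.out.println(row("wn042.example.org", "JOB-17", "heartbeat", "alive"));
    }
}
```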

Only when a job lands and starts running on a WN does it request from the central MDB the assignment of a task for its execution. Information on the execution of each task is logged in the central MDB according to the parameters mentioned above. Only if all steps are correctly executed is the status of that particular task updated to "Done". In this way the MDB provides a complete monitoring of task assignment and job execution, and no manual intervention is required to follow each step and to manage the eventual resubmission of failed tasks.
Usually we see two kinds of failed jobs: the first kind includes jobs that were killed by the queue manager, while the second is related to internal problems of the job (some operation failed). In the first case the wrapper cannot update the MDB, while in the second case the wrapper script updates the MDB, increasing the number in the "FAILURE" counter.
In order to deal with the first kind of failure, tasks found in the "RUNNING" state for more than a fixed amount of time are considered failed and automatically reassigned to a new job.
To optimize the input and output operations and to avoid bottlenecks and failures, two procedures were set up to randomly choose the SE used as the source of the job input file, as well as the SE where the job output is stored. In this way, if one SE fails temporarily, the task can keep running and store its output.

\begin{figure}
\centering
\includegraphics[width=\textwidth]{jstschema}
\caption{Logical Schema for JST components}
\label{Fig:fig3}
\end{figure}


\section{Interaction between REST service and Taverna Workflow Manager}
The interaction between the Taverna~\cite{ref:taverna} workflow manager and the frontend of the submission service is possible using the Taverna plugin that is able to exploit REST web services. In order to use it, one only needs to provide the URL where the service is located and the parameters needed to submit and check the status of the jobs.
The parameters needed are:
\begin{itemize}
\item{The name of the application}
\item{The arguments}
\end{itemize}

The service, as already described in the previous section, can be used to run each specific task of a very complex workflow.
Indeed, Taverna can manage in the same workflow services provided within this framework together with services already available in other infrastructures or ad-hoc services developed within Taverna itself.

This provides extreme flexibility in building powerful workflows, as it gives the user the possibility to run over a grid infrastructure only the CPU-intensive part of the work, while the lightweight part can rely on any kind of infrastructure.

The main problem in the interaction between this framework and the Taverna workflow manager is that, while the latter is devoted to interacting with synchronous services, the framework that we built is basically asynchronous.
In order to solve this issue, it is necessary to build a loop in the Taverna workflow manager that allows the end user to wait for the execution of the tasks and to check the status of the submitted tasks until they are done.
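This wait loop can be sketched as follows; the status supplier is a hypothetical stand-in for the SelectJob REST call made from within the Taverna workflow, and the attempt budget and sleep interval are illustrative.

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

public class StatusPoller {
    // Polls a status supplier until the task reports "done" or the attempt
    // budget runs out, mirroring the loop built inside the Taverna workflow.
    public static String waitUntilDone(Supplier<String> selectJob,
                                       int maxAttempts, long sleepMs)
            throws InterruptedException {
        String status = null;
        for (int i = 0; i < maxAttempts; i++) {
            status = selectJob.get();    // one SelectJob call per iteration
            if ("done".equals(status)) return status;
            Thread.sleep(sleepMs);       // back off before the next check
        }
        return status;                   // last observed status on give-up
    }

    public static void main(String[] a) throws InterruptedException {
        // Stand-in for the REST call: the first two checks see a running job.
        Iterator<String> fake = List.of("running", "running", "done").iterator();
        System.out.println(waitUntilDone(fake::next, 10, 1)); // prints "done"
    }
}
```

In the real workflow the sleep interval should be large enough not to overload the FrontEnd, since each iteration is a full HTTP request.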

An example of this workflow is shown in Figure~\ref{Fig:tav}. As soon as we include more applications in our computing infrastructure, we will publish those example web services in standard workflow catalogues like MyExperiment~\cite{ref:myexp}.

\begin{figure}
\centering
\includegraphics{tavernaws}
\caption{An example of a Taverna workflow that could exploit the Grid Job Submission Framework described}
\label{Fig:tav}
\end{figure}

\section{Test and results}
We have already tested this solution in order to prove its scalability.

\section{Conclusion and future work}
The aim of this work is to provide a seamless interface to powerful computing resources, such as Grid distributed infrastructures based on the EMI middleware or big batch farms, to end users who are accustomed to carrying out their analyses using only simple and high-level tools like workflow managers.
The key idea is to provide stateless web service interfaces based on REST technologies, in order to make transparent to the end users all the complexity related to the job submission.
In this way the researcher can exploit the grid infrastructures as he/she does with any available web service.
In the coming months we will continue working on the possibility of uploading and downloading files using the Taverna plugins; in this way the data management will also be easily handled within the workflow manager itself, instead of using external tools.


\begin{thebibliography}{99}

\bibitem{ref:biovel}
\verb"http://www.biovel.eu"

\bibitem{ref:taverna}
\verb"http://www.taverna.org.uk"

\bibitem{ref:egi}
\verb"http://www.egi.eu"

\bibitem{ref:myexp}
\verb"http://www.myexperiment.org"


\end{thebibliography}

\end{document}


