% Please use the skeleton file you have received in the 
% invitation-to-submit email, where your data are already
% filled in. Otherwise please make sure you insert your 
% data according to the instructions in PoSauthmanual.pdf
\documentclass{PoS}
\usepackage{listings}
\usepackage[english]{babel}
\usepackage[babel]{csquotes}
\usepackage[style=authoryear-comp]{biblatex}
\bibliography{saverio}

\title{The BioVeL Project: Robust phylogenetic workflows running on the GRID}

\ShortTitle{The BioVeL Project: Robust phylogenetic workflows running on the GRID}

\author{\speaker{Giacinto DONVITO}\\
       % \thanks{A footnote may follow.}\\
       INFN-Bari\\
       E-mail: \email{giacinto.donvito@ba.infn.it}}
\author{Saverio VICARIO\\
       CNR-ITB\\
       E-mail: \email{saverio.vicario@gmail.com}}
\author{Pasquale NOTARANGELO\\
       INFN-Bari\\
       E-mail: \email{pasquale.notarangelo@ba.infn.it}}
       
\abstract{

\textbf{Overview}
Altered species distributions, the changing nature of ecosystems and increased risks of extinction all have an impact on important areas of societal concern. Biologists and environmental scientists are asked to provide decision support for managing the biodiversity component of our environment at multiple scales (genomic, organism, habitat, ecosystem, landscape) to prevent and mitigate such losses.
BioVeL aims to address these needs by offering a series of robust and reliable web services that can be managed with the suite of tools of the myGrid project. The project will propose workflows for these services that ensure best practice and efficiency of usage.
The first round of services produced by the project includes phylogenetic inference workflows.
These workflows give end users the ability to execute applications that can easily exploit several kinds of resources, such as the EGI grid infrastructure, local batch farms or dedicated servers.
}

\FullConference{EGI Community Forum 2012 / EMI Second Technical Conference,\\
		26-30 March, 2012\\
		Munich, Germany}


\begin{document}

\section{Introduction to the BioVeL project: E-solutions for the management of biodiversity in the 21st century}

Scientists are pressed to provide convincing evidence of changes to contemporary biodiversity, to identify factors causing decline in biodiversity, to predict the impact, and to suggest ways of combating biodiversity loss. Altered species distributions, the changing nature of ecosystems and increased risks of extinction all have an impact on important areas of societal concern. Biologists and environmental scientists are asked to provide decision support for managing the biodiversity component of our environment at multiple scales (genomic, organism, habitat, ecosystem, landscape) to prevent and mitigate such losses. Generating such evidence and providing decision support relies, increasingly, on large collections of data held in digital formats and on the application of substantial computational capability and capacity to analyse, model and simulate in association with such data.
The aim of the BioVeL (\cite{BioVeL}) project is to provide a seamlessly connected environment that makes it easier for biodiversity scientists to carry out \textit{in silico} analysis of relevant biodiversity data and to pursue \textit{in silico} experimentation based on composing and executing sequences of complex digital data manipulations and modelling tasks. In BioVeL, scientists and technologists work together to meet the needs and demands of \textit{in silico} or \emph{e-Science} and to create a production-quality infrastructure that enables the pipelining of data and analysis into efficient integrated workflows. Workflows represent a way of speeding up scientific advance when that advance is based on the manipulation of digital data (\cite{Gil2007}).


\section{Biological goal of the Phylogenetic Services}
Phylogenetic inference is at the base of the archiving system of biodiversity, namely Systematics. All classification levels of systematics except species are based exclusively on phylogenetic inference, and in recent times mainly on molecular phylogenetic inference. The term phylogenetic inference encompasses all the procedures necessary to reconstruct the historical relationships among organisms, or among portions of their hereditary information such as genes. Such reconstructions deal both with the topological shape of the relationships and with the quantification of the divergence among organisms. As such, the phylogenetic reconstruction is an exhaustive summary of the evolution that produced the diversity of the organisms or genes under study, and for this reason it is a basic tool to interpret (i.e. qualify and quantify) biodiversity. Phylogenetic methods are used to reconstruct the biogeographic history of species and populations, to annotate genes, and to reconstruct the mode and timing of the evolution of species- or gene-specific features.
However, phylogenetic approaches are still not fully integrated in biodiversity research, because of difficulties in handling large data sets, complex data and analyses, and old standards. Phylogenetic inference, like all parametric statistical inference, is a delicate procedure that requires good practices to be followed. In fact, phylogenetic inference is sensitive to model misspecification and can even produce erroneous results with the correct model (see the "long branch attraction" literature). For approaches that use phylogenetic information from previous analyses, it is often difficult to use the phylogenetic output of one program as input for a further analysis, because phylogenetic uncertainty may not be correctly propagated. Furthermore, phylogenetic inference depends on the homology calls performed on the raw data, which for molecular data take the form of the sequence alignment problem. In summary, the basic procedure of sample selection, alignment, phylogenetic inference and downstream analysis needs to be complemented with a quality-control step that checks the stability and biological consistency of the alignment, a step that controls the convergence of the phylogenetic inference to a unique solution, and a \textit{post hoc} validation of the model or models used.
Special attention should be paid to the capability to deal with large data sets. Especially since the emergence of high-throughput sequencing methods, access to computing power has been one of the central problems in phylogenetics. In fact, since phylogenetic inference is an NP-complete problem and the number of possible tree topologies grows factorially with the number of sequences, only the enumeration of a very large number of possible solutions guarantees an exact solution. Nowadays, different approximate (i.e. heuristic) solutions are available for large data sets, based on the maximum likelihood approach (e.g. RAxML, GARLI), Bayesian Markov Chain Monte Carlo (e.g. MrBayes, BEAST) or Approximate Bayesian Computation (e.g. ABCtoolbox, SPLATCHE). Still, all these approaches require a large number of computations of the likelihood of different phylogenetic hypotheses (of the order of $10^5$) or coalescent histories (of the order of $10^7$) for an analysis of a few hundred sequences. Apart from the complexity residing in the tree shape itself, phylogenetic inference needs to deal with the complexity of the model of the characters that evolve on the tree. At deep to medium divergence, a correct representation of the data requires complex substitution models; phylogenomic data sets, for example, require partitioned models with potentially different substitution matrices (e.g. single base, doublet, amino acid, codon), although with the simplifying assumption that demographic fluctuations are erased by coalescence events. On the contrary, phylogeographic models concentrate their complexity on the interaction between demography and tree shape (topology and branch lengths).
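The factorial growth mentioned above has a standard closed form: the number of distinct unrooted binary tree topologies for $n$ sequences is
\[
T(n) \;=\; (2n-5)!! \;=\; \prod_{k=3}^{n} (2k-5),
\]
so that already for $n=10$ there are $15!! = 2\,027\,025$ candidate topologies, which makes exhaustive enumeration infeasible for all but the smallest data sets.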

Moreover, a related problem is the complexity of the initial data mining procedures. To reconstruct the phylogeny of each gene family, a complex substitution model needs to be implemented. Today we obtain millions of sequences as raw data, but we have few tools to overview and process (i.e. data mine) high-throughput data, such as Next Generation Sequencing data.
In light of these challenges, the phylogenetic service set of BioVeL aims to produce four types of workflows: a molecular phylogenetic inference set, a phylogenetic diversity set, a phylogenetic molecular evolution set, and a phylogenetic character mapping set. The work presented here is the first instance of a workflow of the first type, which deals with producing a phylogenetic inference from a set of unaligned protein-coding nucleotide sequences. First, the workflow finds the most likely protein domain common to all nucleotide sequences, trying all coding frames and all user-selected genetic codes, using the Hidden Markov Model implementation of HMMER 3.0 and the Pfam database. Then it performs the alignment based on that profile, filtering out sites with a poor fit to the protein profile.
The file is then sent for phylogenetic inference with the program MrBayes. To do so, the file is formatted in Nexus format, and a user-specified model of evolution and the details of the Markov chain integration are attached to it. The MrBayes service automatically chooses the MPI setup best matching the user's requests on the Markov chain integration, and the job is sent to a number of CPUs that is optimal in terms of efficiency of use.

\section{Technical and user requirements}
In developing this framework we started with an analysis of the technical and user requirements, in order to find the best solutions for our use case.

\subsection{Authentication and security}
We decided to separate user identification for access to specialized computing resources within the grid from the security of the system. This allows a user without special access to resources to go to a web page and submit queries or calculations with a simple web form or web service, without having to register or request a certificate. For the purposes of this paper we describe only how we maintained the required level of security without user identification. The problem of differential access to computing resources is postponed and left to the BioVeL project as a whole, or more precisely to the agreement to be settled between the LifeWatch ESFRI (which includes BioVeL) and EGI.
We did not implement any security routine within the workflow that accesses the web services, because one of the points of interest of the project is to allow the user to modify the workflow if needed. Security was therefore enforced only in the Web Services that compose the workflows.
 
\subsection{Client requirements}
Another important requirement concerns the client used by the end users.
The BioVeL project has decided that the only supported client for managing the complex workflows requested by the scientific analyses is the Taverna workflow manager.
This means that we had to find the best implementation that could work with Taverna.
We soon understood that this kind of technology could also be interesting for other users who cannot or do not want to exploit such a powerful tool as Taverna, but only have a command line application like curl, or a simple web browser.
In order to support both use cases, we found that the best solution is to provide a REST interface with the GET method.

\section{Overall description of Job Submission Tool}
In order to satisfy the requirement of the project to submit jobs from a Web Service to the grid and to local computing infrastructure, we expanded the functionality of the Job Submission Tool (JST; \cite{Tulipano2011}). JST was developed some years ago to allow the submission of large numbers of jobs and to keep track of all of them in an unattended way, but it was not yet able to accept job submissions through a Web Service.

\subsection{Anatomy of Job Submission Tool}
As shown in the Figure~\ref{Fig:fig1}, the system, after the expansion, is made of four macro-elements:

\begin{itemize}
\item{Management DataBase (MDB): MySQL database storing the jobs and the configuration parameters}
\item{FrontEnd: REST services accessible from the outside}
\item{BackEnd: application daemon that runs in the background for job submission and execution, both on local machines and in the grid environment}
\item{Exchange Files Server: a WebDAV server that allows the user and the worker nodes to exchange data files}
\end{itemize}

\begin{figure}
\centering
\includegraphics[width=\textwidth]{JSTSchema}
\caption{Interaction between elements. The graph includes the four JST elements together with the user, client and worker node (WN) elements. The arrows indicate the flow of information. Gray, thick arrows indicate the flow of large files, while thin, black arrows indicate the flow of small text files of various formats.}
\label{Fig:fig1}
\end{figure}


The components called FrontEnd and BackEnd have been written in Java using the Java EE 6.0 framework; in particular, the FrontEnd also uses the Jersey JAX-RS framework and the Apache Tomcat web server.

The user can interact with the system in several ways: the FrontEnd component can be called by command line interface clients (e.g. curl or wget), by any web browser, or by a workflow manager (e.g. Taverna).
Each step can be executed by means of the same tool or using different tools; indeed, the FrontEnd provides a client-independent layer for the grid job scheduling system.

\subsection{Job submission Tool at work}

At the submission phase all the jobs are identical: as a matter of fact, when a job is submitted it does not know which task(s) it has to execute. The FrontEnd accesses the ConfigTask table (Table \ref{tab:ConfigTask}) of the MDB to retrieve all the information and the pieces of software required to perform the job, and defines how many tasks are needed and of what type.
\begin{table}
\begin{tabular}[width=\textwidth]{|l|l|}
\hline
\multicolumn{2}{|c|}{ConfigTask} \\
\hline
NAME  & CHAR \\
LINK &  CHAR \\ 
DATE\_NOW &  DATE \\ 
CPUs & CHAR \\
COMMENT & LONG\_TEXT \\ 
STATUS & CHAR \\
\hline
\end{tabular}
\caption{ConfigTask schema}
\label{tab:ConfigTask} 
\end{table}
Using this information, the FrontEnd fills in a record for each task in the TaskList table (Table \ref{tab:TaskList}) of the MDB.
For the proper monitoring of task assignment and job termination, several parameters are associated with each task, as can be seen in the reported table schemas. A few of the most important fields are described in detail here:
\begin{itemize}
\item{The status of the task: it can be "Free" (not assigned), "Running" (assigned) or "Done" (job terminated). If the status is "Free" the task can be assigned to a job. If a task has been in the "Running" state for more than a fixed time interval, meaning that the job has probably failed, the task can be reassigned to a new job. If the status is "Done", or "Running" for less than the fixed time interval, the task is ignored during the task assignment process.}
\item{The dependencies of each task. It is possible to flag tasks that need the execution of a different task before their own execution (dependency): only tasks with no dependencies, or whose prerequisite tasks are all in the "Done" status, can be executed.}
\item{Priority. It is possible to assign an arbitrary priority to each task. Priority is used to select the task that has to be executed first.}
\item{Job provenance. It is possible to know which job has actually performed which task.}
\item{The task description: it can be a string or a link to a specific script to execute; in this way it is possible to better control what is being executed on the WN. As a matter of fact, it is possible to change the execution (e.g. if a bug is discovered, or there is a need for a new optimization) even after the submission of the jobs.}
\item{Number of failures. When a task fails, the system logs the event in order to avoid the resubmission of always failing tasks. It is possible to set a maximum number of resubmissions.}
\item{Date and time of execution. }
\item{The username of the user submitting the task, together with their e-mail address}
\item{The information on the infrastructure where the task should be executed (grid, local farm, interactive server)}
\end{itemize}
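As an illustration, the assignment rules listed above can be sketched in a few lines of Python; the field names and the timeout value are illustrative stand-ins, not the actual MDB schema:

\begin{lstlisting}[breaklines=true]
from datetime import datetime, timedelta

STALE_AFTER = timedelta(hours=6)  # illustrative reassignment timeout

def is_assignable(task, all_tasks, now):
    """Decide whether a task may be handed to a job,
    following the Free/Running/Done rules."""
    # Dependencies: every prerequisite task must be Done.
    if any(all_tasks[d]["status"] != "Done" for d in task["depends_on"]):
        return False
    # Never resubmit an always-failing task.
    if task["failures"] >= task["max_failures"]:
        return False
    if task["status"] == "Free":
        return True
    # A long-Running task is presumed lost and may be reassigned.
    if task["status"] == "Running" and now - task["started"] > STALE_AFTER:
        return True
    return False  # Done, or Running within the time limit

def next_task(all_tasks, now):
    """Pick the assignable task with the highest priority."""
    candidates = [t for t in all_tasks.values()
                  if is_assignable(t, all_tasks, now)]
    return max(candidates, key=lambda t: t["priority"], default=None)
\end{lstlisting}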

\begin{table}
\begin{tabular}[width=\textwidth]{|l|l|}
\hline
\multicolumn{2}{|c|}{TaskList} \\
\hline
ID  & INTEGER \\
NAME  & CHAR \\
PRIOR & INTEGER \\
STATUS & CHAR \\
PROVENANCE &  CHAR \\ 
LINK &  CHAR \\ 
DATE\_NOW &  DATE \\ 
FAILURE & CHAR \\
arguments &  LONG\_TEXT \\ 
COMMENT & LONG\_TEXT \\ 
JOB\_ID & CHAR \\
MAIL & CHAR \\ 
OUTPUT & CHAR \\
OUTPUT\_TYPE & CHAR \\ 
FLAG & CHAR \\ 
\hline
\end{tabular}
\caption{TaskList schema}
\label{tab:TaskList}
\end{table}
The BackEnd daemon is used to submit the tasks at a given, tunable rate. As a matter of fact, several daemons were used in parallel. Each one can use more than one gLite WMS, so that the failure of a single WMS cannot stop the submission procedure. The daemons automatically stop job submission when no more unassigned tasks are found in the TaskList table.
The jobs submitted to the grid contain a job wrapper. The job wrapper and the script that contains the executable can send to the job\_mon table (Table \ref{tab:jobmon}) of the MDB server any kind of information that can be used to control which jobs are running, which task they are performing and which operation they are executing on the WN. This table can be separated from the one hosting the task list, the MDB, if the scalability of the DB itself becomes an issue. A record in this table represents a single variable monitored in a given task and job, so several records can match a given task.

\begin{table}
\begin{tabular}[width=\textwidth]{|l|l|}
\hline
\multicolumn{2}{|c|}{job\_mon} \\
\hline
TIME\_LOCAL  & timestamp \\
JOB\_ID &  CHAR \\ 
JOB\_NAME & CHAR \\ 
NOME\_VAR & CHAR \\
TIPO\_VAR & CHAR \\
DESCRIPTION & LONG\_TEXT \\ 
HOST & CHAR \\
FLAG & CHAR \\
\hline
\end{tabular}
\caption{Job Monitoring schema. The columns are the name and type of each field in the table. The fields are: date and time on the WN that sends the monitoring information; job id and name; name and type of the monitored variable; description of the variable; host that is sending the information; id of the task.}
\label{tab:jobmon}
\end{table}
This general schema leaves the user the possibility to monitor any job operation by adding new variables as needed, without changing the DB schema. If required, this feature provides information on how many jobs are running at a given time. Indeed, the job wrapper can be configured to send monitoring information regularly, at fixed time intervals.
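For illustration, the shape of a single job\_mon record can be sketched as follows; the wrapper would build one such record per monitored variable and send it to the MDB at fixed intervals. The function and its arguments are a sketch, not the actual wrapper code, and the field names mirror Table \ref{tab:jobmon}:

\begin{lstlisting}[breaklines=true]
import time

def monitor_record(job_id, job_name, var_name, var_type,
                   description, host, task_flag, clock=time.localtime):
    """Build one job_mon row: a single monitored variable
    for a given job and task."""
    return {
        "TIME_LOCAL": time.strftime("%Y-%m-%d %H:%M:%S", clock()),
        "JOB_ID": job_id,
        "JOB_NAME": job_name,
        "NOME_VAR": var_name,    # name of the monitored variable
        "TIPO_VAR": var_type,    # type of the monitored variable
        "DESCRIPTION": description,
        "HOST": host,            # WN sending the information
        "FLAG": task_flag,       # id of the task
    }
\end{lstlisting}

Because each record carries its own variable name, a new quantity can be monitored simply by emitting records with a new \texttt{NOME\_VAR}, with no change to the DB schema.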
Information on the execution of each task is logged in the central MDB according to the parameters mentioned above. Only if all steps are correctly executed is the status of that particular task updated to "Done". In this way the MDB provides complete monitoring of task assignment and job execution, and no manual intervention is required to follow each step and manage any resubmission of failed tasks.
Usually we see two kinds of failed jobs: the first kind includes jobs that were killed by the queue manager, while the second is related to internal problems of the job (some operation failed). In the first case the wrapper cannot update the MDB, while in the second case the wrapper script updates the MDB, increasing the "FAILURE" counter.
In order to deal with the first kind of failure, tasks found in the "Running" state for more than a fixed amount of time are considered failed and automatically resubmitted, without increasing the failure counter.
To optimize input and output operations and to avoid bottlenecks and failures, the files available on the File Exchange server can, if and when needed, be transferred to grid Storage Elements. In this case, two procedures were set up to randomly choose the Storage Element (SE) acting as the source of the job input file, as well as the SE where the job output is stored. In this way, if one SE fails temporarily, it is still possible to run the task and store the output.
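The random SE choice can be sketched as follows; the SE names and the liveness probe are illustrative assumptions, not the actual procedure:

\begin{lstlisting}[breaklines=true]
import random

def pick_storage_element(storage_elements, is_alive, rng=random):
    """Randomly choose a Storage Element, skipping any that fails
    a liveness probe, so a temporarily broken SE does not stop
    the task."""
    candidates = list(storage_elements)
    rng.shuffle(candidates)          # randomize the visiting order
    for se in candidates:
        if is_alive(se):
            return se
    raise RuntimeError("no Storage Element currently available")
\end{lstlisting}

Randomizing the order also spreads the load evenly across the available SEs instead of always hitting the first one in the list.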


\subsection{ Software Architecture of the JST}
To allow robustness and seamless interaction across the modules that handle new and old functionality of JST, the application was rewritten using the MVC (Model View Controller) design pattern, and three distinct layers were defined:
\begin{itemize}
\item{View}
\item{Business logic}
\item{Access to data}
\end{itemize}

In order to implement those layers we developed a series of libraries, suitably referenced and integrated with each other.
These libraries are:
\begin{itemize}
\item{Service library, in FrontEnd package, contains the methods available to the service}
\item{XmlEntityLayer is devoted to serialize/deserialize the XML response format of the service (Map of the responses in XML)}
\item{EntityLayer contains the schema of the database}
\item{BusinessLayer contains the business logic for the management of the MDB and methods of service}
\item{DataLayer provides classes and methods for accessing data in the database}
\item{Core contains various utilities and functionality}
\item{MultiThread contains management and scheduling methods of the thread for the execution of the Job}

\end{itemize}

\begin{figure}
\centering
\includegraphics[width=\textwidth]{Archi}
\caption{Libraries Schema}
\label{Fig:fig2}
\end{figure}


Figure~\ref{Fig:fig2} shows how the libraries interact with each other.
In particular, FrontEnd and BackEnd interact directly with the BusinessLayer and Core libraries.
BusinessLayer implements the logic of the system and the management of the database, so it is the component that calls the other libraries as appropriate.

\subsection{BackEnd implementation features}

The BackEnd application, written in Java, is a daemon and can run constantly on many nodes at the same time, in order to provide better scalability.
It is implemented with a multi-thread strategy and is interfaced to the MDB to retrieve the tasks to be submitted to the grid or to a local farm, or to be executed on dedicated servers.
Each task is flagged to be executed in a grid environment, on a local batch farm, or on the server itself.
The BackEnd daemon can be configured by a site admin and can cover all three functionalities.
Indeed, it can execute a given amount of jobs on the same server, or it can submit requests to execute jobs on the grid or on local computing infrastructures.

\subsection{FileExchange Server}
In order to fulfil the community requirements in terms of data management, we have chosen a solution able to provide an easy and user-friendly interface for transferring files of several GB in size, or huge numbers of small files in a complex directory structure.
All these requirements are satisfied by means of an Apache-based WebDAV server.
Indeed, this solution has several useful characteristics:
\begin{itemize}
\item{Flexible authentication}
\item{Could be mounted as a file-system}
\begin{itemize}
\item{Allowing drag\&drop operation}
\item{Allowing direct opening of the files}
\end{itemize}
\item{Supported on the most widely used operating systems}
\begin{itemize}
\item{Windows}
\item{Mac Os}
\item{Linux}
\item{Android}
\item{iOS}
\end{itemize}
\item{Could support lock of files}
\item{Could provide Name Space operation}
\item{Could support user driven metadata information management (PUT and GET)}
\end{itemize}


The files can also be made available via http/https links, both to the end users and to the applications. This means that there is no need to transfer temporary output from the computing infrastructure to the client: an application running behind a web service can exploit the output of another application directly from the server, instead of requiring the files to be staged to/from the user's desktop.
This greatly improves the overall performance when running multi-step workflows.

\textbf{Security issues on Data Management}

At the moment the project is trying to promote its services to external users, so the framework should provide functionalities that allow seamless interaction with the storage services within the Taverna workflow, both in terms of input upload and output download. Furthermore, given that all users share the same account without a password, we added the requirements of preventing the overwriting of input files uploaded by the users, preventing a user from reading the output files of other users, and preventing the overflowing of the storage server by a single user.
The security in the exchange of files is guaranteed using two different systems for input and output. For input, users are given access to the File Exchange Server, a WebDAV server without username and password identification. The account allows only uploading, not reading. Furthermore, the account is size-limited and periodically erased. To avoid two users submitting files with the same name during the same period of usage, overwriting is disabled on the account. For output, both the users and further jobs that need to access results can use an address pointing to a read-only HTTP folder. To ensure security, the name of the last directory in the address is randomly generated. The web service delivers the address only if the user interrogates it with the correct FLAG value given during the job submission transaction.
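A minimal sketch of the unguessable output address, using Python's \texttt{secrets} module as a stand-in for whatever random generator the server actually uses (the path layout is hypothetical):

\begin{lstlisting}[breaklines=true]
import secrets

def output_folder(job_flag):
    """Build the last, randomly generated directory of the
    read-only output URL; the address is only useful to whoever
    holds the FLAG value returned at job submission."""
    # token_urlsafe(16) draws 16 random bytes from the OS CSPRNG
    # and encodes them as a URL-safe string.
    return "results-%s-%s" % (job_flag, secrets.token_urlsafe(16))
\end{lstlisting}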

\subsection{FrontEnd implementation features}

The application called FrontEnd was written in Java using the Jersey components, in particular the JAX-RS framework, and the Java 6 SDK.
The FrontEnd is a RESTful web service: such services are widely used in the workflow manager environment, but can also be easily exploited with simpler clients, like Unix command line tools and standard web browsers.

Representational State Transfer (REST) is an architectural style that specifies constraints, such as the uniform interface, that if applied to a web service induce desirable properties, such as performance, scalability, and the possibility to easily modify the application or reuse part of its code. This is important to provide usable and robust services to the end users. In the REST architectural style, data and functionality are considered resources, and these resources are accessed using Uniform Resource Identifiers (URIs), typically links on the web. 

The resources could be accessed or modified by using a set of simple, well-defined operations. The REST architectural style constrains a client-server architecture, and it is designed to use a stateless communication protocol, typically HTTP. In the REST architecture style, clients and servers exchange representations of resources using a standardized interface and protocol. These principles encourage RESTful applications to be simple and lightweight in order to provide high performance. One of the advantages of the use of REST paradigm is also the independence from the clients ensuring a high portability and scalability.


\textbf{Security implementation}

The user interacts with the computing infrastructure only by means of the FrontEnd, sending short strings within predefined REST commands. Security within the REST commands is maintained by the fact that the strings are only used to compose the arguments for predefined applications, avoiding the risk of shell injection.
Moreover, if required, the same methods could be implemented over the HTTPS protocol, in order to guarantee the privacy of the data provided to the REST service.
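The point about shell injection can be made concrete with a short sketch; the application whitelist and install path are hypothetical. User strings become discrete entries of the argument vector of a predefined executable and are never interpreted by a shell:

\begin{lstlisting}[breaklines=true]
import subprocess

ALLOWED = {"mrbayes": "/opt/mrbayes/mb"}  # hypothetical whitelist

def build_argv(app, user_args):
    """Compose the argument vector for a predefined application;
    user strings are data, never shell text."""
    if app not in ALLOWED:
        raise ValueError("unknown application: %s" % app)
    return [ALLOWED[app]] + [str(a) for a in user_args]

def run_predefined(app, user_args):
    # A list argv means shell=False: metacharacters in user_args
    # are inert, so "; rm -rf /" is just a literal argument.
    return subprocess.run(build_argv(app, user_args),
                          capture_output=True, text=True)
\end{lstlisting}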
   
\textbf{Supported Methods}
 
\emph{InsertJobs}: this is the method that allows the user to create multiple jobs for the same application in the database; each job is identified by a different argument. Arguments are separated by ";":
\begin{lstlisting}[breaklines=true]
http://localhost:8080/INFN.Grid.FrontEnd/services/QueryJob/InsertJobs?NAME={nameValue}&arguments={http://webtest.ba.infn.it/vicario/FinalFusariumDB_2.nex ArgOne; http://webtest.ba.infn.it/vicario/FinalFusariumDB_1.nex ArgTwo;}
\end{lstlisting}
This method returns the XML type XmlInsertJobs, with information on the tasks added to the TaskList table.
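The expansion of the \texttt{arguments} string into task records can be sketched as follows; the record fields are a simplification of the TaskList schema:

\begin{lstlisting}[breaklines=true]
def expand_arguments(name, arguments):
    """Split the InsertJobs `arguments` string on ';' and create
    one task record per non-empty entry, as the FrontEnd does."""
    tasks = []
    for entry in arguments.split(";"):
        entry = entry.strip()
        if entry:  # a trailing ';' leaves an empty entry: skip it
            tasks.append({"NAME": name,
                          "arguments": entry,
                          "STATUS": "Free"})
    return tasks
\end{lstlisting}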

\emph{SelectJobs}: this is the method that allows the user to get the status of the list of tasks of a given job in the database, identified by the "FLAG" value:
\begin{lstlisting}[breaklines=true]
http://localhost:8080/INFN.Grid.FrontEnd/services/QueryJob/SelectJobs?FLAG={FlagValue}
\end{lstlisting}
This method returns the XML type XmlSelectJobs, with information on the jobs found in the TaskList table.

Here is an example of the structure of the XML that the FrontEnd service reports to the user:

\begin{lstlisting}[breaklines=true]
XmlSelectJobs:
	<Jobs>
		<Job>
			<Arguments>Arg4</Arguments>
			<Comment>local</Comment>
			<CPUs>1</CPUs>
			<Flag>001</Flag>
			<Id>4</Id>
			<LastCheck>2012-02-14 10:54:58.0</LastCheck>
			<Name>blast4</Name>
			<Output/>
			<Provenance/>
			<Status>done</Status>
		</Job>
	</Jobs>
	
XmlInsertJobs:
	<Job>
		<Name>NOME</Name>
		<Flag>001</Flag>
		<JobsID>
			<JobId>12</JobId>
			<JobId>13</JobId>
		</JobsID>
	</Job>
\end{lstlisting}

\section{Interaction between REST service and Taverna Workflow Manager}
The interaction between the Taverna (\cite{Hull2006}) workflow manager and the FrontEnd of the submission service is possible using the Taverna plugin that can exploit REST web services. In order to use it, one only needs to provide the URL where the service is located and the parameters needed to submit the jobs and check their status.
The parameters needed are:
\begin{itemize}
\item{The name of the application}
\item{The arguments}
\end{itemize}

The service, as already described in the previous section, can be used to run each specific task of a very complex workflow.
Indeed, Taverna can manage within the same workflow services provided by this framework together with services already available in other infrastructures, or \textit{ad-hoc} services developed within Taverna itself.

This provides extreme flexibility in building powerful workflows, as it gives the user the possibility to run on a grid infrastructure only the CPU-intensive part of the work, while the lightweight part can rely on any kind of infrastructure.

The main problem in the interaction between this framework and the Taverna workflow manager is that, while the latter is designed to interact with synchronous services, the framework we built is basically asynchronous.
In order to solve this issue, a loop must be built into the Taverna workflow that allows the end user to wait for the execution of the tasks, checking the status of the submitted tasks until they are done.
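Such a polling loop can be sketched as follows; \texttt{fetch\_status} stands for the HTTP GET on the SelectJobs method and is an assumption of this sketch, as is the polling interval:

\begin{lstlisting}[breaklines=true]
import time
import xml.etree.ElementTree as ET

def wait_until_done(fetch_status, flag, poll_every=60.0,
                    sleep=time.sleep):
    """Poll SelectJobs until every job with the given FLAG is
    'done'. `fetch_status(flag)` must return the XmlSelectJobs
    document as text."""
    while True:
        root = ET.fromstring(fetch_status(flag))
        statuses = [job.findtext("Status")
                    for job in root.iter("Job")]
        if statuses and all(s == "done" for s in statuses):
            return statuses
        sleep(poll_every)  # wait before asking the FrontEnd again
\end{lstlisting}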

An example of this workflow is shown in Figure~\ref{Fig:tav}. As we include more applications in our computing infrastructure, we will publish these example workflows in standard workflow catalogues such as myExperiment (\cite{DeRoure2009a}), while the included web services will be published in BioCatalogue (\cite{Bhagat2010}).

\begin{figure}
\centering
\includegraphics{tavernaws}
\caption{An example of a Taverna workflow that could exploit the Grid Job Submission Framework described}
\label{Fig:tav}
\end{figure}

\section{Test and results}
We have already tested this solution in order to prove the scalability, the reliability and the functionalities described above.
We tested the FrontEnd with a huge number of inserts, in order to prove that it is reliable and fast enough to serve the requests from the users.
The tests showed that even after 1 million inserts the FrontEnd does not suffer from memory leaks or similar problems.
We also tested the behaviour with multiple concurrent accesses to the same server: with up to 100 concurrent clients we did not register any kind of problem or error, and the server was not overloaded by the resulting activity.
From the point of view of the back-end and database server, we have experience running with more than 1 million tasks in a single table without any issues.
We have used this system for long-running challenges on the European Grid Infrastructure (EGI), lasting for more than one month.

\section{Conclusion and future works}
The aim of this work is to provide a seamless interface to powerful computing resources, such as grid distributed infrastructures based on EMI middleware, or
big batch farms, for end users who are used to carrying out their analyses using only simple and high-level tools like workflow managers.
The key idea is to provide stateless web service interfaces based on REST technologies, in order to make all the complexity related to job submission transparent
to the end users.
In this way researchers can exploit the grid infrastructures as they do with any available web service.
In the next months we will work on notification (e.g. by e-mail) of task completion. This is particularly important for phylogenetic inference, in which a given task of a job can last several days. It is then useful to avoid having the client or workflow engine selected by the user query the REST service for such a long time: the notification can be used to hint the user or the workflow engine to query the REST service again after a long period of waiting.
We also plan to add a method in the FrontEnd that will allow access to the job monitoring table, using both the Taverna workflow manager and simpler clients. This will allow users to access more detailed statistics on their jobs, and to diagnose causes of failure or delay in their submissions.



\cleardoublepage
\printbibliography
\end{document}


