\chapter{Tests and Evaluation}
\label{chapter7}
\thispagestyle{empty}
This chapter discusses the results of the tests we conducted on the Model-Driven Information Retrieval System. As explained in Chapter~\ref{chapter6}, the two case studies of the system retrieve project models belonging to two different datasets. The first dataset consists of a set of UML class diagrams, while the second one consists of a set of real-world WebML applications.

The tests for the UML case study involve different types of keyword-based query. Each type, hereafter called a ``meta-query'', has different characteristics in terms of the document targeted by the query (e.g., a project or a class) and of the information need the query expresses (e.g., the user may want to retrieve a specific project, or all the projects related to a topic). We first outlined a set of five meta-queries and then selected two of them. For each of the two, we built a set of ten instances that we used to test the UML case.

The tests for the WebML case involve a set of ten queries. Each query searches for WebML models (or fragments of WebML models) that apply the most common patterns, such as the ``search pattern'' \cite{Ceri:2002:DDW:599829}. The queries can be submitted to the system as document-based queries in the form of a WebML model. As explained in Section~\ref{webml-case}, the Query Processing phase translates the document-based query into the corresponding text-based query, which is then sent to the search platform.
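As a rough illustration of this translation step (a sketch only, not the system's actual implementation), a model given as a document-based query can be flattened into a bag of keywords to be submitted to a text search platform. The element types, names, and dictionary layout below are purely hypothetical and do not reflect the real WebML metamodel:

```python
# Hypothetical sketch of translating a model-based query into a text query.
# The element structure below is illustrative, not the system's real logic.

def model_to_text_query(model_elements):
    """Collect type names, element names, and attributes into keywords."""
    keywords = []
    for element in model_elements:
        keywords.append(element["type"])
        keywords.append(element["name"])
        keywords.extend(element.get("attributes", []))
    # Lowercase and deduplicate while preserving order, then join.
    seen, terms = set(), []
    for keyword in keywords:
        keyword = keyword.lower()
        if keyword not in seen:
            seen.add(keyword)
            terms.append(keyword)
    return " ".join(terms)

# Example query model: an entry unit feeding an index unit (hypothetical).
query = model_to_text_query([
    {"type": "entry_unit", "name": "SearchProducts", "attributes": ["keyword"]},
    {"type": "index_unit", "name": "Results"},
])
# query == "entry_unit searchproducts keyword index_unit results"
```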

For both the UML and the WebML experiments we tested various configurations involving different indexing options. For example, one test configuration of the WebML case involves the dereferencing of some references in the project models.

To test the effectiveness of the system as a search engine, in each test configuration we consider only the first ten retrieved documents. Therefore, the evaluation metrics assess the quality of the system only up to the tenth rank position.
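A cutoff at rank ten corresponds to the standard precision-at-$k$ metric with $k = 10$ (the metrics are detailed in Section~\ref{evaluation-metrics}). The following sketch shows how it can be computed; the ranked list and relevance judgments are hypothetical data, used only for illustration:

```python
def precision_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = ranked_ids[:k]
    hits = sum(1 for doc_id in top_k if doc_id in relevant_ids)
    return hits / k

# Hypothetical ranking returned by the system and ground-truth judgments.
ranking = ["p3", "p7", "p1", "p9", "p2", "p5", "p8", "p4", "p6", "p10"]
relevant = {"p3", "p1", "p2", "p5"}

p10 = precision_at_k(ranking, relevant, k=10)  # 4 relevant in top 10 -> 0.4
```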

We conducted a manual assessment to build a ground truth for query-to-project relevance: each query was manually evaluated against the projects in the repository, assigning a relevance value to the project elements with respect to that query.

This chapter is organized as follows. Section~\ref{evaluation-metrics} presents the theoretical background of the metrics we used to evaluate the experiments. Section~\ref{groundtruth} details the methodology adopted to build the ground truth against which the test results are compared. Section~\ref{uml-evaluation} presents the query types and the test configurations used in the UML experiments, and then discusses the results. Section~\ref{webml-evaluation} focuses on the WebML test configurations and their results. Section~\ref{results-main-findings} sums up the main findings that emerged from the evaluation.

\section{Evaluation Metrics}
\label{evaluation-metrics}
\input{evaluation_metrics}
\section{Ground Truth}
\label{groundtruth}
\input{groundtruth}
\section{UML Tests and Results}
\label{uml-evaluation}
\input{uml_evaluation}
\section{WebML Tests and Results}
\label{webml-evaluation}
\input{webml_evaluation}
\section{Main Findings}
\label{results-main-findings}
\input{results_main_findings}
