For the evaluation of the WebML experiments we manually built a set of ten models of different sizes to be used as queries. The queries reflect the use case in which a user wants to search for frequently used WebML patterns. Figure \ref{fig:webml_query_example} depicts an example query.

\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=0.7\textwidth]{./pictures/webml_query_example.eps}
	\caption{An example of content-based query used to test the WebML experiments.}
	\label{fig:webml_query_example}
  \end{center}
\end{figure}

The pattern of operations expressed by the project fragment in Figure \ref{fig:webml_query_example} is the following: the user, previously logged into the Web site, browses the ``New book request'' page and adds a new Book to the collection of book requests through the entry unit called ``New book''; the submission of the entry form triggers the creation of a new Book entity and its connection to the User entity; if the tasks executed by the operation units complete successfully, the user is redirected to the ``Book request list'' page.
Since the WebML case involves content-based queries, the query shown in Figure \ref{fig:webml_query_example} is an entire WebML project that includes all the concepts of the WebML metamodel (site view, area, page, unit, etc.). As discussed in Section \ref{webml-case}, the Query Processing phase of the WebML case transforms the content-based query into a text-based query by extracting the terms contained in the query. In this example the Keyword Extraction task produces the following keywords for the given query: ``Book requests Create book ConnectUserToBook New book request New book User Book request list''. Table \ref{tab:list_of_webml_queries} shows the complete list of the ten queries we used to evaluate the WebML case study application.
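The Keyword Extraction step can be sketched as a traversal of the query's XMI document that collects the ``name'' attribute of every element in document order. The sketch below is a minimal illustration, not the tool's actual implementation; the element and attribute names follow the XMI excerpt shown later in this section, and the sample query is a reduced fragment of query 8.

```python
# Sketch of Keyword Extraction: collect the "name" attributes of all elements
# of a content-based (XMI) query, in document order, to form the text query.
# Illustrative only: the real traversal and the full query may differ.
import xml.etree.ElementTree as ET

XMI_QUERY = """
<packagedElement name="Book requests" xmi:id="a1" xmi:type="webml:Area"
                 xmlns:xmi="http://www.omg.org/XMI">
  <packagedElement name="New book request" xmi:id="a1#p1" xmi:type="webml:Page">
    <packagedElement name="New book" xmi:id="a1#p1#e1" xmi:type="webml:EntryUnit"/>
  </packagedElement>
  <packagedElement name="Create book" xmi:id="a1#o1" xmi:type="webml:CreateUnit"/>
</packagedElement>
"""

def extract_keywords(xmi_text):
    root = ET.fromstring(xmi_text)
    # iter() yields the root first, then all descendants in document order
    return " ".join(el.get("name") for el in root.iter() if el.get("name"))

print(extract_keywords(XMI_QUERY))
# Book requests New book request New book Create book
```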

\small

\begin{table}
    \begin{tabular}{|l p{5in}|}
    \hline
    1 & Manage Products Manage Products Products List Search Product \\
    2 & Publication create new pub Enter New Publication New publication Publication type \\
    3 & Modify Modify Modify user Modify user data default group subject List User list \\
    4 & Client Details Create Modify Exist Project Details Project details Title \\
    5 & Manage clients Make calls Manage clients \\
    6 & Responsibles compose mail info Ask for information Mail \\
    7 & Manage appointments Manage appointments Appointments List \\
    8 & Book requests Create book ConnectUserToBook New book request New book User Book request list \\
    9 & Manage documents ModifyUnit1 New Document Modify document Document data Document details Document list Delete document \\
    10 & Contract type Delete Contracts Contract types \\
    \hline
    \end{tabular}
  \caption{The complete list of the ten text-based queries used to evaluate the WebML case study application. The keywords are extracted from the content-based version of the queries.}
  \label{tab:list_of_webml_queries}
\end{table}

\normalsize

As explained in Section \ref{webml-dataset}, our experiments were conducted on a project repository composed of twelve real-world industrial WebML projects from different application domains (e.g., human resource management and Web portals).

To assess the quality of the WebML experiments we tested them with five different configurations:

\begin{itemize}
 \item \emph{Test Configuration 1}: this is the basic test configuration.
 \item \emph{Test Configuration 2}: this test configuration adds the dereferencing of the ``to'' attributes of Link elements.
 \item \emph{Test Configuration 3}: this test configuration adds the dereferencing of the ``to'' attributes of Link elements and of all the ``displayAttributes'' and ``entity'' attributes of the OperationUnit and ContentUnit elements.
 \item \emph{Test Configuration 4}: this test configuration adds the indexing of the names of the WebML metamodel concepts (e.g., site view, area, page, data unit, etc.).
 \item \emph{Test Configuration 5}: this test configuration adds the indexing of the names of the WebML metamodel concepts and assigns a payload of 0.1 to those terms.
\end{itemize}

Test Configurations 2 and 3 enable the dereferencing of the ids contained in the ``entity'' and ``displayAttributes'' attributes of OperationUnit and ContentUnit elements, and of the ``to'' attributes of Link elements.
The latter references are dereferenced by replacing them with the ``name'' attribute of the object pointed to by the link. The ids contained in the ``entity'' and ``displayAttributes'' attributes of OperationUnit and ContentUnit elements are dereferenced by taking the name referenced by each id from the Data Model of the WebML project in which the Unit element is contained.

The following is an example of the XML code of an area called ``Request Information'' containing some ids that will be dereferenced:

\lstset{
language=XML,
breaklines=true,                % sets automatic line breaking
breakatwhitespace=false,        % sets if automatic breaks should only happen at whitespace
basicstyle=\footnotesize	% the size of the font
}

\begin{lstlisting}
<packagedElement name="Request Information" xmi:id="sv1b#area6g" xmi:type="webml:Area">
	...
  <packagedElement name="Request Info" xmi:id="sv1b#area6g#page3g" xmi:type="webml:Page">
    <packagedElement displayAttributes="ent1#att1 ent1#att5b" entity="ent1" name="Report" xmi:id="sv1b#area6g#page3g#dau2g" xmi:type="webml:DataUnit">
      <packagedElement name="Link 59" to="sv1b#area6g#opg5g#seu9g" xmi:id="sv1b#area6g#page3g#dau2g#ln59g" xmi:type="webml:Link"/>
    </packagedElement>
    <packagedElement name="Info" xmi:id="sv1b#area6g#page3g#enu4g" xmi:type="webml:EntryUnit">
      <packagedElement name="Link 57" to="sv1b#area6g#opg5g#cru4g" xmi:id="sv1b#area6g#page3g#enu4g#ln57g" xmi:type="webml:Link"/>
      <packagedElement name="Post" to="sv1b#area6g#opg5g#seu9g" xmi:id="sv1b#area6g#page3g#enu4g#ln58g" xmi:type="webml:Link"/>
    </packagedElement>
  </packagedElement>
	...
</packagedElement>
\end{lstlisting}
In the example above, the ids contained in the ``displayAttributes'' attribute of the Data Unit called ``Report'' will be replaced with the corresponding names taken from the Data Model. Likewise, the ``to'' attributes of the Link elements will be replaced with the names of the elements they point to.
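This dereferencing can be sketched as two passes over the XMI document: the first builds an id-to-name map, the second rewrites the reference attributes. The sketch below is a simplified illustration under stated assumptions: the Data Model names (``Book'', ``title'', ``author'') are hypothetical and supplied as a plain dictionary, whereas the tool would read them from the project's Data Model.

```python
# Sketch of the dereferencing in Test Configurations 2 and 3: pass 1 maps each
# xmi:id to its "name"; pass 2 replaces the ids held in "to", "entity" and
# "displayAttributes" with the names they reference. Illustrative only.
import xml.etree.ElementTree as ET

XMI = """
<packagedElement name="Request Info" xmi:id="pg3" xmi:type="webml:Page"
                 xmlns:xmi="http://www.omg.org/XMI">
  <packagedElement displayAttributes="ent1#att1 ent1#att5b" entity="ent1"
                   name="Report" xmi:id="pg3#dau2" xmi:type="webml:DataUnit">
    <packagedElement name="Link 59" to="pg3#enu4" xmi:id="pg3#ln59" xmi:type="webml:Link"/>
  </packagedElement>
  <packagedElement name="Info" xmi:id="pg3#enu4" xmi:type="webml:EntryUnit"/>
</packagedElement>
"""

XMI_ID = "{http://www.omg.org/XMI}id"  # namespace-qualified attribute name

def dereference(xmi_text, data_model_names):
    root = ET.fromstring(xmi_text)
    # pass 1: id -> name, seeded with the (hypothetical) Data Model names
    id2name = dict(data_model_names)
    for el in root.iter():
        if el.get(XMI_ID) and el.get("name"):
            id2name[el.get(XMI_ID)] = el.get("name")
    # pass 2: replace reference attributes with the names they point to
    for el in root.iter():
        for attr in ("to", "entity", "displayAttributes"):
            if el.get(attr):
                names = [id2name.get(i, i) for i in el.get(attr).split()]
                el.set(attr, " ".join(names))
    return root

root = dereference(XMI, {"ent1": "Book", "ent1#att1": "title", "ent1#att5b": "author"})
data_unit = root.find("packagedElement")
print(data_unit.get("displayAttributes"))           # title author
print(data_unit.find("packagedElement").get("to"))  # Info
```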

In Test Configurations 4 and 5 we add to the index the terms that represent the names of the WebML metamodel concepts. The names of the metamodel concepts of the query elements are likewise added to the query string. Figure \ref{fig:webml_tc4-5_example} depicts an example of a WebML area, and the text below shows the ``content'' field for Test Configurations 4 and 5 (the original words have already been analyzed; the field contains the terms generated by the content analysis):
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=0.5\textwidth]{./pictures/webml_tc4-5_example.eps}
	\caption{An example of WebML area adopted to explain Test Configurations 4 and 5.}
	\label{fig:webml_tc4-5_example}
  \end{center}
\end{figure}

\lstset{
breaklines=true,                % sets automatic line breaking
breakatwhitespace=false,        % sets if automatic breaks should only happen at whitespace
basicstyle=\footnotesize	% the size of the font
}

\begin{lstlisting}
web webmodel model web webmodel model administr site siteview view compet center area save dc creat createunit unit nop op oper noopoperationunit unit nop op oper noopoperationunit unit save modifi modifyunit unit compet center page compet center power index powerindexunit unit entri unit 1 entri entryunit unit new compet center page new compet center entri entryunit unit edit code page modifi cc data dataunit unit edit compet center entri entryunit unit
\end{lstlisting}

For example, the entry unit ``New competence center'' is indexed with the name of its metamodel concept (``entry unit'').
Test Configurations 4 and 5 are intended to increase the recall of the system.
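The way Test Configurations 4 and 5 could build the ``content'' field can be sketched as follows: each model element contributes its own name at full weight, plus the name of its metamodel concept, which in Test Configuration 5 carries a payload of 0.1 instead of 1.0. The concept mapping and element names below are illustrative, and the sketch deliberately omits the stemming step visible in the analyzed output above.

```python
# Sketch of content-field construction for Test Configurations 4 and 5: each
# element yields its name terms (weight 1.0) plus its metamodel concept name,
# down-weighted via the payload in TC5. Mapping and names are illustrative.
CONCEPT_NAMES = {"webml:EntryUnit": "entry unit",
                 "webml:Area": "area",
                 "webml:Page": "page"}

def content_terms(elements, concept_payload=1.0):
    """elements: iterable of (name, xmi_type); returns (term, weight) pairs."""
    terms = []
    for name, xmi_type in elements:
        terms += [(t, 1.0) for t in name.lower().split()]
        concept = CONCEPT_NAMES.get(xmi_type, "")
        terms += [(t, concept_payload) for t in concept.split()]
    return terms

elements = [("New competence center", "webml:EntryUnit")]
print(content_terms(elements, concept_payload=0.1))
# [('new', 1.0), ('competence', 1.0), ('center', 1.0), ('entry', 0.1), ('unit', 0.1)]
```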

The test configurations are assessed with the evaluation metrics discussed in Section \ref{evaluation-metrics}. The results are discussed and compared below.

Figure \ref{fig:webml_tc123_dcg_comparison} shows the DCG and iDCG curves of the first three test configurations.
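As a reminder of how such curves are obtained, the classic formulation computes $\mathrm{DCG}@k = rel_1 + \sum_{i=2}^{k} rel_i / \log_2 i$, while iDCG is the DCG of the relevance judgements sorted in decreasing order. The sketch below uses this formulation with illustrative graded judgements; the exact gain/discount variant used in the experiments is defined in Section \ref{evaluation-metrics}.

```python
# Sketch of the DCG and iDCG computation: DCG@k accumulates graded relevance
# discounted by log2 of the rank; iDCG uses the ideal (sorted) ordering.
import math

def dcg(relevances):
    curve, total = [], 0.0
    for i, rel in enumerate(relevances, start=1):
        total += rel if i == 1 else rel / math.log2(i)
        curve.append(total)
    return curve

judged = [2, 2, 1, 0, 1]            # graded relevance of the ranked results
ideal = sorted(judged, reverse=True)
print(dcg(judged))                   # actual curve
print(dcg(ideal))                    # iDCG: the upper bound plotted alongside
```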

\begin{figure}[htbp]
%\centering
\subfigure[DCG and iDCG curves of Test Configuration 1.]{
   \includegraphics[scale = 0.45] {./pictures/WebML-DCG_TC1.eps}
   \label{fig:webml_tc123_dcg_comparison_tc1}
 }
 \subfigure[DCG and iDCG curves of Test Configuration 2.]{
   \includegraphics[scale = 0.45] {./pictures/WebML-DCG_TC2.eps}
   \label{fig:webml_tc123_dcg_comparison_tc2}
 }
 \subfigure[DCG and iDCG curves of Test Configuration 3.]{
   \includegraphics[scale = 0.45] {./pictures/WebML-DCG_TC3.eps}
   \label{fig:webml_tc123_dcg_comparison_tc3}
 }
\caption{DCG and iDCG curves of the first three WebML test configurations.}
\label{fig:webml_tc123_dcg_comparison}
\end{figure}

With Test Configuration 1 (\ref{fig:webml_tc123_dcg_comparison_tc1}), Experiments B (Concept Granularity, Multi-Field Index) and C (Concept Granularity, Multi-Field Weighted Index) produce the same results up to the second ranking position. Except for Test Configuration 2 (\ref{fig:webml_tc123_dcg_comparison_tc2}), Experiment C always performs better than Experiment B: these two facts suggest that the use of payloads slightly improves the quality of the results. The results of Test Configuration 2 (\ref{fig:webml_tc123_dcg_comparison_tc2}) and of Test Configuration 3 (\ref{fig:webml_tc123_dcg_comparison_tc3}), compared to those of Test Configuration 1 (\ref{fig:webml_tc123_dcg_comparison_tc1}), show that the dereferencing does not improve the quality of the system: in general, the names of the Link elements are not meaningful, and dereferencing them causes the indexing of terms that will never be relevant with respect to any query. At the same time, the ``content'' field of the relevant documents becomes larger, which penalizes them due to the FieldNorm factor.

Figure \ref{fig:webml_tc1_11pintavpr} depicts the plot of 11-point Interpolated Average Precision of WebML Experiments B (Concept Granularity, Multi-Field Index) and C (Concept Granularity, Multi-Field Weighted Index) with Test Configuration 1.
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=0.7\textwidth]{./pictures/WebML-11pIntAvgPr_TC1.eps}
	\caption{Plot of 11-point Interpolated Average Precision of WebML Experiments B (Concept Granularity, Multi-Field Index) and C (Concept Granularity, Multi-Field Weighted Index), Test Configuration 1.}
	\label{fig:webml_tc1_11pintavpr}
  \end{center}
\end{figure}
These results show that both Experiments B and C are able to retrieve 40\% of the relevant documents after ten documents are retrieved. After the first 20\% of relevant documents are retrieved, precision drops to quite low values (approximately $0.2$). This behavior is due to the way we established the ground truth for the WebML dataset (see Section \ref{groundtruth}). We judged as relevant not only those documents that have terminological or conceptual similarities with respect to the query, but also those documents that employ the same kind of WebML pattern adopted in the query and, therefore, have the same structure. Since our system is text-based (although it exploits information of the metamodel), it is not able to retrieve those documents that are relevant only because of their structural similarity with respect to the query. This also justifies investigating, in future work, the results that can be obtained by adopting graph-based techniques that take into account the structural similarities between the projects in the repository and the queries.
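For reference, the 11-point Interpolated Average Precision takes, at each recall level $r \in \{0.0, 0.1, \ldots, 1.0\}$, the maximum precision observed at any recall $\geq r$. A minimal sketch with illustrative binary judgements:

```python
# Sketch of 11-point Interpolated Average Precision: interpolated precision at
# recall level r is the maximum precision at any recall >= r.
def eleven_point_interpolated(ranked_is_relevant, total_relevant):
    # (recall, precision) after each retrieved document
    points, hits = [], 0
    for k, rel in enumerate(ranked_is_relevant, start=1):
        hits += rel
        points.append((hits / total_relevant, hits / k))
    levels = [i / 10 for i in range(11)]
    return [max((p for r, p in points if r >= lvl), default=0.0) for lvl in levels]

# 1 = relevant, 0 = not relevant; 5 relevant documents exist in the collection
curve = eleven_point_interpolated([1, 1, 0, 1, 0, 0, 1, 0], total_relevant=5)
print(curve)
```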

Figure \ref{fig:webml_tc1_p_at_k} shows the Precision at k curve of WebML Experiments B (Concept Granularity, Multi-Field Index) and C (Concept Granularity, Multi-Field Weighted Index) with Test Configuration 1.
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=0.7\textwidth]{./pictures/WebML-P@k_TC1.eps}
	\caption{Plot of the Precision at k curve of WebML Experiments B (Concept Granularity, Multi-Field Index) and C (Concept Granularity, Multi-Field Weighted Index), Test Configuration 1.}
	\label{fig:webml_tc1_p_at_k}
  \end{center}
\end{figure}
These results confirm that Experiment C is slightly better than Experiment B also in terms of precision. Both Experiments B and C achieve good precision up to the third position (the precision is greater than or equal to $0.8$).
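Precision at k is simply the fraction of relevant documents among the first k retrieved; a one-line sketch with illustrative judgements:

```python
# Sketch of Precision at k: fraction of relevant documents in the top k.
def precision_at_k(ranked_is_relevant, k):
    return sum(ranked_is_relevant[:k]) / k

ranking = [1, 1, 1, 0, 1]  # 1 = relevant, 0 = not relevant
print([precision_at_k(ranking, k) for k in range(1, 6)])
# [1.0, 1.0, 1.0, 0.75, 0.8]
```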

\newpage

Table \ref{tab:webml_MAP} reports the MAP results of the first three test configurations, while Table \ref{tab:webml_MRR} reports the MRR results.

MAP confirms that Experiment C (Concept Granularity, Multi-Field Weighted Index) is the best scenario in all the configurations and that Test Configurations 2 and 3 do not improve the results. MRR suggests that, in all the test configurations, the experiments are able to retrieve the first relevant document at a very high ranking position.
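Both measures average a per-query score over the query set: average precision is the mean of the precisions at the ranks of the relevant documents, and reciprocal rank is $1/\mathrm{rank}$ of the first relevant document. A minimal sketch with illustrative judgements for two queries:

```python
# Sketch of MAP and MRR over a set of queries (illustrative judgements).
def average_precision(ranked_is_relevant):
    hits, precisions = 0, []
    for k, rel in enumerate(ranked_is_relevant, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

def reciprocal_rank(ranked_is_relevant):
    for k, rel in enumerate(ranked_is_relevant, start=1):
        if rel:
            return 1 / k
    return 0.0

rankings = [[1, 0, 1, 0], [0, 1, 1, 0]]   # one judged ranking per query
map_score = sum(average_precision(r) for r in rankings) / len(rankings)
mrr_score = sum(reciprocal_rank(r) for r in rankings) / len(rankings)
print(round(map_score, 3), round(mrr_score, 3))  # 0.708 0.75
```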

\begin{table}
  \begin{tabular}{c|c|c|}
  \multicolumn{3}{c}{Mean Average Precision (MAP)} \\
  \rowcolor[gray]{.8} & \bfseries{Exp B} & \bfseries{Exp C} \\ \hline \hline
  Test Configuration 1 &	0.80 	& 0.81 \\ \hline
  Test Configuration 2 &	0.77	& 0.78 \\ \hline
  Test Configuration 3 & 	0.76	& 0.77 \\ \hline
  \end{tabular}
  \caption{MAP results of the test configurations of the WebML model-based search engine.}
  \label{tab:webml_MAP}
\end{table}

\begin{table}
  \begin{tabular}{c|c|c|}
  \multicolumn{3}{c}{Mean Reciprocal Rank (MRR)} \\
  \rowcolor[gray]{.8} & \bfseries{Exp B} & \bfseries{Exp C} \\ \hline \hline
  Test Configuration 1 &	0.93 	& 0.93 \\ \hline
  Test Configuration 2 &	0.95	& 0.93 \\ \hline
  Test Configuration 3 & 	0.90	& 0.90 \\ \hline
  \end{tabular}
  \caption{MRR results of the test configurations of the WebML model-based search engine.}
  \label{tab:webml_MRR}
\end{table}

Figure \ref{fig:webml_tc145_dcg_comparison} shows a comparison between the DCG and iDCG curves of Test Configuration 1 (no dereferencing), Test Configuration 4 (names of the metamodel concepts added to the index and to the query string) and Test Configuration 5 (names of the metamodel concepts added to the index with a payload of 0.1 and to the query string).

\begin{figure}[htbp]
%\centering
\subfigure[DCG and iDCG curves of Test Configuration 1.]{
   \includegraphics[scale = 0.45] {./pictures/WebML-DCG_TC1.eps}
   \label{fig:webml_tc145_dcg_comparison_tc1}
 }
 \subfigure[DCG and iDCG curves of Test Configuration 4.]{
   \includegraphics[scale = 0.45] {./pictures/WebML-DCG_TC4.eps}
   \label{fig:webml_tc145_dcg_comparison_tc4}
 }
 \subfigure[DCG and iDCG curves of Test Configuration 5.]{
   \includegraphics[scale = 0.45] {./pictures/WebML-DCG_TC5.eps}
   \label{fig:webml_tc145_dcg_comparison_tc5}
 }
\caption{DCG and iDCG curves of Test Configurations 1, 4 and 5.}
\label{fig:webml_tc145_dcg_comparison}
\end{figure}

In Test Configurations 4 and 5 every document becomes a match for every query and, as Figure \ref{fig:webml_tc145_dcg_comparison} suggests, the results worsen with respect to Test Configuration 1. This is due to the indexing of many terms (i.e., the names of the WebML metamodel concepts) that alter the contribution of the Tf-idf when computing the score of the documents. Evidence of this is that the results of Test Configuration 5 are better, because we decrease the importance of the names of the concepts by assigning them a payload of $0.1$ instead of $1.0$.
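The length-normalisation penalty mentioned above can be illustrated numerically: in Lucene's classic similarity the FieldNorm is (up to encoding precision) $1/\sqrt{\text{number of terms in the field}}$, so inflating the ``content'' field with concept-name terms shrinks the score of every match in it. The term counts below are illustrative, not measured on the dataset.

```python
# Sketch of the FieldNorm effect: Lucene's classic length norm is
# 1/sqrt(num_terms), so a longer "content" field yields smaller per-match
# scores. Term counts are illustrative.
import math

def field_norm(num_terms):
    return 1.0 / math.sqrt(num_terms)

plain = 40          # terms in the "content" field, Test Configuration 1
with_concepts = 90  # the same field once concept names are indexed (TC4)
print(round(field_norm(plain) / field_norm(with_concepts), 2))
# each matching term is worth 1.5x more in the shorter field
```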

Figure \ref{fig:webml_tc145_11point-int-av-pr_comparison} shows a comparison between the 11-point Interpolated Average Precision curves of Test Configuration 1 (no dereferencing), Test Configuration 4 (names of the metamodel concepts added to the index and to the query string) and Test Configuration 5 (names of the metamodel concepts added to the index with a payload of 0.1 and to the query string).

\begin{figure}[htbp]
%\centering
\subfigure[11-point Interpolated Average Precision curves of Test Configuration 1.]{
   \includegraphics[scale = 0.45] {./pictures/WebML-11pIntAvgPr_TC1.eps}
   \label{fig:webml_tc145_11point-int-av-pr_comparison_tc1}
 }
 \subfigure[11-point Interpolated Average Precision curves of Test Configuration 4.]{
   \includegraphics[scale = 0.45] {./pictures/WebML-11pIntAvgPr_TC4.eps}
   \label{fig:webml_tc145_11point-int-av-pr_comparison_tc4}
 }
 \subfigure[11-point Interpolated Average Precision curves of Test Configuration 5.]{
   \includegraphics[scale = 0.45] {./pictures/WebML-11pIntAvgPr_TC5.eps}
   \label{fig:webml_tc145_11point-int-av-pr_comparison_tc5}
 }
\caption{11-point Interpolated Average Precision curves of Test Configurations 1, 4 and 5.}
\label{fig:webml_tc145_11point-int-av-pr_comparison}
\end{figure}

Figure \ref{fig:webml_tc145_11point-int-av-pr_comparison} shows that recall in Test Configurations 4 and 5 increases with respect to Test Configuration 1, at the expense of precision. These results confirm the conclusions drawn from the DCG curves.

To conclude, the small recall improvements obtained with Test Configurations 4 and 5 do not justify the clear drop in precision.
