\subsection{Test Queries}
To test the UML experiments we first designed a set of ``meta-queries''. A meta-query is a query type characterized by the type of document it searches and by the information need it addresses. In the rest of this subsection we discuss each of these meta-queries.

Figure~\ref{fig:uml_test_queries_project_example} shows an example of a UML project model from the dataset of UML class diagrams. We will use this example to illustrate the meta-queries in the rest of this section.
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=0.5\textwidth]{./pictures/uml_test_queries_project_example.eps}
	\caption{Example of UML project model from the dataset of UML class diagrams.}
	\label{fig:uml_test_queries_project_example}
  \end{center}
\end{figure}

\newpage

In Table \ref{tab:uml_metaqueries}, we summarize the meta-queries and, for each of them, we show a description and an example referring to the UML model in Figure \ref{fig:uml_test_queries_project_example}. The ``Target'' column in Table \ref{tab:uml_metaqueries} indicates the type of document the query is intended to search (e.g., a project or a class); the second column is the identifier of the meta-query; the third column briefly describes the meta-query; the last column shows an example of a query instance.
\begin{table}
  \begin{tabular}{|l|l|p{1.5in}|p{1.8in}|}
  \rowcolor[gray]{.8} \bfseries{Target} & \bfseries{Id} & \bfseries{Description} & \bfseries{Example} \\ \hline \hline
  \multirow{2}{*}{Project} & 1 & It searches all the projects related to one specific topic & Query: ``BQL'' \\ \cline{2-4}
  & 2 & It searches all the projects related to one general topic & Query: ``query language'' \\ \hline
  ``Pattern'' & 3 & It searches a ``pattern'' by using as query string the terms belonging to different classes connected by some relation & Searched ``pattern'': \emph{Entry} [is-a] \emph{LocatedElement} \newline Query: ``name type location commentsBefore''\\ \hline
  \multirow{2}{*}{Class} & 4 & It searches a class by using as query string all (some) terms belonging to that class & Searched class: \emph{LocatedElement} \newline Query with all terms: ``LocatedElement location commentsBefore commentsAfter'' \newline Query with only some terms: ``location commentsBefore'' \\ \cline{2-4}
  & 5 & It searches a class by using as query string some terms belonging to that class plus some terms belonging to the project & Searched class: \emph{LocatedElement} \newline Query: ``location commentsBefore Predicate Expression'' \\ \hline
  \end{tabular}
  \caption{The meta-queries designed for testing the UML experiments. The ``Target'' column indicates which type of document the query searches (e.g., a project or a class); the second column is the identifier of the meta-query; the third column briefly describes the meta-query, namely the information need that can be satisfied by the queries that are instances of the meta-query; the last column shows an example of a query.}
  \label{tab:uml_metaqueries}
\end{table}

Among these meta-queries, we chose meta-queries 2 and 5 for testing the UML experiments. In the following sections we refer to them as ``MQ2'' (meta-query 2) and ``MQ5'' (meta-query 5). Each experiment was performed on ten query instances of the chosen meta-query, and the results were averaged. The complete lists of the ten query instances of meta-queries 2 and 5 are presented in Table \ref{tab:list_of_uml_query_mq2} and Table \ref{tab:list_of_uml_query_mq5}, respectively.

\small

\begin{table}[htbp]
  \begin{center}
    \begin{tabular}{|l p{3.5in}|}
      \hline
      1 & function source struct char int class member operator variable parameter \\
      2 & element node attribute name children \\
      3 & table column database \\
      4 & business process \\
      5 & task activity \\
      6 & message operation \\
      7 & node attribute \\
      8 & formula \\
      9 & color print size \\
      10 & transition \\
      \hline
      \end{tabular}
  \end{center}
  \caption{The ten query instances of meta-query 2 of the UML case study.}
  \label{tab:list_of_uml_query_mq2}
\end{table}

\begin{table}[htbp]
  \begin{center}
      \begin{tabular}{|l p{3.5in}|}
      \hline
      1 & jar manifest classpath build \\
      2 & node expression \\
      3 & tag name \\
      4 & event \\
      5 & link \\
      6 & shape \\
      7 & program type source \\
      8 & note \\
      9 & task \\
      10 & process activity status finishmode \\
      \hline
      \end{tabular}
  \end{center}
  \caption{The ten query instances of meta-query 5 of the UML case study.}
  \label{tab:list_of_uml_query_mq5}
\end{table}

\normalsize

\newpage

\subsection{Test Configurations and Results}
In this section we show the results of the tests of the UML experiments (A, B, C, D). We tested different experiment configurations in order to find the one giving the best results. The whole test set is grouped by meta-query. MQ2 has one test configuration and refers to UML Experiment A (Project Granularity, Flat Index), since both the granularity of this experiment and the target of the meta-query are ``project''. MQ5 has five test configurations and refers to UML Experiments B, C and D, since both the granularity of these experiments and the target of the meta-query are ``class''. Note that the results of Experiment A are not comparable with those of Experiments B (Concept Granularity, Multi-Field Index), C (Concept Granularity, Multi-Field Weighted Index) and D (Concept Granularity, Multi-Field Weighted Index, Graph Based), because they retrieve different types of documents (Experiment A retrieves projects, the others retrieve classes).

The test configurations differ in the following options: the values of the payloads assigned to the various UML metamodel concepts (only in Experiments C and D), the values of the penalties (only in Experiment D), and the FieldNorm, which can be enabled or disabled.

\emph{FieldNorm} is a factor that influences the score of a document retrieved by a query submitted to Solr. The shorter the matching field (measured in number of indexed terms), the greater the FieldNorm value, and therefore the greater the score of the matching document. FieldNorm can be omitted from individual fields in the Solr schema configuration. In most of the test configurations, FieldNorm is disabled in Experiment D in order to solve the following problem: when a class that is relevant to a given query imports many attributes from a neighboring class, it is wrongly penalized by FieldNorm.
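Lucene, on which Solr is built, computes this length normalization in its classic similarity as the inverse square root of the number of indexed terms; the exact formula depends on the Lucene version, so the following sketch only illustrates the qualitative behavior just described.

```python
import math

def field_norm(num_terms: int) -> float:
    """Classic Lucene-style length normalization: the shorter the
    field, the larger the norm and hence the score contribution."""
    return 1.0 / math.sqrt(num_terms)

# A compact class scores higher than one whose "content" field grew
# large, e.g. after importing many attributes from a neighboring class.
assert field_norm(4) > field_norm(64)
```

This is why a relevant class whose field was inflated by the import algorithm can be unfairly demoted when FieldNorm stays enabled.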

We use two sets of payload values. The first set (Table \ref{tab:uml_payloads_basic}) is determined by simple reasoning on the UML metamodel concepts. For example, a term that represents the name of a class should have greater relevance than a term that represents a simple attribute. In another test, we use a slightly different set of payload values (Table \ref{tab:uml_payloads_slightly_changed}). With this test we want to show that the results do not change significantly if the payload values are slightly changed, thus verifying the stability of the system.

We also use two sets of penalties. The first set (Table \ref{tab:uml_penalties_basic}) results from reasoning on the types of UML relationships. For example, let Mammal and HumanBeing be two classes connected by a generalization relationship, where Mammal is the parent class and HumanBeing is the child class. Since HumanBeing ``is-a'' Mammal, we want the attributes that the child class imports from the parent class to be only slightly penalized during the import algorithm; in this case, the penalty value is $0.9$. As for the payloads, we tested the UML case study application with another set of penalties, showing that the results do not change substantially when the penalties are slightly varied (Table \ref{tab:uml_penalties_slightly_changed}). Finally, we conducted a further test in which the penalty values are lowered by multiplying them by a factor of $0.1$ (Table \ref{tab:uml_penalties_zerodotone_factor}). This configuration prevents an imported attribute from obtaining a payload value greater than that of an attribute originally contained in a class.
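The interaction between payloads and penalties during the import algorithm can be sketched as follows (the function name is hypothetical; the values come from the first payload and penalty configurations, e.g. attribute payload $1.0$ and child-to-parent generalization penalty $0.9$):

```python
def imported_payload(original_payload: float, penalty: float) -> float:
    """A term imported from a neighboring class carries that class's
    payload multiplied by the penalty of the connecting relationship."""
    return original_payload * penalty

# HumanBeing importing an attribute of Mammal through "is-a"
# (generalization, child to parent): only slightly penalized.
assert imported_payload(1.0, 0.9) == 0.9
# A 1-N association penalizes imported terms more heavily.
assert imported_payload(1.0, 0.5) == 0.5
```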

The different test configurations listed below show various combinations of the configuration options discussed above:
\begin{itemize}
 \item MQ2: Experiment A (Project Granularity, Flat Index)
   \subitem \emph{Test Configuration 1}: basic test with FieldNorm enabled. 

 \item MQ5: Experiments B (Concept Granularity, Multi-Field Index), C (Concept Granularity, Multi-Field Weighted Index) and D (Concept Granularity, Multi-Field Weighted Index, Graph Based) 
  \subitem \emph{Test Configuration 1}: FieldNorm disabled only in Experiment D; payloads as in Table \ref{tab:uml_payloads_basic}; penalties as in Table \ref{tab:uml_penalties_basic}.
  \subitem \emph{Test Configuration 2}: FieldNorm enabled in all experiments; payloads as in Table \ref{tab:uml_payloads_basic}; penalties as in Table \ref{tab:uml_penalties_basic}.
  \subitem \emph{Test Configuration 3}: FieldNorm disabled only in Experiment D; payloads as in Table \ref{tab:uml_payloads_slightly_changed}; penalties as in Table \ref{tab:uml_penalties_basic}.
  \subitem \emph{Test Configuration 4}: FieldNorm disabled only in Experiment D; payloads as in Table \ref{tab:uml_payloads_basic}; penalties as in Table \ref{tab:uml_penalties_slightly_changed}.
  \subitem \emph{Test Configuration 5}: FieldNorm disabled only in Experiment D; payloads as in Table \ref{tab:uml_payloads_basic}; penalties as in Table \ref{tab:uml_penalties_zerodotone_factor}.
\end{itemize}

\begin{table}
  \begin{tabular}{|c|c|}
  \multicolumn{2}{c}{PAYLOAD VALUES (FIRST CONFIGURATION)} \\ \hline
  \rowcolor[gray]{.8} \bfseries{Concept} & \bfseries{Payload Value} \\ \hline \hline
  Attribute & 1.0 \\ \hline
  Composition 1-1 & 1.5 \\ \hline
  Composition 1-N & 1.3 \\ \hline 
  Association 1-1 & 1.6 \\ \hline
  Association 1-N & 1.3 \\ \hline
  Class & 1.7 \\ \hline
  Project & 1.0 \\ \hline
  \end{tabular}
  \caption{The first configuration of payload values. This configuration is determined by simple reasoning on the UML metamodel concepts.}
  \label{tab:uml_payloads_basic}
\end{table}

\begin{table}
  \begin{tabular}{|c|c|}
  \multicolumn{2}{c}{PAYLOAD VALUES (SECOND CONFIGURATION)} \\ \hline
  \rowcolor[gray]{.8} \bfseries{Concept} & \bfseries{Payload Value} \\ \hline \hline
  Attribute & 0.9 \\ \hline
  Composition 1-1 & 1.4 \\ \hline
  Composition 1-N & 1.2 \\ \hline 
  Association 1-1 & 1.5 \\ \hline
  Association 1-N & 1.2 \\ \hline
  Class & 1.5 \\ \hline
  Project & 0.9 \\ \hline
  \end{tabular}
  \caption{The second configuration of payload values. This configuration is determined by slightly changing the values of the first one.}
  \label{tab:uml_payloads_slightly_changed}
\end{table}

\begin{table}
  \begin{tabular}{|p{3.5in}|c|}
  \multicolumn{2}{c}{PENALTY VALUES (FIRST CONFIGURATION)} \\ \hline
  \rowcolor[gray]{.8} \bfseries{Concept} & \bfseries{Penalty Value} \\ \hline \hline
  Composition (from composite class to component class) 1-1 & 0.6 \\ \hline
  Composition (from composite class to component class) 1-N & 0.5 \\ \hline
  Composition (from component class to composite class) 1-1 & 0.6 \\ \hline 
  Composition (from component class to composite class) 1-N & 0.5 \\ \hline
  Association 1-1 & 0.6 \\ \hline
  Association 1-N & 0.5 \\ \hline
  Generalization (from parent class to child class) & 0.75 \\ \hline
  Generalization (from child class to parent class) & 0.9 \\ \hline
  \end{tabular}
  \caption{The first configuration of penalty values. This configuration is determined by simple reasoning on the UML relationship types.}
  \label{tab:uml_penalties_basic}
\end{table}

\begin{table}
  \begin{tabular}{|p{3.5in}|c|}
  \multicolumn{2}{c}{PENALTY VALUES (SECOND CONFIGURATION)} \\ \hline
  \rowcolor[gray]{.8} \bfseries{Concept} & \bfseries{Penalty Value} \\ \hline \hline
  Composition (from composite class to component class) 1-1 & 0.5 \\ \hline
  Composition (from composite class to component class) 1-N & 0.4 \\ \hline
  Composition (from component class to composite class) 1-1 & 0.5 \\ \hline 
  Composition (from component class to composite class) 1-N & 0.4 \\ \hline
  Association 1-1 & 0.5 \\ \hline
  Association 1-N & 0.4 \\ \hline
  Generalization (from parent class to child class) & 0.5 \\ \hline
  Generalization (from child class to parent class) & 0.7 \\ \hline
  \end{tabular}
  \caption{The second configuration of penalty values. This configuration is determined starting from the first one by slightly changing penalty values.}
  \label{tab:uml_penalties_slightly_changed}
\end{table}

\begin{table}
  \begin{tabular}{|p{3.5in}|c|}
  \multicolumn{2}{c}{PENALTY VALUES (THIRD CONFIGURATION)} \\ \hline
  \rowcolor[gray]{.8} \bfseries{Concept} & \bfseries{Penalty Value} \\ \hline \hline
  Composition (from composite class to component class) 1-1 & 0.06 \\ \hline
  Composition (from composite class to component class) 1-N & 0.05 \\ \hline
  Composition (from component class to composite class) 1-1 & 0.06 \\ \hline 
  Composition (from component class to composite class) 1-N & 0.05 \\ \hline
  Association 1-1 & 0.06 \\ \hline
  Association 1-N & 0.05 \\ \hline
  Generalization (from parent class to child class) & 0.075 \\ \hline
  Generalization (from child class to parent class) & 0.09 \\ \hline
  \end{tabular}
  \caption{The third configuration of penalty values. This configuration is determined by multiplying the values of the first one by a factor of 0.1.}
  \label{tab:uml_penalties_zerodotone_factor}
\end{table}

\newpage

\noindent In the following paragraphs we comment on the results of each test configuration.

\paragraph{MQ2: Test Configuration 1}

Figure~\ref{fig:uml_test_mq2_dcg} shows the plot of the DCG and iDCG curves of Experiment A.
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/UML-MQ2_DCG.eps}
	\caption{Plot of the DCG and iDCG curves of Experiment A (Project Granularity, Flat Index), Test Configuration 1 (MQ2).}
	\label{fig:uml_test_mq2_dcg}
  \end{center}
\end{figure}
It can be noticed that the DCG and iDCG curves are very close to each other, especially in the first three positions. In particular, the results show that Experiment A (Project Granularity, Flat Index) always retrieves the most relevant document at the first position.
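For reference, the plotted curves follow the standard discounted cumulative gain formulation, with graded relevance $rel_i$ at rank $i$ and the usual base-2 logarithmic discount:
\[
\mathit{DCG}@k = \sum_{i=1}^{k} \frac{rel_i}{\log_2(i+1)},
\]
while $\mathit{iDCG}@k$ is the DCG of the ideal ranking, i.e., of the retrieved documents sorted by decreasing relevance.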

Figure~\ref{fig:uml_test_mq2_11pInt} depicts the 11-point Interpolated Average Precision of Experiment A (Project Granularity, Flat Index).
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/UML-MQ2_11pIntAvgPr.eps}
	\caption{Plot of the 11-point Interpolated Average Precision of Experiment A (Project Granularity, Flat Index), Test Configuration 1 (MQ2).}
	\label{fig:uml_test_mq2_11pInt}
  \end{center}
\end{figure}
The results show that the precision is always 1 up to the recall level of 0.4. After this level, there is first a small decrease ($r = 0.5$) and then a more drastic decrease of the curve ($r = 0.82$). This is expected behavior: as all the relevant documents are retrieved, an increasing number of non-relevant documents are retrieved along with them. The results also show that Experiment A retrieves all the relevant documents within the first ten retrieved documents.
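Under the usual definition, the interpolated precision at each of the eleven standard recall levels $r \in \{0.0, 0.1, \ldots, 1.0\}$ is the maximum precision observed at any recall greater than or equal to $r$:
\[
P_{\mathit{interp}}(r) = \max_{r' \geq r} P(r'),
\]
averaged here over the ten query instances.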

Figure~\ref{fig:uml_test_mq2_p_at_k} presents the Precision at k curve of Experiment A (Project Granularity, Flat Index).
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/UML-MQ2_P@k.eps}
	\caption{Plot of the Precision at k of the Experiment A (Project Granularity, Flat Index), Test Configuration 1 (MQ2).}
	\label{fig:uml_test_mq2_p_at_k}
  \end{center}
\end{figure}

The results suggest that all the retrieved documents are relevant up to the third position. Between the third and the fifth position there is a small dip in the curve, followed by a slight improvement. This confirms the results previously shown with the DCG and iDCG curves. Nevertheless, the precision remains high (over 0.8) at all levels of k.

The MAP results for this test configuration are given in Table \ref{tab:uml_MAP}, while the MRR results are presented in Table \ref{tab:uml_MRR}. MRR shows that Experiment A always retrieves the first relevant document at the first ranking position.
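Both measures follow their usual definitions over the set $Q$ of the ten query instances: MAP averages, over the queries, the mean of the precisions measured at the rank of each relevant retrieved document, while MRR averages the reciprocal rank of the first relevant document:
\[
\mathit{MAP} = \frac{1}{|Q|} \sum_{q \in Q} \frac{1}{|R_q|} \sum_{d \in R_q} P@\mathit{rank}_q(d),
\qquad
\mathit{MRR} = \frac{1}{|Q|} \sum_{q \in Q} \frac{1}{\mathit{rank}^{\mathit{first}}_q},
\]
where $R_q$ is the set of relevant documents retrieved for query $q$ and $\mathit{rank}^{\mathit{first}}_q$ is the position of the first relevant document.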

\paragraph{MQ5: Comparison between Test Configuration 1 and Test Configuration 2}

\noindent Figure~\ref{fig:uml_test_mq5_tc1-tc2_comparison_DCG} compares the DCG and iDCG curves of Experiments B (Concept Granularity, Multi-Field Index), C (Concept Granularity, Multi-Field Weighted Index) and D (Concept Granularity, Multi-Field Weighted Index, Graph Based) under the first test configuration (Figure~\ref{fig:uml_test_mq5_tc1-tc2_comparison_DCG_tc1}) and under the second test configuration (Figure~\ref{fig:uml_test_mq5_tc1-tc2_comparison_DCG_tc2}).

\begin{figure}[htbp]

  \begin{center}
    \subfigure[Test Configuration 1.]{
    \label{fig:uml_test_mq5_tc1-tc2_comparison_DCG_tc1}
    \includegraphics[scale = 0.5] 
    {./pictures/UML-MQ5_TC1_DCG.eps}  
    }

    \subfigure[Test Configuration 2.]{
    \label{fig:uml_test_mq5_tc1-tc2_comparison_DCG_tc2}
    \includegraphics[scale = 0.5] 
    {./pictures/UML-MQ5_TC2_DCG.eps}
    }

  \end{center}

\caption{Comparison between Test Configurations 1 and 2: DCG and iDCG curves of Experiments B (Concept Granularity, Multi-Field Index), C (Concept Granularity, Multi-Field Weighted Index) and D (Concept Granularity, Multi-Field Weighted Index, Graph Based) for both test configurations.}
\label{fig:uml_test_mq5_tc1-tc2_comparison_DCG}
\end{figure}

The results of Experiments B and C are almost the same in Test Configurations 1 and 2; only the last part of the curves differs. This behavior is due to the way Solr handles ties: retrieved documents with the same score are not returned in a fixed order (e.g., alphabetical). However, this issue occurs only in the lower part of the ranking. Figure \ref{fig:uml_test_mq5_tc1-tc2_comparison_DCG} shows that Experiment C has, at each rank position $k$, a slightly better DCG value than Experiment B, showing that the use of payloads improves the results, though not significantly. Both Experiments B and C are close to the ideal curve up to the second rank position.

The results presented in Figure \ref{fig:uml_test_mq5_tc1-tc2_comparison_DCG} are also intended to show the effects of FieldNorm on Experiment D. If FieldNorm is enabled (Test Configuration 2), a class that is relevant to a query and imports many elements from a neighboring class is unfairly penalized. Therefore, FieldNorm should always be disabled, and Test Configuration 1 should be the best choice. However, as Figure \ref{fig:uml_test_mq5_tc1-tc2_comparison_DCG} shows, the results of Experiment D in Test Configuration 2 are much better than in Test Configuration 1. The reason is the following. Besides retrieving the classes relevant to a query, Experiment D also retrieves their neighboring classes, which are not necessarily relevant to that query. These neighboring classes appear among the results because they have imported terms that are part of the query string. Since their ``content'' field is larger due to the imported terms, these neighboring classes are penalized by the FieldNorm while the truly relevant classes are ranked higher, so the results are better. In conclusion, FieldNorm helps when it penalizes classes that are retrieved only because they are neighbors of relevant classes, but it is misleading when it penalizes relevant classes whose ``content'' field grew larger through the import algorithm.

Figure~\ref{fig:uml_test_mq5_tc2_p_at_k} shows the plot of Precision at k of Test Configuration 2.
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/UML-MQ5_TC2_P@k.eps}
	\caption{Plot of the Precision at k of Experiments B (Concept Granularity, Multi-Field Index), C (Concept Granularity, Multi-Field Weighted Index) and D (Concept Granularity, Multi-Field Weighted Index, Graph Based), Test Configuration 2 (MQ5).}
	\label{fig:uml_test_mq5_tc2_p_at_k}
  \end{center}
\end{figure}

It can be noticed from the results that the curves of Experiments B and C have almost the same shape, while the curve of Experiment D has a peculiar shape. At the first $k$ ranking positions the precision is high and constant, which means that the first retrieved classes are relevant. Then the precision decreases for several rank positions because of the neighboring classes of the previously retrieved classes. At approximately $k = 7$, the precision increases again as other relevant documents are retrieved, and then decreases again. This trend is typical of Experiment D and would appear repeatedly in the curve if more than ten rank positions were shown.

MAP (Table \ref{tab:uml_MAP}) confirms that the best performing experiment is Experiment C and that Experiment D performs better in Test Configuration 2. MRR (Table \ref{tab:uml_MRR}) shows that Experiments B and C always retrieve a relevant document at the first position.

\paragraph{MQ5: Test Configuration 3}

Figure~\ref{fig:uml_test_mq5_tc3_dcg} shows the DCG curves of Experiments C and D for this test configuration.
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/UML-MQ5_TC3_DCG.eps}
	\caption{DCG curves of Experiments C (Concept Granularity, Multi-Field Weighted Index) and D (Concept Granularity, Multi-Field Weighted Index, Graph Based), Test Configuration 3 (MQ5).}
	\label{fig:uml_test_mq5_tc3_dcg}
  \end{center}
\end{figure}

The curves of Experiments C and D with the slightly changed payload configuration (Figure \ref{fig:uml_test_mq5_tc3_dcg}) can be compared with the curves of Experiments C and D under Test Configuration 1 (Figure \ref{fig:uml_test_mq5_tc1-tc2_comparison_DCG_tc1}).

These results confirm the low sensitivity of Experiments C and D to a slight change of the payload values. The slightly changed payload configuration causes only a slight improvement in the curve of Experiment D, but that is not the purpose of this test configuration.

We point out that we did not perform any training of either the weights or the payloads.

\paragraph{MQ5: Test Configuration 4}

Figure~\ref{fig:uml_test_mq5_tc4_dcg} depicts the DCG curve of Experiment D for this test configuration.
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/UML-MQ5_TC4_DCG.eps}
	\caption{DCG curve of Experiment D (Concept Granularity, Multi-Field Weighted Index, Graph Based), Test Configuration 4 (MQ5).}
	\label{fig:uml_test_mq5_tc4_dcg}
  \end{center}
\end{figure}

The curve of Experiment D with the slightly changed penalty configuration (Figure \ref{fig:uml_test_mq5_tc4_dcg}) can be compared with the curve of Experiment D under Test Configuration 1 (Figure \ref{fig:uml_test_mq5_tc1-tc2_comparison_DCG_tc1}).

Experiment D shows very low variability in the results with respect to a small change of the penalty values.

\paragraph{MQ5: Test Configuration 5}

As Figure~\ref{fig:uml_test_mq5_tc5_dcg} suggests, Experiment D improves its results when the penalties are multiplied by a factor of $0.1$, which accordingly lowers the payload values of all the imported attributes.
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/UML-MQ5_TC5_DCG.eps}
	\caption{DCG curve of Experiment D (Concept Granularity, Multi-Field Weighted Index, Graph Based), Test Configuration 5 (MQ5).}
	\label{fig:uml_test_mq5_tc5_dcg}
  \end{center}
\end{figure}

This test configuration stresses that it is fundamental to use a configuration of both payloads and penalties that prevents the imported attributes of a class from having a greater payload value than the attributes originally contained in that class.
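This constraint can be checked mechanically against the two penalty configurations (helper name hypothetical; payload and penalty values taken from the tables above): with the basic penalties, a class-name term (payload $1.7$) imported through a child-to-parent generalization (penalty $0.9$) obtains payload $1.53$ and outweighs a local attribute (payload $1.0$); with the $0.1$-scaled penalties it cannot.

```python
# Payloads from the first configuration (subset of the table values).
payloads = {"attribute": 1.0, "class": 1.7, "association_1_1": 1.6}

def constraint_holds(penalties, local_attribute_payload=1.0):
    """True if no imported term can outweigh a locally declared
    attribute: max payload times max penalty stays below it."""
    return max(payloads.values()) * max(penalties) < local_attribute_payload

basic = [0.6, 0.5, 0.75, 0.9]       # first penalty configuration
scaled = [p * 0.1 for p in basic]   # third configuration (scaled by 0.1)
assert not constraint_holds(basic)  # 1.7 * 0.9  = 1.53  > 1.0
assert constraint_holds(scaled)     # 1.7 * 0.09 = 0.153 < 1.0
```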

\begin{table}
  \begin{tabular}{c|c|c|c|c|}
  \multicolumn{5}{c}{Mean Average Precision (MAP)} \\
  \rowcolor[gray]{.8} & \bfseries{Exp A} & \bfseries{Exp B} & \bfseries{Exp C} & \bfseries{Exp D} \\ \hline \hline
  MQ2 - Test Configuration 1 &	0.98 	& -  	   & -		& - \\ \hline
  MQ5 - Test Configuration 1 &	-		& 0.92 & 0.95 & 0.71 \\ \hline
  MQ5 - Test Configuration 2 & 	-		& 0.92 & 0.95 & 0.84 \\ \hline
  MQ5 - Test Configuration 3 & 	-		& 0.92 & 0.94 & 0.66 \\ \hline
  MQ5 - Test Configuration 4 & 	-		& 0.92 & 0.95 & 0.69 \\ \hline
  MQ5 - Test Configuration 5 & 	-		& 0.92 & 0.95 & 0.69 \\ \hline
  \end{tabular}
  \caption{MAP results of the test configurations of the UML model-based search engine. MQ2 addresses only Experiment A (Project Granularity, Flat Index); MQ5 addresses Experiments B (Concept Granularity, Multi-Field Index), C (Concept Granularity, Multi-Field Weighted Index) and D (Concept Granularity, Multi-Field Weighted Index, Graph Based).}
  \label{tab:uml_MAP}
\end{table}

\begin{table}
  \begin{tabular}{c|c|c|c|c|}
  \multicolumn{5}{c}{Mean Reciprocal Rank (MRR)} \\
  \rowcolor[gray]{.8} & \bfseries{Exp A} & \bfseries{Exp B} & \bfseries{Exp C} & \bfseries{Exp D} \\ \hline \hline
  MQ2 - Test Configuration 1 &	1.00 	& -  	   & -		& - \\ \hline
  MQ5 - Test Configuration 1 &	-		& 1.00 & 1.00 & 0.76 \\ \hline
  MQ5 - Test Configuration 2 & 	-		& 1.00 & 1.00 & 0.95 \\ \hline
  MQ5 - Test Configuration 3 & 	-		& 1.00 & 1.00 & 0.76 \\ \hline
  MQ5 - Test Configuration 4 & 	-		& 1.00 & 1.00 & 0.81 \\ \hline
  MQ5 - Test Configuration 5 & 	-		& 1.00 & 1.00 & 0.81 \\ \hline
  \end{tabular}
  \caption{MRR results of the test configurations of the UML model-based search engine. MQ2 addresses only Experiment A (Project Granularity, Flat Index); MQ5 addresses Experiments B (Concept Granularity, Multi-Field Index), C (Concept Granularity, Multi-Field Weighted Index) and D (Concept Granularity, Multi-Field Weighted Index, Graph Based).}
  \label{tab:uml_MRR}
\end{table}

\newpage
