When designing a model search engine, several combinations of the design dimensions discussed in Section \ref{design-dimensions} can be adopted. In this section we present the configurations of those dimensions that we chose for our experiments.

\noindent We tested the following scenarios:

\begin{itemize}
 \item \emph{Experiment A}: in this experiment the segmentation granularity is the entire project; the index structure is almost completely flat (besides the field containing all the elements of the project, there is only a field with the project name, kept for visualization purposes). In the remainder of this work, Experiment A is labeled \emph{Project Granularity, Flat Index}.

 \item \emph{Experiment B}: this experiment adopts a finer segmentation granularity, corresponding to a metamodel concept; the index structure is multi-field and the terms are stored without any weighting. This experiment provides a baseline against which possible improvements of the strategy (such as the introduction of weights) will be evaluated. In the remainder of this work, Experiment B is labeled \emph{Concept Granularity, Multi-Field Index}.

 \item \emph{Experiment C}: this experiment adopts the same segmentation granularity as the previous one; the index structure is multi-field but, differently from the previous experiment, the index terms are weighted according to the metamodel concept to which they belong. The idea is to assign a different degree of relevance to the different metamodel concepts depending on their importance. In this way, the system ranks higher those documents in which the matched terms belong to more relevant concepts. In the remainder of this work, Experiment C is labeled \emph{Concept Granularity, Multi-Field Weighted Index}.

 \item \emph{Experiment D}: in this experiment the segmentation granularity is the same as in the previous two experiments; the index structure is still multi-field and the terms are still weighted according to their metamodel concept; the novelty of this experiment lies in an algorithm run before the indexing phase. This algorithm first creates a graph representation of the considered model, using as nodes the elements corresponding to the selected segmentation granularity and as edges their relationships. Then, each element is enriched with information harvested from its neighbours. The idea is to allow the system to retrieve not only the elements that match a search, but also their neighbours. This could let some usually overlooked elements gain importance in the ranking list, thus surfacing solutions that would otherwise be missed. In the remainder of this work, Experiment D is labeled \emph{Concept Granularity, Multi-Field Weighted Index, Graph Based}.
\end{itemize}
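To make the contrast between Experiments A and B concrete, the following sketch shows how the same model fragment could be turned into an index document under the two configurations. All names (functions, field names, and the example terms) are illustrative assumptions, not taken from the actual implementation.

```python
def flat_document(project_name, elements):
    """Experiment A: a single field with every term of the whole project,
    plus the project name kept only for visualization purposes."""
    return {
        "project_name": project_name,   # display only, not searched
        "content": " ".join(elements),  # all terms, undistinguished
    }

def multi_field_document(concept_terms):
    """Experiment B: one field per metamodel concept, no weighting.
    concept_terms maps a metamodel concept to the terms found under it."""
    return {concept: " ".join(terms)
            for concept, terms in concept_terms.items()}

doc_a = flat_document("Shop", ["Order", "date", "total", "pay"])
doc_b = multi_field_document({"class": ["Order"],
                              "attribute": ["date", "total"],
                              "operation": ["pay"]})
```

In the flat document every term is searched in the same way, whereas the multi-field document preserves the metamodel concept of each term, which is what later makes per-concept weighting possible.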
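The weighting idea of Experiment C can be sketched as follows: a query term matched in a more relevant concept (e.g. a class name) contributes more to the score than the same term matched in a less relevant one (e.g. an attribute). The weight values and the scoring function below are purely illustrative assumptions, not the ones used in the thesis.

```python
# Illustrative per-concept boosts; actual values would be tuned empirically.
CONCEPT_WEIGHTS = {"class": 3.0, "operation": 2.0, "attribute": 1.0}

def weighted_score(query_terms, doc):
    """doc maps concept -> list of terms; each match contributes
    the boost of the concept it belongs to."""
    score = 0.0
    for concept, terms in doc.items():
        boost = CONCEPT_WEIGHTS.get(concept, 1.0)
        score += boost * sum(1 for t in terms if t.lower() in query_terms)
    return score

doc = {"class": ["Order"], "attribute": ["order", "date"]}
# "order" matched once as a class name (3.0) and once as an attribute (1.0)
print(weighted_score({"order"}, doc))  # 4.0
```

With a flat, unweighted index both matches would count the same; here the class-level match dominates the ranking, reflecting the relevance assigned to that concept.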
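The pre-indexing step of Experiment D can be sketched as a neighbour-enrichment pass over the model graph: each element's index terms are extended with the terms of its directly connected elements. The element names, the adjacency representation, and the choice of a one-hop, undirected traversal are assumptions made for illustration.

```python
def enrich_with_neighbours(terms, edges):
    """terms: element -> its own terms; edges: (element, element) pairs
    representing relationships. Returns element -> enriched term list."""
    neighbours = {e: set() for e in terms}
    for a, b in edges:
        # relationships are traversed in both directions here (an assumption)
        neighbours[a].add(b)
        neighbours[b].add(a)
    return {e: sorted(set(own) | {t for n in neighbours[e] for t in terms[n]})
            for e, own in terms.items()}

terms = {"Order": ["order"], "Customer": ["customer"], "Invoice": ["invoice"]}
edges = [("Order", "Customer"), ("Order", "Invoice")]
enriched = enrich_with_neighbours(terms, edges)
print(enriched["Order"])  # ['customer', 'invoice', 'order']
```

After enrichment, a query matching only a neighbour's terms still retrieves the element itself, which is how otherwise overlooked elements can enter the ranking list.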

\paragraph*{Case Studies}
\noindent We implemented two case studies in order to test the above experiments. The first case study deals with a repository of UML class diagrams; in the second, the repository consists of WebML projects.

We implemented experiments B and C for both the UML and the WebML case. Experiment A was not developed for the WebML case because it has already been studied in \cite{Bozzon:2010:SRW:1884110.1884112}. We tested Experiment D only on the UML repository because the chosen segmentation granularity of the WebML case, together with the structure of WebML projects, makes this experiment unsuitable for that scenario. The details of the implementation of the two case studies are reported in Chapter \ref{chapter6}.
