Our abstract solution for retrieving software artifacts comprises two main information flows, as shown in Figure~\ref{fig:abstract_solution_architecture}. The upper part of the figure depicts the \emph{Content Processing Flow}, and the lower part the \emph{Query Processing Flow}.

\begin{figure}[ht]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/abstract_solution_architecture}
	\caption{Our abstract solution for a general search engine model repository system. The upper part shows the \emph{Content Processing Flow}, the lower part the \emph{Query Processing Flow}.}
	\label{fig:abstract_solution_architecture}
  \end{center}
\end{figure}

\indent The two information flows are handled by dedicated pipelines: the \emph{Content Processing Pipeline}, triggered by the crawler subsystem, and the \emph{Query Processing Pipeline}, triggered by the user's query input. The content processing flow gathers, processes, and transforms significant information from the projects in order to make it ready for indexing and searching. The query flow accepts, processes, and transforms the user query in the same way as the content information; it then performs the search operations and returns the results to the user interface.

The \emph{Content Processing Pipeline} takes as its data source a collection of models conforming to a metamodel; the metamodel can be of any type. The very first phase translates the original models into the XMI format. This translation is not performed by the system but must be provided by the user. XMI (XML Metadata Interchange) is an Object Management Group (OMG) standard for exchanging metadata via XML; it can represent any metadata whose metamodel can be expressed in the Meta-Object Facility (MOF). Since XMI is a standard way to represent project models, this translation eases the automatic data extraction performed in the following phases.
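As an illustration, an XMI representation of a project could look like the following fragment. The element and attribute names (\texttt{Project}, \texttt{Class}, \texttt{author}, and so on) are purely hypothetical; the actual vocabulary depends entirely on the user's metamodel.

```xml
<!-- Illustrative only: tags and attributes depend on the user's metamodel -->
<xmi:XMI xmi:version="2.1" xmlns:xmi="http://schema.omg.org/spec/XMI/2.1">
  <Project name="ShoppingCart" author="J. Doe" created="2012-03-01">
    <Class name="Order">
      <Attribute name="total" type="Float"/>
      <Operation name="checkout"/>
    </Class>
  </Project>
</xmi:XMI>
```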

First, the \emph{crawler} ingests the structure of the entire project. This is important in order to propagate as much information about the project structure as possible, so that it can be reused in the following operations: the entire XMI code expressing the structure of the whole project is carried forward through the operation chain. This also keeps the indexing and searching approach model-driven.

Next, the Content Processing Pipeline proper begins. It consists of the following operations: Project Analysis, Segmentation, Segment Analysis, and Indexing. Our abstract solution requires the system to provide a set of standard routines for each of these operations; the routines are then instantiated according to the metamodel. In general, these routines mine useful elements from the XMI project representation. The user marks these elements in the metamodel with proper tags so that the routines can be generated automatically. The routines also save the extracted elements into suitable data structures, such as records of the SMILA framework, which are passed as input to each successive phase of the pipeline.

The \emph{Project Analysis} extracts generic metadata referring to the entire project, such as the authors and the creation date. Since the user has previously tagged these metadata in the metamodel, they can be extracted from the models and saved in the data structures.

\emph{Segmentation} splits the initial projects into smaller units that are more manageable for the successive operations. As in the previous phase, the user has tagged the metamodel element representing the segmentation unit. A proper routine extracts from the XMI project representation the code fragments referring to the considered units and saves them in the data structures of the segments.
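A segmentation routine could be sketched as below, assuming (purely for illustration) that the user tagged a \texttt{Class} element as the segmentation unit; each resulting segment carries its own XMI fragment forward, in line with the model-driven approach.

```python
# Sketch of a segmentation routine; the unit tag 'Class' is an assumption
# standing in for whatever metamodel element the user tagged as a unit.
import xml.etree.ElementTree as ET

SEGMENT_TAG = "Class"

def segment(xmi: str) -> list:
    """Return one record per segmentation unit, carrying its XMI fragment."""
    root = ET.fromstring(xmi)
    segments = []
    for node in root.iter(SEGMENT_TAG):
        segments.append({
            "name": node.get("name"),
            # Keep the raw XMI fragment so later phases can reuse it.
            "fragment": ET.tostring(node, encoding="unicode"),
        })
    return segments

sample = """<xmi><Project name="Shop">
  <Class name="Order"/><Class name="Customer"/>
</Project></xmi>"""

print([s["name"] for s in segment(sample)])  # ['Order', 'Customer']
```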

The \emph{Segment Analysis} mines from the previously obtained segments the elements that will be indexed later. These elements too have been tagged by the user with suitable tags in the metamodel. The Segment Analysis then performs text processing on the extracted elements. The user can configure the type of analysis to be performed (e.g., tokenization, normalization, stemming) for each type of element. The analysis generally consists of a sequence of \emph{analyzers} through which the words of the project model are transformed.
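The analyzer sequence can be pictured as a simple function chain. The sketch below is a toy version: the tokenizer splits camel-case identifiers, and the ``stemmer'' merely strips a plural \texttt{s}; a real deployment would plug in configurable analyzers (e.g.\ a proper Porter stemmer) per element type.

```python
# Toy analyzer chain: tokenize -> lowercase -> naive suffix stripping.
# Each stage is illustrative; real analyzers would be configurable.
import re

def tokenize(text):
    # Split camelCase identifiers and whitespace into separate words.
    return re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", text)

def normalize(tokens):
    return [t.lower() for t in tokens]

def stem(tokens):
    # Extremely naive stemming, for illustration only.
    return [t[:-1] if t.endswith("s") and len(t) > 3 else t for t in tokens]

def analyze(text, analyzers=(tokenize, normalize, stem)):
    result = text
    for analyzer in analyzers:
        result = analyzer(result)
    return result

print(analyze("OrderItems totalPrice"))  # ['order', 'item', 'total', 'price']
```

Because the chain is just a tuple of functions, the per-element configuration mentioned above reduces to choosing a different tuple of analyzers for each tagged element type.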

The last phase is \emph{Indexing}, which stores the data structures obtained up to this point. The storage schema is defined beforehand, taking into account the extracted information.
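Conceptually, the index can be seen as an inverted index keyed by field and term, with the schema (the set of fields) fixed in advance. The following minimal sketch is only a mental model of that storage; the concrete system would rely on the storage facilities of the underlying framework.

```python
# Minimal inverted index: (field, term) -> set of segment ids.
# A mental model only; the real store is defined by the chosen framework.
from collections import defaultdict

class Index:
    def __init__(self):
        self.postings = defaultdict(set)  # (field, term) -> doc ids
        self.docs = {}                    # doc id -> stored record

    def add(self, doc_id, record):
        self.docs[doc_id] = record
        for field, terms in record.items():
            for term in terms:
                self.postings[(field, term)].add(doc_id)

    def lookup(self, field, term):
        return self.postings.get((field, term), set())

idx = Index()
idx.add("seg1", {"name": ["order"], "attribute": ["total", "price"]})
idx.add("seg2", {"name": ["customer"], "attribute": ["name"]})
print(sorted(idx.lookup("attribute", "total")))  # ['seg1']
```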

The \emph{Query Processing Flow} deals with the ingestion of the query from the user interface, its processing and analysis, the index searching, and the presentation of the results back to the user interface. The query can be submitted in keyword-based form (textual queries) or in content-based form (the query is a model fragment). Here we expect the query to be submitted in content-based form, in the same XMI format as the project models; this can also be done graphically. If necessary, the original representation of the query undergoes the same translation performed on the project models.

The first operation of the flow mines, from the content-based query expressed in XMI, the keywords that will be part of the query string. It strips all the markup code and keeps only the names of the project elements given in that particular query instance. The metamodel is needed to extract the keywords from the user query.
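Under the same illustrative assumption used above, namely that name-bearing elements carry a \texttt{name} attribute identified via the metamodel tags, keyword extraction amounts to walking the query fragment and collecting those names.

```python
# Sketch of keyword extraction from a content-based query: drop the XMI
# markup and keep only element names. The 'name' attribute is an assumption
# about how the metamodel tags name-bearing elements.
import xml.etree.ElementTree as ET

def query_keywords(xmi_query: str) -> list:
    root = ET.fromstring(xmi_query)
    return [node.get("name") for node in root.iter() if node.get("name")]

q = '<xmi><Class name="Order"><Attribute name="total"/></Class></xmi>'
print(query_keywords(q))  # ['Order', 'total']
```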

The \emph{Analysis} applies the same processing techniques used for the content information. Here, the metamodel is needed to match a given model element to its sequence of analyzers.

The \emph{Searching} takes the query string and performs the actual search against the index in order to find the \emph{documents} relevant to the query. The documents are the indexed representations of the projects, containing the same information extracted during the content processing flow. The matching between the query string and the indexed documents is performed by adopting a specific similarity measure that computes the distance between the query and the documents. The results are returned as a ranked list of relevant documents, which is then presented to the user.
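One common choice of similarity measure, used here only as an example since the concrete measure is a design parameter of the system, is the cosine similarity between term-frequency vectors of the query and of each document.

```python
# Example similarity measure: cosine similarity over term frequencies.
# The concrete measure is a design parameter; this is one common choice.
import math
from collections import Counter

def cosine(query_terms, doc_terms):
    q, d = Counter(query_terms), Counter(doc_terms)
    dot = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

docs = {
    "seg1": ["order", "total", "price"],
    "seg2": ["customer", "name"],
}
query = ["order", "price"]
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # ['seg1', 'seg2']
```

Ranking the documents by decreasing similarity yields exactly the ranked result list that is handed back to the user interface.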

% TODO: add a discussion of the design parameters (granularity, etc.)
