The first implementation experience deals with the indexing and searching of UML projects. As explained before (Section \ref{uml}), the dataset of the UML case consists of UML class diagrams that describe several metamodels. We have designed and implemented four experiments, each involving different types of operations. The initial crawled item is always a UML project. The experiments are the following:

\begin{itemize}
 \item \emph{Experiment A}: this experiment is the simplest one. The granularity is ``project'', which means that the returned documents are projects. The Solr index we obtain after the SMILA pipeline for this experiment is very basic: the project id, the project name and one single ``content'' field that contains every project element (class names and attributes).
 \item \emph{Experiment B}: this experiment uses granularity ``class''. This means that the retrieved documents are classes belonging to a certain project. The Solr index fields we obtain are the following ones: the id and the name of the project to which the class belongs, the class id, the class name and the attribute names.
 \item \emph{Experiment C}: this experiment uses the same granularity and the same index structure as Experiment B. The difference is that we add payloads to each indexed term, so that the retrieved classes receive a different relevance according to the UML concept they refer to (simple attribute, composition, association, class and project).
 \item \emph{Experiment D}: the granularity of this experiment is still ``class''. The experiment involves an algorithm that, for each class, imports elements of its neighboring classes. This is accomplished by transforming the initial UML class diagram project into a graph where nodes represent classes and edges represent relations between classes (e.g. a generalization between a parent class and a child class). The payloads of the imported elements are penalized during the import according to the type of the followed edge (relation) and to the distance, in number of hops, from the currently processed class.
\end{itemize}

After the crawling phase, performed by our custom implementation of a SMILA crawler, one SMILA record still represents an entire project and contains the following metadata elements which, unless otherwise specified, are extracted directly from the XML representation of the UML project:
\begin{itemize}
 \item \emph{Project name}: the file name of the XML representation of the crawled project minus the file extension.
 \item \emph{Project id}: the id of the project.
 \item \emph{Class names}: a list of the class names of the current project.
 \item \emph{Class ids}: a list of the class ids of the current project. These class ids are paired with the class names listed above.
 \item \emph{Attribute names}: a list of all the attribute names of the current project. Each attribute is stored together with some contextual information: the same string also contains the id of the class to which the attribute belongs and the relation type expressed by that attribute. The relation type can be simple attribute, association or composition. In the case of association and composition we also save the cardinalities of the relationship.
\end{itemize}
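To make the record layout concrete, the following sketch shows the shape of the metadata a crawled record could carry. The field names and the ``;'' separator used to bundle the class id and relation type into each attribute string are illustrative assumptions, not the actual SMILA attribute names.

```python
# Illustrative shape of a crawled SMILA record (field names and the ';'
# separator are assumptions, not the actual SMILA attribute names).
record = {
    "project_name": "BQL",              # XML file name minus the extension
    "project_id": "prj-001",
    "class_names": ["Entry", "LocatedElement"],
    "class_ids": ["cls-01", "cls-02"],  # paired by position with class_names
    # each attribute string also carries the owning class id, the relation
    # type and, for associations/compositions, the cardinalities
    "attribute_names": [
        "cls-01;simple;name",
        "cls-02;association;0..*;location",
    ],
}
```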

\noindent Next, the SMILA record enters the \emph{Add Pipeline}, the BPEL pipeline that processes records. There is a different Add Pipeline for each experiment, but every Add Pipeline ends with an indexing pipelet that analyzes and indexes new documents to Solr.
Since the \emph{types of analysis} and the \emph{indexing pipelet} are the same for all four experiments, we discuss them separately at the end of this section, while in the following paragraphs we describe in further detail the specific operations of each experiment, providing some examples.

\paragraph{Experiment A}
This experiment involves two pipelets. The first one simply gets the crawled projects, in the form of records, from the SMILA crawler and discards the information that is not relevant to this experiment: the relation types and the cardinalities are not preserved. After this first pipelet a record contains just three metadata elements: the project id, the project name and the content, which holds class names and attributes in an undifferentiated manner.

The second pipelet is the \emph{Solr Indexer Pipelet}, described in the last paragraph of this section. In this experiment the index is the simplest one. It is very close to a single-field index: each document represents a UML project model and has three fields: project id, project name and content. The last field contains all the attribute names and class names of that project. The project name field is copied into the content field, since we want a single searchable field; in this way, one can also search for a project name. Users can also submit a multi-field query specifying different query terms for the project name field and the content field.
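The reduction performed in this experiment can be sketched as follows, assuming a record represented as a dictionary whose attribute strings bundle the class id and relation type with a ``;'' separator (an assumption made for illustration only).

```python
def to_experiment_a_doc(record):
    """Collapse a crawled project record into the three-field document of
    Experiment A. Relation types and cardinalities are dropped; the project
    name is copied into the content field so that a single-field query can
    match it as well. The ';'-separated attribute layout is an assumption."""
    content = list(record["class_names"])
    # keep only the attribute name, discarding class id / relation metadata
    content += [attr.split(";")[-1] for attr in record["attribute_names"]]
    content.append(record["project_name"])
    return {
        "project_id": record["project_id"],
        "project_name": record["project_name"],
        "content": " ".join(content),
    }
```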
\paragraph{Experiment B}
This experiment involves two pipelets, too. Since it uses ``class'' granularity, the main task is to segment the initial record representing an entire project into several records representing the classes belonging to that project. Basically, this is done by creating a new record for each class of a given project. The id of this record is the id of the class extracted from the XML representation of the project model. The record also has other fields, such as the class name, the attribute names of that class, and the name and id of the project to which the class belongs. This experiment does not use the information about relation types and cardinalities, so it is discarded.

The second and last pipelet is, as usual, the Solr Indexer Pipelet described in the last paragraph of this section. The index for this experiment is more complex than the previous one. Each indexed document represents a class with the following fields: the id and the name of the project to which the class belongs, the class id, the class name and the attribute names. There is also an additional field, the content field, that contains a copy of all the other searchable fields (except the ids, which are not copied there). Also in this case the user can submit a multi-field query, specifying project name, class name or attribute names separately.
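The segmentation step can be sketched as follows; the record layout and the ``;''-separated attribute strings, starting with the owning class id, are illustrative assumptions rather than the actual SMILA data structures.

```python
def segment_by_class(record):
    """Sketch of the Experiment B segmentation: one new record per class,
    keyed by the class id extracted from the project's XML representation."""
    class_records = []
    for class_id, class_name in zip(record["class_ids"],
                                    record["class_names"]):
        # relation types and cardinalities are discarded in this experiment
        attrs = [a.split(";")[-1] for a in record["attribute_names"]
                 if a.startswith(class_id + ";")]
        class_records.append({
            "id": class_id,
            "class_name": class_name,
            "attribute_names": attrs,
            "project_id": record["project_id"],
            "project_name": record["project_name"],
        })
    return class_records
```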
\paragraph{Experiment C}
The pipeline of this experiment, shown in Figure~\ref{fig:UMLcase_ExperimentC}, involves four pipelets.
\begin{figure}[ht]
  \begin{center}
	\includegraphics[width=0.5\textwidth]{./pictures/UMLcase_ExperimentC}
	\caption{The pipeline of the UML case Experiment C.}
	\label{fig:UMLcase_ExperimentC}
  \end{center}
\end{figure}

\noindent In this experiment, the first pipelet, the \emph{Split Pipelet}, is very similar to the first pipelet of the previous Experiment B. The difference is that the information about relation types and cardinalities is preserved.

The second pipelet, the \emph{Payload Adder Pipelet}, uses the information about relation types and cardinalities to assign different payloads to terms. For example, we can give a higher payload to a term referring to a class name and a lower one to a term referring to a simple attribute. This operation is performed to obtain the following behavior: if there are two classes, say class A and class B, where class A contains the term ``java'' as a class name and class B contains it as a simple attribute, then, for a query string containing the term ``java'', we want class A ranked higher than class B. This behavior is achieved by assigning payloads as described above. There are no specific or standard rules for assigning payloads to UML elements, and some tuning of the payloads against the obtained rankings is needed. The payloads we used follow a simple rationale: we give higher payloads to those UML elements that are, in a sense, more important than others. For example, it makes sense to give a higher payload to a term representing a class name than to a term representing an attribute, because the class name is supposed to better describe the general concept of that class, while an attribute is too specific. To let Solr understand that a term has a certain payload, one has to add the payload information directly to the token string before analysis and indexing. For example, one can mark up the tokens with a special character followed by the payload value: ``java\textbar2.0'' means that the token ``java'' has a payload value of ``2.0''. Then, a special Solr analyzer, the \emph{DelimitedPayloadTokenFilter}, correctly interprets the sequence of characters composed of the token string itself and the payload information. In Solr, payloads are byte arrays optionally stored with every term of a field.
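The markup step can be sketched as follows. The concrete payload values below are assumptions chosen for illustration; as noted, the actual values were tuned by hand against the obtained rankings.

```python
# Illustrative payload weights per UML concept; the actual values used in
# the experiment were tuned by hand, so these numbers are assumptions.
PAYLOADS = {"project": 2.5, "class": 2.0, "composition": 1.5,
            "association": 1.5, "attribute": 1.0}

def mark_payload(token, concept):
    """Attach the payload to the token using the '|' delimiter that the
    DelimitedPayloadTokenFilter expects by default."""
    return "%s|%.1f" % (token, PAYLOADS[concept])
```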
However, Solr has a particular issue with payload ``propagation''. Payloads are applied only to the original word that is indexed, and the WordDelimiterFilter (the analyzer that splits words into subwords according to user-defined rules, e.g. splitting on case transitions) does not apply the payloads to the tokens it generates. For example, suppose the string containing the attribute names of a class is ``javaAttribute\textbar1.0''. The DelimitedPayloadTokenFilter applies the payload to the word ``javaAttribute''. When the WordDelimiterFilter is called, it does not propagate the payload to the generated subwords ``java'' and ``attribute'', so the payload information gets lost for them. This problem is solved by the following pipelets.

The third pipelet, \emph{Analyzer and Payload Substitution Pipelet}, does the following operations:
\begin{enumerate}
 \item Takes from the record the SMILA metadata elements that should be analyzed. These metadata elements, for example the string of attribute names of a class, contain words that already have the payload attached, such as ``firstName\textbar1.0 NewEmployee\textbar2.0'', and are not yet analyzed.
 \item Extracts and saves temporarily the payload for each word contained in the metadata element.
 \item Calls the Solr analyzers for each metadata element and performs the analysis (the original words are preserved).
 \item Parses the Solr analyzers output (that is expressed in XML) extracting the single subwords derived from the analysis. Then it ``propagates'' the previously saved payloads by attaching them to the subwords and saves them back to their original SMILA metadata field.
\end{enumerate}

\noindent The string at the end of this pipelet looks like this: ``first\textbar1.0 name\textbar1.0 new\textbar2.0 employee\textbar2.0''.
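The propagation performed by the steps above can be sketched as follows. The camel-case split below is a simplified stand-in for the actual Solr analyzer chain; the real pipelet instead parses the XML output of the Solr analyzers.

```python
import re

def propagate_payloads(marked_text):
    """Extract each word's payload, split the word into subwords (a
    simplified stand-in for the WordDelimiterFilter, which by itself would
    drop the payloads), and re-attach the payload to every subword."""
    out = []
    for token in marked_text.split():
        word, payload = token.rsplit("|", 1)
        # split on case transitions, e.g. "NewEmployee" -> "New", "Employee"
        subwords = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", word)
        out.extend("%s|%s" % (sub.lower(), payload) for sub in subwords)
    return " ".join(out)
```

Applied to the example string ``firstName\textbar1.0 NewEmployee\textbar2.0'', this sketch produces the same output as the pipelet.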

The fourth and last pipelet is the same Solr Indexer Pipelet as in the previous two experiments; the difference lies in the type of analysis performed. Since most of the analysis has already been done in the previous pipelet, here the words are processed only by the WhiteSpaceTokenizer, to split them on white space, and by the DelimitedPayloadTokenFilter, to take payloads into account. This pipelet ends Experiment C by indexing the resulting tokens to Solr. The index for this experiment is exactly the same as the Experiment B index; the difference is that the terms are weighted through payloads.

\paragraph{Experiment D}
Experiment D is substantially different from the previous three experiments. The idea is to import, for each class, the elements of the neighboring classes. This kind of processing can lead to a better recall when searching for a class. Figure~\ref{fig:ExperimentD_query_example} shows an example UML project model, named BQL, from the dataset of UML class diagrams.
\begin{figure}[ht]
  \begin{center}
	\includegraphics[width=0.5\textwidth]{./pictures/ExperimentD_query_example}
	\caption{An example UML project model diagram from the dataset, illustrating the purpose of Experiment D. A query string like ``entry location'' (AND query) without the importation algorithm would produce no results. The algorithm of Experiment D makes it possible to retrieve both the classes ``/LocatedElement/'' and ``Entry''.}
	\label{fig:ExperimentD_query_example}
  \end{center}
\end{figure}
A user looking for the class ``Entry'' could also be interested in retrieving the parent class ``/LocatedElement/'' using the query string ``entry location'' (AND query). An experiment without element importation would return no results. In this experiment, the class ``Entry'' imports the elements of its neighboring classes, among which there is also the class ``/LocatedElement/''; the class ``/LocatedElement/'', in turn, imports the class name of ``Entry''. This means that, at the end of the algorithm, the document representing the class ``Entry'' contains, besides its own elements, also the attribute names of the class ``/LocatedElement/'', and the document representing ``/LocatedElement/'' also contains the class name of ``Entry''. Therefore, both classes are part of the ranked result list of the previously mentioned query.

As already explained, in this experiment we \emph{import} the elements of the neighboring classes for each class. The payload of the imported elements is properly \emph{penalized} according to the relation type. By \emph{neighboring classes} of a class $X$ we mean all the classes connected to $X$ through a relation (association, generalization, composition). Figure~\ref{fig:UML_ExperimentD} shows the pipeline for this experiment.
\begin{figure}[ht]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/UML_ExperimentD}
	\caption{The pipeline of the UML case Experiment D.}
	\label{fig:UML_ExperimentD}
  \end{center}
\end{figure}
The crawling phase is the same as in the previous Experiments B and C: the source consists of the UML project models in XMI format. After this step a SMILA record still represents an entire UML project. Next, the SMILA processing pipeline for Experiment D starts.

The \emph{Translate XMI to GraphML} pipelet translates the original UML project model, represented as an XMI document, into a GraphML representation. GraphML is a format to describe graphs; it consists of a language core that describes the structural properties of a graph. We use a graph format here to ease the navigation algorithm. The pipelet performs the translation and saves the GraphML files to disk. Our translation from XMI to GraphML is straightforward: each class is represented by a node in the graph, and the relations among classes are represented by typed edges. Since the XMI representation only stores the relationships in one direction, we added to the GraphML representation the edges in the opposite direction. This representation also stores the relation type as an edge attribute and, possibly, the relation cardinalities.
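The node and edge construction can be sketched as follows; the data structures are illustrative (the actual pipelet emits GraphML XML), but the logic of adding a reverse edge for each one-directional XMI relation is the one described above.

```python
def to_graph(classes, relations):
    """Sketch of the XMI-to-GraphML translation logic: one node per class
    and, since XMI stores each relation in one direction only, a typed edge
    in both directions so that the navigation can follow relations both
    ways. 'relations' holds (source, target, relation_type) triples."""
    nodes = [{"id": class_id} for class_id in classes]
    edges = []
    for source, target, rel_type in relations:
        edges.append({"source": source, "target": target, "type": rel_type})
        # reverse edge added on purpose: XMI only stores one direction
        edges.append({"source": target, "target": source, "type": rel_type})
    return nodes, edges
```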

The \emph{Create Graph Pipelet} creates an instance of a graph in memory. Given a record representing a project, this pipelet creates its graph with JUNG starting from the GraphML specification. JUNG (the Java Universal Network/Graph Framework) is a software library that provides a common and extendible language for the modeling, analysis and visualization of data that can be represented as a graph or network. The JUNG graph instance created in this way is serialized to memory.
% TODO: add citation -> http://jung.sourceforge.net/

Since the granularity of the segmentation of this experiment is ``class'', the \emph{Segmentation} pipelet splits the original record representing an entire project into several records representing classes, deserializes the JUNG graph instance and starts the navigation and import algorithm. For each class, there is a different run of this algorithm.

The \emph{Graph Navigation} pipelet starts the navigation from the current class. Class navigation and element import are decoupled so that they can be managed separately: the importation of the elements from neighboring classes is considered the business logic of the class navigation. The navigation algorithm recursively visits the nodes and their neighbors, and takes as input the number of hops to be performed. Looking at the previous example in Figure~\ref{fig:ExperimentD_query_example}, this means that if the number of hops is two and the currently visited class is ``Entry'', the list of its imported elements will also include the attributes coming from the ``Expression'' class. The algorithm follows the outgoing edges, so if a node is connected to other neighboring classes through more than one outgoing edge, that node is processed multiple times. The business logic, which is the element importation, is executed after visiting the last neighbor in a depth-first manner. Since it is very common that the relationships connecting classes form cycles, we needed a solution to avoid importing the same elements more than once. The solution is to ``tag'' the edges that have already been followed during the algorithm and to visit only the nodes connected through edges that have not been processed yet. As said, the importation also involves a penalization of the payloads of the imported attributes. This penalization depends on the relation type and, possibly, also on the cardinalities of the relationships. Notice that the penalization takes into account the number of hops the imported elements have traveled to reach the visited class. With reference to the example in Figure~\ref{fig:ExperimentD_query_example}, given the class ``Entry'' as the currently visited node, the attribute ``value'' coming from the class ``Expression'' (two hops) is penalized more than the attribute ``location'' from ``/LocatedElement/'' (one hop).
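The navigation-and-import step can be sketched as follows. The data structures, penalty factors and the rule for merging an element reached twice are assumptions made for illustration; only the overall logic (depth-limited visit, edge tagging against cycles, penalty compounding with every hop) follows the description above.

```python
def import_elements(classes, edges, start, max_hops, penalty):
    """Sketch of the navigation-and-import algorithm. 'classes' maps a
    class to its elements with their base payloads; 'edges' maps a class to
    outgoing (neighbor, relation_type) pairs. Followed edges are tagged so
    that cycles do not import the same elements twice; the per-relation
    penalty factor compounds with every hop."""
    imported = dict(classes[start])  # the visited class keeps its elements
    followed = set()

    def visit(node, hops_left, factor):
        if hops_left == 0:
            return
        for neighbor, rel_type in edges.get(node, []):
            edge = (node, neighbor, rel_type)
            if edge in followed:     # edge already followed: skip it
                continue
            followed.add(edge)
            f = factor * penalty[rel_type]
            for element, payload in classes[neighbor].items():
                # if an element is reached twice, keep the best payload
                imported[element] = max(imported.get(element, 0.0),
                                        payload * f)
            visit(neighbor, hops_left - 1, f)

    visit(start, max_hops, 1.0)
    return imported
```

With a toy graph mirroring Figure~\ref{fig:ExperimentD_query_example} and a penalty factor of 0.5 per relation, the attribute ``value'' (two hops from ``Entry'') ends up with a quarter of its base payload, while ``location'' (one hop) keeps half of it.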

The last two pipelets, \emph{Text Analysis and Payload Substitution} and \emph{Solr Indexing}, involve the same operations as the last two pipelets of the previous Experiment C. The index of this experiment also has the same structure as the Experiment C index.

\paragraph{Analysis and Solr Indexer Pipelet}
These two pipelets are quite similar in all four experiments, so we describe them together here.

In Experiment A and Experiment B the analysis and indexing tasks are actually done in the same pipelet, called SolrIndexerPipelet. This pipelet is configured through the BPEL configuration file where we specify a mapping between the SMILA metadata elements and the Solr fields. The Solr configuration file named ``schema.xml'' contains the declaration of each field that in turn specifies the type of that field. The sequence of analyzers that perform the text processing is specified for each field type in the ``schema.xml'' file. The SolrIndexerPipelet prepares a new document by adding the specified fields to the XML document that has to be posted to Solr through the \emph{update} URL. This kind of request performs both analysis and indexing. The fields added to the XML document contain the information extracted from the project models.
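The construction of the document posted to the update URL can be sketched as follows; the helper name is hypothetical, but the \texttt{<add><doc><field ...>} layout is the standard Solr XML update format.

```python
from xml.sax.saxutils import escape

def to_solr_update_xml(fields):
    """Build the XML document posted to Solr's update URL. The field names
    passed in must match the fields declared in schema.xml, as prescribed
    by the SMILA-to-Solr mapping in the BPEL configuration."""
    body = "".join('<field name="%s">%s</field>'
                   % (escape(name), escape(value))
                   for name, value in fields.items())
    return "<add><doc>%s</doc></add>" % body
```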

As explained in the sections above, Experiments C and D perform analysis and indexing in two different pipelets. This avoids the ``payload propagation'' problem previously explained and also better separates the two types of operation. The drawback is that the analysis task is more time-consuming, since there is a separate HTTP request to the Solr server for each SMILA metadata element.
