The second case study deals with the indexing and searching of WebML projects. With respect to the previous case, explained in Section \ref{uml-case}, here we adopt a more structured solution, closer to the abstract solution of Section \ref{abstract-solution}. Figure~\ref{fig:webml_chain} shows the WebML chain of operations.
\begin{figure}[ht]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/webml_chain}
	\caption{WebML operations chain. The diagram shows the operations involved in both the content-based and the text-based approach. The Content Processing phase is shown in the top right, while the Query Processing phase is shown in the bottom left.}
	\label{fig:webml_chain}
  \end{center}
\end{figure}
\noindent As usual for a search engine, the whole chain includes two main phases:
\begin{itemize}
 \item a Content Processing phase, shown in the top right of Figure~\ref{fig:webml_chain}, which starts from the translation of the WebML Projects Dataset;
 \item a Query Processing phase, shown in the bottom left of Figure~\ref{fig:webml_chain}, which starts from the User Query input.
\end{itemize}
In both phases, the solution includes a set of operations shared by the content-based and the text-based approach. The content-based approach translates projects and queries into a graph representation and compares them through graph matching techniques. In this work we discuss only the text-based approach; more information about the content-based approach can be found in \cite{conf/icwe/BislimovskaBBF11}.

In the next paragraphs we describe in detail both the Content Processing phase and the Query Processing phase of the WebML chain, providing information for each operation. Notice that each operation corresponds to a SMILA pipelet and an entire chain is a SMILA pipeline. In the Content Processing phase, after the common operations are performed, the operations specific to each experiment start. For the WebML case there are two experiments, B and C, named after the experiments B and C of the UML case, to which they are analogous. These experiments are explained in more detail in the next paragraphs. At the end of this section we also present the Query Processing phase.
\paragraph{WebML Chain - Content Processing}
The WebML chain starts, as usual, with the crawling phase. The crawler ingests a project representation that differs from the original XML project representation in WebML syntax shown in Section \ref{webml-dataset}. This is necessary because the original XML format does not conform to any standard model representation language. Therefore, before the actual crawling phase there is an off-line translation from the original format to a target format. The target format is expressed in UML 2.1 conforming to Ecore, and is very similar to the representation format of the projects from the UML dataset.

The code snippet below shows an example of the target format, obtained by translating the original WebML representation discussed in Section \ref{webml-dataset}.
\lstset{
language=XML,
breaklines=true,                % sets automatic line breaking
breakatwhitespace=false,        % sets if automatic breaks should only happen at whitespace
basicstyle=\footnotesize	% the size of the font
}
\begin{lstlisting}
<packagedElement name="Shops" xmi:id="sv3g#area5g" xmi:type="webml:Area">
  <packagedElement entity="ent2" name="Save DC" xmi:id="sv3g#area5g#cru11g" xmi:type="webml:CreateUnit">
    <packagedElement name="Link OK 44" to="sv3g#area5g#page20g" xmi:id="sv3g#area5g#cru11g#oln48g" xmi:type="webml:OKLink"/>
  </packagedElement>
  <packagedElement name="nop" xmi:id="sv3g#area5g#opu13g" xmi:type="webml:NoOpOperationUnit">
    <packagedElement name="Link OK 45" to="sv3g#area5g#page20g" xmi:id="sv3g#area5g#opu13g#oln49g" xmi:type="webml:OKLink"/>
  </packagedElement>
	...
	...
  <packagedElement name="Modify Shop" xmi:id="sv3g#area5g#page22g" xmi:type="webml:Page">
    <packagedElement entity="ent2" name="Modify Shop" xmi:id="sv3g#area5g#page22g#dau7g" xmi:type="webml:DataUnit">
      <packagedElement name="Link 99" to="sv3g#area5g#page22g#enu22g" xmi:id="sv3g#area5g#page22g#dau7g#ln107g" xmi:type="webml:Link"/>
      <packagedElement name="Link 31" to="sv3g#area5g#mfu4g" xmi:id="sv3g#area5g#page22g#dau7g#ln31g" xmi:type="webml:Link"/>
    </packagedElement>
    <packagedElement name="Modify Store" xmi:id="sv3g#area5g#page22g#enu22g" xmi:type="webml:EntryUnit">
	...
    </packagedElement>
  </packagedElement>
</packagedElement>
\end{lstlisting}
The translation from the WebML representation to the target format is not one-to-one: only the information relevant for the index is kept, so some elements of the original representation are discarded. For example, all the tags referring to graphical layout specifications are not present in the target representation. Unlike the original file organization, the target representation gathers all the XML code of a project in one single file.
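As an illustration of this filtering step, the sketch below drops layout-only sub-trees while copying the rest of the model. The set of layout tag names is hypothetical, since the actual WebML layout tags are not listed here; the sketch is not the real translator.

```python
import xml.etree.ElementTree as ET

# Hypothetical names for layout-only elements; the actual tag names
# used by WebML for graphical layout specifications may differ.
LAYOUT_TAGS = {"Layout", "Presentation", "Style"}

def strip_layout(element):
    """Return a copy of `element` whose layout-only sub-trees are dropped."""
    copy = ET.Element(element.tag, dict(element.attrib))
    copy.text = element.text
    for child in element:
        if child.tag not in LAYOUT_TAGS:
            copy.append(strip_layout(child))
    return copy

# Toy source model (ids simplified, no XMI namespaces).
source = ET.fromstring(
    '<Area id="area5g" name="Shops">'
    '<Layout grid="2x2"/>'
    '<Page id="page22g" name="Modify Shop"/>'
    '</Area>')
target = ET.tostring(strip_layout(source), encoding="unicode")
print(target)  # the <Layout> element is gone, the rest is kept
```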

After the translation, the \emph{Crawling} phase starts. The crawler ingests all the projects, creating a new record in the SMILA framework for each of them. The record contains the following attributes: the project id and the whole structured representation (the XML code conforming to Ecore) of that project model. The structured representation of the project is carried through the following stages of the process as much as possible, in order to keep all the necessary information about the project structure (e.g. the fact that an area X is contained in a siteview Y).

The \emph{Dereferencing} is an optional step performed only by some test configurations of the WebML case which are discussed in Chapter \ref{chapter7}.

Up to this point of the chain, a SMILA record still represents a whole WebML project (the granularity is still the entire project). The \emph{Splitting} (or Segmentation) takes the SMILA record containing the information on a whole WebML project and splits it, creating several new records with granularity ``area''. We chose this level of granularity because WebML areas have a size that is a good compromise between the size of the site views, which is too big, and that of the pages, which is too small.
The XML representation of an area record X contains the following information: the areas and siteviews that are ancestors of X, stripped of their content; the area X with all its sub-elements (units and pages); and all the sub-areas of X, stripped of their content. For example, let X be an area whose parent area Y is in turn a child of siteview Z, and let A be a sub-area of X. Each of these elements has its own content, consisting of operation units, pages, content units, etc. The SMILA record representing the area X looks like this:
\begin{lstlisting}
<packagedElement name="siteviewZ" xmi:id="svZ" xmi:type="webml:Siteview">
  <packagedElement name="areaY" xmi:id="svZ#areaY" xmi:type="webml:Area">
    <packagedElement name="areaX" xmi:id="svZ#areaY#areaX" xmi:type="webml:Area">
	...
	CONTENT OF AREA X (Operation Units, Content Units, Pages, etc.)
	...
    </packagedElement>
  </packagedElement>
</packagedElement>
\end{lstlisting}
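A minimal sketch of how such a record could be assembled is given below. This is an illustration, not the actual SMILA pipelet code: the target area is kept whole and wrapped into stripped copies of its ancestors, mirroring the record layout shown above (ids are simplified and the XMI namespace is omitted).

```python
import xml.etree.ElementTree as ET

def area_record(ancestors, area):
    """Wrap `area` (kept whole) into stripped copies of its ancestors,
    given outermost first, mirroring the record layout shown above."""
    node = area
    for ancestor in reversed(ancestors):
        shell = ET.Element(ancestor.tag, dict(ancestor.attrib))  # content stripped
        shell.append(node)
        node = shell
    return node

# Toy model: siteview Z contains area Y, which contains area X.
siteviewZ = ET.Element("packagedElement", {"name": "siteviewZ", "id": "svZ"})
areaY = ET.Element("packagedElement", {"name": "areaY", "id": "svZ#areaY"})
areaX = ET.fromstring(
    '<packagedElement name="areaX" id="svZ#areaY#areaX">'
    '<packagedElement name="Modify Shop" id="svZ#areaY#areaX#page22g"/>'
    '</packagedElement>')

record = area_record([siteviewZ, areaY], areaX)
print(ET.tostring(record, encoding="unicode"))
```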

The \emph{Analysis} extracts from the input records (which now represent areas) the content of the ``name'' attribute of each area element. The words mined in this way are processed through the Solr analyzers, and the original content of the ``name'' attributes is replaced with the analyzed version. The original content of each ``name'' attribute is kept inside the field for visualization purposes, separated from the analyzed content by an escape character such as ``\$''. For example, a ``name'' attribute with content ``lazyEmployees'' becomes, after the analysis, ``lazyEmployees\$lazyemploye lazi employe''. The transformations to be performed on the text are configurable by modifying the Solr configuration files. This pipelet decouples the Analysis from the Indexing, which is performed at the end of each experiment. The separation between Analysis and Indexing makes it possible to share the same analysis operations between the chain of the content-based approach and the chain of the text-based approach. This way, it will be easier in future work to compare the two different indexing approaches, since they start from the same word analysis.
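The layout of the resulting field can be mimicked as follows. The tokenization below is a crude stand-in for the real Solr analyzer chain: it only splits camelCase and lower-cases, while stemming (which produces forms such as ``employe'' in the example above) is performed by Solr and not reproduced here.

```python
import re

def analyze(name):
    """Crude stand-in for the Solr analyzer chain: split camelCase
    words and lower-case them (real stemming is done by Solr and is
    not reproduced here)."""
    words = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", name)
    return " ".join(w.lower() for w in words)

def name_field(original):
    """Keep the original value for visualization, separated from the
    analyzed tokens by the '$' escape character."""
    return original + "$" + analyze(original)

print(name_field("lazyEmployees"))  # -> lazyEmployees$lazy employees
```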

After the Analysis there is a branch between the two types of approach. The text-based approach continues with two different experiments, both of which end with the indexing step.
\subparagraph{Experiment B}
This experiment does not perform any particular operation apart from the indexing. The indexing pipelet still has to send HTTP requests to Solr in order to call the WhitespaceTokenizer and split the words contained as a single string in the ``name'' attribute.
\subparagraph{Experiment C}
This experiment calls the \emph{Payloads Adder} pipelet, which adds payloads to the content of the area analyzed in the previous step. After this, the indexing pipelet also includes the DelimitedPayloadTokenFilterFactory among the invoked Solr analyzers, in order to process the payloads added by the previous pipelet. The payloads are easily configurable through a configuration file. The concepts to which payloads can be added are Siteview, Area, Page, Unit and Link.
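A hedged sketch of what the Payloads Adder could do is shown below. The concept-to-weight map is invented for illustration (the actual weights and the configuration file format are not specified here); the vertical-bar delimiter is the default of Solr's DelimitedPayloadTokenFilterFactory.

```python
# Hypothetical concept-to-weight configuration; the actual weights and
# the configuration file format are assumptions for this sketch.
PAYLOAD_WEIGHTS = {"Siteview": 1.0, "Area": 2.0, "Page": 3.0,
                   "Unit": 4.0, "Link": 1.5}

def add_payloads(tokens, concept):
    """Attach 'token|weight' pairs, the delimited format consumed by
    Solr's DelimitedPayloadTokenFilterFactory (default delimiter '|')."""
    weight = PAYLOAD_WEIGHTS[concept]
    return ["%s|%s" % (token, weight) for token in tokens]

print(add_payloads(["lazi", "employe"], "Page"))  # -> ['lazi|3.0', 'employe|3.0']
```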
\paragraph{WebML Chain - Query Processing}
The Query Processing deals with all the operations that have to be performed when the user inputs a query to the system. The Query Processing phase is depicted in the bottom left of Figure \ref{fig:webml_chain}. The user can input the query as a WebML model. The query model has its own XML representation in WebML format, which is translated into the UML 2.1 format conforming to Ecore, as in the Content Processing phase. Next, the model is analyzed with the same methods used for the projects in the repository. At this point, the text-based chain of the Query Processing part starts. The issue here is to transform a search-by-example query into a keyword-based one. The \emph{Keyword Extraction} extracts the query keywords from the XML document: only the content of the ``name'' attributes is extracted and put into the query string. The query is then submitted to the Solr index as an AND query. The user can also submit a keyword-based query directly, without the Keyword Extraction operation.
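A sketch of the Keyword Extraction step is given below, under simplifying assumptions (toy query model, no XMI namespaces); it is an illustration of the idea, not the actual pipelet code.

```python
import xml.etree.ElementTree as ET

def extract_keywords(query_xml):
    """Collect the content of every 'name' attribute of the query
    model, in document order."""
    root = ET.fromstring(query_xml)
    return [e.attrib["name"] for e in root.iter() if "name" in e.attrib]

def to_and_query(keywords):
    """Join the extracted keywords into a Solr AND query string."""
    return " AND ".join(keywords)

# Toy query model (simplified, no XMI namespaces).
query_model = (
    '<packagedElement name="Shops">'
    '<packagedElement name="Modify"/>'
    '</packagedElement>')

print(to_and_query(extract_keywords(query_model)))  # -> Shops AND Modify
```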
