In the text-based approach an artifact is represented as an unstructured text document. The index contains bags of words extracted from the models in the repository. These terms may be weighted according to the concept they belong to, boosting each term according to that concept's relative relevance in the underlying metamodel. Matching is ultimately performed on these textual terms, which can also include annotations and comments produced by developers to better describe their meaning, facilitating the retrieval process. Facets are also common, since they allow further filtering when too many results are retrieved. In the following subsections we distinguish between systems specifically designed to search through the source code of applications and systems that target other software representations.
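The concept-based weighting can be illustrated with a minimal sketch, in which each extracted term carries the metamodel concept it came from and that concept determines a boost factor applied at indexing time (the concept names and boost values below are hypothetical):

```python
# Sketch of concept-weighted bag-of-words indexing and matching.
# Concept names and boost values are illustrative assumptions.
CONCEPT_BOOST = {"entity": 3.0, "attribute": 2.0, "comment": 1.0}

def index_artifact(terms_with_concepts):
    """Build a bag of (term -> weight) from (term, concept) pairs."""
    bag = {}
    for term, concept in terms_with_concepts:
        bag[term] = bag.get(term, 0.0) + CONCEPT_BOOST.get(concept, 1.0)
    return bag

def score(bag, query_terms):
    """Sum the boosted weights of the query terms found in the artifact."""
    return sum(bag.get(t, 0.0) for t in query_terms)

# A term extracted from an "entity" concept outweighs the same term
# extracted from a comment.
bag = index_artifact([("order", "entity"), ("price", "attribute"),
                      ("order", "comment")])
```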

\subsection{Source code search}
\emph{Selene} \cite{Takuya:2011:SCR:1985429.1985434} is an Eclipse plug-in built around a text-based search engine over a source code repository. It is a code recommendation tool that uses the entire code being edited as a query, searching for and displaying similar program fragments from a repository of example programs. Searches are triggered automatically while the developer is editing the code. Selene is expected to assist developers in finding usages and idioms of API libraries and frameworks suited to their working context, without extensive manual searches through tutorials or general search engines.

The work in \cite{Heinemann:2011:RAM:1985429.1985430} presents another code recommendation tool that uses the knowledge embodied in the identifiers of variables and functions as the basis for its suggestions. The underlying assumption is that code fragments using similar terms within their identifiers also reuse similar methods. The system constructs a matrix that associates each method call with the identifiers preceding it. Each row of the matrix is a binary vector stating which terms correspond to a call. These vectors are then matched against the set of terms extracted from the context of the current cursor position in order to retrieve potentially relevant method calls.
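A minimal sketch of this idea follows, representing each matrix row as the set of identifier terms observed before a call and ranking candidate calls by their overlap with the terms around the cursor (Jaccard similarity is assumed here as the matching measure; the names are illustrative):

```python
# Sketch of identifier-based method-call recommendation: each observed
# call is paired with the binary set of identifier terms preceding it;
# candidates are ranked by overlap with the current editing context.
# The similarity measure (Jaccard) is an assumption for illustration.

def build_rows(observations):
    """observations: list of (set_of_terms, method_call) pairs."""
    return [(frozenset(terms), call) for terms, call in observations]

def recommend(rows, context_terms, top_k=3):
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    scored = [(jaccard(terms, set(context_terms)), call) for terms, call in rows]
    scored.sort(key=lambda pair: -pair[0])
    return [call for s, call in scored[:top_k] if s > 0]

rows = build_rows([
    ({"file", "reader", "path"}, "BufferedReader.readLine"),
    ({"socket", "port"}, "Socket.connect"),
])
```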

The system in \cite{Bhatia:2011:ASE:1985429.1985433} is designed to search for algorithms. Users enter a free-text query that is used to search through a collection of academic documents, since these generally follow a structure that is easier for a machine to parse when identifying the relevant sections containing pseudo-code for the algorithms. This tool is being developed as part of the SeerSuite toolkit, a collection of open source tools for creating academic search engines and digital libraries such as CiteSeerX.

\emph{Sourcerer} \cite{Bajracharya:2009:SIS:1556907.1556984} is an infrastructure for large-scale indexing and analysis of open-source code, upon which code search engines and services can be built. Sourcerer crawls the Internet looking for Java source code from public web sites, version control systems and open source repositories. In Sourcerer, the code is parsed, analyzed and stored in three forms: the Managed Repository contains a versioned copy of the original project content; the Code Database stores models of parsed projects; the Code Index stores keywords extracted from the code. The metamodel used by Sourcerer to store the structural information is an adapted version of the C++ entity-relationship metamodel. Each library file is uniquely identified across all the projects in order to maintain cross-project dependencies.

\emph{Exemplar} (EXEcutable exaMPLes ARchive) \cite{DBLP:conf/icse/GrechanikFXMPC10a} combines information retrieval and program analysis techniques to link high-level concepts to the source code of applications via the standard API calls that these applications use. The novelty of this approach is to augment standard code search by also indexing the API documentation of the most widely used libraries (e.g. the Java Development Kit). Descriptions and titles of Java applications are indexed independently from the Java API call documentation, and keywords entered by the user are matched separately in these two indexes. As a result, two ranked lists are retrieved: one of applications and one of API calls. The system then locates the retrieved API calls in the retrieved applications and computes a combined score. As a final step, the list of applications is sorted by the computed ranks and returned to the user.
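The combined ranking can be sketched as follows; the scoring function (plain keyword counts) and all names are simplifying assumptions, not Exemplar's actual formulas:

```python
# Sketch of an Exemplar-style combined ranking: keywords are matched
# separately against application descriptions and API call documentation,
# then applications are re-ranked by adding the scores of the retrieved
# API calls they contain. Keyword counting is an assumed simplification.

def match_score(text, keywords):
    words = text.lower().split()
    return sum(words.count(k) for k in keywords)

def exemplar_rank(apps, api_docs, keywords):
    """apps: {name: (description, set_of_api_calls)}; api_docs: {call: doc}."""
    api_scores = {c: match_score(doc, keywords) for c, doc in api_docs.items()}
    ranked = []
    for name, (desc, calls) in apps.items():
        combined = match_score(desc, keywords) + sum(api_scores.get(c, 0)
                                                     for c in calls)
        ranked.append((combined, name))
    ranked.sort(reverse=True)
    return [name for _, name in ranked]

apps = {"AppA": ("image viewer app", {"ImageIO.read"}),
        "AppB": ("audio player", {"Clip.start"})}
api_docs = {"ImageIO.read": "read an image from a file",
            "Clip.start": "start audio playback"}
```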

\emph{Maracatu} \cite{conf/cbse/GarciaLDSAFM06} is a keyword-based search engine for retrieving source code components from development repositories. It combines text mining and facet-based search, achieving better results than either technique used alone. Before searching, a filtering step excludes components that do not satisfy the specified constraints. Retrieved components can be visualized before being downloaded.


\subsection{Other text-based approaches}
\emph{CodeBroker} \cite{Ye:2002:SRD:581339.581402} is a system that autonomously locates components in a repository by taking into account the background knowledge of the developer (information delivery). This approach was inspired by the observation that software reuse often fails because users lack knowledge of the repository and are unable to formulate a well-defined query. CodeBroker maintains user models to represent each developer's knowledge about the repository.

\emph{SPARS-J} \cite{Inoue:2005:RSS:1070618.1070833} is a Java class retrieval system that uses a graph-representation model of software libraries called the component rank model. This model analyzes the actual usage relations among components and propagates their significance along these relations. The resulting rank allows highly ranked components to be found quickly by users. Results show that a class frequently invoked by other classes obtains a high rank, in contrast to nonstandard classes.
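The propagation of significance along usage relations can be sketched with a PageRank-style iteration (the damping value, iteration count and handling of dangling components are assumptions of this sketch, not details of SPARS-J):

```python
# Sketch of a component-rank computation: significance flows along usage
# edges, in the spirit of PageRank. Damping factor and dangling-node
# handling are illustrative assumptions.

def component_rank(uses, damping=0.85, iters=100):
    """uses: {component: [components it uses]}; rank flows to used components."""
    nodes = set(uses) | {v for vs in uses.values() for v in vs}
    n = len(nodes)
    rank = {x: 1.0 / n for x in nodes}
    for _ in range(iters):
        new = {x: (1 - damping) / n for x in nodes}
        # components that use nothing spread their rank uniformly
        dangling = sum(rank[x] for x in nodes if not uses.get(x))
        for x in nodes:
            new[x] += damping * dangling / n
        for src, targets in uses.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# A class invoked by many others ends up with the highest rank.
rank = component_rank({"A": ["C"], "B": ["C"]})
```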

\emph{WISE} (Workflow Information Search Engine) \cite{Shao:2009:WWI:1546683.1547349} is specifically designed to query workflow hierarchies. A workflow hierarchy provides different views of the same workflow: each view represents the workflow at a different depth level, and the deeper the level, the more tasks it includes. The user issues keyword queries and the system finds the workflow hierarchies in the repository that contain matches to those keywords. The system then returns as search results the minimal views of the most specific workflow hierarchies that contain tasks matching the keywords. Query results defined in this way are shown to be both informative and concise.
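A much-simplified sketch of the minimal-view idea: representing a hierarchy as a list of views ordered from shallow to deep, the answer is the shallowest view whose tasks cover every query keyword (this flat-set model is an assumption; WISE's actual view structure and matching semantics are richer):

```python
# Simplified sketch of minimal-view selection in a workflow hierarchy.
# Views are modeled as flat sets of task names, shallow first; this is an
# assumed simplification of WISE's hierarchy model.

def minimal_view(views, keywords):
    """Return the shallowest view whose tasks match every keyword."""
    for view in views:
        if all(any(k in task.lower() for task in view) for k in keywords):
            return view
    return None

views = [{"Analyze"},
         {"Load Data", "Analyze Stats"}]
```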

Given as input an informal description of a semantic domain (represented as a set of terms) and a set of ontologies in an ontology repository, \emph{CORE} \cite{Fernandez_Cantador_Castells_2006} (Collaborative Ontology Reuse and Evaluation) automatically determines which ontologies describe the given domain most accurately by using similarity measures. The user selects a subset of the available comparison techniques and obtains as output a ranked list of ontologies for each of them. A global aggregated measure based on rank fusion techniques is then used to produce a single ranking.
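Borda counting is one classic rank fusion technique and serves to illustrate the aggregation step (it is used here only as an example; CORE's actual choice of fusion strategy may differ):

```python
# Borda-count rank fusion: merge several per-measure rankings into one
# aggregated list. Used as an illustrative fusion technique.

def borda_fusion(rankings):
    """rankings: list of ranked lists of ontology ids (best first)."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            # an item at position 0 in a list of n gets n points, and so on
            scores[item] = scores.get(item, 0) + (n - pos)
    return sorted(scores, key=lambda item: -scores[item])

# "O2" is first or second everywhere, so it wins the fused ranking.
fused = borda_fusion([["O1", "O2", "O3"],
                      ["O2", "O1", "O3"],
                      ["O2", "O3", "O1"]])
```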

Service-oriented systems (SOS) are search-driven because they rely on software components usually provided by third parties over the web. Since BPEL documents often contain the invocation and definition of such services, the paper in \cite{Takada:2011:FWS:1985429.1985432} proposes a way to search for them via BPEL fragments. Retrieved fragments are shown together with the web services corresponding to the activities within each fragment, as well as the BPEL documents containing the fragment, in order to give context on how the retrieved services were used. Fragments are ranked by their number of occurrences in the documents and by their relevance to the query. The matching is not performed on the documents one by one; instead, this approach considers all documents at once by merging them into a single large graph where the nodes are the basic activities and the edges represent the control flow. This limits the amount of unnecessary matching, since each activity appears only once in the graph.
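The merged-graph construction can be sketched as follows, reducing each BPEL document's control flow to a simple activity chain (a strong simplification of real BPEL, which has structured activities such as branches and loops):

```python
# Sketch of merging several BPEL documents into one graph: nodes are
# basic activities (each appearing once, with an occurrence count) and
# edges are control-flow links. Control flow is simplified to linear
# chains for illustration.

def merge_documents(documents):
    """documents: list of activity-name sequences."""
    occurrences, edges = {}, set()
    for doc in documents:
        for activity in doc:
            occurrences[activity] = occurrences.get(activity, 0) + 1
        for a, b in zip(doc, doc[1:]):
            edges.add((a, b))
    return occurrences, edges

occ, edges = merge_documents([
    ["receive", "invoke", "reply"],
    ["receive", "assign", "invoke"],
])
```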

\emph{Woogle} \cite{Dong:2004:SSW:1316689.1316723} is a search engine designed to find web services that offer similar operations. An algorithm clusters the parameter names of the operations into semantically meaningful concepts, which are then exploited to determine input similarity. The similarity is determined by considering the textual descriptions of operations and web services as well as the parameter names. The clustering policy is based on the assumption that parameters express the same concept if they often occur together. Woogle supports both template and composition search. Template search allows the user to specify the functionality, input and output of the desired web service operation, and returns a list of operations that fulfill the requirements. Composition search, on the other hand, returns compositions of operations that achieve the functionality specified in the search.
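The co-occurrence-based clustering can be sketched by merging parameters that appear together in at least a threshold number of operations (the counting scheme and threshold are assumptions of this sketch, not Woogle's actual algorithm):

```python
# Sketch of co-occurrence-based parameter clustering: parameters that
# frequently appear in the same operation are merged into one concept
# via union-find. Threshold and counting scheme are assumptions.
from itertools import combinations

def cluster_parameters(operations, min_cooccur=2):
    """operations: list of parameter-name lists."""
    counts = {}
    for params in operations:
        for a, b in combinations(sorted(set(params)), 2):
            counts[(a, b)] = counts.get((a, b), 0) + 1

    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (a, b), c in counts.items():
        if c >= min_cooccur:
            parent[find(a)] = find(b)  # merge the two concepts

    clusters = {}
    for params in operations:
        for p in params:
            clusters.setdefault(find(p), set()).add(p)
    return list(clusters.values())

# "city" and "state" co-occur twice, so they form one concept.
clusters = cluster_parameters([["city", "state"],
                               ["city", "state"],
                               ["zip", "code"]])
```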

The work of this thesis is based on the system presented in \cite{Bozzon:2010:SRW:1884110.1884112}. The paper describes a model-driven information retrieval system to search through projects expressed in the WebML language. The prototype applies metamodel-aware extraction rules to analyze models. It offers a visual interface for submitting keyword-based queries, which can be performed on whole projects, sub-projects, or concepts, and for inspecting the results, presented as a paginated list of matching items with optional snippet visualization. Details of this work are described in Section \ref{design-dimensions}.

One of the objectives of this thesis is to evaluate the performance of text-based systems against content-based ones. Hence, in the next section we give an overview of the main current content-based search systems.
