An Information Retrieval (IR) system is a system that fetches raw data from some source of information, transforms it into a searchable format, and provides an interface that allows a user to search and retrieve that information by submitting queries to the system. Starting from this definition, five major processing subsystems can be isolated from the general flow \cite{kowalski2011}:

\begin{itemize}

 \item \emph{Content Registration}: this subsystem finds and retrieves from a given data source the items that are analyzed and searched in the following steps. This operation can be performed in several ways, for example by crawling networks (as on the Internet) or by receiving new items that are ``pushed'' to the system (e.g. files delivered to a monitored directory).

 \item \emph{Content Analysis}: this subsystem is concerned with the analysis and the consequent transformation of the raw data registered in the previous phase. The registered items undergo several processing steps such as tokenization, normalization, format standardization, stemming and other transformations that bring the original raw data into a canonical format. This phase can also include content analysis techniques that define and attach metadata to the items, which can facilitate the mapping between the vocabulary of the user and the vocabulary of the author of the original data during the search process.

 \item \emph{Content Indexation}: this subsystem is concerned with taking the tokens of the normalized items, together with other normalized metadata, and creating the searchable index. There are many different approaches to creating the index, such as Boolean or Weighted indexing. Within the Weighted approach there are the Statistical, Concept and Natural Language indexing approaches.

 \item \emph{Search}: this subsystem is concerned with mapping the user information need to a processable form and determining which items are to be returned to the user. Within this process lies the calculation of the score of the retrieved documents that is used to order the list of displayed results.

 \item \emph{Display}: this subsystem is concerned with how the user can locate the items of interest among all the possible results returned. It deals with display options such as highlighting and faceted navigation.

\end{itemize}

\noindent The quality of all these subsystems determines the system's ability to retrieve a large share of the relevant documents needed by the user and to display them in a suitable way. Each of these processing phases is addressed in the following sections.

Figure~\ref{fig:general_IR_architecture} depicts the general architecture of an Information Retrieval system.
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/general_ir_architecture.eps}
	\caption{Architecture of a general-purpose Information Retrieval system.}
	\label{fig:general_IR_architecture}
  \end{center}
\end{figure} 

\subsection{Content Registration}
Content Registration is the initial process of an information retrieval system. It receives the items to be stored and indexed and performs their initial processing. The acquisition policy can be either a ``pull'' or a ``push'' process. In the \emph{pull} process the system inspects other locations to retrieve the items (e.g., web crawling). In the \emph{push} process the items are delivered to the IR system. This typically means that another system, different from the IR system itself, writes files to a directory that is monitored so that new items can be detected. The Content Registration module usually checks whether an item has already been processed by the system. This is accomplished by creating a unique signature key that represents the content of the item. The most common methodology is to compute a hash of the complete file.
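The duplicate check described above can be sketched as follows; the choice of SHA-256 as the hash function and of an in-memory set of known signatures are illustrative assumptions, not prescribed by the general model.

```python
import hashlib

# Signatures of items already processed (a real system would persist these).
seen_signatures = set()

def item_signature(data: bytes) -> str:
    """Create a unique signature key for an item's raw content."""
    return hashlib.sha256(data).hexdigest()

def register(data: bytes) -> bool:
    """Register an item; return False if it was already processed."""
    sig = item_signature(data)
    if sig in seen_signatures:
        return False
    seen_signatures.add(sig)
    return True
```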

\subsection{Content Analysis}
The Content Analysis (Figure \ref{fig:general_ir_architecture_content_analysis}) process takes as input the items gathered by the Content Registration.

\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/general_ir_architecture_content_analysis.eps}
	\caption{Operations involved in Content Analysis.}
	\label{fig:general_ir_architecture_content_analysis}
  \end{center}
\end{figure} 

This subsystem is responsible for the extraction and transformation of the information that will actually become part of the index. Content Analysis produces several metadata fields that enrich the description of the registered items. Any item information that should be treated as metadata, such as the date and time of creation, needs to be placed in the appropriate metadata field. In the case of items formatted in markup languages (e.g. HTML/XML), all the markup must be stripped so that only continuous text remains. The next step can be the \emph{standardization} of the text format: this can be done by first inferring the language of the text and then converting it to Unicode. Once the characters have been standardized to a single encoding, \emph{normalization} is performed. This activity includes operations such as lower-casing, diacritic removal, and ligature expansion.
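The character-level normalization steps just listed can be sketched as below; the use of Python's \texttt{unicodedata} module with NFKD decomposition is one possible implementation choice among several.

```python
import unicodedata

def normalize(text: str) -> str:
    """Lower-case, expand ligatures, and strip diacritics."""
    # NFKD decomposition expands ligatures (e.g. the single glyph "fi")
    # and splits base characters from their combining diacritical marks.
    decomposed = unicodedata.normalize("NFKD", text.lower())
    # Drop the combining marks, keeping only the base characters.
    return "".join(c for c in decomposed if not unicodedata.combining(c))
```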

Once an item has been selected and normalized, the next step is to ``split'' the documents and then identify processing tokens for indexing. The \emph{splitting} phase consists of parsing the item and subdividing it into logical sub-parts that have meaning to the user. This process is used to increase the precision of a search and to optimize the display of results. For example, if we want to index books (so in this case items are books), then splitting can be done by dividing the book item into Title, Author and Main Text. These parts are then inserted into the appropriate index fields. The splitting of the items allows searches to be restricted to a specific part of the item. Another use of splitting and fields arises when a user wants to display the results of a search. A major limitation is the size of the display screen, which constrains the number of items visible for review. To overcome this problem, the user can choose to display only some of the split parts of the original documents in order to browse a higher number of results.
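A minimal illustration of splitting and field-restricted search follows; the field names and the book data are hypothetical examples, and a real system would of course search an index rather than scan items linearly.

```python
# Each item is stored as a dict of named fields rather than a single blob,
# so that a search can be restricted to one logical part of the item.
book = {
    "title": "Moby-Dick",
    "author": "Herman Melville",
    "main_text": "Call me Ishmael. Some years ago...",
}

def field_search(items, field, word):
    """Return the items whose given field contains the word (case-insensitive)."""
    return [it for it in items if word.lower() in it[field].lower()]
```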

Once standardization and splitting have been completed, the information used to create the index needs to be identified. Here, the effort is to analyze and transform the original words contained in the items. The elements that are found are called \emph{tokens}: they are the data that are finally indexed at the end of Content Analysis. Tokens are used instead of words because words are not the most efficient unit on which to base search structures. The first step of token identification consists of distinguishing the words of the items that are suitable for indexing. Generally, systems divide symbols into three classes: valid word symbols (alphabetic characters and numbers), inter-word symbols (blanks, periods and semicolons) and special processing symbols (for example, hyphens). A word is defined as a contiguous set of word symbols bounded by inter-word symbols. In most systems inter-word symbols are non-searchable. Special symbols such as hyphens must be processed in special ways.
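Such a tokenizer can be sketched with a regular expression; the pattern below encodes one possible policy for the special hyphen symbol (keeping hyphenated compounds together), which is an illustrative choice rather than the only valid one.

```python
import re

# Valid word symbols: letters and digits. Everything else acts as an
# inter-word symbol, except the hyphen, which is treated specially:
# here it is kept inside a word when it joins two runs of word symbols.
TOKEN_RE = re.compile(r"[A-Za-z0-9]+(?:-[A-Za-z0-9]+)*")

def tokenize(text: str):
    """Extract tokens: contiguous word symbols bounded by inter-word symbols."""
    return TOKEN_RE.findall(text)
```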

Token identification could be followed by word characterization, which includes morphological analysis. Thus, a word such as ``plane'' is interpreted as an adjective or as a noun according to morphological analysis or even context analysis.

Now that the potential list of processing tokens has been identified, some can be removed by a Stop List or a Stop Algorithm. The objective of the \emph{Stop function} is to delete from the set of searchable processing tokens those that have little relevance to the user. \emph{Stop lists} are commonly found in most systems and consist of words (processing tokens) whose frequency and semantic role make them of no search value. For example, parts of speech such as articles (e.g. ``the'') have no search value and should be discarded. The \emph{Stop algorithm} operates according to Zipf's law, which postulates that, looking at the frequency of occurrence of the unique words across a corpus of items, the majority of unique words occur only a few times, so that the product of the frequency of a word and its rank in the frequency histogram is approximately constant.
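Both mechanisms can be sketched as follows; the stop list shown is a tiny illustrative sample, and the second function simply exposes the frequency-times-rank products that Zipf's law predicts to be roughly constant.

```python
from collections import Counter

# A tiny sample stop list; real systems ship much larger, language-specific ones.
STOP_LIST = {"the", "a", "an", "of", "and", "to", "in"}

def remove_stop_words(tokens):
    """Drop processing tokens that carry no search value."""
    return [t for t in tokens if t not in STOP_LIST]

def zipf_products(tokens):
    """(word, frequency * rank) pairs; roughly constant under Zipf's law."""
    counts = Counter(tokens).most_common()
    return [(word, freq * rank) for rank, (word, freq) in enumerate(counts, start=1)]
```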

One of the last transformations often applied to data before placing it in the searchable data structure is stemming. \emph{Stemming} reduces the diversity of representations of a word to a canonical morphological representation called \emph{stem}. The risk with stemming is that concept discrimination may be lost in the process, causing a decrease in retrieval precision and affecting the ranking ability. The positive aspect of stemming is that it improves recall. A closely related operation is lemmatization. \emph{Lemmatization} is typically accomplished via dictionary look-up, which is also one of the possible ways to implement stemming. Lemmatization, besides modifying word endings or dropping them as in stemming (``cats'' and possibly ``catlike'', ``catty'', etc. are mapped to the root stem ``cat''), can map one word to an entirely different one (for example, it maps ``ate'' to ``eat'' and ``better'' to ``good'').
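The contrast between the two operations can be sketched as below; the suffix stripper is a deliberately crude toy (not the Porter algorithm or any other published stemmer), and the lemma dictionary contains only the examples from the text.

```python
def simple_stem(word: str) -> str:
    """Crude suffix-stripping stemmer, for illustration only."""
    for suffix in ("ies", "es", "s", "ing", "ed"):
        # Require a minimal stem length so short words survive intact.
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

# Lemmatization is a dictionary look-up, so it can also handle
# irregular forms that no suffix rule could capture.
LEMMA_DICT = {"cats": "cat", "ate": "eat", "better": "good"}

def lemmatize(word: str) -> str:
    return LEMMA_DICT.get(word, word)
```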

\subsection{Content Indexation}
This phase takes as input the processed tokens identified from the registered items. Its goal is to transform the received tokens into the searchable data structure. The index is what really defines an item, even more than its original content, because the primary mechanism to retrieve an item is a search of the index: if there are concepts in the items that are not reflected in the index, then a user will not find those items when searching for those concepts. In addition to the mapping of concepts to the searchable data structure, the indexing process may attempt to assign a weight expressing how much an item discusses a particular concept. This weight is used in the search phase to rank the retrieved documents, the aim being to place the items most likely to be relevant higher in the list of retrieved documents.
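A minimal sketch of such a searchable data structure is the frequency-weighted inverted index below: it maps each token to the items containing it, together with the raw in-item frequency that later weighting can build on.

```python
from collections import defaultdict

def build_inverted_index(items):
    """Map each token to {item_id: frequency} for the items containing it."""
    index = defaultdict(dict)
    for item_id, tokens in items.items():
        for tok in tokens:
            # Count how often the token occurs in this item.
            index[tok][item_id] = index[tok].get(item_id, 0) + 1
    return index
```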

In a weighted index system, each index term receives a weight (a positive scalar) that indicates the degree to which that term represents the related concept in the original item. The most direct and obvious quantity to use in weighting a term is the frequency of occurrence of that term in the item. The query process uses the weights assigned to the terms that are present in the query to determine a scalar value for each item in the collection. This value is called \emph{score} and it is used to predict the likelihood that a retrieved item satisfies the user query. There are several approaches to generating the searchable index. Here we discuss the statistical approach, which is used in the rest of this work and is the most prevalent in commercial systems. The basis of this approach is the use of the frequency of occurrence of tokens. The possible statistical models applied to the tokens are probabilistic, Bayesian and vector space. We now illustrate the vector space model, which is adopted in the rest of this work.

The \emph{Vector Space Model} approach is based on a vector model. The semantics of every item are represented as a vector. Each component of the vector represents a term in the vocabulary, so a vector has the same dimension as the term vocabulary. There are two possible domains of values for the vector's components: binary and weighted. Under the binary approach, the domain contains the values one and zero, indicating whether the term is present in the item or not. In the weighted approach, the domain is typically the set of all positive real numbers. The value for each term represents the relative importance of that term in representing the semantics of the item. A weighted vector acts like a binary vector but provides a range of values that accommodates the variance in the relative importance of a term in representing the semantics of the item. Moreover, the use of weights also provides a basis for determining the rank of an item. Weights are determined using the classical tf-idf weighting, which will be discussed in Section \ref{solr-searching}.
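The construction of weighted document vectors can be sketched as follows; the particular tf-idf variant shown (raw term frequency multiplied by $\log(N/\mathit{df})$) is one common formulation chosen for illustration, not necessarily the exact scheme used later in this work.

```python
import math

def tfidf_vectors(docs):
    """Build a weighted vector per document over the shared term vocabulary."""
    vocab = sorted({t for toks in docs.values() for t in toks})
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = {t: sum(1 for toks in docs.values() if t in toks) for t in vocab}
    vectors = {}
    for doc_id, toks in docs.items():
        # One component per vocabulary term: tf * log(N / df), 0 if absent.
        vectors[doc_id] = [
            toks.count(t) * math.log(n / df[t]) if t in toks else 0.0
            for t in vocab
        ]
    return vocab, vectors
```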

\subsection{Search}
The information retrieval process continues, after Content Registration, Content Analysis and Content Indexation, with the search against the index. The selection and ranking of the items are accomplished via similarity measures that calculate the similarity between the user's search statement (user query) and the weighted stored representation of the semantics of each item in the index. Relevance feedback can also help a user to enhance the search by selecting items from previously ranked lists. This technique uses information from items judged as relevant or not relevant to produce an expanded ranked list.

The search statements use Boolean Logic and/or Natural Language to express user needs. The typical search statement consists of a few words that the user selects to represent his information need. The user may have the ability to assign different levels of importance to different concepts in the statement (query term boosting).

Then, the search statement is parsed by the system and used to search against the index. This process is similar to the indexing of an item described before.

The next step is to calculate the similarity between the user's search statement and the indexed items. In the Vector Space Model, both the user query and the indexed weighted documents are treated as vectors where each element represents a different term. A variety of different measures can be used to calculate the similarity between an item and the search statement. A common characteristic of all similarity measures is that the result of the formula increases as the items become more similar. An example of similarity measure is the \emph{cosine similarity}, which calculates the cosine of the angle between the query vector and each indexed document vector. As the cosine approaches ``1'', the two vectors approach the same direction, so the item and the query represent the same concept.
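The cosine measure just described can be sketched directly from its definition (dot product divided by the product of the vector norms):

```python
import math

def cosine_similarity(q, d):
    """Cosine of the angle between a query vector and a document vector."""
    dot = sum(qi * di for qi, di in zip(q, d))
    norm = math.sqrt(sum(qi * qi for qi in q)) * math.sqrt(sum(di * di for di in d))
    # A zero vector has no direction, so its similarity is defined as 0 here.
    return dot / norm if norm else 0.0
```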

\subsection{Display}
Once a search has been completed, the system has identified an ordered list of items relevant with respect to the given user query. The next step is to present this information to the user. This step has a significant impact on the user's ability to find what he is really looking for. There are two stages of information display: the first defines how the list of retrieved documents is presented to the user, so that he can easily find what is important to him; the second defines how an individual item is presented once the user has selected it. A situation where the user can satisfy his information need without accessing a specific item in the retrieved list is considered exceptional.

What is obvious is that the ``hit list'' returned by the system contains the most relevant documents according to the system. What is less obvious is how long this list should be. Most systems display the hit list as a sequential list of retrieved documents organized in pages of 10 results per page, but the user typically reviews at most two or three pages (20--30 results). The list can include the title of the retrieved items, a ``snippet'' of text coming from the item and a graphic overview of the item. Another interesting feature the system can offer is the highlighting, within the search results, of the terms that appear in the user query.