Solr is an open source enterprise search server developed by the Apache Software Foundation.
In addition to the standard ability to return a list of search results for a query, it offers numerous other features such as result highlighting, faceted navigation, query spell correction and auto-suggestion. The core technology behind Solr is Apache Lucene, an open source, high-performance full-text search engine library. Unlike Lucene, which is just a code library, Solr is a search \emph{server} platform that is easily configurable with XML configuration files. To use Lucene directly, one has to write code to store and query the index.

\noindent The major features of Lucene are the following~\cite{Smiley:2009:SES:1795535}:
\begin{itemize}
 \item a text-based inverted index persistent storage for efficient retrieval of documents by indexed terms,
 \item a rich set of text analyzers to transform a string of text into a series of tokens, which are the fundamental units indexed and searched,
 \item a query syntax with a parser and a variety of query types, from a simple term lookup to exotic fuzzy matches,
 \item a good scoring algorithm based on Information Retrieval principles to return the most relevant candidates first.
\end{itemize}
\noindent Solr can be described as the ``server-ization of Lucene'', that is, Solr makes Lucene's search services easier for clients to use.
Solr runs within a servlet container, such as Apache Tomcat. Clients communicate with Solr by means of HTTP requests, following the Representational State Transfer (REST) paradigm. The server and schema properties are configured by XML files. The major features of Solr are the following:
\begin{itemize}
 \item HTTP request processing for indexing and querying documents,
 \item XML configuration files for the schema and the server itself,
 \item access to Lucene's text analysis library, configurable through XML,
 \item the notion of field type.
\end{itemize}

The Solr HTTP interface has two main access points: the \emph{update} URL for index management and the \emph{select} URL for query submission.
An index is structured in \emph{fields}; each entry in the index is a \emph{document}. New documents are added to the index through an HTTP request using the POST method. The request body includes the XML representation of the document as index fields. Each document has a unique identifier, specified in the XML representation by a special field. Documents can be submitted in several formats, such as XML, CSV or ``rich documents'' like Word files. It is also possible to define special routines to import data with complex structures from relational databases.
Moreover, any client able to submit HTTP requests can communicate with the Solr server. As soon as the indexing is completed, it is possible to issue a new HTTP request pointing to the select URL to submit a query to the index.
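As an illustrative sketch (the server address and the field names, such as \emph{id} and \emph{title}, are hypothetical), a document could be added by sending a POST request to the update URL with an XML body along these lines:

```xml
<!-- Hypothetical POST body sent to http://localhost:8983/solr/update -->
<add>
  <doc>
    <field name="id">doc-001</field>
    <field name="title">Introduction to Solr</field>
    <field name="text">Solr is an open source enterprise search server.</field>
  </doc>
</add>
```

Here the field named \emph{id} plays the role of the unique identifier; once a commit is issued, the document becomes searchable through the select URL.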

Figure~\ref{fig:solr_design} shows a diagram summing up the possible inputs and outputs managed by Solr and the general composition of its index.
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=0.8\textwidth]{./pictures/solr_design.eps}
	\caption{Diagram summing up all the possible inputs and outputs and the composition of a Solr index.}
	\label{fig:solr_design}
  \end{center}
\end{figure} 

\noindent Developing a search engine with Solr essentially involves three stages:
\begin{itemize}
 \item \emph{schema design}: maps the original schema of the considered data into a Solr index, which is necessarily flat (one could face the task of mapping a relational database into a Solr index),
  \item \emph{schema definition}: configures the \emph{schema.xml} configuration file where the index elements are defined; this file includes the definitions of the fields and of the field types,
  \item \emph{text analysis configuration}: configures the way the text is analyzed and processed (for example, tokenization and normalization) before indexing; this configuration influences the document retrieval.
\end{itemize}

\noindent The following sections explain in more detail some fundamental features of Solr usage: Section~\ref{solr-index-desing} introduces the concepts of field and field type in index design, Section~\ref{solr-text-analysis} covers the most important text analysis operations, and Section~\ref{solr-searching} provides more details about the possible queries for searching the index, the Solr response to queries and the factors influencing the score of retrieved documents.

\subsection{Design and Index Definition}
\label{solr-index-desing}
A database and a search index have several conceptual differences. An index is like a very large relational table, but it has no support for relational queries (joins). Other differences are:

\begin{itemize}
 \item in an index the search is done by term and not only by substring matching; this means that it is possible to find different forms of the same word,
 \item Solr, and more generally any search engine, retrieves a list of results ordered by relevance to a generic query, rather than the unordered set of documents produced by a very specific structured request.
\end{itemize}
\noindent Another important factor to keep in mind when designing an index schema is that all the data needed to retrieve a certain document must be present in the document representation itself, since relational queries are not available.

When the index design is done, the next task is defining the actual schema. The first step is defining the field types. A \emph{field type} is a data type that can be used in the index. A field type declares its type (boolean, number, date, etc.), has a unique name and is implemented by a Java class. Next, the fields are defined. A \emph{field} is the atomic cell where data coming from documents is saved. Each field has a unique name, a type chosen among the field types, plus other optional configurations. Field values may be ``stored'' or ``indexed''. A \emph{stored} field can be retrieved during search and then visualized, but it is not searchable; an \emph{indexed} field is searchable, but its content cannot be retrieved for display. A field may be indexed but not stored, stored but not indexed, or both indexed and stored.
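As an illustrative sketch (the field and type names are hypothetical, while the attributes follow the conventions of \emph{schema.xml}), a fragment of a schema definition might look like this:

```xml
<!-- Hypothetical fragment of schema.xml -->
<types>
  <!-- each field type has a unique name and is implemented by a Java class -->
  <fieldType name="string" class="solr.StrField"/>
  <fieldType name="text" class="solr.TextField"/>
</types>
<fields>
  <!-- indexed and stored: searchable and displayable -->
  <field name="id" type="string" indexed="true" stored="true"/>
  <field name="title" type="text" indexed="true" stored="true"/>
  <!-- indexed only: searchable, but not returned in results -->
  <field name="body" type="text" indexed="true" stored="false"/>
  <!-- stored only: returned in results, but not searchable -->
  <field name="url" type="string" indexed="false" stored="true"/>
</fields>
<!-- the special field acting as unique identifier -->
<uniqueKey>id</uniqueKey>
```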

\subsection{Text Analysis}
\label{solr-text-analysis}

Text analysis covers the most important techniques for processing the raw input text: tokenization, case normalization, stemming, synonyms, etc. The goal of this stage is to analyze the text and transform it into a sequence of terms. A \emph{term} is the core atomic unit saved into a field of a Solr index. Terms are what Solr matches at query time.

Thanks to Solr and its configurable infrastructure, the text analysis configuration is straightforward. Each field type has two \emph{analysis chains} attached, one for the indexing phase and one for queries; each chain defines an ordered sequence of analysis steps that convert the original text into a sequence of terms. Each step has an associated \emph{analyzer}. There are several types of analyzer performing many different processing tasks: they tokenize the text, filter tokens, add terms and modify terms.
The first analyzer of an analysis chain is always a \emph{tokenizer}; its job is to divide the original text into tokens using a simple algorithm (for example, generating a new token at every white space). A \emph{token} is the smallest unit matched against a query during search. After the tokenizer, the remaining analyzers are defined as \emph{filters}, and their job is to further transform the tokens. The actual transformations performed depend on the application and are at the designer's discretion. In general, an analyzer is a TokenStream factory; a TokenStream iterates over tokens, and its input is always a character stream.
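As a sketch of such a configuration (the particular tokenizer and filters are only an example), a field type in \emph{schema.xml} might attach its two analysis chains as follows:

```xml
<!-- Hypothetical field type with separate index- and query-time chains -->
<fieldType name="text" class="solr.TextField">
  <analyzer type="index">
    <!-- the tokenizer always comes first: here, split on white space -->
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- filters further transform the tokens -->
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="English"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"/>
  </analyzer>
</fieldType>
```

Note that the two chains need not coincide: in this sketch, stemming is applied only at indexing time, while synonym expansion is applied only to queries.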

Figure~\ref{fig:solr_analizzatori} shows a diagram depicting some of the analyzers offered by Solr and their hierarchical organization.
\begin{figure}[htbp]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/solr_analizzatori.eps}
	\caption{Hierarchical organization of Solr analyzers.}
	\label{fig:solr_analizzatori}
  \end{center}
\end{figure} 

\subsection{Documents Search}
\label{solr-searching}
After the indexing phase, it is possible to submit queries to the index. Solr has a very useful and easy-to-use web-based interface. There are several parameters to refine the queries; here we list only some of them:

\begin{itemize}
 \item \emph{q}: the query string provided in input by the user,
 \item \emph{q.op}: specifies whether all or just one term in the query should be present in the document so that it can be retrieved,
 \item \emph{df}: specifies the default search field.
\end{itemize}
\noindent It is possible to use the classical boolean operators AND, OR and NOT, specify sub-expressions, search in a specific field, perform a \emph{phrase query} (a set of terms that must be found all together in a document) or use \emph{score boosting}, which modifies the degree to which a term contributes to the final score of a document.
After submitting the query, Solr returns as output an XML document containing the list of retrieved documents and their scores. It is also possible to highlight the searched terms among the returned results.
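As an illustrative sketch (the document contents and score values are hypothetical), the XML response to a select request might look like this:

```xml
<!-- Hypothetical response to a select request -->
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">3</int>
  </lst>
  <result name="response" numFound="2" start="0" maxScore="1.37">
    <doc>
      <float name="score">1.37</float>
      <str name="id">doc-001</str>
      <str name="title">Introduction to Solr</str>
    </doc>
    <doc>
      <float name="score">0.92</float>
      <str name="id">doc-042</str>
      <str name="title">Searching with Lucene</str>
    </doc>
  </result>
</response>
```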

The query processing and parsing in Solr is done through request handlers. A \emph{request handler} performs the search and allows configuring the search parameters and registering additional components, such as highlighting.
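As a sketch (the default parameter values are hypothetical), a request handler with some default search parameters might be registered in \emph{solrconfig.xml} as follows:

```xml
<!-- Hypothetical fragment of solrconfig.xml registering a request handler -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="df">text</str>   <!-- default search field -->
    <str name="q.op">OR</str>   <!-- default boolean operator -->
    <str name="hl">true</str>   <!-- enable the highlighting component -->
  </lst>
</requestHandler>
```

Clients may still override these defaults by passing the corresponding parameters explicitly in the HTTP request.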

Another important aspect concerns how Lucene and Solr compute the score of a document with respect to a query. Lucene combines the Boolean model (BM) with the Vector Space Model (VSM): the documents ``approved'' by the BM are scored by the VSM.
In the VSM, documents and queries are represented as weighted vectors in a multi-dimensional space, where each term of the whole vocabulary is a dimension (an axis), and weights are tf-idf values. The VSM score of a document \emph{d} for a query \emph{q} is obtained through the \emph{Cosine Similarity} of the weighted vectors \emph{V(q)} and \emph{V(d)}:

\begin{center}
\begin{math}
   CosineSimilarity(q,d) = \frac{V(q) \cdot V(d)}{\left|{V(q)}\right| \left|{V(d)}\right|}
\end{math}
\end{center}

\noindent where $ V(q) \cdot V(d) $ is the dot product of the weighted vectors, and $ \left|{V(q)}\right| $ and $ \left|{V(d)}\right| $ are their Euclidean norms. Lucene refines and simplifies this formula, since terms and documents are fielded. The practical scoring function used by Lucene is the following:

\begin{displaymath}
score(q,d) = coord(q,d) \, \times \, queryNorm(q) \, \times \, \sum_{t \in q} ( tf(t \in d) \, \times \, idf(t)^2 \, \times \, norm(t,d) )
\end{displaymath}

\noindent where,

\begin{itemize}
 \item $coord(q,d)$, is a score factor based on how many terms of the query $q$ are found in the given document $d$. A document that contains more query terms will receive a higher score than a document containing fewer query terms,
 \item $queryNorm(q)$, is a normalizing factor used to make scores between queries comparable. This factor does not affect document ranking, since all ranked documents are multiplied by the same factor, but rather just attempts to make scores from different queries comparable,
 \item $tf(t \in d)$ is the term's frequency, defined as the number of times $t$ appears in the currently scored document $d$. Documents that have more occurrences of a given term receive a higher score. The default computation for $tf(t \in d)$ is $\sqrt{frequency}$,
 \item $idf(t)$ stands for Inverse Document Frequency. This value is related to the inverse of \emph{docFreq} (the number of documents in which the term $t$ appears), so rarer terms give a higher contribution to the total score. The default computation is $1 + \log(\frac{numDocs}{docFreq+1})$,
 \item $norm(t,d)$ encapsulates the length normalization of the matching field: the shorter the field is (measured in number of indexed terms), the greater the matching document's score will be.
\end{itemize}
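\noindent As a small worked example of the default $idf(t)$ computation (the numbers are chosen only for illustration), in a collection of $1000$ documents a rare term appearing in $9$ of them gets

\begin{displaymath}
idf(t) = 1 + \log\left(\frac{1000}{9+1}\right) = 1 + \log(100) \approx 5.61,
\end{displaymath}

\noindent while a very common term appearing in $999$ documents gets $idf(t) = 1 + \log(\frac{1000}{999+1}) = 1$. Since $idf(t)$ is squared in the scoring function, the rare term weighs roughly $31$ times more than the common one.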
