%
% NOTE: use \noindent only if you REALLY don't want the paragraph to be indented
% 
SMILA is an extensible framework for building search applications that access different data sources containing unstructured information. SMILA provides an essential infrastructure of fundamental components and services, which can be extended and reused in your own application. We now give a brief overview of the behavior of these components.

\subsection{Architecture}
% N.B. the ``ht'' option on the figure environment tells LaTeX: ``try to place the figure HERE (here); if that fails, place it at the TOP (top) of a page''
\begin{figure}[ht]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/SMILA_Architecture_Overview}
	\caption{SMILA general architecture}
	\label{fig:SMILA_Architecture_Overview}
  \end{center}
\end{figure}    
Figure~\ref{fig:SMILA_Architecture_Overview} shows the general SMILA architecture. The infrastructure can be divided into two parts, each corresponding to a different phase of data processing. The first phase is called pre-processing (left section of the diagram) and the second phase is called information retrieval (right section). In a search application these two phases are commonly referred to as \emph{indexing} and \emph{search}, respectively.

\paragraph{Indexing:} 
The indexing phase covers gathering raw information from the data source. The gathered information generally includes the metadata and the content of the documents, possibly together with security-related information such as access rights.

The following summarizes the purpose and behavior of the main components.
\begin{itemize}
    \item \emph{Agent/Crawler}: information gathering can be done in one of two ways: push or pull. Agents follow the push model. An agent is always active and constantly monitors the data source: as soon as it detects a difference between an object in the data source and the copy held in the SMILA storage, it sends that object to be processed. This is useful for dynamic data sources whose information changes rapidly and/or constantly over time.

Crawlers gather information in the pull manner. A crawler is started manually and navigates through the data source, sending the gathered information to the system for processing. Once finished, the crawler stops and must be restarted manually if needed. This is well suited to one-time indexing, when we know that the data will not change.
	\item \emph{Storage}: this is where the information is stored. SMILA provides two kinds of storage: the Record Storage, which stores the metadata and access rights of the documents, and the Binary Storage, which stores the binary content of a document. This separation is useful because indexing operations often use only the metadata, and copying the binary content of a record through the pipeline is expensive, since it is usually much larger than the extracted text. The system can therefore push only the id and the metadata needed to identify a record through the pipeline, and retrieve the binary content only when it is actually needed.
	\item \emph{Delta Indexing}: this module stores information about the last modification of every document and can determine which documents have changed since the last indexing run. This improves performance during the crawling phase, because only the documents that actually need processing are forwarded to the next parts of the workflow chain.
	\item \emph{Ontology Store}: a storage dedicated to the management of ontologies.
	\item \emph{Blackboard}: the Blackboard is an interface between the system services and the storages. It is an abstraction layer whose purpose is to hide the persistence technology from the services, so that they do not need to know how the system stores records in order to manipulate their data. Complete record data exists only on the Blackboard and is not pushed through the workflow engine itself; only the ids of the records travel through the workflow, and they can then be used to access the record's data via the Blackboard API.
	\item \emph{Router}: once a record has been stored, the Router creates a message and sends it to a queue.
	\item \emph{Queue}: the queue processes the messages sent by the Router and acts as a bridge between the information gathering section and the information processing section. The queue is a fundamental component that increases the scalability of the system: messages can be processed remotely, and it is easy to distribute their processing across multiple cluster nodes. The queue introduces asynchronous execution of the business logic. The technology used is JMS (Java Message Service); SMILA includes ActiveMQ as the default JMS provider.
	\item \emph{Listener}: this module pulls messages from the queue and starts the appropriate BPEL workflow depending on the configuration rules. Both the Listener and the Router are configured with a set of rules specifying how messages are dispatched.
	\item \emph{BPEL Engine}: here the user can define arbitrary workflows in the BPEL (Business Process Execution Language) format. Such a workflow contains the business logic of the application and performs processing operations on the records. In SMILA terminology, a workflow is called a \emph{pipeline} and the modules it is composed of are called \emph{pipelets}. A pipelet is a reusable, configurable component and can be orchestrated like any other BPEL service. A pipelet instance is not shared across multiple pipelines, and multiple invocations of the same pipelet within the same pipeline do not share an instance either. A single instance can, however, be accessed by multiple threads, so pipelets must be developed in a thread-safe way.
\end{itemize}
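The thread-safety requirement for pipelets mentioned above can be illustrated with a minimal sketch. The \texttt{Pipelet} interface and class names below are hypothetical and only approximate the idea; the real SMILA API differs. The key point is that configuration is written once at startup, while per-call state lives only in local variables.

```java
import java.util.List;
import java.util.Map;

// Hypothetical minimal pipelet interface (the real SMILA API differs):
// a pipelet is configured once, then processes batches of record ids.
interface Pipelet {
    void configure(Map<String, String> configuration);
    List<String> process(List<String> recordIds);
}

// A pipelet instance may be invoked by several threads concurrently,
// so it keeps no mutable per-request state in instance fields.
class TitleAttributePipelet implements Pipelet {
    // Written once during configuration, only read afterwards.
    private volatile String attributeName = "Title";

    @Override
    public void configure(Map<String, String> configuration) {
        String attr = configuration.get("attribute");
        if (attr != null) {
            attributeName = attr;
        }
    }

    @Override
    public List<String> process(List<String> recordIds) {
        // Read the shared configuration once into a local variable.
        String attr = attributeName;
        for (String id : recordIds) {
            // In a real pipelet, record `id` would be loaded via the
            // Blackboard API and its attribute `attr` updated here.
        }
        return recordIds;
    }
}
```

Because all per-call data is local to \texttt{process}, concurrent invocations on the same instance cannot interfere with each other.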

\paragraph{Search:}
The search phase provides access to the previously indexed information. This process is synchronous, so an external load-balancing component is needed to achieve horizontal scalability. In this phase, too, it is possible to define a workflow executing business logic managed by a BPEL engine.

\subsection{Data Model}

\begin{figure}[ht]
  \begin{center}
	\includegraphics[width=1.0\textwidth]{./pictures/SMILA_Datamodel}
	\caption{SMILA data model}
	\label{fig:SMILA_Datamodel}
  \end{center}
\end{figure} 
Data in SMILA are represented by records composed of metadata and attachments. A record can correspond to a document or to any other resource to be indexed and searched.

Metadata consist of Values organized in Maps (key-value associations) and Sequences (lists of arbitrary elements). Values can be of primitive Java types or dates. Maps and Sequences can be nested arbitrarily. Attachments contain binary data (byte arrays).

The elements contained in metadata can be interpreted in various ways:
\begin{itemize}
    \item \emph{Attributes}: this is the most common situation, in which the record represents an object of the data source to be processed or retrieved by a search. For example, typical attributes of a web page are its URL, its title, and its textual content. Attributes are therefore defined by the specific application domain.
	\item \emph{Parameters}: for some record types, attributes are not adequate. For example, in the search phase a record does not represent an object of the data source to process, but a search request object. In this case the record does not contain data of an application-domain object; it contains the search parameters needed to configure the request.
	\item \emph{Annotations}: the information contained in a record can be enriched during the processing phase by adding attributes beyond those retrieved during the crawling phase. These new attributes are called annotations.
	\item \emph{System attributes}: these are attributes needed by SMILA to coordinate the processing of a record. Their names begin with an underscore to distinguish them from application-specific attributes. System attributes include the record ID, which is unique for every processed record; there is no predefined format, so the ID can be any string. Every record must also contain another ID identifying the data source from which it was generated (for example, the crawler definition).
\end{itemize}
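The underscore convention for system attributes makes it easy to separate coordination data from application-domain data. The following sketch, with invented attribute names, shows one way to perform that separation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// System attribute names begin with an underscore; this helper splits
// them from application-specific attributes. Attribute names below are
// invented for illustration only.
class AttributeSplitter {
    static Map<String, Object> systemAttributes(Map<String, Object> metadata) {
        Map<String, Object> result = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : metadata.entrySet()) {
            if (e.getKey().startsWith("_")) {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }
}
```

For a metadata map containing, say, \texttt{\_recordid}, \texttt{\_source}, and \texttt{Title}, the helper returns only the two underscore-prefixed entries.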
