\documentclass{memoir}


\usepackage{pgf-pie}

% urls
\usepackage[hyphens]{url}

% use of acronyms
\usepackage[nolist,withpage]{acronym}

% use of multirow
\usepackage{multirow}

% nice fonts for the math sets
\usepackage{amsfonts}

\usepackage[francais,english]{babel}

\usepackage[T1]{fontenc}

\begin{hyphenrules}{francais}
\hyphenation{pr\'e-f\'e-ren-ces}
\hyphenation{fra-me-work}
\hyphenation{questions-r\'eponses}
\end{hyphenrules}

\usepackage{pgfplots}

\usepackage{graphicx}
\usepackage[hang,small,bf]{caption}
\usepackage{subfig}

\usepackage{lscape}
\usepackage{longtable}
\usepackage{rotating}

\usepackage{abstract}

\usepackage{tikz}
\usetikzlibrary{fit, arrows, decorations.markings,positioning,trees}

% use of listings
\usepackage{listings}
\usepackage{framed}
\usepackage{MnSymbol}
\lstset{
	language=SQL,
	basicstyle=\ttfamily\fontsize{8}{11}\selectfont,
	aboveskip=6pt plus 2pt, 
	belowskip=2pt plus 8pt,
	morekeywords={PREFIX,java,rdf,rdfs,url,xsd},
	numbers=left,
	numberstyle=\tiny,
}


% use of minitoc
\usepackage{shorttoc,titletoc}


\newcommand\partialtocname{Outline}
\newcommand\ToCrule{\noindent\rule[5pt]{\textwidth}{1.3pt}}
\newcommand\ToCtitle{{\large\bfseries\partialtocname}\vskip2pt\ToCrule}
\makeatletter
\newcommand\Mprintcontents{%
  \ToCtitle
  \ttl@printlist[chapters]{toc}{}{1}{}\par\nobreak
  \ToCrule}
\makeatother


\newcommand\TODO[1]{{\textcolor{red}{\underline{TODO}: #1\\}}}

\hyphenation{spe-ci-fic}


\setcounter{secnumdepth}{2}
\setcounter{tocdepth}{2}

\newcommand{\Chapter}[1]{\chapter{#1} \setcounter{figure}{1}}


\begin{document}

\begin{acronym}[TDMA]
\acro{AI}{Artificial Intelligence}
\acro{BI}{Business Intelligence}
\acro{CMS}{Content Management System}
\acro{CRM}{Customer Relationship Management}
\acro{DBMS}{Database Management System}
\acro{DSS}{Decision Support Systems}
\acro{ER}{Entity/Relationship}
\acro{IDF}{Inverse document frequency}
\acro{IE}{Information Extraction}
\acro{IR}{Information Retrieval}
\acro{MDX}{Multidimensional expression}
\acro{NER}{Named entity recognizer}
\acro{NL}{Natural Language}
\acro{NLP}{Natural Language Processing}
\acro{OLAP}{Online analytical Processing}
\acro{QA}[Q\&A]{Question Answering}
\acro{RDF}{Resource Description Framework}
\acro{SPARQL}{SPARQL Protocol and RDF Query Language}
\acro{SQL}{Structured query language}
\end{acronym}

\chapter{Personalized and Contextual Q\&A}
\label{sec:chapter-personalized}
\startcontents[chapters]
\Mprintcontents

The previous chapter concluded that challenging systems should not only offer
\emph{personalized} results, but also \emph{non-textual} pieces of
information, on the basis of ill-formed queries, or even a non-textual
formulation of an information need.
Accordingly, we present a personalized Q\&A system that accepts queries
expressed in NL or as keywords, and offers results in the form of charts or
tables of values.
This system has been fully implemented, and details are
provided in this chapter.


The notion of \emph{context} of a given entity is defined by the set of
entities that interact with it. Thus, the context of a user is defined
by her environment -- the application she is using, the documents she creates,
the searches she performs, etc.
It must also take into account her \emph{social network}, i.e. other users who
share a common interest or goal, which can for instance be measured by some
distance metric.
These notions are further detailed in~\cite{dasfaa12} through an application to
auto-completion of BI queries.
The personalization process presented in this work takes into account
not only characteristics of the user -- her profile -- but also so-called
\emph{preferences}, which are essentially used to filter and disambiguate the
vast amount of potential resources of interest for the current user. 


This chapter is organized as follows:
first, we present the Q\&A framework itself, its architecture and how
personalized results are rendered.
Secondly, we detail the extensibility of the framework from the point of view of
an administrator who wants to extend its current capabilities.
Thirdly, we elaborate on performance considerations and more specifically on the
multithreaded implementation of the framework.
Last but not least, we detail how we have integrated the notion of context
introduced above into our system, and provide as an example further insight on
\emph{usage statistics}.

\begin{figure}[ht]
\centering
%\includegraphics[width=\textwidth]{img/archi2}
\includegraphics[trim=310pt 170pt 290pt 180pt,scale=0.99]{img/archi-new}
\caption{General architecture of the answering system}
\label{fig:personalized-general-architecture}
\end{figure}

\section{Q\&A Framework}
\label{sec:chap-3-qa-framework}
As pointed out in the state of the art in
chapter~\ref{sec:chapter-state-of-the-art}, a limitation of many historic
Q\&A systems is that they are tailored to specific domains and sometimes
datasets.
Our proposal is a framework with standard linguistic \emph{features} that can
be efficiently extended to specific domain applications. We will define these
features in chapter~\ref{sec:chapter-patterns}
page~\pageref{sec:chapter-patterns}.

The general architecture of the system is represented in
Figure~\ref{fig:personalized-general-architecture}.
The upper ribbon corresponds to the different available front-ends. Currently,
two front-ends have been implemented: 
\begin{itemize}
  \item a desktop application, available as an HTML5 application;
  \item an iPhone/iPad native application.
\end{itemize}  
The interested reader will find screenshots of the application in
Appendix~\ref{sec:appendix-screenshots}
page~\pageref{sec:appendix-screenshots}.
The second horizontal ribbon (entitled ``Answering System'' in
Figure~\ref{fig:personalized-general-architecture}) corresponds to the main
processing of the query (sent by front-end applications) down to the underlying
search engines. The lower ribbon corresponds to the different search engines
involved in retrieving answers from the different data sources integrated in
the system.


\subsection{Authentication}
\label{sec:personalized-authentification}
Authenticating on the platform consists in providing a `user name/password'
pair on the front-end.
Users who want to get personalized results must log on to the system. 
Non-authenticated users can however benefit from generic results, when the
underlying plug-ins do not explicitly require users to be authenticated.
Thus, some results are common to all users and some are specific to
authenticated users (according to their profile and/or situation, as detailed
later).

\subsubsection{Social network of users}
Users have properties that are of interest for offering accurate personalized
results. 
Such properties can be, for instance, the user's job title, her work location
(country and city), her branch, etc. 
For instance, users of a search client on a mobile device (e.g.
iPhone\texttrademark) provide queries of minimal length, but expect them to be
augmented with contextual information, like their geo-location. 
This information can be provided by
corporate directory services, for instance via the LDAP\footnote{Lightweight
Directory Access Protocol} exchange format.

Users are commonly represented in a graph called a \emph{social network}, where
nodes represent users and edges the relationships that they share. These
relationships can be hierarchical (for instance the `reports-to' relationship)
or not (like `has-business-contact').
\begin{figure}[h!]
\centering
\begin{tikzpicture}[->,>=stealth',shorten <=0.5pt,
    main/.style={draw,thick,rounded corners, minimum width=1cm}]
    \node[main] at (0,0) (a) {Robert Dumant};
    \node[main] at (-3,2) (b) {Bruno Anderson};
    \node[main] at (0,2.5) (c) {Peter Corner};
    \node[main] at (4,2) (d) {Arthur Burnet};
    \node[main] at (4.5,0.5) (e) {John Brown};
    \node[main] at (4.5,-0.5) (f) {Alan tucker};
    \node[main] at (4,-2) (g) {Bill Pegg};
    \node[main] at (2,-3) (h) {Mike Brown};
    \node[main] at (-3.5,-2) (i) {Marge Collins};
    \node[main] at (-2.5,0.5) (j) {Sue Watzer};
    \node[main] at (-5,0) (k) {Lisa Bryant};
    \node[main] at (-4.5,-3.5) (l) {Felipe Alverez};
    \node[main] at (-0.5,-3) (m) {Nick Cox};
    \node[main] at (0,-5.5) (n) {Joan Kellog};
    \node[main] at (-3.5,-7) (o) {Scott Miller};
    \node[main] at (0,-8) (p) {Christian Major};
    \node[main] at (3.5,-7) (q) {Albert Lew};
    \node[main] at (-4.5,-5.5) (r) {Gianni Motta};
	\path (b) edge node {reportsTo} (a);
	\path (c) edge node {reportsTo} (a);
	\path (d) edge node {reportsTo} (a);
	\path (e) edge node {reportsTo} (a);
	\path (f) edge node {reportsTo} (a);
	\path (g) edge node {reportsTo} (a);
	\path (h) edge node {reportsTo} (a);
	\path (i) edge node {reportsTo} (a);
	\path (j) edge node {reportsTo} (i);
	\path (k) edge node {reportsTo} (i);
	\path (l) edge node {reportsTo} (i);
	\path (m) edge node {reportsTo} (i);
	\path (i) edge node {hasBusinessContact} (n);
	\path (h) edge node {hasBusinessContact} (n);
	\path (o) edge node {hasBusinessContact} (n);
	\path (p) edge node {hasBusinessContact} (n);
	\path (q) edge node {hasBusinessContact} (n);
	\path (r) edge node {reportsTo} (l);
\end{tikzpicture}
\caption{Snapshot of a corporate social network of 18 users sharing two types
of relations: `reportsTo' and `hasBusinessContact'}
\label{fig:personalized-social-network}
\end{figure}
Figure~\ref{fig:personalized-social-network} provides an example of a social
network of some corporate users. This kind of representation makes it easy to
compute distance metrics between users.
Examples of these kinds of metrics will be given in
section~\ref{sec:evaluation-experiments-auto-completion}
page~\pageref{sec:evaluation-experiments-auto-completion} in the context of
auto-completion.
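As an illustration, a simple distance metric over such a graph is the number of
hops on the shortest path between two users, computed with a breadth-first
search. The following sketch is ours (class and method names are illustrative,
not part of the system's API) and treats relations as undirected:

```java
import java.util.*;

// Illustrative sketch: a social network as an undirected graph, with the
// distance between two users defined as the shortest-path length in hops.
public class SocialDistance {
    private final Map<String, Set<String>> edges = new HashMap<>();

    // Add an undirected relation between two users.
    public void relate(String a, String b) {
        edges.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        edges.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    // Breadth-first search: number of hops between two users, or -1 if
    // they are not connected at all.
    public int distance(String from, String to) {
        if (from.equals(to)) return 0;
        Map<String, Integer> seen = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        seen.put(from, 0);
        queue.add(from);
        while (!queue.isEmpty()) {
            String u = queue.poll();
            for (String v : edges.getOrDefault(u, Set.of())) {
                if (!seen.containsKey(v)) {
                    seen.put(v, seen.get(u) + 1);
                    if (v.equals(to)) return seen.get(v);
                    queue.add(v);
                }
            }
        }
        return -1;
    }
}
```

On the network of Figure~\ref{fig:personalized-social-network}, for instance,
`Gianni Motta' is at distance 3 from `Robert Dumant' (via `Felipe Alverez' and
`Marge Collins').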



\subsubsection{Integration of corporate single-sign-on technologies}
In corporate settings, single sign-on (SSO) is a technology that manages
access control to the independent software systems (i.e. applications) that
users are allowed to access.
The two major properties of this technology are:
\begin{itemize}
  \item users have to log in once, and are not asked again for their
  credentials when accessing other applications that belong to the corporate
  network
  \item when users log off from any application, they are logged off from all
  applications that use SSO
\end{itemize}
The front-end applications that have been developed in the context of this
work use SSO. This ensures that any corporate user has access to the Q\&A
system, and is able to experiment with it and possibly provide useful feedback. 

\subsection{Search \& prediction services}
The \emph{search app} component (see Figure~\ref{fig:architecture-front-end})
is responsible for sending users' requests to the back-end system, and for
relaying feedback to the user (e.g. error codes and messages). 
\begin{figure}[h!]
\centering
%\includegraphics[width=12cm]{img/archi-search-app}
\includegraphics[trim=310pt 480pt 290pt 180pt]{img/archi-new-2}
\caption{HTTP/REST communication between the back-end and front-end systems}
\label{fig:architecture-front-end}
\end{figure}
As shown in the figure, the front-end tools send information about the context
(e.g. which kind of device is used, the geo-location of the user, etc.).
The front-end application displays some of the error messages that are caught in
the back-end or front-end system. The \emph{prediction service} introduced here
aims at providing auto-completion of users' queries (e.g. proposing `revenue'
when the user has typed `re').
The prediction mechanism that we have implemented in this context is detailed 
in section~\ref{sec:evaluation-experiments-auto-completion} 
page~\pageref{sec:evaluation-experiments-auto-completion} and
in~\cite{dasfaa12}.
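The basic idea of such a completion service can be sketched as follows: known
terms (e.g. measure and dimension names) are matched against the typed prefix
and ranked by usage frequency. This is a naive illustration under our own
simplified model, not the actual prediction mechanism of the system:

```java
import java.util.*;
import java.util.stream.Collectors;

// Naive sketch of a prediction service: complete a typed prefix against a
// vocabulary of known terms, returning the k most frequent candidates.
public class PrefixCompleter {
    private final Map<String, Integer> frequency = new HashMap<>();

    // Record that a term was used (e.g. extracted from query logs).
    public void observe(String term, int count) {
        frequency.merge(term, count, Integer::sum);
    }

    // Return up to k completions of the prefix, most frequent first.
    public List<String> complete(String prefix, int k) {
        return frequency.entrySet().stream()
            .filter(e -> e.getKey().startsWith(prefix))
            .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
            .limit(k)
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
    }
}
```

With `revenue' observed more often than `region', the prefix `re' would be
completed to `revenue' first.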


\subsubsection{Users' questions}
Users' questions are expressed in NL. We decided not to restrict the way users
formulate queries (i.e. the syntax of \emph{valid} queries), because we assume
that the linguistic coverage of the system will improve over time, based on
query logs that will be collected by system administrators and possibly thanks
to machine learning techniques.

\subsubsection{Speech-to-text}
The mobile front-end applications (iPhone and iPad) include a
speech-to-text component.
Currently, it works in the three languages supported by the system (English,
German and French).


\subsection{Answering system}
This section describes the main component of the Q\&A framework.
It aims at analyzing and processing users' input in order to retrieve
pieces of information of interest.
Figure~\ref{fig:personalized-architecture-answering-system} depicts the
architecture of the core answering system.
\begin{figure}[h!]
\centering
%\includegraphics[width=12cm]{img/archi-answering-system}
\includegraphics[trim=310pt 290pt 290pt 300pt]{img/archi-new-3}
\caption{Architecture of the answering system}
\label{fig:personalized-architecture-answering-system}
\end{figure}
It is composed of two services: the \emph{user context service} and the
\emph{search service}. The former provides information about the user (i.e.
information about her profile, properties such as which front-end
application she is currently using, which device has been used, etc.). This
information can then be directly accessed, for instance by search plugins, to
offer personalized results. The latter is triggered by search requests
initiated by front-end applications. 
%Dotted boxes (\emph{user knowledge manager}
%and \emph{global knowledge manager}) have not yet been fully implemented. 
%They are intended to ease the implementation of reasoning algorithms that base
%on knowledge about users bahaviours.

The results of question analysis, as well as the results retrieved by search
engines (see section~\ref{sec:personalized-architecture-search-engines}), are
stored in two data structures: the \emph{query tree} and the \emph{answer
tree}. Both are graph structures.
The query tree, in particular, can be compared to a \emph{parse tree} in Q\&A
systems.

In the next section, we further introduce the parse tree, and present the
different \emph{plugins} involved in providing annotations to this graph
structure.


\subsubsection{Parse tree \& plugins}
\label{sec:personalized-answering-parse-tree}
The parse tree is a common tree data structure for storing
\emph{annotations}, i.e. information resulting from the analysis of users'
questions.
The annotations are created by different annotators called \emph{query
plugins}.
We present below different query plugins that have been implemented.


 
To support more sophisticated queries (e.g. range queries or queries with
orderings) and go beyond keyword-like questions, we have to allow
administrators to define custom vocabulary (such as `middle-aged') and
more complex linguistic patterns, of which `top 5' is a simple example. Such
artifacts may export variables, such as the beginning and end of the age range
in the case of `middle-aged' or the number of objects to be shown to the
user in the case of `top 5'. We will further discuss our solution for defining
linguistic patterns in chapter~\ref{sec:chapter-patterns}.


\paragraph{Information Extraction plugins}
These plugins are named ``Query plugins'' in
Figure~\ref{fig:personalized-architecture-answering-system}. 
They are the first ones in the pipeline: they analyze
the user's question by applying information extraction components, which
contribute to a common data structure, the so-called parse tree. 
By default, the system is equipped with three types of information extraction
plugins that can be instantiated for different data sources or configurations. 
These are plugins for matching artifacts of the data source's metadata within
the query, plugins for recognizing data values (executed directly inside the
underlying database) and plugins for applying natural language patterns (e.g.
for range queries). These plugins jointly capture lower-level semantic
information about the user's question, which can then be interpreted in the
context of the data warehouse metadata in subsequent processing steps.

These basic plugins are part of the framework (by default).
They are intended to be general-purpose query annotators for standard
\emph{entities} (in the sense of named entities). These entities can be of
different types (see examples of such entities in
Table~\ref{tab:personalized-basic-query-plugin-types}).
\begin{table}[h!]
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{Entity type}} &
\multicolumn{1}{c}{\textbf{Example}}\\\hline\hline numeric & 123\\\hline
date & June, 1st 2012\\\hline
country & Germany\\\hline
\end{tabular}
\caption{Examples of named entity types used by the basic query plugin}
\label{tab:personalized-basic-query-plugin-types}
\end{table}
We use a standard information extraction system (SAP BusinessObjects 
Text Analysis\texttrademark, a successor of the system presented in 
\cite{DBLP:conf/trec/Hull99}) with a custom scoring function. 
As scoring function for evaluating individual matches, 
we adapted the scoring that we presented in 
\cite{DBLP:conf/www/BrauerHHLNB10}. In a nutshell, it combines 
{\footnotesize\emph{TF-IDF}}-like metrics with the Levenshtein distance and
additionally penalizes matches where a term in the metadata is much 
longer than the string occurring in the user's question. A 
threshold on the score limits the number of matches
that are considered for further processing.
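The shape of such a scoring function can be sketched as follows. This is our
own simplified illustration, not the function of the cited papers: an IDF-like
term weight is combined with a string-similarity score (assumed to be in
$[0,1]$), and a penalty shrinks the score when the metadata term is much longer
than the matched substring of the question:

```java
// Illustrative sketch of a match-scoring function of the kind described
// above (IDF-like weight x similarity x length penalty). Names and the
// exact formula are ours, for illustration only.
public class MatchScorer {
    // IDF-like weight: metadata terms occurring in few documents score higher.
    public static double idf(int totalDocs, int docsWithTerm) {
        return Math.log((double) totalDocs / (1 + docsWithTerm));
    }

    // Length penalty: 1.0 when the matched string covers the metadata term,
    // shrinking towards 0 as the metadata term gets much longer.
    public static double lengthPenalty(String queryMatch, String metadataTerm) {
        return Math.min(1.0, (double) queryMatch.length() / metadataTerm.length());
    }

    // similarity is assumed to be in [0,1] (e.g. a normalized edit distance).
    public static double score(double similarity, String queryMatch,
                               String metadataTerm, int totalDocs, int docsWithTerm) {
        return similarity * lengthPenalty(queryMatch, metadataTerm)
                          * idf(totalDocs, docsWithTerm);
    }
}
```

A threshold on the resulting score then discards weak matches, as described
above.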

We discuss in Section~\ref{sec:personalized-extensibility} how the standard
query plugin can be efficiently configured for application-specific settings.






\paragraph{1. Data schema query plugin}
The data schema query plugin annotates words and/or phrases that correspond to 
entities belonging to the data schema of a data warehouse. These entities are
indexed in a dictionary (or lexicon) and compiled by the NER such that
they can be efficiently recognized at query time.
This process (indexing and named entity compilation) is performed when the
system starts up and when a change in the data schema is observed. Besides, it
is possible to specify variants for the entities being recognized and to define
confidence metrics for each entity recognized in the query input.
These confidence values are computed by default with the edit distance (e.g.
the Levenshtein distance~\cite{levenshtein1966bcc}), but it is
still possible to define custom confidence values before compiling the
dictionary. An example of how we use these metrics is presented in
Section~\ref{sec:pattern-confidence} page~\pageref{sec:pattern-confidence}.
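The default edit-distance-based confidence can be sketched as follows: the
Levenshtein distance between the recognized surface form and the dictionary
entry is normalized into a $[0,1]$ confidence, where $1$ means an exact match.
The normalization below is our own illustration of this idea:

```java
// Sketch of an edit-distance-based confidence: Levenshtein distance
// normalized by the longer string length (1.0 = exact match). Names are
// illustrative, not the system's API.
public class EditDistanceConfidence {
    // Classic two-row dynamic-programming Levenshtein distance.
    public static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int subst = prev[j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1);
                curr[j] = Math.min(subst, Math.min(prev[j] + 1, curr[j - 1] + 1));
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    // Confidence in [0,1]: 1 - distance / max(length).
    public static double confidence(String recognized, String entry) {
        int maxLen = Math.max(recognized.length(), entry.length());
        if (maxLen == 0) return 1.0;
        return 1.0 - (double) levenshtein(recognized, entry) / maxLen;
    }
}
```

For instance, the misspelled token `revenu' would still match the dictionary
entry `revenue' with a high confidence.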




\paragraph{2. Natural language query plugin}
This component annotates expressions (which are language-specific) in users'
questions.
The created annotations are supposed to be independent from the application
domain, but can be configured for a specific application.
In particular, it is well known that different expressions can refer to the
same meaning (and also that the same expression can be interpreted differently
based on the context where it appears).
For example, the expression ``best country'' is interpreted (in BI
applications) as the selection of the first member of the dimension
``Country'', where members are ordered based on a measure (which can be
explicit in the query).
Table~\ref{tab:personalized-nl-query-plugin} lists some examples of
expressions grouped by their type, called \emph{feature}.
\begin{table}[h!]
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{Feature}} &
\multicolumn{1}{c}{\textbf{Example}}\\\hline\hline
\multirow{2}{*}{top-$k$} & ``best country''\\
 & ``2 least cities selling $x$''\\\hline
\multirow{3}{*}{range} & ``between $x$ and $y$''\\
 & ``more than $x$''\\
 & ``before January''\\\hline
\end{tabular}
\caption{Examples of NL features that are annotated}
\label{tab:personalized-nl-query-plugin}
\end{table}

The actual implementation of this plugin is not described here; it is
detailed in section~\ref{sec:patterns-feature}
page~\pageref{sec:patterns-feature}.


\paragraph{3. Background knowledge query plugin}
The system includes a background knowledge base intended to be
domain-independent, which can be configured and specialized for
specific applications. 
As an example, knowing the data type of dimensions is very important when
rendering charts, and this kind of knowledge can easily be stored in the
knowledge base.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{img/wrong-data-type}
\caption{Example of chart rendered without any knowledge on dimensions' data
types}
\label{fig:personalized-wrong-data-type}
\end{figure}
Figure~\ref{fig:personalized-wrong-data-type} is an example of a
chart\footnote{\emph{Charts} rendered in the system are further described on
page~\pageref{sec:chap3-chart}.} that has been rendered without any knowledge
about the data types of dimensions.
Indeed, the $x$-axis is a time dimension (dimension ``Date''). In that case,
the convention is to order the members chronologically along this dimension
(while in general, for non-time and non-numeric dimensions, the order is based
on the measure value, as in this example).

Interpreting the semantics of dimensions is however not obvious. In the case of
dates, for instance, there are many different time formatting templates
(e.g. \verb?YYYY-MM-dd? or \verb?YYYY, dd MM?). Depending on which
template has been adopted, the order of members on the axis would be different. 
Similarly, there is no single way to refer to members.
For instance, the values of the dimension `Quarter' might be of the form
`Q$x$' (e.g. `Q1', `Q2', \ldots) or of the form `$x$' (e.g. `1',
`2', \ldots).
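The two quarter notations above can be reconciled by normalizing member values
before ordering them. The following sketch (our own illustration, not the
system's implementation) maps both forms to a single integer:

```java
// Illustrative sketch: normalize the two quarter notations ('Q1' vs. '1')
// to an integer so members can be ordered chronologically regardless of
// the formatting template used in the data.
public class QuarterNormalizer {
    public static int toQuarter(String member) {
        String s = member.trim().toUpperCase();
        if (s.startsWith("Q")) s = s.substring(1);  // strip the 'Q' prefix
        int q = Integer.parseInt(s.trim());
        if (q < 1 || q > 4)
            throw new IllegalArgumentException("not a quarter: " + member);
        return q;
    }
}
```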
The BI community\footnote{The UK national statistics department has published
some of these guidelines at~\url{http://www.neighbourhood.statistics.gov.uk}.}
has established guidelines and good practices for rendering charts.
In our experiments (see chapter~\ref{sec:chapter-evaluation}
page~\pageref{sec:chapter-evaluation}), we have observed the following
guidelines, in order of importance:
\begin{enumerate}
  \item the chart type should match the desired analysis type (e.g. a pie
  chart is preferred for analyzing contributions while bar charts are better
  for comparing them); the notion of \emph{chart preference} will be defined in
  section~\ref{sec:modeling-chart-type} page~\pageref{sec:modeling-chart-type};
  \item time-related dimensions should be on the $x$-axis or in the legend area
  (but never on the $y$-axis); besides, series should be ordered based on time
  order (and not based on measure values);
 \item when a legend is required (i.e. when there is more than one series), the
 number of members of the series appearing in the legend should be as small as
 possible.
\end{enumerate}
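The first two guidelines can be sketched as simple checks, under our own
simplified model of a chart (the types, names and chart-type mapping below are
illustrative assumptions, not the system's implementation):

```java
// Illustrative sketch of guidelines (1) and (2) above: a preferred chart
// type per analysis type, and a placement rule forbidding time-related
// dimensions on the y-axis. All names here are ours, for illustration.
public class ChartGuidelines {
    public enum Axis { X, Y, LEGEND }

    // Guideline (1): pie charts for contributions, bar charts for comparisons.
    public static String preferredChart(String analysisType) {
        switch (analysisType) {
            case "contribution": return "pie";
            case "comparison":   return "bar";
            default:             return "table";  // fallback rendering
        }
    }

    // Guideline (2): a time dimension may go on the x-axis or in the legend,
    // but never on the y-axis.
    public static boolean validPlacement(boolean timeDimension, Axis axis) {
        return !(timeDimension && axis == Axis.Y);
    }
}
```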

A similar problem arises for geographic data. Indeed, in that case it is
better to represent measure values on a geo-map (representing the geographic
dimension's values) instead of ordering geographic members based on their
lexicographic order.

Nevertheless, a \emph{standard form} of the dataset must be used in order to
compare results. In our work, this standard form is defined as follows:
\begin{itemize}
  \item measures are ordered in lexicographic order;
  \item dimensions are ordered in the same order;
  \item dimensions' values are ordered in the lexicographic order of their
  members, regardless of the data type of the corresponding dimension (as in
  the chart displayed in Figure~\ref{fig:personalized-wrong-data-type}).
\end{itemize} 
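The three ordering rules above can be sketched in a few lines. The data holder
below is our own illustration (the system's dataset model is richer):

```java
import java.util.*;

// Sketch of the standard form defined above: measures, dimensions and
// dimension values are all sorted lexicographically, ignoring data types.
public class StandardForm {
    public static void normalize(List<String> measures, List<String> dimensions,
                                 Map<String, List<String>> dimensionValues) {
        Collections.sort(measures);        // measures: lexicographic order
        Collections.sort(dimensions);      // dimensions: same order
        for (List<String> values : dimensionValues.values())
            Collections.sort(values);      // values: regardless of data type
    }
}
```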

\subsubsection{Answer tree}
\label{sec:personalized-answering-answer-tree}
The answer tree is a graph data structure that contains the different
\emph{answers}, of different types, corresponding to the input question.
The answer types that we consider are enumerated below.

\paragraph{\label{sec:chap3-chart}Charts}
A \emph{chart} is a chosen visualization for a given dataset. It aims at
describing the data better, graphically emphasizing areas of interest in the
data. 
The same dataset can be rendered in different ways (e.g. as a bar chart, a pie
chart, etc.).
\begin{figure}
\centering
\subfloat[Bar chart visualization]{
\begin{tikzpicture}[scale=0.8]
\begin{axis}[xtick=data,symbolic x coords={a,b,c},
grid=major,
ymin=0,
xticklabels={,,},
legend style={cells={anchor=center,
fill}, nodes={inner sep=1,below=-1.1ex}, legend pos=north west},area legend
]
\addlegendimage{empty legend}
\addplot[ybar,fill=black!60] coordinates {
	(a,383200)
	(b,0)
	(c,0)
};
\addplot[ybar,fill=black!40] coordinates {
	(a,0)
	(b,1124400)
	(c,0)
};
\addplot[ybar,fill=black!20] coordinates {
	(a,0)
	(b,0)
	(c,1660050)
};
\addlegendentry{\hspace{-.6cm}\textbf{Sales amount quota}}
\addlegendentry{Pacific Sales Manager}
\addlegendentry{European Sales Manager}
\addlegendentry{North American Sales Manager}
\end{axis}
\end{tikzpicture}
\label{fig:chap3-bar-chart}}
\vfill
\subfloat[Pie chart visualization]{
\includegraphics[trim=220pt 560pt 300pt 100pt,scale=0.8]{img/pie-chart}
\label{fig:chap3-pie-chart}
}
\caption{Two visualization types corresponding to the same dataset}
\label{fig:personalized-chart-representation}
\end{figure}

Figure~\ref{fig:personalized-chart-representation} represents two ways of
graphically representing the same dataset. The first visualization (a) is a
\emph{bar chart} while the second one (b) is a \emph{pie chart}.

\paragraph{Tables}
\emph{Tables} are a flat representation of the dataset.
In BI, a table often takes the form of a subset or of an aggregation of (a)
fact table(s).
\begin{table}[!h]
\centering
\begin{tabular}{llr}\hline
\textbf{Fact} &
\multicolumn{1}{c}{\textbf{Title}} &
\multicolumn{1}{c}{\textbf{Sales}}\\\hline\hline 
$f_1$ & Pacific Sales Manager & 383200\$\\\hline
$f_2$ & European Sales Manager & 1124400\$\\\hline
$f_3$ & North American Sales Manager & 1660050\$\\\hline
\end{tabular}
\caption{Facts answering the question ``Sales target per department in 2001''}
\label{tab:personalized-facts}
\end{table}
For instance, the facts represented in Table~\ref{tab:personalized-facts}
correspond to the dataset whose visualization is depicted in
Figure~\ref{fig:personalized-chart-representation}.
In this example, the data come from a relational database, and the real
attribute and table names have been renamed to the dimension and measure names
used in the domain application.

A table can also be a \emph{cross-table}, in which case the first column
corresponds to a dimension (like the dimension ``Title'' in
Table~\ref{tab:personalized-facts}). We have reproduced an example of such a
cross-table in Table~\ref{tab:appendix-search-results-cross-table}
page~\pageref{tab:appendix-search-results-cross-table}.

\paragraph{Web results}
\emph{Web results} are pieces of Web pages that should be relevant with respect
to users' queries. 
Web results can be:
\begin{itemize}
  \item unstructured documents (raw text)
  \item semi-structured content (e.g. Wikipedia pages)
  \item structured content (e.g. Freebase, WolframAlpha, \ldots)
\end{itemize}
Searching for documents on the Web and offering relevant documents or pieces of
documents is a challenging task, mainly because of the large amount of
documents available in repositories.
Typically, users' queries should be modified based on the different knowledge
bases of the system. This intuitively promises more accurate and personalized
results than querying these different Web services with users' raw inputs.
This component is part of the system and has been implemented, but is out of
the scope of this thesis. Therefore we do not focus on it, and its
implementation is not described here.




\subsection{Search engines}
\label{sec:personalized-architecture-search-engines}

\emph{Search plugins} operate on the \emph{annotations} held by the \emph{parse
graph} and generated by the different information extraction plugins.
Search plugins execute queries on back-end systems, which may indeed be
traditional search engines in a federated search setting.
They use the semantics recognized in the parse graph to
formulate arbitrary types of queries. When leveraging
a traditional search engine, a plugin might take the user's question and
rewrite it using the recognized semantics to achieve higher recall or
precision.
However, for Q\&A we define an abstract plugin implementation that is
equipped with algorithms to transform the semantics captured in the parse
graph into a structured query.

The output of a search plugin is a stream of objects representing a 
well-structured result together with its metadata 
(e.g. consisting of the datasource that was used to retrieve the object
 or a score computed inside the plugin).

Search engines are components that actually retrieve documents or pieces of 
documents from a corporate repository or a public data source (i.e. the Web).
Figure~\ref{fig:personalized-archi-search-engines} is an illustration of the
different search engines used by default.
Components with dotted lines are examples of new components that could be
implemented in addition but that are not necessary at this time.
\begin{figure}[h!]
\centering
%\includegraphics[width=\linewidth]{img/archi-search-engines}
\includegraphics[trim=310pt 170pt 290pt 500pt]{img/archi-new-4}
\caption{Search engines implemented as plug-ins}
\label{fig:personalized-archi-search-engines}
\end{figure}

In our work, we focus on the second component entitled ``Content server'' which
is a corporate document repository. It contains data schemas of data warehouses
 (called \emph{universes} in the implementation of the BI system that we use)
 from which we generate database queries.
The mapping from the query tree (see 
section~\ref{sec:personalized-answering-parse-tree}) to answers (stored in a
structure called answer tree) is performed through \emph{patterns} which
are detailed in chapter~\ref{sec:chapter-patterns}
page~\pageref{sec:chapter-patterns}. These patterns capture the semantics of
users' questions, and generate database queries accordingly. 



\section{Extensibility of the framework}
\label{sec:personalized-extensibility}
The system is configured and set up for standard settings, but can be tuned
to improve performance in different environments.
%
We present first how to implement new plugins in the framework in
section~\ref{sec:personalized-extensibility-new-plugins}. 
Then, we describe how to configure the linguistic resources that are part of the
system in section~\ref{sec:personalized-extensibility-linguistic-resources}.

\subsection{Implementing new plugins}
\label{sec:personalized-extensibility-new-plugins}
The plugins that have been introduced so far (see
section~\ref{sec:chap-3-qa-framework}) can be specialized for
application-specific settings.
Besides, additional plugins can be easily implemented. 
In this section, we provide information on how such extensions can be made.


 



\subsubsection{Query plugins}
\label{sec:personalized-query-plugins}
Query plugins annotate the parse tree (or \emph{query tree}). Any such plugin
must implement the interface reproduced in
Listing~\ref{lst:personalized-query-plugin}.
\begin{lstlisting}[caption={Interface of any query plugin},captionpos=b,label=lst:personalized-query-plugin] 
public interface IParsingQueryPlugin {
 public void annotate(
  QueryTree queryTree, SessionContext sessionContext);
}
\end{lstlisting}
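As an illustration of how such a plugin can be implemented, the following
self-contained sketch stubs the framework types \verb?QueryTree? and
\verb?SessionContext? minimally (the real classes are richer) and defines a
plugin that annotates numeric tokens, in the spirit of the basic named-entity
plugins described earlier. Everything below is our own illustration, not the
framework's actual code:

```java
import java.util.*;

// Hedged sketch of a custom query plugin against the interface shown above,
// with minimal stand-ins for the framework types it operates on.
public class NumericQueryPlugin {
    // Minimal stub: a query tree holding tokens and their annotations.
    public static class QueryTree {
        public final List<String> tokens;
        public final Map<String, String> annotations = new HashMap<>();
        public QueryTree(List<String> tokens) { this.tokens = tokens; }
    }

    // Minimal stub for the logged-on user's session context.
    public static class SessionContext { }

    // The interface from the listing above, restated over the stub types.
    public interface IParsingQueryPlugin {
        void annotate(QueryTree queryTree, SessionContext sessionContext);
    }

    // A plugin annotating every purely numeric token as "numeric".
    public static class Plugin implements IParsingQueryPlugin {
        @Override
        public void annotate(QueryTree queryTree, SessionContext sessionContext) {
            for (String token : queryTree.tokens)
                if (token.matches("\\d+"))
                    queryTree.annotations.put(token, "numeric");
        }
    }
}
```

For the question ``top 5 countries'', such a plugin would annotate the token
`5' as numeric, leaving the other tokens to the remaining plugins in the
pipeline.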




\subsubsection{Search plugins}
As introduced above, search plugins take as input not only the user's tokenized
question, but also the graph structure in which the question has already been
analyzed (see Listing~\ref{lst:personalized-search-plugin}).
\begin{lstlisting}[caption={Interface of any search
plugin},captionpos=b,label=lst:personalized-search-plugin] 
public interface ISearchEnginePlugin { 
 public ResultIterator search(
  QueryTree queryTree,
  SessionContext sessionContext);
}
\end{lstlisting}
As the listing shows, the only required method is \verb?search?,
which takes as arguments the parse tree (\verb?queryTree?) and the session
context of the user currently logged in (\verb?sessionContext?).
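A hedged sketch of a search plugin is given in
Listing~\ref{lst:personalized-search-plugin-sketch}; the stub types and the
deliberately trivial matching strategy are assumptions for illustration only.
\begin{lstlisting}[caption={Sketch of a hypothetical search plugin (illustration only)},captionpos=b,label=lst:personalized-search-plugin-sketch]
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Stub framework types (illustration only).
class SessionContext { }

class QueryTree {
 final List<String> annotations;
 QueryTree(List<String> annotations) { this.annotations = annotations; }
}

class ResultIterator implements Iterator<String> {
 private final Iterator<String> it;
 ResultIterator(List<String> results) { this.it = results.iterator(); }
 public boolean hasNext() { return it.hasNext(); }
 public String next() { return it.next(); }
}

interface ISearchEnginePlugin {
 ResultIterator search(QueryTree queryTree, SessionContext sessionContext);
}

// Hypothetical plugin: turns each annotation into a mock result.
class EchoSearchPlugin implements ISearchEnginePlugin {
 public ResultIterator search(QueryTree queryTree, SessionContext ctx) {
  List<String> results = new ArrayList<>();
  for (String annotation : queryTree.annotations) {
   results.add("result for " + annotation);
  }
  return new ResultIterator(results);
 }
}
\end{lstlisting}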



\subsubsection{Answer plugin}
Answer plugins are responsible for rendering \emph{answers} and \emph{results}.
As reproduced in Listing~\ref{lst:personalized-answer-plugin}, such plugins
augment an existing answer with an ordered set of visualizations corresponding
to a result.
\begin{lstlisting}[caption={Interface of any answer
plugin},captionpos=b,label=lst:personalized-answer-plugin]
public interface IAnswerPlugin {
 public void augmentAnswer(Answer answer,
  SessionContext sessionContext,
  QueryTree queryTree);
}
\end{lstlisting}
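A hedged sketch of an answer plugin is given in
Listing~\ref{lst:personalized-answer-plugin-sketch}; the stub types and the
visualizations chosen are assumptions for illustration only.
\begin{lstlisting}[caption={Sketch of a hypothetical answer plugin (illustration only)},captionpos=b,label=lst:personalized-answer-plugin-sketch]
import java.util.ArrayList;
import java.util.List;

// Stub framework types (illustration only).
class SessionContext { }
class QueryTree { }

class Answer {
 final List<String> visualizations = new ArrayList<>();
}

interface IAnswerPlugin {
 void augmentAnswer(Answer answer, SessionContext sessionContext,
  QueryTree queryTree);
}

// Hypothetical plugin: proposes a chart first, then a table view.
class ChartAnswerPlugin implements IAnswerPlugin {
 public void augmentAnswer(Answer answer, SessionContext ctx,
  QueryTree queryTree) {
  answer.visualizations.add("bar chart");
  answer.visualizations.add("table");
 }
}
\end{lstlisting}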



\subsection{Configuration of linguistic resources}
\label{sec:personalized-extensibility-linguistic-resources}
Query plugins (see section~\ref{sec:personalized-query-plugins}) use lexicons
and other linguistic resources to annotate users' questions.
The system relies on the following linguistic resources:
\begin{itemize}
  \item lexicons of the data schema
  \item domain-independent knowledge base
  \item parsing rules for natural language \emph{features}
  \item contextual user model
\end{itemize}
We describe below how to configure each of these resources.

\subsubsection{Configuration of the lexicons}
Lexicons of the data warehouse schema are generated automatically.
They are regenerated and re-indexed whenever the data models change (this is
checked when the system starts up).
The location of the different data models is specified in a configuration
file.



\subsubsection{Configuration of domain-independent knowledge bases}
The domain-independent knowledge base is composed of time and geographic
knowledge.
It is currently used to semantically interpret dimensions and dimension values
of temporal and geographic types, as pointed out in
section~\ref{sec:personalized-answering-parse-tree}.
The domain-independent knowledge base is an RDF triple repository whose
statements must comply with an RDF schema, which can be customized (e.g. new
classes, attributes and predicates can be declared).


% \subsubsection{Configuration of natural language parsing rules}
% NL \emph{features} (see section~\ref{sec:patterns-feature}
% page~\pageref{sec:patterns-feature}) are defined as regular expressions which
% export property pairs.
% New parsing rules can be easily added to the system, and the syntax depends on
% the tools used for NER.


\subsubsection{Natural language features}
NL \emph{features} (see section~\ref{sec:patterns-feature}
page~\pageref{sec:patterns-feature}) are defined as custom regular expressions
which export property pairs.
%
New parsing rules can easily be added to the system; their syntax depends on
the tools used for entity recognition.
Custom features (or patterns, see
section~\ref{sec:personalized-answering-parse-tree} for a short description)
are simply declared in the \emph{feature definition} file, as in
Listing~\ref{lst:personalized-feature-definition}.
\begin{lstlisting}[caption={NL
feature definition},captionpos=b,label=lst:personalized-feature-definition]
feature:topk_feature rdf:type feature:CGUL_Feature ;
 rdfs:label "top k" ;
 query:generatesAnnotationsOfType query:TopKAnnotation ;
 feature:cgulExpression 
  "#group topk{<top>|[OD number]<[0-9]+>[/OD]"
  ^^xsd:string .
\end{lstlisting}

The NL pattern mechanism is very powerful. It is used to implement custom
functionality that goes beyond keyword matching, e.g. range queries, top-$k$
queries, or custom vocabulary such as ``middle-aged'' (see
figure~\ref{fig:running-example} page~\pageref{fig:running-example}).
%
As explained in the previous section, natural language patterns 
are configured using RDF (see listing~\ref{lst:personalized-feature-definition}
for an example).
Mandatory properties are:
\begin{itemize}
  \item \verb?rdf:type?: the kind of feature being defined. When defining new NL
  features, the object should always be \verb?feature:CGUL_Feature?
  \item \verb?rdfs:label?: the name given to the feature. There is no
  requirement of uniqueness
  \item \verb?feature:cgulExpression?: the expression to be matched by the rule.
  This expression must be of type \verb?xsd:string?, and its syntax depends on
  the NER software being used (see below)
\end{itemize}
 The three main parts of a natural language pattern are:

\begin{enumerate}
\item \textbf{Extraction Rules:} The basis for natural language
patterns are extraction rules. In our case we use the \textsc{CGUL} rule
language\footnote{\url{http://help.sap.com/businessobject/product_guides/boexir4/en/sbo401_ds_tdp_ext_cust_en.pdf}},
which can be executed using SAP BusinessObjects Text Analysis\texttrademark.
Like \textsc{CPSL}~\cite{Appelt:1998:CPS:1119089.1119095} or
\textsc{JAPE}~\cite{Cunningham99jape:a}, it is based on the idea of
\emph{cascading finite-state grammars}, meaning that extraction rules can be
built in a cascading way; thus any other rule engine could be used for this
purpose. We make heavy use of built-in primitives for part-of-speech tagging,
regular expressions and the option to define and export variables (e.g. the
`$5$' in `top 5'). Note that a rule might simply consist of a token or a phrase
list, e.g. containing `middle-aged'.\\
%
 \item \textbf{Transformation Scripts:} Once a rule has fired, exported
  variables may require some post-processing, e.g. to transform `15,000' or
  `15k' into `$15000$', an expression that can be used within a
  structured query. In many cases there is also the need to compute
  additional variables. The simplest case for such functionality
  is to output the beginning and end of the age range defined
  by a term such as `middle-aged'.
  To perform additional computations and transformations,
  we allow scripts to be embedded inside a natural language pattern;
  such scripts can consume output variables of the extraction rule and
  define new variables as needed.\\
%
  \item \textbf{Referenced Resources:} A rule is often specific to
  a resource in some metadata graph. For instance, in
  figure~\ref{fig:query-graph} the pattern
  for `AgeTerms' applies only to
  the dimension `Age', the `Context' pattern only to
  nodes within the user profile, and other patterns apply only
  to certain data types (e.g. patterns for ranges apply only to numerical
  dimension values), which are also represented as nodes.
  In order to restrict the domain of patterns,
  we allow referenced resources to be specified.
\end{enumerate}
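The post-processing of exported variables described in the second item can be
sketched as in Listing~\ref{lst:personalized-normalization-sketch}; the
normalization rules and the age bounds chosen for `middle-aged' are
illustrative assumptions, not the framework's actual values.
\begin{lstlisting}[caption={Sketch of the normalization of exported variables (illustration only)},captionpos=b,label=lst:personalized-normalization-sketch]
// Illustrative post-processing of exported variables (assumptions).
class VariableNormalizer {

 // Turns surface forms such as "15,000" or "15k" into a number.
 static long normalizeNumber(String raw) {
  String s = raw.trim().toLowerCase().replace(",", "");
  if (s.endsWith("k")) {
   return Long.parseLong(s.substring(0, s.length() - 1)) * 1000L;
  }
  return Long.parseLong(s);
 }

 // Expands a vocabulary term into the range it denotes (assumed bounds).
 static int[] ageRange(String term) {
  if (term.equals("middle-aged")) {
   return new int[] { 40, 60 };
  }
  throw new IllegalArgumentException("unknown term: " + term);
 }
}
\end{lstlisting}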


\subsubsection{Configuration of contextual user model}
There is no single way of modeling users' context in corporate environments:
much information about the user authenticated on the network is usually stored
in an LDAP system, but each application typically adds profile attributes that
are not shared across the different applications.

In the answering system, we provide a basic profile model, where users are
automatically logged in on the basis of the network authentication (see single
sign-on, section~\ref{sec:personalized-authentification}).
The user model is currently composed of profile information, such as first
name, last name, job title, location, etc.
It can be configured in order to add new attributes and relations.
% \begin{lstlisting}[caption={Skeleton
% of the social provider},captionpos=b,label=lst:personalized-social-provider]
% private IGraph loadSocialGraph() {
%  IGraph uSocialGraph = repo.getDefaultGraphFactory().
%   createGraph(this, SocialVocab.NS_DATA, "Social network");
%  String userLogin = repo.getToken().getLogin();
%  String password = userLogin;
%  boolean authOk = repo.getToken().isValid();
%  if (authOk) {
%   INode userNode = uSocialGraph.createNode(
%    repo.getToken().getAgentUri(), RepoVocab.CLASS_UserAgent);
%   userNode.setLabel(userLogin);
%  } else {
%   uSocialGraph.createNode(getAgentURIFromLogin(userLogin),
%    RepoVocab.CLASS_UserAgent).setLabel(userLogin);
%  }
%  return uSocialGraph;
% }
% \end{lstlisting}
We build on the corporate social network system described
in~\cite{thollotThesis}, which provides a so-called ``social provider'' that
is loaded when the system starts up.
In this system, the social network is represented as a \emph{graph}.
Each user of the system is then associated with a \emph{node} in the graph
(\verb?userNode?), and further configuration can be provided to control
which attributes are loaded into the graph.
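As a minimal sketch (with a graph stub and a URI scheme that are assumptions,
much simpler than the actual social provider API), loading the social graph
amounts to creating a node for the authenticated user, as in
Listing~\ref{lst:personalized-social-provider-sketch}.
\begin{lstlisting}[caption={Sketch of a minimal social provider (illustration only)},captionpos=b,label=lst:personalized-social-provider-sketch]
import java.util.HashMap;
import java.util.Map;

// Minimal graph stub (illustration only).
class SocialGraph {
 final Map<String, String> nodeLabels = new HashMap<>();
 void createNode(String uri, String label) { nodeLabels.put(uri, label); }
}

// Hypothetical social provider: associates the logged-in user with a node.
class SocialProvider {
 SocialGraph loadSocialGraph(String userLogin) {
  SocialGraph graph = new SocialGraph();
  graph.createNode("urn:agent:" + userLogin, userLogin);
  return graph;
 }
}
\end{lstlisting}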


\section{Multithreading and performance considerations}
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\begin{axis}[grid=major,xlabel={processing time ($ms$)},ylabel={number of
parallel threads},legend style={cells={anchor=center, fill},nodes={inner
sep=1,below=-1.1ex},at={(1,1.2)},anchor=north east},area legend]
\addplot[xbar]
plot coordinates { (14178,1)
	(13984.66667,2)
	(13675.66667,3)	
	(13797,4)
};
\addplot[xbar,fill=gray] plot coordinates {
	(11273,1)
	(10981.5,2)
	(10484,3)
	(10686,4)
};
\legend{{na\"ive processing time},{without timeout}}
\end{axis}
\end{tikzpicture}
\caption{Average overall processing time depending on the number of threads
being executed in parallel}
\label{fig:personalized-parallel-threads}
\end{figure}
The first implementation of the system described so far lacked parallel
algorithms, and its performance was therefore not satisfactory.
Figure~\ref{fig:personalized-parallel-threads} reports the mean execution time
(for the same set of queries) for different numbers of parallel threads.
Performance tests were conducted on a 64-bit operating system with 8~GB of RAM
and an Intel Core2 Duo processor.
The second series (see legend ``without timeout'') corresponds to the results
of a subset of queries only (the ones that do not trigger a timeout signal).
Indeed, the timeout is sent by the back-end system in order to prevent the
system from getting stuck on queries which take too long; parallel programming
does not improve processing time for those queries.
As expected, performance improves as the number of parallel threads increases.
The overall processing time remains high, mainly because of the chart
generation process, whose optimization was not the purpose of our work.

We have introduced parallel algorithms in the following
components of the answering framework:
\begin{itemize}
  \item pattern execution
  \item (object) query generation
  \item (database) query generation
  \item query execution
\end{itemize}
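The parallelization scheme shared by these components can be sketched as in
Listing~\ref{lst:personalized-parallel-sketch}: tasks are submitted to a
fixed-size thread pool, matches are collected in a thread-safe queue, and a
bounded wait mirrors the timeout mechanism discussed above. Class and method
names are assumptions for the sketch, and a string-matching stand-in replaces
the real pattern matcher (a \textsc{SparQL} query).
\begin{lstlisting}[caption={Sketch of the parallel pattern execution scheme (illustration only)},captionpos=b,label=lst:personalized-parallel-sketch]
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ParallelPatternRunner {

 static ConcurrentLinkedQueue<String> applyPatterns(
   final String parseTree, List<String> patterns) {
  final ConcurrentLinkedQueue<String> results =
   new ConcurrentLinkedQueue<>();
  ExecutorService exec = Executors.newFixedThreadPool(
   Runtime.getRuntime().availableProcessors());
  for (final String pattern : patterns) {
   exec.execute(new Runnable() {
    public void run() {
     // Stand-in for the real matcher (a SPARQL query).
     if (parseTree.contains(pattern)) {
      results.offer(pattern);
     }
    }
   });
  }
  exec.shutdown();
  try {
   // Bounded wait: long-running patterns cannot block the pipeline.
   exec.awaitTermination(10, TimeUnit.SECONDS);
  } catch (InterruptedException e) {
   Thread.currentThread().interrupt();
  }
  return results;
 }
}
\end{lstlisting}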
% In the following, we detail how we have introduced parallelism in these
% components.
% \begin{figure}[t]
% \centering
% \includegraphics[trim=410pt 290pt 290pt 120pt,scale=0.7]{img/parallel-execution}
% \caption{Overview of the threads involved in the parallel implementation of the
% framework}
% \label{fig:parallelism}
% \end{figure}
% 
% 
% 
% \subsection{Pattern execution}
% The method \verb?applyPatterns? apply all patterns\footnote{Linguistic patterns
% are introduced in chapter~\ref{sec:chapter-patterns}.} (which have been loaded
% in memory at start-up time) to the parse tree.
% The application of each pattern can be performed in parallel (since there is no
% concurrent modification of the parse tree by any of the patterns).
% \begin{lstlisting}[caption={Parallel implementation of the component
% responsible for the pattern execution},
% captionpos=b, label=lst:personalized-parallel-pattern-execution]
% public static void applyPatterns(final Model inputModel, 
%  final PriorityQueue<ScoredResultModel> outputModels, 
%  Token userToken, 
%  final Map<String, INode> parameterSettings, 
%  final INode bundleNode) {
%  
%  int NTHREADS = Runtime.getRuntime().availableProcessors();
%  ExecutorService exec = Executors.newFixedThreadPool(NTHREADS);
%  final QuerySolutionMap fixedQueryParameters = new QuerySolutionMap();
%  
%  for (String parameterName : parameterSettings.keySet()) {
%   RDFNode rdfNode = (RDFNode) parameterSettings.get(parameterName)
%    .getHandler();
%   fixedQueryParameters.add(parameterName, rdfNode);
%  }
%  
%  for (final ScoredPattern pattern : patternList) {
%   final INode patternNode = userTokenToNodeToQuery.get(
%    userToken.toString()).get(pattern);
%   Runnable requestHandler = new Runnable() {
%   public void run() {
%    try {
%     long startTime = System.currentTimeMillis();
%     RDFNode node = (RDFNode) patternNode.getHandler();
%     fixedQueryParameters.add("pattern", node);
%     Model resultModel = applyPattern(inputModel, 
%      pattern.getPatternQuery(), fixedQueryParameters, bundleNode);
%      
%      if (resultModel != null && !resultModel.isEmpty()) {
%       outputModels.offer(new ScoredResultModel(
%       	pattern, resultModel));
%      }
%     } catch (Exception e) {
%      logger.error("Error executing pattern" 
%       + patternNode.getLabel(), e);
%     }
%    }
%   };
%   exec.execute(requestHandler);
%  }
%  exec.shutdown();
%  while (!exec.isTerminated()) {
%   try {
%    exec.awaitTermination(GlobalConfigurator
%     .getConfiguratorInstance()
%     .getConfiguration(GlobalConfigurator.SEARCH_CONFIG)
%     .getProperty(GlobalConfigurator.SEARCH_CONFIG_TIMEOUT)
%     .toInteger(), TimeUnit.SECONDS);
%   } catch (InterruptedException e) {
%    logger.error(e.getMessage(), e);
%   }
%  }
%  // send done signal
%  outputModels.offer(PATTERN_EXECUTOR_DONE_POISON_OBJECT);
% }
% \end{lstlisting}
% Listing~\ref{lst:personalized-parallel-pattern-execution} reproduces the main
% method \verb?applyPatterns? implemented in a parallel fashion.
% Basically, this method executes \textsc{SparQL} queries (i.e. patterns) on a
% tree-like structure (composed of user's question parse tree, general knowledge,
% contextual information etc.) and generates a RDF model that represent all
% possible interpretations and answers of user's question.
% 
% \subsection{Object query generation}
% The component above generates a RDF model that represent all conceptual
% queries (see chapter~\ref{sec:chapter-modeling}
% page~\pageref{sec:chapter-modeling}).
% To optimize execution time, statements of the model are ordered based on the
% confidence of the conceptual queries.
% Details on how these confidence are computed can be found in
% section~\ref{sec:pattern-confidence} page~\pageref{sec:pattern-confidence}.
% Then, ordered statements generate (object) conceptual queries that fill a queue.
% \begin{lstlisting}[caption={Parallel implementation of the conceptual query
% generation},
% captionpos=b,label=lst:personalized-parallel-conceptual-query-generation] 
% Thread queuefiller = new Thread() { 
%  public void run() { 
%   float maxScore = 0; int counter = 0;
%   for (Statement stmt : statements) { 
%    Statement confidenceStat1 = ((StatementJena24Impl) stmt)
%     .toJenaStatement(model).getSubject().getProperty(
%      model.getProperty("bquery:hasConfidence"));
%    float confidence1 = confidenceStat1.getObject().
%     asLiteral().getFloat(); 
%    if (maxScore < confidence1)
%     maxScore = confidence1;
%    if (maxScore * MAX_SCORE_RATIO_FOR_IGNORING_RESULTS 
%     >= confidence1)
%     break;
%     
%    counter++;
%    final RDFBeanManager manager = new RDFBeanManager(impl);
%     try {
%      Object obj = manager.get(stmt.getSubject());
%      BusinessQuery bQuery = (BusinessQuery) obj;
%      queueToFill.offer(bQuery);
%     } catch (RDFBeanException e) {
%      Logger.getLogger(BQueryObjectWrapper.class).error(e);
%     } catch (ClassCastException e) {
%      Logger.getLogger(BQueryObjectWrapper.class).error(e);
%     }
%    }
%   // signal done
%   BusinessQuery doneSignal = new BusinessQuery();
%   doneSignal.setDatasource("$SIGNAL_DONE$");
%   queueToFill.offer(doneSignal);
% 
%   impl.close();
%  }
% };
% queuefiller.start();
% \end{lstlisting}
% Listing~\ref{lst:personalized-parallel-conceptual-query-generation} presents the
% parallel implementation of the queue.


% \subsection{Database query generation}
% The conceptual queries filling the queue (see above) are then peeked up and
% \emph{translated} into database queries.
% The translation process of the conceptual queries is detailed
% section~\ref{sec:modeling-translation} page~\pageref{sec:modeling-translation}.

% \begin{lstlisting}[caption={Parallel implementation of the query translation
% process},captionpos=b,label=lst:personalized-parallel-query-translation]
% public void run() throws Exception { 
%  Thread schedulerThread = new Thread() {
%   @Override
%   public void run() {
%    while (!isQueueDone) {
%     while (bqueries.peek() == null) {
%      try {
%       Thread.sleep(50);
%      } catch (InterruptedException e) {
%     }
%    }
%   
%    while (bqueries.peek() != null) {
%     BusinessQuery businessQuery = bqueries.poll();
%      if (businessQuery.getDatasource().equals("$SIGNAL_DONE$")) {
%       isQueueDone = true;
%       break;
%      }
%      DaslResult daslQueryAndChartType;
%      try {
%       daslQueryAndChartType = BQueryTranslation.bQueryToDaslQuery(
%        businessQuery,sessionContext.getUserToken(),slGraph, true); 
%        if (alreadyAddedQueries.containsKey(
%         daslQueryAndChartType.getDaslQuery())) 
%         continue;
%        scheduleQueryForExecution(businessQuery,daslQueryAndChartType); 
%       } catch (Exception e) { 
%        logger.error(e.getMessage(), e); 
%       } 
%      }
%     }
%     exec.shutdown();
%    }
%   };
%  schedulerThread.start();
% }
% \end{lstlisting}
% Listing~\ref{lst:personalized-parallel-query-translation} presents the parallel
% implementation of the component responsible for the translation of the
% conceptual queries into database queries.





% \subsection{Query execution}
% The database queries must then be executed.
% The database engine to be considered for the execution is determined in the
% (object) conceptual query (see the \verb?getDatasource()? method in
% listing~\ref{lst:personalized-parallel-query-execution}).
% \begin{lstlisting}[caption={Parallel implementation of the query execution
% process},captionpos=b,label=lst:personalized-parallel-query-execution]
% Runnable requestHandler = new Runnable() {
%  public void run() {
%   try {
%    daslProcessors.get(businessQuery.getDatasource()).
%     processQuery(daslResult);
%    res.offer(daslResult);
%   } catch (Exception e) {
%    logger.error("dasl encountered an error trying to " +
%     "execute Query " + daslResult.getDaslQuery()); 
%   }
%  }
% };
% exec.execute(requestHandler);
% \end{lstlisting}
% The parallel implementation of this component is reproduced
% listing~\ref{lst:personalized-parallel-query-execution}.



% \subsection{Result \& answer generation}
% \TODO{This section is missing}
% The result generation consists in creating a \emph{dataset} (i.e. the set of
% tuples resulting of the database query execution).
% This dataset is translated such that it can be rendered in both a table and a
% chart.
% \begin{figure}[h!]
% \TODO{The figure is missing}
% \caption{Example of result composed of a chart and a table. The result can be
% switched to chart or table mode when the upper-right button is pressed on the
% UI}
% \label{fig:personalized-multithreading-result}
% \end{figure}
% Figure~\ref{fig:personalized-multithreading-result} is an example of a result
% which is a chart, and which can be switched to a table when pushing the
% appropriate button.



% WHERE?
% \section{Contextual Q\&A}
% Modern Q\&A systems do not offer answers from one data source only, but
% combine different data sources to provide more accurate information to
% end-users.
% Data warehouses are designed to integrate and prepare data from production
% systems to be then analyzed with BI tools.
% However, domain models become extremely complex in important deployment of
% production systems~\cite{thollotThesis}. As a result the combination of BI
% entities (i.e. dimensions and measures) can be huge (hundreds of thousands).
% 
% \subsection{Usage statistics}
% Multidimensional models define entities (measures and dimensions) and relations
% between these entities (dimensions that correspond to different levels of the
% same hierarchy, or \emph{compatible} entities).
% Two entities $E_1$ and $E_2$ are said compatible if they can be used together in
% a single database query. We distinguish two kinds of compatible entities:
% \begin{itemize}
%   \item they belong to the same hierachy. We can note $E_1$ determines $E_2$ if
%   $E_1$ and $E_2$ belong to the same hierarchy, and $l_H(E_1)<l_H(E_2)$ where
%   $l$ maps to the level in the hierarchy $H$.
%   \item they belong to the same \emph{context}: e.g. dimensions and measures
%   appear in the ssame fact table, if applicable
% \end{itemize}
% 
% \subsubsection{What information do we get from dashboards/reports?}
% Dashboards and reports are valuable source of information about usage of
% multidimensional queries by real users in real settings. 
% These documents provide information such as user $u$ created document $d$
% composed of multidimensional queries $q_i$ at time $t$.
% From $q_i$, one can monitor that several entities $e_j$ appear together in the
% same query $q_i$. 
% This measure is called \emph{co-occurrency}, and meets several applications such
% as auto-completion.
% An experiment of auto-completion based on co-occurrency computed from usage
% statistics and users' social network is presented
% section~\ref{sec:evaluation-experiments-auto-completion}
% page~\pageref{sec:evaluation-experiments-auto-completion}.
% 
% 
% 
% \subsubsection{Security}
% Corporate document repositories are data management systems, where data are
% organized in \emph{documents}. Dashboards and reports are examples of such
% documents.
% Access to these documents, or to parts of these documents are regulated
% according to the corporate policy. 
% For instance, the policy may state that user $u_1$ is allowed to access to a
% part $p_1\subset\mathcal{M}$ of the data model $\mathcal{M}$ of a data warehouse
% and that user $u_2$ is allowed to access to $p_2\subset\mathcal{M}$ of the same
% data model.
% If $p_1\cap p_2\neq\emptyset$, then users $u_1$ and $u_2$ can access common
% entities. 
% 
% The social network of users accessing the same subset of entities is
% used in several applications. For instance, auto-completion (see
% section~\ref{sec:evaluation-experiments-auto-completion}
% page~\pageref{sec:evaluation-experiments-auto-completion}) or recommendation are
% examples of applications where this kind of security requirements must be
% satisfied.


\section{Summary \& discussion}
We have presented the architecture of the answering framework.
It behaves as a corporate system, where users are well identified on the
network, and have profiles of two kinds: shared properties that can be reached
from any corporate application, and private properties specific to each
application.
End-users access the system through the \verb?HTTP? protocol. So far, three
front-end applications have been implemented: a desktop application (in HTML5)
and iPhone and iPad applications (these front-ends will be introduced in
chapter~\ref{sec:chapter-evaluation} page~\pageref{sec:chapter-evaluation}).
Information about users (``User Context''), data models of the data warehouses
and various domain- and language-specific configurations are aggregated in a
graph structure.
Query plugins then operate on this graph, and their outputs (answer graphs) are
processed by search engines to render results (e.g. to execute SQL or MDX
queries and to render corresponding charts).
In addition, the framework is extensible. It can easily be configured to take
into account new domains (the system interfaces with several data warehouses)
and/or new languages.
Configuring additional query plugins is the right way to go in the scope of
context-aware applications: other corporate applications provide meaningful
information in some cases, and the system would thus provide more personalized
results. Search plugins could be used, for instance, to support additional
structured query languages (in addition to SQL or MDX), like \textsc{SparQL}.
In chapter~\ref{sec:chapter-evaluation}, dedicated to experiments and
evaluation, we will show how easily the system can be configured for additional
domains.
Moreover, we have implemented the system using parallel algorithms, which has
significantly improved its performance.

Compared to state-of-the-art systems in the BI domain
(e.g. \textsc{Soda}~\cite{blunschi2012}), we provide more personalized results
because the framework also takes contextual information (from the ``user
context'' component) into consideration.
In addition, the processing pipeline of our system is similar, to some extent,
to that of some recent systems, like~\cite{2226}.
However, our system is not yet perfect, and we suggest several directions to
improve it.
First, the execution time can be improved. Currently, \emph{new} questions
(i.e. questions that have never been asked before) are answered in a few
seconds.
We believe that this is due to calls to some native libraries (the
named-entity recognizer), plus the fact that the constraint mapping engine
(Jena executing \textsc{SparQL} over RDF) is not optimized. A solution would be
to re-implement the constraint mapping algorithms in such a way that they
execute in memory.


\stopcontents[chapters]


\bibliographystyle{plain}
\bibliography{these}





\end{document}
