%%This is a very basic article template.
%%There is just one section and two subsections.
\documentclass{memoir}


\usepackage{pgf-pie}

% urls
\usepackage[hyphens]{url}

% use of acronyms
\usepackage[nolist,withpage]{acronym}

% use of multirow
\usepackage{multirow}

% nice fonts for the math sets
\usepackage{amsfonts}

\usepackage[francais,english]{babel}

\usepackage[T1]{fontenc}

\begin{hyphenrules}{francais}
\hyphenation{pr\'e-f\'e-ren-ces}
\hyphenation{fra-me-work}
\hyphenation{questions-r\'eponses}
\end{hyphenrules}

\usepackage{pgfplots}

\usepackage{graphicx}
\usepackage[hang,small,bf]{caption}
\usepackage{subfig}

\usepackage{lscape}
\usepackage{longtable}
\usepackage{rotating}

\usepackage{abstract}

\usepackage{tikz}
\usetikzlibrary{fit, arrows, decorations.markings,positioning,trees}

% use of listings
\usepackage{listings}
\usepackage{framed}
\usepackage{MnSymbol}
\lstset{
	language=SQL,
	basicstyle=\ttfamily\fontsize{8}{11}\selectfont,
	aboveskip=6pt plus 2pt, 
	belowskip=2pt plus 8pt,
	morekeywords={PREFIX,java,rdf,rdfs,url,xsd},
	numbers=left,
	numberstyle=\tiny,
}


% use of minitoc
\usepackage{shorttoc,titletoc}

\newcommand\partialtocname{Outline}
\newcommand\ToCrule{\noindent\rule[5pt]{\textwidth}{1.3pt}}
\newcommand\ToCtitle{{\large\bfseries\partialtocname}\vskip2pt\ToCrule}
\makeatletter
\newcommand\Mprintcontents{%
  \ToCtitle
  \ttl@printlist[chapters]{toc}{}{1}{}\par\nobreak
  \ToCrule}
\makeatother


\newcommand\TODO[1]{{\textcolor{red}{\underline{TODO}: #1\\}}}

\hyphenation{spe-ci-fic}


\setcounter{secnumdepth}{2}
\setcounter{tocdepth}{2}

\newcommand{\Chapter}[1]{\chapter{#1} \setcounter{figure}{1}}


\begin{document}

\begin{acronym}[TDMA]
\acro{AI}{Artificial Intelligence}
\acro{BI}{Business Intelligence}
\acro{CMS}{Content Management System}
\acro{CRM}{Customer Relationship Management}
\acro{DBMS}{Database Management System}
\acro{DSS}{Decision Support Systems}
\acro{ER}{Entity/Relationship}
\acro{IDF}{Inverse Document Frequency}
\acro{IE}{Information Extraction}
\acro{IR}{Information Retrieval}
\acro{MDX}{Multidimensional Expressions}
\acro{NER}{Named Entity Recognizer}
\acro{NL}{Natural Language}
\acro{NLP}{Natural Language Processing}
\acro{OLAP}{Online Analytical Processing}
\acro{QA}[Q\&A]{Question Answering}
\acro{RDF}{Resource Description Framework}
\acro{SPARQL}{SPARQL Protocol and RDF Query Language}
\acro{SQL}{Structured Query Language}
\end{acronym}

\chapter{Introduction}
\label{sec:introduction}
\startcontents[chapters]
\Mprintcontents


% The need to structure data in social structures (companies, organizations, etc.)
% comes from the fact that those data are so voluminous, and so complex, that a
% human being can not understand them.
% Trying to find a structure is indeed a classic scientific task (e.g.
% classifying things is the basis of any scientific approach). 
% However, structures may not be easily understood by people: because it is too
% complex; because the interested human being is not an expert of the field;
% because she does not share the same view on how to classify things than the one
% who created the structure; or because the structure is far too complex, or
% un-natural to be easily understood (for instance, labels used in classes do not
% refer to real-world entities).
% \begin{figure}
% \centering
% \includegraphics[width=5cm]{img/data-structures}
% \label{fig:introduction-data-structures}
% \caption{Growth of data and the need for structures to understand them}
% \end{figure}
% We illustrate figure~\ref{fig:introduction-data-structures} the problem when
% data structures become too complex: only experts can understand them.
% 
% Data structures have evolved over time, because the processing capabilities have
% improved a lot in the last few decades. 
%
%
%
%We will present section~\ref{sec:introduction-structured-data} the different
%data structures used nowadays in corporate environments
The amount of data that is stored in companies is growing over time. 
As an example, table~\ref{tab:introduction-data-growth} shows the evolution of the 
proportion of companies storing more than 1TB and more than 100TB in 2009 and 2010 (from~\cite{stadtmueller2011}).   
\begin{table}[h]
\centering
\begin{tabular}{rrr}\hline
\multirow{2}{*}{\textbf{Year}} & \multicolumn{1}{c}{\textbf{companies storing}} & \multicolumn{1}{c}{\textbf{companies storing}}\\
 & \multicolumn{1}{c}{\textbf{more than 1TB}} & \multicolumn{1}{c}{\textbf{more than 100TB}}\\\hline\hline
2009 & 74\% & 24\%\\\hline
2010 & 87\% & 29\%\\\hline
\end{tabular}
\caption{Data growth trends}
\label{tab:introduction-data-growth}
\end{table}
These growing data must be stored in appropriate data structures, and algorithms
should be implemented in such a way that these data can be queried efficiently.
Databases are generally distributed over different data sources, and data can be
corrupted or redundant.
Data loading from production systems is performed by an extraction-transformation-loading
(ETL) tool; the data are then analyzed by \ac{BI} tools to generate reports and dashboards.
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\begin{axis}[
    xticklabels={$0<x<50\textnormal{m}$,$50\textnormal{m}<x<100\textnormal{m}$,$100\textnormal{m}<x<250\textnormal{m}$,$250\textnormal{m}<x<500\textnormal{m}$,$500\textnormal{m}<x<1\textnormal{b}$,$1\textnormal{b}<x<5\textnormal{b}$,$5\textnormal{b}<x$},
    xtick={1,...,7}
    ,x post scale=1.5,
    xtick=data,xticklabel style={inner sep=0pt, anchor=north east,
    rotate=20},ymin=0,grid=major] 
    \addplot[ybar,fill=gray]
    coordinates { (1,38)
    (2,92)
    (3,195)
    (4,344)
    (5,475)
    (6,2187)
    (7,3365)
    };
\end{axis}
\end{tikzpicture}
\caption{Evolution of the number of users with company growth}
\label{fig:introduction-user-count}
\end{figure}





As shown in the chart of Figure~\ref{fig:introduction-user-count}, the number
of users grows as companies grow (with respect to revenue).
A direct consequence is an increased amount of documents generated by users.
As a result, users are faced with the problem of \emph{information overload}.
Indeed, many corporate applications are entry points for information.
Besides, searching for information in a single application (like the company's
intranet) is not an easy task, because of information overload, and because
the order in which result items are presented is not based on relevancy
for the specific user.

In our thesis, we tackle this problem by supporting information access. Indeed,
our contributions are:
\begin{itemize}
  \item a framework that allows natural search over data warehouses
  \item a question answering system that offers personalized results
\end{itemize}

First, we present how data are structured in warehouses. Second, we introduce
the \ac{BI} domain and some of the current challenges in this area.
Then, we present how \ac{BI} can benefit from techniques used in \ac{QA} systems.
Finally, we formulate the main problem of interest of our work. 


\section{Structured data}
\label{sec:introduction-structured-data}

The history of structured data has followed the history of physical storage
devices.
Indeed, improved storage capabilities have made it possible to store larger
databases.
Recent years have seen a major turning point in database history:
state-of-the-art database management systems now rely on main memory instead
of classic disk storage.

The way data are organized in the database is defined in a \emph{model}: it
defines how elements are organized from a semantic point of view.
Early data structures like \emph{specification lists} used in the \textsc{Baseball} 
NL interface~\cite{Green:1961:BAQ:1460690.1460714} are based on hierarchical
relations between database entities.
We reproduce below an example of a specification list (from~\cite{BAA}):
$$ \begin{array}{l}
\textnormal{Month} = \textnormal{July}\\
\quad\textnormal{Place} = \textnormal{Boston}\\
\quad\quad\textnormal{Day} = 7\\
\quad\quad\textnormal{Game Serial No.} = 96\\
\quad\quad(\textnormal{Team} = \textnormal{Red Sox},\ \textnormal{Score} = 5)\\
\quad\quad(\textnormal{Team} = \textnormal{Yankees},\ \textnormal{Score} = 3)
\end{array} $$
This structure combines \emph{hierarchies} (for instance, the game occurs at a
specific place in a specific month) and \emph{associations} marked with
parentheses.

%The most popular Modern data structure is the relational , which we
%further describe in the following section.
In modern database systems, there are several abstraction layers for
representing data structures:
\begin{itemize}
  \item \emph{Physical} layer: \ac{DBMS}-specific data structure
  \item \emph{Logical} layer: data organization which eases database
  administration (usually performed by database administrator)
  \item \emph{Conceptual} layer: domain- and application-specific representation
  of relations, dependencies and constraints of data. The most popular
  conceptual representation of data is based on the relational architecture
\end{itemize}
We provide an example of a relational model in
section~\ref{sec:introduction-relational} and introduce the multidimensional
model, which is widely used to model data warehouses, in
section~\ref{sec:introduction-multidimensional}.

\subsection{Relational models}
\label{sec:introduction-relational}
In Figure~\ref{fig:introduction-relational}, we illustrate the relational model
with the example of the \emph{eFashion} dataset, which is described in
Appendix~\ref{sec:appendix-dataset-efashion}.
In this model, the \emph{entities} are tables (for instance
\verb?ARTICLE_LOOKUP?) with attributes (e.g. \verb?ARTICLE_ID?);
\emph{relations} are defined between attributes.
The relational model also defines uniqueness constraints (through the
combination of primary and foreign keys, depicted with key symbols in the figure).
\begin{figure}[h!]
\centering
\includegraphics[width=12cm]{img/relational-model}
\caption{\ac{ER} representation of the \emph{eFashion} dataset used for
the evaluation of the system}
\label{fig:introduction-relational}
\end{figure}
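To make the role of these keys concrete, the constraints above can be expressed
in SQL. The sketch below is purely illustrative: only the names
\verb?ARTICLE_LOOKUP?, \verb?ARTICLE_ID? and \verb?SHOP_FACTS? come from the
dataset, and all column names other than \verb?ARTICLE_ID? are hypothetical.
\begin{lstlisting}
-- Hypothetical dimension table: the primary key
-- enforces the uniqueness of each article.
CREATE TABLE ARTICLE_LOOKUP (
    ARTICLE_ID    INTEGER PRIMARY KEY,
    ARTICLE_LABEL VARCHAR(255)
);

-- Hypothetical fact table: the foreign key relates
-- each fact row to exactly one article.
CREATE TABLE SHOP_FACTS (
    ARTICLE_ID  INTEGER REFERENCES ARTICLE_LOOKUP (ARTICLE_ID),
    AMOUNT_SOLD DECIMAL(10,2)
);
\end{lstlisting}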


\subsection{Multidimensional models}
\label{sec:introduction-multidimensional}

Multidimensional models are popular, especially for \ac{BI} purposes, because 
they ``facilitate complex analyses and
visualization''~\cite{Chaudhuri:1997:ODW:248603.248616}.
Data warehousing is a collection of decision support technologies and aims at 
enabling the knowledge worker (executive, manager, analyst) to make better and
faster decisions~\cite{Chaudhuri:1997:ODW:248603.248616}.
These models have influenced front-end tools, database design and query engines for
OLAP~\cite{Chaudhuri:1997:ODW:248603.248616}.
A classic definition of multidimensional models has been proposed by Golfarelli and
Rizzi~\cite{Golfarelli:1998:MFD:294260.294261}.
An intuitive representation is a multidimensional cube, where each axis
corresponds to a dimension of the model and a cell of the cube corresponds to a
fact instance aggregated along the different dimensions.
The objects of analysis are numeric measures (e.g. sales,
budget, revenue, etc.). Each numeric measure depends on a set of dimensions
which provide the context for the measure~\cite{Chaudhuri:1997:ODW:248603.248616}.
Each dimension is described by a set of attributes. Attributes can be related
to each other through a hierarchy of
relationships~\cite{Chaudhuri:1997:ODW:248603.248616}.
Figure~\ref{fig:introduction-multidimensional-model} is an example of a
multidimensional data model.
\begin{figure}[h!]
\centering
\includegraphics[clip=true, trim = 0cm 7cm 8.5cm 0.5cm,width=1.00\linewidth]{img/data-schema}
\caption{Multidimensional data schema of the dataset \emph{eFashion}}
\label{fig:introduction-multidimensional-model}
\end{figure}
In this figure, two fact tables (``sales'' and ``reservations'') contain several
measures (e.g. ``days'', ``revenue'' and ``reservation days''). Rounded nodes
correspond to dimensions organized in hierarchies. The depth of a node (i.e.
its distance from the fact table) corresponds to the level of the dimension
in the hierarchy. Finally, attributes at a specific hierarchy level are
underlined (e.g. ``age'', ``phone number'' and ``invoice year'').

The multidimensional architecture is preferred to the relational one when there
are huge amounts of data to aggregate.
Accordingly, some \ac{DBMS} implementations optimize these aggregation
operations.
In some implementations, the multidimensional architecture can be used directly
on top of relational databases, because the targeted applications prefer the
multidimensional representation. Hybrid architectures have also appeared, where
the logical layer is composed of both relational tables (for instance for
representing large quantities of detailed data) and multidimensional
implementations for less detailed, more aggregated data.


In the following, we present the star schema, which is the most popular way
to implement the multidimensional model on relational data sources.
Then we explain why multidimensional models are used in \ac{BI}.
In addition, we introduce the notion of functional dependencies, which brings
constraints into multidimensional data models.
Finally, we briefly introduce database query languages.



\subsubsection{Star schema:  an implementation of the multidimensional model
for relational databases}
\label{sec:introduction-star-schema}
A popular implementation representing the multidimensional
data model is the \emph{star schema}~\cite{Chaudhuri:1997:ODW:248603.248616}. 
It consists of a single fact table and surrounding tables for each dimension. 
A refinement of this model is the \emph{snowflake schema}, where the (dimensional) 
hierarchy is represented by normalizing the dimension tables.

The star schema is the best-known schema for modeling data warehouses.
Typically, it is composed of one or more central fact tables, which contain the
measures. All tables around these fact tables correspond to analysis axes, or
dimensions.
The star schema is actually a special case of the snowflake schema, which is not
further detailed here.

The model displayed in Figure~\ref{fig:introduction-relational} is organized
according to the star schema: two fact tables (\verb?SHOP_FACTS? and
\verb?PRODUCT_PROMOTION_FACTS?) are surrounded by several tables corresponding
to different analysis axes.
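A typical query over such a star schema joins a fact table with its dimension
tables and aggregates a measure along an analysis axis. In the sketch below,
only the table names \verb?SHOP_FACTS? and \verb?ARTICLE_LOOKUP? come from the
figure; the column names are hypothetical.
\begin{lstlisting}
-- Star join (hypothetical columns): aggregate a
-- measure along one analysis axis.
SELECT al.FAMILY_NAME,
       SUM(sf.AMOUNT_SOLD)
FROM SHOP_FACTS sf
JOIN ARTICLE_LOOKUP al ON al.ARTICLE_ID = sf.ARTICLE_ID
GROUP BY al.FAMILY_NAME;
\end{lstlisting}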


\subsubsection{\ac{OLAP} operations}
Standard operations on the cube are called \ac{OLAP} operations: rollup (increase the
level of aggregation) and drill-down (decrease the level of aggregation) along one or more
dimension hierarchies, slice and dice (selection and projection), and pivot (reorient the
multidimensional view of data)~\cite{Chaudhuri:1997:ODW:248603.248616}.
Front-end tools let users execute these operations.
Figure~\ref{fig:introduction-bi-tool} is a screenshot of a \ac{BI} tool that is used
to create dashboards.
When boxes are checked or unchecked in the upper-left panel, or when parts of
the chart are double-clicked, these operations are triggered and new charts are
rendered accordingly.
\begin{figure}[!h]
\centering
\includegraphics[width=12cm]{img/bi-tool}
\caption{State-of-the-art BI tools for exploring data}
\label{fig:introduction-bi-tool}
\end{figure}
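On a relational implementation of the cube, these operations essentially change
the grouping granularity or the selection predicates of the query. The following
sketch, with hypothetical table and column names, illustrates rollup, drill-down
and slice.
\begin{lstlisting}
-- Rolled-up query: revenue aggregated per year.
SELECT YEAR, SUM(REVENUE)
FROM SALES
GROUP BY YEAR;

-- Drill-down: decrease the aggregation level by
-- adding the finer "Quarter" dimension.
SELECT YEAR, QUARTER, SUM(REVENUE)
FROM SALES
GROUP BY YEAR, QUARTER;

-- Slice: restrict the cube to one member
-- of a dimension.
SELECT QUARTER, SUM(REVENUE)
FROM SALES
WHERE YEAR = 2012
GROUP BY QUARTER;
\end{lstlisting}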



\subsubsection{Functional dependencies and role of hierarchies}
\label{sec:introduction:functional-dependencies}
Two objects (measures or dimensions) are functionally \emph{dependent} if one
determines the other. For instance, knowing the ``City'' determines the related
``State''.
Another example that involves a measure and a dimension is to say that the
``Sales revenue'' is determined by a ``Customer'' (e.g. aggregated from unit sales
in a fact table). 
Functional dependencies are transitive: if $A$ determines $B$, which determines $C$,
then $A$ determines $C$. In the simplest scenario, all measures are determined by
all dimensions. This is the case when using a basic dataset, for instance one reduced
to a single fact table with dimensions in a star schema.
Functional dependencies are important to compose meaningful queries. For
instance, they can be used to ensure suggested queries do not contain incompatible
objects which would prevent their execution. However, business domain
models do not necessarily capture and expose this information. Hierarchies of
dimensions are more common though, usually exploited in reporting and analysis
tools to enable the drill-down operation. For instance, if a \fbox{Year $\rightarrow$
Quarter} hierarchy is defined, the result of a user drilling down on ``Year 2012'' is
a more fine-grained query with the ``Quarter'' dimension, filtered on the value
2012 of ``Year''.
While hierarchies of dimensions can be used to determine minimal dependency chains,
techniques are still required to help with the automatic detection of functional
dependencies.
In particular, the approach presented in~\cite{Romero:2009:DFD:1651291.1651293}
is to create domain ontologies from conceptual schemas and use inferencing
capabilities.
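Complementarily to such schema-based inference, a candidate functional
dependency can also be checked directly against the data. The hypothetical
query below looks for cities associated with more than one state; the table and
column names are assumptions.
\begin{lstlisting}
-- Counterexamples to the candidate dependency
-- City -> State; an empty result means the
-- dependency holds on this data instance.
SELECT CITY
FROM CUSTOMER
GROUP BY CITY
HAVING COUNT(DISTINCT STATE) > 1;
\end{lstlisting}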


% \subsubsection{Data sources and query languages}
% The data model is independant from the data source. 
% This means that it's possible to build a multidimensional model on top of a relational 
% data source (see the star schema~\ref{sec:introduction-star-schema}).
% Classic query languages are SQL (for relational data sources) and MDX (for OLAP
% data sources).
% 
% We provide in appendix~\ref{sec:appendix-query-generation} an example of
% generation of a multidimensional query into \ac{SQL}, which can be done
% automatically.

% \subsection{Knowledge bases}
% Knowledge bases are a special kind of databases, that have this specific feature
% of allowing deductive reasoning on the data.
% The \emph{knowledge} described in such bases are described in a logically
% consistent manner, and a set of rules state how to reason on the data.
% 
% Knowledge bases have been widely used in the \ac{AI} domain as part of expert
% systems.
% \ac{DSS} can be seen as a kind of expert systems, where \ac{DSS} target
% decision-making activities while expert systems are used for solving problems. 
% 
% 
% 
% \subsubsection{Ontologies}
% \label{sec:introduction-ontology}
% Ontologies are knowledge bases that model a \emph{domain} through
% \emph{concepts}. These concepts are linked together with \emph{semantic
% relations}. A part of those semantic realtions must be hierarchical, so that the
% ontology defines concepts from the most general one, the top-level concept which
% is the supertype of all concepts, written $T$ and the most specific ones. 
% The most famous hierarchical relation is the $is-a$ relation, that describes the
% membership between concepts. There might also be other kinds of hierarchical
% relations, like for instance the meronymy (or $part-of$) relation.
% Many relations might also be non-hierarchical, like the synonymy or variance
% relation, which is widely used to capture the fact that the same idea or concept
% can be expressed in various ways.
% 
% Concepts and relations can be instantiated. The role of domain experts is to
% decide what term should be used as a concept label, and whether real-world entities
% should be instances of a concept, or concepts themselves. Instances of relations
% are naturally relations applied to instances (while relations are stated
% between two concepts).
% As part of knowledge bases, ontologies must also define a set of rules, which
% define the logical aspect of ontologies (see the ontology represented as a
% layer cake figure~\ref{fig:introduction-ontology-layer-cake}).
% \begin{figure}
% \centering
% \includegraphics[width=6cm]{img/ontology-layer-cake}
% \caption{Ontology layer cake from~\cite{DBLP:phd/de/Cimiano2006}}
% \label{fig:introduction-ontology-layer-cake}
% \end{figure}
% The two last layers in the figure correspond to axioms and the basis for
% reasoning in ontologies.
% Axiom schemata express constraints on concepts. The last layer (general axioms)
% express the actual rules for reasoning in the ontology.
% 
% 
% 
% 
% 
% 
% 
% \subsubsection{Semantic Web \& Linked Data}
% The Semantic Web is an initiative\footnote{lead by the World Wide Web
% Consortium (W3C)~\url{http://w3.org/}} to ``make the Web more meaningful''.
% Indeed, during the last few decades, numberous unstructured documents have been made
% available on the Web. This mass of documents are a rich source of information,
% but the lack of structure within those documents make it difficult to process
% them automatically. 
% This problem is summarized in ``less is more'', as a reference to popular search
% engines that may return hundreds of thousands of results for a single query
% (while real users are not going to go through all these results).
% The idea then is to add semantic mark-up and meta-data, to be able to link
% \emph{semantically} the content of unstructured documents. 
% 
% \begin{figure}
% \centering
% \includegraphics[width=8cm]{img/linked-data}
% \caption{Some of the resources that compose the LinkedData project}
% \label{fig:introduction-linked-data}
% \end{figure}
% As an application to the Semantic Web initiative, the Linked Data
% project\footnote{See~\url{http://linkeddata.org/}.} aims at connecting available
% data on the Web (see figure~\ref{fig:introduction-linked-data}).
% This relies on ``recommended best practices'', as defined by Wikipedia, for
% ``exposing and sharing pieces of data, information and knowledge''. 
% For example, both databases DBpedia and Freebase are available on LinkedData. 
% Some of the resources are domain specific, or specific to some content type
% (e.g. DrugBank\footnote{See~\url{http://www.drugbank.ca/}.} is a detailed
% database about drug composition, etc\ldots).


% \subsection{Main concepts}
% \label{sec:introduction-main-concepts}
% 
% Let ($m$) be a measure $m$ and [$d$] a dimension $d$. For
% instance, (Sales revenue) refers to the measure ``Sales revenue'' and [Year] to
% the dimension ``Year''.
% We note $M$ a set of measures that can appear in a single fact table $F$ (i.e.
% $m\in M$) and $D_i$ the set of dimensions that belong to the same hierarchy
% indexed by $i$ (i.e. $d\in D_i$).
% The set of values of a dimension $d$ or \emph{members} is called domain of $d$ and
% written $Dom(d)$. 
% With the notations from~\cite{Giacometti:2009:RMQ:1617540.1617589}, a
% $n$-dimensinal cube is noted $C=<D_1,\ldots,D_n,F>$ where $n$ is the number of
% dimensions.
% The fact table $F$ is composed of facts $f$ (i.e. $f\in F$).
% The member corresponding to a fact $f$ for the dimension $d$ is written $f\cdot
% d$; thus, $f\cdot d\in Dom(d)$.
% Similarly, the value of the measure $m$ for the fact $f$ is written $f\cdot m$, with
% values in $\mathbb{R}$; thus $f\cdot m\in\mathbb{R}$.


%We assume that a query for a multidimensional database can be reduced to:
%\begin{itemize}
%  \item a set of analysis axis (or dimensions)
%  \item a set of measures
%  \item possibly, a set of filters (basing on dimension values or on measure
%  values)
%  \item possibly, a set of clauses for ordering the axis of analysis
%  (``group-by'' statements)
%  \item possibly, a set of truncation clauses for selecting a subset of a fact
%  table, in association with an ordering statement (``top-$k$'' statements)
%\end{itemize}
%Our motivation for representing a query in this way is that:
%\begin{enumerate}
%  \item iterative query building (e.g. results of OLAP operations) ends up to
%  % a query that can be represented in that way
%  \item as described in more detailed in
%  % section~\ref{sec:chapter-personalized}, we target the generation of charts;
%  % for that reason we are restricted to a subset
%%  of all possible multidimensional queries
 %% \item there are existing commercial solutions that permits to trasform such
 % a
%  query into a ``real'' multidimensional query, expressed in a MDX-like
%  language (see section~\ref{sec:appendix-query-generation}).
%\end{enumerate}
%This model will be elaborated and further detailed in chapter~\ref{sec:model},
%where we demonstrate how usefull it is to intagrate perosnalized and
%contextualized information in the conceptual query.





\section{BI and the need for answers}
\label{sec:introduction-bi}
The \ac{BI} field aims at increasing the efficiency of decision-making activities
by providing key figures from the very large amounts of data that are generated
and processed, for instance, by enterprise resource planning (ERP) or customer
relationship management (CRM) systems.
To ease the task of seeking relevant data, dedicated tools have been designed (see
the screenshot in Figure~\ref{fig:introduction-bi-tool}).
Such tools can be used by users who are not database experts.
However, the construction of a valid query that meets the user's need remains
difficult.
Indeed, such tools are not ``natural'' in the sense that users need to know how
data are organized in the data model in order to formulate meaningful queries.

In the following, we introduce the \ac{BI} domain and the increasing importance
of the mobility paradigm.
Then, we detail specific aspects of corporate settings.




\subsection{Introduction to Business Intelligence}
\ac{BI} is defined as the ability for an organization to take
all its capabilities and convert them into knowledge, ultimately getting the right
information to the right people at the right time via the right channel.
During the last two decades, numerous tools have been designed to make
a huge amount of corporate data available to non-expert users (see
Figure~\ref{fig:introduction-bi-tool}).
These tools popularize notions of the multidimensional world such as \emph{data
schema}, \emph{dimensions}, \emph{measures}, \emph{members}, \emph{hierarchy
levels}, etc.
These concepts are usually not named, but their existence is assumed.
In the figure referenced above, none of these concepts appears.
The upper ribbon is composed of \emph{filters}, i.e. selected members; the left
panel lets users select dimensions; and the measures have been selected in a
previous step.
User experience studies have been carried out~\cite{bastien:inria-00070012} to
determine where to place each component, such that its semantics is better
understood by users.
Figure~\ref{fig:query-panel} is an example of a graphic interface that
lets users select dimensions (e.g. ``Year''), measures (e.g. ``Sales
revenue'') and predefined filters (e.g. ``This year''), and execute the query.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{img/query-panel}
\caption{Graphic interface to build multidimensional queries}
\label{fig:query-panel}
\end{figure}
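The selections made in such a panel can then be translated into a structured
query. As a simple illustration, choosing the ``Year'' dimension, the ``Sales
revenue'' measure and the ``This year'' filter could yield a query along the
following lines (the table and column names are hypothetical):
\begin{lstlisting}
-- Hypothetical query generated from the panel:
-- dimension "Year", measure "Sales revenue",
-- predefined filter "This year".
SELECT YEAR, SUM(SALES_REVENUE)
FROM SHOP_FACTS
WHERE YEAR = 2013  -- value of the "This year" filter
GROUP BY YEAR;
\end{lstlisting}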





\subsection{Mobility in BI and the growing popularity of natural search interfaces}
In recent years, mobile devices have significantly invaded business users'
everyday life.
Indeed, capabilities of such devices have increased a lot and wireless internet
connections have developed as well. 
As a result, mobile devices are getting more popular and are extensively used in
corporate settings, especially by employees who travel a lot. 

The major advance in this field is probably the speech-to-text feature.
Indeed, voice recognition now works well (provided that the user is a native
speaker).
Users can thus speak out loud instead of interacting with computers through
traditional devices (mouse, keyboard).
This eases access to corporate data sources, but a range of requirements
specific to business use cases still needs to be met.
We briefly describe those requirements in the following subsection.





\subsection{BI and corporate settings}
In corporate settings, users are well identified on the network and use
different applications with possibly different authentication methods.
These systems can be, for instance, \ac{CRM} systems or any \ac{CMS}.

These various systems interact in some cases in order to offer aggregated
information to end-users.
However, each system usually has its own security management rules for users
and/or groups of users. Therefore, allowing these applications to interact is
a tricky issue. 


\subsubsection{Security requirements}
We have identified three security requirements in corporate environments that
we consider in our proposal:
\begin{itemize}
  \item users authenticate to the platform, possibly using a single-sign-on
  technology
  \item each user accesses a different subgraph of the data model, and usually
  cannot access the entire schema
  \item the token given to the user for authentication purposes is associated
  with a timeout; the user must authenticate again after a period of inactivity
\end{itemize}
The \emph{graph} mentioned above is the representation of the knowledge that
our system can access.
These aspects will be explained and detailed in
chapter~\ref{sec:chapter-personalized}, which describes the implementation of the
proposal.


\subsubsection{Context-aware applications}
Researchers agree that aggregating information from several data sources -- not
only corporate ones in the best case -- promises more accurate results in the
context of \ac{IR} (see~\cite{Hearst:2011:NSU:2018396.2018414}).
To allow interaction between these data sources, one needs to agree on what can
be shared across applications (this involves security considerations, as
sketched out above) and on how the environment is modeled.
Business workers share a consensual view of the main concepts used in their daily
work, which eases this process.

Context is a very active research domain, and it has thus been defined many
times.
The definition that is usually retained, because of its generality, is the one
from Dey~\cite{Dey:2001:UUC:593570.593572}:
\begin{quotation}
``A system is context-aware if it uses context to provide relevant
information and/or services to the user, where relevancy depends
on the user's task.''
\end{quotation}
Taking context into account in corporate settings makes sense, first because
users are well identified on the network (for instance through single-sign-on
technology).
This ``context-awareness'' can be used in an efficient way to provide
personalized items of interest (e.g. search results, answers, etc.).
To illustrate this, let us consider two different scenarios.
The first one is a user who builds a query in a spreadsheet application and
expects a chart as a response (see figure~\ref{fig:excel}).
\begin{figure}[b!]
\centering
\includegraphics[width=11cm]{img/excel}
\caption{Charts generated in an Excel instance must be of relatively small size
compared to other applications like a dashboard}
\label{fig:excel}
\end{figure}
%The application presented on the figure has been implemented by
%Thollot~\cite{thollotThesis}.
In this figure, the generated chart must be of convenient size (i.e. not too
large, such that it fits well in the spreadsheet).
\begin{figure}[!h]
\centering
\includegraphics[width=11cm]{img/answer}
\caption{Chart rendered on a desktop application}
\label{fig:first-prototype}
\end{figure}
On the other hand, charts displayed on dashboards (see
Figure~\ref{fig:introduction-bi-tool}) or in desktop-specific applications (see
Figure~\ref{fig:first-prototype}) should be larger.






\section{Q\&A and its benefits for BI}
\label{sec:introduction-qa}
According to Hirschman~\cite{Hirschman:2001:NLQ:973890.973891}, the goal of \ac{QA}
is to allow users to ask questions in \ac{NL} using their own terminology and
receive a concise answer.
\ac{QA} is often seen as a subfield of \ac{IR} (see, e.g.,~\cite{Grishman:1979:RGQ:982163.982192}).
The purpose of \ac{IR}~\cite{Manning:2008:IIR:1394399} is to retrieve
meaningful, or at least relevant, information from a large repository of
documents based on a formulation of an information need.
This information need is usually expressed by means of keywords. The most
popular examples of \ac{IR} systems are search engines on the Web.
While \ac{IR} systems retrieve documents relevant for a specific query, \ac{QA}
systems aim at answering a question as concisely as possible~\cite{Hirschman:2001:NLQ:973890.973891}.
The main benefit of \ac{QA} systems is that users do not have to go through a
list of documents to find the answer: the most likely answer is returned
directly, so users spend less time searching for it.

We have shown in section~\ref{sec:introduction-bi} that existing tools that
interface data warehouses support users when designing their queries.
However, there is still a gap to bridge, which can be compared to the \ac{QA} gap
between the query space and the document space.
Concerning \ac{BI}, users are supposed to be experts of the domain (i.e. they
know the key indicators, or dimensions, used to describe their data), but they
do not know the \emph{exact} terminology used in the data schema.
Users may use synonyms, or variant terms, or phrases formulated differently.
The document space can be compared to the database itself, where a (structured)
document would be a structured query (which is the requested information in our
case).
The growth and the ``triumph''~\cite{CIE} of the Web in the 90's focused
researchers' attention on \ac{IR}: indeed, the Web can be considered an ``infinite''
corpus (in the context of \ac{QA} systems, it is not possible to tell whether
the answer to a question is known or not).
Data warehouses in productive environments are modeled with hundreds of measures
and possibly as many dimensions. As a result, the number of potential queries
(i.e. the combinations of these objects) is huge, and similar in that sense to
the problem of answering questions in large text corpora.
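To give an order of magnitude (with purely illustrative figures, not numbers taken from an actual deployment), assume a warehouse with 100 measures and 100 dimensions; counting only the queries that select one measure and at most two dimensions already gives

```latex
\[
100 \times \left(1 + 100 + \frac{100 \times 99}{2}\right)
  = 100 \times 5051 = 505\,100
\]
```

potential queries, even before filters and aggregation levels are taken into account.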

Recently, researchers have emphasized the benefits of \ac{QA} systems for
\ac{BI} systems and the possible interactions between them.
Ferr\'andez and Peral~\cite{TBO} have proposed a system where the data schema of
the data warehouse and the ontology schema of a \ac{QA} system can be merged. In
that way, the system is able to generate queries to the data warehouse and to
the Web. 
More generally, the topic of combining data coming from various sources in
corporate settings is of utmost interest; it was named ``enterprise search'' by
Hawking~\cite{CIE} ten years ago.
The different data sources can be trusted data sources (e.g. data warehouses),
corporate information systems (e.g. intranet Web pages) or external sources
(e.g. Web pages from competitors).


\subsection{\textsc{Watson}, a success story}
\textsc{Watson}~\cite{FerrucciBCFGKLMNPSW10} is probably the system that
made the strongest impression in \ac{AI} in 2010.
Indeed, it is the first artificial program to have won the
American \emph{Jeopardy!} TV show.
In this game, \emph{clues} are presented to players (instead of questions), and
players must provide answers in the form of \emph{questions} (i.e. the
question to which the clues answer).
For instance, instead of answering ``Ulysses S. Grant'' or ``The Tempest'',
answers must be ``Who is Ulysses S. Grant?'' or ``What is the
Tempest?''~\cite{FerrucciBCFGKLMNPSW10}.
The underlying architecture (called \textsc{DeepQA}) is generic enough to be
applied not only to \emph{Jeopardy!}, but also to text retrieval (for instance,
competing in the TREC campaigns), enterprise search, etc.
The system is composed of several statistical models that generate hypotheses
and supporting evidence. The synthesis of all potential hypotheses and evidence
produces answers that are finally merged and ranked.
Part of the process consists in acquiring knowledge from various sources,
which is partly a manual task.


\subsection{Challenges in the field of structured data search}
Research in the field of search over structured data consists mainly of keyword
search over structured data.
As introduced above, natural interactions with structured data are now becoming
popular in the community.
Personalization pursues a goal similar to that of recommender systems: a
personalized query can be seen as a suggested query from a recommender system's
point of view.
Besides, extensive work has focused on \emph{prediction} systems. Indeed,
analyses of sessions of multidimensional queries show that queries are often
refined iteratively. To ease query formulation while taking this behaviour into
account, predictive models have been proposed that predict the queries users
are most likely to issue next (and pro-actively execute them).

A great challenge in this field is thus to propose comprehensive
frameworks that tackle the problems of \emph{personalization}, \emph{query
prediction} and \emph{recommendation} together in the context of
multidimensional analysis (and not only one of them).
As an example, personalization in the work of Golfarelli et
al.~\cite{Golfarelli:2011:MAE:1990771.1991001} is achieved through users'
\emph{preferences} (which are qualitative preferences).







\section{Problem formulation}
\label{sec:introduction-problem}
We consider a user's query $q$, called \emph{question} in the following (to
distinguish it from structured queries, simply called \emph{queries}).
The problem of translating a user's query expressed in natural language into a
structured (database) query can be formalized as finding a mapping $t$
from a question $q$ to a family of results $(r_i)_{i\in I}$:
\begin{equation}
t:\left\{\begin{array}{l}
\mathcal{Q}\rightarrow R^I\\
q\mapsto\left(r_i\right)_{i\in I}
\end{array}\right.
\end{equation}
where $I=[1,n]$ is the interval of ranks associated to each result (i.e. $n$ is
the number of results that the question leads to).
The index $i$ of items $r_i$ of the family is interpreted in that case as the
\emph{rank} of the result $r_i$ (i.e. $rank(r_i)=i$).
The rank $i$ assigned to a result $r$ is computed by a \emph{scoring} function
that we denote $score(r)\in[0,1]$:
\begin{equation}
rank:\left\{\begin{array}{l}
R\rightarrow I\\
r\mapsto\phi(score(r))
\end{array}\right.
\label{eq:introduction-problem-formulation}
\end{equation}
where $\phi$ is a bijection from $[0,1]\subset\mathbb{R}$ to
$I\subset\mathbb{N}^\star$.
In the context of multidimensional queries, we consider that a \emph{result} is
a multidimensional query $Q$ and a set of metadata (which can be the chart title
or application-specific requirements in terms of visualization like the color
panel, the expected size of the frame containing the chart, etc.).
We note $r=(Q,M)\in R$; in the following, $Q$ denotes a structured
query and $M$ the metadata.
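
The score-to-rank relation above can be sketched concretely. The following minimal Python sketch (the \texttt{Result} class, the example queries and the scores are all invented for illustration, not taken from the actual system) orders results by decreasing score so that the best-scoring result receives rank 1:

```python
# Toy illustration of the rank/score relation from the problem formulation:
# results r = (Q, M) carry a score(r) in [0, 1]; ordering them by decreasing
# score and numbering from 1 realizes rank(r_i) = i.
# All names and values below are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Result:
    query: str        # structured query Q
    metadata: dict    # metadata M (chart title, expected frame size, ...)
    score: float      # score(r), a value in [0, 1]

def rank(results):
    """Return a mapping from each query to its 1-based rank."""
    ordered = sorted(results, key=lambda r: r.score, reverse=True)
    return {r.query: i for i, r in enumerate(ordered, start=1)}

results = [
    Result("sales BY country",  {"title": "Sales per country"},  0.9),
    Result("sales BY year",     {"title": "Sales per year"},     0.7),
    Result("margin BY country", {"title": "Margin per country"}, 0.4),
]
ranks = rank(results)
# The highest-scoring result ("sales BY country") gets rank 1.
```

In this sketch, $\phi$ is realized implicitly by sorting: it maps the highest score in the candidate set to rank 1, the second highest to rank 2, and so on.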


\subsection{Machine translation and pivot languages}
\label{sec:introduction-pivot-language}
\begin{figure}[h!]
\centering
\subfloat[Translation approach using a pivot language]{
	\label{fig:introduction-pivot-language-1}{
		\includegraphics[width=0.4\textwidth]{img/pivot-language-1}
	}
}\hfill
\subfloat[Statistical translation approaches]{
	\label{fig:introduction-pivot-language-2}{
		\includegraphics[width=0.4\textwidth]{img/pivot-language-2}
	}
}
\caption{Two classic translation approaches}
\label{fig:introduction-pivot-language}
\end{figure}
The problem introduced above (formula~\ref{eq:introduction-problem-formulation})
is similar to the problem of translating a natural language sentence from one
(natural) language to another. The difference is that we target here a
structured (artificial) language (i.e. a database language) instead of a natural
language.
This problem is one of the most difficult in \ac{AI} and one of the most popular
(at least until the 90's); it thus has a long history in computer science. A
popular representation of two classic approaches is displayed in
figure~\ref{fig:introduction-pivot-language}.
Figure~\ref{fig:introduction-pivot-language-1} represents the class of
approaches that use a ``pivot language'' to translate a source language into a
target language.
The difficulty of this approach, and its main limitation, is that the different
sub-tasks involved in going from the first language to the pivot language
generate noise, and more noise is introduced when translating the pivot
language into the second language.
The second class is represented in figure~\ref{fig:introduction-pivot-language-2}:
no pivot language is required, but the translation usually relies on statistical
models that require costly resources.



\subsection{What is outside the scope of our work}
In our work, the following interesting topics have not been investigated:
\begin{itemize}
  \item searching over the Web and aggregating results from the Web with results
  from trusted sources;
  \item generating results that cannot be rendered as charts.
\end{itemize}
Indeed, we have focused on translating questions into structured queries for data warehouses
(see section~\ref{sec:introduction-pivot-language}).
The framework described in chapter~\ref{sec:personalized} is extensible and also supports other kinds of 
data sources.
The application-driven motivation of our work explains why we consider a subset of possible 
query results.


\section{Contribution to the state of the art}
Our contributions to the state of the art are as follows:
\begin{enumerate}
  \item a comprehensive framework for \ac{QA} dedicated to business users which
  leverages contextual information to offer more personalized results
  \item a \ac{NL} query interface associated with a speech-to-text tool
  \item a translation approach
  	\begin{itemize}
  		\item that has been proven valid in at least 3 European languages
  		\item whose graph matching is based on constraint satisfaction rules
  	\end{itemize}
  \item a plugin-based architecture which ensures a high degree of portability
  \item an overall approach which, according to our evaluation results, makes
  the system quite independent of the domain
\end{enumerate}
The next chapter will survey the related work of \ac{QA} systems for structured
data.


\stopcontents[chapters]

\bibliographystyle{plain}
\bibliography{these}





\end{document}
