\chapter{Conclusion}
\label{sec:conclusion}
\startcontents[chapters]
\Mprintcontents

First, we summarize our contribution to the state of the art. Second, we
discuss our results and suggest follow-up research topics, as well as
perspectives on the work presented in this thesis.


\section{Summary}
\label{sec:conclusion-summary}
Business users of today's retrieval systems face the problem of finding
relevant documents in a short time. Indeed, there are many access points where
information can be found (e.g. applications like the corporate portal, wikis,
forums, etc.), and these applications do not interoperate well (e.g. the
underlying query languages differ, and documents are of different
types). More specifically in \ac{BI}, today's retrieval systems are able to
offer reports or dashboards that have been annotated beforehand, but annotating
these documents is a tedious task.
Users may also want to execute queries over data warehouses, because the
information they are looking for may not already be in a report or a dashboard.
A range of retrieval systems on the Web are dedicated to searching for
information over structured data belonging to various domains (such as
WolframAlpha). These systems are dedicated to non-expert (standard) users, and
provide not just one answer (as most \ac{QA} systems would) but a set of
answers, one for each interpretation of the user's question (different
interpretations correspond roughly to different domains).
In response to the issue mentioned above, namely that finding information in
corporate settings is still a pain, retrieval systems in \ac{BI} have emerged in
conjunction with advances in visualization techniques, aiming at delivering
concise answers over huge amounts of data.

Numerous approaches and techniques have been developed to interface
databases in \ac{NL}, which we have summarized in
chapter~\vref{sec:chapter-state-of-the-art}. In particular, systems which
aggregate different data sources and which are domain-independent attract
today's researchers' attention (most notably \textsc{Watson}~\cite{WATSON}).
A typical approach consists in translating \ac{NL} into an intermediate query
language (called a \emph{pivot} language in the \ac{NLP} community), which is
then translated into the target (database) query language.
Two other very recent systems dedicated to \ac{BI} questions that have also
drawn our attention are \textsc{Soda}~\cite{blunschi2012} and
\textsc{Safe}~\cite{Orsi:2011:KCS:1951365.1951390}.
The former is a keyword-based search system over data warehouses.
It uses patterns to map keywords and operators in the user's query to rules
that generate SQL fragments, and it integrates various knowledge sources, such
as a domain ontology. However, this system does not focus on
``using natural language processing to understand the
input''~\cite{blunschi2012}, nor on any contextual information that should be
taken into account in a modern system~\cite{Hearst:2011:NSU:2018396.2018414}.
The latter is an answering system dedicated to mobile devices in the
medical domain. It uses patterns to relate keyword queries and \ac{NL}
queries, and semantic technologies to overcome the problem of matching
users' terminology with database terms.
The authors also propose a variant of auto-completion: terms and
relationships are suggested from the ontology (and not only from the database).
While they insist on the necessity of getting answers rapidly in the medical
domain, more than 30\% of the questions presented in their evaluation are
answered in more than 50 seconds.



The system that we propose takes the form of a search interface available on
desktop as well as on mobile devices, and has been described in
chapter~\vref{sec:chapter-personalized}. Users type or speak queries in
their own terms, and search results are eventually displayed in a dashboard.
In this thesis, we focus on the translation of \ac{NL} questions into
database queries.
The system can be easily configured:
\begin{itemize}
  \item any data warehouse, or more generally any repository of structured
  data, can be interfaced with our system with limited configuration effort;
  this corresponds to the implementation of a new `search plugin'
  \item the system has been implemented for three languages (English, German
  and French), but we are confident in the feasibility of supporting additional
  languages. The main limitation is imposed by the named entity recognizer,
  which supports more than 30 languages. In terms of effort, supporting a new
  language means translating the data schema into the new language, integrating
  named entity recognition rules for the new language, and translating the
  sentences to be synthesized by the Siri-like mobile application.
\end{itemize}
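To make the `search plugin' idea concrete, the contract a plugin must fulfil can be sketched as follows. This is a minimal illustration in Python, not the actual implementation: the class and method names (\texttt{SearchPlugin}, \texttt{translate}, \texttt{execute}) and the toy in-memory data source are all hypothetical.

```python
from abc import ABC, abstractmethod

class SearchPlugin(ABC):
    """Hypothetical plugin contract: one subclass per data source."""

    @abstractmethod
    def translate(self, conceptual_query: dict):
        """Translate the conceptual (pivot) query into the source's native query."""

    @abstractmethod
    def execute(self, native_query) -> list:
        """Run the native query and return the result rows."""

class InMemoryWarehousePlugin(SearchPlugin):
    """Toy plugin over an in-memory fact table, for illustration only.
    A real plugin would emit SQL or MDX and talk to a warehouse."""

    def __init__(self, rows):
        self.rows = rows  # list of dicts, one per fact

    def translate(self, conceptual_query):
        # Here we only keep the filter part of the conceptual query.
        return conceptual_query.get("filter", {})

    def execute(self, native_query):
        return [r for r in self.rows
                if all(r.get(k) == v for k, v in native_query.items())]

rows = [{"country": "France", "sales": 10}, {"country": "Germany", "sales": 7}]
plugin = InMemoryWarehousePlugin(rows)
result = plugin.execute(plugin.translate({"filter": {"country": "France"}}))
print(result)  # [{'country': 'France', 'sales': 10}]
```

Registering a new data source then amounts to subclassing the contract, without touching the rest of the pipeline.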
We have implemented an end-to-end \ac{QA} framework for \ac{BI}. This framework
can be augmented with various search plugins, e.g. to support additional
data sources. Moreover, the different algorithms involved in this framework
have been implemented in a parallel fashion, in order to optimize the execution
time of the overall system.


In chapter~\vref{sec:chapter-patterns}, we define the \emph{patterns} that are
involved in the translation of \ac{NL} questions into conceptual queries (i.e.
the pivot language mentioned above). These patterns define constraints that
must be satisfied:
\begin{itemize}
  \item by the user's question: the words used by the user and the (database)
  entities in her query
  \item by the user's context: the device used to access the search
  application, and the user's profile (e.g. her job title, her preferences,
  etc.)
\end{itemize}
Some of these constraints can be optional (for instance, a single pattern may
define that there should be \emph{at least} one dimension in the user's query).
Each pattern triggers translation rules with \emph{slots}, to be filled with
the actual values of the user's query.
This approach is inspired by the form-based \ac{QA} approach, but is much more
flexible, since:
\begin{itemize}
  \item there is no strong requirement on the order in which the
  \emph{features} must be listed in the pattern
  \item the constraint problem is solved by a \ac{NER}, which allows variants
  (such as synonyms), and not only well-formed terms with respect to the
  data schema of the warehouse
\end{itemize}
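The mechanism above (optional cardinality constraints, order-independent matching, slot-filling translation rules) can be sketched as follows. This is an illustrative reconstruction, not the thesis implementation: the pattern encoding, the feature names (\texttt{measure}, \texttt{dimension}, \texttt{filter}) and the output rule are hypothetical.

```python
# Hypothetical pattern: cardinality constraints over the features recognized
# in the question, plus a translation rule whose slots are filled afterwards.
pattern = {
    "constraints": {
        "measure":   {"min": 1},   # at least one measure is required
        "dimension": {"min": 1},   # at least one dimension is required
        "filter":    {"min": 0},   # filters are optional
    },
    "rule": "SELECT {dimension}, SUM({measure}) GROUP BY {dimension}",
}

def matches(pattern, features):
    """A feature set satisfies the pattern iff every minimum cardinality
    holds; the order of features is irrelevant, unlike form-based QA."""
    return all(len(features.get(f, [])) >= c["min"]
               for f, c in pattern["constraints"].items())

def translate(pattern, features):
    """Fill the rule's slots with the first matched value of each feature."""
    slots = {f: vals[0] for f, vals in features.items() if vals}
    return pattern["rule"].format(**slots)

# Features as a NER might extract them from "revenue by country" (illustrative);
# a synonym like "turnover" would resolve to the same "revenue" entity.
features = {"measure": ["revenue"], "dimension": ["country"], "filter": []}
if matches(pattern, features):
    print(translate(pattern, features))
# SELECT country, SUM(revenue) GROUP BY country
```

The point of the sketch is that the same feature set matches regardless of the order in which the features were uttered, and that missing optional features simply leave their slots unused.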




The query model has been presented in chapter~\vref{sec:chapter-modeling}.
This model is a high-level representation of any multidimensional query, which
can be automatically translated into \ac{MDX} or \ac{SQL} (depending on the
underlying data warehouse).
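A naive translation from such a high-level query to \ac{SQL} can be sketched as follows. The query representation, the schema (\texttt{census\_facts} and its columns) and the generated statement are all hypothetical; the real translator also targets \ac{MDX} and handles far more than this minimal shape.

```python
def to_sql(query: dict) -> str:
    """Naive translation of a high-level multidimensional query into SQL:
    dimensions become GROUP BY columns, measures become SUM aggregates."""
    dims = ", ".join(query["dimensions"])
    meas = ", ".join(f"SUM({m}) AS {m}" for m in query["measures"])
    sql = f"SELECT {dims}, {meas} FROM {query['fact_table']}"
    if query.get("filters"):
        sql += " WHERE " + " AND ".join(
            f"{col} = '{val}'" for col, val in query["filters"].items())
    sql += f" GROUP BY {dims}"
    return sql

q = {"fact_table": "census_facts",
     "dimensions": ["state"],
     "measures": ["population"],
     "filters": {"year": "2010"}}
print(to_sql(q))
# SELECT state, SUM(population) AS population FROM census_facts
#   WHERE year = '2010' GROUP BY state
```

Because the model is independent of the target language, swapping \texttt{to\_sql} for an equivalent \texttt{to\_mdx} is a local change.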




We have evaluated our proposal in chapter~\vref{sec:chapter-evaluation}.
This evaluation consists in an original approach that we believe could be
generalized to any retrieval system in the \ac{BI} domain. Indeed, in addition
to an evaluation protocol, we suggest a set of \emph{gold-standard} queries
(formulated by real users) whose formulations vary from precise to very vague.
Some additional metrics have also been introduced, in particular distance
metrics between patterns, which could be the starting point for future work on
machine learning.


The contribution to the state of the art can thus be summarized as follows:
\begin{itemize}
  \item a set of constraint-matching algorithms, as a proposed solution to the
  graph-matching problem already observed in keyword-based search interfaces to
  structured data
  \item a framework for \ac{QA} in the \ac{BI} domain that can be easily
  configured for new domains of application and for new kinds of data
  warehouses (e.g. via the implementation of new \emph{search plugins})
  \item an evaluation framework for \ac{BI} retrieval systems, through
  evaluation metrics and gold-standard queries from the Census dataset
  \item a basis for machine learning algorithms that would improve the system
  that we have built
  \item a query model that we believe could be successfully associated with a
  prediction model, whose main goal would be to significantly decrease the
  response time of the entire system by pro-actively executing well-chosen
  queries
\end{itemize}
We detail in the next section our proposals for future work, in order to
improve and extend the work presented in this thesis.


\section[Challenges]{Challenges \& problems to be addressed in future
work}
\label{sec:conclusion-challenges}
Not all the problems related to mapping \ac{NL} queries to structured queries
for data warehouses have been solved.
We review below some research directions to be considered as follow-up to the
work presented in this thesis.

\subsection{Personalized patterns}
In our work, we do not consider personalized patterns, because we have not
encountered a case where a pattern should be personalized.
A case where it might be of interest is one where the system could learn the
user's terminology, and build personalized patterns to this end.


\subsection{Linguistic coverage}
The current system has good linguistic coverage, according to the evaluation
results presented in the previous chapter, but this coverage remains limited.
An immediate follow-up to the current implementation is thus to use machine
learning techniques to improve its capabilities over time, based, for instance,
on failed queries from the query logs.


\subsection{Prediction model}
We have introduced in section~\vref{sec:modeling-prediction-model} a module
performing query prediction, as an application of the personalization of
multidimensional queries.
The implementation of this component is still in progress, and we therefore
suggest further investigating both the prediction and the recommendation of
multidimensional queries.

\stopcontents[chapters]





