\chapter{State of the Art}
\label{sec:chapter-state-of-the-art}
\startcontents[chapters]
\Mprintcontents


\ac{NL} is the most natural and easy way to express information
need~\cite{Hearst:2011:NSU:2018396.2018414}, because it is not restricted to
expert users who know how to formulate formal queries (e.g. \ac{SQL} or
\ac{MDX} queries).

NL interfaces have been investigated by researchers in both Information
Retrieval (IR) and Database communities. In the IR community however, the
keyword search has become popular because of the success of commercial search
engines which have made extensive use of algorithms for matching keywords to
documents.
%NL interfaces for unstructured documents are refered to as Question Answering
%(Q\&A) systems, that have received much researchers' attention as the Web had
Interfaces to unstructured documents have become popular as the \ac{WWW} has
made available huge amount of textual data, and recently as the famous {\textit
Jeopardy!} quiz was won by IBM's \textsc{Watson}~\cite{FerrucciBCFGKLMNPSW10}
system.
This trend seems to evolve slightly. For instance,
WolframAlpha\texttrademark{} is a popular search
service\footnote{\url{http://www.wolframalpha.com/}} based on structured data,
that accepts keyword queries as well as some questions in \ac{NL}. However, the
proportion of \ac{NL} questions that are understood by
WolframAlpha\texttrademark{} is still low, and therefore this system is still
subject to improvements.

Interfaces to structured data have focused much researchers' attention for
decades, but the field seems to have known a renewed interest for a few
years, probably thanks to the development of the \ac{SW}. 
Today's semantic technologies cannot be easily manipulated by standard users,
and therefore there is a need for interfaces. Indeed, users prefer NL
interfaces than the logic which bases the \ac{SW}.
Unger et al.~\cite{Unger:2012:TQA:2187836.2187923} have illustrated this problem
in the context of the \ac{SW}, where data are usually represented as
\ac{RDF}~\emph{triples}.
Consider for instance the following
question~\cite{Unger:2012:TQA:2187836.2187923}:
\begin{equation}
\textnormal{``Who wrote The Neverending Story?''}
\label{equ:sota-question}
\end{equation}
In the best case, triple data answering this question would be of the form:
\begin{equation}
<[\textnormal{person,organization}],\textnormal{wrote},\textnormal{`Neverending
Story'}>
\end{equation}
where $[\textnormal{person,organization}]$ would be a placeholder for a subject
representing a person or an organization in the data.
Depending on the adopted approach, this \emph{triple} would be either retrieved
based on distance metrics with respect to the question
(\ref{equ:sota-question}), or the result of the execution of a formal query (in
this case a \ac{SPARQL} query) which would look like:
\begin{lstlisting}[label={lst:sota-sparql},caption={Example of SparQL
query template operating on top of RDF data},captionpos=b] 
SELECT ?x WHERE { ?x ?p ?y .
	?y \ac{RDF}:type ?c .
}
HAVING expr_1
ORDER BY expr_2
LIMIT expr_3
OFFSET expr_4
\end{lstlisting}
where \verb?expr_1?, \verb?expr_2?, \verb?expr_3? and \verb?expr_4? are
placeholders for various \ac{SPARQL} expressions.
In this example, not-expert users would not write \emph{triples} (especially
because some \ac{NL} questions cannot be translated in this representation,
e.g. when there are aggregations or filter constructs which are not faithfully
captured~\cite{Unger:2012:TQA:2187836.2187923} by~\ac{SPARQL}). Nor would they
write a formal \ac{SPARQL} query like the one shown in~\vref{lst:sota-sparql},
because the syntax is not straightforward to standard users.


%The challenge of answering \ac{NL} questions is far from being
%solved~\cite{Kotov:2010:TNQ:1772690.1772746} 


The area of \ac{NL} interfaces to structured data is even broader than \ac{IR},
in particular there is a range of systems that translate \ac{NL} queries in
\emph{goals}, e.g. in the context of (household) appliances (see for instance
the \textsc{Exact} system~\cite{Yates:2003:RNL:604045.604075}).
These kind of systems are out of the scope of our thesis, and therefore they
will not be described in this chapter.


In our work, we outline the major dimensions of existing systems
and present the new trends and challenges of state-of-the-art systems.
The history of \ac{NL} interfaces can be developped as follows:
\begin{enumerate}
  \item early years of domain-specific systems
  \item complex question answering in a specific domain
  \item rise of domain-awareness in \ac{NL} interfaces (through learning
  techniques)
  \item data-driven systems (or schema-unaware approaches)
\end{enumerate}
We present in the following the main dimensions used throughout
the paper. 










%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% BIG PICTURE                         %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Main Dimensions}
\label{sec:main-dimensions}
The input to every \ac{NL} interfaces is the (1)~\emph{data}, and
(2)~\emph{user questions} representing information needs, which are translated
to an (3)~\emph{internal representation} of the needs employed by the system
and/or directly mapped to (4)~\emph{database queries} that finally, are
executed by the underlying query engine to produce the final answers. The main
problem tackled by a \ac{NL} interfaces approach is to transform the question
to the internal representation and then to the database query -- i.e. (2)
$\mapsto$ (3) $\mapsto$ (4) -- or to directly mapped the question to the query
-- i.e. (2) $\mapsto$ (4). This problem is hard when considering the large and
rapidly evolving mass of structured data that have become available. The major
challenges modern \ac{NL} interfaces are facing in this regard is to
operate across domains (5)~\emph{domain-independence} as well as to adopt to
new domains (6)~\emph{portability}. For evaluating the quality of the \ac{NL}
interfaces output, different (7)~\emph{metrics} based on the user questions the
system can understand as well as the database queries it can produce, have been
proposed.

% \TODO{old content that might be considered:} 
% Dimensions:
% \begin{itemize}
%   \item What kind of query? (NL, controlled language)
%   \item What are the data? 
%   \item Target structured query: select query? update query?
%   \item What are the answers?
%   \item How portable is the system? Does it improve its performance over time?
%   \item What is the linguistic coverage of the system? 
%   \item Error feedback: how well is the user informed of misunderstanding of her
%   question
%   \item Predictability: how the system disambiguates / lets the user decide what
%   should be the correct interpretation
%   \item Performance: response time / evaluation metrics
% \end{itemize}



\subsection{Data sources}
\label{sec:data}
%\TODO{Discuss the different types of structured data models that have been
% used, make clear that while they vary in expressiveness, they commonly enables
%capturing resources/entities/objects and relationships between them. Besides
%domain information captured in terms of entities and relationships, also point
%out other popular types of information that have considered, e.g. temporal and
%spatial information}

Data are organized in \emph{structures} of various kind. In the case of
unstructured documents (e.g. raw textual documents), the structure is defined by
the corpus that contains the documents and the metadata possibly attached to the
documents (e.g. the title or the authors' names). In semi-structured documents
(e.g. \ac{WWW} pages) the structure explicitely surrounds the content of the
documents (e.g. categories in Wikipedia). In structured documents (databases,
knowledge bases, etc.) the nature of the structure depends on the logic adopted
when these documents have been created. In any case, data structures can be
reduced to a set of \emph{entities} and \emph{relations} between those entities
of possibly various kind.

An early data structure is called \emph{hierarchical
list} and used in the \textsc{Baseball} system~\cite{BAA}. We reproduce an
example of such a list below:
\begin{equation*}
\begin{array}{l}
\textnormal{Month}=\textnormal{July}\\
\quad\textnormal{Place}=\textnormal{Boston}\\
\quad\quad\textnormal{Day}=7\\
\quad\quad\textnormal{Game serial No.}=96\\
\quad\quad(\textnormal{Team}=\textnormal{Red Sox},\textnormal{Score}=5)\\
\quad\quad(\textnormal{Team}=\textnormal{Yankees},\textnormal{Score}=3)
\end{array}
\end{equation*}
This list could also be represented as a set of \emph{facts}. `Month', `Place',
`Team' and `Score' are \emph{entities}, `Game serial No.' is an attribute, and
there are to kinds of relations: hierarchical relation (depicted with white
spaces in the list representation) and a standard relation (depicted with
parentheses). 

We classify the different data structures as follows:
\begin{itemize}
  \item early data structures (Prolog databases; hierarchical lists)
  \item relational databases
  \item XML databases (which is an example of hierarchical database)
  \item ontologies; \emph{linked data}
\end{itemize}
The way to access (i.e. query) and modify data depends on the data structure.
For example, relational databases are logically represented with the relational
model (see section~\vref{sec:introduction-relational}) and queries are usually
expressed in a specific language, like \ac{SQL} (or \ac{MDX}).
Hierarchical lists are hierarchical structures (like XML documents) that are
composed of embeded key-value pairs with optional attributes. XML documents are
more generic, since keys in those documents are nodes (i.e. can be trees
themselves).
Ontologies are not more generic than XML databases from the structure point of
view, but ease the expression of semantic relationships between entities
(namely \emph{concepts} and \emph{individuals}).

% Dome data structures are dedicated to some types of information. For instance,
% temporal databases are similar to relational databases, but capture \emph{time}
% in addition. Indeed, in some domains, the truth value of a fact or of
% a resultset is determined by a \emph{timestamp}, while in classic database facts
% are true in general. 
% 
% At the end, all these data structures manipulate \emph{entities} (i.e. named
% properties of the database) and \emph{relations} between these entities.
% Any database selection query can be reduced to a set of constraints on these
% entities and relations. 





\subsection{Users' questions}
\label{sec:big-picture-question}

Traditional DBMSs provide an interface where users can query the data in various
ways, usually in a formal query language such as \ac{SQL}.
We present in the following two kinds of input to database interfaces, namely
\emph{keyword queries} and \emph{\ac{NL} queries}.


\subsubsection{Keyword query}
A \emph{keyword query} consists in a set of words traditionally used in the
context of document search. Traditionnally, documents are represented in a
vector space, where the vectors are composed of the terms of the documents.
Search engines have made this paradigm popular.
Hearst~\cite{Hearst:2011:NSU:2018396.2018414} reports that the number of words
in queries over \ac{WWW} search engines trends to increase (the experiment
compared queries performed over a month in 2009 and in 2010): queries from 5 to
8 words increase up to 10\% while queries from 1 to 4 words decrease up to 2\%.
It seems indeed that users are more and more aware of new capabilities of
state-of-the-art search engines, and are less reluctant to express their
information need in a more ``natural'' way.
Therefore we focus further on natural inputs to interfaces in the following
section.

\subsubsection{Natural language query}
In this state of the art, we focus on natural language interfaces, and not on
keyword search over structured data, even if the latter has become quite popular
lately.
A significant recent system with this respect is
\textsc{Soda}~\cite{blunschi2012}.

Early systems do not undersand well-formed sentences (usually in English), but
some basic syntactic constructions; such language was called ``English-like'' by
some authors~\cite{Woods:1973:PNL:1499586.1499695}. More recent systems try to
go further and to capture as much as possible regularities as well as some
irregularities of \ac{NL}.

Most of current systems handle only a subset of \ac{NL} questions. This
subset of \ac{NL} is called \emph{controled \ac{NL}}. Some systems even go
further, and are able to analyze why a question can not be answered. There are basically two
cases:
\begin{itemize}
  \item the question is beyond the system's linguistic coverage (i.e. the
  system does not understand the question)
  \item the underlying knowledge base (e.g. database) does not contain any fact
  answering the question (i.e. the semantics of the question is fully or partly
  captured but there is no answer to such a question)
\end{itemize}



\ac{NL} questions are classfied based on their \emph{type}:
\begin{itemize}
  \item \emph{facto\"id questions} (i.e.
questions that can be answered by an \emph{objective} fact, usually questions
staring with \emph{wh}-words except `why')
\item \emph{complex questions} which
answers are usually \emph{subjective} to the answerer, and for which there
might be several correct answers, possibly contradicting each other
\end{itemize}
Complex questions can be `why'-questions, `how'-questions or
\emph{definition}-questions.
Facto\"id questions have been defined by Soricut \&
Brill~\cite{Soricut:2006:AQA:1127331.1127342} as questions ``for which a
complete answer can be given in 50 bytes or less, which is roughly a few
words'' (see also Kwok et al. ``Scaling question answering to the Web, WWW).
Several finer classification have been proposed in specific domains or
applications. 
For instance, in the case of temporal questions, Saquete et
al.~\cite{Saquete:2004:SCT:1218955.1219027} have proposed the classification
reproduced table~\vref{tab:saquete-classification}.
\begin{table}
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{Question type}} &
\multicolumn{1}{c}{\textbf{Example}}\\\hline\hline
Single event temporal questions &
``When did Jordan close \\
 without temporal expressions & the port of Aqaba to Kuwait?''\\\hline 
 Single event temporal questions & 
``Who won the 1988 New \\
 with temporal expressions & Hampshire republican primary?''\\\hline
 \multirow{2}{*}{Multiple events temporal questions} & ``What did G. Bush do
 after\\
  & the U.N. Security Council\\
 \multirow{2}{*}{with temporal expressions} & ordered a global embargo on\\
  & trade with Irak in August 90?''\\\hline
  \multirow{2}{*}{Multiple events temporal questions} & ``What happened to world
  oil\\
   & prices after the Iraqi\\
  without temporal expressions & annexation of Kuwait?''\\\hline
 
\end{tabular}
\caption{Question classification for temporal databases from~\cite{Saquete:2004:SCT:1218955.1219027}}
\label{tab:saquete-classification}
\end{table}

\subsubsection{Error feedback}
NL interfaces output database queries.
Those queries must be then be executed by underlying \ac{DBMS}.
The execution of queries can lead to different failing states:
\begin{itemize}
  \item the query execution fails (the generated database query is not valid)
  \item the query execution lead to an empty result set
\end{itemize}
In both cases, the system may inform the user and/or suggest or rephrase the
query. This is a task performed by systems belonging to the range of
feedback-based approaches (see section~\vref{sec:feedback-driven}). 



\subsection{Internal representation}
\label{sec:big-picture-syntactic}

The syntactic representation of a question (also called \emph{parse tree}
because the tree representation is usually adopated) is an intermediate
representation, before the internal representation of the question (i.e. the
semantic representation) is being created. 
In many systems however, the syntactic representation also contain pieces
of semantic information. For instance, nodes of the syntactic tree contain
information about how to generate fragments of the target database query.
The nodes of the tree representation contain information about words and
relations between words of the question. 
Typical semantic information contained in a syntactic parse tree are the
database elements that those words refer to. 
Those semantic information (also referred to as \emph{meaning} in early
systems) are kept in a lexicon. 
The syntactic parse tree usually does not try to resolve ambiguities, and keep
all possible interpretations. Resolution of ambiguities is done afterward, when
the semantic representation is being built out of the syntactic representation.
Figure~\vref{fig:sota-parse-tree} is an example of parse tree of the question
``Who produced the most films?'' processed in the system described
in~\cite{Unger:2012:TQA:2187836.2187923}.
\begin{figure}
\Tree [ .S [ .WP who ] [ .VP [ .VBD produced ] [ .DP [ .DT {the most} ] [ .NNS films ] ] ] ]
\caption{Example of parse tree for the question ``Who produced the most films?''
from~\cite{Unger:2012:TQA:2187836.2187923}.}
\label{fig:sota-parse-tree}
\end{figure}
In the figure, nodes hold syntactic information (e.g. `WP' stands for
\emph{wh}-pronoun).
The result of the parse tree depends on the chosen parser.



\subsection{Database queries}
\label{sec:big-picture-semantic}
%\TODO{Discuss query languages; discuss the different types of queries expressed
%by these languages, e.g. entity queries, relational queries, additional
%constructs such as group by etc.}

\begin{table}
\centering
\begin{tabular}{llc}\hline
\multicolumn{1}{c}{\textbf System} & \multicolumn{1}{c}{\textbf Internal
representation} & {\textbf DB query}\\\hline\hline 
\textsc{Baseball}~\cite{Green:1961:BAQ:1460690.1460714} & specification list &
$\surd$\\\hline
\textsc{Lunar}~\cite{Woods:1973:PNL:1499586.1499695} & meaning representation
language & X\\\hline 
\textsc{Chat-80}~\cite{Warren:1982:EEA:972942.972944} &
logic expression & X\\\hline
\textsc{Team}~\cite{Grosz:1987:TED:25672.25674} & logic expression & X\\\hline
\textsc{Qwerty}~\cite{Nelken:2000:QTD:992730.992808} & logic expression & X\\\hline
\textsc{Irus}~\cite{Bates:1983:IRU:511793.511804} & meaning representation language
& X\\\hline
\textsc{Precise}~\cite{Popescu:2003:TTN:604045.604070} & graph representation &
X\\\hline
\textsc{Masque/SQL}~\cite{Androutsopoulos93masque} & meaning representation
language & X \\\hline 
\textsc{NaLIX}~\cite{Li:2005:NIN:1066157.1066281,Li:2007:DDN:1247480.1247643} &
\multirow{2}{*}{XQuery} & \multirow{2}{*}{$\surd$}\\
\textsc{DaNaLIX} & & \\\hline 
\textsc{C-Phrase}~\cite{Minock:2010:CSB:1715942.1716190} & $\lambda$-calculus & X
\\\hline
\textsc{Panto}~\cite{Wang:2007:PPN:1419662.1419706} & graph representation & X\\\hline 
\textsc{Orakel}~\cite{Cimiano:2007:PNL:1216295.1216330} & $\lambda$-calculus &
X\\\hline
Miller et al.~\cite{Miller:1996:FSA:981863.981871} &
semantic frames & X\\\hline 
Zettlemoyer et
Collins~\cite{DBLP:conf/uai/ZettlemoyerC05} & $\lambda$-calculus &
$\surd$\\\hline 
\textsc{Wolfie}~\cite{Thompson:2003:AWM:1622420.1622421} & Prolog & $\surd$
\\\hline 
\textsc{PowerAqua}~\cite{DBLP:conf/esws/LopezMU06} & graph
representation & X\\\hline
\textsc{DeepQA}~\cite{FerrucciBCFGKLMNPSW10} & semantic triples & $\surd$\\\hline
\end{tabular}
\caption{Semantic meaning representations}
\label{tab:semantic-meaning-representation}
\end{table}
The parse tree (which may or not contain semantic information in the nodes of
the parse tree) must then be \emph{interpreted} in some internal semantic
representation.
Table~\vref{tab:semantic-meaning-representation} displays semantic meaning
representations used in various systems.
As shown in the table, the internal semantic representation can be or not the
target query representation.
%The expressivity of the database query is discussed in section
%section~\vref{sec:big-picture-expressivity}. 
The semantic representation is intented to capture as much as possible the
user's intent, and is sufficient to generate the final database query. 
While the syntactic representation may contain lots of ambiguities, the semantic
representation does not. 
The adopted semantic representation depends on the data structure, and thus the
target \ac{DBMS}.
However, some systems contain components that translate the semantic
representation in the different database query languages, for instance when
there are several underlying systems with different query languages.
This kind of architecture participate to make the system
more \emph{domain-independant} (see
section~\vref{sec:main-dimensions-domain-independance} for more details).



\subsubsection{Query languages}
Relational databases are generally associated with the \ac{SQL} query language (and
temporal databases are associated to an extension of \ac{SQL} called temporal \ac{SQL}).
Multidimensional databases (e.g. data warehouses) are associated with MDX,
which is a \ac{SQL}-like query languages dedicated for handling measures, dimensions
and hierarchies (key concepts of multidimensional models).
Other data stuctures are associated with their own query language. For instance,
hierarchical lists (or specification lists) is not associated with a language,
but to a similar structure, which is a template where empty slots correspond to
the expected items of the resultset. XML databases are associated to XPath, and
semantic databases (e.g. \ac{RDF}) are usually queried with languages derived
from \ac{SPARQL} which looks similar to \ac{SQL}.

Recently, a new range of query languages have appeared and are associated to a
new class of \ac{DBMS}s, namely NoSQL (for `not only \ac{SQL}'). These
\ac{DBMS}s are optimized for special data models and large data sets beyond the
scope of this survey.

In the following we review the main primitives that appear in query languages:
selection, constraints and query modifiers.


\paragraph{Selection}
\label{sec:query-languages-selection}
a selection consists in choosing the expected database attribute to appear in
the resultset.
In \ac{SQL}, the selection is expressed as \verb?SELECT t.x?, where $t$ is a
table and $x$ a field or attribute belonging to $t$. In \ac{SPARQL} it
corresponds to \emph{variables} that are defined in the query, and that must
satisfy the constraints defined in the \verb?WHERE? section of the \ac{SPARQL}
query (see section~\vref{sec:query-languages-constraints}).
In \ac{MDX}, the selection corresponds to the ordered set of dimensions or
measures that should appear in the resultset, with the information about the
level of the expected attributes in the hierarchy, and the filters. For
instance, \verb?SELECT Country.[All members]? corresponds to the selection of
the dimension `Country' with no filter (i.e. selection of all members of the
dimension `Country').
Selection for a hierarchical list corresponds to a slot in the structure
template.
A selection in a XPath query can be expressed: \verb?nodename? (selects all
nodes with the name \emph{nodename}), \verb?/nodename? (selects the root node
\emph{nodename}), \verb?//nodename? (selects all nodes \emph{nodename} no
matter where they occur), \verb?@attr? (selects attribute \emph{attr}),
etc.\footnote{for more details,
see~\url{http://www.w3schools.com/xpath/xpath_syntax.asp}}. 
In \ac{SPARQL}, a selection can be expressed also differently.
Indeed, the output of a \ac{SPARQL} query can be a data structure of the same
kind than the data themselves (i.e. a graph). This is performed with the
\verb?CONSTRUCT? keyword.

\paragraph{Constraints}
\label{sec:query-languages-constraints}
Canstraints in database queries aim at reducing the size of the resultset.
In the case of \ac{SQL}, \ac{SPARQL} and \ac{MDX}, such constraints are
introduced with the \verb?WHERE? keyword. In \ac{SQL}, these constraints can
define how different tables can be \emph{joined} together such that attributes
belonging to different tables can appear in a single view (i.e. in the
resultset). Other kinds of constraints are about the values of attributes.
In \ac{SPARQL}, the constraints are expressed with \emph{triples} (which can be
made of some \emph{blank node} or undefined node). Thus, a \ac{SPARQL}
constraint is about relations between entities (see \emph{joins} for relational
databases), or about entities themselves (which type must be the entity? If
this is a \emph{litteral}, in what range of value must it belong?,etc.).



\paragraph{Query modifiers}
\label{sec:query-languages-modifiers}
Conceptually, modifiers are primitives that modify the resultset afterwards.
Such primitives are:
\begin{itemize}
\item order the resultset along a given attribute (e.g.
\verb?ORDER BY? in \ac{SQL})
\item select at most $n$ \emph{tuples} of the
resultset (e.g. \verb?LIMIT x? in \ac{SQL})
\item select only different tuples
(e.g. the keyword \verb?DISTINCT? in \ac{SQL})
\item etc.
\end{itemize}
In \ac{SPARQL}, it is possible to test if a set of constraints can be satisfied
from the data, and to combine these tests with optional constraints
(\verb?OPTIONAL? keyword) or mandatory constraints (\verb?FILTER? keyword).





%\subsubsection{Expressivity of database query}
%\label{sec:big-picture-expressivity}




\subsection{Domain-independence}
\label{sec:main-dimensions-domain-independance}
\emph{Domain-independance} is the ability of a system, to operate accross
different domains simultaneously. 
In practice, this means that a domain-independant system would be on top of
different data sources that belong to different domains, and that this system is
able to meet users' requests for any of these domains. 
The main difficulty of building domain-independant systems is the ability to
translate \ac{NL} constructions differently in the different application domains.
For instance, the qualitative expression ``middle-aged'' can be interpreted
completely differently accross the different domains (see how it is interpreted
in our \ac{QA} system in section~\vref{sec:pattern-running-example}).


\subsection{Portability}
\label{sec:main-dimensions-portability}
A system is said \emph{portable} if it can be easily configured for different
domains (and not the first domain for which it was first designed).
Domain-independance (section~\vref{sec:main-dimensions-domain-independance}) is a
step further, because different domains are considered simultaneously; while in
portable systems the challenge consists in easing the effort of the system
administrator when configuring the system.
Portable systems are also referred to as \emph{configurable} interfaces in the
following.
Configurable interfaces let users improve the system's capabilities, both the
linguistic coverage $S$ and the system coverage $K$ (see
Fig.~\vref{fig:interface}).


Minock et al.~\cite{Minock:2010:CSB:1715942.1716190} have identified three kinds
of configuration applicable to \ac{NL} interfaces:
\begin{itemize}
  \item let users name database elements, so that phrases used in the question
  can be easily mathed with database elements
  \item offer a \ac{GUI} that automatically generates semantic rules or a
  grammar for translating \ac{NL} questions in database queries
  \item use machine learning techniques to induce semantic rules or a grammar
  from annotated corpora
\end{itemize}
These ways of configuring interfaces are not of equal cost: for instance, the
third one (machine learning techniques) can be highly costly if it requires a
huge volume of annotated data. The cheapest configuration is based on user
interaction, where no initial configuration is needed, but domain-specific
knowledge is learned based on user interaction. An example of such a
system is \textsc{NaLIX}~\cite{Li:2005:NIN:1066157.1066281}.




\subsection{Metrics}
\label{sec:big-picture-metrics}
Several evaluation metrics have been introduced in various systems. We review
them briefly below.
We present the interesting figure~\vref{fig:interface} copied from Han et al.
work~\cite{Han:2010:NLI:1719970.1720022}. It shows the tradeoff between
linguistic coverage ($S$ in the figure) and the
expressions that can be answered from a knowledge base ($K$ in the figure). The
goal of any interface would be that the linguistic coverage comprises all
expressions that can be answered from the knowledge base.

 \begin{figure}[!h]
\centering
\begin{tikzpicture}
\tikzstyle{ell}=[ellipse,draw,minimum width=80pt,minimum height=30pt]
\node[ell](s){$S$};
\node[ell,right=10pt of s.center](k){$K$};
\node[left=0pt of s.east]{$S\cap K$};
\end{tikzpicture}
\caption[Linguistic coverage vs. logical coverage]{Linguistic coverage vs.
logical coverage within interfaces, copied
from~\cite{Han:2010:NLI:1719970.1720022}. $S$ represents the range of
expressions that are understood by the system and $K$ the range of expressions
that can be answered from the knowledge base.}
\label{fig:interface}
\end{figure}

\subsubsection{Fluency$_{Woods}$}
Woods~\cite{Woods:1973:PNL:1499586.1499695} defines \emph{fluency} as ``the degree
to which virtually any way of expressing a given request is acceptable''.
Fluency measures roughly how easy to use is a system. 
Intuitively, a good fluency means a ``natural'' interface in the sense of
natural interfaces~\cite{Hearst:2011:NSU:2018396.2018414}. This requires
advanced natural language techniques, and would probably lead to systems that
can interpret more expressions than those that can be actually answered by the
knowledge base ($S\setminus K$ in figure~\vref{fig:interface} would not be
negligeable).
 

\subsubsection{Completeness$_{Woods}$}
\emph{Completeness} was first defined in Woods' work related to the {\textsc
Lunar}~\cite{Woods:1973:PNL:1499586.1499695} system. 
It measures if there is a way of expressing any query which is logically
possible from the database. In the end, it measures if the interface can answer
all possible questions. In figure~\vref{fig:interface}, completeness would be
represented by $S$ comprising $K$ ($K\subset S$).


\subsubsection{Soundness$_{Popescu}$}
An interface is said \emph{sound} if ``any \ac{SQL} output is a valid interpretation
of the input English sentence''~\cite{Yates:2003:RNL:604045.604075}. In
figure~\vref{fig:interface}, this evaluates the expressions of $V=S\cap K$.


\subsubsection{Completeness$_{Popescu}$}
An interface is said \emph{complete} if it ``returns all valid interpretations of
input sentences''~\cite{Yates:2003:RNL:604045.604075}.
Yates et al. have reused this metrics in their \textsc{Exact} system (not surveyed
here). They do not claim that users should restrict their questions to query
only tractable qestions; but they suggest that identifying classes of questions
that are semantically tractable and measuring the prevalence of these questions
is the direction of current research.


\subsubsection{User's intent}
Yates~\cite{Yates:2003:RNL:604045.604075} makes an assumption, that if the
interface is sound, complete and if a single \ac{SQL} statement can be produced from
a \ac{NL} question, then the interface has unambiguously determined user's intent. 



\subsubsection{Predictability}
Within user interfaces, predictability has been pointed out as the essential
feature of user interfaces in the 90' by
Norman~\cite{Norman:1994:MPI:176789.176796} and Schneiderman et
Maes~\cite{Shneiderman:1997:DMV:267505.267514}. 
Predictability and the feel of control seems however to be in contradiction with
personalization, which is the one pillar of recent IR systems. 
















%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ANATOMY OF SYSTEMS                  %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Anatomy of \ac{NL} interfaces systems}
\label{sec:anatomy}
The main problem to solve is to map user's intent expressed in a natural way
(say unstructured way) to a database query, which is a structured expression
where there is no room for ambiguity unlike natural language. The problem that
\ac{NL} interfaces systems try to solve is equivalent to finding a mapping $f$
between natural language questions $q$ and families of structured queries
$(q^\prime_i)_i$ (where the index corresponds to the \emph{rank} of the
structured query):
% \begin{equation}
$$f:\left\{
\begin{array}{l}
Q\rightarrow (Q^\prime)^I\\
q\mapsto(q^\prime_i)_i
\end{array}\right.$$
% \end{equation}
where $i\in I=[0,n]$ is the index of $q^\prime_i$.
 


\subsection{Lexicon}
The lexicon is a data structure that is used to reduce the ambiguity in words
when analyzing users' questions. 
\ac{NL} interfaces are usually composed of two lexicons: a domain-dependant lexicon and a
domain-independant lexicon. 
The domain-dependant lexicon defines words in terms of semantic rules (see
section~\vref{sec:anatomy-semantic-rules}).
The domain-independant lexicon defines how to interpret words independantly from
a specific domain. For instance, \emph{wh}-question words define constraints
on the structure of the expected database query (e.g. in \ac{SQL}). 



\subsection{Semantic rules}
\label{sec:anatomy-semantic-rules}
Semantic rules define the semantics of question words. The input is a node, or a
part of the parse tree being constructed, and the output contains information on
how to construct part of the final database query. The final database query will
be generated from the parse tree, which contains semantic information in the
nodes, in addition to lexical and syntactic information. 
The idea behind this process is a linguistic theory [missing ref], where the
global meaning of a sentence is defined by the individual meanings of words /
expressions in the sentence, and by the syntactic relationships between the
words / phrases. 




\subsection{Main problems to solve}
The main problems to solve are threefold.
First, \ac{NL} systems should have the broadest linguistic coverage (such that
in the best case, users can employ teir own terminology).
Secondly, there should be some mechanisms that allow users to query other
domains with a low porting cost. 
Thirdly, there should be some components responsible for checking that
generating database queries are valid (in terms of syntax).



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% OVERVIEW SECTION                    %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Taxonomy of main approaches}
\label{sec:overview}

Figure~\vref{fig:big-picture} depicts the big picture of translating \ac{NL}
queries to database queries.
\begin{figure}
\centering
\begin{tikzpicture}[transform shape]
%\tikzset{VertexStyle/.append style = {minimum size = 3pt}}
\node at (0,0) [draw,circle,inner sep=2pt](1){1};
\node at (2,0) [draw,circle,inner sep=2pt](2){2};
\node at (4,0) [draw,circle,inner sep=2pt](3){3};
\node at (6,0) [draw,circle,inner sep=2pt](4){4};
\node at (8,0) [draw,circle,inner sep=2pt](5){5};
\node at (0,-1) [draw=none,fill=none,label={[label distance=3pt]{NL query}}]
{}; 
\node at (2,0) [draw=none,fill=none,label={[label
distance=3pt]{intermediate query}}] {}; 
\node at (4,-1) [draw=none,fill=none,label={[label
distance=3pt]{query graph}}] {};
\node at (6,0) [draw=none,fill=none,label={[label
distance=3pt]{logic query graph}}] {};
\node at (8,-1) [draw=none,fill=none,label={[label
distance=3pt]{database query}}] {};
\draw[->](1) edge[bend left] (2);
\draw[->](2) edge[bend left] (3);
\draw[->](3) edge[bend left] (4);
\draw[->](4) edge[bend left] (5);
\draw[<->](1) edge[bend right] (3);
\draw[->](4) edge[bend left] (3);
\end{tikzpicture}
\caption{Big picture of \ac{NL} interfaces systems}
\label{fig:big-picture}
\end{figure}
In the figure, nodes represent various steps involved in the translation (from
\ac{NL} questions to structured queries) and edges the iterations performed by
actual systems surveyed in this work.
We consider two main ranges of approaches, namely \emph{classic translation
approach} and \emph{iterative approach} introduced below, which are then further
refined.

\begin{itemize}
  \item \underline{Classic translation approach:} The classic translation
  approach consists in going from
\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}
to \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {5}}} through
 \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}},
\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {3}}}
and 
\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {4}}} (step
\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {2}}} being optional).

To reach step \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {3}}}, semantic rules are needed to find out what database elements
should be associated to question phrases. 
  \item \underline{Iterative approach:} The iterative approach does not go
  directly from \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}} to
\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {4}}}, but goes back to
\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {3}}} (and/or to
\raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {4}}}) several times before
reaching step \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {5}}}.
Those iterations correspond to question reformulations, that involve
user-feedback to interpret semantically the question in terms of a database
query.
\end{itemize}

The way translation from \ac{NL} questions to formal query has evolved over years.
Table~\vref{tab:overview} presents an overview of major systems that we take into
consideration in this survey.

\begin{table}
\centering
\rotatebox{90}{
\begin{minipage}{\textheight}
\begin{tabular}{|c|c|l|c|c|c|c|c|c|}\hline
%\begin{tabular}{|c|c|c|c|c|c|c|c|}\hline
\multirow{2}{*}{\textbf{Taxonomy}} & \multirow{2}{*}{\textbf{Approach}} &
\multirow{2}{*}{\textbf{System}} &
\multicolumn{6}{c|}{\textbf{Dimensions}}\\\cline{4-9}
& & & {\textbf Q} & {\textbf D} & {\textbf A} & {\textbf P} &
{\textbf L} & {\textbf E}\\\hline\hline 
\multirow{7}{*}{Classic translation approach} & \multirow{2}{*}{Domain-dependent
semantic parsing} & {\textsc Baseball}~\cite{Green:1961:BAQ:1460690.1460714} & & & & & & \\\cline{3-9}
& & \textsc{Lunar}~\cite{Woods:1973:PNL:1499586.1499695} & & & & & &
\\\cline{2-9}
 & \multirow{5}{*}{Complex question translation} & {\textsc
Chat-80}~\cite{Warren:1982:EEA:972942.972944} & & & & + & + & \\\cline{3-9}
& & \textsc{Irus}~\cite{Bates:1983:IRU:511793.511804} & & & & + & &
\\\cline{3-9}
&  & \textsc{Qwerty}~\cite{Nelken:2000:QTD:992730.992808} & & temporal & & - & &
\\\cline{3-9}
&  & {\textsc
Precise}~\cite{Popescu:2004:MNL:1220355.1220376,Popescu:2003:TTN:604045.604070}
& & & & & & + \\\cline{3-9}
& & \textsc{Panto}~\cite{Wang:2007:PPN:1419662.1419706} & & & & & + & + \\\hline
\multirow{10}{*}{Iterative approach} & \multirow{5}{*}{Feedback-driven
approaches} & \textsc{Masque/SQL}~\cite{Androutsopoulos93masque} & & Prolog/SQL & & + & &
\\\cline{3-9} 
& & \textsc{Team}~\cite{Grosz:1987:TED:25672.25674} & & & & + & & +
\\\cline{3-9}
& & \textsc{NaLIX} \&
\textsc{DaNaLIX}~\cite{Li:2005:NIN:1066157.1066281} & & & & + & & \\\cline{3-9}
& & \textsc{C-Phrase}~\cite{Minock:2010:CSB:1715942.1716190} & & & & & &
\\\cline{3-9}
& & {\textsc
Orakel}~\cite{Cimiano:2007:PNL:1216295.1216330} & & & & + & & \\\cline{2-9}
 & \multirow{3}{*}{Learning-based approaches} & Zettlemoyer
et al.~\cite{DBLP:conf/uai/ZettlemoyerC05} & & & & & & \\\cline{3-9}
& & Miller et al.~\cite{Miller:1996:FSA:981863.981871} & & & & & & \\\cline{3-9}
& & \textsc{Wolfie}~\cite{Thompson:2003:AWM:1622420.1622421} & & & & & &
\\\cline{2-9}
 & \multirow{2}{*}{Schema-unaware approaches} & {\textsc
PowerAqua}~\cite{DBLP:conf/esws/LopezMU06} & & & & & & \\\cline{3-9}
& & \textsc{DeepQA}~\cite{DBLP:journals/aim/FerrucciBCFGKLMNPSW10} & & & & & &
 \\\hline
\end{tabular}
\end{minipage}
}
\caption{Overview of major \ac{NL} interfaces to structured data. `Q' stands for
`Query', `D' for `Data', `A' for `Answer', `P' for `Portability', `L' for
`Linguistic coverage' and `E' for `Error feedback'}
\label{tab:overview}
\end{table}

\subsection{Domain-dependent semantic parsing}
Early systems belong to this class of approaches.
The amount of knowledge necessary to translate \ac{NL} questions to database
queries is encoded in a lexicon.
The systems allow users to employ a subset of English (i.e. a controlled
language) to query databases. 
The lexicon, however, is a huge linguistic resource which defines how each word
and/or phrases must be translated into database elements, and what semantic
rules to trigger to get, in the end, the desired database query.



\subsubsection{Lexicon}
The lexicon is a resource that defines a set of words that belong to a
domain. Those words are associated to a meaning, which is a semantic rule
that controls how this word must be interpreted in the current data domain and
for the data structure.
In addition, the lexicon also contains a list of specific rules that modify 
the global meaning of the query being generated, based on some words that are
already defined in the lexion.
The lexicon usually combines both syntactic with semantic pieces of information.
For instance, the same word would have a different interpretation whether it is
a noun or a verb in the sentence where it occurs. 

\subsubsection{Limitation}
The domain dependence is however the great limitation of those systems; this is
due to the cost of the lexicon. Indeed, porting such systems to other domain
would mean provide a new lexicon and corresponding semantic rules, which is
highly costly.


 







\subsection{Complex question translation}
The next generation of \ac{NL} interfaces aims at increasing their linguistic
coverage.
This is performed with the distinction of both domain-dependent knoweldge and
domain-independent knowledge.
The domain-dependent knowledge base consists in semantic rules triggered by
words, phrases or syntactic information encoded in a parse tree. Those rules
produce fragments of the target query language (or of the intermediate query
language) to be then combined and modified to generate the final qurey language.
The domain-independent knowledge base is composed of lexical information in a
dictionary (which might be completed with domain-dependent knowledge, like the
most likely senses of words used in the application domain).







\subsection{Feedback-driven approaches}
This range of systems translate \ac{NL} questions to database queries basing on
users' feedback. 
We distinguish between the following kinds of feedback:


\subsubsection{Authoring tool configuration}
Authoring tools are \ac{GUI} that permits users to edit domain-specific
knowledge (e.g. the lexicon that is used to translate \ac{NL} questions to
internal queries).
Editing such domain-specific knowledge within these tools can be straighforward
(like synonyms to be used in user's question to describe the same database
elements) or more complex (like semantic rules to translate user's terms to
logical elements).
Describing this knowledge is not an easy task for standard users, for instance
semantic rules that define how to map words and phrases to logical elements, and
rules that define how information from the parse tree must be combined to
produce the final logical query.
For that reason, recent advanced systems try to infer such rules basing on
dialog-like interaction with the user.


\subsubsection{Interactivity}
Authoring tools introduced in the previous section are intended to be used as a
preliminary task when porting the interface to other domains. 
Other systems do not explicitely ask users to answer questions to get the
domain-specific knowledge, but infer this knowledge on the basis of
interaction. 
This range of systems suggests \ac{NL} questions to users that could be
reformulations of the current user's question. 
To the best of our knowledge, there are two different kinds of such interaction,
when the system cannot interpret a question:
\begin{itemize}
  \item the system tries to change words and/or phrases in user's question, so
  that the system is able to interpret the question
  \item the system also comprises a repository of successfully answered
  questions, and suggests one of those questions replacing some slots with terms
  used in the user's question, and ensuring that the generated question can be
  interpreted by the system
\end{itemize}
Recent work~\cite{DBLP:conf/icde/KoutrikaSI10} investigate how to present
similar questions in \ac{NL} as interpretations of the current question. 
This is a way of making the user think she controls what is happening, which is
one of the expected features in modern interfaces. 
This paraphrasing feature is also present in \textsc{NaLIX} and {\textsc
DaNaLIX}~\cite{Li:2005:NIN:1066157.1066281,Li:2007:DDN:1247480.1247643} and {\textsc
C-Phrase}~\cite{Minock:2010:CSB:1715942.1716190} works.



\subsection{Learning-based approaches}
The \emph{learning-based approach} consists in learning a grammar or a set of
rules that map \ac{NL} sentences to logical forms. 
Learning-based approaches are popular among the NLP community. Indeed, learning
techniques reduces the cost of linguistic resources, which is being learned over
time.
In the case of \ac{NL} interfaces, this allows to port the system to other
domains with a limited cost. 
Most systems need a corpus of labelled examples, e.g. a set of sentences mapped
to their corresponding logical form.

However, such corpora are rarely available, and are costly to produce.
Some systems adopt strategies to address this (see \textsc{Wolfie}
section~\vref{sec:wolfie} for instance).


\subsubsection{Statistical models}
\label{sec:statistical-model}
Table~\vref{tab:statistical-models} summarizes the different statistical models
used by learning-based systems.
\begin{table}
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf System} & \multicolumn{1}{c}{\textbf
Model}\\\hline\hline Miller et al.~\cite{Miller:1996:FSA:981863.981871} &
recursive transition network\\\hline
Zettlemoyer et Collins~\cite{DBLP:conf/uai/ZettlemoyerC05} & log-linear
model\\\hline
\end{tabular}
\caption{Different statistical models used by learning-based systems}
\label{tab:statistical-models}
\end{table}
Miller et al.~\cite{Miller:1996:FSA:981863.981871} use a probabilistic recursive
transition network and consider the probability $P(T|W)$ of a parse tree $T$
given a word string $W$:
$$P(T|W)=\frac{P(T)\times P(W|T)}{P(W)}$$
This model combines both \emph{state transition probabilities} and \emph{word
transition probabilities}. The state transition probability concerns the
labelling of a combination of semantic and syntactic information recursively
(the label of a node is computed given the labels of the previous node in the
syntactic order, and the parent node).
The word transition probability is the probability of a word, given the previous
syntactic word and a semantic information (attached to the parent node in the
parse tree).

Zettlemoyer et Collins~\cite{DBLP:conf/uai/ZettlemoyerC05} use a log-linear
model to learn a combinatory categorial grammar.
This grammar, which also defines the domain-specific lexicon of the parser,
contains semantic information (\emph{i.e.} to which $\lambda$-calculus formula a
given word and/or phrase should be associated).







\subsubsection{Shortcomings of these approaches}
These components rely on statistical models, that often require a large amount
of annotated data.
For instance, a statistical parser require a significant number of questions
annotated with corresponding parse tree. Table~\vref{tab:training-data} shows the
volume of training data corresponding to two systems (Miller et al. and
Zettlemoyer et Collins).
\begin{table}
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf System} & \multicolumn{1}{c}{\textbf Training data
volume}\\\hline\hline Miller et al. & 4000 sentences\\\hline
Zettlemoyer et Collins~\cite{DBLP:conf/uai/ZettlemoyerC05} & 600/500 training
examples\footnote{The system is being experimented on two data sets}\\\hline
\end{tabular}
\caption{Volume of training data of different systems}
\label{tab:training-data}
\end{table}
Besides, training data should also be composed of negative examples, whose
availability is a strong requirement as pointed out by Giordani et
Moschitti~\cite{Giordani:2009:SSK:1617768.1617815}.


\subsection{Schema-unaware approaches}
A range of recent systems aggregate potential answers from different sources.
This range of systems have emerged in parallel with the success of the \ac{SW}
technologies.
The particularity of these approaches is that they cannot rely on the schema of
underlying knowledge bases because the different sources are potentially
modelled in very different ways. 
Thus, the set of systems represented by these approaches try to bridge the gap
between user terminologies and the different terminologies used in the different
knowledge bases that the interfaces talk to.
These approaches can be seen as an extension of several approaches presented
before, namely `complex question translation' (because the question is
further analyzed to map it with the terminology used in the different knowledge
bases) and `learning-based approaches' (because some of the systems
represented in these approaches contain learning components).

\subsubsection{Terminology mapping}
In previous approaches, end-users were not aware of how data were modeled in the
knowledge base. This requires advanced natural language processing to map user's
terminology with database terminology. In schema-unaware apppraoches, the
systems do not talk to a single database, but to potentially an unlimited number
of sources where the structured data are to be found. Each of those sources has
its own logical schema, naming conventions, etc.
Then, the system must know how to communicate with these knowledge bases, and
what strategies to adopt to reduce the computation cost of generating
distributed queries (see below).
In some cases, it's also critical to aggregate results from different sources.


\subsubsection{Shortcoming of these approaches}
\label{sec:schema-unaware-shortcoming}
The major shortcoming is related to scalability and efficiency. In particular,
in cases where the knowledge bases are searched over Internet (for instance
Linked Data\footnote{See~\url{http://linkeddata.org/}.}), the computational
complexity of mapping and expanding user query terms to the terminology of
respective knowledge bases is a limiting factor. Internet latency in this case
is also to be considered since the databases are not hosted.
















%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%   DOMAIN-DEPENDENT SEMANTIC PARSING %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Domain-dependent semantic parsing}
\label{sec:domain-dependent}
The major systems are \textsc{Baseball}~\cite{Green:1961:BAQ:1460690.1460714} and
\textsc{Lunar}~\cite{Woods:1973:PNL:1499586.1499695}.
Both systems are surveyed below.
\begin{table}
\centering
\begin{tabular}{llll}\hline
\multicolumn{1}{c}{\textbf{System}} & \multicolumn{1}{c}{\textbf{D}} &
\multicolumn{1}{c}{\textbf{SK}} & \multicolumn{1}{c}{\textbf{S}} \\\hline\hline
\multirow{2}{*}{\textsc{Baseball}} & \multirow{2}{*}{specification lists} &
lexicon & \multirow{2}{*}{cost of the lexicon}\\
 &  & semantic rules &  \\\hline
\multirow{3}{*}{\textsc{Lunar}} & \multirow{3}{*}{proprietary schema} &
\multirow{2}{*}{lexicon} & cost of the lexicon \\
 & &  \multirow{2}{*}{semantic rules} & simple data structures \\
  & & & exact term matching \\\hline
\end{tabular}
\caption{Systems belonging to the \emph{domain-dependant parsing} range of
approaches. `D' stands for ``data structure'', `SK' for ``semantic knowledge''
and `S' for ``Shortcomings''.}
\label{tab:sota-domain-dependant}
\end{table}
Table~\vref{tab:sota-domain-dependant} overviews these systems. 

\subsection{\textsc{Baseball}~\cite{Green:1961:BAQ:1460690.1460714}}
\textsc{Baseball} aims at answering questions about baseball results. The main
concepts in the data are games, teams, scores, day, month, place (of a
game). The domain is quite close, which means that there is few ambiguity in
terms of word meaning. 



\subsubsection{Data structure}
\label{sec:baseball-data-structure}
The data structure is a list structure called \emph{specification list}. This is
a hierarchical structure, where each level correspond to an attribute associated
to a value, or a nested specification list.
An attribute can also be modified.
For instance, the city \emph{Boston} is represented by
\verb?City = Boston?; an unknown number of games is represented by
\fvset{commandchars=\\\{\},codes={\catcode`$=3\catcode`_=8}}
\Verb|Game$_{\textnormal{number of}}$ = ?|.


\subsubsection{Semantic knowledge}
The semantic knowledge necessary to map questions to the data structure is
defined in the lexicon on the one hand, and in a set of semantic rules (called
subroutines) on the other hand. 

\paragraph{Lexicon}
The lexicon maps words or idioms to their meaning in the same data structure
presented section~\vref{sec:baseball-data-structure} as well as their POS
(part-of-speech).
Wh-words are also referenced in the lexicon. 


\paragraph{Subroutines}
Subroutines are a set of semantic rules that modify the query representation or
make choices in cases of ambiguity. 
For instance, a word that can have two POS (noun and verb) is disambiguated with
the help of some heuristic, like the fact that any sentence can only have one
main verb. 
A meaning modification routine consists for instance in adding a modifier in an
attribute. 
For example, the word `team' has the meaning \verb?Team = (blank)?. The word
`winning' before the word `team' will lead to the modification
\Verb|Team$_{\textnormal{winning}}$ = (blank)|. 





\subsubsection{Question translation step by step}
\label{sec:baseball-question-translation}

\paragraph{Dictionary lookup}
The question is first tokenized in words, and empty words are left aside.
The remaining words as well as adjoining words are looked up in the lexicon for
the POS and the meaning. The output is a list of attribute/value pairs with
extra information like the POS of each word and if the word is a wh-word. 


\paragraph{Syntactic bracketting}
POS of each word is used to syntactically analyze the question in term of
phrases. Phrases are surrounded by brackets and the main verb is left aside.
The parsing proceeds from right to left and bases on heuristics. For instance,
prepositions are associted to the rearest right noun phrase to generate a
prepositional phrase. 
Each phrase is then tagged with its functional role in the sentence (subject
and object).


\paragraph{Subroutines activation}
Some words trigger additional rules that modify the data structure of the query
and/or disambiguate some words. 
For instance, the word `What' followed by the word `team' (whose
meaning is \verb?Team = (blank)?) modify the latter meaning to \verb|Team = ?|. 




\subsubsection{Shortcomings}
The main shortcoming of the system is the cost that is required to produce the
semantic knowledge base (the lexicon and the set of semantic rules).
Authors suggest an improvement for handling unknown words, where the meaning
could be expressed based on existing words in the lexicon.
Porting the system to another domain requires a significant effort, since one
needs to rewrite entirely the knowledge base.






\subsection{\textsc{Lunar}~\cite{Woods:1973:PNL:1499586.1499695}}
\textsc{Lunar} was published more than ten years after \textsc{Baseball}. 
\textsc{Lunar} (unlike \textsc{Baseball}) has been experimented with scientific
data and targets expert users. 

\subsubsection{Data structure}
Authors do not give much details about the data (provided by the
NASA\footnote{See~\url{http://www.nasa.gov}.}).
It looks almost like relational tables with a dedicated formal query language.
The data are about chemical analyses of lunar rocks from the Apollo 11
expedition\footnote{See~\url{http://www.nasa.gov/mission_pages/apollo/missions/apollo11.html}.}.
The application domain is again very closed but more complex than that of
\textsc{Baseball}; some expertise is required to validate the answered provided
by \textsc{Lunar}.

\subsubsection{Internal query representation}
\label{sec:lunar-query-representation}
Woods~\cite{Woods:1973:PNL:1499586.1499695} has defined a meaning representation
language which is used to represent internally users' intent. This language is a
combination of propositions (whose evaluation leads to a truth value) and
commands (or actions to be performed by the \ac{DBMS}). Propositions are
composed of database objects (classes or table names and instances or
variables).
Propositions are combined together with logical predicates like \verb?OR?,
\verb?AND?, etc.
Commands are \verb?TEST? (to test the truth of a proposition), \verb?PRINTOUT?
to print out the evaluation of a proposition and commands for loops
(\verb?FOR?) to be used with a quantifier.

\subsubsection{Question translation step by step}
\paragraph{Syntactic parsing}the input question is first syntactically parsed
using a general-purpose grammar (domain-independent grammar). The grammar 
is based on Augmented Transition Network linguistic
formalism~\cite{Woods:1970:TNG:355598.362773}, which is almost equivalent to the
context-free grammar formalism.
The syntactic parsing also needs a lexicon which contains terms belonging to the
domain. It contains for instance technical names of samples recolted during
the expedition. `S10046' is thus recognized as a proper noun by the
parser~\cite{Woods:1986:SQN:21922.24336}.

\paragraph{Semantic mapping}a set of rules transform the syntactic parse
tree into a meaning representation (see
section~\vref{sec:lunar-query-representation}).
The rules are triggered both by the syntactic structure of the parse tree (the
label of nodes like \verb?NP? for noun phrase or \verb?VP? for verb phrase) but
also on the words present in the question (like `S10046' which is a sample name in
the lexicon, or `contain' which has a semantics also defined in the lexicon).
The rule results in a database query pattern with slots to be filled with items
from the lexicon.
As the system also supports quantification, authors presents some heuristics on
how to resolve the attachment in the generated propositions.

\paragraph{Query execution}The internal query representation is composed of
commands that the database retrieval component understands and executes to
retrieve data and display/print them.


% \subsubsection{Something to tell somewhere?}
% The meaning representation is in the form of a database query (in particular
% through database commands). This is the intermediate step between \textsc{Baseball}
% where the internal representation (spec lists) is in the form of the data
% themselves and the recent systems (like the one about household compliance)
% where the database commands are generated afterwards from a logical
% representation of user's intent (independent from the data AND from the
% database).





\subsubsection{Shortcomings}
The shortcomings of the \textsc{Lunar} system are threefold. First, the \ac{NL}
processor (called ``English processor'' by its authors) is closely tailored to
``the way geologists habitually refer to'' database elements~\cite{PIN}.
Therefore, the system is highly domain-dependant, and the lexicon cannot be
re-used in different domains.
Secondly, authors state that it is tailored to very simple data structure, and
that the system -- or part of it -- must be re-implmented in order to consider
new data sources. Last but not least, the database entities matching approach is
an exact term matching technique, which is not applicable in most domains
(e.g. the \emph{logic name} given by the database administrator to tables is
usually different from the \emph{conceptual name} considered by experts of the
domain).
As an example, compare logical names in table~\vref{tab:evaluation-census-size}
and conceptual names on figure~\vref{fig:introduction-relational}.




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%   COMPLEX QUESTION TRANSLATION      %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Complex question translation}
\label{sec:complex-question}
This class of approaches aims at increasing the linguistic coverage of \ac{NL}
interfaces.
In addition to the systems of this class need domain knowledge, but the involved
processing are independant from the unerlying \ac{DBMS}.

As for domain-dependant systems, the domain-dependent knowledge base corresponds
to a lexicon which defines the meaning of words and expressions. This meaning is
expressed with a set of semantic rules that map words and expressions plus
syntactic information to fragments of database queries. 

Besides, the domain-independant knowledge base is composed of the following
components:
\begin{enumerate}
  \item a syntactic parser which operates iteratively with the semantic rules
  \item a set of NLP tasks that aim at resolving linguistic ambiguities (such as
  anaphora resolution and ellipsis  
  resolution~\cite{Bates:1983:IRU:511793.511804})
\end{enumerate}
The syntactic parsing might be performed iteratively and in parallel with the
semantic component process, as it is the case in
\textsc{Irus}~\cite{Bates:1983:IRU:511793.511804}.

Table~\vref{tab:sota-complex-question-translation} overviews various
systems that we compare in this section.
\begin{table}
\centering
\begin{tabular}{llll}\hline
\multicolumn{1}{c}{\textbf{System}} & \multicolumn{1}{c}{\textbf{D}} &
\multicolumn{1}{c}{\textbf{SK}} & \multicolumn{1}{c}{\textbf{S}}\\\hline\hline
\multirow{2}{*}{\textsc{Chat-80}} & \multirow{2}{*}{Prolog} & vocab. (100
domains) & NL ambiguities \\\cline{3-4}
 & & domain-indep. know- & \multirow{2}{*}{presupositions}\\
 & & ledge base & \\\hline
 \multirow{3}{*}{\textsc{Qwerty}} & \multirow{3}{*}{temporal DB} & semantic
 rules & grammar tailored\\\cline{3-3}
  & & semantics of temp- & to DB schema\\
  & & oral PPs & \\\hline
\multirow{3}{*}{\textsc{Irus}} & \multirow{3}{*}{hierarchical DB} & domain-dep.
dictionary & domain knowledge \\\cline{3-3}
 & & interpretation rules & to generate MRL\\\cline{3-3}
 & & linguistic resources & \\\hline
\multirow{5}{*}{\textsc{Precise}} & \multirow{5}{*}{relational DB} &
\multirow{2}{*}{lexicon} & prevalence of trac-\\
 & & & table questions\\\cline{3-4}
 & & \multirow{3}{*}{semantic rules} & \ac{SQL}-specific\\\cline{4-4}
 & & & lack of user-feed-\\
 & & & back\\\hline
\multirow{3}*{\textsc{Panto}} & \multirow{3}*{triple store} & dom. indep.
lexicon & parser limitation \\\cline{3-4}
 & & \multirow{2}{*}{authoring tool} & \ac{SPARQL} express-\\
 & & & iveness\\\hline
\end{tabular}
\caption{Systems belonging to the \emph{complex question translation} range of
approaches. `D' stands for ``data structure'', `SK' for ``semantic knowledge''
and `S' for ``shortcomings''.}
\label{tab:sota-complex-question-translation}
\end{table}


\subsection{\textsc{Chat-80}~\cite{Warren:1982:EEA:972942.972944}}
This system is the ancestor of many future
interfaces~\cite{DBLP:journals/corr/cmp-lg-9503016} like
\textsc{Masque}~\cite{Androutsopoulos93masque}.
The database is composed of facts about world geography
(facts about ocans, seas, rivers, cities and relations. The database itself is
implemented as ordinary Prolog.
Questions are expressed in a subset of English. The subset of
English qustions is a formal but user-friendly
language~\cite{Warren:1982:EEA:972942.972944}.
The main difference with its predecessor (\textsc{Lunar}) is that much effort has
been spent on increasing the linguistic coverage. 
In particular, the system translates English determiners (`a', `the',
`some', `all', `every') and negation and focus on linguistic
phenomena, like noun attachment and transformational aspects (that cannot
be covered by context-free grammars). 

\subsubsection{Portability}
The authors claim that the system is adaptable to other
application~\cite{Warren:1982:EEA:972942.972944}. In particular, it is
composed of a small vocabulary of about 100 domains (excluding proper nouns),
and a small domain-independant knowledge base of about 50 words.

\subsubsection{Lexicons}
The system is composed of a small vocabulary of
English words that are related to the database domain, plus a dictionary of
about 50 domain-independent words.
These lexicons consist in rules in the Extraposition grammars
formalism~\cite{Pereira:1981:EG:972911.972914}; those rules are processed by
Prolog and output Prolog clauses.
In addition to these two lexicons used to parse the question, a dictionary is
made up of semantic rules in the form of templates that define how a word
associated to a predicate must be also associated with its arguments.


\subsubsection{Question translation steps}
\paragraph{Parsing}
The parser analyzes the syntactic categories of words and determiners
(domain-independant lexicon) plus nouns and verbs which are database-related
elements (domain-dependant lexicon). 
Proper nouns are represented by logical constants while most verbs, nouns and
adjectives are represented as predicates with one or more constants.

\paragraph{Interpretation}
The output of the parser is interpreted by filling the predicates identified in
the previous step. This is performed using a set of templates that are part of
the system's initial configuration.


\paragraph{Scoping}
This step consists in defining the scope of determiners and some operators (for
instance the operator that counts items). 

\paragraph{Planning}
The output of the previous step is a logical expression. 
However, to avoid combinatory explosion when executing the Prolog query, some
strategies to optimize the query have been implemented: reordering the
predications in the Prolog query; putting braces arround independent
subproblems to avoid too many backtracking procedures


\subsubsection{Limitations}
Constraints in \ac{NL} are not convered (for instance ``Which ocean\ldots''
presuppose there is only one right answer).


\paragraph{Query execution}
Even relatively complex queries are answered in less than
one second~\cite{Warren:1982:EEA:972942.972944}.
The Prolog expression is executed to retrieve the
answer.
Authors note however, that the answering process (i.e. query execution) is the
limiting factor (while modern systems are limited by the question analysis task). 







\subsection{\textsc{Qwerty}~\cite{Nelken:2000:QTD:992730.992808}}
The specificity of this system is that it is intended to interface temporal
databases. This system has thus an increased linguistic coverage and a better
temporal expressivity. 
Input questions are expressed in controlled NL.
The grammar used in the system takes into account some aspects of
temporality of NL such as tenses and temporal PPs that modify sentences.
The system produces queries in \ac{SQL}/Temporal query language, which is dedicated
to temporal databases.

\subsubsection{Portability}
The grammar used for parsing questions and translating it
into the formal language is specifically designed for use with a particular
database schema. Thus, this system cannot be considered as a portable system.

\subsubsection{Question translation steps}
\paragraph{Semantic parsing}
The \ac{NL} question is parsed using the Type Grammar framework. While parsing, the
question is being trasformed into a logical representation called $L_{Allen}$.
This formal language is based on interval operators. 
The translation bases on a linguistic theory, that semantics of sentences is
modified by temporal preposition phrases (PPs). 
In this work, PPs are considered as variants of standard generalized
quantifiers, where the quantification is over time. 
The temporality in \ac{NL} questions can be explicit (like `When', `Which year')
or implicit (``Did Mary work in marketing?''). 
The quantification also allows iterations of PPs (``every year until 1992''). 
In addition to temporal quantifiers, the system recognized quantification over
individuals (``some employees''), coordination and negation.
The semantic mapping to logical expression is performed basing on a bottom-up
approach, simultineously with the parsing. 



\paragraph{Query translation}
The logic expression is translated in \ac{SQL}/Temporal, the database query language
dedicated to temporal databases. 
Some logical query produce infinite \ac{SQL}/Temporal queries. Some heuristics have
been implemented to prevent such behaviour. 


\paragraph{Query execution}
The \ac{SQL}/Temporal query is finally evaluated by the database engine to produce
the answers. 


\subsubsection{Shortcomings}
The translation of \ac{NL} questions in Temporal/SQL is performed using a
grammar which is tailored to the schema of the database. Therefore this system
cannot be used for other databases.

\subsection{\textsc{Irus}~\cite{Bates:1983:IRU:511793.511804}}
The \textsc{Irus} system processes the question independantly from the underlying
domain and \ac{DBMS}, which is a big change with respect to previous systems.
The meaning formalism used to represent internally the query is the same as in
the \textsc{Lunar} system (MRL, namely meaning representation language) but its
expression is domain and \ac{DBMS}-independent.
Besides, \textsc{Irus} analyses linguistically the question and integrates
state-of-the-art \ac{NL} processing components, such as anaphora and ellipsis
resolution.

\subsubsection{Internal query representation}
The internal meaning representation language is a descendant of that of {\textsc
Lunar}
The language has the following general form~\cite{Bates:1983:IRU:511793.511804}:
\begin{verbatim}
(FOR <quant> X / <class> : (p\ X) ; (q\ X))
\end{verbatim}
where \verb?quant? is a quantifier such as \verb?EVERY?, \verb?SOME?,
\verb?THREE?, \verb?HALF?, etc., \verb?X? is the variable of quantification,
\verb?<class>? is the class of quantification of \verb?X?, \verb?(P\ X)? is a
predicate that restricts the domain of quantification and \verb?(q\ X)? is an
expression being quantified, or an action such as \verb?PRINT\ Y?.


\subsubsection{Question translation steps}
\paragraph{Syntactic parsing}
The syntactic parsing of \ac{NL} questions is performed using the ATN
grammar formalism.
The authors claim that the syntactic parser can benefit from semantic mapping as
well in the syntactic parsing, both evolving in a \emph{cascaded system}.
The output of the syntactic parsing is a partse tree, where nodes correspond to
syntactic information about question words and phrases

\paragraph{Semantic mapping}
The semantic mapping is done in interaction with the syntactic parser.
It requires a domain lexicon, which defines the semantics of the words and
expressions used in users' queries.
The main subtasks involve disambiguation (pronouns and other anaphoric
expression resolution, ellipsis resolution, references resolution through
discource information).

\paragraph{Query execution}
The meaning representation language can be used to interface any database
system, at the condition that there is a component responsible for the
translation from the internal language to the target query language.

\subsubsection{Shortcomings}
The system is said \emph{transportable} -- portable to new application domains
and can interface any \ac{DBMS} -- but a new domain-specific knowledge base must
be provided. Authors propose as next step an authoring tool where this knowledge
can be written by expert users.


\subsection{\textsc{Precise}~\cite{Popescu:2003:TTN:604045.604070,Popescu:2004:MNL:1220355.1220376}}
\textsc{Precise} maps questions expressed in natural language to \ac{SQL} queries.
The interesting approach in this system, is the introduction of
\emph{semantically tractable questions}.
This class of questions is guaranteed to be mapped to the correct \ac{SQL} query.
This reduces then the classic gap between the query space and the data space.

\subsubsection{Internal query representation}
The internal query representation is different from the one of systems presented
so far.
The system builds an \emph{attribute-value graph} that maps words of the
question to database elements, and a \emph{relation graph} that maps
\emph{relation tokens} (i.e. some words of the question) to the names of
relations belonging to the database. Both graphs are eventually used to
generate the \ac{SQL} query.

\subsubsection{Lexicon}
The lexicon defines the mapping between tokens and database elements. It is
composed of 1) the tokens; 2) the database elements; 3) the binary relations
that bind both tokens and database elements. 

\subsubsection{Question translation steps}
\paragraph{Syntactic parsing} A lexicon is composed of elements that have been
automatically extracted, and is used to perform the matching with question
tokens. \ac{NL} processing task are: tokenizing the question into words,
categorizing those tokens into \emph{syntactic markers} (i.e. empty words) and
\emph{tokens} which are words that can be potentially associated with database
elements.
In addition to the lexicon, the system is composed of a parser which
is implemented as a plug-in. The parser can thus be changed for experimentation
purposes.
Authors have experimented the system with a syntactic dependency parser which
outputs a graph (namely an \emph{attribute/value graph}) composed of paths
linking database elements together. These paths will then be composed together
based on aggregation and combination of foreign keys in the case of relational
databases.
The lexicon is also composed of a set of restrictions corresponding to
prepositions and verbs. These restrictions define the join paths connecting
relation and attributes. The set of those restrictions defines the semantic part
of the lexicon.
A component is also responsible for classifying questions into \emph{tractable}
and \emph{not-tractable} questions. This is done linking words of the question
with a set of \emph{compatible} elements of the database.
\textsc{Precise} also implements strategies for correcting syntactic parsing
errors (namely \emph{semantic over-rides}) based on the semantics defined in
the lexicon.

\paragraph{Semantic mapping}
the interpretation in terms of structured query consists in choosing a path
between database elements. This choice is a constraint satisfying problem
defined in the parse tree.
Paths generate \ac{SQL} fragments that are then aggregated. 


\subsubsection{Feedback component}
In the case where there is no possible interpretation (even trying to correct
possible syntactic errors) the system asks the user to rephrase the question.

\subsubsection{Shortcomings}
The main contribution of the system is the ability of predicting whether a
question can be answered (\emph{tractable questions}) or not.
Experiments have been lead in a few domains (geography, restaurants and jobs)
but there is no proof that tractable questions are prevalant \emph{in general}.
Besides, the semantic knowledge component contains semantic rules that define
how to translate two graphs (namely \emph{attribute-value graph} and
\emph{relation graph}) to the target query in \ac{SQL}. There is no claim
regarding the cost of changing the target query language (here SQL).
Moreover, authors conclude on the user-feedback component, and suggest that
users should get better information on why answering a question had failed.

 
\subsection{\textsc{Panto}~\cite{Wang:2007:PPN:1419662.1419706}}
\textsc{Panto} generates \ac{SPARQL} queries, that can be executed
to get answers to the information need expressed through a question in NL.
The data are organized in a knowledge base, more specifically
an ontology (\ac{RDF} or
OWL\footnote{See~\url{http://www.w3.org/TR/owl-features/}.} formalism).
The most interesting aspect in \textsc{Panto} is that its most important
component (the linguistic component, i.e. the parser) is implemented as a
plug-in component, that can be easily replaced by an other one. This
permits thus to benefit from the improvements in terms of linguistic coverage,
when integrating state-of-the-art parsers.  

\subsubsection{Portablity}
``\textsc{Panto} is designed to be ontology-portable''. To
ensure portability, \textsc{Panto} comprises a domain-independant component.
This component is a lexicon composed of
WordNet\footnote{See~\url{http://wordnet.princeton.edu}.} entries.
For portability purposes, users can collaborate and improve the
domain-dependent component (the domain ontology) defining their own synonyms to
be added in the lexicon which bases the question parsing step.

\subsubsection{Internal query representation}
The internal query representation is a graph representation called \emph{query
triples}. 
It is a representation of the parse tree, that is then mapped to database
queries (i.e. \ac{RDF} triples) thanks to the lexicon.

\subsubsection{Question translation steps}
\paragraph{Syntactic parsing}
words from the \ac{NL} query are first mapped to
entities (concepts, instances, relations) of the ontology. This corresponds
to the ``entity recognition'' task, and several tools are used, such as
WordNet and string metrics algorithms.
Then, a syntactic parser is used to recognize nominal phrases in the
question. Those phrases are represented by pairs in the
parse treee.
In addition, some NLP tasks are integrated in this step, such as the
recognition of negation.

\paragraph{Semantic mapping}
Pairs of nominal phrases from the parse tree
are associated to triples in the sense of the ontology, and composed of
entities of the domain ontology.
This association is performed using the domain knowledge (the database).
Besides, two additonal components are involved in this step: the question
target identifier which identifies the target in the parse tree, and a
component responsible for the recognition of solution modifier in the sense
of \ac{SPARQL} (e.g. commands \verb?FILTER? or \verb?UNION?). Those components
basically base on rules triggered by the recognition of some words (e.g.
\emph{wh}-words for the former component).
The triples mentionned before along with the target and
modifier information constitute the internal representation of the query.

\paragraph{Query execution}
The internal representation of the query
mentionned above is interpreted into \ac{SPARQL} statements, that can be then
executed to retrieve requested facts.
The target item is used in this step to decide what to
put after the \verb?SELECT? command in the generated \ac{SPARQL} statement.
Query post-processing procedures are triggered, for example basing on the
negation recognized in the parsing step. The negation is usually translated
in the \verb?FILTER? \ac{SPARQL} clause to specificy the set of triples that
must not appear in the result of the execution of the \ac{SPARQL} statement.








% \subsection{\textsc{Exact}~\cite{Yates:2003:RNL:604045.604075}}
% \textsc{Exact} is a successor of \textsc{Precise}. It maps questions to \ac{SQL} query.
% This query is then transformed to create a PDDL goal, which is then used to
% generate a set of appliance commands to satisfy user's request.
% In the end, the system looks like other \ac{NL} interfacesDB, but has additional steps in
% addition to perform the generation of goals for appliance devices. 
% Besides, generated \ac{SQL} statements are not only ``SELECT'' ones but also
% ``UPDATE'' once; which means that the interface may change the state of the
% database (u\ac{NL} interfaceske other classic interfaces).
% 
% \subsubsection{Predictability}
% Guarantees of soundness of \ac{SQL} interpretations (semantically tractable
% questions).
% In particular, different classes of questions are not handled the same way.
% In the case of \emph{complex} questions, e.g. questions that must be decomposed
% in several ones, user is not informed of this decomposition. 
% In the case of questions that cannot be answered (impossible requests), the
% system informs explicitely the user that the request has been understook, but
% that it cannot be answered. In the case of appliances, it might be because the
% physical device does not permit to perform the desired goal; in the case of
% structured data it might be that the desired data is not present (null value) or
% is not modelled in the data model.
% 
% 
% \subsubsection{Portability}
% 
% 
% 
% 
% \subsubsection{Internal query representation}
% 
% 
% \subsubsection{Query modification}
% Bofore becoming a goal, \ac{SQL} queries must be modified. 
% 
% 
% \subsubsection{Query execution}
% U\ac{NL} interfaceske other \ac{NL} interfaces, all the generated database queries are not executed
% as is. Some of them are first translated into a goal, then into a plan. The
% plan is eventually send to devices which then inform the database about the fact
% that the devices' states have changed. 












%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%  FEEDBACK-DRIVEN APPROACHES         %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Feedback-driven approaches}
\label{sec:feedback-driven}

The systems presented in the previous section still require intensive
configuration efforts, and thus cannot be considered as \emph{portable}.
As a result, several systems have arisen where semantic grammars are created
automatically on the basis of user interaction. We distinguishes between
\emph{configuration} (authoring tools) and \emph{knowledge acquisition} through
user interaction when the system is being used.

%{\textbf What about explanatory questions?} (see Koutrika et
%al.~\cite{DBLP:conf/icde/KoutrikaSI10}).

Table~\vref{tab:sota-feedback-driven} sums up the different systems that belong
to this class of approaches.
\begin{table}
\centering
\begin{tabular}{llll}\hline
\multicolumn{1}{c}{\textbf{System}} & \multicolumn{1}{c}{\textbf{D}} &
\multicolumn{1}{c}{\textbf{SK}} & \multicolumn{1}{c}{\textbf{S}}\\\hline\hline
\multirow{4}*{\textsc{Team}} & \multirow{4}*{\emph{any}} & domain-dep. lexicon &
NLP limitations \\\cline{3-4}
 & & semantic rules & no proper eval. \\\cline{3-4}
 & & \multirow{2}*{authoring tool} & missing discourse \\
 & & & knowledge\\\hline
\textsc{Masque/-} & Prolog & lexicon (logic pred.) & exec. time \\\cline{3-4}
\textsc{SQL} & relational DB & authoring tool & use-feedback\\\hline
\textsc{NaLIX} & XML DB & lexicon & NLP limitations\\\hline
\multirow{3}*{\textsc{DaNaLIX}} & \multirow{3}*{XML DB} & lexicon
& no evaluation \\\cline{3-4}
 & & dom.-indep. knowledge & \multirow{2}*{XQuery only} \\\cline{3-3}
 & & user interaction & \\\hline
\multirow{3}*{\textsc{C-Phrase}} & \multirow{3}*{relational DB} & lexicon &
\multirow{3}*{basic evaluation} \\\cline{3-3}
 & & semantic rules & \\\cline{3-3}
 & & sentence patterns & \\\hline
\multirow{4}*{\textsc{Orakel}} & \multirow{4}*{triple store} & dom.-indep.
lexicon & \multirow{2}*{ling. assumptions}\\\cline{3-3}
 & & dom.-dep. lexicon &  \\\cline{3-4}
 & & semantic rules & labels in DB are\\\cline{3-3}
 & & authoring tool & concepts/instance names\\\hline
\end{tabular}
\caption{Systems belonging to the \emph{feedback-driven} range of
approaches. `D' stands for ``data structure'', `SK' for ``semantic knowledge''
and `S' for ``shortcomings''.}
\label{tab:sota-feedback-driven}
\end{table}


\subsection{\textsc{Team}~\cite{Grosz:1987:TED:25672.25674}}
The data are structured in a database about geographic facts like the largest
cities in the world, the population of each country etc.
The \textsc{Team} system is intended to be used with two kinds of users: standard
users and database experts, who engages dialogue to provide needed information
to port the system to other application domains. 
This system belongs thus to \emph{configurable} systems, where the required
knowledge to port the system is provided by the expert user, interacting with a
dedicated authoring tool.


\subsubsection{Internal query representation}
The meaning of users' queries are internally represented in formal logic. 

\subsubsection{Lexicon}
A lexicon is used to map words and expressions of \ac{NL} to their meaning in terms
of database elements. 
Close classes of words are supposed to be domain-independent and to have a fix
meaning.
Open classes of words, however, have a much more important frequency in users'
queries, and their meaning is supposed to be domain-independent. 
The \emph{meaning} is composed of both syntactic and semantic information.
Entries for nouns are the instances to which their refer, or a class (or
concept) in a type hierarchy; entries for adjectives and verbs correspond to
the possible predicate and how to find arguments of the predicate in the NL
question.

\subsubsection{Database schema}
In addition to the lexicon, a resource about how logical forms can be translated
in terms of database elements must be available. 
This resource expresses for instance the link between predicates (that appear in the
logical forms) and database relations and attributes or the definition of the
class hierarchy in terms of database relations or fields.




\subsubsection{Portability}
Portability is performed in using the system in a different mode
(\emph{knowledge acquisition}). In this mode, the database/domain expert informs
the system on how data are organized in the database, what are the database
elements and what words and expressions from \ac{NL} are used for those
elements.
The acquisition consists in a tool, where the expert must answer questions; the
answers of those questions will impact the resources like the lexicon and the
database schema, that are both used to interface the database. 




\subsubsection{Question translation steps}
The system is divided into two sub-systems: \textsc{Dialogic} that maps \ac{NL}
questions to formal expressions and a schema translator that translates formal logical
queries in database queries. 
\paragraph{Syntactic parsing}
The parsing of the \ac{NL} question is performed using an augmented-phrase structure
grammar. The parser produces possibly several parse trees for one question; then
one of the parse trees is selected basing on syntactic heuristics. 


\paragraph{Semantic mapping}
Several processings are responsible for resolving some domain-specific
ambiguities like noun-noun combinations and vague predicates like `have' or
`of'~\cite{Grosz:1987:TED:25672.25674}. Finally, a quantifier determination
process is triggered.
In the end, a logical form of the question is identified. 



\paragraph{Query generation}
The logical form is then translated in the database query. This translation
bases on the conceptual schema as well as the database schema (which defines the
structure of the database) to perform the translation. 


\subsubsection{Shortcomings}
\textsc{Team} relies on some \ac{NLP} modules, and therefore the overall performance 
of the system depends on the performances of these tasks.  
Moreover, the system is able to retrieve facts from the database, but not to aggregate 
these facts.


\subsection{\textsc{Masque/SQL}~\cite{Androutsopoulos93masque}}
\textsc{Masque} is a \ac{NL} interface to Prolog databases, and
\textsc{Masque/SQL} is an extension of it that supports \ac{SQL} query
language. The system is entirely written in Prolog.
The system is meant to be domain-portable. Users can indeed add new entities in
the lexicon through a domain editor.

\subsubsection{Lexicon}
The lexicon defines the semantic of words that are expected to appear
in users' questions. The meaning of words are described in logic predicate.
Possible argument types of predicates are organized using the hierarchical {\textit
is-a} relation.

\subsubsection{Internal query representation}
The internal query representation is a Prolog-like language called Logical Query
Language.
Question words are translated to predicates or predicate arguments. The types of
those arguments are described in the \emph{is-a} relation hierarchy

\subsubsection{Portability}
The system can be used with different domain databases, but
this requires to edit the specific knowledge in a dedicated editor. This
knowledge consists in entities linked with the hierarchical \emph{is-a}
relation. User also has to explicit links between words and corresponding logic
predicates; the entities in the taxonomy are used to restrict the possible
arguments of the predicates. Each predicate is also linked to the corresponding
\ac{SQL} statement.

\subsubsection{Question translation steps}
\paragraph{Syntactic parsing}
A dictionary consists of all English words that the
system understands and is used by an extraposition grammar to parse the
question.
\paragraph{Semantic mapping}
The dictionary composed of lexical units
also associates words to their meaning in the form of logic predicate.
The internal formal representation is a Prolog-like meaning
representation language (LQL).
\paragraph{Query execution}
The internal representation is translated to
SQL using an algorithm, and then \ac{SQL} is executed in the underlying DBMS.
The translation consists of rules triggered by the structure of the LQL
expression; each unit LQL expression is associated with a \ac{SQL} fragment; all
fragments are then combined to produce the final \ac{SQL} expression.
The system has also been experimented with Prolog as query language in
association with a Prolog database.

\subsubsection{Shortcomings}
Authors report two major limitations of the system. First, the execution time of
the system for any query is not less than 6 seconds.
Secondly, users are not properly informed of the reason why their queries have
failed (i.e. which step of the processing has failed).










\subsection{\textsc{NaLIX}~\cite{Li:2005:NIN:1066157.1066281} and {\textsc
DaNaLIX}~\cite{Li:2007:DDN:1247480.1247643}}
\textsc{NaLIX} is an interactive interface to XML databases for questions expressed
in natural language.

\subsubsection{Question history}
The system is composed of a query history that keeps all
successfully answered queries. Queries from this history are intended to be
used as templates for formulating new queries. This feature also permits users
better understand the linguistic coverage of the system.


\subsubsection{Internal query representation}
The internal query representation is the target database query, i.e.
XQuery\footnote{See~\url{http://www.w3.org/TR/xquery/}.}.

\subsubsection{Portability}
%% tell something about domain portability
There is no internal representation of the query (the NL
question is directly translated to an XQuery expression). Thus, the translation
is dependent on the \ac{DBMS} and the target query language (i.e. XQuery).
In addition to the portability feature of \textsc{NaLIX}, \textsc{DaNaLIX} takes
advantage of domain-dependent knowledge which can be automatically acquired
on the basis of user interaction. When ported to a new domain, the system starts
with a generic framework; then domain-dependent knowledge is being learned from
user interaction. 





\subsubsection{Question translation steps}
\paragraph{Syntactic parsing}
The parsing consists in identifying words and
phrases using the \textsc{Minipar} dependency parser (dependency
among words and not hierarchical constituents).
An other component is responsible for checking whether words and phrases
identified in the previous step can be mapped to directives (e.g.
\verb?return? or \verb?group by? clauses) in the target query language
(XQuery).
Each word and phrase that can match a directive is further typed, depending on
the kind of directive. The output of the parser is a tree with those roles as
labels of phrases of the initial query. The vocabulary mismatch is overcome using
WordNet. In \textsc{DaNaLIX}, an additional step consists in transforming the
parse tree using domain knowledge basing on relevant rules. 

\paragraph{Semantic mapping}
Each item from the parse tree is translated into a fragment of the target
query language (XQuery). This is done using a series of processings. Basically,
a set of rules define how to combine items of the parse tree to get XQuery
clauses. There are also further treatments like \emph{nesting} and
\emph{grouping} developped in~\cite{DBLP:conf/edbt/LiYJ06}.
In cases when the system does not understand how to map a given phrase to a
XQuery constituent, the system interacts with users and suggests potential
reformulations. \textsc{NaLIX} seems to be the first \ac{NL} interface that
introduces user interaction to select the correct parse interpretation.
\paragraph{Query execution}
The question is directly
translated into a XQuery expression, which is the data query language. The
XQuery expression is then executed to retrieve answers.
Answers are basically XML answers. Different visualization
of the answers are possible (text view for simple answers, hierarchical list
view or raw XML).
Besides answers to queries, the system implements an advanced error manager
that also supports users in rephrasing queries that were not correctly parsed
nor accurately translated.



\subsection{\textsc{C-Phrase}~\cite{Minock:2010:CSB:1715942.1716190}}
\textsc{C-Phrase} is a system that translates questions expressed in \ac{NL} in
queries for relational databases.
The system outputs expressions in tuple calculus -- closed to \ac{FOL} -- that
can be easily translated in a database query language like \ac{SQL}.

\subsubsection{Lexicon}
Semantic information are encoded in a lexicon. It maps tokens with syntactic
information (such as the head and the modifier in dependency analysis) and
semantic information (translation in $\lambda$-calculus).
It is represented as a set of rules that form the grammar of the parser.
An example of such a rule is:
\begin{center}
$\textnormal{HEAD}\rightarrow\langle\textnormal{``cities''},\lambda
x.\textnormal{City}(x)\rangle$
\end{center}
In addition, the system comprises a set of sentence patterns, in the form of
context-free rules. Such a rule is for instance:
\begin{center}
$\textnormal{QUERY}\rightarrow\langle\textnormal{``list the''}.\textnormal{NP},
\textnormal{answers}({x|\textnormal{NP}(x)})$
\end{center}
Authors say that they ``rarely'' see users who do not use already defined
sentence paterns.
This set of rules are automatically created by an authoring tool.
The tool explicitely asks for meaningful names of different database elements
(i.e. relations, attributes and join paths). 
It also permits to define new concepts in \ac{NL}. 



\subsubsection{Internal query representation}
The internal query reprsentation is $\lambda$-calculus.

\subsubsection{Question translation steps}
\paragraph{Semantic parsing}
The parsing bases on a context-free grammar, augmented with $\lambda$-calculus
expressions.
The framework ($\lambda$-SCFG) works with two parsing trees: one for
parsing syntactically the \ac{NL} sentence; the other one for expressing the
semantics of the first one in $\lambda$-calculus.
Input questions are first tokenized and normalized. 
Then, the sequence of tokens is analyzed to idenfity database elements, numeric
values, and proceed to some spelling corrections. 
The structure is then transformed in the form of tuple calculus queries. 
This is done based on a set of rules that map lexical and syntactic parses to
tuples. Each rule has a plausibility; in the end, the product of plausitilities
of all rules used in a parse is used to rank potential semantic interpretations. 
In case of ambiguity, the user is asked to rephrase the question, or to select
the best rephrasing proposition. 
\paragraph{Query generation}
The tuple query is converted in \ac{SQL}. 

\subsubsection{Shortcomings}
Authors of the \textsc{C-Phrase} system suggest that the system should be further 
evaluated.
Besides, they point out that bootstrapping the authoring tool is a tedious task.












\subsection{\textsc{Orakel}~\cite{Cimiano:2007:PNL:1216295.1216330}}
\textsc{Orakel}~\cite{Cimiano:2007:PNL:1216295.1216330}'s main feature is the
portability. This portability is based on the use of subcategorization frames,
which are linguistic structures composed of predicate and arguments.
The feedback consists in an authoring tool. This tool (which is
called \textsc{FrameMapper}) is used to generate the domain-specific knowledge
and is intended to be used by expert of the domain. 


\subsubsection{Role of lexicon}
The system is composed of three kinds of lexicons: 
\begin{itemize}
\item domain-independant lexicon defining determiners, \emph{wh}-pronouns and
spatio-temporal prepositions
\item domain-specific lexicon which defines the meaning of verbs, nouns and
adjectives
\item  ontological lexicon which is
automatically created from the data, and maps ontology instances and concepts
to proper nouns and standard nouns
\end{itemize}
Categories of the domain-independant lexicon are generic categories such as
those of a generic ontology like
DOLCE\footnote{See~\url{http://www.loa.istc.cnr.it/DOLCE.html}.}.


\subsubsection{Internal query representation}
The internal query representation is $\lambda$-calculus.
The question is represented in a language which is an extension of FOL (in
addition to FOL: quantifiers and operators).

\subsubsection{Customization process and portability}
\label{sec:orakel-portability}
Users can customize the system, i.e. create a domain-specific lexicon
which maps subcategorization frames (arity $n$) to an ontology relation (arity
$n$). This mapping is called a \emph{definition}.
The interesting thing is that binary relations where the range is an integer
-- for instance $height(mountain,int)$ -- can also be mapped to adjectives with
various degrees (base, comparative, superlative forms) and possibly
the positive or negative scale (as in \textsc{Team}).
This mapping is performed by users themselves through a front-end tool.

\subsubsection{Question translation steps}
\paragraph{Semantic mapping}
Both processes question parsing and semantic mapping are done in a single
process.
The parsing is based on the LTAG\footnote{Lexicalized Tree Adjoining Grammar,
see~\url{http://www.cis.upenn.edu/~xtag/tech-report/}.} linguistic
representation.
The parsing operates in a bottom-up fashion: each question word is associated
with an elementary tree. Then, all elementary trees are combined together to get
the syntactic parse tree of the entire sentence.
\paragraph{Query execution}
The \ac{FOL} internal query representation is then translated in the database
query language like \ac{SPARQL} (for ontologies) or the
language used for ontologies expressed in F-Logic.
This translation is peformed using a Prolog program.




\subsubsection{Shortcomings}
Supports only factoid questions (\emph{wh}-questions plus questions starting
with expressions like ``How many'' for counting) but not complex questions
like those starting with `why' or `how'.
Besides, there is a strong assumption that categories of the
domain-independant lexicon should be aligned with those of the database (the
domain ontology). The second one, is that the generation of the ontological
lexicon assumes that the labels used in the database correspond to
instance and concept names.
The front-end system used to improve the domain-specific knowledge allows to
increase the linguistic coverage and is a nice tool for porting the interface;
however it might not be user-friendly since the underlying concepts (like the
semantic frames) are complex.




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%  LEARNING-BASED  APPROACHES         %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Learning-based approaches}
\label{sec:leraning-based}

Learning-based approaches are approaches where machine learning algorithms are
the core of the translation mechanism. 
These algorithms aim at learning domain-specific knowledge (e.g. a lexicon).
This knowledge is used to parse the question and get clues on on how to
translate it to a database query. 


Table~\vref{tab:sota-learning-based} compares the different systems that belong
to this class.
The first system (\textsc{M. et al.} in short) is more theoretical, and
therefore has not been evaluated with any specific database implementation.
In the following, we further describe each of these systems.
\begin{table}
\centering
\begin{tabular}{llll}\hline
\multicolumn{1}{c}{\textbf{System}} & \multicolumn{1}{c}{\textbf{D}} &
\multicolumn{1}{c}{\textbf{SK}} & \multicolumn{1}{c}{\textbf{S}}\\\hline\hline
\textsc{M et al.} & \emph{any} & stat. models & no evaluation \\\hline
\multirow{2}*{\textsc{Z. et C.}} & \multirow{2}*{$\lambda$-claculus DB} &
dom.-dep. lexicon & \multirow{2}*{only $\lambda$-calculus} \\
 & & semantic rules & \\\hline
\multirow{2}*{\textsc{Wolfie}} & \multirow{2}*{Prolog DB} &
\multirow{2}*{machine-learning (lexicon)} & no eval.
for long
\\
 & & & phrases \\\hline
\end{tabular}
\caption{Systems that incorpore machine learning approaches in order to
interface structured data.
 `D' stands for ``data structure'',
`SK' for ``semantic knowledge'' and `S' for ``shortcomings''.}
\label{tab:sota-learning-based}
\end{table}


\subsection{Miller et al.~\cite{Miller:1996:FSA:981863.981871}}
The main caracteristics of this system is that it is fully statiscal.
It is composed of three components for parsing, semantic interpretation and
discourse resolution and are associated with corresponding statistic models.
Each component produces a set of ranked items, and the chosen interpretation of
the question is the best one of the final component.





\subsubsection{Question translation steps}
The different steps correspond to the different components of the system.
\paragraph{Syntactic parsing}The string of words $W$ is searched for
the $n$-best candidates of parse trees $T$ basing on the measure $P(T)P(W|T)$.
The parse tree contains syntactic as well as semantic information.
The statistical model is based on the recursive transition network
model\footnote{See~\url{http://www.informatics.sussex.ac.uk/research/groups/nlp/gazdar/nlp-in-pop11/ch03/chapter-03-sh-1.1.html}.}.
This model is more detailed in section~\vref{sec:statistical-model}.
The model is trained with a set of questions annotated with the corresponding
parse trees.

\paragraph{Semantic mapping}The semantic mapping step is composed of two
sub-steps: first, a model associates a \emph{pre-discourse meaning} to both a
parse tree and the corresponding string of words. Second, a \emph{post-discourse
meaning} is retrieved from the pre-discourse meaning, the string of caracters,
the parse tree and the history.
Both pre-discourse meaning and post-discourse meaning are represented using
semantic frames.
The construction of the frames is integrated in the parse tree. Then, the
statistical model is used to disambiguate the frames, for instance when there is
no information about the frame type or when there is no information about a slot
to fill.
The second phase (post-discourse meaning) corresponds to ellipsis resolution. It
takes the previous meaning (final meaning of previous questions) and the
pre-discourse meaning (of the current question), and finds the best meaning.
Previous meaning and pre-discourse meaning are represented as vectors, whose
elements correspond to slots in the frame meaning representation. The
statistical model applies different kinds of operations on those vectors
(\verb?INITIAL?, \verb?TACIT?, \verb?REITERATE?, \verb?CHANGE? and
\verb?IRRELEVANT?).
Those operations combine elemnts of both vectors to compute the vector which
corresponds to the meaning of the current question.
\paragraph{Query generation}
The paper does not expalin how is performed the query generation.





\subsection{Zettlemoyer et Collins~\cite{DBLP:conf/uai/ZettlemoyerC05}}
The system aims at translating \ac{NL} sentences to $\lambda$-calculus expressions.
The system is based on a lerning algorithm that needs a corpus of sentences
labelled with $\lambda$-calculus expressions. The system induces then a grammar.
The statistic model is a log-linear model.
The paper focus only on the generation of $\lambda$-calculus expression from
questions expressed in \ac{NL}.


\subsubsection{Portability}
The proposal has been tested for two application domains: a database of US
geography and a database of job listings.

\subsubsection{Internal query representation}
The query is internally represented in $\lambda$-calculus.

\subsubsection{Question translation}
The syntactic parsing is performed using a combinatory categorial grammar.
The nodes of the resulting parse tree are composed of both syntactic and
semantic information.
The grammar of the parse operates with a domain-specific lexicon, which maps
question words and expressions to a syntactic type as well as a semantic type.
Several funtional rules define how syntactic types can be associated

\subsubsection{Learning component}
The learning algorithm takes as input a training set composed of pairs
$(S_i,L_i)$ where $S_i$ is a string of words and $L_i$ a logical form. The
algorithm also takes as input a lexicon.
The learning algorithm will determine how a string of words will be parsed
to produce a parse tree, and what words from the lexicon are required to produce
such trees. The algorithm involves then learning the lexicon and learning
the distribution over parse trees for a given string of words.

\subsubsection{Shortcomings}
The experiments are based on databases rewritten in $\lambda$-calculus. It would
be interesting to implement an additional module that transforms
$\lambda$-calculus expressions to any database query language and measure the
overall performance. 











\subsection{\textsc{Wolfie}~\cite{Thompson:2003:AWM:1622420.1622421}}
\label{sec:wolfie}
\textsc{Wolfie} is a system that learns a lexicon that is used to translate NL
questions in database queries.

\subsubsection{Lexicon} 
The lexicon consists in a mapping from sentences to semantic representation
(logical database queries).
The mapping consists in two steps: first, phrases that compose the sentence 
are associated to database elements (or \emph{symbols} in the representation).
Second, it says how those intermediate representation must be combined to get
the final database query. 



\subsubsection{Learning}
Due to the nature of the lexicon, the number of possible interpretations of a
\ac{NL} sentence is huge (the problem is computationally intractable).
To cope with that, authors propose an active learning algorithm which selects 
only most relevant examples to be then annotated.
The learning algorithm bases on a greedy approach. The generation of candidate
meanings for each phrase consists in finding maximally common meaning for each
phrase.
But, the algorithm also aims at finding a general meaning for the whole sentence
and not only meanings for the phrases that are part of the sentence.
The evaluation corpus is composed of 250 questions on US geography paired with
Prolog queries.


\subsubsection{Shortcomings}
The system has not been evaluated with long phrases; the paper describes how it
works for phrases of maximum two words. Besides, ``the algorithm is not
guaranteed to learn a correct lexicon in even a noise-free corpus''. 














%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% DATA-DRIVEN APPROACHES              %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\section{Data-driven approaches}
%\label{sec:data-driven}

% \subsection{Han et al.~\cite{Han:2010:\ac{NL} interfaces:1719970.1720022}}
% A big advantage of this system, is that it does not distinguish the linguistic
% understanding of users' questions and the database coverage of the question
% (the fact that the question can actually be asnwered from the database).
% The system answers questions expressed in natural language, and the data are
% organized in an ontology. The database queries are expressed in \ac{SPARQL}.
% 
% \subsubsection{Portability}
% Underlying data are structured in an ontology, which
% permits easily to change the domain. Dictionary is generated automatically
% from the ontology.
% Porting the interface to other domains is easy using an authoring tool that
% permits users to specify the semantics of domain-specific words/expressions in
% terms of schema-level information in the ontology (what word or expression used
% in \ac{NL} questions is associated to which relations, concept or instance in the
% ontology?).
% 
% 
% 
% \subsubsection{Question translation step by step}
% Traditional interfaces map question words or expressions to database elements.
% The candidate paths that link those elements are then used to generate valid
% database queries.
% Here, the approach consists in exploring all possible database (or formal)
% queries from database elements.
% The cost of exploring all possible database queries is huge. To reduce the
% formal query space, authors consider templates of database queries.
% 
% \paragraph{Query template generation}
% Queries are generated at the schema level (not the instance level). Each
% relation in the ontology is used to generate a ``schema-level query group'',
% where the object and the predicate of the relation can be either a value or a
% variable (of the expected type of the resource in the ontology).
% The next step (preparation of the question) will permit to match the question to
% the query group.
% 
% \paragraph{Question parsing}
% \label{sec:han-question-parsing}
% The \ac{NL} question is first syntactically/semantically parsed in a similar fashion
% as in classic systems.
% The question is tokenized and is possibly annotated with semantic
% labels. Semantic information can be a concept or an instance (e.g. ``I:City''
% for the question word \emph{Berlin} or ``C:River'' for the question word {\textit
% river}). Such semantic annotation is performed using a lexicon (which can be
% built automatically from the ontology).
% 
% \paragraph{``Preparation'' of \ac{NL} questions}
% Each formal query template is associated with a set of \ac{NL} questions. This
% association is performed with the help of an authoring tool.
% Within this tool, users can type a natural language query corresponding to a
% schema-level query group. As a result, the tool generate the normalized query
% (the query parsed as presented in section~\vref{sec:han-question-parsing}). In
% case of ambiguity, users can select the correct normalized query, which is then
% saved in the knowledge base of normalized query associated to schema-level query
% groups.
% 
% \paragraph{Semantic mapping}
% user's question is compared with prepared queries. To perform this, the user's
% question is normalized, and the normalized question is being compared with
% all normalized queries. In the case where the question is composed of two
% predicates, two normalized queries must be retrieved.
% The matching is based on similarity measures, and the best association is
% retrieved.
% 
% 
% \subsubsection{Shortcomings}
% Preparing queries from natural language sentences has to be performed by a
% domain expert, and a technician aware of the underlying data structure
% (how the data are semantically encoded in the database: predicates, instances
% and concepts). We believe this limits somehow the portability of the system.









%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%  SCHEMA-UNAWARE APPROACHES          %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Schema-unaware approaches}
\label{sec:schema-unaware-details}

This section describes modern systems that are portable, because they operate
accross various domains simultaneously (they interface databases of various
domains).
Table~\vref{tab:sota-schema-unaware} sums up the different systems that are
surveyed in this section.
\begin{table}
\centering
\begin{tabular}{llll}\hline
\multicolumn{1}{c}{\textbf{System}} & \multicolumn{1}{c}{\textbf{D}} &
\multicolumn{1}{c}{\textbf{SK}} & \multicolumn{1}{c}{\textbf{S}}\\\hline\hline
\multirow{2}{*}{\textsc{PopwerAqua}} & \multirow{2}{*}{triple stores} &
\multirow{2}{*}{dom.-indep. lexicon} & data indexing\\
 & & & assumption\\\hline 
\multirow{4}{*}{\textsc{DeepQA}} & \multirow{2}{*}{semantic data} & dom.-indep.
lexicon & \multirow{2}{*}{primarily designed for}\\
 &  & dom.-dep. lexicon & \\
  & \multirow{2}{*}{sources} & syntactic frames & \multirow{2}{*}{unstructured
  documents}\\
  & & semantic frames & \\\hline
\end{tabular}
\caption{Systems belonging to the \emph{schema-unaware} range of
approaches. `D' stands for ``data structure'', `SK' for ``semantic knowledge''
and `S' for ``shortcomings''.}
\label{tab:sota-schema-unaware}
\end{table}



\subsection{\textsc{PowerAqua}~\cite{DBLP:conf/esws/LopezMU06}}
\textsc{PowerAqua} takes as input several semantic sources (heterogeneous
ontologies) and a question (expressed in \ac{NL}).
This system is an improvement of the system {\textsc
Aqualog}~\cite{Garcia:2006:AOQ:1225785.1225790}. The latter system was able to
answer questions related to one domain ontology only (which is a problem in the
context of the \ac{SW} composed of many heterogeneous ontologies).
The geatest contribution of \textsc{PowerAqua} is the method for mapping user
terminology with ontology terminology (and the system is unaware of the data
structure of the semantic sources).

\subsubsection{Domain portability}
The portability cost is negligible.
Indeed, the system has a learning component responsible for the acquisition of
domain-specific lexicon which maps users' relations (expressed in \ac{NL}) to
knowledge represented in the domain ontology.

\subsubsection{Internal query representation}
The internal query representation is called \emph{query triples}. It is triples
like \ac{RDF} triples used in the data (ontology), but these triples are based on
terms of the query.

\subsubsection{Lexicon}
\textsc{PowerAqua} as well as \textsc{Aqualog} use domain-independant lexicon
WordNet. 

\subsubsection{Question translation steps}
\paragraph{Syntactic parsing}
Users' questions are first parsed syntactically, and questions are categorized
(for instance on the basis of the \emph{wh}-question word). Both the parse tree
and the category of the question are used to generate the internal query
representation, query triples.
The sense of words used in the question are disambiguated using word sense
disambiguation algorithms. 
Those query triples are composed of question words, and may be then modified so
that they are \emph{compatible} with the database.
The linguistic component of \textsc{PowerAqua} also consists in algorithms that
make decisions, for instance for correctly interpreting the conjunctions and
disjunctions terms (`and'/`or'). The question may also contain constraints, that
are sub-questions to be anewered prior the actual question. 
Furthermore, the system provides a way to treat two instances (from two
semantic sources) as equivalent, and thus ease the answering process (in
particular in cases where more than one semantic source are required to answer
the question). 
The mapping from users' terms to database elements is done using a metrics
similar to the edition distance.


\paragraph{Semantic mapping}
the semantic mapping consists in processing query triples to retrieve the
ontology triples that are associated to the query triples. 
First, the algorithm tries to filter the sources (the ontologies) in order to
only consider those that contain all or most of the \emph{query triples}. 
Query triples must then be filtered to take into account the potential high
number of triples generated in the previous step, and the fact that a question
can involve several different semantic sources.
The mappings established in the previous step (query terms and ontology terms)
are ranked basing on a sense disambiguation algorithm that uses WordNet and the
\emph{is-a} taxonomy. In this step, the equivalence between two terms is
computed based on the label of the term (edition distance) but also on the
position in the taxonomy (ancestors and descendants). 


\paragraph{Query generation}
Potential ontology terms (database elements) must be processed to generate the
final ontology query. 
Terms identified in different relevant ontologies must be used to generate
triples (first sub-step); these triples must then be linked, these triples
belonging or not to the same ontology (second sub-step). 
In this step, relations will be created from terms identified in the previous
step. A relation (in the query triple) does not always correspond to a relation
(expressed in ontology triple): sometimes, a new triples must be generated. 






\subsubsection{Limitation}
Authors make the assumption that the \ac{SW} provides an indexing mechanism.







\subsection{\textsc{DeepQA}~\cite{WATSON}}
The \textsc{DeepQA} project is part of the well-known \textsc{Watson} system. 
\textsc{Watson} has been the major event in the Q\&A community in 2010. Indeed,
it is the first artificial system which has won the american \emph{Jeopardy!} quizz.
%We classify the underlying \textsc{DeepQA} system in machine learning-based
%approaches. The system involves massive parallelism approach to get multiple
%interpretations and hypothesis. In the end, statistical models are used to
%merge and rank hypothesis and evidences. 
\textsc{Watson} queries a wide range of different sources and aggregates the
different results to produce answers. 


\subsubsection{Question}
\textsc{Watson} has been designed for \emph{Jeopardy!} where questions are not
standard questions, but \emph{clues} that should help understanding what the
question is about. For instance, in a classic Q\&A process a question would be
``What drug has been shown to relieve the symptoms of ADD with relatively few
side effects?''. In \textit{Jeopardy!} the corresponding clue would be ``This
drug has been shown to relieve the symptoms of ADD with relatively few side
effects''.
The expected response would then be ``What is the
Ritalin?''~\cite{FerrucciBCFGKLMNPSW10}.


\subsubsection{Answers}
\label{sec:jeoparty-answers}
Another particularity of \textit{Jeopardy!} is that an answer are not simply a
phrase corresponding to what the clues are about, but the answer must be the
expected question corresponding to the clues. For instance, a valid response
can be ``Who is Ulysses S. Grant?'' but not ``Ulysses S.
Grant''~\cite{FerrucciBCFGKLMNPSW10}. 



\subsubsection{Data}
The data (sources) used by \textsc{Watson} are mostly unstructured documents as
in most Q\&A systems. \textsc{DeepQA} however also leverages databases and
ontologies such as
\textsc{DBpedia}\footnote{See~\url{http://dbpedia.org/About}.} and the {\textsc
Yago}\footnote{See \url{http://www.mpi-inf.mpg.de/yago-naga/yago/}.} ontology;
this is why we consider this system in this survey.



\subsubsection{Question translation}
Question translation for databases involves different steps that are detailed
below:

\paragraph{Question analysis}
\label{sec:jeoparty-question-analysis}
The parsing of the question involves many tasks: shallow parses;
deep parses; logical forms; semantic role labelling; coreference resolution;
relations (attachment relation and semantic relations); named entities; etc.
Semantic relation detection is one of those tasks. 
For instance for the question ``They're the two states you could be reentering
if you're crossing Florida's northern border'', the relation would be
\verb+borders(Florida, ?x, north)+.

\paragraph{Hypothesis generation}
The module responsible for this step takes as input the analysis parses and
generates candidate answers for every system sources. Those candidate answers
are considered as hypothesis to be then proved basing on some degree of
confidence.
To produce candidate answers, a large variety of search techniques are used.
Most of them concern textual search, but some consist in searching knowledge
bases, more specifically triple stores. 
The different search techniques lead to the generation of multiple search
queries for a single question; then the result list is modified to take into
account constraints identified in the question. 
The search is based on named entities identified in the clue. If a semantic
relation has been identified in the question analysis
step, a more specific \ac{SPARQL} query can be performed on the triple store. 
In this step, recall is preferred as precision, leading to the generation of
hundreds of candidate answers to be then ranked according to the confidence of
each candidate.




\paragraph{Soft filtering}
Candidate answers are not directly scored and ranked, since those algorithms 
require lots of resources. 
Instead, the hundreads of candidate answers corresponding to a single question
are first pruned, to produce a subset of initial candidate answers. The involved
algorithms are lightweight in the sense that they do not require intensive
resources (soft filtering).
The candidate that do not pass the soft filtering threshold are routed directly
to the final merging stage. 
The model used to filter the candidate in this step as well as the threshold are
determined on the basis of machine learning over training data. 


\paragraph{Hypothesis and evidence scoring}
Candidate answers that successfully pass the previous step are scored again in
this step. This involves a wide range of scoring analysis to support evidences
for candidate answers.
First, evidences for the candidate answers are being retrieved. In the context
of triple stores, those evidences are triples related to entities and semantic
relations identified in the question. Those evidences are then scored, to
measure the degree of certainty of these evidences. 
For instance the score is determined on the basis of subsumption, geospacial and
temporal reasoning.  
The \textsc{DeepQA} framework suports a wide range of scorers
(components) that provide scores for evidences with respect to a candidate
answer.
The evidences from different types of sources can also be combined (for
instance from unstructured content and from triple stores) using a wide range
of metrics. 
In the end, scores of evidences are combined into an \emph{evidence profile}.
This profile ``groups features into aggragate evidence dimensions that provide
an intuitive view of the feature view''. 

\paragraph{Final merging and ranking}
This step aims at ranking and merging ``the hundreds of hypotheses based on
potentially hundreds of thousands of scores to identify the single
best-supported hypthesis given the evidence and to estimate its confidence'' (or
likelyhood).

\paragraph{Answer merging}
To cope with candidate answers that might be equivalent (e.g. leading to
the same answer) but with different surface forms, authors propose to first find
similar candidate answers and to merge them in a single candidate answer. 

\paragraph{Final ranking}
After merging candidate answers, they must be ranked based on merged scores.
This is done using a machine-learning approach. This approach requires a
training dataset of questions with known answers with appropriate scores. 
In this step ranking scores and confidence estimation (estimation of the
likelihood of a given candidate answer) must be computed in two separated
processes in intelligent systems. 
For both processes, the set of scores can be grouped according to the domain and
intermediate models specifically used for the task. The output of these
intermediate models is a set of intermediate scores.
Then, a \emph{metalearner} is trained over this ensemble of scores.
This approach allows for iteratively improving the system with more complex
models, and adding new scorers. 
The metalearner uses multiple trained models to handle different question
classes (for instance, scores for factoid questions might not be appropriate
for puzzle questions). 


\subsubsection{Drawbacks}
\textsc{Watson} has been primarly designed for unstructured content: ``Watson's
current ability to effectively use curated databases to simply `look up' the
answers is limited to fewer than 2 percent of the clues''. 




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%  CHALLENGES                         %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Challenges}
\label{sec:challenges}
According to Hearst~\cite{Hearst:2011:NSU:2018396.2018414}, challenges of modern
NL interfaces to structured data are twofold:
\begin{itemize}
  \item speech input (dialog-like interaction: Siri). This means that the future
  systems must be able to understand ill-formed sentences and mispelled words.
  \item social search (collaboration, asking people, crowdsourcing). This could
  also comprise systems that broadcast the question to other systems that
  are expert in some fields, and then aggregate respective answers.
\end{itemize}
The next generation of interfaces will not focus on the user (personalized
systems) but on non-textual information through non-textual
input~\cite{Hearst:2011:NSU:2018396.2018414}. 
%
A promising research direction is thus to analyze non-textual documents
(pictures, videos, voice records, etc.) and try to find \emph{clues} based on
textual (or non-textual) formulations of any user's information need.


Going back to \ac{BI}, nowadays' challenges are to make business tools more
user-friendly in terms of user experience.
In the last few years, the major challenge was to prevent users from writing
structured queries (e.g. in \ac{SQL} or \ac{MDX}). Now, they do not want to
spend to much time: 1- learning how to use their business tools; 2- how to adapt
the existing tools to new infrastructures (e.g. DBMS) or domain.
The latter challenge meets the ones of the so-called \emph{gamification} domain
which is a research area that has become popular in \ac{BI}.




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%  CONCLUSION                         %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion}
\label{sec:conclusion}
We have reviewed most significant systems that belong to \ac{NL} interfaces. 
These systems are however only a tiny subset of all systems that interface
structured data, or of Q\&A systems.
For instance, a popular domain of research nowadays is the search over
structured data on the basis of keywords (instead of natural language inputs).
Besides, we tried to focus on recent systems, since Androutsopoulos et
al.~\cite{DBLP:journals/corr/cmp-lg-9503016} surveyed systems in the mid 90's.

We proposed a rough taxonomy of approaches, namely \emph{classic translation
approaches} and \emph{iterative translation approaches} that are then refined in
the following classification of approaches: approaches basing on
domain-dependant semantic parsing; approaches performing complex question
translation; feedback-driven approaches; learning-based approaches and
schema-unaware approaches.
This classification is motivated by the various methods employed by the systems
of these different classes. 
Early years of NL interfaces focused on specifics of some DBMS in order to
generate valid database queries. Later on, while standards in database query
languages (e.g. \ac{SQL}) have appeared, the focus was more on increasing the
linguistic coverage of the respective systems. Then, in parallel with the
success of semantic technologies, the need has appeared to interface various
databases of possibly different domains: the respective systems are not aware
anymore on how data have been modeled, and what are the terms in use in the
particular domain. This knowledge has then to be learnt automatically.

A learning from this survey, is that the performance of most systems (or maybe
of all of them) rely on the performances of underlying tasks, namely natural
language tasks (e.g. entity recognition, identification of frames, etc.) which
still focus much researchers' attention.

Challenges for future natural language interfaces are even more \emph{natural}
interfaces, such as speech-to-text features (users won't have to type their
queries anymore in the near future) as well as the role of communities (in
particular \emph{crowdsourcing} which is already very popular among the Q\&A
community).

In the next three chapters
(chapter~\vref{sec:chapter-personalized},
chapter~\vref{sec:chapter-patterns} and chapter~\vref{sec:chapter-modeling}) we
will present our contribution to the state of the art as a proposal for a
system interfacing data warehouses in the context of \ac{BI}.


\stopcontents[chapters]


