% IEEE Paper Template for US-LETTER Page Size (V1)
% Sample Conference Paper using IEEE LaTeX style file for US-LETTER pagesize.
% Copyright (C) 2006 Causal Productions Pty Ltd.
% Permission is granted to distribute and revise this file provided that
% this header remains intact.
%


% note: industrial papers are 12 pages long!!

\documentclass{acm_proc_article-sp}
%\usepackage[\interfootnotelinepenalty=10000]{bigfoot}
\usepackage[caption=false]{subfig}
\usepackage{times,amsmath,url}
\usepackage{graphicx}
\usepackage{listings}
\lstset{language=SQL,basicstyle=\footnotesize}
\usepackage{amssymb}
\usepackage[width=0.49\textwidth]{caption}
\usepackage{tikz}
\usepgflibrary{arrows}
\usepackage{pgfplots}
\usetikzlibrary{fit, arrows, decorations.markings}
\usepackage{float}
\floatstyle{plain}
\newfloat{program}{thp}{lop}
\floatname{program}{Listing}


\newcommand\TODO[1]{{\textcolor{red}{\underline{TODO:}#1}}}

\begin{document}
\title{QUASL: A Framework for Question Answering \\and its Application to
Business Intelligence}

\numberofauthors{3} %  in this sample
% file, there are a *total* of EIGHT authors. SIX appear on the 'first-page'
% (for formatting reasons) and the remaining two appear in the
% \additionalauthors section.
%
\author{
% You can go ahead and credit any number of authors here,
% e.g. one 'row of three' or two rows (consisting of one row of three
% and a second row of one, two or three).
%
% The command \alignauthor (no curly braces needed) should
% precede each author name, affiliation/snail-mail address and
% e-mail address. Additionally, tag each line of
% affiliation/address with \affaddr, and tag the
% e-mail address with \email.
%
% 1st. author
\alignauthor
		Nicolas\\
       \affaddr{Institute for Clarity in Documentation}\\
       \affaddr{1932 Wallamaloo Lane}\\
       \affaddr{Wallamaloo, New Zealand}\\
       \email{trovato@sap.com}
% 2nd. author
\alignauthor
		Falk\\
       \affaddr{Institute for Clarity in Documentation}\\
       \affaddr{P.O. Box 1212}\\
       \affaddr{Dublin, Ohio 43017-6221}\\
       \email{webmaster@sap.com}
% 3rd. author
\alignauthor Marie-Aude Aufaure\\
		\affaddr{\'Ecole Centrale Paris}\\
       \affaddr{Grande Voie des
       Vignes}\\
       \affaddr{Ch\^atenay-Malabry,
       France}\\
       \email{marie-aude.aufaure@ecp.fr}
}


\maketitle
\begin{abstract} 
Question Answering (Q\&A) from structured data is a technique that may
revolutionize enterprise search.
An very promising use case for such technology is Business Intelligence (BI).
Data warehouses became an important facility for monitoring and decision
making. In order to make the BI paradigm more accessible to end users, some
efforts have been made in the field of search for existing reports. However,
the problem of converting an end user's natural language input to a valid
structured query in an ad-hoc fashion hasn't been sufficiently solved yet. This
is in particular important as todays' self-service Business Intelligence (BI)
tools for reporting are still too difficult to operate for non-technology
affine business users.

In this paper, we present a framework for Question Answering systems. It
builds on top of natural language processing technologies. The main innovation
is that the framework allows defining a mapping between recognized semantics of
a user's questions to a well structured query model that can be executed on 
arbitrary data warehouses. It bases on popular standards like RDF and SparQL 
and is therefore very easy to adapt to other domains or use cases.

We will describe the application of this framework at hand of a BI question
answering use case, which also includes the personalization of generated
queries, demonstrating the real world applicability of our approach.


\end{abstract}

% A category with the (minimum) three required fields
\category{H.4}{Information Systems Applications}{Miscellaneous}
%A category including the fourth, optional field follows...
\category{D.2.8}{Software Engineering}{Metrics}[complexity measures,
performance measures]

\terms{Theory}

\keywords{Data Warehouses, Natural Language, Information retrieval} 

\section{Introduction}


\newsavebox{\firstlisting}
% removed from listing
% City."CITY" AS city,
% Customer."AGE" AS age,
\begin{lrbox}{\firstlisting}
\begin{lstlisting}
SELECT
 sum(Invoice_Line."DAYS"
  * Invoice_Line."NB_GUESTS"
  * Service."PRICE") AS revenue,
 Customer."LAST_NAME" AS customer
FROM City
INNER JOIN Customer
 ON (City."CITY_ID"=Customer."CITY_ID")
INNER JOIN Sales
 ON (Sales."CUST_ID"=Customer."CUST_ID")
INNER JOIN Invoice_Line
 ON (Invoice_Line."INV_ID"=Sales."INV_ID")
INNER JOIN Service
 ON (Invoice_Line."SERVICE_ID"=Service."SERVICE_ID")
WHERE
 city = 'Palo Alto' AND
 age >= 20 AND
 age <= 30
GROUP BY
 customer
ORDER BY revenue
LIMIT 5
\end{lstlisting}
\end{lrbox}
\begin{figure*}[!ht]
\centering
 \subfloat[A user's question and derived semantic units (comparable to a parse
tree in natural language processing).
Successive tokens that satisfy some constraints (e.g., linguistics pattern or
dictionary matches) are marked with a dotted lined boxes. Inferred semantics
are drawn with solid lined rectangles. These semantic units of recognized
entities form parts of potential structured queries that might fullfill the
users information need. In addition, the system has to propose a measure for
`?1' to compute a valid multi-dimensional query.] {\label{fig:running-example}
\begin{minipage}[c][1\width]{0.5\textwidth}
\centering
%\includegraphics[scale=0.75]{img/running-example}
\includegraphics[clip=true, trim = 8cm 11cm 9.5cm
3cm,width=1.00\linewidth]{img/running-example.pdf}
\end{minipage}}
\subfloat[Example SQL-query that was generated from the user question in
figure~\ref{fig:running-example}.
Natural language patterns, constraints given by the data and metadata of the
data warehouse (see figure~\ref{fig:data-schema}) have been applied to infer
query semantics.
This where mapped to a logical, multi-dimensional query, which in turn was
translated to SQL. Note, that the `revenue' represents a proposed measure,
depicted as `?1' in figure~\ref{fig:running-example}.
The computation of the measure `revenue' and the join paths are configured in
the data warehouse metadata.]{\label{lst:structured-query-1}
\begin{minipage}[c][1\width]{ 0.5\textwidth}
\centering
{\usebox{\firstlisting}}
%\vspace{-.5cm}
\end{minipage}}
%\vspace{-.2cm}
\caption{Translating a user's question into a strcutured query.}
\label{fig:translation-process}
\end{figure*}


In the last decades data warehouses became an important information source for
decision making and controlling. A lot of progress has been made to support
casual end-users by allowing interactive navigation inside complex reports or
dashboards (e.g., by interactive filtering or calling OLAP-operations such as
drill-down in a user-friendly way). In addition there has been a lot of effort
in making reports or dashboards searchable.

However, most casual users still have to rely on pre-canned reports that are
provided by the IT-department of a company because todays' Business Intelligence 
(BI) self-service tools still require a lot of technical insides such as 
an understanding of the data warehouse schema. 
This is especially cumbersome because data warehouses
grew dramatically in size and complexity. A popular use case for BI is for
instance the segmentation of customers to plan marketing campaigns (e.g., to
derive the most valuable, middle-aged customers in a certain region). It is not
unusual that business users planning a campaign have to cope with hundreds of
key performance indicators (KPIs) and attributes, which they have to combine in
an ad-hoc fashion to cluster their customer base. A keyword or even natural
language based interface to formulate their information need would ease this
task a lot as users are more comfortable to use unstructured query interfaces
compared to very structured ones~\cite{Hearst:2011:NSU:2018396.2018414}.
This can be underlined with the recent success of question answering systems,
such as WolframAlpha\footnote{See \url{http://www.wolfram.com/mathematica/}.},
especially in conjunction with speech-to-text technologies like
Siri\footnote{See \url{http://www.apple.com/iphone/features/#siri}.} and the
huge efforts in the database community to enable keyword based search in
databases (e.g., 
\cite{He:2007:BRK:1247480.1247516,
Tata:2008:SDM:1376616.1376705,
Tran:2007:OIK:1785162.1785201}).

However, the keyword based approaches developed so far lack many important
features to fully enable a question-driven data exploration by end-users,
where the consideration of range queries, the support to include application
specific vocabulary (e.g., ``middle-aged'') or leveraging of the users'
context (e.g., ``customers in my region'') are only the most obvious ones. Note
that the problem is not only to extract semantics from a user's question (e.g.,
from a range phrase such as ``between 1999 and 2012''), which is supported by
our framework as well. The more important problem is to relate findings
detected in a user's questions to formulate a well defined structured query.

The framework presented in this paper supports the whole process of defining
and executing a domain or application-specific Question Answering system.


\section{Problem statement}
\label{sec:problem}

In our work, we reduce the problem of interpreting natural language queries to a
constraint-solving problem as explained later on. In general, the
problem of Question Answering (Q\&A) from structured data can be formalized as
follows: given a structured query language $L$ and a user's question $q_{\rm
u}$, we define a mapping $q_{\rm u} \mapsto R$ to a ranked list $R$ (results) 
of structured queries  $r_{\rm 1},r_{\rm 2},\ldots,r_{\rm n} \in L$, where
$r_{\rm 1}$ represents the highest scored interpretation of the user's questions 
with respect to the data and metadata of the underlying structured data source. 

Note, that we focus in this paper mainly on multi-dimensional queries (as
structured queries) and data warehouses (as data source), without restricting 
the generality of our approach. A simple multi-dimensional query 
(see e.g., ~\cite{Giacometti:2009:RMQ:1617540.1617589} for a more detailed
definition) is usually represented by a number of dimensions (which are
organized in hierarchies) or their attributes, measures (aggregated KPIs with
respect to the used dimensions or attributes) and filters, and is executed on
top of a physical or virtual data cube (e.g., via an abtraction layer
generating a mapping to SQL as in our case).
In addition result modifiers, e.g., for sorting and 
truncation, can be added to a query.   
The interested reader might already look up 
the logical schema of the example data mart (part of a data warehouse) used
throughout this paper in figure~\ref{fig:data-schema}. 

An example question, i.e. $q_{\rm u}$=`\emph{Top 5 middle-aged customers in my
city}', is shown in figure~\ref{fig:running-example}, while
listing~\ref{lst:structured-query-1} depicts an example result query $r \in R$
in SQL-syntax (we will later switch to a more conceptual notation, but we use
here SQL because we assume that the majority of readers is more experienced
with SQL than multi-dimensional query languages such as MDX). 
Note that the join paths in the example, 
the aggregation function `\emph{sum()}', and the expression inside the
aggregation function representing the measure `\emph{revenue}' are predefined
in the data warehouse metadata.

We also depict in figure~\ref{fig:running-example} intermediate steps to derive
the final structured query. Successive tokens that satisfy some constraints 
(e.g., `\emph{customers}', which matches the name of an entity in the data 
warehouse metadata) are marked with dotted lined boxes. Inferred semantics 
(e.g., the filter for the age range) are drawn with solid lined rectangles.

In order to derive a structured query $r \in R$ from a given question $q_{\rm
u}$, we have to solve a series of problems, which form a kind of process and
which we like to detail in the following:

%\paragraph{(1) Information Extraction}
\textbf{(1) Information Extraction:} 
The first step in the processing pipeline is 
to derive lower level semantics from the user's question. This is, data and
metadata of the underlying structured data source (i.e., domain terminology) 
has to be recognized (e.g., \emph{customers} refers to the dimension
`\emph{customer}' in the data warehouse schema). In this stage precision and
recall of the recognition algorithms influence the overall quality of the
generated queries.
Even though matching the question and domain terminology is not the main scope 
of this paper, nevertheless we will give an overview of the technologies that
 are used currently by our system in section~\ref{information-extraction}. 
 
To support more sophisticated queries (e.g., range queries or queries with
orderings) and go beyond keyword-like questions, we have to allow
adminsitrators to define custom vocabulary (such as \emph{middle-agged}) and
more complex linguistic patterns, where \emph{top 5} is a simple example. Such
artifacts may export variables, such as the beginning and end of the age range
in the case of \emph{middle agged} or the number of objects to be shown to the
user in the case of \emph{top 5}. We will discuss our solution for defining
linguistic patterns in section~\ref{information-extraction}. 

\textbf{(2) Normalization and Transformation:} In many cases, we cannot 
map the users' input directly to a structured query. For instance dates or
numbers may be expressed in different forms (e.g., \emph{15000} as 
\emph{15,000}, \emph{15k} or even \emph{fifteen thousand}). The same holds 
true for custom vocabulary such as \emph{middle agged}, which translates to 
a range on the \emph{age} attribute of the dimension `\emph{customer}' (see 
figure~\ref{fig:data-schema}). Therefore, we need a configurable mechanism
to translate input variables derived from linguistic patterns to variables
that conform with the query language $L$. Similarly, we require a mechanism 
to define variables for custom vocabulary (e.g., \emph{middle agged} defines 
a range starting at \emph{20} and ending at \emph{30}). The details on how 
to configure normalization and transformation functions can be found 
in section~\ref{information-extraction}.

\textbf{(3) Structural Constraints and Missing Artifacts:}
One of the most challenging problems is to generate valid structured queries  
(valid in the sense that $q_{u} \in L$ holds and that $q_{u}$ returns a 
-- maybe empty -- result). To go deeper into our example BI-domain, 
a series of constraints need to be considered in order 
to generate valid multi-dimensional queries in a next processing step, 
such as:
\begin{itemize}
  \item Artifacts used within one query have to occur in the same data 
  warehouse and data mart (if several ones are used).
  \item A query contains at least one measure or dimension.
  \item A query that contains two dimensions requires one or more 
  measures that connect these dimensions (see figure~\ref{fig:data-schema}),
  which includes that these dimensions relate to the same fact table.
  \item Different interpretations for ambiguously recognized dimensions 
  (i.e. different matches covering the same text fragment in the users' 
  question) shall not be used within the same structured query because 
  it would change the aggregation level and lead to unexpected results.
  Using ambiguously recognized measures or filter terms for the same 
  dimension is not an issue because they do not change the overall 
  semantics. 
  %Still a couple of special cases need to be considered, 
  %e.g., if a fragment of a query refers to a dimension or a filter
  %term.  
  
  \item A sorting or truncation criteria (e.g., \emph{top 5}) requires 
  an assigned measure and a dimension (note that several sortings 
  and truncations can be applied within one query, e.g., for a question
   like \emph{`Revenue for the top 5 cities with respect to 
   number of guests and the top 10 resorts with respect to days'}). 
   If no dimension or measure for a sorting or truncation criteria 
   can be extracted from the question, the system has to propose 
   one (or better a set of alternative) measures such as \emph{revenue}
  as shown in listing~\ref{lst:structured-query-1} for the missing 
  input `\emph{?1}' in figure~\ref{fig:running-example}. 
\end{itemize}

The examples above are an excerpt of the constraints that occur in 
the domain of BI. Other constraints might be 
imposed by the specific application and consideration with respect 
to the users' expectation. Our example application aims for instance 
to provide insidefull vizualisations to the user's business. 
Thus, a user typing `\emph{revenue}' as question is
not necessarily interested in the total revenue achieved by the company 
since its existence (which a query consisting only of the measure 
would return). Most likely, he is interested in a specific aspect such as 
the `\emph{revenue per year}' or `\emph{revenue per resort}', even though
he is not yet sure, which of these aspects he would like to explore.
 Therefore, our application proposes in such situations dimensions that 
 form together with the measures of the user's question a valid 
 structured query and returns a couple of suggested queries (represented 
 as chart to the user) together with the total value. Same is true if 
 the user does not mention a measure, but only a dimension as shown 
 in the example in figure~\ref{fig:running-example}. In this case, the
 system proposes related measures (here \emph{revenue}).

Another related, but more interesting aspect is contextualization. 
A user might have a user profile, which augments the systems 
metadata (e.g., with the city the user is located in). In order to 
simplify the data exploration for him and return more relevant data for 
his current task, we may leverage the user profile to impose 
additional filters. In figure~\ref{fig:running-example} 
the user asks for instance for \emph{my city}. Knowing that
there is a mapping between `\emph{Palo Alto}' in the user profile and a value 
inside the data underlying the dimension `\emph{city}', we may automatically 
generate a strucured query with a filter for `\emph{Palo Alto}'.

Considering all these structural constraints and the possibilites 
to augment queries with artifacts from the underlying metadata 
(i.e. to propose queries to guide the user more efficiently), 
it is obvious that such constraints are difficult to express and 
maintain in an imperative programming language. In 
section~\ref{sec:structural-constraints} we will present 
as one of our major contributions a declarative way to capture such 
domain or application-specific constraints to generate structured queries.

\textbf{(4) Mapping Artifacts into Structured Queries:} Once we 
have defined constraints and artifacts that shall be considered, 
such as dimensions, measures, attributes and custom variables (e.g., 
for \emph{top 5} or \emph{middle agged}), they 
have to be mapped into semantic units (e.g., \emph{basic query} 
or \emph{range filter} as shown in figure~\ref{fig:running-example}). 
Now we have to create a new data structure that represents 
a query such that artifacts refered in the constraints are mapped 
to the respective parts of a structured query. These are usually 
projections (i.e. measures and dimensions), selections (i.e. filters) 
and result modifiers (such as ordering or truncation expressions). 
Again, different query languages, domains and applications may 
impose different requirements.

Clearly, measures and dimensions that where recognized as \emph{basic query}
in figure~\ref{fig:running-example} have to be included in the projections.
\emph{Age} as dimension refered by the range filter derived 
from \emph{middle-aged} shall not be included in the projections, 
since it would change the semantic of the query if included in 
the \emph{GROUP BY}-statement in listing~\ref{lst:structured-query-1}. 

In general, it is often application-specific whethera dimension used 
within a filter expression shall be mapped to a projection. 
In our case, there is a direct mapping from the query result to a 
chart. Since most traditional chart types (e.g., bar charts) support 
only 2 dimensions, we have to reduce the number of projections to a 
minimum and therefore split very complex queries into several ones, each 
showing different views of the data set of interest. Another way of 
reducing the number of projections is to neglect projections for 
filters having only one value (e.g., as shown for \emph{Palo Alto} in 
listing~\ref{lst:structured-query-1}) since it does not change the 
semantics of the query. 

Again, we realize that it is not straightforward to express mappings
from recognized or proposed artifacts to a data structure 
representing a valid query. The approach of mapping data structures 
derived from the user's question to a data structure describing a technical 
query in a declarative way is detailed in conjunction with our
solution for defining structural constraints in  
section~\ref{sec:structural-constraints}.

\textbf{(5) Scoring of Queries:} The previous step generates potentially 
several structured queries. Thus queries have to be ranked 
by relevance with respect to the user's question. We propose a series 
of heuristic for ranking generated queries in section~\ref{sec:scoring}.


\textbf{(6) Query Execution:} Finally, ranked queries have to be executed. 
To improve query execution time, we have parallelized not only 
the execution of queries, but also the whole process of generating the queries.
We will give a brief overview on our approach for speeding up response time in  
section~\ref{sec:execution}. 



\begin{figure}
\centering
\includegraphics[clip=true, trim = 0cm 7cm 8.5cm
0.5cm,width=1.00\linewidth]{img/data-schema.pdf} \caption{Logical data model of
an example data mart (maps to a physical, relational galaxy schema) in notation
proposed by Golfarelli et Rizzi~\cite{Golfarelli:1998:MFD:294260.294261}: two
fact tables (reservation, sales) define different measures (e.g., revenue or
reservation days).
These facts connect the different dimensions (e.g., customer and sales person). 
Dimensions are organized in hierarchies (e.g., customer, city, region and
country) and may have assigned a set of attributes (e.g., age for customer).
Multi-dimensional queries are posed in our case in a proprietary query language
and are translated via an abstraction layer into SQL-queries (which may contain
sub-queries).}
\label{fig:data-schema}
\end{figure}

% recognize entities, predefined in some kind of background knowledge 
%  (either data or metadata of the data warehouse, or natural language rules),
%  such as \emph{top 5}, \emph{middle-aged}, \emph{customer} or \emph{my city}; 
%  \item define and execute normalizations and transformations to map the 
%  natural language input to valid expressions that can be used in the 
%  data warehouse query language $L$, e.g., to translate \emph{middle-aged} 
%  into a numerical range;
%  \item  
%\end{enumerate}



% The problem that we are interested in can be formalized as follows.
% Given a database query language $L$ and a user $u$, we define a mapping from a
% user's question $q_u$ to a set $\{\langle
% query_{i,L},rank_i\rangle\}$ of structured queries expressed in the database
% query language $L$ ordered by $(rank_i)$:
% \begin{equation}
% q_u\mapsto \left\{\left\langle query_{i,L},rank_i \right\rangle \right\}
% \label{eq:problem}
% \end{equation}
% where $rank_i\in[0,1]$.
% To illustrate the problem that we tackle, let's consider the following question:
% \begin{equation*}
% \textnormal{\emph{Top 5 middle-aged customers in my city}}
% %\label{eq:running-example}
% \end{equation*}
% This problem of translating this question into a database query is illustrated
% figure~\ref{fig:translation-process}.







%\begin{figure}
%\centering
%\includegraphics[width=8cm]{img/running-example}
%\caption{Steps to map the question~(\ref{eq:problem}) to the
%database query}
%\label{fig:running-example}
%\end{figure}
%In this figure, we have represented with boxes the successive tokens that
%satisfy some constraints, and therefore trigger translation rules.
% Some of such rules \emph{export} some of the content of the matched tokens, and
% the exported tokens may also be \emph{recomputed} before triggering the
% translation rules.
% The process is detailed below.

\section{Processing Pipeline and \\ Metadata Managemt}
\label{system-overview}

Before going into the details of our approach, we like to briefly introduce 
the overall architecture of our framework in section~\ref{architecture} 
and detail the metadata and data management components in 
section~\ref{data-management}. 

\subsection{Plugin-based processing pipeline}
\label{architecture}

The core of our system is a framework for federated search. On top we built 
 abstractions for declarative Q\&A as explained later on. The system follows 
a plugin-based architecture \cite{conf/eurocast/WagnerWPKBBA07}. 
We distinguish three types of plugins, each type responsible for one 
processing step within our system, i.e. extracting information from the 
user's question, formulating and executing a structured query, and
post-processing the result (e.g., by rendering a chart). All plugins within a
certain processing step are executed independent of each other, 
allowing a huge degree of parallelism as explained in
section~\ref{sec:execution}.
The three different plugin types are:

\textbf{(1) Information Extraction Plugins:} The first type of plugin analyzes
 the user's question by applying information extraction components. These
 components contribute to a common data structure, the so called parse 
 graph (see next subsection). By default, the system is equipped with three
 types of information extraction plugins that can be instantiated for 
 different data sources or configurations (see also 
 section~\ref{information-extraction}). These are plugins for matching 
 artifacts of the data source's metadata within the query, plugins for 
 recognizing data values (directly executed inside the underlying database) 
 and plugins for applying natural language patterns (e.g., for range 
 queries). These plugins jointly capture lower level semantic information to
 interpret a users question that can be interpreted in the context of the
 dataware house metadata in subsequent processing steps.
 
\textbf{(2) Search Plugins:} Search plugins operate on the common result of all
information extraction plugins -- the parse graph. The name
\emph{Search Plugin} is used because their untend is to execute some kind
of query on some backend-system, which could be indeed a traditional
search engine in the scope of the federated search framework. 
They may use recognized semantics from the parse graph and 
formulate arbitrary types of queries. In the case of leveraging
a traditional search engine, a plugin might take the users' question, rewrite
the question using the recognized semantics to achieve higher recall or
precision.
However, for Q\&A we define a abstract implementation of a plugin that is
equipped with algorithms to transform the semantics captured in the parse graph
to a structured query (see section~\ref{sec:structural-constraints}). The
output of a search plugin is a stream of objects that represent an well
structured result together with its metadata (e.g., consisting hte datasource
that was used to retrieve the object or a score computed inside the plugin).

\textbf{(3) Post-processing Plugins:} Post-processing plugins might be used
for different purposes. They can alter the result objects in arbitrary 
way (or even aggregate different results). The main funtionality of 
post-processing plugins in the application presented in this paper is
to select appropriate chart-types for a given result object 
(see~\cite{text2query}) and render a corresponding image.
 

\subsection{Metadata Management and Parse Graphs}
\label{data-management}


\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[remember picture]
\tikzstyle{bigbox} = [draw, draw=black!20, rounded corners, rectangle]
\tikzstyle{boxed} = [minimum height=0.8cm, draw=black!50, rounded corners, rectangle] 
\tikzstyle{unboxed} = [minimum height=0.8cm,rounded corners, rectangle] 
\tikzstyle{node}=[circle,minimum size=10pt,draw=black, font=\tiny] 
\tikzstyle{blind}=[]
\tikzstyle{title} =[fill=white, text=black!80]
\tikzstyle{edge} = [->,text=black]
% user profile graph
  	\node[node, fill=blue!10](userNode){U};
  	\node[unboxed,left of=userNode,node	distance=1.1cm](userNodeLabel){User\#123}; 
  	\node[blind,left of=userNode,node distance=3.5cm](userProfileLeft){}; 
  	\node[blind,right of=userNode,node distance=3.5cm](userProfileRight){};
  	%
  	\node[node, below of=userNode,node distance=2cm, fill=blue!10](userName){?};
	\node[unboxed,below of=userName,node distance=.4cm](userNameLabel){John
	Smith};
	%
  	\node[node, left of=userName,node distance=2cm, fill=blue!10](userLocale){?};
	\node[unboxed,below of=userLocale,node distance=.4cm](userLocaleLabel){US};  
	%
  	\node[node, right of=userName,node distance=2cm, fill=blue!10](userCity){?};
	\node[unboxed,below of=userCity,node distance=.4cm](userCityLabel){Palo Alto};  			
	% 		
	\path[edge] (userNode) edge node {name} (userName);
	\path[edge,left=.1pt] (userNode) edge node {locale} (userLocale);
	\path[edge,right=.1pt] (userNode) edge node {location} (userCity);
% user profile box
\node[bigbox, fit=(userNode) (userName) (userLocale) (userCity)
(userNameLabel) (userLocaleLabel) (userCityLabel)(userNodeLabel)
(userProfileLeft)(userProfileRight)](userProfile) {};
\node[title, right=10pt, font=\large] at (userProfile.north west) {User Profile};

% schema graph
  	\node[node](schemaNode)[below of=userNode,node distance=4.7cm, fill=red!10]{W};
  	\node[unboxed,right of=schemaNode,node
  	distance=.9cm](schemaNodeLabel){Resorts}; \node[blind,left
  	of=schemaNode,node distance=3.5cm](schemaNodeLeft){}; \node[blind,right
  	of=schemaNode,node distance=3.5cm](schemaNodeRight){};
  	%
  	\node[node, left of=schemaNode,node distance=2cm,fill=red!10](revenueNode){M};
	\node[unboxed,below left of=revenueNode,node
	distance=.6cm](revenueNodeLabel){Revenue};

  	%
  	\node[node, below of=schemaNode,node distance=2cm,fill=red!10](customerNode){D};
	\node[unboxed,below left of=customerNode,node
	distance=.6cm](customerNodeLabel) {Customer};
	%
  	\node[node, left of=customerNode,node distance=2cm,fill=red!10](ageNode){A};
	\node[unboxed,left of=ageNode,node distance=.6cm](ageNodeLabel) {Age};
	%
  	\node[node, right of=customerNode,node distance=2cm,fill=red!10](cityNode){D};
	\node[unboxed,below left of=cityNode,node distance=.6cm](cityNodeLabel){City};  
	% 		
	\path[edge,left=.1pt] (customerNode) edge node {dimOf} (schemaNode);
	\path[edge,above=.1pt] (ageNode) edge node {attrOf} (customerNode);
	\path[edge] (cityNode) edge node {dimOf} (schemaNode);
	\path[edge,above=.1pt] (revenueNode) edge node {measOf} (schemaNode);
% schema box
\node[bigbox, fit=(schemaNode) (customerNode) (cityNode) (revenueNode)
(schemaNodeLabel) (customerNodeLabel) (cityNodeLabel)(revenueNodeLabel)
(schemaNodeLeft)(schemaNodeRight)](metadata) {};
\node[title, right=10pt, font=\large] at (metadata.north west) {Schema};

% parse graph
	\node[node,fill=yellow!30, below of=schemaNode,node
	distance=6.2cm, font=\large](queryNode){Q}; \node[blind,left of=queryNode,node
	distance=3.5cm](queryNodeLeft){}; \node[blind,right of=queryNode,node
	distance=3.5cm](queryNodeRight){};
    %
    \node[boxed,below of=queryNode,node distance=1.5cm](txtTop5){Top 5};
    \node[boxed,left of=queryNode,node
    distance=2cm](txtMiddleAged){miggle-agged}; \node[boxed,above
    of=queryNode,node distance=1.5cm](txtCustomer){customers};
    \node[boxed,right of=queryNode,node distance=2cm](txtMyCity){my city};
    %
    \path[edge] (queryNode) edge node {hasAnnot} (txtTop5);
    \path[edge,above=.1pt] (queryNode) edge node {} (txtMiddleAged);
    \path[edge] (queryNode) edge node {hasAnnot} (txtCustomer);
    \path[edge,above=.1pt] (queryNode) edge node {} (txtMyCity);
% parse graph box
\node[bigbox, fit=(queryNode)(txtMiddleAged)(txtTop5) (txtCustomer) (txtMyCity)
(queryNodeLeft)(queryNodeRight)](question) {};
\node[title, right=10pt, font=\large] at (question.north west) {Parse Graph};

% pattern graph
	\node[node, below of=queryNode,node distance=5.5cm,fill=green!10](patternNode){C};
	\node[unboxed,below of=patternNode,node
	distance=.6cm](patternNodeLabel){PatternConfig}; 
	\node[blind,left of=patternNode,node distance=3.5cm](patternNodeLeft){}; 
	\node[blind,right of=patternNode,node distance=3.5cm](patternNodeRight){};
	%
  	\node[node, above of=patternNode,node distance=2cm,fill=green!10](topkNode){P};
	\node[unboxed,below left of=topkNode,node distance=.6cm](topkNodeLabel){TopK};
	%
	\node[node, left of=topkNode,node distance=2cm,fill=green!10](middleAgedNode){P};
	\node[unboxed,below left of=middleAgedNode,node
	distance=.6cm](middleAgedNodeLabel){AgeTerms};
	%
  	\node[node, right of=topkNode,node distance=2cm,fill=green!10](myNode){P};
	\node[unboxed,below right of=myNode,node distance=.6cm](myNodeLabel){Context};
	% 		
	\path[edge,left=.1pt] (patternNode) edge node {hasRule} (middleAgedNode);
	\path[edge] (patternNode) edge node {hasRule} (topkNode);
	\path[edge,right=.1pt] (patternNode) edge node {hasRule} (myNode);
% pattern box
\node[bigbox, fit=(patternNode)(topkNode)(middleAgedNode)(myNode)
(patternNodeLabel) (topkNodeLabel) (middleAgedNodeLabel)(myNodeLabel)
(patternNodeRight)(patternNodeLeft)](patterns) {};
\node[title, right=10pt, font=\large] at (patterns.south west) {Natural Language Patterns};

% realtions among the different graphs
%
\path[edge,left=.1pt] (txtCustomer) edge node {matches} (customerNode);
\path[edge,right=.1pt] (txtMyCity) edge node {matches} (cityNode);
%
\path[edge,right=.1pt] (userCity) edge node {occursIn} (cityNode);
%
\path[edge,left=.1pt] (txtTop5) edge node {matches} (topkNode);
\path[edge,right=.1pt] (txtMiddleAged) edge node {matches} (middleAgedNode);
\path[edge,left=.1pt] (txtMyCity) edge node {matches} (myNode);

\path[edge,left=.1pt,bend left] (middleAgedNode) edge node {appliesTo} (ageNode);
\path[edge,left=.1pt,bend right] (myNode) edge node {appliesTo} (userCity);

\end{tikzpicture}
\end{center}
\caption{Caption..}
\label{fig:query-graph}
\end{figure}


An important foundation of the overall system is the metadata management and
the runtime information captured in the parse graph. We use the term 
parse graph to state its close relationship to the term parse tree,
often used in the context of natural language processing. In our 
case the parse graph and other metadata required to interpret 
a question is captured in form of 
RDF\footnote{Resource Description Framework, see~\url{http://www.w3.org/RDF/}.} 
since it is a widely accepted standard for representing graphs. We also
benefit a lot from the power of the graph pattern query language
SparQL\footnote{SPARQL Protocol and RDF Query Language} as detailed later on. 
 
In figure~\ref{fig:query-graph} we show an example of a parse graph 
for our running example question and other graph-organized metadata.
Before discussing the parse graph itself, we like to detail the 
metadata graphs. 

On top in figure~\ref{fig:query-graph} you see a graph capturing 
the user profile and below an excerpt of the graph representing 
the data warehouse's schema. We only show one
data warhouse/ datamart (see `\emph{Resorts}'-node) for brevity and
only one measure (`\emph{Revenue}'-node), two dimensions 
(`\emph{Customer}' and `\emph{City}') and one attribute (`\emph{Age}').      
The `\emph{City}'-node is linked to the location node (`\emph{Palo Alto}')
in the user profile. Currently this link is established automatically 
by matching the values of the user profile against the data warehouse data.

On the bottom we see a graph capturing metadata of some configured 
natural language patterns. We keep this information inside RDF to relate 
these patterns to other resources. For instance the custom vocabulary
for age terms (node labeled with `\emph{AgeTerms}') -- used to identify 
`\emph{middle-agged}' -- applies to the schema's attribute `\emph{Age}' by its
pattern definition. The user context pattern (`\emph{Context}'-node) relates to
all user profile nodes that have a corresponding value in the data warehouse
data (here: `\emph{Palo Alto}'). In addition, the nodes 
of the natural language pattern graph have properties
like a executable information extraction rule (see 
section~\ref{information-extraction}), a set of variables that can 
be exported, e.g., that the `\emph{TopK}'-pattern exports the number
 of items that a user likes to see and that the ordering is ascending
 (cf. figure~\ref{fig:running-example}).

The parse graph itself is depicted in the third box from the top. It is 
generated by \emph{Information Extraction Plugins} as mentioned 
in previous subsection. The graph mainly consists of a central 
node representing the users question (the larger node 
marked with `$Q$') and so called \emph{annotation}-nodes 
(nodes with a retancle shape in figure~\ref{fig:query-graph}, labeled 
with the correpsonding fragment of the users' question). 
\emph{Annotation}-nodes capture metadata that was aquired during
the matching process (e.g., when matching data warehouse metadata 
or natural language patterns) and link to relevant resources used 
or identified during this process (e.g., a dimension or the used 
natural language pattern). 

As runtime metadata we keep for instance the position of a match (offset
and length of the matched fragment within the question), the type of
the occured match (e.g., match in data warehouse metadata or match with 
natural lanugage patterns) and a confidence value. In addition 
\emph{Information Extraction Plugins} may capture specific metadata 
such as instantiated output variables for natural language patterns
(e.g., the `\emph{5}' extracted from `\emph{Top 5}').

Before going into the details on how to translate semantics captured
during the information extraction process into structured queries,
we like to detail how to derive a structure such as the one shown in 
figure~\ref{fig:query-graph} and how to configure natural language
patterns.


\section{Entity recognition}
\label{information-extraction}

To derive a graph structure such as shown in 
figure~\ref{fig:query-graph} we need to match the users'
question with the metadata and data of the underlying 
structured datasource and apply all configured natural language patterns.
In the beginning of this section, we will briefly introduce the matching 
algorithms. The last subsection contains a more detailed description
on how natural language patterns are configured and executed.

\begin{figure*}[htb]
\begin{program}[H]
{\footnotesize
\begin{verbatim}
1 :yearsBackWardsToDateRange
2   rdf:type features:NlpPattern;
3   rdfs:label "Computes the beginning and end date, given a time range in years"^^xsd:string;
4   nlp:outputVariables "yearsBack,rangeBegin,rangeEnd"^^xsd:string;
5   nlp:rule
6     "((<last>|..|<previous>)([OD yearsBack]<POS:Num>[/OD])?<STEM:year>)"^^xsd:string;    
7   nlp:computeVariablesWithScript """
8     var today = new Date(); var dd = today.getDate(); var yyyy = today.getFullYear();     	
9     rangeEnd =  calculateEnd(dd,mm,yyyy);
10    rangeBegin = calculateStart(dd,mm,yyyy,yearsBack);
11		
12    function calculateEnd(dd,mm,yyyy) {
13        return 	yyyy + '-' + mm + '-' + dd; 						   
14    }
15    function calculateStart(dd,mm,yyyy,yearsBack) {
16        return (yyyy-yearsBack) + '-01-01';							   
17    } """^^xsd:string ;      
18  nlp:appliesTo dataType:Date;
\end{verbatim}}
\caption[width=10cm]{Pattern to compute dates from phrases like `last 3 years'.}
\label{rule_example}
\end{program}
\end{figure*}


\subsection{Matching Metadata}

The federated search platform used within our work to implement a 
generic Q\&A framework consists of an abstract 
\emph{Information Extraction Plugin} for indexing RDF-graphs and
matching a text (i.e. the users' question) and the indexed data.
It is used in the scenario presented to spot terms of the
user profile and the data warehouse schema inside the users'
question.
   
We use a standard information extraction system (SAP BusinessObjects 
Text Analysis\texttrademark, a succesor of the system presented in 
\cite{DBLP:conf/trec/Hull99}) with a custom scoring function. 
As scoring function for evaluating indiviual matches 
 we adapted the scoring that we presented in 
 \cite{DBLP:conf/www/BrauerHHLNB10}. In a nutshell, it combines 
 a TF-IDF like metric with Levenhtein and punishes in addition
 matches where the length of a term in the metadata is mutch 
 longer than the string occuring in the users' question. A 
 threshold on the score limits the number of matches
 that are considered for further processing.
 
 If a substring of the users' question was identified 
 as matching a term in the datasources metadata (user profile or 
 schema), the plugin generates an annotation node in the parse
 graph. This node links the matched node and the question node
  (cf. figure~\ref{fig:query-graph}).   
 As discussed before, runtime metadata such as the
 offset, length and score of the match are stored as
 attributes of the annotation node. 

\subsection{Matching Data}

Since replicating a data warehouse or any other large data source
to RDF is not very efficient (in particular here, because we would 
only replicate the data for indexing), we use custom plugins 
that are operating close to the data to 
match the question's content and data warehouse data. 

For the BI 
use case that means retrieving the database mapping from the 
abstraction layer of the data warehouse (e.g., to retrieve the 
table and column storing the customer age) and calculating matches
directly inside the database (in our case SAP HANA\texttrademark) 
using SQL-scripts. This functionality inside the database makes 
heavy use of the built-in fulltext search capabilities. 
For scoring, we use basically the raw TF-IDF values, again
with a threshold filter to reduce the number of matches.

For each match the system creates an annotation node like for 
metadata as explained above, linking the question and the 
metadata node, to signal the system that a value 
for a dimension was identified in the users' question. 
The difference between metadata annotations
mentioned in previous subsection to the ones created for values
is the annotation type, which is assigned as a property to the
annotation.


\subsection{Natural Language Patterns}

A very powerful generic plugin for information extraction is the 
one for handling complex natural language features. It can be used to
implement custom functionality (e.g., range queries, top-k queries
 or custom vocabulary such as shown for ``middle-agged''
 in figure~\ref{fig:running-example}) that goes beyond keword 
 matching.  
 
 As explained in previous section, natural language patterns 
are configured using RDF. The three main parts of a natural 
language patterns are:

 \textbf{(1) Extraction Rules:} The basis for natrual language
  patterns are extraction rules. In our case we use the GGUL
  rule language\footnote{\url{http://help.sap.com/businessobject/product_guides/boexir4/en/sbo401_ds_tdp_ext_cust_en.pdf}},
  which can be executed using SAP BusinessObjects
  Text Analysis\texttrademark. It bases similarly as
  CPSL~\cite{Appelt:1998:CPS:1119089.1119095} or
  JAPE~\cite{Cunningham99jape:a} on the idea of
  \emph{Cascading Finite-state Grammars}- meaning that you
  can build extraction rules in a cascading way. In our
  implementation we make heavy use of built-in primitives
  for part-of-speech tagging, regular expressions and
  the option to define and export variables (e.g.,
  the `\emph{5}' in `\emph{top 5}'). Note, that a rule might
  simply consist of a token or a phrase list, e.g., containing
  `\emph{middle-agged}'.

  \textbf{(2) Transformation Scripts:} Once a rule fired, exported
  variables may require some post processing (i.e., a normalization
  or transformation), e.g., to transform \emph{15,000} or \emph{15k}
  into \emph{15000}, a expression that can be used within a
  structured query. In many cases there is also the need to compute
  additional variables. The most simple case for such functionality
   is to output  beginning and end of the age range defined 
   by a term such as   \emph{middle-agged}. 
   To do additional computations and transformations,
  we allow to embed scripts inside a natural language patterns,
  which can consume output variables of the extraction rule and
  can define new variables as needed.

  \textbf{(3) Referenced Resources:} Often a rule is specific for
  a resource in some metadata graph. For instance the pattern
  for `\emph{AgeTerms}' (see figure~\ref{fig:query-graph}) applies to
  the dimension `\emph{Age}', the `\emph{Context}'-pattern only to
  nodes within the user profile and other patterns might only apply
  to certain data types (e.g., patterns for ranges to numerical 
  dimension values) --  which also are represented as nodes. 
  In order to restrict the  domain of patterns, 
  we allow to specify referenced resources. Later, we
  will detail how these references can be used in
  generating structured queries.
  
You can see an example natural language pattern in listing~\ref{rule_example}.
We show here an example, which does not match our running example question
to underline the power and flexibility of the described mechanism. It depicts
a pattern to compute from a phrase like `\emph{for the last 3 years}' two
date expressions (namely beginning date and end data) that can be used in a 
structured query. The example pattern is presented in the RDF Turtle format, 
which has certain advantages with respect to readibility 
compared to the more often used XML-representation. 

The first line defines the URI of the pattern (i.e. the subject of all 
following properties). All remaining lines define the patterns 
properties (in terms of predicates and objects). Line 2 and 3 
contain the type and description in the sense of RDF and RDF-Schema. 
In line 4 we define the variables that are output of the pattern, here 
the number of years that shall be used for the calculation (i.e. 
`\emph{yearsBack}') and the actual dates (`\emph{rangeBegin} and 
`\emph{rangeEnd}'). 

The extraction rule is defined in line 5 and 6. It consists of 
some trigger words like `\emph{last}' or `\emph{previous}',
the exported number (`\emph{[OD]}' marks that the expression between 
shall be exported, `\emph{<POS:Num>}' references the part-of-speech tag 
for numbers) and the ending token `\emph{year}' (and its stems).

Between line 7 and line 17 we see the script that is used to compute
the actual values for the variables `\emph{rangeBegin} and 
`\emph{rangeEnd}'. We use Java Script, because it can 
be executed easily in our host programming language (Java) and embed it 
into the RDF representation to store the rule definition and 
the transformation logic together. In the last line, we define
that this rule only applies to dimensions which have values of 
data type `\emph{Date}'.

Once, a extraction rule fired and the attached script has been 
evaluated, an annotation node in the parse graph 
 is created as shown in figure~\ref{fig:query-graph}. The 
annotation node cares as properties runtime metadata such as
the match position (again offset and length inside the users' 
question), the annotation type and the computed variables.


\begin{figure*}[htb]
\begin{center}
\begin{tikzpicture}[remember picture]
\tikzstyle{bigbox} = [draw, draw=black!20, rounded corners, rectangle]
\tikzstyle{boxed} = [minimum height=0.8cm, draw=black!50, rounded corners,rectangle] 
\tikzstyle{unboxed} = [minimum height=0.8cm,rounded corners,rectangle] 
\tikzstyle{node}=[circle,minimum size=10pt,draw=black, font=\tiny ] \tikzstyle{blind}=[]
\tikzstyle{title} =[fill=white, text=black!80]
\tikzstyle{edge} = [->,text=black]
% user profile graph
  
	\node[node,fill=yellow!30,font=\small](queryNode){Q};	
	\node[blind,left of=queryNode,node distance=4.9cm](queryNodeLeft){}; 
	\node[blind,right of=queryNode,node	distance=4.9cm](queryNodeRight){};
    %
    \node[node,below of=queryNode,node distance=1.5cm, fill=yellow!10](dimAnnot1){A};
    \node[unboxed,left of=dimAnnot1,node distance=.6cm](dimAnnotLabel1){?a3};
    \path[edge,dashed] (queryNode) edge node {hasAnnotation} (dimAnnot1);    
        
    \node[node,below of=dimAnnot1,node distance=2.5cm, fill=red!10](dim1){D};
    \node[unboxed,left of=dim1,node distance=.6cm](dimLabel1){?d1};
    \path[edge,above=5pt] (dimAnnot1) edge node {matches} (dim1);
    
    \node[node,above left of=dim1, node distance=.8cm, fill=black!100](dim1Label){};
    \node[unboxed,above of=dim1Label, node distance=.4cm](dim1LabelLabel){?dL1};
    \path[edge,below=.1pt] (dim1) edge node {} (dim1Label);
    
    \node[node,below of=dim1,node distance=1.5cm, fill=red!10](dw){W};
    \node[unboxed,below right of=dw,node distance=.5cm](dwLabel1){?w};
    \path[edge] (dim1) edge node {} (dw);
    
    \node[node,left of=dw, node distance=1cm, fill=black!100](dwLabel){};
    \node[unboxed,left of=dwLabel, node distance=.5cm](dwLabelLabel){?wL1};
    \path[edge,below=.1pt] (dw) edge node {} (dwLabel);
    
    \node[node,below of=dw,node distance=1.5cm, fill=red!10](dim4){D};
    \node[unboxed,right of=dim4,node distance=.6cm](dimLabel4){?d4};
    \path[edge,right=.1pt,dashed] (dim4) edge node {dimOf} (dw);
    
    \node[node,below of=dim4, node distance=.8cm, fill=black!100](dim4Label){};
    \node[unboxed,right of=dim4Label, node distance=.6cm](dim4LabelLabel){?mL2};
    \path[edge,below=.1pt] (dim4) edge node {} (dim4Label);
    
    \node[node,left of=dim4,node distance=2cm, fill=red!10](meas2){D};
    \node[unboxed,left of=meas2,node distance=.6cm](measLabel2){?m2};
    \path[edge,left=.1pt,dashed] (meas2) edge node {measOf} (dw);
    
    \node[node,below of=meas2, node distance=.8cm, fill=black!100](meas2Label){};
    \node[unboxed,left of=meas2Label, node distance=.6cm](meas2LabelLabel){?dL4};
    \path[edge,below=.1pt] (meas2) edge node {} (meas2Label);
    
    \node[node,left of=dimAnnot1, node distance=1.5cm, fill=yellow!10](measAnnot1){A};
    \node[unboxed,left of=measAnnot1, node distance=.6cm](measAnnotLabel1){?a2};
    \path[edge,dashed,above=.1pt] (queryNode) edge node {} (measAnnot1);
    
    \node[node,below of=measAnnot1,node distance=2.5cm, fill=red!10](meas1){M};
    \node[unboxed,left of=meas1,node distance=.6cm](measLabel1){?m1};
    \path[edge,above=5pt] (measAnnot1) edge node {matches} (meas1);
    \path[edge,left=.1pt] (meas1) edge node {measOf} (dw);
    
    \node[node,above left of=meas1, node distance=.8cm, fill=black!100](meas1Label){};
    \node[unboxed,above of=meas1Label, node distance=.4cm](meas1LabelLabel){?mL1};
    \path[edge,below=.1pt] (meas1) edge node {} (meas1Label);
        
    \node[node,left of=measAnnot1, node distance=1.5cm, fill=yellow!10](topKAnnot){A};
    \node[unboxed,above of=topKAnnot, node distance=.5cm](topKAnnotLabel){?a1};
    \path[edge,dashed,above=.1pt] (queryNode) edge node {} (topKAnnot);
    
    \node[node,left of=topKAnnot, node distance=1.5cm, fill=black!100](topKBegin){};
    \node[unboxed,above of=topKBegin, node distance=.5cm](topKAnnotBeginLabel){?begin};
    \path[edge,above=.1pt] (topKAnnot) edge node {begin} (topKBegin);
    
    \node[node,below of=topKBegin, node distance=1.5cm, fill=black!100](topKEnd){};
    \node[unboxed,above of=topKEnd, node distance=.5cm](topKAnnotEndLabel){?end};
    \path[edge,above=.1pt] (topKAnnot) edge node {end} (topKEnd);
    
    \node[node,below of=topKAnnot,node distance=2.5cm, fill=green!10](topK){P};
    \node[unboxed,left of=topK,node distance=.65cm](topKLabel1){TopK};
    \path[edge,above=5pt] (topKAnnot) edge node {matches} (topK);    
    
    \node[node,right of=dimAnnot1,node distance=1.5cm, fill=yellow!10](valueAnnot1){A};
    \node[unboxed,left of=valueAnnot1,node distance=.6cm](valueAnnotLabel1){?a4};  
    \path[edge,dashed] (queryNode) edge node {} (valueAnnot1);
    
    \node[node,below of=valueAnnot1,node distance=2.5cm, fill=red!10](dim2){D};
    \node[unboxed,left of=dim2,node distance=.6cm](dimLabel2){?d2};
    \path[edge,above=5pt] (valueAnnot1) edge node {valueOf} (dim2);
    \path[edge,left=.1pt] (dim2) edge node {dimOf} (dw);
    
    \node[node,above left of=dim2, node distance=.8cm, fill=black!100](dim2Label){};
    \node[unboxed,above of=dim2Label, node distance=.4cm](dim2LabelLabel){?dL2};
    \path[edge,below=.1pt] (dim2) edge node {} (dim2Label);
    
    \node[node,right of=valueAnnot1,node distance=1.5cm, fill=yellow!10](profileAnnot1){A};
    \node[unboxed,left of=profileAnnot1,node distance=.6cm](profileAnnotLabel1){?a5};  
    \path[edge,dashed] (queryNode) edge node {} (profileAnnot1);
    
    \node[node,below of=profileAnnot1,node distance=2.5cm, fill=red!10](dim3){D};
    \node[unboxed,left of=dim3,node distance=.6cm](dimLabel3){?d3};
    \path[edge,above=5pt] (profileAnnot1) edge node {matches} (dim3);
    \path[edge] (dim3) edge node {dimOf} (dw);
    
    \node[node,above left of=dim3, node distance=.8cm, fill=black!100](dim3Label){};
    \node[unboxed,above of=dim3Label, node distance=.4cm](dim3LabelLabel){?dL3};
    \path[edge,below=.1pt] (dim3) edge node {} (dim3Label);
    
    \node[node,below of=dim3,node distance=1.5cm, fill=blue!10](profile){?};
    \node[unboxed,left of=profile,node distance=1.1cm](profileLabel){?profileItem};
    \path[edge,right=.1pt] (dim3) edge node {occursIn} (profile);
    
    \node[node,right of=profile, node distance=1cm, fill=black!100](itemLabel){};
    \node[unboxed,below of=itemLabel, node distance=.5cm](itemLabelLabel){?pItemL1};
    \path[edge,below=.1pt] (profile) edge node {} (itemLabel);
    
    \node[node,below of=profile,node distance=1.5cm, fill=blue!10](user){U};
    \node[unboxed,left of=user,node distance=.7cm](userLabel){?user};
    \path[edge,left=.1pt] (user) edge node {?rel} (profile);
    
\node[bigbox, fit=(queryNode)(meas2Label)(queryNodeLeft)(queryNodeRight)](where) {};
\node[title, right=10pt, font=\Large] at (where.north west) {Where};


\end{tikzpicture}
\end{center}
\caption{Caption..}
\label{fig:structural-constraints}
\end{figure*}




% 
% 
% \subsection{Back Up}
% 
% First, a set of constraint templates are applied to the question.
% A constraint template is a set of constraints (some of them being optional)
% associated to translation rules.
% In the example~\ref{fig:running-example}, four \emph{groups} of successive
% tokens can be identified. Each group correspond to a set of constraints. The
% constraints are based on the set of \emph{annotations} from the parse tree. 
% The constraint are of different kinds: annotation's label (regular expression,
% or defined in a lexicon); position of the annotation in the question; \ldots
% \begin{itemize}
%   \item ``Top 5'': the constraints are displayed in the upper box in the figure. 
%   The inner boxes are the exported tokens. Thus, the exported tokens are:
%   ``order:Top''; ``nb\_items:5'' and ``measure:?'' where ``?'' means an optional
%   constraint (here there is no measure in this group). 
%   Rules defined in the domain-independant lexicon define how to interpret
%   ``Top'' at the translation step. 
%   \item ``middle-aged'': this constraint is defined in the domain-specific
%   lexicon, which defines how to interpret ``middle'' in the domain. It will be
%   rewritten as an explicit constraint (see
%   section~\ref{sec:running-example-normalization}).
%   \item ``in my city'': the keyword ``my'' triggers a rule involving contextual
%   information about the user. In this example, ``city'' is a dimesion, to be
%   rewritten as a filter corresponding to the city where is located the user.
% \end{itemize}
% The (database) entities recognized in this step (e.g., dimension, measures,
% filters) are indexed in a lexicon with their base and variant forms. 
% Thus, ``customers'' is recognized as the dimension ``[Customer]''. 
% 
% 
% \section{Normalization}
%  \label{sec:running-example-normalization}
% 
% Some expressions of natural language must be \emph{normalized}, i.e. rewritten
% such that they can be mapped with database elements.
% In the example~(\ref{eq:running-example}), the expression
% `\emph{middle-aged}' should be normalized in terms belonging to the dataset. 
% This expression can thus be rewritten for instance in ``of age 20 to 30''. The
% running example becomes then ``Top 5 \textit{of age 20 to 30} customers in my
% city$\star$''\footnote{This is not a well-formed English statement (it is
% marked with $\star$).}.
% The rewritten expression is then interpreted as a filter. 
% This example of normalization is domain-dependant, because ``middle-aged'' might
% have a various interpretations in different domain application. 
% Similarly, some expressions require run-time computations: for instance ``last
% year'' should be rewritten in ``2012''. This is also a domain-dependant
% normalization, because the beginning of the year differs with applications
% (e.g., fiscal year in the USA is from 1st October to 30 September while calendar
% year is from 1st January to 31st December).
% 
% There are other domain-independant examples (but language-dependant), like 
% ``1.5k'' to be rewritten in ``1,500''.
% Different normalization examples have been reported
% table~\ref{tab:normalization}.
% \begin{table}[!h]
% \centering
% \begin{tabular}{ll}\hline
% \multicolumn{1}{c}{\textbf{Expression}} &
% \multicolumn{1}{c}{\textbf{Rewriting}}\\\hline \hline
% ``$n$k'' & $n\times 1,000$\\\hline
% ``$n$M'' & $n\times 1,000,000$\\\hline
% ``$n$B'' & $n\times 1,000,000,000$\\\hline
% ``last year'' & $currentYear()-1$\\\hline
% \end{tabular}
% \caption{Rewritten expressions at the normalization step}
% \label{tab:normalization}
% \end{table}
% 


\section{Structural constraints}
\label{sec:structural-constraints}
Recognized and rewritten entities in the parse tree must be evaluated together
to generate a valid database query.
The multidimensional queries that we target are intended to be visualized in a
chart. For that reason, a query contains at least one dimension and one measure.
Moreover, those entities must belong to the same data schema (they must be
\emph{compatible}). 
An obvious constraint is that entities must be distinct.

To offer personalized results, user context should
be taken into account. 
For example, ``my city'' can be resolved, either basing on the users' current
location, or on her profile. 
The algorithm maps an annotation of type dimension (in this example the
dimension ``[City]'') to a dimension value that is in the user's context (e.g.,
``Palo Alto''). User's context comes either from user's profile, or from
information sent by the client (e.g., geo-location of the mobile device used as
a client).


To illustate this paper, we consider the data model reproduced 
figure~\ref{fig:data-schema}.





\section{Missing items}
In the example~(\ref{fig:running-example}), ``Top 5'' end up with a
constraint that says that we need to {\it order} something, and take the five
first items. Here the semantics would be to suggest a measure (which does not
appear in the query) that would be for instance [Sales revenue], and use this
measure to order the already present dimension, (Customer), and eventually to
consider only the five best (Customer) with respect to the measure [Sales
revenue].

There is not only one entity satisfying this kind of constraint. As a result,
all compatible entities can be used to generate a potential queries. These
queries must be scored (see section~\ref{sec:scoring}) to determine the
best result with respect to the question.



\section{Mapping to structured query}
The whole structure (or ``parse tree'') that has been generated
must then be mapped to a (database) structured query. 
The challenge here, is to generate a valid database query that captures the
intention of the user. 
The question~\ref{fig:running-example} is mapped to internal representations
of the multidimensional queries represented below:
\begin{equation*}
\begin{small}
Q_1=\left[
\begin{array}{lcl}
\textnormal{data model} & = & \textnormal{Club}\\
\textnormal{dimensions} & = & \{[\textnormal{Customer}]\}\\
\textnormal{measures} & = & \{(\textnormal{Revenue})\}\\
\textnormal{filters} & = & 
\left\{
\begin{array}{lcl}
[\textnormal{City}] & = & \textnormal{`Palo
Alto'},\\
\left[\textnormal{Age}\right] & \geq & 20,\\
\left[\textnormal{Age}\right] & \leq & 30
\end{array}
\right\}
\\
\textnormal{orderings} & = & (\textnormal{Customer},\textnormal{Revenue})\\
\textnormal{truncation} & = &
\{((\textnormal{Customer},\textnormal{Revenue}),\downarrow,5)\}
\end{array}
\right]
\end{small}
%\label{eq:internal-query-1}
\end{equation*}
and
\begin{equation*}
\begin{small}
Q_2=\left[
\begin{array}{lcl}
\textnormal{data model} & = & \textnormal{Club}\\
\textnormal{dimensions} & = & \{[\textnormal{Customer}]\}\\
\textnormal{measures} & = & \{(\textnormal{Margin})\}\\
\textnormal{filters} & = & 
\left\{
\begin{array}{lcl}
[\textnormal{City}] & = & \textnormal{`Palo
Alto'},\\
\left[\textnormal{Age}\right] & \geq & 20,\\
\left[\textnormal{Age}\right] & \leq & 30
\end{array}
\right\}
\\
\textnormal{ordering} & = & \{[\textnormal{Customer}].(\textnormal{Margin})\}\\
\textnormal{truncation} & = &
\{([\textnormal{Customer}],(\textnormal{Margin}),\downarrow,5)\}
\end{array}
\right]
\end{small}
%\label{eq:internal-query-2}
\end{equation*}

The internal queries $Q_1$ and $Q_2$ represent the semantics captured by the
structured (parse tree).
There are two internal queries, because the constraints concerning the measures
simply state that it must be an entity compatible with the other entities
(i.e. [Customer], [City] and [Age]).
The arrow ($\downarrow$) in the truncation clause of the conceptual queries
stands for the descending order (here we consider the natural order $<$,
and measures' domain values are supposed to be in $\mathbb{R}$).



\section{Scoring}
\label{sec:scoring}
The previous step generates potentially several structured queries. Those
queries must evenutally be ranked by relevance with respect to the user's
question.
We combine text retrieval metrics (similar to the edition distance
between user's terms and database entities names) with popularity metrics (the
fact that a database entity is more often used than another one) in a way that
``complex'' and less ``generic'' queries are better scored. 
The scoring function is computed based on a combination of the following
functions:

\textbf{(1) entity recognition confidence:} the named entity recognizer
associates a value of confidence to named entity and phrases being recognized.
This confidence is a variant of the Levenshtein
distance~\cite{levenshtein1966bcc}.
The entity recognition confidence of result $r$ can be formulated as:
$s_1(r)=\sum_{i,j}\frac{d_ic_{i,j}}{k\left|f'_i(Q)\right|}$ where $c_{i,j}$ is
the confidence of entity $e_j$ appearing in pattern $p_i$, $d_i$ a weighting
factor of $p_i$ and $\frac{1}{k|f^\prime_i(Q)|}$ a factor that ensures that
$s_1(r)$ is a confidence (in $[0,1]$).

\textbf{(2) selectivity:} defines how specific is the pattern corresponding to
a result. This ``specificity" is computed as the inverse number of generated BI
queries.
  
\textbf{(3) complexity:} based on the number of constraints in the pattern

\textbf{(4) popularity:} is computed from the query logs. When users open
results, the popularity of entities (dimensions and measures) that are part of
this result increases. Co-occurrency metrics are also computed.





\section{Execution of queries}
\label{sec:execution}
The ranked queries are executed. 
To improve query execution time we have parallelized the execution of queries.
The resulting dataset is transformed to feed a visualization engine to generate
a chart.



\section{System overview}
\label{sec:system-overview}
In the following, we assume that the database query language is SQL.
The system architecture is composed of following components:
\begin{itemize}
  \item question parsing component
  \item pattern matching component
  \item query execution \& answer generation component
\end{itemize}




\subsection{Constraints on the terminology}
The terminology gap is a well-known problem in IR. Its origin is the fact that
data model designers and end-users do not share a common agreement on how to
model the world. 
Consider the dataset ``eFashion'' (see the model
figure~\ref{fig:data-schema-2}). The user may want to use the word ``article''
to refer to the dimension ``line'', or use the expression ``article ID'' instead
of the attribute ``SKU number''. 
To face this issue, the natural solution is to integrate a lexicon which
defines synonyms and related words. 
We have implemented algorithms that generate this lexicon automatically from the
database. Synonyms of common English words can be found automatically from
WordNet\footnote{See~\url{http://wordnet.princeton.edu/}.}. Experts can also be
involved to customize the lexicon with domain-specific content.

Mapping a word from user's queries to all possible interpretation of the lexicon
is an expensive computation.
For this task, we use {\it SAP BusinessObjects TextAnalysis} which is an
efficient implementation of dictionary look-up originally meant for processing
huge amount of text corpora. 
Here is an example of automatically generated configuration file for the
lexicon, with only one measure [Margin] associated with its plural form (for
clarity purposes):
\begin{lstlisting}
<?xml version="1.0" encoding="UTF-8" ?>
 <catalog>
  <entity_category name="measure">
   <entity_name canonical="Margin">
    <query_only type="id" 
    name="_QlwAsNPXEeCNAsc0_cQJgQ"/>
    <query_only type="computed_weight" 
    name="1.0"/>
     <variant type="alternative_singularForm" 
     name="Margin" />
     <query_only type="margin" name="1.0"/>
     <variant type="alternative_pluralForm" 
     name="Margins" />
     <query_only type="margins" name="1.0"/>
   </entity_name>
   </entity_category>
</catalog>
\end{lstlisting}



\subsection{Constraints on the ambiguity of term meanings}
Two terms are {\it ambiguous} if they can be mapped to different database
elements.
We address partly this issue, and consider that terms with a low edition
distance might be {\it similar}. There are two cases where this similarity helps
in resolving term ambiguity:
\begin{itemize}
  \item the query term is mispelled, and the distance edition is a good measure
  \item the query term is a part of a modified english noun phrase
\end{itemize}
Our implementation of dictionary look-up natively supports this kind of search. 
For example, the dictionary maps the query term ``revenue'' to the database
element [Revenue] from the dataset ``Club'' (see figure~\ref{fig:data-schema})
and to the database element [Sales revenue] from the dataset ``eFashion'' (see
figure~\ref{fig:data-schema-2}). 
At this step of question processing, we keep all possible interpretations in the
pare tree. 


\subsection{Constraints on ellipsis in NL}
In this work, we handle a subset of possible ellipsis.
For instance, the query ``Revenue'' is ambiguous not only for the
semantic mapping (see above), but also for the following reasons:
\begin{itemize}
  \item user might not be interested by the fully aggregated result.
  Furthermore, there might be a level of granularity in the hierarchy that might
  be of more interest (see~\cite{Giacometti:2009:RMQ:1617540.1617589}). For
  instance, the Geographic dimension at the state granularity might be a good
  choice
  \item user might prefer a geo-chart instead of a bar chart when a geographic
  dimension is available. The decision of which chart type cannot always be
  made on the basis of the dataset (or data type).
\end{itemize}



\subsection{Contextual information}
We illustrate this with the query ``revenue of my stores''. In this example, the
word ``my'' triggers the processing of available contextual information about
the user. This information is then being added to the parse tree. 
In our proposal, contextual information is composed of:
\begin{itemize}
  \item profile information
  \item preferences
  \item information send by the device \& application where the system is
  running
\end{itemize}


Profile information is a list of property-value pairs, like age, name,
email address, etc.

Preferences are of two kinds, {\it explicit preferences} and {\it implicit
preferences}. The former ones corrsepond to user's ratings, and the latter
nones to preferences that can be inferred from user interaction. A more
detailed explanation can be found in~\cite{dasfaa12}.

The kind of device is used by the query execution component (see
section~\ref{sec:system-overview}).
Other corporate applications can contribute (see context-aware applications).
Examples: corporate social network (``my manager's store'')



\subsection{Personalization}
We offer personalized answers. The personalization is performed using
some quantitative preferences (top$k$).
The quantitative preferences are computed basing on confidence scores

The consideration of $u$ in the mapping has following reasons:
she is not allowed to access the entire data schema of the data
warehouse (security settings). 
Moreover, information can be derived from $u$'s context and used in the
translation process: her {\it profile}, her
{\it preferences} and the kind of device she is using (contextual settings).



We use the same formalism as that used by Golfarelli et
Rizzi~\cite{Golfarelli:1998:MFD:294260.294261}. In the following, we represent
measures with square brackets (``[Revenue]'') and dimensions with parentheses
(``(Year)'').

The problem (see section~\ref{sec:problem}) faces the same issues as other
interfaces in NL summarized below:
\begin{itemize}
  \item user's terminology is different from that of the data model
  \item terms used by users might be ambiguous (leading to different
  interpretations in terms of database elements)
  \item user's intention is not always explicit (importance of taking into
  account context for interpreting user's intent)
\end{itemize}
These issues are not the main purpose of this paper, and will only be
overviewed section~\ref{sec:system-overview}.





\section{Evaluation}
Evaluating retrieval systems is not an easy task, because each system meets a
specific goal, and therefore classic evaluation metrics (such as precision and
recall) might not be the best ones. Moreover, there is no dataset nor
goldstandard queries that can be used besides those from TREC evaluation, but
are not suitable for BI purposes.
We present an evaluation based on a popular public available dataset


\subsection{Evaluation corpus}
ManyEyes\footnote{See~\url{www-958.ibm.com/}.} is a collaborative platform
where users publish datasets and corresponding visualizations (i.e. charts).
One of the most popular datasets come from the american Census
Bureau\footnote{Data can be downloaded
from~\url{http://www.census.gov/main/www/access.html}.}.
Theses data are geographic, demographic and social census data.
We have integrated the Census dataset in our data warehouse and created
manually the data schemas corresponding to these dataset.
The evaluation corpus is composed of the titles of the 50 best ranked
visualizations (corresponding to the census dataset).
This ensures that queries are not formulated by agents who designed the data
schema of the warehouse.

We present some of the \emph{goldstandard} queries
Table~\ref{tab:goldstandard-queries}.
\begin{table}[h!]
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{Query}} & \multicolumn{1}{c}{\textbf{Data
model}}\\\hline\hline
State Population Change & Community\\\hline
Home Ownership by State & Community\\\hline
And the whitest name in America is\ldots & Community\\\hline
40+ Population Projections by Age & Community\\\hline
US Violent Crime & Crime\\\hline
\end{tabular}
\caption{Examples of queries which are part of the evaluation corpus}
\label{tab:goldstandard-queries}
\end{table}
We have measured performance in terms of execution time on the one
hand, and IR metrics on the other hand.

%\subsection{Gold standard queries}
%\label{sec:evaluation-goldstandard}
%In~\cite{blunschi2012}: definition of handwritten gold standard queries,
%computation of recall and precision; definition of the ``query complexity''
%(what operations are needed; what execution time does it require?). 




\subsection{Performance}
\label{sec:evaluation-performance}
We have measured the processing time of the overall answering system (see
Figure~\ref{fig:evaluation-processing-time}).
On this figure, we represent the processing time before rendering the charts
(plots ``*'') and after rendering the charts (plots ``x'').

\begin{figure}[!h]
\begin{tikzpicture}
    \begin{axis}[width=\linewidth,
        xlabel=input nodes count,
        ylabel={processing time (ms)},axis y
        line*=left,ymin=10000,ymax=15000,legend pos=north west]
        \addplot[smooth,mark=*,style=dotted] plot coordinates { 
        (11203,11203)
        (11655,11655) (12137,12137)
		(12615,12615)
		(13018,13018)
    };
    \legend{before chart generation}
    \end{axis}
  
  \begin{axis}[ylabel={$y_2$},axis y
  line=right,width=\linewidth,ymin=0,ymax=5000000,grid=both, legend pos=south east]
    \addplot[smooth,mark=x]
        plot coordinates {
   		(11203,2470619)
		(11655,2972001)
		(12137,3472285)
		(12615,3976904)
		(13018,4477776)
        };
        \legend{after chart generation}
  \end{axis}
  
    \end{tikzpicture}
\caption{Processing time before and
after the chart generation process as
a function of the schema input nodes
count}
\label{fig:evaluation-processing-time}
\end{figure}
On this figure, we see that as expected the processing time seems to be a
proportion of the size of the graph used as input of the pattern matching
algorithm. 
The part of the execution time dedicated to rendering the chart is
approximatively a third of the global execution time.
This is due to the fact that datasets  that are rendered as charts are too
valuminous.

\subsection{Results}
Recall is not a metrics of interest in our case, because the goldstandard
queries correspond to exactly one chart, i.e. one database query.
We thus consider a measure derived from precison called \emph{success at k}
that measures how far is the first relevant answers within the list of results. 
\begin{figure}[h!]
\begin{tikzpicture}
\begin{axis}[width=\linewidth,xlabel=$k$,grid=both,legend
entries={success@k,corrected success@k,wa success@k,corrected wa
success@k},legend pos=north west,ymax=1.5]
\addplot[smooth,mark=*,color=black] plot coordinates { (1,0.636363636) (2,0.727272727)
	(3,0.757575758)
	(4,0.787878788)
	(5,0.787878788)
	(6,0.787878788)
	(7,0.787878788)
	(8,0.848484848)
	(9,0.878787879)
	(10,0.909090909)
};
\addplot[smooth,mark=x,color=black] plot coordinates {
	(1,0.787878788)
	(2,0.878787879)
	(3,0.878787879)
	(4,0.909090909)
	(5,0.939393939)
	(6,0.939393939)
	(7,0.939393939)
	(8,0.939393939)
	(9,0.939393939)
	(10,0.939393939)
};
\addplot[smooth,mark=*,color=black,style=dotted] plot coordinates {
	(1,0.151515152)
	(2,0.151515152)
	(3,0.181818182)
	(4,0.242424242)
	(5,0.303030303)
	(6,0.303030303)
	(7,0.303030303)
	(8,0.303030303)
	(9,0.303030303)
	(10,0.303030303)
};
\addplot[smooth,mark=x,color=black,style=dotted] plot coordinates {
	(1,0.212121212)
	(2,0.212121212)
	(3,0.242424242)
	(4,0.303030303)
	(5,0.363636364)
	(6,0.393939394)
	(7,0.393939394)
	(8,0.393939394)
	(9,0.393939394)
	(10,0.393939394)
};
\end{axis}
\end{tikzpicture}
\caption{Success of answering goldstandard queries compared to
Wolfram|Alpha}
\label{fig:success-k-1}
\end{figure}
Figure~\ref{fig:success-k-1} compares the success for $k$ varying from zero to
ten for the answering system and Wolfram|Alpha.
\emph{Corrected} success stands for results where the query has been modified in
such a way that the system can better respond.
For instance, we have observed that Wolfram|Alpha provides better results if
some queries are prefixed or suffixed with ``US'' or ``Census'' to explicit a
restriction to a subset of available data (``US'') or to the dataset itself
(``Census'').
Given the goldstandard queries, our system answers better than Wolfram|Alpha.
However, our observation is that one of the reasons why we perform better is
that Wolfram|Alpha does not include the whole Census dataset.
Therefore, we have computed a secondary success measure, which takes into
account if the dataset is known or not (i.e. if the system is able to answer
the question). For Wolfram|Alpha, this has been determined by reformulating the
questions several times until the expected result comes up. If not, we
considered the dataset unknown by the system.
\begin{figure}[h!]
\begin{tikzpicture}
\begin{axis}[width=\linewidth,xlabel=$k$,grid=both,legend
entries={success@k,corrected success@k,wa success@k,corrected wa
success@k},legend pos=south east]
\addplot[smooth,mark=*,color=black] plot coordinates {
	(1,0.677419355)
	(2,0.774193548)
	(3,0.806451613)
	(4,0.838709677)
	(5,0.838709677)
	(6,0.838709677)
	(7,0.838709677)
	(8,0.903225806)
	(9,0.935483871)
	(10,0.967741935)
};
\addplot[smooth,mark=x,color=black] plot coordinates {
	(1,0.838709677)
	(2,0.935483871)
	(3,0.935483871)
	(4,0.967741935)
	(5,1)
	(6,1)
	(7,1)
	(8,1)
	(9,1)
	(10,1)
};
\addplot[smooth,mark=*,color=black,style=dotted] plot coordinates {
	(1,0.384615385)
	(2,0.384615385)
	(3,0.461538462)
	(4,0.615384615)
	(5,0.769230769)
	(6,0.769230769)
	(7,0.769230769)
	(8,0.769230769)
	(9,0.769230769)
	(10,0.769230769)
};
\addplot[smooth,mark=x,color=black,style=dotted] plot coordinates {
	(1,0.538461538)
	(2,0.538461538)
	(3,0.615384615)
	(4,0.769230769)
	(5,0.923076923)
	(6,1)
	(7,1)
	(8,1)
	(9,1)
	(10,1)
};
\end{axis}
\end{tikzpicture}
\caption{Variant of success of answering goldstandard queries compared to
Wolfram|Alpha}
\label{fig:success-k-2}
\end{figure}
The results have been plotted Figure~\ref{fig:success-k-2}.
On this figure, we see that Wolfram|Alpha performs better (under the assumption
presented above) from $k=4$.
This can be explained by the fact that Wolfram|Alpha has a better average
precision than \textsc{Quasl} (see
table~\ref{tab:evaluation-average-precision}).
\begin{table}[h]
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{\textsc{Quasl}}} &
\multicolumn{1}{c}{\textbf{Wolfram|Alpha}}\\\hline\hline
$0.26$ & $0.43$\\\hline
\end{tabular}
\caption{Average precisions of \textsc{Quasl} and Wolfram|Alpha}
\label{tab:evaluation-average-precision}
\end{table}
Table~\ref{tab:evaluation-average-precision},
values have been computed based on
the gold-standard queries.
The definition of average precision can
be found in~\cite{Moffat:2008:RPM:1416950.1416952}.


% It would be great to lead some user satisfaction evaluation.
% We present figure~\ref{fig:html5} a screenshot of the result page for the query
% ``Revenue per store in New York''.
% \subsection{User experience}
% \begin{figure}[!h]
% \centering
% \includegraphics[width=5cm]{img/screenshot-1}
% \label{fig:html5}
% \caption{HTML5 result page for the query ``Revenue per store in New York''}
% \end{figure}
% Figure~\ref{fig:iphone} is a screenshot of the iPhone app that we have
% implemented, and that implements a siri-like support for three languages
% (English, German and French).
%  \begin{figure}[!h]
% \centering
% \includegraphics[width=5cm]{img/screenshot-2}
% \label{fig:iphone}
% \caption{iPhone app with Siri-like support}
% \end{figure}









\section{State of the art}



Keyword search over databases can be seen as a graph matching problem, where the
database is represented as a graph. The problem is thus to find the minimum
Steiner tree containing database nodes corresponding to keywords
in user's
queries~\cite{2002:DSK:876875.879013,He:2007:BRK:1247480.1247516,
Hristidis:2003:EIK:1315451.1315524,Hristidis:2002:DKS:1287369.1287427,
Liu:2006:EKS:1142473.1142536}.
Recent
approaches~\cite{Kandogan:2006:ASS:1142473.1142591,
Tata:2008:SDM:1376616.1376705, Tran:2007:OIK:1785162.1785201,
Zhou:2007:SAK:1785162.1785213} translate keyword queries into a set of
structured queries to be ranked.
This kind of computation is very expensive; besides, keyword-based approaches
suffer by the fact that most of the meaning of a sentence is not conveyed by
its \emph{vocabulary} (i.e. words or \emph{key}words in the sentence) but by
the \emph{syntax} of the sentence (i.e. its structure), as pointed out
by Orsi \& al.~\cite{Orsi:2011:KCS:1951365.1951390}.

\textsc{Soda}~\cite{blunschi2012} is a keyword-based search system over data
warehouses.
It uses some kinds of patterns to map keywords and some operators in the user's
query to rules to generate SQL fragments. It integrates various knowledge
sources like a domain ontology etc. However, this system does not focus on
``using natural language processing to understand the
input''~\cite{blunschi2012}, nor any contextual information that should be taken
into account in modern system~\cite{Hearst:2011:NSU:2018396.2018414}.

\textsc{Safe}~\cite{Orsi:2011:KCS:1951365.1951390} is an answering system
dedicated to mobile devices in the medical domain. It uses patterns in order to
relate keyword queries and NL queries. It uses semantic technologies to
overcome the problem of matching users' terminology terms and database terms. 
They propose a variant of auto-completion, they suggest terms and
relationships from the ontology (and not only from the database).
While authors insist on the necessity of getting answers rapidly due to the
domain (medical domain), more than 30\% of questions presented in their
evaluation are answered in more than 50s.
\textsc{Safe} is a keyword-based interface
to data warehouses that interprets keyword queries as natural language questions
by the mean of templates filled with parameters. The parameters of these
templates can be modified by users in an interactive mode. 




\section{Conclusions}
This is the conclusion of our paper.\\
We have achieve\ldots\\
The next steps are\ldots


%\vfill


\section*{Acknowledgment}
Our work has been supported by SAP Research.

\bibliographystyle{IEEEtran}

\bibliography{bibliography}

\end{document}
