\chapter{Linguistic patterns}
\label{sec:chapter-patterns}
\startcontents[chapters]
\Mprintcontents


Linguistic patterns are widely used in \acf{IE} and \acf{IR}. In \ac{IE},
patterns are applied to unstructured documents (text corpora, Web documents) to
extract information based on the structure (syntactic information) and on the
terms (lexical information) identified in the text.
More generally, the idea is that the same piece of information can be expressed
in various ways, and that a pattern (i.e. a set of constraints) captures the
common characteristics of those different expressions of the same idea.
\begin{table}
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{Question}} &
\multicolumn{1}{c}{\textbf{Entities}}\\\hline\hline 
Sales revenue per year & (Sales revenue) [Year]\\\hline 
Revenue over years & (Sales revenue) [Year]\\\hline 
Sales results per FY & (Sales revenue) [Year]\\\hline
\end{tabular}
\caption{The same idea can be expressed in different ways. The second column
shows constraints that are satisfied by all questions in the
first column}
\label{tab:pattern-characteristics}
\end{table}
This is depicted in table~\vref{tab:pattern-characteristics}. The strong
assumption there is that, given a sentence, it is possible to find a set of
constraints that define the meaning of the sentence, or the information
that a user is looking for.
This does not hold in general, because an NL sentence does not have a single
interpretation, but the constraints in the patterns help limit the ambiguity
of the textual input.


\section{Linguistic patterns in \ac{IR}}
A pattern (or part of it) is usually defined with respect to
the notion of a syntactic axis. Indeed, patterns in this field
are referred to as \emph{lexico-syntactic patterns} (terms and syntax) or
\emph{morpho-syntactic patterns} (terms and their categories plus syntax).
Although such patterns are extensively used, their definition is rarely
discussed.
They have been defined in the linguistic theory~\cite{LAI} as ``a schematic
representation like a mathematical formula using terms or symbols to indicate
categories that can be filled by specific morphemes''.
Patterns used by Sneiders~\cite{AQAT} are regular strings of characters in
which sets of successive tokens are replaced by entity slots (to be filled by
corresponding terms in the actual textual document).
A pattern in this sense is thus an extension of regular expressions
(where the \textit{patterns} of the regular expression become \textit{slots}
that may be of various types in linguistic patterns).
An innovation in~\cite{AEA} is the definition of a pattern composed of two
subpatterns: a \emph{required pattern} (a regular pattern) and a
\emph{forbidden pattern}, i.e. a pattern that must not match the
message.
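To make this concrete, here is a minimal sketch (hypothetical, not the cited authors' implementation) of a pattern combining a required expression with named entity slots and a forbidden expression that must not match the question:

```python
import re

# Hypothetical sketch: a pattern is a required regular expression with
# named entity slots, optionally combined with a forbidden expression
# that must NOT match the input.
def make_pattern(required, forbidden=None):
    req = re.compile(required, re.IGNORECASE)
    forb = re.compile(forbidden, re.IGNORECASE) if forbidden else None

    def match(question):
        """Return the filled slots, or None if the pattern does not apply."""
        if forb is not None and forb.search(question):
            return None
        m = req.fullmatch(question)
        return m.groupdict() if m else None

    return match

# A slot `year' filled from questions like `Sales revenue per 2011':
p = make_pattern(r"sales revenue per (?P<year>\S+)", forbidden=r"\bnot\b")
```

Matching a question thus both tests the constraints and fills the slots in a single step.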
Finkelstein-Landau and Morin~\cite{ESR} formally define morpho-syntactic
patterns related to their \ac{IE} task: they aim at
extracting semantic relationships from textual documents. 
A pattern $A$ can be decomposed as:
\begin{equation}
\label{eq:pattern-morin-definition}
A=A_1\ldots A_i\ldots A_j\ldots A_n
\end{equation}
In this formula, $A_k$, $k\in[1,n]$, denotes an \textit{item} of the pattern $A$
which is a part of a text (without any \textit{a priori} constraint on the
sentence boundaries).
An \textit{item} is defined as an ordered set of \textit{tokens} composing
words\footnote{Delimiting tokens is not an easy task in any language, since a
word might be composed of several tokens, and in some languages word
boundaries are not obvious.}.
In this approach the syntactic isomorphy hypothesis is adopted.
Let $B=B_1\ldots B_i\ldots B_j\ldots B_n$ be a pattern.
%\begin{equation}
%\label{morin-second-equation}
%B=B_1\ldots B_i^\prime\ldots B_j^\prime\ldots B_n^\prime
%\end{equation}
This hypothesis states the following
assertion~\cite{Kuchmann-Beauger:2011:SDQ:2176246.2176251}:
\begin{equation}
\exists (i,j)
\left.\begin{array}{r}
A\sim B\\
win(A_1,\ldots,A_{i-1})=win(B_1,\ldots,B_{j-1})\\
win(A_{i+1},\ldots,A_{n})=win(B_{j+1},\ldots,B_{n})
\end{array}\right\}
\Longrightarrow A_i\sim B_j
\end{equation}
which means that if two patterns $A$ and $B$ are \textit{equivalent} (they
denote the same string of characters), and if it is possible to split both
patterns into identical \textit{windows} composed of the same tokens (when
applied to strings of characters), then the remaining items of both patterns
(i.e. $A_i$ and $B_j$) share the same syntactic function.
%Some standards have been proposed like Ontology Design
%Patterns\footnote{See~\url{http://ontologydesignpatterns.org/wiki/Main_Page}
% for more details.} in the Semantic Web community

The introduction of \textit{windows} in the definition of patterns requires
their components to be ordered along the syntactic axis (for instance left to
right in English).
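The hypothesis can be illustrated with a small sketch (0-based indices, and single tokens standing in for pattern items):

```python
def isomorphic_items(a, b):
    """Find pairs (i, j) such that a[i] and b[j] are surrounded by
    identical left and right windows; under the syntactic isomorphy
    hypothesis, such items share the same syntactic function."""
    pairs = []
    for i in range(len(a)):
        for j in range(len(b)):
            if a[:i] == b[:j] and a[i + 1:] == b[j + 1:]:
                pairs.append((i, j))
    return pairs

# `revenue' and `income' fill the same slot between identical windows:
A = ["the", "total", "revenue", "per", "year"]
B = ["the", "total", "income", "per", "year"]
```

Here the only pair found is the one aligning `revenue' with `income', which is exactly the conclusion the hypothesis licenses.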
The kind of patterns presented here has become very popular, in particular
among the \ac{SW} community, for instance through Ontology Design
Patterns\footnote{See~\url{http://ontologydesignpatterns.org/wiki/Main_Page}.},
which is a platform meant to define classes of patterns widely used in
\ac{IE}.

In the following, we adopt another formalism, in which syntactic isomorphy
is not preserved, i.e. the expected characteristics (or
\emph{features}) of the patterns do not have to be ordered along the syntactic
axis in the sentence (or the question) matched by the pattern.




\section{Patterns for structured information}
\label{sec:pattern-structured}
Patterns for structured content consist of structures that define a mapping
between text and a data structure (like a database).
In our proposal, we relax the syntactic constraint\footnote{The usefulness of
this approach is also motivated by our iPhone\texttrademark{} application, where
the generated question is more a keyword query than a well-formed NL
question.} and allow any kind of feature to appear in the pattern.
This mapping is performed in several tasks: 
\begin{itemize}
  \item declaring the constraints that
must be satisfied for the pattern to match the question
\item defining which parts
of the question must be exported according to the constraints defined in the
previous step
\item defining what to recompute from the exported information in
the question
\item defining the actual mapping from the exported information to
the data structure
\end{itemize}
We will detail these tasks in the following.

\subsection{Running example}
\label{sec:pattern-running-example}
In the following, we will consider as an example the following user's query:
\begin{equation}
\textnormal{``Top 5 middle-aged customers in my city''}
\label{eq:pattern-running-example}
\end{equation}
which is also depicted in figure~\vref{fig:pattern-running-example}.

\newsavebox{\firstlisting}
% removed from listing
% City."CITY" AS city,
% Customer."AGE" AS age,
\begin{lrbox}{\firstlisting}

\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
SELECT
 sum(Invoice_Line."DAYS"
  * Invoice_Line."NB_GUESTS"
  * Service."PRICE") AS revenue,
 Customer."LAST_NAME" AS customer
FROM City
INNER JOIN Customer
 ON (City."CITY_ID"=Customer."CITY_ID")
INNER JOIN Sales
 ON (Sales."CUST_ID"=Customer."CUST_ID")
INNER JOIN Invoice_Line
 ON (Invoice_Line."INV_ID"=Sales."INV_ID")
INNER JOIN Service
 ON (Invoice_Line."SERVICE_ID"=Service."SERVICE_ID")
WHERE
 city = 'Palo Alto' AND
 age >= 20 AND
 age <= 30
GROUP BY
 customer
ORDER BY revenue
LIMIT 5
\end{lstlisting}
\end{lrbox}


\begin{figure}
\centering
\subfloat[Running example of question with annotations. The \emph{custom rule}
corresponds to the feature $f_5$.]{
\includegraphics[
	scale=1,
	trim=100pt 500pt 200pt 130pt]{img/running-example}
	\label{fig:pattern-running-example-1}
	}\\
\subfloat[Example SQL query that was generated from the above user's question]{
{\usebox{\firstlisting}}
\label{fig:pattern-running-example-2}
        }
\caption[Translating a question in a structured query]{Translating a user's
question into a structured query.
NL patterns, constraints given by the data and metadata of the data warehouse (see
figure~\vref{fig:introduction-bi-tool}) have been applied to infer query
semantics.
This was mapped to a logical, multi-dimensional query, which in turn was
translated to SQL. Note that the `revenue' represents a proposed measure,
depicted as `?1' in figure~\ref{fig:pattern-running-example-1}.
The computation of the measure `revenue' and the join paths are configured in
the metadata of the warehouse.}
\label{fig:pattern-running-example}
\end{figure}



% \begin{equation}
% \textnormal{``Sales revenue per quarter in New York''}
% \label{eq:pattern-running-example}
% \end{equation}

% \subsection{Definitions}
% We provide in this section some definitions and notations.
\subsubsection{Question}
Let $Q=\left\{q_1,\ldots,q_n\right\}$ be a user's question, where the $q_i$,
$i\in[1,n]$, are the tokens of the question.
For example, the question~(\ref{eq:pattern-running-example}) can be
represented as follows:
\begin{equation}
Q=\left\{\textnormal{`Top'},\textnormal{`5'},\textnormal{`middle-aged'},
\textnormal{`customers'},\textnormal{`in'},\textnormal{`my'},\textnormal{`city'}\right\}
\end{equation}
Note that the word `middle-aged' can also be decomposed into two words,
yielding an alternative tokenization $Q^\prime$ (i.e.
$\{\textnormal{`middle'},\textnormal{`age'}\}\subset Q^\prime$). In the
following, we use the term \emph{query} to denote a database query and the term
\emph{question} to denote a user's query.

\subsection{Users' graphs and annotations}
The different information held by questions' annotations as well as additional
information about users (e.g. contextual information) are stored in a
\emph{graph repository}. The framework incorporates a situation manager
described in~\cite{thollotThesis}. This framework is used to capture and
monitore different kinds of events of context-aware corporate applications. A
use-case of the framework is the modeling of the situation of users. This
modeling was initially used for offering personalized recommendations. An other
use-case that we propose is to provide personalized search results. The
situation framework brings a so-called graph repository. Every user can access
a set of personalized graphs stored in this repository. The graphs are created
by \emph{providers} that decide who is allowed to access what information.
We have implemented one provider for each origin of annotations. For instance,
all annotations corresponding do data warehouse entities come from a graph (in
the graph repository) that has been generated by a specific provider, the
one dedicated to data models. At run-time, all graphs belonging to the
user (i.e. containing information about the user only, and information that
user is allowed to see) are used together with the parse graph (see
above), and used as a model (in the sense of \ac{RDF}) when executing patterns.
In nutshell, the graph of the parse graph contains annotation about the user's
query, and the other graphs (i.e. the graphs coming from the user's graph
repository) define all entities that have been referenced in the parse graph.

So far, the graph repository is composed of the following graphs:
\begin{itemize}
  \item the graph composed of textual annotations from the current question
  (i.e. the parse graph, see section~\vref{sec:pattern-parse-graph})
  \item the graph composed of annotations about the physical device being used
  and other contextual information like the geo-location of the user
\end{itemize}
Besides, some user-independent graphs are also part of the repository:
\begin{itemize}
  \item the graph composed of domain-specific knowledge
  \item the graph composed of language-dependent linguistic knowledge
\end{itemize}
The latter graph has been detailed in
section~\vref{sec:personalized-extensibility-linguistic-resources}.

An \emph{annotation} is defined as a rooted graph whose root is the set of
tokens (possibly empty) from the user's question involved in the annotation.
It refers to a node of type \verb?Annotation? belonging to the set of
personalized graphs.
We denote by $a\in A$ such an annotation, where $A$ is the space of annotations.
Note that an annotation is not always bound to actual tokens of $Q$ (for
instance, the annotation related to the physical device is not bound to any
token from $Q$).


\subsection{Feature}
\label{sec:patterns-feature}
\begin{figure}[h!]
\centering
\includegraphics[scale=1,trim=50pt 170pt 320pt 170pt]{img/parse-graph}
\caption[Example of a parse graph]{Example of the parse graph generated for the
question ``Top 5 middle-aged customers in my city''. The upper-box (`User
Profile') corresponds to the feature $f_8$; the box `Schema' to the feature
$f_6$. The lower-box corresponds to a class of features to which feature $f_5$
belongs.}
\label{fig:patterns-query-graph}
\end{figure}


A feature $f_i:Q\rightarrow A$, $f_i\in\mathcal{F}$, is a mapping from the
query to the set of annotations for the feature $i$.
The set of all annotations for the feature $i$ given a query $Q$ is
$f_i(Q)$. When an annotation $a\in f_i(Q)$ is not bound to the query
$Q$, $a$ is independent from $Q$ (but is still user-dependent).
The number of annotations in the query $Q$ for the feature $i$ is
given by $\left|f_i(Q)\right|$.
For example, let the first feature be $f_1:t\mapsto t$, i.e. the identity
mapping, and the second feature be $f_2:t\mapsto t^\prime$, where $t^\prime$
is the base form of $t$, i.e. the mapping from terms to their base forms.
We have thus:
\begin{equation}
f_1(Q)=id(Q)=Q
\end{equation}
and:
\begin{equation}
f_2(Q)=\left\{t^\prime \mid t\in Q\right\}
\end{equation}
A feature can also map to more complex structures (e.g. a tree structure).
Let $f_3$ be the \emph{range} feature and $f_4$ be the \emph{top-$k$} feature.
An illustration of the latter feature was given in
figure~\vref{fig:pattern-running-example}.
The \emph{range} feature (i.e. feature $f_3$) is a domain-specific rule (which
belongs to the family of \emph{custom rules}, see
figure~\vref{fig:pattern-running-example-1}) that exports the following
information:
\begin{itemize}
  \item {[dimension]: the related dimension} 
  \item {[begin]: the beginning value of the exported segment of
  text}
  \item {[end]: the ending value of the exported segment of text}
\end{itemize}
The \emph{top-$k$} feature (i.e. feature $f_4$) exports:
\begin{itemize}
  \item {[order]: what order will be used (i.e. ascending or
  descending)}
  \item {[nb]: the maximum number of results to be retrieved (i.e.
  the query modifier operator \verb?LIMIT? in MDX)}
  \item {[dimension]: the related dimension}
  \item {[measure]: the related measure}
\end{itemize}
Note that in the case of the \emph{top-$k$} feature, the measure may not be
matched (\verb|?1| in figure~\vref{fig:pattern-running-example}), in which
case a \emph{valid} measure will be selected as explained later on.
The exported items will then be rewritten or recomputed.
For instance, in the case of feature $f_4$, `five' from `Top five customers'
can be rewritten as `5' (so that it can be processed by the query generator),
and `customers' should be normalized to `customer'.
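The features $f_1$ and $f_2$ and the rewriting step can be sketched as follows (the lemma table is purely illustrative and stands in for a real morphological analyzer):

```python
# Illustrative sketch of features f1 (identity) and f2 (base forms).
# The tiny lemma table is hypothetical.
LEMMAS = {"customers": "customer", "five": "5",
          "middle-aged": ["middle", "age"]}

def f1(question):
    """Identity feature: every token annotates itself."""
    return list(question)

def f2(question):
    """Base-form feature: a compound like `middle-aged' may yield
    several base-form annotations."""
    out = []
    for tok in question:
        base = LEMMAS.get(tok.lower(), tok.lower())
        out.extend(base if isinstance(base, list) else [base])
    return out

Q = ["Top", "5", "middle-aged", "customers", "in", "my", "city"]
# |f1(Q)| = 7, while f2 decomposes `middle-aged', so |f2(Q)| = 8
```

Note that the two features produce different numbers of annotations for the same question, matching the counts reported for the running example.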


Let $\mathcal{F}$ be the set of features considered in a given application.
The above example (\ref{eq:pattern-running-example}) involves
$|\mathcal{F}|=8$ features, whose annotations are summarized in
table~\vref{tab:pattern-feature-annotation}.
\begin{table}[!h]
\centering
\begin{tabular}{lll}\hline
\multicolumn{1}{c}{\textbf{Feature}} & \multicolumn{1}{c}{\textbf{Example}} &
\multicolumn{1}{c}{\textbf{Ann. count}}\\\hline\hline 
$f_1$ ($id$) & `Top'; `5'; `middle-aged';\ldots
& $|f_1(Q)|=|Q|=7$\\\hline
$f_2$ (base forms) & `top'; `five'; `middle'; `age';\ldots &
$|f_2(Q)|=8$\\\hline
$f_3$ (range rule) & $\emptyset$ & $|f_3(Q)|=0$\\\hline
$f_4$ (top-$k$) & ``[top] [5] [customers] [?1]'' & $|f_4(Q)|=1$\\\hline
$f_5$ (custom rule) & ``middle-aged'' & $|f_5(Q)|=1$\\\hline
\multirow{2}{*}{$f_6$ (data model entities)} & (Sales revenue) &
\multirow{2}{*}{$|f_6(Q)|=x$}\\
& [Customer]; [City]\ldots & \\\hline
$f_7$ (geographic entities) & $\emptyset$ & $|f_7(Q)|=0$\\\hline
\multirow{2}{*}{$f_8$ (user profile)} & locale=``US''
 & \multirow{2}{*}{$|f_8(Q)|=y$}\\
& location=``Palo Alto'' & \\\hline
\end{tabular}
\caption{Features and annotations for the example
(\vref{eq:pattern-running-example})}
\label{tab:pattern-feature-annotation}
\end{table}

Note that not all annotations refer to actual tokens in the user's
question: for instance, $f_8$ in table~\vref{tab:pattern-feature-annotation} is
independent from $Q$.
When an annotation actually refers to tokens from $Q$, it keeps information
about which tokens are involved (via properties like \emph{offset} and
\emph{length}).
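For instance, whether two annotations overlap can be decided from these two properties alone, as in this minimal sketch (offsets assumed to be in characters):

```python
def overlap(a, b):
    """True if two annotations cover intersecting spans of the
    question text, given their offset and length properties."""
    return (a["offset"] < b["offset"] + b["length"]
            and b["offset"] < a["offset"] + a["length"])

# `middle-aged' (offset 6, length 11) overlaps its sub-token `middle':
overlap({"offset": 6, "length": 11}, {"offset": 6, "length": 6})
```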


% \subsection{Implementation}
%We have chosen this language because it is dedicated for expressing
% constraints (the different annotations expressed in a pattern are constraints
% to be satisfied in a parse tree).





\subsection{Parse graph}
\label{sec:pattern-parse-graph}
The \emph{parse graph} of the query $Q$ can be written: 
\begin{equation}
p(Q)=\left\{f_1(Q),\ldots,f_k(Q)\right\}
\end{equation}
where $f_i$ is the feature $i$.
The parse graph of $Q$ is the set of annotations for all features.
The total number of annotations in the parse graph of $Q$ is given by:
\begin{equation}
\left|p(Q)\right|=\sum_{i=1}^k\left|f_i(Q)\right|
\end{equation}
The parse graph is implemented as an \ac{RDF} graph.
Each annotation $a\in f_i(Q)$ for the query $Q$ is a node of type
\verb?Annotation? in the RDF graph.
The annotations are defined by a set of attributes and predicates (having the
annotation as subject):
\begin{itemize}
  \item \url{urn:grepo/query-tree#hasAnnotationType} defines the \emph{type} of
  the annotation. Those types are sub-types of the \verb?Annotation? type. They
  are summarized in table~\vref{tab:annotation-types}.
  \item \url{urn:grepo/query-tree#referencesResource} defines a reference to an
  external resource. For instance, database entities are defined in the data
  model of the data warehouse.
  \item \url{urn:grepo/query-tree#confidence} defines the \emph{confidence} of
  the annotation, i.e. a score that measures how relevant the entity is with
  respect to the user's query. The computation of this score is performed by
  the named entity recognition module.
  \item \url{http://www.w3.org/2000/01/rdf-schema#label} defines the
  \emph{label} of the annotation, i.e. the \emph{normalized} text that carries
  this annotation. The normalization process is further described in
  section~\vref{sec:chapter-personalized}. 
  \item \url{urn:grepo/query-tree#originUri} defines the vendor-specific ID of
  database entities. 
\end{itemize}
\begin{table}[!h]
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{Type}} &
\multicolumn{1}{c}{\textbf{URI}}\\\hline\hline Dimension &
\verb?DimensionAnnotationType?\\\hline Measure & \verb?MeasureAnnotationType?\\\hline
Member & \verb?DimensionValueAnnotationType?\\\hline
NLP feature &
\verb?NlpFeatureAnnotationType?\\\hline
\end{tabular}
\caption{Annotation types in the parse graph}
\label{tab:annotation-types}
\end{table}
Other predicates and attributes are defined in the case of NLP features,
for instance to describe the kind of NLP feature, which items can be
exported, etc.
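As a minimal stand-in for this RDF structure, the following sketch uses a plain set of (subject, predicate, object) triples instead of an actual RDF store; the predicate names mirror those listed above, and the values are illustrative:

```python
QT = "urn:grepo/query-tree#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"

parse_graph = set()
ann = "_:a1"                     # blank node for one annotation
parse_graph.add((ann, "rdf:type", QT + "Annotation"))
parse_graph.add((ann, QT + "hasAnnotationType", QT + "MeasureAnnotationType"))
parse_graph.add((ann, QT + "confidence", 0.9))
parse_graph.add((ann, RDFS + "label", "sales revenue"))

def annotations(graph):
    """All nodes typed as Annotation; |p(Q)| is then just their count."""
    return [s for (s, p, o) in graph
            if p == "rdf:type" and o == QT + "Annotation"]
```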

\subsection{Pattern}

We present an example pattern in the appendix,
section~\vref{sec:appendix-pattern}.
%
\begin{figure}[h!]
\includegraphics[width=\textwidth,trim=50pt 220pt 60pt
200pt]{img/pattern}
\caption{Example of parse graph constraints and mapping rules to generate a
structured query}
\label{fig:pattern}
\end{figure}
%
%
A \emph{pattern} is the mechanism used to translate questions into
structured queries.
It is defined as a set of possibly optional \emph{constraints} to be satisfied
by the graphs, plus rules that define how these constraints are translated
into structured queries composed of slots to be filled with actual data.
Figure~\vref{fig:pattern} is an illustration of a pattern in two parts.
The left-hand side (entitled `Where') represents the set of constraints that
the different graphs must satisfy (i.e. the expected form of the parse graph). 
The pattern triggers only if the constraints defined there are satisfied. 
Some of the constraints defined here can be optional (as will be explained in
section~\vref{sec:pattern-relational-constraints}).
The right-hand side (entitled `Construct') represents the template of the
structured query\footnote{Note that the syntax of the generated structured query will be
introduced in chapter~\vref{sec:chapter-modeling}.} that will be generated.
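The two-part structure can be sketched as follows (a hypothetical sketch, with a Python format string standing in for the `Construct' template and its slots):

```python
def apply_pattern(bindings, template):
    """Fill the slots of a structured-query template. 'bindings' is the
    result of the constraint-matching step ('Where'), or None if the
    constraints were not satisfied and the pattern did not trigger."""
    if bindings is None:
        return None
    return template.format(**bindings)

apply_pattern({"measure": "Sales revenue", "dim": "Year"},
              "SELECT {measure} GROUP BY {dim}")
```

The key point is the separation of concerns: constraint matching produces bindings, and slot filling consumes them.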

Once the parse graph is created for a particular user's question,
the system has to enforce domain- and application-specific constraints.
%that we discuss in this section.
%, before an actual structured 
%query can be generated (see next section).  
%
%For our BI use-case, some constraints have been mentioned in
%section~\vref{sec:problem}.
In addition to the constraints shown in
figure~\vref{fig:pattern-running-example} for \ac{BI} use-cases, other
constraints apply to how entities occur together in the query,
to the data types of the recognized entities, or
to which other entities they relate.
Another constraint takes the form of entity recommendation, used
when the user did not include all the information necessary to compute a
valid query, or to add an additional filter for personalization.
%
% In reality there are many and quiet complex constraints that need to be 
% ensured to compute valid structured queries. The problem that we need to 
% solve is how to encode such constraints on metadata-level without encoding 
% them into the actual program logic, which is almost impossible to manage 
% (as previous implementations of the use case
% demonstrated). In addition it make sense to separate the constraint solving
% problem from the actual mapping and query generation process to keep it managable
% and ease the configuration.
% 
% Our proposal is to express structural constraints in a SparQL {\scriptsize
% \verb|WHERE|} clause  and to select and modify properties defined in the
% \emph{parse graph} to be used in the technical query.
% By defining variables in this {\scriptsize \verb|WHERE|} clause, we can
% separate the selection of items that we like to reuse in the mapping step.
% These variables  are then mapped into the query model (representing the
% actual query language $L$) using  a {\scriptsize \verb|CONSTRUCT|}
 %
% clause
%\footnote{\url{http://www.w3.org/TR/2008/REC-rdf-sparql-query-20080115/#constructGraph}}
% which we will detail in the next section. The main advantage of these
% approach is that  it allows the engineer describe certain situations that may
% evolve from the user's question  instead of programming the generation of
% alternative solutions.
 %
We discuss in the following two types of constraints:
\begin{enumerate}
  \item \emph{relational constraints}
  (see section~\vref{sec:pattern-relational-constraints}),
  which describe situations like the fact that
  a dimension and a measure
  should belong to the same data warehouse
  \item \emph{property constraints}, which
  filter nodes based on property
  values (see section~\vref{sec:pattern-property-constraints}),
  like the fact that two annotations overlap or are close to each other.
\end{enumerate}
Then, we discuss a convenient feature of \ac{SPARQL} for injecting additional
variables (see section~\vref{additional-variables}), e.g. to generate
additional values to be used in the structured query when a certain graph
pattern occurs.

\subsubsection{Relational Constraints}
\label{sec:pattern-relational-constraints}

\textsc{SparQL} queries are essentially graph patterns.
Nodes and edges are addressed by URIs\footnote{Uniform Resource Identifier.}
or are assigned to variables, which are bound to
URIs or literals by the \emph{\textsc{SparQL} query processor}.
This mechanism of expressing graph constraints and of binding variables
greatly eases the configuration of our approach.
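As a minimal sketch of this mechanism, the following toy matcher treats strings starting with `?' as variables and enumerates their bindings over a set of triples; the data and relationship names are illustrative, and the pattern is a simplified version of the same-warehouse constraint discussed in this chapter:

```python
# Toy SPARQL-style graph pattern matching over (s, p, o) triples.
def match(triples, pattern, binding=None):
    """Yield variable bindings under which every triple pattern occurs
    in the data (the essence of a SPARQL WHERE clause)."""
    binding = binding or {}
    if not pattern:
        yield dict(binding)
        return
    head, rest = pattern[0], pattern[1:]
    for triple in triples:
        b = dict(binding)
        ok = True
        for term, value in zip(head, triple):
            if term.startswith("?"):
                if b.setdefault(term, value) != value:
                    ok = False       # variable already bound differently
                    break
            elif term != value:
                ok = False           # constant does not match
                break
        if ok:
            yield from match(triples, rest, b)

data = {
    ("a2", "matches", "m1"), ("a3", "matches", "d1"),
    ("m1", "measureOf", "w"), ("d1", "dimOf", "w"),
}
# Require the matched measure and dimension to share the warehouse `?w':
pattern = [("?a2", "matches", "?m1"), ("?m1", "measureOf", "?w"),
           ("?a3", "matches", "?d1"), ("?d1", "dimOf", "?w")]
```

A real \textsc{SparQL} processor does considerably more (e.g. query optimization, \verb|OPTIONAL| blocks), but the variable-binding principle is the same.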
%
Figure~\vref{fig:pattern} shows an example of the complex constraints for
selection and mapping rules that are used in our application setting.
The left-hand side shows an
excerpt of the constraints and variables used in our BI use-case.
In contrast to figure~\vref{fig:patterns-query-graph}, the markers attached
to a node represent assigned variables or URIs. URIs are
expressed in short form and do not carry a leading question mark.
Edges between nodes and literals refer to \verb|rdfs:label|
if they are not marked otherwise. Dashed lines indicate that a particular
part of the graph pattern is optional (implemented through an
\verb|OPTIONAL| statement
in \textsc{SparQL}).
%
\circled{Q} depicts the user's question.
Below it are the annotation nodes, with their assigned variable names (like
`?a1') to their left; together they form the \emph{parse graph}.
Other nodes reference metadata graphs (see
figure~\vref{fig:patterns-query-graph}).
Nodes like \lightrednode{M} in figure~\vref{fig:pattern}
represent resources, while nodes like \darkrednode{}
represent literals, which will be reused later for query composition.
As discussed in the next section, we map only literal variables to the final
query model, in order to separate the input and output models on a conceptual
level. Note also that we define many more variables in the real-world
use-case, e.g. to hand over data types and scores from the question to the
query model, which we leave out of the examples for the sake of brevity. Our
example exhibits constraints for the following situations:
 %
\paragraph{Natural Language Patterns (`?a1')} The annotation (`?a1')
addresses the natural language pattern for the top-\emph{k} feature (see
figure~\vref{fig:patterns-query-graph}).
The top-\emph{k} pattern exports the variables `number' and `order', which
are bound to `?nb' and `?ord'. This rule might be combined with a rule
triggered by phrases like `order by \ldots' that assigns a variable holding
the dimension or measure to which the ordering applies.
In general, different natural language patterns can easily be combined using
\emph{property constraints}, as explained in the next subsection.
Patterns for ranges, custom vocabulary, \emph{my}-questions etc. are treated
similarly. In particular, in situations related to ranges and the mapping of
other data types, we define additional variables for the data type of certain
objects (e.g. `Date' or `Numeric') in order to handle them separately in the
final query generation. These attributes eventually influence the
\emph{serialization} of the structured query.

\paragraph{Data Warehouse Metadata (`?a2' and `?a3')} The
annotations `?a2' and `?a3' refer to a recognized measure and dimension,
bound via a \emph{matches} relationship in figure~\vref{fig:pattern}
and triggered by questions like ``revenue per year''.
An arbitrary number of measures and two dimensions are allowed (due to the
requirement of rendering charts).
By assigning the nodes for the measure (`?m1') and the dimension (`?d1') to
the same node for the data warehouse (`?w'), we ensure that these objects can
be used together in a structured query and therefore lead to a valid query.
More precisely, we check whether recognized dimensions and measures are
linked through a fact table (see figure~\vref{fig:data-schema}).
For reuse in the structured query, we assign the labels of the recognized
objects to variables (i.e. `?mL1' for the measure label and `?dL1' for the
dimension label).
%
%As discussed in section~\vref{sec:problem}, there might also be cases where
%the user mentions only one of the object types: either measures or dimensions,
%to get an overview of his data warehouse from multiple views. 
In some cases
the system has to suggest fitting counterparts
(e.g. \emph{compatible} dimensions)
so as not to aggregate all facts.
%
In the example in figure~\vref{fig:pattern} we choose `?d4' as dimension if
the question contains only measures, and `?m2' as measure if it contains only
dimensions. The system thus generates multiple interpretations of the user's
question.
The \textsc{SparQL} blocks that contain `?d4' and `?m2' are optional and
contain a filter (i.e. a \emph{property constraint}, as explained later) such
that they are only triggered if either `?mL1' or `?dL1' is not bound. The
label of the recommended measure or dimension is finally bound to the
respective label variable that would otherwise remain unbound (i.e. `?mL1' or
`?dL1').
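The fallback behaviour of these optional blocks can be sketched as follows (hypothetical; the variable names follow the figure, and the recommended values are illustrative):

```python
# Sketch of the optional blocks guarded by a FILTER on unbound
# variables: labels the main graph pattern left unbound are filled
# with a recommended, compatible entity.
def complete(bindings, recommended):
    """Fill every unbound label variable with its recommended value."""
    return {var: bindings.get(var) or recommended[var]
            for var in recommended}

# The question mentioned only a measure; the dimension is recommended:
complete({"?mL1": "Sales revenue"}, {"?mL1": "Sales revenue", "?dL1": "Year"})
```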

\paragraph{Data Warehouse Values (`?a4')}
Instead of the \emph{matches} relationship, we use the URI
\emph{valueOf} to assign the dimension value to the corresponding dimension
(i.e. `?d2'). For later reuse, we assign the label of the value's node to a
variable (`?vL2'), e.g. `2009' for a year, and the corresponding dimension's
label to another variable (`?dL2').
In the real-world use-case we consider not only one match situation (as in
the example) but a couple of other situations, in which the declarative
approach is very valuable. For instance, we show here only the case where the
matched value does not belong to an already-recognized dimension (i.e. `?d2'
would be an additional dimension in the query).
For the situation where the value belongs to `?d1' -- an already-recognized
dimension -- we define another optional \textsc{SparQL} block, which is
triggered by the \emph{valueOf} relationship between the annotation and the
corresponding dimension.
We treat single value matches for one dimension differently than matches on
multiple values that belong to the same dimension.
Our declarative approach eases this, because another set of constraints can
simply be defined with separate variables.

\paragraph{Personalization (`?a5')} Annotation `?a5' shows the personalization
feature, which applies a filter for a
dimension if a corresponding value is part of the user profile (see `my city'
in figure~\vref{fig:pattern-running-example}).
The constraint captures the following situation: an annotation (`?a5') refers to a
dimension (`?d3') that \emph{occursIn} some resource (`?profileItem') that
has some relationship (`?rel') to the user (`?user'). From the
graph pattern, we consider for later processing the label of the
dimension (`?dL3') and the label of the user profile item
that occurs in this dimension (`?pItemL').
Note that the constraints for personalization (as shown in
figure~\vref{fig:pattern}) do not refer to the \emph{my}-pattern
(shown in figure~\vref{fig:patterns-query-graph}) due to space constraints. If the
constraints were applied as shown here, we would simply test, for every
matched dimension, whether there is a value mapping to the user profile.
These examples
highlight the flexibility
of using \textsc{SparQL} graph patterns to manage constraints and variables
for query composition in Q\&A systems.
Additional constraints have to be applied at the property or literal level; they
are detailed in the following sub-section.

\subsubsection{Property Constraints}
\label{sec:pattern-property-constraints}
The following details the use of constraints expressed through \ac{SPARQL}
\verb|FILTER| statements, which have to be
considered in addition to graph patterns. They are less important at the conceptual
level, but have many practical implications, e.g. to avoid generating duplicated
queries or to add further functionality which cannot be expressed at the graph
pattern level.
The first obvious additional constraint is to check whether
two annotations
matching two distinct dimensions are different:
\begin{verbatim}
FILTER(!sameTerm(?a1, ?a2))
\end{verbatim}
It is often crucial to separate objects
that matched the same part of the user's question into
several structured queries;
this is even more important for dimension names, because they
define the aggregation level of the final result.
This kind of constraint can be expressed using the metadata
acquired during the matching process. Assuming that the
offset and length of one annotation (i.e. the position of a match inside the
question) have been assigned to the variables
\verb$?o1$ and \verb$?l1$, and the offset of another annotation
to \verb$?o2$,
the filter ensuring that the latter does not begin within
the range of the first annotation can be expressed by:
\begin{verbatim}
FILTER(?o2 < ?o1 || ?o2 > (?o1 + ?l1))
\end{verbatim}
Property constraints are also used for more complicated
query generation problems. A nice example are range queries for
arbitrary numeric dimension values. A generic \emph{natural language pattern}
for ranges would apply
to \verb?dataType:Numeric?, be triggered by phrases like `between
$x$ and $y$' and include a script for normalizing numbers. In combination
with matched numeric dimension values, one can define a filter that tests
whether two members were matched inside the matched range phrase, and generate
variables defining the beginning and end of a structured range query.
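As an illustrative sketch of such a filter (all variable names are hypothetical: `?oB' and `?lB' stand for the offset and length of the matched range phrase, `?oV1' and `?oV2' for the offsets of the two matched values), one could write:
\begin{verbatim}
FILTER(?oV1 >= ?oB && ?oV1 <= (?oB + ?lB)
    && ?oV2 >= ?oB && ?oV2 <= (?oB + ?lB))
\end{verbatim}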

\subsubsection{Additional Variables}
\label{additional-variables}

It is often useful to define default values or to bind additional
variables, which is possible using \emph{SparQL 1.1}.
An example of a default value would be to limit the size of the
query results if it is not specified by the user. To do so, we add
an optional \textsc{SparQL} block that checks the variable `?nb'
and assigns a value if it is unbound, using:
\begin{verbatim}
BIND(1000 AS ?nb)
\end{verbatim}
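Note that the same effect can be sketched in a single statement with \emph{SparQL 1.1}'s \verb|COALESCE|, which returns its first argument that evaluates without error (the variable `?limit' is hypothetical):
\begin{verbatim}
BIND(COALESCE(?nb, 1000) AS ?limit)
\end{verbatim}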
There are plenty of other use-cases for injecting additional variables,
like defining analysis types (which are part of the
not-illustrated metadata that is assigned to a structured query).
These are indicators used to select the best fitting chart type for
a single result. To capture the analysis type, we use
certain trigger-words (see \cite{TTQ}) and additional
constraints such as the number of measures and dimensions and the
cardinality of dimensions. For instance, we would select a \emph{pie chart}
if a single measure and dimension is mentioned in the question
and the user is interested in a `comparison'
(e.g. triggered by the term `compare' or `versus'). However, if the cardinality
of the dimension (which is maintained in the metadata graph) exceeds
a certain value (e.g. 10), a \emph{bar chart} would be a better fit, because a
pie chart would otherwise be difficult to interpret.
 


As a result of the mapping step, we get an RDF graph containing all potential
interpretations (structured queries) of the user's question. Since the query
model as such reflects the features of the underlying query language $L$ (e.g.
projections and different types of selections), it is straightforward to
serialize this model to an actual \emph{string} that can be executed on a data
source. The constraints defined in the previous sections define on the one hand
how to treat different match situations, and ensure on the other hand
that the generated queries are valid.
The great advantage of this approach
is that complex constraints can be defined in a declarative way and that they are
to some extent separated from the mapping problem, making the implementation
much easier in presence of complex requirements.
The generated structured queries must then be scored
to provide a useful ranking of results
and to define an order
according to which the computed queries are eventually executed.




 
 
 










\subsubsection{Application domain}
A pattern for $Q$ is a chosen pattern $t_Q$ matching the query $Q$; it must
satisfy:
\begin{equation}
t_Q\in \left\{t \mid t(Q)\neq\emptyset\right\}
\end{equation}
Let $T=\left\{t_{Q_1},\ldots,t_{Q_n}\right\}$ be the set of patterns used in
the context of a domain application.
This shows that the linguistic coverage of the system is finite (since
$T$ is a finite set).
$|T|$ is thus a simple way of expressing how broad the linguistic coverage of
the system is.

The implementation of the pattern encodes information on how the ranking
should be computed. This ranking relies on a confidence score.
This score is then incorporated in the query graph (defined in the first
section) based on information defined in the second section (the parse graph).
The computation of this confidence score is presented in
section~\vref{sec:pattern-confidence}.



\subsection{Structured queries}
We note $B(t,Q)$ the set of queries generated by the pattern
$t$ for the question $Q$ (see the right-hand side of
figure~\vref{fig:pattern}, entitled `Construct').
The set of constraints defined in a pattern matching the query $Q$ is defined
as follows:
\begin{equation}
t(Q)=\left\{f'_1(Q),\ldots,f'_k(Q)\right\}
\end{equation}
where $\forall i\in[1,k]\ f'_i(Q)\subseteq f_i(Q)$.
In other words, a pattern matching a query $Q$ is defined by a subset of the
annotations in $Q$. Note that $f'_i(Q)$ can be the empty set.



\subsubsection{Result}
A \emph{result} is determined by a pattern $t$ and a query $b\in B(t,Q)$. We
note $r=(t,b)$.



\subsection{Query logs}
Query logs are used to keep track of users' queries, which results have been
opened, which patterns have triggered which results, etc.
These logs are used to compute metrics further detailed in
section~\vref{sec:metrics-query-logs}.
An example of a generated query log is reproduced in
listing~\vref{lst:query-log}.

Query logs are of utmost importance for analyzing the behavior of users with
respect to the data on the one hand, and the software being evaluated on the
other hand.
We will focus further on the former in
section~\vref{sec:modeling-prediction-model}; in particular we will focus on
\emph{patterns} of users' queries in the context of \ac{OLAP} sessions.





\section{Ranking the results}
\label{sec:pattern-ranking}
Usually, more than one pattern matches the parse graph. For instance, some patterns
are more \emph{specific} than others, i.e. there are more constraints in terms
of expected annotations in the pattern section defining the constraints.
As we expect the best results to appear first, we need to rank
the results based on several metrics, which we present in this section.

\subsection{Confidence}
\label{sec:pattern-confidence}
Let $c_{i,j}\in[0,1]$ be the confidence of an annotation (where $(i,j)$ is the
position of the annotation in the parse graph) and $d_i\in[0,1]$ be a weight
given to the $i$-th feature in the parse graph. Then the confidence of a result
$r=(t,b)$ triggered by pattern $t$ based on these annotations is given by:
\begin{equation}
s_1(r)=\sum_{i,j}\frac{d_ic_{i,j}}{k\left|f'_i(Q)\right|}
\label{eq:confidence-1}
\end{equation}
where $k$ is the number of features in the pattern $t$ and $|f'_i(Q)|$ the
number of annotations of the $i$-th feature in the pattern $t$.
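As a small worked example (all numbers are chosen arbitrarily for illustration): consider a pattern with $k=2$ features, where the first feature carries two annotations ($c_{1,1}=0.8$, $c_{1,2}=0.6$, weight $d_1=1$, so $|f'_1(Q)|=2$) and the second one a single annotation ($c_{2,1}=0.9$, $d_2=0.5$, $|f'_2(Q)|=1$). Then:
\begin{equation*}
s_1(r)=\frac{1\cdot 0.8}{2\cdot 2}+\frac{1\cdot 0.6}{2\cdot 2}+\frac{0.5\cdot 0.9}{2\cdot 1}=0.2+0.15+0.225=0.575
\end{equation*}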

\subsection{Selectivity}
\label{sec:pattern-selectivity}
Selectivity is based on the number $\sigma$ of structured queries that can be
generated given a pattern and a question.
Let $\sigma=\left|B(t,Q)\right|$ be the number of queries that have been
generated by the pattern $t$ corresponding to the result $r=(t,b)$.
Then, confidence based on selectivity can be computed as:
\begin{equation}
s_2=\left\{\begin{array}{ll}\frac{1}{\sigma} & \textnormal{if
}\sigma\neq 0\\
0 & \textnormal{otherwise}
\end{array}\right.
\end{equation}
The case where $\sigma=0$ is the one where the pattern does not generate any
query (a useless pattern).

\begin{table}[!h]
\centering
\begin{tabular}{lr}\hline
\multicolumn{1}{c}{\textbf{Pattern}} &
\multicolumn{1}{c}{$\sigma$}\\\hline\hline 
\verb?1Measure? & $5$\\\hline
\verb?1Measure_1Dimension? & $3\times 7+2\times
7=35$\\\hline 
\verb?1Measure_2Dimension? & $3\times C_2^7+2\times C_2^7=105$\\\hline
\end{tabular}
\caption{Example of selectivity metrics for the example question
(\ref{eq:pattern-running-example}) and the dataset \emph{eFashion} (see
figure~\vref{fig:introduction-multidimensional-model})}
\label{tab:pattern-selectivity-example}
\end{table}
Note that all results $r$ triggered by the pattern $t$ will have the same
selectivity.
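Using the $\sigma$ values from table~\vref{tab:pattern-selectivity-example}, the selectivity scores would for instance be:
\begin{equation*}
s_2=\tfrac{1}{5}=0.2 \quad\textnormal{for \texttt{1Measure}},\qquad
s_2=\tfrac{1}{105}\approx 0.0095 \quad\textnormal{for \texttt{1Measure\_2Dimension}}
\end{equation*}
i.e. patterns that generate fewer candidate queries obtain a higher score.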



\subsection{Complexity}
The complexity of a result $r$ corresponds to the number of \emph{entities} in the
generated conceptual structured query (i.e. $b$).
These entities can be dimensions, measures, filters, \ldots

Let $b=\{b_1,\ldots,b_m\}$ be the query decomposed into its $m$ entities s.t.
$r=(t,b)$.
Let $T$ be the set of entity types (dimension, measure, filter, \ldots).
Let $b^\prime=(count_t(b))_{t\in T}$ be the vector representing the number
of entities of type
$t$ in $b$ (if the type $t$ is not represented in the query
$b$, then the vector item for $t$ has the value $0$).
Then, the complexity of a result $r$ is defined by:
\begin{equation}
s_3(r)=\frac{1}{|T|}\sum_{t\in T}\theta_t b_{t}^\prime
\end{equation}
where $0<\theta_t<1$ is a weight given to type $t$, experimentally
determined s.t. $\sum_{t\in T}\theta_t<1$.
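To illustrate (the weights here are hypothetical): let $T$ contain the three types dimension, measure and filter with $\theta_{dim}=0.3$, $\theta_{meas}=0.2$ and $\theta_{filt}=0.1$ (so that $\sum_{t\in T}\theta_t=0.6<1$), and let $b$ contain two dimensions, one measure and one filter, i.e. $b^\prime=(2,1,1)$. Then:
\begin{equation*}
s_3(r)=\frac{1}{3}\left(0.3\cdot 2+0.2\cdot 1+0.1\cdot 1\right)=\frac{0.9}{3}=0.3
\end{equation*}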



\subsection{Metrics from query logs}
\label{sec:metrics-query-logs}
The query logs are a rich source of information about implicit user
feedback on the results provided by the system.
We focus on the metrics defined below.








\subsubsection{Popularity}
The popularity of a search result $r=(t,B)$ for user $u$ is
defined by:
\begin{equation}
p_u(r)=\frac{t_{u,c}(r)-t_{u,o}(r)}{\max_r(t_{u,c}(r)-t_{u,o}(r))}
\label{eq:popularity-1}
\end{equation}
where $t_{u,c}(r)$ is the time when the result $r$ was closed by user $u$ and
$t_{u,o}(r)$ the time when it was opened.
This metric measures how long a search result has been seen by the user.
It should be used in conjunction with a threshold: for instance, a user
might jump to another application once she gets the desired result; this
metric would then be extremely high, because she did not close the search
result.

When the system is being used for the first time by user $u$ (i.e. when $u$ has
never opened search result $r$), $t_{u,o}(r)$ and $t_{u,c}(r)$ are undefined.
This case corresponds to the \emph{cold-start} phase, and we consider the mean
values for users in the social network of $u$.


\subsubsection{Co-occurrence}
Co-occurrence measures how likely different database entities are to appear
together in a query.
This measure is also used as a ranking function in the context of the
auto-completion presented in
section~\vref{sec:evaluation-experiments-auto-completion} and further detailed
in~\cite{dasfaa12}.
The assumption behind this metric is that a pattern should get a higher rank
if it generates a query composed of (database) entities with high co-occurrence
(i.e. entities that appear often in the user's queries).
The co-occurrence between two database entities $e_1$ and $e_2$ is given by the
Jaccard index of the sets $occ_u(e_1)$ and $occ_u(e_2)$:
\begin{equation}
cooc_u(e_1,e_2)=J(occ_u(e_1),occ_u(e_2))=\frac{\left|occ_u(e_1)\cap
occ_u(e_2)\right|}{\left|occ_u(e_1)\cup occ_u(e_2)\right|}
\end{equation}
where $occ_u(e)$ is the set of queries that contain the
entity $e$ (computed from the query logs of user $u$).
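For instance (with hypothetical logs), if $occ_u(e_1)=\{q_1,q_2,q_3\}$ and $occ_u(e_2)=\{q_2,q_3,q_4\}$, then:
\begin{equation*}
cooc_u(e_1,e_2)=\frac{\left|\{q_2,q_3\}\right|}{\left|\{q_1,q_2,q_3,q_4\}\right|}=\frac{2}{4}=0.5
\end{equation*}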
The co-occurrence of all entities in a structured query $B$
is given by
\begin{equation}
cooc_u(B)=\binom{\left|B\right|}{2}^{-1}\sum_{b,b'\in
B}cooc_u(b,b')
\end{equation}
The co-occurrence metric for a result is defined as follows:
\begin{equation}
cooc_u(r)=\frac{1}{|B'|}\sum_{B\in B'}cooc_u(B)
\end{equation}
where $B'=\left\{B \mid r=(t,B)\right\}$.

\subsubsection{Implicit user preference}
The popularity metric (see equation~\vref{eq:popularity-1}) is used as a weight
for the co-occurrence metric to define users' implicit preferences:
\begin{equation}
pref_{u,impl}(r)=\frac{1}{|R|}\sum_{r\in R}\alpha_rp_u(r)cooc_u(r)
\label{eq:user-preference-1}
\end{equation}
where $R=\left\{r=(t,B)\right\}$ and $\alpha_r$ is a parameter to be
experimentally determined s.t. $\sum_{r\in R}\alpha_r=1$.


\subsubsection{Collaborative user preference}
Ranking search results meets a goal similar to that of providing recommendations
(in the sense of recommender systems).
The metric presented in equation~\vref{eq:user-preference-1} presents a problem for
\emph{cold start} users, i.e. those new to the system. Indeed, those users
have not triggered search results, from which co-occurrences could be computed.
Collaborative recommender systems have introduced the contribution of other
users in the item scoring function to improve the system's coverage and enable
the exploration of resources previously unknown (or unused) by the user. We
follow the simple linear combination of the user-specific value and the average
over the set of all users.
Instead of considering ``the whole world'', where all users have the same role
(weight), trust-based recommender systems illustrate the importance of
considering users' social network, e.g., favoring users close to the current
user.
Let $SN(u)$ be the set of users in user $u$'s social network, filtered in order
to keep only users up to a certain maximum distance.
The refined user preference can thus be rewritten as:
\begin{equation}
pref_{impl}(u,r)=\alpha\cdot pref_{u,impl}(r)+\frac{\beta}{\left|SN(u)\right|}\sum_{u'\in
SN(u)}\frac{pref_{u',impl}(r)}{d(u,u')}
\label{eq:implicit-preference}
\end{equation}
where $\alpha$ and $\beta$ are to be experimentally adjusted s.t. $\alpha
+\beta =1$ and $d(u,u')$ denotes the distance between the users $u$ and $u'$.

\subsubsection{Explicit user preferences}
Explicit user preferences have not yet been implemented in the system (see
chapter~\vref{sec:chapter-personalized}). 
This kind of preference is expressed by users' ratings:
\begin{equation}
pref_{u,expl}(r)=\left\{\begin{array}{l}rating_{u,r}\ \textnormal{if $u$ has
already rated $r$}\\
\overline{rating_u}\ \textnormal{otherwise}\end{array}\right.
\label{eq:explicit-preference}
\end{equation}
where $rating_{u,r}\in[0,1]$
is the rating of user $u$ for the result $r$ and $\overline{rating_u}$ the
average rating given by $u$.

\subsubsection{User preference}
From both the implicit (collaborative) user preference and the explicit user
preference defined in equations~\vref{eq:implicit-preference}
and~\vref{eq:explicit-preference},
we define the global user preference as a simple linear combination of
$pref_{impl}(u,r)$ and $pref_{u,expl}(r)$:
\begin{equation}
s_4(r)=\alpha\cdot pref_{impl}(u,r)+\beta\cdot pref_{u,expl}(r)
\end{equation}
where $\alpha$ and $\beta$ are coefficients to be experimentally determined.
This formulation as a linear combination allows us to easily see the impact
of the implicit and the explicit preferences in the global metric. As future
work, one could investigate other formulations, but the current one seems
satisfactory in our use-case.


\subsection{Overall measure}
We combine the different scores $s_1$ to $s_4$ to get the final score used as the
ranking metric.
The scores $s_2$ and $s_3$ depend on the question $Q$.
In our experiments, we have combined these metrics as a linear combination with
equal weights.
The reader interested in metrics based on query logs will also find in
section~\vref{sec:evaluation-experiments-auto-completion} an application of
similar metrics to auto-completion.






\section{Summary \& discussion}
In this chapter, we have presented the core techniques used to translate users'
questions (formulated in NL) into structured queries (e.g. SQL or \textsc{SparQL}).
We have proposed a new way of formulating patterns, inspired by the \ac{IE}
community, as well as a base of patterns; together they form a knowledge-base
dedicated to a \ac{QA} system for \ac{BI} purposes.
Patterns define a set of constraints that must be satisfied by users' questions
as well as by the data (the data models of the warehouses and the data
themselves) and by the knowledge
bases (domain knowledge and linguistic knowledge for language-dependent NL
expressions). These constraints are associated with a template of
conceptual structured queries (which is then translated into each target query
language, as explained in chapter~\vref{sec:chapter-personalized}).
Moreover, we have explained how we define ranking scores for the results that
are generated by these patterns.
The ranking function combines different metrics which take into consideration
the confidence of the named-entity recognition involved in the mapping of words
from the user's question to known terms, the complexity of the pattern (i.e.
measuring the number of entities that are part of the generated structured
query) and the specificity of the pattern (i.e. measuring whether a
pattern is specific or generic).


However, the patterns used to translate users' questions into structured queries
are costly resources.
Therefore, we provide in the following some thoughts on how to acquire such
patterns.
Developing additional patterns (for instance, for a new application domain, or
in order to improve the linguistic coverage of the system) is not a
straightforward task (see the example pattern reproduced in
appendix~\vref{sec:appendix-pattern}). To ease this task, two classic approaches
are:
\begin{itemize}
  \item learning approaches, i.e. algorithms based on labelled
  (training) data, on user interaction data or on both
  \item authoring front-end tools (generally user-friendly \ac{GUI}s) where
  users can easily generate new patterns on the basis of positive and/or
  negative examples for validating the rules (i.e. patterns) being created
\end{itemize}
This aspect of the problem has not been fully investigated, since we
focus further on the implementation of the end-to-end system (see
section~\vref{sec:conclusion}).
We develop both approaches introduced above, as a starting point for
future work on the subject.

\subsection{Learning approaches}
\subsubsection{Case-based reasoning}

Figure~\vref{fig:cbr} is an illustration of the case-based reasoning approach for
patterns.
\begin{figure}
\centering
\begin{tikzpicture}

\tikzstyle{box} = [
	draw,
	rectangle,
	rounded corners,
	minimum height=20pt,
	fill=white,
	fill opacity=0.5,
	text opacity=1
]
\node[
	draw,
	ellipse,
	minimum width=170pt,
	minimum height=100pt,
	line width=10pt,
	color=black!20
](ell1) {};
\node[
	box,
	minimum width=30pt
](new-case) at (ell1.north west) {New case};
\node[
	box,
	minimum width=40pt
](similar-cases) at (ell1.north east) {};
\node[
	box,
	minimum width=40pt,
	yshift=-2pt,
	xshift=-2pt
](similar-cases-bis) at (similar-cases) {};
\node[
	below=0pt of similar-cases-bis.north
](similar-cases-label-1) {Similar};
\node[
	above=0pt of similar-cases-bis.south
](similar-cases-label-2) {cases};
\node[
	box,
	right=10pt of similar-cases
](new-case-2) {New case};
\node[
	box,
	minimum width=30pt
](solved-case) at (ell1.south east) {Solved case};
\node[
	box,
	minimum width=30pt
](revised-case) at (ell1.south west) {Revised case};
\node[
	below=-2pt of ell1.north
](retrieve) {Retrieve};
\node[
	above=-2pt of ell1.south
](revise) {Revise};
\node[
	rotate=-90,
	yshift=-6pt
](reuse) at (ell1.east) {Reuse};
\node[
	rotate=90,
	yshift=-6pt
](retain) at (ell1.west) {Retain};
\node [
	cloud, 
	draw,
	cloud 
	puffs=10,
	cloud puff arc=120, 
	aspect=2, 
	inner ysep=1em,
	left=20pt of new-case
] (problem) {};
\node[](problem-label) at (problem) {\textit{Problem}};
\path[->](problem) edge (new-case);
\node [
	cloud, 
	draw,
	cloud 
	puffs=10,
	cloud puff arc=120, 
	aspect=2, 
	inner ysep=1em,
	right=20pt of solved-case
] (solution) {};
\node[] (solution-label) at (solution) {\textit{Solution}};
\path[->](solved-case) edge (solution);
\end{tikzpicture}
\caption{Case-based reasoning approach applied to the problem of
pattern learning}
\label{fig:cbr}
\end{figure}
In this approach, the problem to solve can be formulated as follows:
\begin{quote}
``Given a parse graph $p(Q)$ for the question $Q$, which features $f_i(Q)$ shall
be considered in the new pattern, and, for each of these features, which
annotations $a\in f_i(Q)$ from the parse graph shall be included in the pattern?
Finally, which conceptual query shall be associated with the pattern being
built?''
\end{quote}

\paragraph{New case}
A case thus corresponds to the selection of
\begin{itemize}
  \item relevant features (i.e. the choice
of $F\subset \{f_1(Q),\ldots,f_k(Q)\}$ where $k$ is the number of features in
$p(Q)$)
  \item a mapping from the chosen features to the annotations (i.e. $F\rightarrow A$)
  \item a generated conceptual query, referred to as $B$
\end{itemize}

\paragraph{Retrieve}
The `Retrieve' step consists in retrieving similar cases.
The retrieval is based on similarity measures.

\paragraph{Reuse}
The `Reuse' step aims at transforming the set of similar cases retrieved in the
preceding step in order to build a new case (called \emph{Solved case} in the
figure) which corresponds to the proposed solution of the problem stated above.
Thus, a solved case is made of a pattern -- a set of constraints to be satisfied
by a parse graph -- plus a template of a structured query.

\paragraph{Revise}
The solution proposed in the previous step has not yet been validated (to
prevent the system from being corrupted). The `Revise' step consists in
validating the cases proposed in the previous steps.
For instance, a case can be validated based on users' feedback about positive
and negative examples.

\paragraph{Retain}
The `Retain' step consists in storing the case in the system, so that it can be
easily re-used in the future when a similar problem arises.



The approach described so far has been employed in the Q\&A domain by Ben
Mustapha et al.~\cite{BenMustapha:2010:SSU:1754239.1754243}. They propose a
system that learns an ontology used to guide the interpretation of users'
questions in the context of semantic search.


\subsubsection{Genetic approach}
\label{sec:pattern-acquisition-genetic}
We have investigated the case where there is a large number of patterns in
the repository. This situation is relevant when the number of patterns increases
over time, for instance through machine learning algorithms.
We then need a classification to decide which patterns are closest to
the user's question.
The approach that we have investigated is represented in
figure~\vref{fig:pattern-acquisition-genetic-problem}.
\begin{figure}
\centering
\includegraphics[width=\textwidth,trim=150pt 540pt 100pt 120pt]{img/genetic}
\caption{Classification problem involved for a large number of patterns}
\label{fig:pattern-acquisition-genetic-problem}
\end{figure}
In this approach, the classification algorithm must first be determined. This
classification is then used to determine the translation of queries into
patterns, using a genetic algorithm and a learning base.

\paragraph{Constraints}
Translation rules must satisfy the following constraints:
\begin{itemize}
  \item examples must be correctly translated
  \item generated patterns should be valid
\end{itemize}
Individuals satisfying these constraints are rewarded with bonus scores.
\begin{figure}[h]
\includegraphics[scale=0.9,trim=130pt 550pt 0pt 100pt]{img/genetic-example}
\caption[Mutation and reward of individuals]{Illustration of how individuals are
mutated and rewarded.
Dashed circles on the left-hand side ($p_i^\prime$) are individuals automatically
generated out of queries $q_i$. Individuals that are not patterns (e.g.
$p_3^\prime$) are left out. To be rewarded, generated patterns must be
\emph{similar} to the original examples (i.e. $p_i$).}
\label{fig:patterns-genetic-example}
\end{figure}
Figure~\vref{fig:patterns-genetic-example} illustrates the mutation
process and the selection of \emph{good} individuals, based on their structure (for
instance, individuals that are not patterns will be left out) and on how
faithfully they reproduce good examples (i.e. how close $p_i^\prime$ are to
$p_i$).
\begin{figure}[h]
\centering
\includegraphics[scale=1,trim=100pt 540pt 0pt 150pt]{img/genetic-dunes}
\caption[\emph{Novelty} and \emph{efficiency} of newly-created individuals]{The
curve represents both the \emph{novelty} ($x$-axis) and the \emph{efficiency}
($y$-axis) of newly-created individuals being mutated. The problem can be
reduced to the search for local maxima. Area `1' leads to better individuals,
but they are similar to the known ones; area `2' is both efficient and novel;
area `3' is neither efficient nor novel and should be avoided; area
`4' leads to worse individuals. In the latter case, one must decide whether one
should pursue or stop the local discovery of new individuals}
\label{fig:pattern-genetic-dunes}
\end{figure}
Figure~\vref{fig:pattern-genetic-dunes} illustrates the problem of generating
new individuals, which must be both \emph{efficient} and \emph{novel}.
The former property relates to the generality of the generated pattern (with
respect to selectivity, see section~\vref{sec:pattern-selectivity}).
The latter depends on the distance to known examples and to already-generated
individuals.
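One possible way to quantify this trade-off is sketched below (a sketch only,
assuming token-set patterns, a Jaccard distance, and fixed thresholds, none of
which is prescribed by the approach):

```python
def jaccard_distance(a, b):
    """Distance between two token-set patterns (one possible choice)."""
    sa, sb = set(a), set(b)
    union = sa | sb
    return 1.0 - len(sa & sb) / len(union) if union else 0.0

def novelty(candidate, known):
    """Novelty: distance to the closest known example or
    already-generated individual."""
    return min(jaccard_distance(candidate, k) for k in known)

def keep(candidate, known, efficiency, nov_min=0.2, eff_min=0.5):
    """Retain candidates falling in area `2' of the curve: both efficient
    and novel (the thresholds are illustrative assumptions)."""
    return efficiency >= eff_min and novelty(candidate, known) >= nov_min
```

Candidates below either threshold correspond to areas `1', `3' or `4' and are
discarded, or trigger the decision of whether to pursue the local search.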

\paragraph{See also}This approach has been further described by
Watel~\cite{Watel:2011}.
 



\subsection{Authoring tool}
Authoring tools have quite recently been used in interfaces for Q\&A (one of
the most representative related works is
\textsc{NaLIX}~\cite{Li:2005:NIN:1066157.1066281}).
The benefit of such tools is that they allow non-expert users to enrich the
system with new semantic rules.
Typically, users are assisted graphically in creating semantic mappings, and
examples of known pairs of questions and answers are displayed to users, so
that they can validate the rules being created. 
In our proposal (see figure~\vref{fig:pattern-authoring}), the SPARQL pattern
would be generated by the tool; the user would simply express graphically a set
of constraints based on positive and negative examples.



\begin{figure}[h]
\centering
\includegraphics[width=11cm]{img/authoring-1}
\caption[Authoring tool]{Authoring tool: user loads unresolved questions}
\label{fig:pattern-authoring}
\end{figure}




\subsubsection{Identification of common annotations}
\label{sec:pattern-authoring-step1}
First, users are invited to type a series of questions (or to import them from
an external file). All these questions are supposed to be captured by the same
pattern.
The drawback is that users are supposed to know the data well enough to think
of queries (the system can also keep track of unresolved questions that have
been asked in the past, and suggest those questions).
For example, the questions ``Revenue and margin in New York in Q2'' and ``Sales
revenue and margin in Texas in Q3'' seem to correspond to the same pattern (i.e.
two measures and two filters).
The set of questions is parsed, and the annotations whose types are common to
all questions are used as annotations in the generated pattern.
At the end of this process, users can remove annotations that they consider
irrelevant for the current pattern.
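The identification of common annotation types can be sketched as follows
(assuming, for illustration only, a toy lexicon mapping surface terms to
annotation types; the real system relies on its own parser and annotation
types):

```python
from collections import Counter

# Toy lexicon mapping surface terms to annotation types (an assumption made
# for this example; the real system uses its own linguistic resources).
LEXICON = {
    "revenue": "Measure", "margin": "Measure",
    "new york": "Filter", "texas": "Filter",
    "q2": "Filter", "q3": "Filter",
}

def annotate(question):
    """Return the (sorted) annotation types detected in a question."""
    text = question.lower()
    return sorted(t for term, t in LEXICON.items() if term in text)

def common_annotations(questions):
    """Annotation types shared by all questions: the skeleton of the
    generated pattern."""
    counts = [Counter(annotate(q)) for q in questions]
    common = counts[0]
    for c in counts[1:]:
        common &= c  # multiset intersection keeps only shared occurrences
    return sorted(common.elements())
```

On the two example questions above, this yields two \verb?Measure? and two
\verb?Filter? annotations, i.e. the expected pattern skeleton.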


\subsubsection{Graphic construction of the structured query}
\label{sec:pattern-authoring-step2}
The semantic information in the pattern lies in the mapping between the set of
annotations (the \verb?WHERE? section of the pattern) and the generated
structured query (the \verb?CONSTRUCT? section of the pattern).
To this end, the graphic editor guides users in designing the
conceptual query, based on the annotations that were identified in the
previous step.
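A generated pattern could be rendered roughly as follows (a hypothetical,
simplified rendering: the system's actual pattern syntax and query template
may differ):

```python
# Hypothetical rendering of a generated pattern: WHERE lists the annotations
# to match in the question, CONSTRUCT is the structured-query template that
# the matched annotations are mapped to.
pattern = {
    "WHERE": [
        {"var": "?m1", "type": "Measure"},
        {"var": "?m2", "type": "Measure"},
        {"var": "?f1", "type": "Filter"},
        {"var": "?f2", "type": "Filter"},
    ],
    # Illustrative template only, not a complete executable query.
    "CONSTRUCT": "report of ?m1, ?m2 restricted by ?f1 and ?f2",
}

def matches(pattern, annotation_types):
    """A question matches when its annotation types are exactly those
    required by the pattern's WHERE section."""
    needed = sorted(slot["type"] for slot in pattern["WHERE"])
    return needed == sorted(annotation_types)
```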
\begin{figure}
\centering
\includegraphics[width=12cm]{img/authoring-2}
\caption{Graphic construction of the pattern}
\label{fig:pattern-authoring-2}
\end{figure}
Figure~\vref{fig:pattern-authoring-2} is a screenshot of a mockup.



\subsubsection{Validation of the candidate pattern}
Once the user has finished designing the conceptual query, the pattern should
be validated to check that the results are the ones that the user expects.
The questions used in the first step (see
section~\vref{sec:pattern-authoring-step1}) are used to test the candidate
pattern.
The validation process, for each question, is as follows:
\begin{enumerate}
  \item check that the pattern matches the question
  \item execute the query and display the corresponding chart
\end{enumerate}
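The validation loop can be sketched as follows (the \verb?match?,
\verb?execute? and \verb?render? components are placeholders for the system's
actual machinery):

```python
def validate(pattern, questions, match, execute, render):
    """For each example question, check that the candidate pattern matches,
    then execute the generated query and render the corresponding chart so
    that the user can inspect it. Returns a per-question report."""
    report = []
    for q in questions:
        if not match(pattern, q):
            report.append((q, "no match", None))
            continue
        chart = render(execute(pattern, q))
        report.append((q, "ok", chart))
    return report
```

Questions flagged \verb?no match? tell the user which step to revisit:
annotation selection or query construction.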
If users are not satisfied with the pattern execution, they can go back to the
first step (see section~\vref{sec:pattern-authoring-step1}) or to the second
one (see section~\vref{sec:pattern-authoring-step2}).
Eventually, when they are satisfied with the generated pattern, they can add it
to the pattern repository.
Finally, a last step would be required to test that the newly created pattern
does not generate conflicts with respect to other patterns in the system.

\stopcontents[chapters]
