\documentclass{memoir}


\usepackage{pgf-pie}

% urls
\usepackage[hyphens]{url}

% use of acronyms
\usepackage[nolist,withpage]{acronym}

% use of multirow
\usepackage{multirow}

% nice fonts for the math sets
\usepackage{amsfonts}

\usepackage[francais,english]{babel}

\usepackage[T1]{fontenc}

\begin{hyphenrules}{francais}
\hyphenation{pr\'e-f\'e-ren-ces}
\hyphenation{fra-me-work}
\hyphenation{questions-r\'eponses}
\end{hyphenrules}

\usepackage{pgfplots}

\usepackage{graphicx}
\usepackage[hang,small,bf]{caption}
\usepackage{subfig}

\usepackage{lscape}
\usepackage{longtable}
\usepackage{rotating}

\usepackage{abstract}

\usepackage{tikz}
\usetikzlibrary{fit, arrows, decorations.markings,positioning,trees,shapes}

% use of listings
\usepackage{listings}
\usepackage{framed}
\usepackage{MnSymbol}
\lstset{
	language=SQL,
	basicstyle=\ttfamily\fontsize{8}{11}\selectfont,
	aboveskip=6pt plus 2pt, 
	belowskip=2pt plus 8pt,
	morekeywords={PREFIX,java,rdf,rdfs,url,xsd},
	numbers=left,
	numberstyle=\tiny,
}


% use of minitoc
\usepackage{shorttoc,titletoc}

\usepackage{subfig}

\newcommand\partialtocname{Outline}
\newcommand\ToCrule{\noindent\rule[5pt]{\textwidth}{1.3pt}}
\newcommand\ToCtitle{{\large\bfseries\partialtocname}\vskip2pt\ToCrule}
\makeatletter
\newcommand\Mprintcontents{%
  \ToCtitle
  \ttl@printlist[chapters]{toc}{}{1}{}\par\nobreak
  \ToCrule}
\makeatother


\newcommand\TODO[1]{{\textcolor{red}{\underline{TODO}: #1\\}}}

\hyphenation{spe-ci-fic}


\setcounter{secnumdepth}{2}
\setcounter{tocdepth}{2}

\newcommand{\Chapter}[1]{\chapter{#1} \setcounter{figure}{1}}


\begin{document}

\begin{acronym}[TDMA]
\acro{AI}{Artificial Intelligence}
\acro{BI}{Business Intelligence}
\acro{CMS}{Content Management System}
\acro{CRM}{Customer Relationship Management}
\acro{DBMS}{Database Management System}
\acro{DSS}{Decision Support Systems}
\acro{ER}{Entity/Relationship}
\acro{IDF}{Inverse Document Frequency}
\acro{IE}{Information Extraction}
\acro{IR}{Information Retrieval}
\acro{MDX}{Multidimensional Expression}
\acro{NER}{Named Entity Recognizer}
\acro{NL}{Natural Language}
\acro{NLP}{Natural Language Processing}
\acro{OLAP}{Online Analytical Processing}
\acro{QA}[Q\&A]{Question Answering}
\acro{RDF}{Resource Description Framework}
\acro{SPARQL}{SPARQL Protocol and RDF Query Language}
\acro{SQL}{Structured Query Language}
\end{acronym}

\chapter{Linguistic patterns}
\label{sec:chapter-patterns}
\startcontents[chapters]
\Mprintcontents


Linguistic patterns are widely used in IE and IR. In IE,
patterns are applied to unstructured documents (text corpora, Web documents) to
extract information based on the structure (syntactic information) and on the
terms (lexical information) identified in the text.
More generally, the idea is that the same piece of information can be expressed
in various ways, and that a pattern (i.e. a set of constraints) captures the
common characteristics of these different expressions of the same idea.
\begin{table}
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{Question}} &
\multicolumn{1}{c}{\textbf{Entities}}\\\hline\hline 
Sales revenue per year & (Sales revenue) [Year]\\\hline 
Revenue over years & (Sales revenue) [Year]\\\hline 
Sales results per FY & (Sales revenue) [Year]\\\hline
\end{tabular}
\caption{The same idea can be represented in different ways. The second column
expresses an empirical constraint that is satisfied by all questions from the
first column.}
\label{tab:pattern-characteristics}
\end{table}
This is depicted in table~\ref{tab:pattern-characteristics}. The strong
assumption there is that, given a sentence, it is possible to find a set of
constraints that defines the meaning of the sentence, or the information
that a user is looking for.
This is obviously not true in general, because NL sentence inputs do not have a
single interpretation, but the constraints in the patterns help limit the
ambiguity of the text input.
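The intuition of table~\ref{tab:pattern-characteristics} can be sketched as
follows, where a pattern is reduced to a set of lexical constraints; the
regular expressions below are purely illustrative and not part of the system
described in this chapter:
\begin{lstlisting}[language=Python]
import re

# Illustrative sketch: a "pattern" reduced to lexical constraints
# that all phrasings of the same information need must satisfy.
PATTERN = {
    "measure":   re.compile(r"\b(sales revenue|revenue|sales results)\b", re.I),
    "dimension": re.compile(r"\b(year|years|fy)\b", re.I),
}

def matches(question, pattern=PATTERN):
    # The pattern matches when every constraint is satisfied
    # somewhere in the question.
    return all(rx.search(question) for rx in pattern.values())
\end{lstlisting}
All three questions of table~\ref{tab:pattern-characteristics} satisfy both
constraints, while an unrelated question does not.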


\section{Linguistic patterns in IR}
In IE, a pattern (or part of it) is usually defined with respect to
the notion of a syntactic axis. Indeed, ``patterns'' in this field
are often referred to as ``lexico-syntactic patterns'' (terms and syntax) or
``morpho-syntactic patterns'' (terms and their categories, plus syntax).
Despite the fact that they are extensively used, there are very few comments on
the definition of such patterns.
They have been defined in the linguistic theory~\cite{LAI} as ``a schematic
representation like a mathematical formula using terms or symbols to indicate
categories that can be filled by specific morphemes''.
Patterns used by Sneiders~\cite{AQAT} are regular strings of characters where
sets of successive tokens are replaced by entity slots (to be filled by
corresponding terms in the real text).
This means that a pattern in this sense is an extension of regular expressions
(where \textit{patterns} from the regular expression are \textit{slots} that
may be of various type in linguistic patterns).
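Such slot-based patterns can be sketched with named capturing groups standing
for the entity slots; the slot grammar below is a hypothetical simplification:
\begin{lstlisting}[language=Python]
import re

# One slot pattern: "<measure> per <dimension>"; the named groups
# play the role of entity slots filled by the matching text.
slot_pattern = re.compile(r"^(?P<measure>[\w ]+?) per (?P<dimension>[\w ]+)$",
                          re.IGNORECASE)

def fill_slots(text):
    m = slot_pattern.match(text)
    return m.groupdict() if m else None
\end{lstlisting}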
An innovation in~\cite{AEA} is the definition of a pattern composed of two
subpatterns: a required pattern (a regular pattern) and a forbidden pattern,
which must \emph{not} match the message.
Finkelstein-Landau and Morin~\cite{ESR} formally define morpho-syntactic
patterns related to their Information Extraction (IE) task: they aim at
extracting semantic relationships from textual documents.
A pattern $A$ can be decomposed as displayed in
formula~\ref{eq:pattern-morin-definition}.
\begin{equation}
\label{eq:pattern-morin-definition}
A=A_1\ldots A_i\ldots A_j\ldots A_n
\end{equation}
In this formula, $A_k$, $k\in[1,n]$, denotes an \textit{item} of the pattern,
i.e. a part of a text (with no \textit{a priori} constraint on the sentence
boundaries).
An \textit{item} is defined as an ordered set of \textit{tokens} composing
words\footnote{Delimiting tokens is not an easy task in any language.}.
In this approach the syntactic isomorphy hypothesis is adopted.
Let $B=B_1\ldots B_i\ldots B_j\ldots B_n$ be a pattern.
%\begin{equation}
%\label{morin-second-equation}
%B=B_1\ldots B_i^\prime\ldots B_j^\prime\ldots B_n^\prime
%\end{equation}
This hypothesis states the following assertion:
\begin{equation}
\exists (i,j)
\left.\begin{array}{r}
A\sim B\\
win(A_1,\ldots,A_{i-1})=win(B_1,\ldots,B_{j-1})\\
win(A_{i+1},\ldots,A_n)=win(B_{j+1},\ldots,B_n)
\end{array}\right\}
\Longrightarrow A_i\sim B_j
\end{equation}
which means that if two patterns $A$ and $B$ are \textit{equivalent} (they
denote the same string of characters), and if it is possible to split both
patterns into identical \textit{windows} composed of the same tokens when
applied to a string of characters, then the remaining items of both patterns
($A_i$ and $B_j$) share the same syntactic function.
%Some standards have been proposed like Ontology Design
%Patterns\footnote{See~\url{http://ontologydesignpatterns.org/wiki/Main_Page}
% for more details.} in the Semantic Web community

The introduction of \textit{windows} in the definition of patterns imposes that
their components be ordered according to the syntactic order (for instance,
left to right in English).
In the following, we adopt another formalism, where the syntactic isomorphy
is not kept, i.e. the expected ``characteristics'' (or
features) of the patterns do not have to be ordered according to the sentences
(or questions) matched by the pattern.

This kind of pattern has become very popular, in particular in the Semantic
Web community, for instance through Ontology Design
Patterns\footnote{See~\url{http://ontologydesignpatterns.org/wiki/Main_Page}.}.
This platform is meant to define classes of patterns widely used in that
community.



\section{Patterns for structured content}
\label{sec:pattern-structured}
Patterns for structured content consist of structures that define a mapping
between text and a data structure (like a database).
As said above, we relax the syntactic constraint\footnote{The usefulness of
this approach is demonstrated with an iPhone application, where the generated
question is more a keyword query than a well-formed NL question.} and
allow any kind of feature to appear in the pattern.
This mapping is performed through several tasks: 1) declaring the constraints
that must be satisfied for the pattern to match the question; 2) defining which
parts of the question must be exported according to the constraints defined in
the previous step; 3) defining what to recompute from the information exported
from the question; and 4) defining the actual mapping from the exported
information to the data structure.
We will detail these tasks throughout the section.
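These four tasks can be sketched as successive stages. The code below
hard-codes the behaviour for the running example introduced in the next
subsection; every helper name and heuristic is illustrative only:
\begin{lstlisting}[language=Python]
def check_constraints(question):
    # Task 1: constraints the question must satisfy (here: a top-k marker).
    return "top" in question.lower()

def export(question):
    # Task 2: the parts of the question that are exported.
    tokens = question.split()
    return {"nb": tokens[1], "dimension": tokens[3]}

def recompute(exported):
    # Task 3: normalize/recompute the exported pieces of information.
    exported["nb"] = int(exported["nb"])
    exported["dimension"] = exported["dimension"].rstrip("s")  # crude base form
    return exported

def to_query(exported):
    # Task 4: map the exported information to the structured-query template.
    return {"limit": exported["nb"], "dimension": exported["dimension"]}

def translate(question):
    if not check_constraints(question):
        return None
    return to_query(recompute(export(question)))
\end{lstlisting}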

\subsection{Running example}
In the following, we will consider as an example the following user's question:
\begin{equation}
\textnormal{``Top 5 middle-aged customers in my city''}
\label{eq:pattern-running-example}
\end{equation}
which is also depicted figure~\ref{fig:pattern-running-example}.
\begin{figure}[h]
\centering
\includegraphics[scale=1.2,trim=100pt 500pt 100pt 130pt]{img/running-example}
\caption{Running example of question with annotations}
\label{fig:pattern-running-example}
\end{figure}
% \begin{equation}
% \textnormal{``Sales revenue per quarter in New York''}
% \label{eq:pattern-running-example}
% \end{equation}

\subsection{Definitions}
We provide in this section some definitions and the notations used in this
chapter.
\subsubsection{Question}
Let $Q=\left\{q_1,\ldots,q_n\right\}$ be a user's question, where $q_i$,
$i\in[1,n]$, are the terms used in the question.
For example, the question~(\ref{eq:pattern-running-example}) can be
represented as follows:
\begin{equation}
Q=\left\{\textnormal{Top},\textnormal{5},\textnormal{middle-aged},
\textnormal{customers},\textnormal{in},\textnormal{my},\textnormal{city}\right\}
\end{equation}
Note that the word ``middle-aged'' can also be decomposed into two words (i.e.
$\{\textnormal{middle},\textnormal{age}\}\subset Q^\prime$ for such an
alternative decomposition $Q^\prime$). In the
following, we use the term \emph{query} to denote a database query and the term
\emph{question} to denote a user's query.

\subsubsection{Users' graphs and annotations}
The graph repository (see section~\ref{sec:pattern-graph-repository}) is
composed of user-specific graphs. Examples of such graphs are:
\begin{itemize}
  \item the graph composed of textual annotations from the current question
  \item the graph composed of annotations about the physical device being used
\end{itemize}
Besides, some user-independent graphs are also part of the repository:
\begin{itemize}
  \item the graph composed of domain-specific knowledge
  \item the graph composed of language-dependent linguistic knowledge
\end{itemize}
The latter graph has been detailed in
section~\ref{sec:personalized-extensibility-linguistic-resources}
page~\pageref{sec:personalized-extensibility-linguistic-resources}.
In the following, \textit{annotation} refers to a node of type
\verb?Annotation? belonging to the set of personalized graphs.
We note $a\in A$ such an annotation, where $A$ is the space of these
annotations.
Note that an annotation is not always bound to actual tokens of $Q$ (for
instance, the annotation related to the physical device is not bound to any
token from $Q$).


\subsubsection{Feature}
\label{sec:patterns-feature}
\begin{figure}[h!]
\centering
\includegraphics[scale=1,trim=50pt 170pt 320pt 170pt]{img/parse-graph}
\caption{Example of the parse tree generated for the question ``Top 5
middle-aged customers in my city''}
\label{fig:patterns-query-graph}
\end{figure}


A feature $f_i:Q\rightarrow A$, $f_i\in\mathcal{F}$, is a mapping from the
query to the set of annotations for the feature $i$.
The set of all annotations given a query $Q$ for the feature $i$ is given by
$f_i(Q)$. In the case where an annotation $a\in f_i(Q)$ is not bound to the
query $Q$, $a$ is independent from $Q$ (but is still user-dependent).
The number of annotations in the query $Q$ for the feature $i$ is
given by $\left|f_i(Q)\right|$.
For example, let the first feature be $f_1:t\mapsto t$, i.e. the $id$
(identity) mapping, and the second feature be $f_2:t\mapsto t^\prime$, where
$t^\prime$ is the base form of $t$, i.e. the mapping from terms to their base
forms.
We have thus:
\begin{equation}
f_1(Q)=id(Q)=Q
\end{equation}
and:
\begin{equation}
f_2(Q)=\left\{t^\prime \mid t\in Q\right\}
\end{equation}
A feature can also map to more complex structures (e.g. a tree structure).
Let $f_3$ be the \emph{range} feature and $f_4$ be the \emph{top-$k$} feature.
An illustration of these features was given in
figure~\ref{fig:pattern-running-example}
page~\pageref{fig:pattern-running-example}.

In this case, the \emph{range} feature is a domain-specific rule (called
\emph{custom rule} in the figure) which exports three pieces of information:
\begin{itemize}
  \item {\scriptsize[dimension]}: the related dimension 
  \item {\scriptsize[begin]}: the beginning value of the exported segment of
  text
  \item {\scriptsize[end]}: the ending value of the exported segment of text
\end{itemize}
The \emph{top-$k$} feature exports:
\begin{itemize}
  \item {\scriptsize[order]}: what order will be used (i.e. ascending or
  descending)
  \item {\scriptsize[nb]}: the maximum number of results to be retrieved (i.e.
  the query modifier operator \verb?LIMIT? in MDX)
  \item {\scriptsize[dimension]}: the related dimension
  \item {\scriptsize[measure]}: the related measure
\end{itemize}
Note that in the case of the \emph{top-$k$} feature, the measure may fail to be
matched (``?1'' in figure~\ref{fig:pattern-running-example}), in which case a
\emph{valid} measure will be selected, as explained later on.
The exported items are then rewritten or recomputed.
For instance, in the case of feature $f_4$, ``five'' from ``Top five customers''
can be rewritten as ``5'' (so that it can be processed by the query generator),
and ``customers'' should be normalized to ``customer''.
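The features $f_1$ and $f_2$, together with the rewriting step, can be
sketched as follows; the tiny base-form lexicon is a hypothetical stand-in for
the framework's linguistic resources:
\begin{lstlisting}[language=Python]
Q = ["Top", "5", "middle-aged", "customers", "in", "my", "city"]

def f1(tokens):
    # identity feature: each token annotates itself
    return list(tokens)

# Tiny stand-in lexicon for the normalization/rewriting step.
BASE_FORMS = {"customers": "customer", "five": "5"}

def f2(tokens):
    # base-form feature: map each token to its (lower-cased) base form
    return [BASE_FORMS.get(t.lower(), t.lower()) for t in tokens]
\end{lstlisting}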


Let $\mathcal{F}$ be the set of features considered in a given application.
The above example~(\ref{eq:pattern-running-example}) generates
$|\mathcal{F}|=4$ feature types; the resulting features are summarized in
table~\ref{tab:pattern-feature-annotation}.
\begin{table}[!h]
\centering
\begin{tabular}{lll}\hline
\multicolumn{1}{c}{\textbf{Feature}} & \multicolumn{1}{c}{\textbf{Example}} &
\multicolumn{1}{c}{\textbf{Ann. count}}\\\hline\hline $f_1$ ($id$) & ``Sales''; ``revenue''; ``per''\ldots
& $|f_1(Q)|=8$\\\hline
$f_2$ (base forms) & ``sale''; ``revenue''; ``per''\ldots &
$|f_2(Q)|=|Q|=7$\\\hline
$f_3$ (see figure~\ref{fig:feature3}) & $\emptyset$ & $|f_3(Q)|=0$\\\hline
$f_4$ (see figure~\ref{fig:feature4}) & $\emptyset$ & $|f_4(Q)|=0$\\\hline
\multirow{2}{*}{$f_5$ (data model entities)} & (Sales revenue);
(Revenue) & \multirow{2}{*}{$|f_5(Q)|=4$}\\
& [Quarter]\ldots & \\\hline
$f_6$ (geographic entities) & ``New York'' & $|f_6(Q)|=1$\\\hline
\multirow{2}{*}{$f_7$ (user entities)} & city=``Paris''
 & \multirow{2}{*}{$|f_7(Q)|=x$}\\
& position=``PhD student''\ldots & \\\hline
\end{tabular}
\caption{Features and annotations for the example (\ref{eq:pattern-running-example})}
\label{tab:pattern-feature-annotation}
\end{table}

Note that not all annotations refer to an actual token in the user's
question. For instance, $f_7$ in table~\ref{tab:pattern-feature-annotation} is
independent from $Q$.





\subsubsection{Parse tree}
The parse tree of the query $Q$ can be written: 
\begin{equation}
p(Q)=\left\{f_1(Q),\ldots,f_k(Q)\right\}
\end{equation}
where $f_i$ is the feature $i$.
The parse tree of $Q$ is the set of annotations for all features.
The total number of annotations in the parse tree of $Q$ is given by:
\begin{equation}
\left|p(Q)\right|=\sum_{i=1}^k\left|f_i(Q)\right|
\end{equation}
The parse tree is called \emph{parse graph} in
figure~\ref{fig:pattern-running-example}.
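A minimal sketch of $p(Q)$ and $\left|p(Q)\right|$, with two made-up features:
\begin{lstlisting}[language=Python]
def parse_tree(question, features):
    # p(Q): the annotations of every feature, keyed by feature name
    return {name: f(question) for name, f in features.items()}

def annotation_count(tree):
    # |p(Q)|: the sum of the per-feature annotation counts
    return sum(len(annotations) for annotations in tree.values())

features = {
    "f1": lambda q: q.split(),                       # identity
    "f2": lambda q: [t.lower() for t in q.split()],  # crude base forms
}
\end{lstlisting}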


\subsubsection{Pattern}

\begin{figure}[h!]
\includegraphics[width=\textwidth,trim=50pt 220pt 60pt
200pt]{img/pattern}
\caption{Pattern}
\label{fig:pattern}
\end{figure}

A \emph{pattern} is the mechanism used to translate questions into structured
queries.
It is defined as a set of \emph{constraints} (possibly optional) to be satisfied
by the graphs, plus rules that define how these constraints are translated into
structured queries (composed of slots to be filled with actual data).
Figure~\ref{fig:pattern} is an illustration of a pattern in two parts.
The left-hand side (entitled ``Where'') represents the set of constraints that
the different graphs (e.g. the parse graph) must satisfy. The right-hand side
represents the template of the structured query that will be generated.

The set of constraints defined in a pattern matching the query $Q$ is defined
as follows:
\begin{equation}
t(Q)=\left\{f'_1(Q),\ldots,f'_k(Q)\right\}
\end{equation}
where $\forall i\in[1,k],\ f'_i(Q)\subseteq f_i(Q)$.
In other words, a pattern matching a query $Q$ is defined by a subset of the
annotations of $Q$. Note that $f'_i(Q)$ can be the empty set.
A pattern for $Q$ is a chosen pattern $t_Q$ matching the query $Q$; it must
satisfy:
\begin{equation}
t_Q\in \left\{t \mid t(Q)\neq\emptyset\right\}
\end{equation}
Let $T=\left\{t_{Q_1},\ldots,t_{Q_n}\right\}$ be the set of patterns used in
the context of a domain application.
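Under this definition, checking whether a pattern matches a question amounts to
a per-feature subset test over the annotations; a sketch, with hypothetical
feature names and annotation values:
\begin{lstlisting}[language=Python]
def pattern_matches(required, found):
    # required, found: dicts mapping a feature name to a set of annotations;
    # the pattern matches when every required annotation set is a subset of
    # the annotations found in the parse tree.
    return all(annotations <= found.get(name, set())
               for name, annotations in required.items())
\end{lstlisting}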

\subsubsection{Structured queries}
\emph{Structured query} (or simply \emph{query}) stands for the conceptual
query generated by patterns.
We note $B(t,Q)$ the set of queries generated by the pattern
$t$ for the question $Q$.


\subsubsection{Result}
A \emph{result} is determined by a pattern $t$ and a query $b\in B(t,Q)$. We
note $r=(t,b)$.



\subsection{Implementation}
We present an example pattern in the appendix.
The implementation uses the \textsc{SparQL} language. We have chosen
this language because it is dedicated to expressing constraints (the different
annotations expressed in a pattern are constraints to be satisfied by a parse
tree).
The \textsc{SparQL} pattern is composed of two sections: 
\begin{itemize}
  \item business query definition
  \item constraints definition
\end{itemize}

The first section of the pattern is a serialization of the conceptual
multidimensional query (see section~\ref{sec:model}).
The second section is the pattern itself, which defines the constraints on the
parse tree.
The pattern triggers only if these constraints are satisfied; some of them can
be marked as optional.

The implementation of the pattern also encodes information on how the ranking
should be computed. This ranking relies on a confidence score.
This score is then incorporated in the query graph (defined in the first
section) based on information defined in the second section (the parse tree).
The computation of this confidence score is presented in
section~\ref{sec:pattern-confidence}.

\subsubsection{Parse tree}
The parse tree is implemented as an RDF graph.
Each annotation $a\in f_i(Q)$ for the query $Q$ is a node of type
\verb?Annotation? in the RDF graph.
The annotations are defined by a set of attributes and predicates (having the
annotation as subject):
\begin{itemize}
  \item \url{urn:grepo/query-tree#hasAnnotationType} defines the \emph{type} of
  the annotation. Those types are sub-types of the \verb?Annotation? type. They
  are summarized in table~\ref{tab:annotation-types}.
  \item \url{urn:grepo/query-tree#referencesResource} defines a reference to an
  external resource. For instance, database entities are defined in the data
  model of the data warehouse.
  \item \url{urn:grepo/query-tree#confidence} defines the \emph{confidence} of
  the annotation, i.e. a score that measures how relevant the entity is with
  respect to the user's query. The computation of this score is performed by
  the named entity recognition module.
  \item \url{http://www.w3.org/2000/01/rdf-schema#label} defines the
  \emph{label} of the annotation, i.e. the \emph{normalized} text that carries
  this annotation. The normalization process is further described in
  section~\ref{sec:chapter-personalized}. 
  \item \url{urn:grepo/query-tree#originUri} defines the vendor-specific ID of
  database entities. 
\end{itemize}
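For illustration, one annotation node could be serialized in Turtle as follows;
the referenced resource URI and the confidence value are made up:
\begin{lstlisting}
@prefix qt:   <urn:grepo/query-tree#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# hypothetical annotation node for the text "sales revenue"
_:a1  qt:hasAnnotationType  qt:MeasureAnnotationType ;
      qt:referencesResource <urn:model/measure/sales-revenue> ;
      qt:confidence         "0.87" ;
      rdfs:label            "sales revenue" .
\end{lstlisting}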
\begin{table}[!h]
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{Type}} &
\multicolumn{1}{c}{\textbf{Uri}}\\\hline\hline Dimension &
\verb?DimensionAnnotationType?\\\hline Measure & \verb?MeasureAnnotationType?\\\hline
Member & \verb?DimensionValueAnnotationType?\\\hline
NLP feature &
\verb?NlpFeatureAnnotationType?\\\hline
\end{tabular}
\caption{Annotation types in the parse tree}
\label{tab:annotation-types}
\end{table}
Other predicates and attributes are defined in the case of NLP features,
for instance for describing the kind of NLP feature, what item can be exported,
etc.




\subsubsection{Graph repository}
\label{sec:pattern-graph-repository}
The framework incorporates a ``situation'' manager described
in~\cite{thollotThesis}.
This framework is used to capture and monitor different kinds of events in
context-aware corporate applications. One use-case of the framework is the
modeling of users' situations; this modeling was initially used for offering
personalized recommendations. Another use-case is to provide personalized
search results.
The situation framework brings a so-called ``graph repository''. Every user can
access a set of graphs stored in this repository. The graphs are created by
\emph{providers} that decide who is allowed to access what information.
We have implemented one provider for each origin of annotations.
For instance, all annotations corresponding to data warehouse entities come
from a graph (in the graph repository) that has been generated by a specific
provider, the provider dedicated to data models.
At run-time, all graphs belonging to the user (i.e. containing information
about the user only, and information that the user is allowed to see) are used
together with the parse-tree graph (see above) as a model (in the
sense of RDF) when executing patterns.
In a nutshell, the graph of the parse tree contains annotations about the
user's query, and the other graphs (i.e. those coming from the user's graph
repository) define all entities that have been referenced in the parse tree.

 
\subsubsection{Pattern}
The pattern itself is a \textsc{SparQL} file. The query describes the
constraints that the parse tree must satisfy in its \verb?WHERE? section, while
the internal query representation is encoded in its \verb?CONSTRUCT? section.
The content of the latter section is an RDF serialization of the internal
query representation. In the process, this RDF representation is
de-serialized to get the object representation (Java) of the internal query.
All patterns ($\approx$10) are loaded in memory when the framework is launched.
Pattern matching consists in executing \textsc{SparQL} queries on the graph
repository.
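A deliberately simplified sketch of such a file is shown below; the
\verb?qt:hasMeasure? and \verb?qt:hasDimension? properties are hypothetical,
and real patterns carry many more constraints:
\begin{lstlisting}
PREFIX qt: <urn:grepo/query-tree#>
CONSTRUCT {            # internal query representation (serialized)
  _:q qt:hasMeasure   ?m ;
      qt:hasDimension ?d .
}
WHERE {                # constraints on the parse tree
  ?m qt:hasAnnotationType qt:MeasureAnnotationType .
  ?d qt:hasAnnotationType qt:DimensionAnnotationType .
}
\end{lstlisting}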


\subsubsection{Query logs}
Query logs are used to keep track of users' queries, which results have been
opened, which pattern triggered each result, etc.
These logs are used to compute metrics further detailed in
section~\ref{sec:metrics-query-logs}.
An example of generated query log is reproduced in
listing~\ref{lst:query-log} page~\pageref{lst:query-log}.






\section{Ranking the results}
\label{sec:pattern-ranking}
Usually, more than one pattern matches the parse tree. For instance, some
patterns are more \emph{specific} than others, i.e. their bodies contain more
constraints in terms of annotations.
As we expect the best results to appear first, we need to rank
the results based on several metrics that we present in this section.

\subsection{Confidence}
\label{sec:pattern-confidence}
Let $c_{i,j}\in[0,1]$ be the confidence of an annotation (where $(i,j)$ is the
position of the annotation in the parse tree) and $d_i\in[0,1]$ be a weight
given to the $i$-th feature in the parse tree. Then the confidence of a result
$r=(t,b)$ triggered by pattern $t$ based on these annotations is given by:
\begin{equation}
s_1(r)=\sum_{i,j}\frac{d_ic_{i,j}}{k\left|f'_i(Q)\right|}
\label{eq:confidence-1}
\end{equation}
where $k$ is the number of features in the pattern $t$ and $|f'_i(Q)|$ is the
number of annotations of the $i$-th feature in the pattern $t$.
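Equation~\ref{eq:confidence-1} can be sketched as follows; the feature weights
and confidence values in the usage below are illustrative:
\begin{lstlisting}[language=Python]
def confidence(annotations, weights):
    # annotations: feature name -> list of annotation confidences c_ij
    # weights:     feature name -> feature weight d_i
    k = len(annotations)
    return sum(weights[f] * c / (k * len(cs))
               for f, cs in annotations.items()
               for c in cs)
\end{lstlisting}
For instance, with two features ($k=2$), two annotations of confidence $1.0$
and weight $0.5$ for the first and one annotation of confidence $0.8$ and
weight $1.0$ for the second, the score is $0.125+0.125+0.4=0.65$.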

\subsection{Selectivity}
Selectivity is based on the number $\sigma$ of structured queries that can be
generated given a pattern and a question.
Let $\sigma=\left|B(t,Q)\right|$ be the number of queries that have been
generated by the pattern $t$ corresponding to the result $r=(t,b)$.
Then, confidence based on selectivity can be computed as:
\begin{equation}
s_2=\left\{\begin{array}{ll}\frac{1}{\left|B(t,Q)\right|} & \textnormal{if
}\sigma\neq 0\\
0 & \textnormal{otherwise}
\end{array}\right.
\end{equation} 
The case where $\sigma=0$ is the one where the pattern does not generate any
query (useless pattern).

\begin{table}[!h]
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{Pattern}} &
\multicolumn{1}{c}{$\sigma$}\\\hline\hline 
\verb?1Measure? & $x$\\\hline
\verb?1Measure_1Dimension? & $y$\\\hline
\verb?1Measure_2Dimension? & $z$\\\hline
\end{tabular}
\caption{Example of selectivity metrics for the example question
(\ref{eq:pattern-running-example}) and the dataset presented in chapter~\ref{sec:chapter-personalized}}
\label{tab:pattern-selectivity-example}
\end{table}
All results $r$ triggered by the pattern $t$ will have the same selectivity.
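The selectivity score $s_2$ can be computed directly from the set of generated
queries; a minimal sketch:
\begin{lstlisting}[language=Python]
def selectivity(generated_queries):
    # s2 = 1/|B(t,Q)|, and 0 for a pattern that generates no query
    sigma = len(generated_queries)
    return 1.0 / sigma if sigma else 0.0
\end{lstlisting}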



\subsection{Complexity}
Complexity of a result $r$ corresponds to the number of \emph{entities} in the
query model of this result.
These entities can be dimensions, measures, filters, \ldots

Let $b=\{b_1,\ldots,b_m\}$ be the query decomposed into its $m$ entities s.t.
$r=(t,b)$.
Let $T$ be the set of entity types (dimension, measure, filter, \ldots).
Let $b^\prime=(count_t(b))_{t\in T}$ be the vector representing the number
of entities of type\footnote{Note that $t$ here is not the same $t$ used before
for patterns.} $t$ in $b$ (if the type $t$ is not represented in the query
$b$, the vector item for $t$ has the value $0$).
Then, the complexity of a result $r$ is defined by:
\begin{equation}
s_3(r)=\frac{1}{|T|}\sum_{t\in T}\theta_t b_t^\prime
\end{equation}
where $0<\theta_t<1$ is a weight given to type $t$ and experimentally
determined s.t. $\sum_{t\in T}\theta_t<1$.
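A sketch of $s_3$, with illustrative type weights $\theta_t$:
\begin{lstlisting}[language=Python]
def complexity(entity_types, theta):
    # entity_types: the type of each entity in the query b
    # theta:        weight per entity type (their sum stays below 1)
    counts = {t: entity_types.count(t) for t in theta}
    return sum(theta[t] * counts[t] for t in theta) / len(theta)
\end{lstlisting}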



\subsection{Metrics from query logs}
\label{sec:metrics-query-logs}
The query logs are a rich source of information about implicit user
feedback on the results provided by the system.
We focus on the metrics defined below.








\subsubsection{Popularity}
The popularity of a search result $r=(t,B)$ for user $u$ is defined by:
\begin{equation}
p_u(r)=\frac{t_{u,c}(r)-t_{u,o}(r)}{\max_r(t_{u,c}(r)-t_{u,o}(r))}
\label{eq:popularity-1}
\end{equation}
where $t_{u,c}(r)$ is the time when the result $r$ was closed by user $u$ and
$t_{u,o}(r)$ the time when it was opened.
This metric measures how long a search result has been viewed by the user.
It should be used in conjunction with a threshold: for instance, a user might
jump to another application once she gets the desired result; the metric would
then be extremely high, because she never closed the search result.
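A sketch of equation~\ref{eq:popularity-1}, where the viewing durations come
from the query logs:
\begin{lstlisting}[language=Python]
def popularity(opened, closed, all_durations):
    # normalized viewing time of a result: (t_c - t_o) / max over results
    return (closed - opened) / max(all_durations)
\end{lstlisting}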
%The equation~\ref{eq:popularity-1} leads to the definition of the popularity of
%a pattern, which is the sum of the popularity of all search results triggered
% by the pattern $t$:
%\begin{equation}
%p_u(t)=\frac{1}{|R|}\sum_{r\in R}p_u(r)
%\label{eq:popularity-2}
%\end{equation}
%where $R=\left\{(t,B)\ B\neq\emptyset\right\}$.




\TODO{As for the ranking: what to do when no popularity information is
available (cold start)? Use the mean value.}

\subsubsection{Co-occurrency}
Co-occurrency measures how likely different database entities are to appear
together in a query.
This measure is also used as a ranking function in the context of the
auto-completion presented in section~\ref{sec:chapter-personalized} and further
detailed in~\cite{dasfaa12}.
The assumption behind this metric is that a pattern should get a higher rank
if it generates a query composed of (database) entities with high co-occurrency
(i.e. entities that often appear together in the user's queries).
The co-occurrency between two database entities $e_1$ and $e_2$ is given by the
Jaccard index of the sets $occ_u(e_1)$ and $occ_u(e_2)$:
\begin{equation}
cooc_u(e_1,e_2)=J(occ_u(e_1),occ_u(e_2))=\frac{\left|occ_u(e_1)\cap
occ_u(e_2)\right|}{\left|occ_u(e_1)\cup occ_u(e_2)\right|}
\end{equation}
where $occ_u(e)$ is the set of queries that contain the
entity $e$ (computed from the query logs of user $u$).
The co-occurrency of all entities in a structured query $B$
is given by:
\begin{equation}
cooc_u(B)=\frac{1}{\binom{\left|B\right|}{2}}\sum_{b,b'\in
B}cooc_u(b,b')
\end{equation}
The co-occurrency metric for a result is defined as follows:
\begin{equation}
cooc_u(r)=\frac{1}{|B'|}\sum_{B\in B'}cooc_u(B)
\end{equation}
where $B'=\left\{B \mid r=(t,B)\right\}$.
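The co-occurrency computation can be sketched as follows; the logged query
sets in the usage below are illustrative:
\begin{lstlisting}[language=Python]
from itertools import combinations

def jaccard(a, b):
    # Jaccard index of two sets of logged queries
    return len(a & b) / len(a | b) if a | b else 0.0

def cooc(entities, occ):
    # average pairwise co-occurrency of the entities of a structured query;
    # occ: entity -> set of logged query ids containing that entity
    pairs = list(combinations(entities, 2))
    return sum(jaccard(occ[e1], occ[e2]) for e1, e2 in pairs) / len(pairs)
\end{lstlisting}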

\subsubsection{Implicit user preference}
The popularity metric (see equation~\ref{eq:popularity-1}) is used as a weight
for the co-occurrency metric to define users' implicit preferences:
\begin{equation}
pref_{u,impl}(t)=\frac{1}{|R|}\sum_{r\in R}\alpha_rp_u(r)cooc_u(r)
\label{eq:user-preference-1}
\end{equation}
where $R=\left\{r=(t,B)\right\}$ and $\alpha_r$ is a parameter to be
experimentally determined s.t. $\sum_{r\in R}\alpha_r=1$.


\subsubsection{Collaborative user preference}
Ranking search results meets a similar goal as providing recommendations (in
the sense of recommender systems).
The metric presented in equation~\ref{eq:user-preference-1} poses a problem for
\emph{cold-start} users, i.e. those new to the system: such users have not
triggered any search results from which co-occurrencies could be computed.
Collaborative recommender systems have introduced the contribution of other
users into the item scoring function to improve the system's coverage and
enable the exploration of resources previously unknown (or unused) by the
user. We follow the simple linear combination of the user-specific value and
the average over the set of all users.
Instead of considering ``the whole world'', where all users have the same role
(weight), trust-based recommender systems illustrate the importance of
considering users' social networks, e.g., favoring users close to the current
user.
Let $SN(u)$ be the set of users in user $u$'s social network, filtered in order
to keep only users up to a certain maximum distance.
The refined user preference can thus be rewritten as:
\begin{equation}
pref_{impl}(u,t)=\alpha\cdot pref_{u,impl}(t)+\frac{\beta}{|SN(u)|}\sum_{u'\in
SN(u)}\frac{pref_{u',impl}(t)}{d(u,u')}
\label{eq:implicit-preference}
\end{equation}
where $\alpha$ and $\beta$ are to be experimentally adjusted s.t. $\alpha
+\beta =1$, and $d(u,u')$ denotes the distance between users $u$ and $u'$.

\subsubsection{Explicit user preferences}
Explicit user preferences have not yet been implemented in the system (see
chapter~\ref{sec:chapter-personalized}). 
This kind of preference is expressed by users' ratings:
\begin{equation}
pref_{u,expl}(t)=\left\{\begin{array}{l}rating_{u,r}\ \textnormal{if $u$ has
already rated $r$}\\
\overline{rating_u}\ \textnormal{otherwise}\end{array}\right.
\label{eq:explicit-preference}
\end{equation}
where $rating_{u,r}\in[0,1]$
is the rating of user $u$ for the result $r$ and $\overline{rating_u}$ is the
average rating given by $u$.

\subsubsection{User preference}
From both implicit (collaborative) user preference and explicit user preference
defined equations~\ref{eq:implicit-preference} and~\ref{eq:explicit-preference}, 
we define the global user preference as a simple linear combination of
$pref_{impl}(u,t)$ and $pref_{u,expl}(t)$:
\begin{equation}
s_4=\alpha\cdot pref_{impl}(u,t)+\beta\cdot pref_{u,expl}(t)
\end{equation}
where $\alpha$ and $\beta$ are coefficients to be experimentally determined.



\subsection{Overall measure}
We combine the different scores $s_1$ to $s_4$ to obtain the final score used as a
ranking metric. 
The scores $s_2$ and $s_3$ depend on the question $Q$.
In our experiments, we have combined these metrics as a linear combination with
equal weights.
Please refer to chapter~\ref{sec:chapter-evaluation} for the results that we have
obtained.
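As an illustration, the equal-weight combination and the ranking it induces can be sketched as follows; the candidate representation (label plus score tuple) is illustrative:

```python
def overall_score(scores):
    """Equal-weight linear combination of the scores s1..s4."""
    return sum(scores) / len(scores)

def rank(candidates):
    """Rank candidate interpretations by decreasing overall score.

    candidates -- list of (label, (s1, s2, s3, s4)) pairs; the labels
    are placeholders for the generated structured queries.
    """
    return sorted(candidates, key=lambda c: overall_score(c[1]), reverse=True)
```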







\section{Summary \& discussion}
In this chapter, we have presented the core techniques used to translate users'
questions (formulated in NL) into structured queries (e.g. SQL or \textsc{SparQL}).
Our approach consists in defining a set of \emph{patterns} inspired by the IE community.
Patterns define a set of constraints that must be satisfied by users' questions,
by the data (the data models of the warehouses and the data themselves) and by
the knowledge bases (domain knowledge, and linguistic knowledge for
language-dependent NL expressions). These constraints are associated with a template of
conceptual multidimensional query (which is then translated into each target query
language, as explained in chapter~\ref{sec:chapter-personalized}).
Moreover, we have explained how we define a ranking for the results that have
been generated by these patterns.
The ranking function combines different metrics which take into consideration
the confidence of the named-entity recognition involved in the mapping of words
from the user's question to known terms, the complexity of the pattern (i.e.
the number of entities that are part of the generated structured
query) and the specificity of the pattern (i.e. whether a
pattern is specific or generic).


However, the patterns used to translate users' questions into structured queries
are costly resources.
Therefore, we would like to provide in the following some thoughts on how to acquire such
patterns. 
Developing additional patterns (for instance, for a new application domain, or
in order to improve the linguistic coverage of the system) is not a
straightforward task (see the example pattern reproduced in
appendix~\ref{sec:appendix-pattern}). To ease this task, two classic approaches
are:
\begin{itemize}
  \item learning approaches, i.e. algorithms that generate patterns automatically
  \item authoring tools, i.e. tools (generally user-friendly user
  interfaces) with which users can easily create new patterns themselves
\end{itemize}
This aspect of the problem has not been fully investigated, since we have
focused on the implementation of the end-to-end system (see
section~\ref{sec:conclusion}).
We develop both approaches below, as a starting point for
future work on the subject.

\subsection{Learning approaches}
\subsubsection{Case-based reasoning}

Figure~\ref{fig:cbr} is an illustration of the case-based reasoning approach for
patterns.
\begin{figure}
\centering
\begin{tikzpicture}

\tikzstyle{box} = [
	draw,
	rectangle,
	rounded corners,
	minimum height=20pt,
	fill=white,
	fill opacity=0.5,
	text opacity=1
]
\node[
	draw,
	ellipse,
	minimum width=170pt,
	minimum height=100pt,
	line width=10pt,
	color=black!20
](ell1) {};
\node[
	box,
	minimum width=30pt
](new-case) at (ell1.north west) {New case};
\node[
	box,
	minimum width=40pt
](similar-cases) at (ell1.north east) {};
\node[
	box,
	minimum width=40pt,
	yshift=-2pt,
	xshift=-2pt
](similar-cases-bis) at (similar-cases) {};
\node[
	below=0pt of similar-cases-bis.north
](similar-cases-label-1) {Similar};
\node[
	above=0pt of similar-cases-bis.south
](similar-cases-label-2) {cases};
\node[
	box,
	right=10pt of similar-cases
](new-case-2) {New case};
\node[
	box,
	minimum width=30pt
](solved-case) at (ell1.south east) {Solved case};
\node[
	box,
	minimum width=30pt
](revised-case) at (ell1.south west) {Revised case};
\node[
	below=-2pt of ell1.north
](retrieve) {Retrieve};
\node[
	above=-2pt of ell1.south
](revise) {Revise};
\node[
	rotate=-90,
	yshift=-6pt
](reuse) at (ell1.east) {Reuse};
\node[
	rotate=90,
	yshift=-6pt
](retain) at (ell1.west) {Retain};
\node [
	cloud, 
	draw,
	cloud 
	puffs=10,
	cloud puff arc=120, 
	aspect=2, 
	inner ysep=1em,
	left=20pt of new-case
] (problem) {};
\node[](problem-label) at (problem) {\textit{Problem}};
\path[->](problem) edge (new-case);
\node [
	cloud, 
	draw,
	cloud 
	puffs=10,
	cloud puff arc=120, 
	aspect=2, 
	inner ysep=1em,
	right=20pt of solved-case
] (solution) {};
\node[] (solution-label) at (solution) {\textit{Solution}};
\path[->](solved-case) edge (solution);
\end{tikzpicture}
\caption{Case-based reasoning approach applied to the problem of
pattern learning}
\label{fig:cbr}
\end{figure}
In this approach, the problem to solve can be formulated as follows:
\begin{leftbar}
``Given a parse tree $p(Q)$ for the question $Q$, which features $f_i(Q)$ shall
be considered in the new pattern and, for each of these features, which
annotations $a\in f_i(Q)$ from the parse tree shall be included in the pattern?
Finally, which conceptual query shall be associated with the pattern being
built?''
\end{leftbar}

\paragraph{New case}
A case thus corresponds to the selection of
\begin{itemize}
  \item the relevant features (i.e. the choice
of $F\subset \{f_1(Q),\ldots,f_k(Q)\}$, where $k$ is the number of features in
$p(Q)$)
	\item a mapping from the chosen features to the annotations (i.e. $F\rightarrow A$)
	\item a generated conceptual query, referred to as $B$
\end{itemize}

\paragraph{Retrieve}
The ``Retrieve'' step consists in retrieving similar cases.
This retrieval is based on similarity measures.
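As a sketch, assuming a case is represented by the set of feature names it selects (an illustrative simplification of the triple $(F, F\rightarrow A, B)$), the retrieval step could rely on a Jaccard-style set similarity:

```python
def jaccard(a, b):
    """Jaccard similarity between two feature sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def retrieve(new_case, case_base, k=3):
    """Return the k cases most similar to new_case.

    new_case, case_base -- a feature set and a list of feature sets
    (illustrative case representation).
    """
    return sorted(case_base, key=lambda c: jaccard(new_case, c), reverse=True)[:k]
```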



\subsubsection{Genetic approach}
\label{sec:pattern-acquisition-genetic}
We have investigated the case where there might be a large number of patterns in
the repository. This case is relevant when the number of patterns increases over
time, for instance through machine learning algorithms.
In this case, we need a classification to decide which patterns are closer to
the user's question. 
The approach that we have investigated is represented in
figure~\ref{fig:pattern-acquisition-genetic-problem}.
\begin{figure}
\centering
\includegraphics[width=\textwidth,trim=150pt 540pt 100pt 120pt]{img/genetic}
\caption{Classification problem involved for a large number of patterns}
\label{fig:pattern-acquisition-genetic-problem}
\end{figure}
In this approach, the classification algorithm must first be determined. This
classification is then used to determine the translation of queries into
patterns, using a genetic algorithm and a learning base.

\paragraph{Constraints}
Translation rules must satisfy the following constraints:
\begin{itemize}
  \item examples must be correctly translated
  \item generated patterns should be valid
\end{itemize}



\subsection{Authoring tool}
Authoring tools have been used quite recently in interfaces for Q\&A (one of the
most representative related works is \textsc{NaLIX}~\cite{NALIX}).
The benefit of such tools is that they permit non-expert users to enrich the
system with new semantic rules. 
Typically, users are assisted graphically in creating the semantic mappings, and
examples of known questions/answers are displayed to the user, so that she can
validate the rules she is creating. 
In our proposal (see figure~\ref{fig:pattern-authoring}), the SPARQL pattern
would be generated by the tool; the user would simply express graphically a set of constraints.
\begin{figure}
\centering
\includegraphics[width=11cm]{img/authoring-1}
\caption[Authoring tool]{Authoring tool: user loads unresolved questions}
\label{fig:pattern-authoring}
\end{figure}




\subsubsection{Identification of common annotations}
\label{sec:pattern-authoring-step1}
First, the user is invited to type a series of questions (or to import them from an
external file). All these questions are supposed to be captured by the same
pattern. 
The drawback is that the user is supposed to know the data, so that she can
think of queries (the system can also keep track of unresolved questions that
have been asked in the past, and suggest those questions).
For example, the questions ``Revenue and margin in New York in Q2'' and ``Sales
revenue and margin in Texas in Q3'' seem to share the same pattern (i.e. two
measures and two filters).
The set of questions is parsed, and the annotations (with common annotation
types) are used as annotations in the shared pattern. 
At the end of this process, the user can remove some annotations if she thinks
that they are not relevant for the current pattern.
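This step can be sketched as follows, assuming (for illustration) that the parser returns, for each question, the set of its annotation types:

```python
def common_annotation_types(parsed_questions):
    """Annotation types shared by all questions of the candidate pattern.

    parsed_questions -- list of sets of annotation types, one per question
    (illustrative representation of the parser's output).
    """
    if not parsed_questions:
        return set()
    common = set(parsed_questions[0])
    for types in parsed_questions[1:]:
        common &= types  # keep only types present in every question
    return common
```

The user can then prune this shared set, as described above.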


\subsubsection{Graphic construction of the structured query}
\label{sec:pattern-authoring-step2}
The semantic information in the pattern lies in the mapping between the set of
annotation (the \verb?WHERE? section of the pattern) and the generated
structured query (the \verb?CONSTRUCT? section of the pattern).
To support users in this task, the graphic editor guides users in designing the
conceptual query, based on the annotations that have been identified in the
previous step.
\begin{figure}
\centering
\includegraphics[width=12cm]{img/authoring-2}
\caption{Graphic construction of the pattern}
\label{fig:pattern-authoring-2}
\end{figure}
Figure~\ref{fig:pattern-authoring-2} shows a mockup of this graphic editor.



\subsubsection{Validation of the candidate pattern}
Once the user has finished designing the conceptual query, the pattern should be
validated to check that the results are the ones that the user expects.
The questions used in the first step (see
section~\ref{sec:pattern-authoring-step1}) are used to test the candidate
pattern.
The validation process, for each question, is as follows:
\begin{enumerate}
  \item check that the pattern matches the question
  \item execute the query and display the corresponding chart
\end{enumerate}
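The two steps above can be sketched as a validation loop; \texttt{pattern\_matches} and \texttt{execute} are placeholders for the pattern matcher and the query execution engine:

```python
def validate(pattern_matches, execute, questions):
    """Sketch of the validation loop for a candidate pattern.

    pattern_matches(q) -- True if the candidate pattern matches question q
    execute(q)         -- runs the generated query, returns its result
    Returns (question, result-or-None) pairs for the user to inspect;
    None flags a question the pattern failed to match.
    """
    report = []
    for q in questions:
        report.append((q, execute(q) if pattern_matches(q) else None))
    return report
```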
If the user is not satisfied with the pattern execution, she is able to get back
to the first step (see
section~\ref{sec:pattern-authoring-step1}) or the second one (see
section~\ref{sec:pattern-authoring-step2}).
Finally, when she is satisfied with the generated pattern, she can add it to
the pattern repository.

\stopcontents[chapters]

\TODO{Compare the ranking of the pattern with the one in
\textsc{Safe}~\cite{Orsi:2011:KCS:1951365.1951390}.}

\end{document}

