\chapter{Query Modeling}
\label{sec:chapter-modeling}
\startcontents[chapters]
\Mprintcontents


% introductive words
Multidimensional models have been introduced in chapter~\vref{sec:introduction}.
These models help users express queries, since the terms
used to describe model \emph{entities} are those used by the community
(e.g. business terms), whereas logical terms (e.g. the table names in a
relational database schema) are manipulated by database administrators.

A model can be represented as $\mathcal{M}=(A,H,M)$, where $A=\{a_i\}$
is a set of attributes (i.e. dimensions and attributes of dimensions),
$H=\{h_i\}$ is a set of hierarchical levels and $M=\{m_i\}$ is a set of
measures.

\section{Fact tables and functional dependencies}
\label{sec:modeling-fact-tables}
The execution of a multidimensional query results in a set of \emph{facts}.
Those facts are extracted and aggregated from the fact tables.
How these operations are performed depends on the implementation of the database
system.
In the following, a fact is written $f$ and graphically displayed as a row, where
each column corresponds to an attribute at a given level or to a measure. In
the former case, the cell contains the selected attribute value at the given
level (or \verb?Members? if no value is selected, i.e. if the attribute is not
used as a filter). In the latter case, the cell contains the measure's value
corresponding to the filters expressed (or not) in the other cells of the row.
\begin{table}[!h]
\centering
\begin{tabular}{cclr}\hline
\textbf{Fact} & \textbf{[Year]} &
\multicolumn{1}{c}{\textbf{[Title]}} &
\multicolumn{1}{c}{\textbf{(Sales amount quota)}}\\\hline\hline 
$f_1$ & 2001 & Pacific Sales Manager & 383200\\\hline
$f_2$ & 2001 & European Sales Manager & 1124400\\\hline
$f_3$ & 2001 & North American Sales Manager & 1660050\\\hline
$f_4$ & 2001 & \verb?Members? & 3167650\\\hline
\end{tabular}
\caption{Facts answering the question ``Sales target per department in 2001''}
\label{tab:modeling-facts}
\end{table}
Table~\vref{tab:modeling-facts} provides an example of three facts, with the
dimension $[\textnormal{Title}]$ and the measure $(\textnormal{Sales
Amount Quota})$ from the dataset AdventureWorks\footnote{The
dataset can be downloaded at
\url{http://msdn.microsoft.com/en-us/library/ms124623(v=sql.105).aspx}.}.
The last row of the table is an example of aggregation of the facts along the
$[\textnormal{Title}]$ dimension (\verb?Members? stands for a selection of all
possible members of the corresponding dimension). This fact ($f_4$) would be
part, in our case, of an aggregate fact table. There can be more than one
attribute and more than one measure for each fact (i.e. on each row of the fact
table).
Let $f_i$ be the $i$-th fact in the table~\vref{tab:modeling-facts} (i.e. the
$i$-th row).
We write $f_i\cdot m_j$ for the value of the measure $m_j$ for the fact $f_i$ and
$f_i\cdot a_j$ for the selected value of the attribute $a_j$ (a notation similar
to the one used by Golfarelli et al.~\cite{Golfarelli:2011:MAE:1990771.1991001}).
Thus, according to table~\vref{tab:modeling-facts}, $f_1\cdot
(\textnormal{Sales amount quota})=383200$; $f_1\cdot
[\textnormal{Title}]=\textnormal{`Pacific Sales Manager'}$;
$f_4\cdot[\textnormal{Year}]=2001$ and
$f_4\cdot[\textnormal{Title}]=\texttt{Members}$.
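This notation can be mirrored in a small executable sketch (illustrative only, not part of our implementation), with facts as dictionaries and a sentinel value standing for \verb?Members?:

```python
# Sketch (hypothetical, not the thesis implementation): facts as
# dictionaries keyed by attribute or measure name, with a sentinel
# standing for the selection of all members.
MEMBERS = "Members"

f1 = {"Year": 2001, "Title": "Pacific Sales Manager",
      "Sales amount quota": 383200}
f4 = {"Year": 2001, "Title": MEMBERS, "Sales amount quota": 3167650}

def value(fact, entity):
    """Return f.e, the value of the attribute or measure `entity` for `fact`."""
    return fact[entity]

# value(f1, "Sales amount quota") == 383200, as in the table above
```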

Dimensions and measures that appear in a single fact table are said
\emph{compatible} or \emph{functionally dependent}, because they can be used
together in a database query (\ac{MDX} or \ac{SQL}).
Let $E$ be the set of entities (attributes, hierarchy levels and measures).
Functional dependency $\mathcal{D}$ is an equivalence relation:
\begin{itemize}
  \item $\mathcal{D}$ is reflexive: $\forall e\in E\ e\mathcal{D}e$
  \item $\mathcal{D}$ is symmetric: $\forall e,f\in E\ e\mathcal{D}f\Rightarrow
  f\mathcal{D}e$
  \item $\mathcal{D}$ is transitive: $\forall e,f,g\in E\
  \left\{\begin{array}{l}e\mathcal{D}f\\f\mathcal{D}g\end{array}\right.\Rightarrow
  e\mathcal{D}g$
\end{itemize}
We write $d(e)$ for the equivalence class to which entity $e$ belongs.
Thus, $e\in d(f)$ means that $e$ belongs to the set of entities with which $f$
is compatible.
Compatible entities are of interest, for instance to make sure that suggested
entities can be used to generate a valid query (this is illustrated in an
application for auto-completion in
section~\vref{sec:evaluation-experiments-auto-completion}).
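As an illustration, the equivalence classes $d(e)$ can be computed from the fact tables by merging tables that share an entity (transitivity). The following sketch is ours and uses a hypothetical fact-table layout:

```python
# Sketch (hypothetical fact-table layout, for illustration only): entities
# appearing in the same fact table are compatible, and transitivity merges
# tables that share an entity into a single equivalence class d(e).
fact_tables = [
    {"Year", "Title", "Sales amount quota"},
    {"Year", "City", "Sales revenue"},     # shares [Year] with the first
    {"Product", "Stock level"},            # disjoint from the others
]

def equivalence_classes(tables):
    """Partition entities into compatibility classes."""
    classes = []
    for table in tables:
        merged = set(table)
        keep = []
        for c in classes:
            if c & merged:       # shared entity: merge (transitivity)
                merged |= c
            else:
                keep.append(c)
        classes = keep + [merged]
    return classes

def d(entity, classes):
    """Equivalence class of `entity`."""
    return next(c for c in classes if entity in c)

classes = equivalence_classes(fact_tables)
# [Title] and [City] end up compatible (through [Year]);
# (Stock level) does not.
```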



\section{Multidimensional queries and preferences}
A multidimensional query can be expressed as a set of dimensions (at a specific
hierarchy level), a set of measures, constraints (or filters) on members
(e.g. selection of a subset of the members, or of all members) or on
measures (e.g. selection of all members for which a specific measure has values
in a given range), and query modifiers (such as ordering clauses or the
truncation operator).

In the following, we present a formal representation of a query in
section~\vref{sec:modeling-mdx-formal-representation}. Then, we present how
this formal representation can be automatically translated into a common
representation in section~\vref{sec:modeling-mdx-mdx}.



\subsection{Formal representation of a query}
\label{sec:modeling-mdx-formal-representation}
The following structure:
\begin{equation}
Q=\left[\begin{array}{lcl}
dimensions & = & \{[\textnormal{Department}]\}\\
measures & = & \{(\textnormal{Sales target})\}\\
filters & = & \{[\textnormal{Year}]=2001\}\\
truncation & = & \emptyset\\
ordering & = & [([\textnormal{Department}],(\textnormal{Sales
target}),\uparrow)] \end{array}\right]
\label{eq:modeling-conceptual-query-1}
\end{equation}
stands for a conceptual representation of a multidimensional query.
The different attributes of the structure are:
\begin{itemize}
  \item dimensions: (unordered) set of dimensions
  \item measures: (unordered) set of measures
  \item filters: (unordered) set of filters. A filter is a restriction on the
  resultset, based on a dimension (selection of some dimension values) or on a
  measure (where the measure value must verify a specific condition, like being
  in a range of values)
  \item truncation: ordered list of truncation clauses. In some cases, only
  part of the resultset must be returned.
  In particular, a truncation clause is used in conjunction with an ordering
  clause in queries like `top $n$', where $n$ is the number of facts to be
  returned (see the \verb?LIMIT? operator in \ac{MDX}).
  \item ordering: the facts from the resultset must be ordered along a
  dimension or measure, in ascending order ($\uparrow$) or descending order
  ($\downarrow$).
\end{itemize}
We further describe this structure in the following.
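As an illustrative sketch (the field names follow the structure above; the Python rendering itself is ours), the conceptual query~(\ref{eq:modeling-conceptual-query-1}) could be encoded as:

```python
# Sketch of the conceptual query structure; field names follow the formal
# representation, the encoding of filters and ordering items as tuples is
# a hypothetical choice made for this illustration.
from dataclasses import dataclass, field

@dataclass
class Query:
    dimensions: set = field(default_factory=set)    # unordered
    measures: set = field(default_factory=set)      # unordered
    filters: set = field(default_factory=set)       # unordered
    truncation: list = field(default_factory=list)  # ordered
    ordering: list = field(default_factory=list)    # ordered

q = Query(
    dimensions={"[Department]"},
    measures={"(Sales target)"},
    filters={("[Year]", "=", 2001)},
    ordering=[("[Department]", "(Sales target)", "asc")],
)
```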

\subsubsection{Analysis axes or dimensions}
Dimensions are used to describe and aggregate facts (see
section~\vref{sec:modeling-fact-tables}). Some dimensions belong to the same
hierarchy (i.e. they are different levels of the same hierarchy, as introduced
in section~\vref{sec:introduction:functional-dependencies}
).
For instance, there are usually time and geographic hierarchies in
multidimensional models. The time hierarchy can be described as
follows:
\begin{align}
\fbox{Year}\rightarrow\fbox{Quarter}\rightarrow\fbox{Month}\rightarrow\fbox{Week}\rightarrow\fbox{Day}\
\ldots
\end{align}
and similarly the geographic hierarchy can be described as:
\begin{align}
\fbox{Region}\rightarrow\fbox{Country}\rightarrow\fbox{State}\rightarrow\fbox{County}\rightarrow\fbox{City}\rightarrow\fbox{Street}\
\ldots
\end{align}
Note that hierarchies can be seen as ordered sets of dimensions, where
successive dimensions share a hierarchical relation.
As a result, there is not a unique decomposition of hierarchies into hierarchy
levels (the decomposition depends on the chosen hierarchical relation).

In our model, we introduce a structure called \emph{axis group}, which
represents a branch or a \emph{segment} of a hierarchy in the formal
representation. An axis group is broken down into \emph{headers}, which
are in turn broken down into \emph{header items}, as represented in
figure~\vref{fig:modeling-axis-group}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\node[draw,rectangle,minimum width=270pt,minimum height=10pt] (axis-group-1) {};

\node[text=black!80,below=-2pt,font=\small] at (axis-group-1.north) {Axis group
1};

\node[draw,rectangle,minimum width=50pt,minimum
height=30pt,below=0pt,xshift=-110pt] at (axis-group-1.south) (header-1) {};

\node[text=black!80,below=-2pt,font=\small] at (header-1.north) {Header 1};

\node[text=black,below=7pt] at (header-1.north) {FR};

\node[text=black,below=15pt] at (header-1.north) {Paris};

\node[draw,rectangle,minimum width=80pt,minimum
height=30pt,below=0pt,xshift=-45pt] at (axis-group-1.south) (header-2) {};

\node[text=black!80,below=-2pt,font=\small] at (header-2.north) {Header 2};

\node[text=black,below=7pt] at (header-2.north) {FR};

\node[text=black,below=15pt] at (header-2.north) {Levallois-Perret};

\node[draw,rectangle,minimum width=50,minimum height=30pt,below=0pt,xshift=20pt]
at (axis-group-1.south) (header-3) {};

\node[text=black!80,below=-2pt,font=\small] at (header-3.north) {Header 3};

\node[text=black,below=7pt] at (header-3.north) {DE};

\node[text=black,below=15pt] at (header-3.north) {Berlin};

\node[draw,rectangle,minimum width=50,minimum
height=30pt,below=0pt,xshift=70pt] at (axis-group-1.south) (header-4) {};

\node[text=black!80,below=-2pt,font=\small] at (header-4.north) {Header 4};

\node[text=black,below=7pt] at (header-4.north) {DE};

\node[text=black,below=15pt] at (header-4.north) {Dresden};

\node[draw,rectangle,below=0pt,xshift=115pt,minimum width=40pt,minimum
height=30pt] at (axis-group-1.south) (header-7) {};

\node[text=black!80,font=\small,below=-2pt] at (header-7.north) {Header $n$};

\node[text=black!80,font=\small,below=7pt] at (header-7.north) {Item 1};

\node[text=black!80,font=\small,below=20pt] at (header-7.north) {\textbf\ldots};

\node[draw,rectangle,left=30pt,minimum width=10pt, minimum
height=80pt,yshift=-75pt] at (axis-group-1.west) (axis-group-2) {};

\node[text=black!80,font=\small,rotate=90] at (axis-group-2) {Axis group 2};

\node[draw,rectangle,minimum width=30,minimum height=40,right=0pt,yshift=20pt]
at (axis-group-2.east) (header-5) {};

\node[text=black!80,font=\small,rotate=90,yshift=10pt] at
(header-5) {Header 1};

\node[text=black,rotate=90,yshift=-3pt] at (header-5) {2010};

\node[draw,rectangle,minimum width=30,minimum height=40,right=0pt,yshift=-20pt]
at (axis-group-2.east) (header-6) {};

\node[text=black!80,font=\small,rotate=90,yshift=10pt] at
(header-6) {Header 2};

\node[text=black,rotate=90,yshift=-3pt] at (header-6) {2011};

\node[draw,rectangle,right=0pt,minimum width=270pt,minimum
height=80pt,yshift=-20pt] at (header-5.east) (body) {};

\node at (body) {\textbf\ldots};

\end{tikzpicture}
\caption{Decomposition of \emph{axis groups} in the formal query representation}
\label{fig:modeling-axis-group}
\end{figure}
We have defined this structure because it resembles a bi-dimensional
table (or \emph{crosstable}), whose two dimensions are the two axis groups.
There can be more than two axis groups in our structure. Rendering data
structures with more than two axis groups is illustrated with an example in the
appendix, section~\vref{sec:appendix-search-results}.
Each axis group is decomposed into \emph{headers}, which correspond to the
selection of members for the different dimensions involved in the hierarchy. For
instance, in figure~\vref{fig:modeling-axis-group}, \verb?Header 1? stands for
the selection of country `FR' and city `Paris' (because
$[\textnormal{Country}]$ and $[\textnormal{City}]$ belong to the same
hierarchy).
Then, each header is decomposed into a series of items, namely \emph{header
items}. Those items correspond to the different hierarchy levels of the
hierarchy associated with the axis group.
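As a sketch (with a hypothetical member-to-parent mapping of our own), the headers of the geographic axis group in figure~\vref{fig:modeling-axis-group} can be built as follows:

```python
# Sketch: headers of an axis group for a geographic hierarchy -- one header
# per selected city, each header carrying its header items from the
# coarsest level (Country) to the finest (City). The member-to-parent
# mapping below is hypothetical.
country_of = {"Paris": "FR", "Levallois-Perret": "FR",
              "Berlin": "DE", "Dresden": "DE"}

def headers(cities):
    return [(country_of[city], city) for city in cities]

axis_group = headers(["Paris", "Levallois-Perret", "Berlin", "Dresden"])
# axis_group[0] is ("FR", "Paris"): the two header items of Header 1
```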

\subsubsection{Measures}
Measure values are numeric values (in general integers or doubles) that
can be aggregated along the different dimensions used in the model. A measure is
thus always associated with an aggregation function (which can be \verb?sum? or
\verb?avg?, but can also be more complex and defined in the data schema).
In figure~\vref{fig:modeling-axis-group}, which serves as a visual
representation, measure values would be represented in imaginary
multidimensional cells in the body of the table. The dimension of these cells
would be the number of measures.
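A minimal sketch of such an aggregation, with a pluggable aggregation function, reproducing the aggregate fact $f_4$ of table~\vref{tab:modeling-facts} (illustrative only):

```python
# Sketch: aggregating a measure along a dimension with a pluggable
# aggregation function (here `sum`; an `avg` or a schema-defined function
# would plug in the same way).
facts = [
    {"Title": "Pacific Sales Manager", "Sales amount quota": 383200},
    {"Title": "European Sales Manager", "Sales amount quota": 1124400},
    {"Title": "North American Sales Manager", "Sales amount quota": 1660050},
]

def aggregate(facts, measure, agg=sum):
    """Aggregate `measure` over `facts` with the given function."""
    return agg(f[measure] for f in facts)

total = aggregate(facts, "Sales amount quota")  # the value of fact f4
```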

\subsubsection{Filters}
Filters are restrictions of a resultset based on dimensions' or measures'
values.
For instance, the query~(\vref{eq:conceptual-query-2}):
\begin{equation}
Q=\left[\begin{array}{lcl}
dimensions & = & \{[\textnormal{Year}]\}\\
measures & = & \{(\textnormal{Sales revenue})\}\\
filters & = & \{\textnormal{City}\in \{\textnormal{`New
York'},\textnormal{`Boston'}\}\}\\
truncation & = & \emptyset\\
ordering & = & [([\textnormal{Year}],(\textnormal{Sales revenue}),\uparrow)]
\end{array}\right]
\label{eq:conceptual-query-2}
\end{equation}
leads to the database query listing~\vref{lst:modeling-mdx-filters}:
\begin{lstlisting}[caption={SQL
query generated from conceptual
query~(\ref{eq:conceptual-query-2})},captionpos=b,label=lst:modeling-mdx-filters]
SELECT
 sum(Table__2."AMOUNT_SOLD"), Table__7."YR"
FROM
 "EFASHION"."CALENDAR_YEAR_LOOKUP" Table__7 
INNER JOIN 
 "EFASHION"."SHOP_FACTS"  Table__2 ON (
  Table__2."WEEK_ID"=Table__7."WEEK_ID"
 )
INNER JOIN 
 "EFASHION"."OUTLET_LOOKUP"  Table__1 ON (
  Table__1."SHOP_ID"=Table__2."SHOP_ID"
 )
WHERE
 Table__1."CITY" IN ('New York', 'Boston')
GROUP BY
 Table__7."YR"
ORDER BY 2
\end{lstlisting}
In this example, the query is composed of one filter which restricts possible
values of dimension $[\textnormal{City}]$ to `New York' and `Boston'.
It is possible to have more than one filter, and filters based on measures'
values. 
In general, the generated SQL query looks different (the clause \verb?HAVING?
is introduced when a join is involved in the condition), but we keep a similar
representation in the conceptual query.
Query~(\vref{eq:conceptual-query-3}) is an example of a query with a filter based
on the values of a measure:
\begin{equation}
Q=\left[\begin{array}{lcl}
dimensions & = & \{[\textnormal{Year}]\}\\
measures & = & \{(\textnormal{Sales revenue})\}\\
filters & = & \{\textnormal{Sales revenue}>1{,}000{,}000\ \$\}\\
truncation & = & \emptyset\\
ordering & = & [([\textnormal{Year}],[\textnormal{Year}],\uparrow)]
\end{array}\right]
\label{eq:conceptual-query-3}
\end{equation}
The generated SQL query is reproduced listing~\vref{lst:modeling-mdx-filters-2}.
\begin{lstlisting}[caption={SQL
query generated from conceptual
query~(\ref{eq:conceptual-query-3})},captionpos=b,label=lst:modeling-mdx-filters-2]
SELECT
 sum(Table__2."AMOUNT_SOLD"),
 Table__7."YR"
FROM
 "EFASHION"."CALENDAR_YEAR_LOOKUP" Table__7 
INNER JOIN "EFASHION"."SHOP_FACTS" Table__2 ON (
 Table__2."WEEK_ID"=Table__7."WEEK_ID"
)  
GROUP BY
 Table__7."YR"
HAVING
 sum(Table__2."AMOUNT_SOLD") > 1000000
ORDER BY 2
\end{lstlisting}
Keywords \verb?HAVING? and \verb?WHERE? can be combined in the generated
SQL or MDX queries.
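The routing of conceptual filters to \verb?WHERE? versus \verb?HAVING? can be sketched as follows (illustrative only; the entity names and tuple encoding are hypothetical):

```python
# Sketch: routing conceptual filters to SQL clauses -- dimension filters
# become WHERE conditions, measure filters (on aggregated values) become
# HAVING conditions. The set of measure names is hypothetical.
MEASURES = {"Sales revenue"}

def route(filters):
    """Split (entity, operator, value) filters into WHERE and HAVING lists."""
    where, having = [], []
    for entity, op, value in filters:
        (having if entity in MEASURES else where).append((entity, op, value))
    return where, having

w, h = route([("City", "IN", ("New York", "Boston")),
              ("Sales revenue", ">", 1000000)])
# w holds the dimension filter, h holds the measure filter
```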

\subsubsection{Truncation}
Truncation clauses are used to select a subset of the resultset. A truncation
clause can be combined with an ordering clause (see below).
It is composed of an ordering and an integer value, which is the number of
items from the resultset to be returned. The ordering specifies along which
dimension or measure the resultset should be ordered before being truncated. In
MDX, for instance, this truncation clause is translated into the keyword
\verb?LIMIT?.
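A `top $n$' truncation can thus be sketched as an ordering followed by a cut (illustrative only; the row layout is hypothetical):

```python
# Sketch: a truncation clause (entity, direction, n) applied after
# ordering, mirroring a `top n` query.
def truncate(rows, entity, direction, n):
    """Order `rows` on `entity`, then keep the first n items."""
    ordered = sorted(rows, key=lambda r: r[entity],
                     reverse=(direction == "desc"))
    return ordered[:n]

rows = [{"Year": 2001, "Revenue": 5}, {"Year": 2002, "Revenue": 9},
        {"Year": 2003, "Revenue": 7}]
top2 = truncate(rows, "Revenue", "desc", 2)  # the two best years
```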


\subsubsection{Ordering}
\label{sec:modeling-conceptual-ordering}
The ordering clause specifies how the rows of the resultset should be ordered.
There can be more than one ordering clause, but the order of the clauses is
important (we use $[]$ to note the ordered set).
Each item of the clause is a triple of the form
$(a_1,a_2,\textnormal{direction})$ where $a_1$ and $a_2$ can be measures or
dimensions and $\textnormal{direction}$ stands for $\uparrow$ or $\downarrow$
(i.e. ascending or descending ordering, respectively).
The second clause is considered when rows are \emph{equal} with respect to the
first (in the sense of the order defined by $\uparrow$ and $\downarrow$). In
practice, there are rarely more than two ordering clauses.
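The semantics of an ordered list of ordering clauses can be sketched with a stable sort, applying the clauses from last to first (illustrative only):

```python
# Sketch: an ordered list of ordering clauses applied via Python's stable
# sort; a later clause only breaks ties left by the earlier ones, so the
# clauses are applied in reverse order.
def order(rows, clauses):
    for entity, direction in reversed(clauses):
        rows = sorted(rows, key=lambda r: r[entity],
                      reverse=(direction == "desc"))
    return rows

rows = [{"Year": 2002, "Revenue": 5}, {"Year": 2001, "Revenue": 5},
        {"Year": 2001, "Revenue": 9}]
# Order by Revenue ascending; ties on Revenue broken by Year ascending.
result = order(rows, [("Revenue", "asc"), ("Year", "asc")])
```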

\subsection{Database queries}
\label{sec:modeling-mdx-mdx}
The most common syntax for multidimensional queries is MDX.
An example of an MDX query is reproduced in
listing~\vref{lst:modeling-mdx-example}:
\begin{lstlisting}[caption={Example of MDX
query},captionpos=b,label=lst:modeling-mdx-example]
SELECT
 { [Measures].[Store Sales] } ON COLUMNS,
 { [Date].[2002], [Date].[2003] } ON ROWS
FROM Sales
WHERE ( [Store].[USA].[CA] )
\end{lstlisting}
In addition, MDX provides several built-in functions, such as date
or member functions.
MDX or SQL queries can be automatically generated from the data schema
of the warehouse.
The conceptual query is first decomposed into its clauses, and each clause is
associated with a database query fragment. All these fragments are then merged
into a single database query.
We have experimented with the SAP BusinessObjects Semantic
Layer\texttrademark{} to automatically generate database queries out of
conceptual queries.
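The fragment-merging step can be sketched as follows (illustrative only; this is not the Semantic Layer's actual generator, and the table and column names are hypothetical):

```python
# Sketch (not the Semantic Layer's actual generator): each clause of the
# conceptual query yields a SQL fragment, and the fragments are merged
# into a single statement. Table and column names are hypothetical.
def to_sql(dimensions, measures, dim_filters, limit=None):
    select = ", ".join(sorted(measures) + sorted(dimensions))
    parts = ["SELECT " + select, "FROM facts"]
    if dim_filters:
        conds = " AND ".join(f"{col} IN {vals!r}" for col, vals in dim_filters)
        parts.append("WHERE " + conds)           # dimension filter fragment
    parts.append("GROUP BY " + ", ".join(sorted(dimensions)))
    if limit is not None:
        parts.append(f"LIMIT {limit}")           # truncation fragment
    return "\n".join(parts)

sql = to_sql({"YR"}, {"sum(AMOUNT_SOLD)"},
             [("CITY", ("New York", "Boston"))])
```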
\begin{figure}
\centering
\begin{tikzpicture}
\node[draw, rectangle,minimum width=55pt,minimum height=20pt] (dimensions) {};
\node[below=3pt] at (dimensions.north) {dimensions};

\node[draw,rectangle,minimum width=55pt,minimum height=20pt,below=5pt] at
(dimensions.south) (measures) {};
\node[below=3pt] at (measures.north) {measures};

\node[fit=(dimensions)(measures)] (group-1) {};

\node[draw,rectangle,rounded corners,right=20pt,minimum
width=55pt,minimum height=20pt] at (group-1.east) (select) {};

\node[below=3pt] at (select.north) {\verb?SELECT?};

\path[->] (dimensions) edge (select);

\path[->] (measures) edge (select);

\node[draw,rectangle,minimum width=55pt,minimum height=20pt,below=5pt of
group-1] (filters) {};

\node[below=3pt] at (filters.north) {filters};

\node[draw,rectangle,rounded corners,minimum width=55pt,minimum
height=20pt,below=20pt] at (select) (where) {};

\node[below=3pt] at (where.north) {\verb?WHERE?};

\node[draw,rectangle,rounded corners,minimum width=55pt,minimum
height=20pt,below=15pt] at (where) (having) {};

\node[below=3pt] at (having.north) {\verb?HAVING?};

\path[->] (filters) edge (where);
\path[->] (filters) edge (having);

\node[draw,rectangle,minimum width=55pt,minimum height=20pt,below=30pt] at
(filters) (truncations) {};

\node[below=3pt] at (truncations.north) {truncations};

\node[draw,rectangle,rounded corners,right=25pt,minimum width=55pt, minimum
height=20pt] at (truncations.east) (limit) {};

\node[below=3pt] at (limit.north) {\verb?LIMIT?};

\path[->] (truncations) edge (limit);

\end{tikzpicture}
\caption{Automatic database query generation}
\label{fig:modeling-automatic-mdx-generation}
\end{figure}
Figure~\vref{fig:modeling-automatic-mdx-generation} is an overview of the
translation of conceptual queries into database queries (MDX or SQL, depending
on the data source).


\subsubsection{Semantic layer}
The above-mentioned \emph{semantic layer} is a software component that
interfaces with any \ac{DBMS}; it lets users build queries as an object
representation and unburdens them from formulating \ac{DBMS}-specific
expressions.

This tool is widely used in \ac{BI}. We experimented with it in an early
version of our \ac{QA} system for data
warehouses~\cite{Kuchmann-Beauger:2011:NLI:2026011.2026034}.



\subsection{Personalization of multidimensional queries}
Personalizing multidimensional queries is an approach in which
\emph{preferences} can be expressed in queries.
These preferences are of different types.
On the one hand, quantitative preferences, like those defined
in~\cite{Agrawal:2000:FEC:335191.335423}, map objects of preference to a
value representing the degree of preference (for instance, a scoring function
is used to rank individual tuples based on preferences).
On the other hand, qualitative preferences are not associated with scoring
functions and can be expressed in various ways, as
in~\cite{KieBling:2002:FPD:1287369.1287397}.
The latter kind of preference is usually expressed in an algebra.
Chomicki~\cite{Chomicki:2003:PFR:958942.958946} has shown that qualitative
preferences yield a broader expressiveness than quantitative preferences.

In order to offer personalized results, database queries are often enriched
with \emph{preferences}~\cite{Golfarelli:2011:MAE:1990771.1991001}.
For example, consider the structured query displayed in
listing~\vref{lst:modeling-structured-query-1}, which has been generated from
conceptual query~(\ref{eq:modeling-conceptual-query-1}):
\begin{lstlisting}[caption={Structured query generated from conceptual query
(\ref{eq:modeling-conceptual-query-1})},captionpos=b,label=lst:modeling-structured-query-1]
SELECT
 NON EMPTY {[Measures].[Sales Amount Quota]} ON COLUMNS,
 NON EMPTY Hierarchize(Crossjoin({[Employee].[Employee
Department].[Department].Members}, 
 [Date].[Calendar Year].[Calendar Year].Members)) 
 DIMENSION PROPERTIES
 PARENT_UNIQUE_NAME ON ROWS 
FROM( 
 SELECT {[Date].[Calendar Year].&[2001]} 
ON COLUMNS FROM [Adventure Works])
\end{lstlisting}
In some cases, this structured query returns only one tuple, which
probably means that the query should be drilled down along the dimension
[Employee Department], that is:
\begin{lstlisting}
[Employee].[Employee Department].[Department].Members
\end{lstlisting}
should be replaced with the hierarchy level
\begin{lstlisting}
[Employee].[Employee Department].[Title].Members
\end{lstlisting}
This is a naive example of a preference that can be expressed naturally in
addition to the query.
In the following, we introduce a framework for expressing this kind of
preference, plus additional preferences well suited to BI use-cases.

\subsubsection{Expressing multidimensional preferences}
The preference in the previous example (see
listing~\vref{lst:modeling-structured-query-1}) states that the finest
aggregated facts along the given hierarchy are \emph{preferred} to the
coarsest ones.
This kind of qualitative preference is well expressed in the \textsc{MyOLAP}
algebra~\cite{Golfarelli:2011:MAE:1990771.1991001}:
\begin{equation*}
\textnormal{FINEST}([\textnormal{Employee}][\textnormal{Employee Department}])
\end{equation*}
The algebra enables the expression of preferences on hierarchies (like in the
example above), namely $\textnormal{CONTAIN}(h,a)$ (facts including the level $a$
along $h$ are preferred), $\textnormal{NEAR}(h,a_2,a_1)$ (facts whose group-by
set along $h$ is between $a_2$ and $a_1$ are preferred),
$\textnormal{COARSEST}(h)$ and $\textnormal{FINEST}(h)$.
Preferences on attributes are $\textnormal{POS}(a,c)$ (facts mapping to the
member $c$ are preferred) and $\textnormal{NEG}(a,c)$ (facts not mapping to $c$
are preferred). For measures, preferences are $\textnormal{BETWEEN}(m,v_1,v_2)$
(facts whose value on $m$ is between $v_1$ and $v_2$ are preferred),
$\textnormal{LOWEST}(m)$ (facts whose value on $m$ is as low as possible) and
$\textnormal{HIGHEST}(m)$.
In addition, preferences can be composed with two operators: the composition
$P_1\otimes P_2$ (composed preferences $P_1$ and $P_2$ are equally important)
and prioritization $P_1\rhd P_2$ (preference $P_1$ is prioritized with respect
to $P_2$).
The formal definitions and notations introduced above can be found
in~\cite{Golfarelli:2011:MAE:1990771.1991001}.
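Our reading of two of these operators can be sketched as follows (illustrative only; this is not the \textsc{MyOLAP} implementation): a qualitative preference is modelled as a predicate stating whether a fact is strictly preferred to another.

```python
# Sketch (our reading of the operators, not the MyOLAP implementation):
# a qualitative preference is a predicate telling whether fact f is
# strictly preferred to fact g.
def POS(attr, member):
    """Facts mapping to `member` on `attr` are preferred."""
    return lambda f, g: f[attr] == member and g[attr] != member

def prioritize(p1, p2):
    """p1 prioritized over p2: p2 only decides when p1 is indifferent."""
    return lambda f, g: p1(f, g) or (not p1(g, f) and p2(f, g))

pref = prioritize(POS("City", "Paris"), POS("Year", 2011))
f = {"City": "Paris", "Year": 2010}
g = {"City": "Berlin", "Year": 2011}
# pref(f, g) holds: the prioritized city preference overrides the year one
```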

\begin{landscape}
\begin{longtable}{lccccc}\hline
\multicolumn{1}{c}{\textbf{Name}} & \textbf{Distribution} &
\textbf{Comparison} & \textbf{Trend} & \textbf{Composition} & \textbf{Gradient}
\\\hline\hline Tree map &  &  &  & x & 
\\\hline Heat map &  &  &  & x & x
\\\hline Pie chart &  &  &  & x & 
\\\hline Pie with variable slice depth &  &  &  & x & 
\\\hline Multiple pie chart &  &  &  & x & 
\\\hline Donut chart &  &  &  & x & 
 \\\hline Column chart &  & x &  &  & 
 \\\hline Bar chart &  & x &  &  & 
 \\\hline Column chart with dual axes &  & x &  &  & 
 \\\hline Line chart &  &  & x &  & 
 \\\hline Line chart with dual axes &  &  & x &  & 
 \\\hline Area chart &  &  &  & x & 
 \\\hline Combined column and line chart &  & x & x &  & 
 \\\hline Combined column and line chart with dual axes &  & x & x &  & 
 \\\hline Stacked column chart &  & x &  & x & 
 \\\hline 100\% stacked column chart &  & x &  & x & 
 \\\hline Stacked bar chart &  & x &  & x & 
 \\\hline 100\% stacked bar chart &  & x &  & x & 
 \\\hline Multiple bar chart &  & x &  &  & 
 \\\hline Multiple dual bar chart &  & x &  &  & 
 \\\hline Multiple line chart &  &  & x &  & 
 \\\hline Multiple dual line chart &  &  & x &  & 
 \\\hline Multiple surface chart &  &  &  & x & 
 \\\hline 3D column chart &  & x &  &  & 
 \\\hline Box plot chart &  & x &  &  & 
 \\\hline Waterfall chart & x &  &  &  & 
 \\\hline Radar chart &  & x &  &  & x
 \\\hline Multiple radar chart &  & x &  &  & x
\\\hline Tag cloud chart & x &  &  &  & 
 \\\hline Scatter chart & x &  &  &  & 
 \\\hline Multiple scatter chart & x &  &  &  & 
 \\\hline Bubble chart & x &  &  &  & x
 \\\hline Multiple bubble chart & x &  &  &  & 
 \\\hline Scatter matrix chart &  & x &  &  & 
 \\\hline Polar scatter plot &  & x &  &  & 
 \\\hline Polar bubble chart &  & x &  &  & x
\\\hline 
\caption{Some chart types and their associated analysis types}
\label{tab:modeling-chart-types}
\end{longtable}
\end{landscape}

\subsubsection{Chart type preference \& analysis type}
\label{sec:modeling-chart-type}
In addition to qualitative preferences from MyOLAP, we consider the chart type
as a preference. Indeed, all multidimensional queries that we generate are
intended to be visualized as charts.
Given the structure of the internal query
(\ref{eq:modeling-conceptual-query-1}), there are many ways to render it as a
chart.
However, chart types are often associated with an \emph{analysis type} which
partly corresponds to the meaning of the chart for the user. 
This has been outlined in~\cite{thollotThesis}.

Table~\vref{tab:modeling-chart-types} outlines different analysis types
corresponding to different chart types.
This table has been created by hand, based on an analysis of some existing
dashboards. A broader and deeper analysis should be performed, in order to:
\begin{enumerate}
  \item take into consideration new chart types (that are not part of the
  dashboards that we have considered)
  \item validate the content of the table, which reflects a personal point of
  view on the semantics of the charts in the dashboards considered
\end{enumerate}
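Operationally, the table can be read as a lookup from analysis type to candidate chart types; the following sketch uses a small hand-picked excerpt of table~\vref{tab:modeling-chart-types}:

```python
# Sketch: a partial excerpt of the chart-type/analysis-type table as a
# lookup; only a few rows are reproduced here, for illustration.
CHARTS = {
    "Tree map": {"composition"},
    "Heat map": {"composition", "gradient"},
    "Line chart": {"trend"},
    "Bar chart": {"comparison"},
    "Scatter chart": {"distribution"},
}

def suggest(analysis_type):
    """Chart types whose analysis types include `analysis_type`."""
    return sorted(name for name, kinds in CHARTS.items()
                  if analysis_type in kinds)

suggest("composition")  # tree maps and heat maps both apply
```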


\subsection{Example of how patterns are mapped to structured queries}

%Before going into the details of mapping variables as defined by the
% constraints that have been discussed above to structured queries, we 
%like to remind the reader of the the actual target of this operation:
% generating a well defined model describing structured queries (not necessarily the query
% string that can be executed on the data warehouse).
For most cases, like the example given in
figure~\vref{fig:pattern-running-example}, a structured query contains a
\emph{data source},
%(i.e. data warehouse), 
a set of \emph{dimensions} and \emph{measures},
a set of \emph{filters} and an optional set of result modifiers, e.g.
\emph{ordering} or \emph{truncation} expressions. 
%There a more complex cases,
%where for instance the measure itself could be a computation based on other
%measures. However, for 
For the example from figure~\vref{fig:pattern-running-example}, a
structured query could be represented as follows:
%
%$
%\begin{small}
\begin{equation*}
Q_1=\left[
\begin{array}{lcl}
\textnormal{data source} & = & \textnormal{Resorts}\\
\textnormal{dimensions} & = & \{\textnormal{Customer}\}\\
\textnormal{measures} & = & \{\textnormal{Revenue}\}\\
\textnormal{filters} & = &
\left\{
\begin{array}{lcl}
\textnormal{City} & = & \textnormal{`Palo
Alto'},\\
\textnormal{Age} & \geq & 20,\\
\textnormal{Age} & \leq & 30
\end{array}
\right\}
\\
\textnormal{truncation} & = &
\{(\textnormal{Revenue},\downarrow,5)\}
\end{array}
\right]
\end{equation*}
%\end{small}
%$
%
Here, curly brackets represent a set of objects, which might have
a complex structure (e.g. filters, which consist of a measure
or dimension, an operator and a value). For truncations we use
a triple consisting of the dimension or measure on which the
ordering is applied, the ordering direction (ascending $\uparrow$ or
descending $\downarrow$) and the number of items.
%
Another interpretation of the user's question
%(since it does not contain a measure) 
would be $Q_2$, which is similar to $Q_1$ except for
the proposed measure:
%
%$
%\begin{small}
\begin{equation*}
Q_2=\left[
\begin{array}{lcl}
\ldots\\
\textnormal{measures} & = & \{\textnormal{Margin}\}\\
\ldots\\
\textnormal{truncation} & = &
\{(\textnormal{Margin},\downarrow,5)\}
\end{array}
\right]
\end{equation*}
%\end{small}
%$
%
Since the representation shown above captures only a fraction of the
potential queries, we use
%again 
\ac{RDF} to capture the
structure and semantics of the structured query, which is then serialized
to an executable query in a subsequent step.
%
As discussed earlier\footnote{We refer in this section to
figure~\vref{fig:patterns-query-graph}.}, we define in the left part of
figure~\vref{fig:pattern} how to derive potential interpretations (i.e.
variables and the constraints between them) using a \textsc{SparQL}
\verb|WHERE| clause.
Now we need to define the basic structure of a query (in RDF) and how to map
variables into this model using a \textsc{SparQL} \verb|CONSTRUCT| clause
(illustrated in the right part of figure~\vref{fig:pattern}). In this way, we
separate the pattern matching, which can be quite complex, from the actual
mapping problem
%(which can be complex as well) 
and ensure fine-grained, flexible control over how to generate structured
queries.
  
%As mentioned before, we do not map resource variables (such as a dimension), 
%but only literal variables into the query model to ensure the reusability 
%of the query model for other domains or use cases. The query model
%is therefore defined by its own RDF schema. 

%Graphs containing all possible interpretations of the user's question are generated at runtime.

%The schema of our query model is quiet complex and
%cannot be discussed here in full detail due to space constraints. However, 
%We show 
Some of the most important concepts of our query model are illustrated in
figure~\vref{fig:pattern}. At the top stands the root node
\yellownode{Q}, defining a structured query.
%(`B' in figure~\vref{fig:pattern}).
Below, 
%we show with 
dashed lines represent 
%the 
parts that are 
%defined to be 
optional in the left side.
% of figure~\vref{fig:pattern}. 
These parts of the \verb|CONSTRUCT| clause are only triggered if the 
respective variables are in the result of the \verb|WHERE| clause, making it easy to describe alternative mappings for different situations as described in the \emph{parse graph}.
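This triggering behaviour can be sketched in plain Python, standing in for the \textsc{SparQL} machinery (variable and predicate names are hypothetical): optional template parts emit triples only when the corresponding variable was bound by the \verb|WHERE| clause.

```python
# Sketch (plain Python standing in for the SparQL CONSTRUCT machinery):
# optional parts of the template produce triples only when the WHERE
# clause actually bound the corresponding variable. Names are hypothetical.
def construct(bindings, template):
    """Instantiate the template triples whose object variable is bound."""
    return [(s, p, bindings[var]) for s, p, var in template
            if var in bindings]

template = [("Q", "hasMeasure", "?mL1"),
            ("Q", "hasTruncation", "?t")]   # optional part
bindings = {"?mL1": "Revenue"}              # `?t` was not matched

triples = construct(bindings, template)
# only the measure triple is produced; the truncation part stays silent
```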
%
Besides the actual query semantics, we attach some metadata nodes to the
query node, such as the \emph{data source} \lightrednode{DS}.
%(`DS' in figure~\vref{fig:pattern}).
It is 
%in turn 
bound to the variable `?w' representing the actual data warehouse
%on that 
upon which the generated query shall be executed. 
%Other important concepts are: 
Additional nodes are dedicated to:
\emph{projection items} \lightyellownode{PI}, capturing all projections 
that are part of the final structured query; \emph{filter items}
\lightyellownode{FI}, expressing selections on a certain measure or
dimension and \emph{truncation and ordering} clauses
\lightyellownode{TO}.
%We will detail 
The underlying structures are detailed in the following.

\subsubsection{Projections} The most important parts of the actual query are
the projections, which in our use case consist of at least one measure and one
dimension.
%However, an arbitrary number of measures and up to two dimensions are supported
%in the underlying real-world use case (defined as optional in the
%{\footnotesize \verb|WHERE|} clause).
%The limitations to two dimensions is due to the requirement to generate
%meaningfull charts as mentioned in section~\vref{sec:problem}.
% 
To give a glimpse of our full query model and to further detail the example,
we define different kinds of expressions (via a common ancestor
RDF type), of which we depict here the subclasses \emph{measure expression}
\lightyellownode{ME} and \emph{dimension expression}
\lightyellownode{DE}.
These nodes capture common metadata (not shown here), such as navigation paths
(e.g. for drill-down operations) or confidence scores, and refer
to the actual object that defines the projection, here the \emph{measure
reference} \lightyellownode{MR} and the \emph{dimension reference}
\lightyellownode{DR}.
In our case, these references are the labels of recognized objects.
% measures as depicted in the left of figure~\vref{fig:pattern}
% (`?mL1' and `?dL1'). Again, we like to point out that we 
%do not 
%need to 
It does not matter whether we use the recognized dimensions and measures
(derived from `m1' or `d1') or the suggested ones (derived from `m2' or `d4')
in the final query, since we defined in the \verb|WHERE| clause that
suggestions are only made if no user input is available.
We plan to include more complex artifacts, such as subnodes of the
\emph{expression} ancestor node, to support for instance computed measures.
%(e.g. for ad-hoc computation derived from questions containing for instance
%`relative margin' or `margin divided by revenue').    

\subsubsection{Truncation and Ordering}
The node \lightyellownode{TO} in figure~\vref{fig:pattern} stands
for \emph{truncation and ordering}.
It represents the \verb|ORDER BY| and \verb|LIMIT| clauses of a structured
query or of a certain sub-select within such a query.
Thus, several \lightyellownode{TO} nodes can occur as sub-nodes of a query
node. If the variable `?nb' is not bound by the `TopK' pattern, the default
value described in section~\vref{additional-variables} is used and a single
\verb|LIMIT| is generated. The \emph{sorting expression} \lightyellownode{SE}
representing an \verb|ORDER BY| is not generated in that case because the
variable `?ord' is unbound.
%
If the user entered a question starting with `Top\ldots', both variables
`?nb' and `?ord' would be bound and we would suggest an artifact to apply the
ordering (unless the user entered `order by \ldots', which is parsed by a
dedicated pattern). Since \emph{top-$k$} questions usually relate to a
particular measure (even if the query is `top 5 cities'), we can safely apply
the order to the recognized or suggested measure by simply relating
the node for the \emph{sorting expression} \lightyellownode{SE} to
the one for the measure \lightyellownode{MR}. Note that in any case
every possible interpretation with respect to the \verb|ORDER BY| assignment
would be generated.

\subsubsection{Filters} \emph{Filter expressions}, depicted as
\lightyellownode{FE}, represent a set of members, or numerical values in the
case of measures, to be used to filter the actual result.
From a data model perspective, filter expressions capture the metadata object
(either a dimension or a measure) on which the restriction is applied and a
set or range of values that defines the actual restriction.
More complex filter expressions can be defined as well (e.g. containing
a sub-query).
%
In our example, we only show examples for \emph{member sets}
\lightyellownode{MS} containing a single member, which is represented by a
\emph{value reference} \lightyellownode{VR}. In the first case,
a member was directly recognized in the user's question. The variable `?dL2',
originating from the dimension `?d2', is directly assigned to the \emph{member
set}, and a node for the \emph{value reference} \lightyellownode{VR} is
generated with a property for the actual value (i.e. `?vL1').
Note that we do not need to care whether the respective dimensions will be
considered in the projections, since this can be handled by constraints
(see left part of figure~\vref{fig:pattern}).
%
The second example handles personalization (e.g. ``my city'') and uses a
filter leveraging the user profile.
It works similarly to the one for matched members, except that
the \emph{value reference} \lightyellownode{VR}
relates to the label of the object in the user profile that carries a value
similar to one of the members of a certain dimension (e.g. `Palo Alto' for the
dimension `City').
%Again we see that complex mappings like the one for
%personalization can be implemented easily.  
%
Range queries are conceptually similar to the ones containing a \emph{member
set}, no matter whether they are applied to dimensions or measures.
The only differences are that a natural language pattern is used to detect
numeric or date range expressions in the user's question to define variables,
and that there are two \emph{value references} defining the bounds of the
actual \emph{filter expression}.


\section{Similarity measure based on preferences}
\label{sec:sim-pref}

In the following, we use the notations
from~\cite{Giacometti:2009:RMQ:1617540.1617589}:
we write $C=\langle D_1,\ldots,D_n,F\rangle$ for an $n$-dimensional cube, where
$n$ is the number of dimensions and $F$ stands for the fact table.
The values of a dimension $D_i$, noted $Dom(D_i)$, are called members and are
organized in a hierarchy $H_i$.
A reference is an $n$-tuple $\langle r_1,\ldots,r_n\rangle$ where
$r_i\in Dom(D_i)$ for $i\in[1,n]$.
An MDX query is modeled as a set of references for a given instance of a cube.

The distance between two MDX queries is defined
as~\cite{Giacometti:2009:RMQ:1617540.1617589}:
\begin{equation}
d_{MDX}(q_1,q_2)=\gamma\times d_{dim}(q_1,q_2)+(1-\gamma)\times
d_h(q_1,q_2)
\label{eq:mdx-similarity}
\end{equation}
where $\gamma\in[0,1]$ is a parameter, $d_{dim}$ measures the number of
differing references, and $d_h$ is a measure inspired by the Hausdorff
distance~\cite{hausdorff1914grundzge}, which measures the distance between
two sets based on the distance between the elements of the sets.
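To make the Hausdorff-inspired component $d_h$ concrete, the following is a
minimal Python sketch under two assumptions of ours that are not part of the
cited definition: queries are plain sets of reference tuples, and the
member-level distance \verb|ref_dist| is a toy measure (the fraction of
differing coordinates).

```python
# Sketch of a Hausdorff-style distance between two MDX queries, each
# modeled as a set of references (tuples of members).
# `ref_dist` is a placeholder element-level distance (an assumption:
# here, the fraction of coordinates on which two references differ).

def ref_dist(r1, r2):
    """Toy member-level distance: share of differing coordinates."""
    return sum(a != b for a, b in zip(r1, r2)) / len(r1)

def hausdorff(q1, q2):
    """Symmetrized Hausdorff distance: max over min element distances."""
    d1 = max(min(ref_dist(r1, r2) for r2 in q2) for r1 in q1)
    d2 = max(min(ref_dist(r2, r1) for r1 in q1) for r2 in q2)
    return max(d1, d2)

q1 = {("2001", "Pacific"), ("2001", "Europe")}
q2 = {("2001", "Pacific"), ("2002", "Europe")}
print(hausdorff(q1, q2))  # 0.5: each unmatched reference differs in one of two coordinates
```

The outer $\max$/inner $\min$ structure is the classical Hausdorff
construction: a query is far from another only if one of its references is far
from every reference of the other query.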


The similarity measure $d_{MDX}$ (see equation~\vref{eq:mdx-similarity})
does not take into account preferences that appear in MyOLAP
queries~\cite{Golfarelli:2011:MAE:1990771.1991001}.
To remedy this, we propose an alternative measure
$d_{MLP}$ as follows:
\begin{equation}
d_{MLP}(q_1,q_2)=\delta\times
d_{MDX}(q_1,q_2)+(1-\delta)\times d_{pref}(q_1,q_2)
\label{eq:distance-mlp}
\end{equation}
where 
%$\gamma$ is the parameter defined equation~\vref{eq:mdx-similarity},
$\delta\in[0,1]$ is a parameter and where $d_{pref}(q_1,q_2)$ measures a
distance in terms of preferences held by MyOLAP queries $q_1$ and $q_2$.


Let $C_{q_1}$ be the set of facts in the result of $q_1$ on the cube $C$, and
$C_{q_2}$ be the set of facts in the result of $q_2$ on the cube $C$,
according to the BMO preference evaluation.
Notations and definitions can be found
in~\cite{Golfarelli:2011:MAE:1990771.1991001}.
We define the \emph{priority} $p_{P_i}$ as a binary indicator that two facts
share a preference relation (and are not equivalent in terms of preference);
i.e.
\begin{equation*}
p_{P_i}(f_1,f_2)=\left\{\begin{array}{ll}
1 & \textnormal{if }f_1<_{P_i}f_2\textnormal{ or }f_2<_{P_i}f_1\\
0 & \textnormal{otherwise}
\end{array}\right.
\end{equation*}
where $f_1\in C_{q_1}$ and $f_2\in C_{q_2}$.
Let $P_1$ be the preference of query $q_1$ and $P_2$ be the preference of query
$q_2$; we define the distance between $q_1$ and $q_2$ based on preferences $P_1$
and $P_2$ as:
\begin{equation*}
d_{pref}(q_1,q_2)=\epsilon\sum_{(f_1,f_2)\in C_{q_1}\times C_{q_2}}
\frac{p_{P_1}(f_1,f_2)+p_{P_2}(f_1,f_2)}{2\,|C_{q_1}|\cdot|C_{q_2}|}
+\left(1-\epsilon\right)d_{pref}(C_{q_1},C_{q_2})
\end{equation*}
where 
$\epsilon\in[0,1]$ is a parameter and
$d_{pref}(C_{q_1},C_{q_2})$ is the preference distance which is not based on
individual facts, but on the sets of facts that belong to the same result
$C_{q_i}$.
We define $d_{pref}(C_{q_i},C_{q_j})$ as:
\begin{equation*}
d_{pref}(C_{q_i},C_{q_j})=\left\{\begin{array}{ll}
\frac{1}{Jacc(C_{q_i},C_{q_j})} & \textnormal{if }Jacc(C_{q_i},C_{q_j})\neq 0\\
1 & \textnormal{otherwise}
\end{array}\right.
\end{equation*}
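A minimal sketch of $d_{pref}$ follows, under assumptions of ours: preferences
are encoded as sets of (better, worse) fact pairs rather than in the BMO
formalism, and the set-level term uses a plain Jaccard index on the fact sets
as a stand-in (the chapter instantiates $Jacc$ via chart analysis types).

```python
# Hedged sketch of the preference distance d_pref between two queries.
# Assumption: a preference P is a set of (better, worse) fact pairs.

def priority(pref, f1, f2):
    """p_P(f1, f2): 1 if f1 and f2 are related by the preference P."""
    return 1 if (f1, f2) in pref or (f2, f1) in pref else 0

def jaccard(s1, s2):
    """Plain Jaccard index on two sets (stand-in for the chart-type Jacc)."""
    union = s1 | s2
    return len(s1 & s2) / len(union) if union else 0.0

def d_pref(C1, C2, P1, P2, eps=0.5):
    """d_pref(q1, q2): fact-pair term plus set-level term, weighted by eps."""
    pair_term = sum(
        (priority(P1, f1, f2) + priority(P2, f1, f2)) / (2 * len(C1) * len(C2))
        for f1 in C1 for f2 in C2
    )
    j = jaccard(C1, C2)
    set_term = 1 / j if j != 0 else 1  # d_pref on the result sets
    return eps * pair_term + (1 - eps) * set_term

d = d_pref({"a", "b"}, {"b", "c"}, {("a", "b")}, set(), eps=0.5)
print(d)  # ~1.5625: one related pair out of four, Jaccard 1/3 on the sets
```

The pair term rewards queries whose preferences order the same facts, while
the set-level term penalizes results with little overlap.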
%\subsection{Similarity measure based on visualization preferences}
%\label{sec:similarity-viz-pref}
An example of a preference that does not apply to two facts but to a
\emph{result} (i.e. to a set of facts) is the preference about which chart
type to choose in order to render the result.
Let the facts $f\in C_q$ be summarized in table~\vref{tab:modeling-facts},
where $q$ is the question \emph{``Sales target per department in 2001''}.
Two popular representations for these facts are a bar chart and a pie chart
(these charts are reproduced in
figure~\vref{fig:personalized-chart-representation}).
The preferred chart type is inferred from the data, from \emph{annotations}
coming from the NL analysis of the question, or even from the application
which triggers the chart rendering.
Bar charts are usually used for comparing data, while pie charts show the
composition or contribution of data.
The analysis types that we consider for some chart types are reported in
table~\vref{tab:modeling-chart-types}.
The information about the preferred chart type (for a given structured query)
is stored in the conceptual query (see
section~\vref{sec:modeling-mdx-formal-representation}).

A simple similarity measure between two chart types is based on the number of
common analysis types.
Let $A=\{a_j\}$ be the set of analysis types (first column of
table~\vref{tab:modeling-chart-types}) and $C=\{c_i\}$ be the set
of chart types (headers of table~\vref{tab:modeling-chart-types}).
Let $c_i$ be the chart associated with the result $C_{q_i}$ and
$types:C\rightarrow 2^A$ be the function such that $types(c_i)\subseteq A$ is
the set of analysis types corresponding to chart $c_i$.
The Jaccard index on these types is defined as:
\begin{equation*}
Jacc(C_{q_i},C_{q_j})=Jacc^\prime(c_i,c_j)=\frac{|types(c_i)\cap
types(c_j)|}{|types(c_i)\cup types(c_j)|}
\end{equation*} 
This index measures the proportion of analysis types shared by two chart
types.
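This index can be sketched as follows; the mapping from chart types to
analysis types shown here is illustrative only, not the actual content of
table~\vref{tab:modeling-chart-types}.

```python
# Jaccard index between chart types via their analysis types.
# Assumption: the mapping below is a toy example; the real mapping is
# the chart-type table of the chapter.

TYPES = {
    "bar": {"comparison", "ranking"},
    "pie": {"composition", "contribution"},
    "line": {"comparison", "trend"},
}

def jacc(c1, c2):
    """Jacc'(c1, c2): shared analysis types over all analysis types."""
    t1, t2 = TYPES[c1], TYPES[c2]
    union = t1 | t2
    return len(t1 & t2) / len(union) if union else 0.0

print(jacc("bar", "line"))  # one shared type ("comparison") out of three
print(jacc("bar", "pie"))   # 0.0: no shared analysis type
```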
%Then, the distance $d_{chart}(q_1,q_2)$ with respect to the chart types between
%$q_1$ and $q_2$ is
%\begin{equation}
%d_{chart}(q_1,q_2)=\left\{
%\begin{array}{ll}
%\frac{1}{Jacc(c_1,c_2)} & \textnormal{if }Jacc(c_1,c_2)\neq 0\\
%1 & \textnormal{otherwise}
%\end{array}
%\right.
%\label{eq:dist-chart}
%\end{equation}
%where $c_1$ ($c_2$) is the chart type associated with query $q_1$ ($q_2$).
%
%The distance $d_{chart}(q_1,q_2)$ is combined
%with the main distance $d_{MLP}^{\gamma,\delta}(q_1,q_2)$
%(equation~\vref{eq:distance-mlp}) as a linear combination to get the final
%ranking of results associated to queries $q_1$ and $q_2$.

Note that we have presented here only one example of a preference relating two
results (which are defined as sets of facts), namely the chart type
preference, because it is relevant to our application.
The metric $d_{pref}(C_{q_i},C_{q_j})$ can be further refined by taking into
account additional preferences, such as the preferred size of the result (i.e.
which would be the preferred size for the chart?) or the preferred colors of
the chart.



\section{Prediction model}
\label{sec:modeling-prediction-model}
One may believe that successive queries in logs follow some pattern, because
users usually build queries in an iterative
manner~\cite{Sapia:2000:PPQ:646109.679288}.
This assertion is hard to prove though, because real query logs are costly
resources.
To reduce the execution time of queries in OLAP sessions, it would therefore
be of interest to pro-actively execute the queries that are most likely to be
triggered next by users, according to the pattern mentioned above.

In this section, we propose a prediction model for the most likely next
queries in the context of an OLAP session.




\subsection{Motivating Scenario}
Let us consider an OLAP user during a typical week at work, who interacts with
the company data warehouse to perform several tasks:
\begin{itemize}
\item She is responsible for making management reports for the daily and weekly
sales of one product branch, and for analyzing the effect of publicity on the sales targets in different regions for that specific branch.
\item She is also responsible for market research for all products, finding
factors that significantly influence sales as they evolve over time.
\end{itemize}

While performing queries to obtain management reports and doing market
research, the user has learned that it gives the best results to first get a
sales report per region, and then to analyze the effects of publicity on the
sales according to region.
It is generally hard to determine which factors influence measures like `Sales',
and therefore user experience helps in choosing the correct axis of analysis.
A good choice is often the time dimension (e.g. `Year'), which leads to a first
division according to that axis.
Of secondary importance are location, publicity, and so on; according to one
of these axes the user will update her query in a next phase. In this phase,
the user also tries out factors, since she is not confident about the factors
that influence all products. By contrast, when working on the daily and weekly
sales reports of the one product branch she is responsible for, she knows what
to query thanks to her experience.

When working on the management reports, the user asks for the data to be
represented in a set of bar charts, since reporting styles are fixed in the
company. On the other hand, when researching sales factors of products, the
user also likes to have the data in numerical table format, since quite often,
when taking the train home from work, she wants to look at the data again and
do offline tryout manipulations on her smartphone or tablet, searching for
sales factors.

From this scenario, we note the following:
\begin{itemize}
\item \emph{Querying is contextualized:} User query behavior depends on the
context of the user. For example, the order of querying depends on the specific
activity the user is performing (management report or market research).
\item \emph{Querying is preferenced:} Depending on the context, the preferences
of the user also change. For example, the visualization preference changes
according to activity.
\item \emph{Query structure is similar:} While performing an activity, the
structure of a series of queries stays the same. For example, the user checks
for factors that influence sales by first querying according to time of year,
then location, and so on. So although the specific content of the queries
changes, the order in which dimensions are treated in subsequent queries stays
the same.
\end{itemize}     

Thus, for predicting the most likely next query, context and preferences have
to be taken into account together with the current query, as will be described
in section~\vref{sec:modeling-query-prediction}. However, since query
structures can be similar, as illustrated in the last point above, it is
important to cluster similar queries for making predictions, which we present
in section~\vref{sec:modeling-query-clustering}.



\subsection{Architecture}
The first element in our architecture is the context manager (CM), whose
purpose is to hide the complexity of context management by providing a uniform
way to access context information. Context information is stored in a
knowledge database using a context model. User preferences are also part of
this context manager.
The query translation engine (QT) is responsible for the automatic
transformation of a natural language query into a conceptual query.
Next, the query processor (QP) is in charge of processing a conceptual query.
Such a query represents the user's search and preferences, represented in a
structure (see for example $Q$ in
section~\vref{sec:modeling-mdx-formal-representation}), according to a
specific template.
The QP enriches this query with context obtained from the CM. This enriched
request, represented in MDX format, is then transferred to the query execution
module (QE), which satisfies the immediate query by executing it, taking the
given context into account.
Next, the learning module (LM) is responsible for dynamically determining the
user's behaviour model (classification step) from the recognized clusters
representing similar user situations (clustering step). The user's behaviour
model, learned and maintained by the LM, is then used by the prediction
module.
Finally, the prediction module (PM), guided by the user's query, preferences
and context, relies on the results of the discovery process previously stored
in the history of user queries. From this data, the PM is able to anticipate
the user's future needs and to predict, and then execute in a pro-active
manner, the next query. A triplet
\begin{equation}
<\textnormal{query}, \textnormal{preferences}, \textnormal{context}>
\end{equation}
representing the user situation $S$ is sent to the prediction module. From the
user's behaviour model inferred by the LM, the PM determines the user's most
likely future query.
Thus, the PM is responsible for selecting, from the user's behaviour model,
the query that best represents the user's future situation.

\begin{figure}
\centering
\includegraphics[scale=1,trim=150pt 460pt 0pt 120pt]{img/architecture}
\caption{Architecture of the module responsible for predicting and
recommending queries}
\label{fig_arch}
\end{figure}

Two main processes compose this query prediction mechanism: the learning
process and the prediction process, as illustrated in figure~\vref{fig_arch}.
In the learning process, similar situations are grouped into clusters during
the clustering step, as a way to reduce the size of the history log by looking
for recurring situations. A \emph{situation} is defined as a triplet
\begin{equation}
<\textnormal{intention}, \textnormal{context}, \textnormal{preferences}>
\end{equation}
containing the intention of the user in the form of a query, together with
contextual and preferences information. In the next step, these clusters are
interpreted as states of a state machine, and the transition probabilities from
one state to another are calculated based on the history. This step, called the
classification step, aims to represent, from the recognized clusters, the
user's behaviour model (M) based on the users' situations. The interpretation
of the changing situation as a trajectory of states makes it possible to
anticipate the most probable future states. In our approach, this process
consists of estimating the probabilities of moving from one situation to other
possible future situations.



\subsection{Query Clustering}
\label{sec:modeling-query-clustering}
The first step of our mechanism is the clustering of users'
queries. Indeed, as the history log contains a trace of all observed
queries of a user for a given situation, it is likely that some of them are
similar. Since the size of this history in a dynamic environment can be quite
large, clustering similar queries for a user is an appropriate solution to
reduce the data size and to arrive at a feasible number of states, which are
neither too general nor too specific. Also, the analysis of the clusters
allows a better characterization of users' habits, which can improve the
accuracy of our prediction mechanism. The input to this step are vectors
representing users' queries stored in the history.
The main task of the clustering is to detect recurrent and similar queries
among all the query situations in the history. In fact, the clustering is
responsible for determining the query that is the closest to a set of
queries. This provides us with a powerful mechanism to evaluate
users' queries, as a user can express the same query in slightly different
ways by using queries that have a sufficient similarity.

To calculate the similarity between two queries, we use the distance metric
defined in the previous section. To build the clusters, we start by randomly
selecting $x$ queries from the query logs, making sure that a minimal distance
exists between them. They serve as the seeds for the clusters. Then, we treat
each query of the logs and assign it to the cluster whose associated seed has
the minimum distance. In that way, clusters of similar queries are obtained.
The clusters are used for recommendation purposes. A BI user who is exploring
a dataset and performs a query will possibly be interested in similar queries;
using the distance metric, we identify the query cluster that is most similar
to the current user query, and select a number of queries contained in that
cluster to recommend to the user.
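The seed-based clustering described above can be sketched as follows, where
\verb|dist| stands for the $d_{MLP}$ metric; the function names, the fixed
random seed and the toy parameter values are ours.

```python
# Sketch of the seed-based clustering: pick seed queries that are
# pairwise at least `min_dist` apart, then assign every logged query
# to the nearest seed. `dist` is a placeholder for the d_MLP metric.
import random

def build_clusters(log, dist, x, min_dist, seed=0):
    rng = random.Random(seed)
    seeds = []
    for q in rng.sample(log, len(log)):   # scan the log in random order
        if all(dist(q, s) >= min_dist for s in seeds):
            seeds.append(q)               # keep it as a new seed
        if len(seeds) == x:
            break
    clusters = {s: [] for s in seeds}
    for q in log:                         # assign each query to the closest seed
        best = min(seeds, key=lambda s: dist(q, s))
        clusters[best].append(q)
    return clusters

# Toy usage: 1-D "queries" with absolute difference as the distance.
clusters = build_clusters([0, 1, 10, 11], lambda a, b: abs(a - b), 2, 5)
print(sorted(sorted(v) for v in clusters.values()))  # [[0, 1], [10, 11]]
```

Any single-pass assignment like this is sensitive to the choice of seeds; the
minimum-distance constraint between seeds is what keeps the clusters from
collapsing onto near-duplicate queries.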


Context information which we consider relevant for our purposes includes, but
is not limited to, the following items:
\begin{itemize}
\item \emph{User role:} This refers to whether the user is currently acting as
a (real-time) decision maker in a business process, or whether she is a
manager, only interested in overviews and periodically generated reports.
\item \emph{Current activity:} This context item is related to the user role,
but refers to the specific task a user is performing. Tasks include gathering
information for periodic sales reports, querying to analyze current customer
behavior and so on.
\item \emph{Active applications:} The set of applications that are momentarily
actively used by the user. For example, if the user is also working on an
Excel spreadsheet, data representation preferences can be inferred from this
context.
\item \emph{Device capabilities:} The technical and hardware specifications of
the device the user is working with, or by extent to the surrounding devices,
can be taken into account. For example, data representation preferences will be
different if the user is working with a smartphone, a tablet or with a desktop
computer.
\item \emph{Location:} The current location of the user can influence her needs
or preferences with regards to geographical information like points of
interests, etc.
\end{itemize}
% 
% We use the algorithm Growing Neural Gas ~\cite{citeulike:6678375} for
% clustering, which seems to be the most appropriate candidate, since it meets
% our criteria: it adapts itself to the dynamics of the environment, does not
% require knowledge a priori and has a reasonable real-time execution. The GNG,
% compared to other algorithms, offers more flexibility, allowing us to cope with
% the dynamics of BI environments.
% The GNG clustering algorith belongs to the class of neural networks. The role
% of GNG is to recognize and update a set of clusters according to the input
% vector. It connects the input to a set of output nodes that we call clusters.
% The GNG applies the neighbour property by connecting some neighbouring nodes
% together. Applied to our clustering step, the input represents the users
% situations composed of a query, a context and preferences. The output
% represents the recognised clusters representing similar situations.
% Once the clustering process is completed, recognized clusters are then
% interpreted as states of the users behaviour model. This represents the
% %classification 
% prediction
% process, presented next.

\subsection{Query Prediction}
\label{sec:modeling-query-prediction}
We propose an approach predicting a user's future query in order to
pro-actively execute a query that can fulfill her future needs. Indeed, this
approach is based on the assumption that common situations can be detected,
even in dynamic and changing environments. Based on this assumption, the
prediction mechanism considers a time series (a sequence of states, the states
being the clusters of the previous step) representing the user's observed
situations. These observations are time-stamped and stored in a log file after
each query execution (the history). Thus, by analysing the history $H$,
represented by triplets
\begin{equation}
<\textnormal{intention},\textnormal{context}, \textnormal{preferences}>
\end{equation}
the prediction mechanism can learn the user's behaviour model $M$ in a dynamic
environment, and thus deduce the most likely next query.

\begin{figure}[tb]
\centering
\includegraphics[scale=1,trim=80pt 450pt 100pt 120pt]{img/modeling-markov}
\caption[Illustration of the Markov chains]{Illustration of the Markov chains. Cloud shapes represent \emph{clusters} of similar queries (e.g. $q_3$ or $q_4$). A particular query is selected in each cluster (e.g. $q_1^\prime$, $q_2^\prime$, \ldots). This particular query corresponds to the most likely next user query for any query belonging to the preceding cluster, according to the model.}
\label{sec:modeling-markov-chains}
\end{figure}
We infer the most probable next query by modeling user query behavior as Markov
chains\footnote{This choice is motivated by the fact that queries from query
logs are supposed to be iteratively created. Thus each cluster of the Markov
model corresponds to an intermediate step of data analysis.}.
Each query cluster represents a state in the associated Markov chain.
The series of states of the system has the Markov property. A series with the
Markov property is such that the probability of reaching a state in the future,
given the current and past states, is the same probability as that given only
the current state, so that past states give no information about future
states. If the machine is in state $x$ at time $n$, the probability that it
moves to state $y$ at time $n+1$ depends only on the current state $x$ and not
on past states (see figure~\vref{sec:modeling-markov-chains} for an
illustration of the model). The transition probability distribution can be
represented as a matrix $P$, called a transition matrix, with the $(i,j)$-th
element of $P$ equal to:
\begin{equation}
P_{ij} = Pr(X_{n+1} = j | X_n = i)
\end{equation}
 
The initial probability $Pr(X_{n+1} = j | X_n = i)$ is $\frac{1}{m}$
where $m$ is the number of queries that can follow the current query $Q_i$.
$Pr(X_{n+1} = j | X_n = i)$ could be updated by counting how often query $Q_j$
is preceded by query $Q_i$ and dividing this number by the total number of
queries that were observed as following query $Q_i$. This means, however, that
the past is as important as the present. In most cases the series of queries a
user performs will evolve over time. For example, if the history considers
queries performed by the user during the last six months, then we want the
more recent queries to have relatively more influence on the user behavior
model than older queries. Consequently, the transition probability function
should be updated in such a way that recent transitions have more importance
than older ones.
For that, the following exponential smoothing method can be used so that the
past is weighted with exponentially decreasing weights:
\begin{equation}
P_{ij} = \alpha\times x_j + (1 - \alpha)P'_{ij}
\end{equation}
$P'_{ij}$ represents the old probability and $x_j$ is the value for the choice
taken at query $Q_i$ with respect to query $Q_j$: $x_j = 1$ if query $Q_j$ was
executed after query $Q_i$, and $x_j = 0$ otherwise. Using this method, the
sum of all outgoing probabilities remains $1$, as required for a transition
probability matrix. The parameter $\alpha$ is a real number between $0$ and
$1$ that controls how important recent observations are compared to history.
If $\alpha$ is high, the present is far more important than history. In this
setting, the system will adapt quickly to the behavior of the user. This can be
necessary in a rapidly changing environment or when the system is deployed and
starts to learn. In a rather static environment, $\alpha$ can be set low. In
conclusion, by incorporating the learning rate, we make sure the user behavior
model is dynamic and evolves together with changing habits or preferences of
the user.
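The smoothed update can be sketched as follows; the row values are toy
numbers of ours. Applying the update to the whole row keeps its sum at
$\alpha\cdot 1+(1-\alpha)\cdot 1=1$, since the indicator vector $x$ and the
old row both sum to $1$.

```python
# Sketch of the exponentially smoothed transition-matrix update:
# P_ij = alpha * x_j + (1 - alpha) * P'_ij, applied to the whole row i
# after observing the transition i -> j, so the row still sums to 1.

def update_row(row, j, alpha):
    """Smooth row i of the transition matrix after a move to state j."""
    return [alpha * (1 if k == j else 0) + (1 - alpha) * p
            for k, p in enumerate(row)]

row = [0.5, 0.25, 0.25]        # current probabilities out of state i
row = update_row(row, 0, 0.3)  # observed transition i -> state 0
print(row)                      # ~[0.65, 0.175, 0.175]; row still sums to 1
```

A high $\alpha$ makes the row converge quickly to the indicator of the latest
transition, which corresponds to the fast-adapting regime discussed above.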



\section{Summary \& discussion}
In this chapter, we presented the results of multidimensional queries. These
results are displayed in fact tables, whose headers are the measures and
dimensions involved in the multidimensional query.
Then, we introduced functional dependencies, relating entities that belong to
the same hierarchy (i.e. dimension attributes at different levels of the same
hierarchy) or entities that are valid in the sense that they can appear
together in the same query.
As we will describe in chapter~\vref{sec:chapter-evaluation}, this relation is
useful in some applications, for instance for suggesting valid entities when
auto-completing queries.
In the answering framework detailed in chapter~\vref{sec:chapter-personalized},
we mentioned conceptual queries as abstract multidimensional queries. In this
chapter, we described the conceptual model of these queries, and the links
between this conceptual model and the visualization of data. Indeed, one aims
at rendering query results as charts or tables, as will be shown in
chapter~\vref{sec:chapter-evaluation}.
Then, we made explicit the translation from conceptual queries to actual
implementations of query languages (e.g. SQL or MDX). In our case, this
translation is performed by the underlying BI systems.
The second part of the chapter is dedicated to the personalization of
multidimensional queries.
To this end, we integrated an existing approach for qualitative
personalization (the \textsc{MyMDX}
approach~\cite{Golfarelli:2011:MAE:1990771.1991001}) with other kinds of
(domain-specific) preferences, such as chart preferences.
These preferences are intended to be used in a human-computer interaction
process, where users formulate queries and refine them in an iterative way
(i.e. in the context of a session of multidimensional queries).
The proposal is thus to suggest queries of interest for the user, namely the
most likely next queries.
The prediction is made by a prediction model that is trained on a corpus of
queries and on computed distances between multidimensional queries (expressing
preferences as explained above).

The conceptual query model introduced above has been fully integrated in the
answering framework (see also chapter~\vref{sec:chapter-personalized}). 
The scenario which recommends queries of interest based on a prediction model is
currently being evaluated. 
%This part of our work is borderline with respect to
%the scope of our thesis, since it deals more with recommender systems than
%question answering systems, but we believe that this topic is a good starting
%point for research in this direction.
This topic is interesting in practice, since predicting the most likely next
queries and executing them in a pro-active manner reduces the response time of
the system.


%% here
%the facts that are selected according to this conceptual query and the
%qualitative preferences presented above, we state that a preferred chart would
%be a pie chart rather than a bar chart.
%This preference is integrated in the structured query  
%is not integrated in the structured query 
%(as we do for the
%qualitative preferences above), and is used to compute similarity measures
%between structured queries.



\stopcontents[chapters]
