\chapter{Experiments \& evaluation}
\label{sec:chapter-evaluation}
\startcontents[chapters]
\Mprintcontents

In the area of \ac{IR}, many methods and techniques pursue similar goals.
For instance, a popular survey~\cite{DBLP:journals/corr/cmp-lg-9503016} about
\ac{NL} interfaces to structured data (a small area within \ac{IR})
reports no fewer than six families of approaches for translating \ac{NL} into
database queries.
As a result, much effort has been devoted to promoting evaluation metrics on
the one hand, and evaluation campaigns on the other hand, so that the wide
range of systems can be compared.

We present in section~\vref{sec:evaluation-presentation} classic evaluation metrics
and campaigns for \ac{IR}.
Then, we describe in section~\vref{sec:evaluation-proposal} our evaluation protocol
and the results we obtained.
Besides the \ac{QA} system itself, we detail in
section~\vref{sec:evaluation-experiments-auto-completion} the experiment that
we conducted in the context of the search platform.



\section{Evaluating an \ac{IR} system}
\label{sec:evaluation-presentation}
Evaluating an \ac{IR} system is not an easy task. Indeed, considering the classic
metrics of \emph{precision} and \emph{recall} (see
section~\vref{sec:classic-evaluation}), we believe that, for the sake of
objectivity, queries from the test corpus should not be formulated by users who
know the data schema well.
Moreover, there is no dataset nor goldstandard queries that can be used besides
those from the TREC evaluations, which are not suitable for \ac{BI} purposes.


\subsection{Classic evaluation metrics}
\label{sec:classic-evaluation}
The main evaluation metrics in \ac{IR} are \emph{precision} and \emph{recall} and
their variants.
Their definitions are given below.

\subsubsection{Precision and recall}
Precision, on the one hand, measures relevancy relative to the total
number of retrieved documents, and thus estimates the brevity of the system's
response.
It is defined as:
\begin{equation}
p=\frac{|\{\textnormal{retrieved documents}\}\cap\{\textnormal{relevant
documents}\}|}{|\{\textnormal{retrieved documents}\}|}
\end{equation}
Recall, on the other hand, measures relevancy but in terms of completeness: the
more relevant results are returned, the better the recall metric is.
It is defined as:
\begin{equation}
r=\frac{|\{\textnormal{retrieved documents}\}\cap\{\textnormal{relevant
documents}\}|}{|\{\textnormal{total relevant documents}\}|}
\label{eq:recall}
\end{equation}
In many applications, recall is preferred over precision, because the
precision metric is not meaningful for a system that returns a huge number
of results (e.g., search systems over the Web).
In other cases, precision is much more interesting than recall, for instance
when there is at most one relevant document (so that recall is always 0 or 1).
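These two metrics can be sketched in a few lines; the document identifiers
below are purely illustrative.

```python
# Illustrative sketch of precision and recall for a single query,
# computed over sets of document identifiers (the data is made up).

def precision(retrieved, relevant):
    """Fraction of the retrieved documents that are relevant."""
    if not retrieved:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(set(retrieved))

def recall(retrieved, relevant):
    """Fraction of all relevant documents that were retrieved."""
    if not relevant:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(set(relevant))

retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d2", "d4", "d7"}
print(precision(retrieved, relevant))  # 2 relevant among 4 retrieved: 0.5
print(recall(retrieved, relevant))     # 2 of the 3 relevant documents retrieved
```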

\subsubsection{Precision at $k$}
In some cases, especially when recall is not relevant, an interesting metric is
the \emph{rank} of correct results. 
\emph{Precision-at-$k$} (noted $p@k$) is such a metric:
\begin{equation}
p@k=\frac{\min(k,|\{\textnormal{retrieved documents}\}\cap\{\textnormal{relevant
documents}\}|)}{\min(|\{\textnormal{retrieved documents}\}|,k)}
%p@k=\frac{|\{\textnormal{retrieved documents}\}\cap\{\textnormal{relevant
%documents}\}|}{|\{\textnormal{retrieved documents}\}|}
\label{eq:precision-k}
\end{equation}
It measures precision over at most the first $k$ results returned by the
search procedure.
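Equation~\vref{eq:precision-k} can be implemented directly; the sketch below
follows the formula as written (note that it caps the full intersection at
$k$; a rank-aware variant would instead intersect only the first $k$ results).

```python
def precision_at_k(retrieved, relevant, k):
    """Precision@k following eq. (precision-k): min(k, hits) / min(|retrieved|, k).

    'retrieved' is a ranked list, 'relevant' a set; a rank-aware variant
    would compute len(set(retrieved[:k]) & relevant) instead of 'hits'.
    """
    hits = len(set(retrieved) & set(relevant))
    return min(k, hits) / min(len(retrieved), k)

retrieved = ["d1", "d2", "d3", "d4", "d5"]
relevant = {"d2", "d5"}
print(precision_at_k(retrieved, relevant, 3))  # min(3, 2) / min(5, 3) = 2/3
```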


% \subsubsection{Representation}
% A classic representation of precision and recall is a precision-recall
% curve\footnote{See also average precision}, where we plot $p(r)$, i.e. the
% precision as a function of recall.
% 
% \subsubsection{Average precision}
% An alternative metric is the \emph{average precision} which we note $aveP$ as:
% \begin{equation}
% aveP=\int_0^1p(r)\cdot dr=\sum_{k=1}^np(k)\Delta(k)
% \end{equation}
% 
% 
% \subsubsection{Relevance metrics}
% Relevance metrics are used in \ac{IR} to compute how a document is relevant
% with respect to a query.
% A classic measure is Okapi~$BM25$:
% \begin{equation}
% score(D,Q)=\sum_{i=1}^nIDF(q_i)\frac{f(q_i,D)\times
% (k_1+1)}{f(q_i,D)+k_1\left(1-b.\frac{|D|}{l}\right)}
% \end{equation}
% where $Q$ is the query composed of query terms $q_i$, $f(q_i,D)$ is the
% frequency of the term $q_i$ in document $D$, $IDF(q_i)$ is the \ac{IDF} of
% query term $q_i$ and $l$ the average document length. 



\section{Evaluation proposal}
\label{sec:evaluation-proposal}
In this section, we apply the metrics introduced in the previous section, and
describe the results we obtained.

We propose an evaluation where:
\begin{itemize}
  \item the document repository (i.e. the structured data) consists of external
  data (i.e. data that were not used for experimenting with the system
  in the first place)
  \item queries are formulated by real users, but not users who took part in the
  implementation of the system
  \item the document repository can be used to compare the system to other
  similar system(s)
\end{itemize}
We introduce the data (i.e. the document repository) as well as the queries in
the following.

\subsection{US Census Bureau data}
\label{sec:evaluation-census-data}
The public dataset from the US Census
Bureau\footnote{See~\url{http://www.census.gov}.} (called Census dataset in the
following) is composed of many demographic and economic facts.
This dataset is available as a relational database which we have integrated in
our \ac{DBMS} (we use the SAP Hana\texttrademark{}
database\footnote{See~\url{http://www.saphana.com}.}).
\begin{figure}
\centering
\includegraphics[width=12cm]{img/census-tables}
\caption{A few tables from the census dataset}
\label{fig:census-tables}
\end{figure}
Figure~\vref{fig:census-tables} shows an extract of some of the 48 tables that we
have in our database dump.
In this figure, we can see that the table and field names are not the terms
that real users would employ to query the dataset.
For that reason, we have designed a multidimensional data model which defines a
mapping from the database elements to query terms.
\begin{table}[h!]
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{Logic name}} & \multicolumn{1}{c}{\textbf{Data model
term}}\\\hline\hline 
\verb?HHDFAM? & Family households\\\hline
\verb?HINCY00_10? & Household income 0\$-10,000\$\\\hline
\verb?STATENAME? & State\\\hline
\verb?PSTATABB? & State\\\hline
\verb?PLSO2CRT? & Sulfur oxide combustion output emission rate\\\hline
\end{tabular}
\caption{Comparison of logical names and terms used in the data model of the
Census dataset}
\label{tab:census-terms}
\end{table}
As we can see in table~\vref{tab:census-terms}, the same query term can be used
for more than one database element. For instance, the term ``State'' maps to
the table field \verb?STATENAME? from the table \verb?Community? and to the
table field \verb?PSTATABB? from the table \verb?Plant?.
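A minimal sketch of such a mapping, where one data-model term resolves to
several database elements, is given below; the ``State'' entries come from
table~\vref{tab:census-terms}, while the representation itself is only an
assumption for illustration.

```python
# Assumed representation of the data-model mapping: one query term may
# resolve to several (table, field) database elements, as with "State".
TERM_TO_ELEMENTS = {
    "State": [("Community", "STATENAME"), ("Plant", "PSTATABB")],
}

def resolve(term):
    """Return every (table, field) pair that a query term can refer to."""
    return TERM_TO_ELEMENTS.get(term, [])

print(resolve("State"))  # two candidate database elements for one term
```

Such ambiguity is what forces the interpretation step to choose among
candidate database elements for a single user term.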

The Census dataset is quite voluminous. To give an idea of its size, we
provide in table~\vref{tab:evaluation-census-size} the row counts of the
tables included in the dataset.
\begin{table}[!h]
\centering
\begin{tabular}{lr}\hline
\multicolumn{1}{c}{\textbf{Table}} &
\multicolumn{1}{c}{\textbf{Size}}\\\hline\hline 
\verb?DISTANCETOPOWERPLANTS? & 329802 \\\hline 
\verb?FREEZIPCODEDATABASE? & 65535
\\\hline 
\verb?HOUSING_AGG_AVM? & 44113
\\\hline 
\verb?HOUSING_AGG_COMMUNITY_PROFILE? & 30223
\\\hline 
\verb?HOUSING_DATAAMENITIES_2011? & 3907451
\\\hline 
\verb?HOUSING_DATAAVM_NOV11? & 68844541
\\\hline 
\verb?HOUSING_DATACOMMUNITY_PROFILE_2011? & 33416
\\\hline 
\verb?HOUSING_DATADISTRICTS_Q32011? & 15258
\\\hline 
\verb?HOUSING_DATAID_LOOKUP_SEP11? & 96658
\\\hline 
\verb?HOUSING_DATAMEASURERESULTS_Q32011? & 535365
\\\hline 
\verb?HOUSING_DATAPROGRAMS_Q32011? & 661936
\\\hline 
\verb?HOUSING_DATASALESAGG_OCT11? & 286502
\\\hline 
\verb?HOUSING_DATASALES_OCT11? & 380875
\\\hline 
\verb?HOUSING_DATASCHOOLRATINGS_SEP11? & 81242
\\\hline
\verb?HOUSING_DATASCHOOLREVIEWS_SEP11_2? & 658026
\\\hline 
\verb?HOUSING_DATASCHOOLS_Q32011? & 115243
\\\hline 
\verb?HOUSING_DATAZONEON_AMENITIES_LKP_SEP11? & 3907451
\\\hline 
\verb?HOUSING_DATAZONEON_GEOS_SEP11? & 85961
\\\hline 
\verb?HOUSING_DATAZONEON_PROFILE_SEP11? & 48703
\\\hline 
\verb?HOUSING_DATAZONEON_RELATES_SEP11? & 88868
\\\hline 
\verb?HOUSING_DATAZONEON_SALESAGG_SEP11? & 320311
\\\hline 
\verb?HOUSING_DATAZONEON_SCHOOLS_LKP_SEP11? & 71764
\\\hline 
\verb?HOUSING_DATAZONEON_TO_BG_LKP_SEP11? & 246731
\\\hline 
\verb?HOUSING_DATAZONEON_TO_ZIP_LKP_SEP11? & 124273
\\\hline 
\verb?HOUSING_EGRD_EGRDPLT_V2? & 4998
\\\hline 
\verb?HOUSING_EGRD_PLANT? & 4998
\\\hline 
\verb?HOUSING_EGRD_PLANTMEASURES? & 9839
\\\hline 
\verb?HOUSING_SAMPLE_COMMUNITY_PROFILE_2011? & 1757
\\\hline 
\verb?HOUSING_SAMPLE_MEASURERESULTS_Q12011? & 8811
\\\hline 
\verb?HOUSING_SAMPLE_PROGRAMS_Q12011? & 3002
\\\hline 
\verb?HOUSING_VIEWS_ZIPS_2? & 29905
\\\hline 
\verb?HOUSING_WALMART_RSS? & 33272
\\\hline 
\verb?HOUSING_WEATHERFORECAST? & 1109137
\\\hline 
\verb?WALMART_LOCATIONS? & 4411
\\\hline 
\end{tabular}
\caption{Row counts of the tables from the Census dataset}
\label{tab:evaluation-census-size}
\end{table}

These data are geographic, demographic and social census data.
We have integrated the Census dataset in our data warehouse and manually
created the data schemas corresponding to this dataset.


\subsection{Evaluation corpus}
As introduced in section~\vref{sec:evaluation-proposal}, the evaluation corpus
should be composed of queries that have not been formulated by the same users as
those who had designed the data warehouse schema.

We present below the external collaboration platform ManyEyes\texttrademark\ on
the one hand, and the \emph{goldstandard queries} that have been extracted from
this platform on the other hand.



\subsubsection{Collaborative visualization platform}
ManyEyes\texttrademark{} is a collaborative
platform\footnote{See~\url{http://www-958.ibm.com/}.} where users share both
datasets and visualizations (e.g., charts) corresponding to these datasets.
Besides, users are invited to rank datasets and visualizations.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{img/manyeyes}
\caption{ManyEyes collaborative platform}
\label{fig:manyeyes}
\end{figure}
Figure~\vref{fig:manyeyes} is a screenshot of the homepage of the platform. 
When exploring the available datasets used by contributors, we observed that
many datasets come from the US Census
Bureau, which we have described in section~\vref{sec:evaluation-census-data}.




\subsubsection{Goldstandard queries}
The dataset presented in section~\vref{sec:evaluation-census-data} can be used
to evaluate \ac{BI} systems. Indeed, when modeling this dataset in a
 multidimensional schema, we end up with hundreds of dimensions and measures
(see table~\vref{tab:evaluation-census-size}).
Besides, we have observed that this dataset is very popular on search systems
for publicly available data sources (see popular tags on
Figure~\vref{fig:evaluation-tags}).
\begin{figure}
\centering
\includegraphics[width=2.5cm]{img/evaluation-tags}
\caption{Popular dataset tags on ManyEyes}
\label{fig:evaluation-tags}
\end{figure}
For example,
WolframAlpha\texttrademark{} made part of the Census
data\footnote{About WolframAlpha\texttrademark{}:
see~\url{http://www.wolframalpha.com/}.
See also~\url{http://blog.wolframalpha.com/2012/05/09/compute-american-community-survey-data-for-every-geographic-area/}.}
available.

We have randomly selected titles of charts corresponding to the Census dataset,
and propose those titles as \emph{goldstandard queries} that can be used
as a test corpus to evaluate any search system.
These queries have been used to evaluate our proposal (see
section~\vref{sec:evaluation-results}).

The evaluation corpus is composed of the titles of the 50 best-ranked
visualizations (corresponding to the Census dataset).
This ensures that the queries were not formulated by the agents who designed
the data schema of the warehouse.
We present some of the \emph{goldstandard} queries in
Table~\vref{tab:goldstandard-queries}.
We have measured performance in terms of execution time on the one
hand, and \ac{IR} metrics on the other hand.

Our experiment shows that the system behaves, to some extent, similarly to
WolframAlpha\texttrademark{}, which is a well-proven system.
However, in the following we discuss optimizations that would further
improve our performance on the given questions.
The second column of table~\ref{tab:goldstandard-queries}
(`updated query') shows the modifications required for the system to correctly
interpret users' requests.
In the last column (`comment') we provide a brief explanation of how the
system could easily be improved in order to correctly interpret users' initial
requests.

For instance, the second question (``Home ownership by State'') fails because the
term `ownership' is not part of the terminology of the data warehouse,
whereas the term `dwellings' appears in some measures. Thus, basic linguistic
resources (like WordNet) could be used to relate synonyms or terms with similar
meanings.
The fifth question (``And the whitest name in America is'') also requires little
effort to be understood by the system. Indeed, the base form of the word
`whitest' is `white' (which is known to the system). Thus, adding a stemming
component would lead to a successfully answered question in that case.
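The two fixes suggested above (synonym lookup and stemming) can be sketched
as a toy normalization step. The vocabulary, the synonym pairs and the suffix
rule below are illustrative assumptions; a real system would rely on a
resource like WordNet and a proper stemmer.

```python
# Toy sketch of the two suggested fixes: a synonym map (standing in for a
# resource like WordNet) and a crude superlative-stripping rule. All the
# vocabulary below is assumed for illustration.
SYNONYMS = {"ownership": "dwellings", "home": "dwellings"}
VOCAB = {"white", "state", "dwellings", "name"}

def normalize(term):
    """Map a user term onto the data-warehouse terminology, if possible."""
    if term in VOCAB:
        return term
    if term in SYNONYMS:
        return SYNONYMS[term]
    # crude superlative stemming: 'whitest' -> 'white'
    if term.endswith("est") and term[:-2] in VOCAB:
        return term[:-2]
    return None  # term unknown to the system

print(normalize("whitest"))    # 'white'
print(normalize("ownership"))  # 'dwellings'
```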

An interesting question is ``Where are rich people?''. It would require
a little more effort to be correctly processed by the system. To
answer this question, additional semantics must be attached to the data
warehouse so that locations can be recognized from the term `where'.
In addition, one would configure a range filter (e.g., using natural language
patterns) to declare the meaning of `rich' (i.e. a certain income range). The
question ``of Americans covered by health insurance'' is of a similar kind,
because the term `cover' can be translated into a filter on the fact table for
``health insurance''. The question ``500MW+ Power Plants'' would need a
special natural language pattern to correctly parse the expression ``500MW+''.
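Such a pattern can be sketched with a regular expression; the field name and
the filter representation below are assumptions for illustration.

```python
import re

# Hypothetical natural-language pattern for expressions like "500MW+":
# a number, the unit "MW", and a trailing '+' meaning "at least this value".
PATTERN = re.compile(r"(\d+)\s*MW\+")

def parse_threshold(query):
    """Translate '<n>MW+' into a filter on a (assumed) capacity field."""
    m = PATTERN.search(query)
    if m is None:
        return None
    return {"field": "Nameplate capacity", "op": ">=", "value": int(m.group(1))}

print(parse_threshold("500MW+ Power Plants"))
# {'field': 'Nameplate capacity', 'op': '>=', 'value': 500}
```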

\begin{landscape}
\centering
\begin{longtable}{lllll}\hline
\multicolumn{1}{c}{\textbf{Query}} & 
\multicolumn{1}{c}{\textbf{Updated query}} & &
\multicolumn{1}{c}{\textbf{Entities}} & 
\multicolumn{1}{c}{\textbf{Comment}}\\\hline\hline
State Population Change & & & $\{\textnormal{State, Population change}\}$ &
\\\hline
Home Ownership & Owner-occupied & &
$\{\textnormal{State, Owner-occupied}$ & \multirow{2}{*}{home $\sim$ dwellings}\\
by State & dwellings by state & & $\textnormal{dwellings}\}$ & \\\hline
 USA States information & & & $\{\textnormal{State}\}$ & \\\hline
 Generation Y in  & population by year & &
 \multirow{2}{*}{$\{\textnormal{Year, Population}\}$} &
 generation $\sim$ demographic \\
 2010 (Ages 10-32) & for ages 10-32 &  & & information\\\hline
 And the whitest name & \multirow{2}{*}{names white percent} & &
 \multirow{2}{*}{$\{\textnormal{Name, White percent}\}$} &
 \multirow{2}{*}{whitest $\sim$ white} \\
 in America is & & & & \\\hline
 40+ Population Projections & & & $\{\textnormal{Age,
 Population}$ & \\
 by Age & & & $\textnormal{projection}\}$ & \\\hline
 Average Time Spent & & & $\{\textnormal{State}, \textnormal{Median travel}$ &
 \\
Commuting by State & & & $\textnormal{time to work}\}$ & \\\hline
Percent Hispanic by State & & & $\{\textnormal{State}, \textnormal{Hispanic
population}\}$ &\\\hline 
\multirow{5}{*}{Population by ethnicity} & \multirow{5}{*}{population by race}
& & $\{\textnormal{State}, \textnormal{White race count},$
& \multirow{5}{*}{ethnicity $\sim$ race*} \\ 
 & & & $\textnormal{Other race count}, \textnormal{American indian}$ & \\
 & & & $\textnormal{Eskimo and Aleut race count},$ &\\
 & & & $\textnormal{Asian and pacific islander race count},$ & \\
 & & & $\textnormal{Black}, \textnormal{race count}\}$ & \\\hline
Change in city \& town & & & \multirow{2}{*}{$\{\textnormal{Town name},
\textnormal{Population change}\}$} & \\
populations & & & & \\\hline
\multirow{2}{*}{Population} & & &$\{\textnormal{State}, \textnormal{County},
\textnormal{Town name},$ & \\
& & & $\textnormal{Population}\}$ & \\\hline
\multirow{2}{*}{Domestic Net Migration} & & & $\{\textnormal{State},
\textnormal{Domestic net}$ & \\
& & & $\textnormal{migration}\}$  & \\\hline
 Population (by County) & & & $\{\textnormal{County}, \textnormal{Population}\}$
 & \\\hline 
 US surnames & US names & & $\{\textnormal{Count}, \textnormal{Name}\}$ &
 surnames $\sim$ names \\\hline Age distribution  & & & $\{\textnormal{Age},
 \textnormal{Male number},$ & \\
by US population & & & $\textnormal{Female number}\}$ & \\\hline
\multirow{2}{*}{Where are rich people?} & highest  &
& $\{\textnormal{State}, \textnormal{Average household income},$ &
\multirow{2}{*}{rich $\sim$
 household income} \\
 &  household income %per state 
 & & $\textnormal{Median household income}\}$ & \\\hline 
 People covered and not covered & & & $\{\textnormal{State}, \textnormal{Covered}\}$ & \\
by Health Insurance by State & & & $\textnormal{Not covered}\}$ & \\\hline
Southeast Asian American & & & $\{\textnormal{County}, \textnormal{Asian and
Pacific}$ & \\
Population by US County & & & $\textnormal{islander race count}\}$ & \\\hline
Black vs. white college  & & & 
$\{\textnormal{State}, \textnormal{Percent white males with at least}$ & \\
experience percentage  & & & $\textnormal{some college, Percent black
males with}$ &
\\
by state & & & $\textnormal{at least some college}\}$ & \\\hline
Dirtiest states from  & Plant name and  &
&$\{\textnormal{State}, \textnormal{Carbon dioxide}$ & \multirow{2}{*}{dirtiest
$\sim$ emissions} \\
coal pollution & carbon dioxide emissions & & $\textnormal{emissions}\}$ &
 \\\hline \multirow{2}{*}{500MW+ Power Plants} & Plant name with nameplate & &
\{Plant name, & 500MW+ $\sim$ nameplate \\
 &  capacity > 500MW & & Nameplate capacity\}& capacity \\\hline
Emissions Per State & & &
$\{\textnormal{State},\textnormal{Methane emissions},\textnormal{Nitrogen}$ & \\
Per Capita & & & $\textnormal{oxide emissions},\textnormal{Mercury emissions}\}$
& \\\hline US Violent Crime & & & $\{\textnormal{Crime rate}\}$ & \\\hline
Deaths per Year & & &
\multirow{2}{*}{$\{\textnormal{Cause}, \textnormal{Per year}\}$} & \\
in the United States & & & & \\\hline
Home Valuation of & & & $\{\textnormal{City},\textnormal{Min valuation},$ & \\
Major Cities in US & & & $\textnormal{Max valuation}\}$ & \\\hline
of Americans without & Americans not covered & &
\multirow{2}{*}{$\{\textnormal{State},\textnormal{Without health insurance}\}$} &
\multirow{2}{*}{health insurance $\sim$ covered}\\
health insurance & per state & & & \\\hline
Percent of men in Mass.,  & & & \multirow{2}{*}{$\{\textnormal{Percent men}\}$} & \\
15 and over, never married & & & & \\\hline
Marriages in the United States & & &
$\{\textnormal{State},\textnormal{Marriage rate per}$ & \\
per 1,000 women by state & & & $\textnormal{1,000 women}\}$ & \\\hline
Percentage of population  & & & $\{\textnormal{County}, \textnormal{Percent
population}$ & \\
aged 85+, by county & & & $\textnormal{aged 85+}\}$ & \\\hline
\multirow{2}{*}{Population by Age} & \multirow{2}{*}{Estimate by age} & &
\multirow{2}{*}{$\{\textnormal{Age},\textnormal{Population estimate}\}$} &
population $\sim$ numeric \\
 & & & & estimate\\\hline
Civilians Employed  & \multirow{2}{*}{Amount by category} & &
\multirow{2}{*}{$\{\textnormal{Occupation category}\}$} &
\multirow{2}{*}{occupation $\sim$ category} \\
by Occupation & & & & \\\hline
Percent of Population  & & &
$\{\textnormal{State},\textnormal{Percent population}$ & \\
18+ (by state) & & & $\textnormal{of age 18+}\}$ & \\\hline
Population categorized & Population categorized & &
\multirow{2}{*}{$\{\textnormal{Age},\textnormal{Population}\}$} &
\multirow{2}{*}{areas $\sim$ states}
\\
by ages and areas & by ages and states & & & \\\hline 
\caption{Goldstandard queries, updated queries, matched entities and
comments}\label{tab:goldstandard-queries}\\
\end{longtable}
\end{landscape}


\subsection{Performance results}
\label{sec:evaluation-performance}
We have measured the processing time of the overall answering system (see
figure~\vref{fig:evaluation-processing-time}).
On this figure, we represent the processing time before rendering the charts
(plots ``*'') and after rendering the charts (plots ``x'').

\begin{figure}[!h]
\centering
\begin{tikzpicture}
    \begin{axis}[width=\linewidth,
        xlabel=input nodes count,
        ylabel={processing time (ms)},axis y
        line*=left,ymin=10000,ymax=15000,legend pos=north west]
        \addplot[smooth,mark=*,style=dotted] plot coordinates { 
        (11203,11203)
        (11655,11655) (12137,12137)
		(12615,12615)
		(13018,13018)
    };
    \legend{before chart generation}
    \end{axis}
  
  \begin{axis}[ylabel={processing time (ms)},axis y
  line=right,width=\linewidth,ymin=0,ymax=5000000,grid=both, legend pos=south east]
    \addplot[smooth,mark=x]
        plot coordinates {
   		(11203,2470619)
		(11655,2972001)
		(12137,3472285)
		(12615,3976904)
		(13018,4477776)
        };
        \legend{after chart generation}
  \end{axis}
  
    \end{tikzpicture}
\caption{Processing time before and
after the chart generation process as
a function of the schema input nodes
count}
\label{fig:evaluation-processing-time}
\end{figure}
In this figure, we see that, as expected, the processing time is roughly
proportional to the size of the graph used as input of the pattern matching
algorithm.
The part of the execution time dedicated to rendering the charts is
approximately a third of the global execution time.
This is due to the fact that the datasets that are rendered as charts are
very voluminous.

\subsection{Evaluation results}
\label{sec:evaluation-results}
We consider the following protocol:
\begin{enumerate}
 \item Build a data schema for a goldstandard dataset (presented in the
 previous section)
 \item Run the queries of the test corpus, and compute the following
 evaluation metrics:
 \begin{enumerate}
   \item average precision for all goldstandard queries
   \item precision-at-$k$ (or precision@$k$)
   \item overall processing time of these queries, and the processing time
   before chart generation
 \end{enumerate}
 % \item Select a popular dataset representative of a real-life environment.
 % \item Select $k$ most popular visualization on the platform corresponding to
 % this dataset.
 % \item Consider the chart titles as a queries
 % \item Run the queries on the answering system, and compute evaluation metrics.
\end{enumerate}
Recall is not a metric of interest in our case, because each goldstandard
query corresponds to exactly one chart, i.e. one database query.
We thus consider a measure derived from precision, called \emph{success at $k$},
that measures how far the first relevant answer appears within the list of
results.
The advantage of this protocol is that the queries considered during the
evaluation (i.e. the test corpus) have been formulated by real users.
The major drawback is that many such queries contain noise (first because
the ``queries'' were not meant to be queries, but titles). Therefore, we
have only considered titles corresponding to actual data in the dataset.
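The success-at-$k$ measure can be sketched as follows; the representation of
the evaluation runs (one ranked result list and one expected answer per
goldstandard query) is an assumption for illustration.

```python
def success_at_k(ranked_results, expected, k):
    """1 if the expected answer appears within the first k results, else 0."""
    return 1 if expected in ranked_results[:k] else 0

def mean_success_at_k(runs, k):
    # 'runs' is a list of (ranked result list, expected answer) pairs,
    # one per goldstandard query (this representation is assumed).
    return sum(success_at_k(results, expected, k)
               for results, expected in runs) / len(runs)

runs = [(["a", "b", "c"], "b"), (["x", "y"], "z")]
print(mean_success_at_k(runs, 2))  # answer found in top 2 for 1 of 2 queries: 0.5
```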




\begin{figure}[h!]
\begin{tikzpicture}
\begin{axis}[width=\linewidth,xlabel=$k$,ylabel={success@$k$},grid=both,legend
entries={\textsc{Quasl},\textsc{Quasl} with updated
queries,\textsc{Wolfram|Alpha},\textsc{Wolfram|Alpha} with updated
queries},legend style={cells={anchor=center, fill},nodes={inner
sep=1,below=-1.1ex},at={(0.5,-0.3)},anchor=south}]


\addplot[smooth,mark=*,color=black]
plot coordinates { (1,0.636363636) (2,0.727272727) (3,0.757575758) (4,0.787878788)
	(5,0.787878788)
	(6,0.787878788)
	(7,0.787878788)
	(8,0.848484848)
	(9,0.878787879)
	(10,0.909090909)
};
\addplot[smooth,mark=x,color=black] plot coordinates {
	(1,0.787878788)
	(2,0.878787879)
	(3,0.878787879)
	(4,0.909090909)
	(5,0.939393939)
	(6,0.939393939)
	(7,0.939393939)
	(8,0.939393939)
	(9,0.939393939)
	(10,0.939393939)
};
\addplot[smooth,mark=*,color=black,style=dotted] plot coordinates {
	(1,0.151515152)
	(2,0.151515152)
	(3,0.181818182)
	(4,0.242424242)
	(5,0.303030303)
	(6,0.303030303)
	(7,0.303030303)
	(8,0.303030303)
	(9,0.303030303)
	(10,0.303030303)
};
\addplot[smooth,mark=x,color=black,style=dotted] plot coordinates {
	(1,0.212121212)
	(2,0.212121212)
	(3,0.242424242)
	(4,0.303030303)
	(5,0.363636364)
	(6,0.393939394)
	(7,0.393939394)
	(8,0.393939394)
	(9,0.393939394)
	(10,0.393939394)
};
\end{axis}
\end{tikzpicture}
\caption{Success of answering goldstandard queries compared to
WolframAlpha\texttrademark{}}
\label{fig:success-k-1}
\end{figure}
Figure~\vref{fig:success-k-1} compares the success at $k$, for $k$ varying from
one to ten, for the answering system and WolframAlpha\texttrademark{}.
The \emph{updated queries} curves stand for results where the query has been
modified in such a way that the system can better respond.
For instance, we have observed that WolframAlpha\texttrademark{} provides better
results if some queries are prefixed or suffixed with ``US'' or ``Census'' to make
explicit a restriction to a subset of the available data (``US'') or to the
dataset itself (``Census'').
Given the goldstandard queries, our system answers better than
WolframAlpha\texttrademark{}.
However, our observation is that one of the reasons why we perform better is
that WolframAlpha\texttrademark{} does not include the whole Census dataset.
Therefore, we have computed a secondary success measure, which takes into
account whether the dataset is known or not (i.e. whether the system is able to
answer the question). For WolframAlpha\texttrademark{}, this has been determined
by reformulating the questions several times until the expected result came up.
If it did not, we considered the dataset unknown to the system.
\begin{figure}[h!]
\begin{tikzpicture}
\begin{axis}[width=\linewidth,xlabel=$k$,ylabel={success@$k$},grid=both,legend
entries={\textsc{Quasl},\textsc{Quasl} with updated queries,
\textsc{Wolfram|Alpha},\textsc{Wolfram|Alpha} with updated
queries},legend style={cells={anchor=center, fill},nodes={inner
sep=1,below=-1.1ex},at={(0.5,-0.3)},anchor=south}] 

\addplot[smooth,mark=*,color=black] 
plot coordinates { (1,0.677419355)
	(2,0.774193548)
	(3,0.806451613)
	(4,0.838709677)
	(5,0.838709677)
	(6,0.838709677)
	(7,0.838709677)
	(8,0.903225806)
	(9,0.935483871)
	(10,0.967741935)
};
\addplot[smooth,mark=x,color=black] plot coordinates {
	(1,0.838709677)
	(2,0.935483871)
	(3,0.935483871)
	(4,0.967741935)
	(5,1)
	(6,1)
	(7,1)
	(8,1)
	(9,1)
	(10,1)
};
\addplot[smooth,mark=*,color=black,style=dotted] plot coordinates {
	(1,0.384615385)
	(2,0.384615385)
	(3,0.461538462)
	(4,0.615384615)
	(5,0.769230769)
	(6,0.769230769)
	(7,0.769230769)
	(8,0.769230769)
	(9,0.769230769)
	(10,0.769230769)
};
\addplot[smooth,mark=x,color=black,style=dotted] plot coordinates {
	(1,0.538461538)
	(2,0.538461538)
	(3,0.615384615)
	(4,0.769230769)
	(5,0.923076923)
	(6,1)
	(7,1)
	(8,1)
	(9,1)
	(10,1)
};
\end{axis}
\end{tikzpicture}
\caption{Variant of success of answering goldstandard queries compared to
WolframAlpha\texttrademark{}}
\label{fig:success-k-2}
\end{figure}
The results are plotted in Figure~\vref{fig:success-k-2}.
In this figure, we see that WolframAlpha\texttrademark{} performs better (under
the assumption presented above) from $k=4$ onwards.
This can be explained by the fact that WolframAlpha\texttrademark{} has a better
average precision than \textsc{Quasl} (see
table~\vref{tab:evaluation-average-precision}).
\begin{table}[h]
\centering
\begin{tabular}{ll}\hline
\multicolumn{1}{c}{\textbf{\textsc{Quasl}}} &
\multicolumn{1}{c}{\textbf{WolframAlpha\texttrademark{}}}\\\hline\hline
$0.26$ & $0.43$\\\hline
\end{tabular}
\caption{Average precisions of \textsc{Quasl} and WolframAlpha\texttrademark{}}
\label{tab:evaluation-average-precision}
\end{table}
The values in Table~\vref{tab:evaluation-average-precision} have been computed
based on the goldstandard queries.
The definition of average precision can
be found in~\cite{Moffat:2008:RPM:1416950.1416952}.
It roughly corresponds to the proportion of good answers in the first $k$ search
results.
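For reference, a standard computation of average precision can be sketched as
follows; since each goldstandard query has exactly one relevant answer, it
reduces to the reciprocal rank of that answer. The data below is illustrative.

```python
def average_precision(ranked_results, relevant):
    """Mean of precision@k over the ranks k at which a relevant item appears."""
    hits, total = 0, 0.0
    for k, doc in enumerate(ranked_results, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0

# With exactly one relevant answer per query (as for the goldstandard
# queries), average precision is 1/rank of that answer.
print(average_precision(["a", "b", "c", "d"], {"c"}))  # answer at rank 3: 1/3
```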


% It would be great to lead some user satisfaction evaluation.
% We present figure~\vref{fig:html5} a screenshot of the result page for the query
% ``Revenue per store in New York''.
% \subsection{User experience}
% \begin{figure}[!h]
% \centering
% \includegraphics[width=5cm]{img/screenshot-1}
% \label{fig:html5}
% \caption{HTML5 result page for the query ``Revenue per store in New York''}
% \end{figure}
% Figure~\vref{fig:iphone} is a screenshot of the iPhone app that we have
% implemented, and that implements a siri-like support for three languages
% (English, German and French).
%  \begin{figure}[!h]
% \centering
% \includegraphics[width=5cm]{img/screenshot-2}
% \label{fig:iphone}
% \caption{iPhone app with Siri-like support}
% \end{figure}














\section{Auto-completion -- an experiment on the search platform}
\label{sec:evaluation-experiments-auto-completion}
Besides the answering system itself (described in chapter~\vref{sec:chapter-personalized}), we have investigated \emph{auto-completion}, i.e. the technique that automatically suggests expected words or keywords in the search box while the user is typing her query.





The front-end application of the system is composed of a search box, where
users are supposed to enter their query (see figure~\vref{fig:auto-complete}).
\begin{figure}
\centering
\includegraphics[width=10cm]{img/auto-complete}
\caption{Search-box of the answering system's front-end with an auto-completion
implementation based on collaborative metrics}
\label{fig:auto-complete}
\end{figure}
This experiment aims at providing suggestions on how to complete the current
user's query (before the user validates the query).
The problem of auto-completing a query is formalized as
follows~\cite{dasfaa12}:
\begin{equation}
QE:(u,q,params)\mapsto\{(q_1,r_1),\ldots,(q_n,r_n)\}
\label{eq:auto-completion}
\end{equation}
where $(q_i,r_i)$ is the collection of scored queries such that, for all $i$
from $1$ to $n$, $|q_i|=|q|+1$. The idea behind this formulation is to find
candidate \emph{entities} (i.e. dimensions and measures) that are best
associated with the query $q$.
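A minimal sketch of this expansion function is given below: each compatible
candidate entity is scored by its co-occurrence with the entities already in
the query, and the $n$ best extended queries are returned. The entity names
and co-occurrence values are assumptions for illustration.

```python
# Illustrative co-occurrence scores between entities (assumed values).
COOC = {("Revenue", "Store"): 0.8, ("Revenue", "Year"): 0.5}

def cooc(e1, e2):
    """Symmetric lookup of the co-occurrence score of two entities."""
    return COOC.get((e1, e2), COOC.get((e2, e1), 0.0))

def expand(query, candidates, n=5):
    """Return the n best (query + [e], score) pairs, with |q_i| = |q| + 1."""
    scored = [(query + [e], sum(cooc(e, qe) for qe in query))
              for e in candidates if e not in query]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:n]

print(expand(["Revenue"], ["Store", "Year", "Revenue"]))
# [(['Revenue', 'Store'], 0.8), (['Revenue', 'Year'], 0.5)]
```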

\subsection{Usage statistics in \ac{BI} documents}
The functional dependencies and hierarchies presented previously provide purely
structural knowledge about the associations between \ac{BI} entities. Beyond
this, some \ac{BI} platforms offer repositories of documents, such as reports or
dashboards, which can be used to compute actual usage statistics for measures
and dimensions. This kind of information is extremely valuable in our use case,
since query expansion requires finding the best candidate to associate with a
given set of measures and dimensions.

\subsubsection{Structure of \ac{BI} documents and co-occurrences}
We use the structure of \ac{BI} documents to define co-occurrences between
measures and dimensions. For instance, \ac{BI} reports are roughly composed of
sections which may contain charts, tables, text areas for comments, etc. Charts
and tables define important units of sense. Measures and dimensions associated
in the same table/chart are likely to be strongly related and represent an
analysis of specific interest to the user. Similarly, dashboards can be
composed of different pages or views which contain charts and tables. 
More generally, any \ac{BI} document referencing measures and dimensions could
be used to derive consolidated co-occurrences or usage statistics.

\subsubsection{Personal co-occurrence measure}
\ac{BI} platforms provide access control rules for business domain models and
documents built on top of them. Consequently, different users may not have
access to the same models and at a more fine-grained level to the same measures
and dimensions. Besides, repositories contain documents generated by and shared
(or not) between different users of the system. As a result, the measure of
co-occurrence that we define in this section is inherently personalized. Let us
consider a user $u$ and let $occ_u(e_1)$ denote the set of charts and tables --
visible to the user $u$ -- referencing a \ac{BI} entity $e_1$, be it a measure
or a dimension. Note that in the following we only consider compatible entities
(i.e. $e_1,e_2\in d$).

We define the co-occurrence of two entities $e_1$ and $e_2$ as the Jaccard
index of the sets $occu(e_1)$ and $occu(e_2)$:
\begin{equation}
cooc_u(e_1,e_2)=J(occ_u(e_1),occ_u(e_2))=\frac{|occ_u(e_1)\cap
occ_u(e_2)|}{|occ_u(e_1)\cup occ_u(e_2)|}
\label{eq:evaluation-coocu}
\end{equation}
The Jaccard index is a simple but commonly used measure of the similarity
between two sample sets.
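As a concrete illustration, the personalized measure of
equation~\vref{eq:evaluation-coocu} can be sketched in a few lines of Python;
the occurrence sets below are invented for the example, not taken from an
actual \ac{BI} repository:

```python
# Minimal sketch of the personalized co-occurrence measure cooc_u.
# occ_u maps each BI entity to the set of chart/table identifiers,
# visible to user u, that reference it (illustrative data).
occ_u = {
    "Sales revenue": {"chart1", "chart2", "table1", "table3"},
    "Quarter": {"chart1", "table3", "chart5"},
    "City": {"chart2"},
}

def cooc_u(e1, e2, occ=occ_u):
    """Jaccard index of the occurrence sets of two entities."""
    a, b = occ[e1], occ[e2]
    union = a | b
    return len(a & b) / len(union) if union else 0.0

print(cooc_u("Sales revenue", "Quarter"))  # 2 shared / 5 total = 0.4
```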

\subsection{Collaborative co-occurrence measure}
\subsubsection{Cold-start users and coverage}In recommender systems, the
\emph{coverage} is the percentage of items that can actually be recommended,
similar to the recall in information retrieval.
Formula~\vref{eq:evaluation-coocu} presents a problem for \emph{cold-start}
users, i.e. those new to the system. Indeed, these users do not have stored
documents from which co-occurrences can be computed. Collaborative recommender
systems introduce the contribution of other users in the item scoring function
to improve the system's coverage and enable the exploration of resources
previously unknown to (or unused by) the user. A simple approach consists in
using a linear combination of the user-specific value and the average over the
set of all \emph{users}.
\subsubsection{Using the social/trust network}The simple approach previously
described broadens the collaborative contribution to ``the whole world'' and
all users have the same weight. Trust-based recommender systems have
illustrated the importance of considering the user's social network and, e.g.,
favoring users close to the current
user~\cite{Jamali:2009:TRW:1557019.1557067}.
Narrowing the collaborative contribution down to close users presents benefits
at two levels: (a) results are more precisely personalized and (b) potential
precomputation is reduced.
Let us note $SN(u)$ the set of users in $u$'s social network which can be
filtered, e.g., to keep only users up to a certain maximum distance. We propose
the following refined co-occurrence measure, where $\alpha$ and $\beta$ are
positive coefficients to be adjusted experimentally such that $\alpha + \beta =
1$:
\begin{equation}
cooc(u,e_1,e_2)=\alpha\cdot
cooc_u(e_1,e_2)+\frac{\beta}{|SN(u)|}\cdot\sum_{u^\prime\in
SN(u)}\frac{1}{d(u,u^\prime)}cooc_{u^\prime}(e_1,e_2)
\end{equation}
This measure $cooc(u, e_1, e_2)$ is defined for entities $e_1$ and $e_2$
exposed to the user $u$ by access control rules. The contribution of each user
$u^\prime$ is weighted by the inverse of the distance $d(u, u^\prime)$.
Relations between users can be obtained from a variety of sources, including
popular social networks on the Web. However, this does not necessarily match
corporate requirements since users of the system are actual employees of a same
company. In this context, enterprise directories can be used to extract, e.g.,
hierarchical relations between employees. Clearly, other types of relations may
be considered, but the actual construction of the social network is beyond the
scope of this thesis.
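The collaborative measure above can be sketched as follows; the per-user
co-occurrence values, user names and distances are illustrative stand-ins for
$cooc_{u^\prime}$, $SN(u)$ and $d(u,u^\prime)$:

```python
# Hedged sketch of the collaborative co-occurrence measure
# cooc(u, e1, e2) = alpha * cooc_u + (beta / |SN(u)|) * sum over u' in SN(u)
# of cooc_{u'} / d(u, u').  All data below is invented for the example.

# Per-user Jaccard co-occurrence values for one fixed entity pair (e1, e2).
cooc_by_user = {"alice": 0.4, "bob": 0.2, "carol": 0.6}

# Social network of "alice" with distances d(u, u').
sn_alice = {"bob": 1, "carol": 2}

def cooc(u, cooc_by_user, sn, alpha=0.7, beta=0.3):
    """Combine the user's own measure with distance-weighted contributions
    of the users in her social network (alpha + beta = 1)."""
    own = alpha * cooc_by_user[u]
    if not sn:  # cold-start fallback: no network, keep the personal term only
        return own
    social = sum(cooc_by_user[v] / dist for v, dist in sn.items())
    return own + beta * social / len(sn)

score = cooc("alice", cooc_by_user, sn_alice)
# own = 0.7 * 0.4, social = 0.2/1 + 0.6/2, weighted by 0.3 / 2
```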

\subsubsection{User preferences}
We distinguish \emph{explicit} and \emph{implicit} preferences, respectively
noted $pref_{u,expl}$ and $pref_{u,impl}$. For a given entity $e$, we define
the user's preference function $pref_u$ as a linear combination of both
preferences, for instance simply:
\begin{equation}
pref_u(e)=\frac{1}{2}\left(pref_{u,impl}(e)+pref_{u,expl}(e)\right)
\end{equation}
Explicit preferences are feedback received from the user, e.g., in the form of
ratings (in $[0,1]$) assigned to measures and dimensions. Let us note $r_{u,e}$
the rating given by $u$ to $e$ and $\overline{r_u}$ the average rating given by
$u$.
We define $pref_{u,expl}(e)=r_{u,e}$ if $u$ has already rated $e$, and
$pref_{u,expl}(e)=\overline{r_u}$ otherwise.

Implicit preferences can be derived from a variety of sources, for instance by
analyzing logs of queries executed in users'
sessions~\cite{Giacometti:2008:FRO:1458432.1458446}. In our case, we consider
occurrences of \ac{BI} entities in documents manipulated by the user as a
simple indicator of such preferences:
\begin{equation}
pref_{u,impl}(e)=\frac{|occ_u(e)|}{\max_{e^\prime}|occ_u(e^\prime)|}
\end{equation}
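Putting explicit and implicit preferences together, the preference function
$pref_u$ can be sketched as below; the ratings and occurrence counts are
invented for the example:

```python
# Sketch of the user preference function pref_u, combining explicit ratings
# and implicit occurrence-based preferences (illustrative data).
ratings = {"Sales revenue": 0.9}                # explicit ratings r_{u,e} in [0,1]
occ_counts = {"Sales revenue": 8, "Quarter": 4, "City": 2}   # |occ_u(e)|

def pref_expl(e):
    # Fall back to the user's average rating when e has not been rated.
    avg = sum(ratings.values()) / len(ratings)
    return ratings.get(e, avg)

def pref_impl(e):
    # Occurrence count normalized by the most-used entity.
    return occ_counts.get(e, 0) / max(occ_counts.values())

def pref(e):
    # Simple linear combination of both preference signals.
    return 0.5 * (pref_impl(e) + pref_expl(e))
```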

\subsection{Query expansion}
The aim of our system is to assist the user in the query design phase by
ordering suggestions of measures and dimensions she could use to explore data.
When she selects a measure or a dimension, it is added to the query being built
and suggestions are refreshed to form new consistently augmented queries.

\subsubsection{Ranking}To complete a given query $q=\{e_1,\ldots,e_n\}$ with an
additional measure or dimension, we need to find candidate entities and rank
them. Candidate entities $c_j$, $j\in[1,p]$, are those defined in the same
domain and compatible with every $e_i$, as determined using functional
dependencies.
We then use the following personalized function to rank each candidate $c_j$:
\begin{equation}
rank_u(c_j,q)=\left\{\begin{array}{ll}
pref_u(c_j) & \textnormal{if }q=\emptyset\\
pref_u(c_j)\cdot\frac{1}{n}\sum_{i=1}^n cooc(u,c_j,e_i) & \textnormal{otherwise}
\end{array}\right.
\end{equation}
The query expansion component can thus be defined as:
\begin{equation}
QE:(u,q,params)\mapsto\{(q_1,rank_u(c_1,q)),\ldots,(q_p,rank_u(c_p,q))\}
\end{equation}
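To make the ranking concrete, here is a minimal Python sketch of $rank_u$ and
the resulting expansion step; the \texttt{pref} and \texttt{cooc} helpers are
passed in as plain functions standing for the personalized measures defined
earlier, with the user $u$ folded into them for brevity:

```python
# Hedged sketch of the personalized ranking function rank_u(c_j, q) and of
# the query expansion component.  pref(c) and cooc(c, e) are stand-ins for
# the preference and collaborative co-occurrence measures.

def rank(pref, cooc, candidate, query):
    """pref(c) if q is empty, else pref(c) times the mean cooc(c, e_i)."""
    if not query:
        return pref(candidate)
    return pref(candidate) * sum(cooc(candidate, e) for e in query) / len(query)

def expand(pref, cooc, candidates, query):
    """Return (candidate, score) pairs sorted by decreasing score."""
    scored = [(c, rank(pref, cooc, c, query)) for c in candidates]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```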
\subsubsection{Parameters}Beyond ranking, suggestions of the query expansion
component can be fine-tuned using various parameters:
\begin{itemize}
\item The maximum number of results.
\item The type of suggested entities can be limited to measures and/or
 dimensions.
\item The domain can be restricted to a list of accepted models.
\item Suggested dimensions can be grouped by and limited to certain
hierarchies.
\end{itemize}
This may be used to reduce the number of suggestions and encourage the user to
explore varied axes of analysis.
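Applying these parameters to a list of scored suggestions could look as
follows; the field names and data are illustrative, not taken from the actual
implementation:

```python
# Hedged sketch: filter scored suggestions by entity type and model,
# then keep the top results (illustrative field names).
def filter_suggestions(suggestions, max_results=5, types=None, models=None):
    out = [s for s in suggestions
           if (types is None or s["type"] in types)
           and (models is None or s["model"] in models)]
    out.sort(key=lambda s: s["score"], reverse=True)
    return out[:max_results]
```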

\subsection{Results of the experimentation}
We consider the query designer, which simply presents a search text box to the
user (see figure~\vref{fig:architecture-front-end}). As she types, candidate
measures and dimensions are proposed as auto-completion suggestions.
Figure~\vref{fig:auto-complete} shows measures (from distinct domain models)
suggested when the user starts typing `sa': `Sales revenue', `Avg of savegoal'
and `Keeper save goal'. On the right side of the figure, the user has selected
the first suggestion (i.e. `Sales revenue') and keeps typing `c'. The system suggests the two dimensions `City' and `Category'.
The auto-completion initialization requires that the user roughly knows the
names of objects she wants to manipulate, which may be a barrier to adoption.
To help her get started and explore available data, suggestions can be surfaced
to the user before she even starts typing. For instance, the most commonly used
measures and dimensions of various domain models could be suggested to start
with.
\begin{table}
\centering
\begin{tabular}{lll}\hline
\multicolumn{1}{c}{\textbf{Measure}} & \multicolumn{1}{c}{\textbf{Dimension}} &
\multicolumn{1}{c}{\textbf{Co-occurrence}}\\\hline\hline
\multirow{5}{*}{Sales revenue} & Quarter & 0.38\\\cline{2-3}
 & State & 0.25\\\cline{2-3}
 & Year & 0.25\\\cline{2-3}
 & Category & 0.25\\\cline{2-3}
 & Lines & 0.22\\\hline
\end{tabular}
\caption{Top-5 dimensions that most co-occur with the ``Sales revenue''
measure}
\label{tab:evaluation-auto-complete-result}
\end{table}
Table~\vref{tab:evaluation-auto-complete-result} presents the five dimensions
that most co-occur with the `Sales revenue' measure.


\stopcontents[chapters]
