
In this section we evaluate the selection approach using the virtual
instructor described in Section~\ref{sec:algorithm} and illustrated in
Section~\ref{sec:case-study}. We are interested both in the \emph{subjective}
assessment made by players when they interact with our instructor (naturalness,
ease of identifying objects, etc.), and in the \emph{objective} evaluation of the
system in terms of task performance (success rate, duration, etc.). We present
the subjective evaluation in Section~\ref{sec:subjective} and the objective
evaluation in Section~\ref{sec:objective}. The relation between the results of
these evaluations and the method itself, with its underlying corpus, is discussed
in Section~\ref{sec:coverage}.

\subsection{Corpus collection} \label{sec:corpus}

In order to ensure that the instructor was an expert in the virtual
worlds, and hence able to give effective instructions, we collected
the corpus in a Wizard of Oz fashion~\cite{dahlback-iui93}: the role of the instructor was
played by a \emph{wizard}, a single person who was familiar with the virtual
worlds and the task to be completed. To play the user role we recruited 14
volunteers with different demographic characteristics that have been shown to have an
impact on the behavior of users in virtual
worlds~\cite{KolStrGarByrCasDalMooObe10}, namely gender and video game familiarity. Our volunteers differed in gender (5 female
and 9 male) and in
familiarity with video games (6 expert gamers, 4 occasional players and 4
non-players).
We asked each volunteer to follow the wizard's instructions in order to complete
a given task in each of the three evaluation
worlds described in~\cite{KolStrGarByrCasDalMooObe10} and illustrated in Fig.~\ref{worlds}.
All the volunteers were able to complete
the task in world 1, but only 12 completed world 2 and only 11 also finished world 3.
%This was due to the lack of expertise of some of the users with virtual worlds,
%which made the interactions too long and tedious to ask them to complete the
%tasks in all virtual worlds. Considering only successful games, an average game
%in the collected corpora consists of 47 utterances in world 1, 45 utterances in
%world 2 and 62 utterances in world 3 with an average length of 7 words each. It
%took the users an average of 14 minutes to complete the game in
%world 1, 13 minutes in world 2, and 17 minutes in world 3. 
The recorded data constitute a corpus of 37 games and 2163 instructions, spanning
6 hours and 9 minutes. %It took less than 15 hours to collect the corpora through the web and the subjects reported that the experiment was fun.  

\begin{figure}[!h]
\begin{center}
\includegraphics[width=1\linewidth]{images/give-worlds.jpeg} 
\end{center}
%\vspace*{-5mm}
\caption{2D maps of the virtual worlds used for corpus collection. \label{worlds}}
\end{figure}
%\vspace*{-2mm}

Since our approach to generating virtual instructors needs a corpus for each
virtual world in which the instructor gives instructions, we attempted to
minimize the collection effort by restricting it to 3 days, during which we
managed to collect data from the 14 volunteers described above. In
Section~\ref{sec:evaluation} we analyze the results obtained during the
evaluation done in the GIVE Challenge and discuss the correlation between the
metrics and the corpus coverage of the virtual worlds.

%\subsection{Subjective ratings by human users} \label{sec:subjective}
%
%The player perception when receiving instructions is an important metrics because
%if this perception is too negative, the player may abandon the instructing
%process and the underlying task. We propose the same subjective evaluation than
%in GIVE-2~\cite{GarGarKolStr10}, using the same worlds and metrics but that we
%performed ourselves on 13 interactions\footnote{We are favoring subjective
%metrics from GIVE-2 over GIVE-2.5 since GIVE-2 includes more metrics related to
%naturalness}. (SIZE OF CORPUS ?)
%
%The subjective metrics are obtained by asking the player to fill a questionnaire
%about his assessment after each game. Players have to rate the truth about
%different statements using a slider whose values range from -100 to 100. For
%negative statements such as ``I was confused about which direction to go in'', we
%report reversed score as in GIVE-2, such that higher scores always mean a better
%system. The GIVE-2 questionnaire includes twenty two statements, we focus here on
%the most important ones by comparing our system to the three best systems that
%participated in GIVE-2, NA, Saar and NM. The metrics from Q1 to Q13 and Q22
%assess the effectiveness and reliability of instructions. For almost all these
%metrics we obtain similar or slightly lower results than the other systems.
%However, three of these metrics, Q5, Q6 and Q22 are particularly interesting as
%shown in Table~\ref{table:subjective-quality}. The low score of Q5 (direction
%confusion) and Q22 (player trust) can be explained by unsuccessful utterances
%caused by limitations of the system. For instance, corrections such as ``not this
%way'' or direction instruction such as ``turn left'' are sometimes contradictory
%and thus cause confusion that can explain these low scores. We estimate that
%these scores are a direct consequence of the inability of the system to consider
%directions in the planning formalism~(see Section~\ref{sec:....}). Nevertheless,
%player judged very positively the ability of the system to refer to objects as
%shown in Q6. It demonstrates the efficiency of the referring process despite the
%fact that nothing in our algorithms is dedicated to reference.
%
%\begin{table}[!ht]
%\begin{small}
%\begin{center}
%\begin{tabular}{llll}
%NA & Saar & NM & OUR \\
%\hline
%\multicolumn{4}{@{}p{7.7cm}@{}}{Q5: I was confused about which direction
%to go in} \\ 
%29 & 5 & 9 & -12 \\
%\hline
%\multicolumn{4}{@{}p{7.6cm}@{}}{Q6: I had no difficulty with identifying
%the objects the system described for me} \\ 
%18 & 20 & 13 & 40 \\ 
%\hline
%\multicolumn{4}{@{}p{7.6cm}@{}}{Q22: I felt I could trust the system's instructions} \\ 
%37 & 21 & 23 & 0 \\ 
%\hline
%\end{tabular}
%\end{center}
%\end{small}
%\caption{Results for the significantly different \emph{subjective} measures assessing the effectiveness of the instructions
%(the greater the number, the better the system)
% \label{table:subjective-quality}} 
%\end{table}
%
%Metrics Q14 to Q20 are intended to assess the naturalness of the instructions, as
%well as the immersion and engagement of the interaction. We show in
%Table~\ref{table:subjective-engagement} the most interesting ones. Our instructor
%proves to be much more natural than the other systems (less robotic in Q14 and
%less repetitive in Q15). Metrics Q17 ``I lost track of time while solving the
%task'' provides an estimation of the state of mind of the players in direct
%relation with the \emph{theory of flow} (CITE!) and our instructor also
%outperforms the other systems with regards to this metrics. No clear tendency is
%observed for the other metrics. 
%
%\begin{table}[!ht]
%\begin{small}
%\begin{center}
%\begin{tabular}{llll}
%NA & Saar & NM & OUR \\
%\hline
%\multicolumn{4}{@{}p{7.7cm}@{}}{Q14: The system's instructions sounded robotic} \\ 
% -4 & 5 & -1 & 28 \\
%\hline
%\multicolumn{4}{@{}p{7.7cm}@{}}{Q15: The system's instructions were repetitive} \\ 
% -31 & -26 & -28 & -8 \\
%\hline
%\multicolumn{4}{@{}p{7.7cm}@{}}{Q17: I lost track of time while solving the task} \\ 
% -16 & -11 & -18 & 16 \\
%\hline
%\end{tabular}
%\end{center}
%\end{small}
%\caption{Results for the \emph{subjective} measures assessing the naturalness and engagement of the instructions
%(the greater the number, the better the system)
% \label{table:subjective-engagement}} 
%\end{table}

\subsection{Task performance measures} \label{sec:objective}

%(INCLUDE WORLD-BASED COMPARISON TO BETTER LINK WITH THE NEXT SECTION)

In order to evaluate task performance we participated in the 2011 edition of
the GIVE challenge (GIVE-2.5), in which eight instructors
competed~\cite{Striegnitz2011}. The challenge gathered five manually authored
rule-based systems (A, C, L, P2, T), two supervised machine learning systems trained
on manually annotated corpora (B, P1) and our own system (CL), described in
Section~\ref{sec:algorithm} and trained on the corpus introduced in
Section~\ref{sec:corpus}. The data, collected via the Internet, comprises 587
interactions.

The objective metrics are extracted from the interaction logs and are
summarized in Table~\ref{table:objective}. The success/canceled/lost metrics
correspond to the status of each game: a \emph{success} status if the player
managed to reach the trophy thanks to the instructor, a \emph{canceled} status if
the player willingly abandoned the task, and a \emph{lost} status if the player
stepped on an alarm and lost the game. For successful games, we also report the
average duration and the average number of mouse actions. The number of mouse
actions is the number of button clicks; a higher number means that the player
tried several buttons before finding the expected one.

\begin{table}[!ht]
\begin{small}
\begin{center}
\begin{tabular}{@{}p{2.2cm}@{}cccccccc@{}}
& A & B & C & CL & L & P1 & P2 & T \\
\hline
Task success (\%) & 40 & 31& 70& 64& 66& 66& 62& 53\\
\hline
Canceled (\%) & 34 & 44 & 17 & 23 & 20 & 20 & 15 & 22\\
\hline
Lost (\%) & 26 & 25 & 13 & 13 & 14 & 14 & 23 & 25\\
\hline
Time (sec) & 705 & 675 & 527 & 512 & 344 & 401 & 415 & 480 \\ 
\hline
Mouse actions & 18 & 35 & 15 & 15 & 14 & 14 & 16 & 16 \\
\hline  
\end{tabular}
\end{center}
\end{small}
\vspace*{-.2cm}
\caption{Results for the \emph{task performance} metrics \label{table:objective}} 
\end{table}
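The aggregation of these metrics from game logs can be sketched as follows. The
log fields and values below are illustrative placeholders, not the actual
GIVE-2.5 log format:

```python
from statistics import mean

# Hypothetical per-game log entries (field names are assumptions).
logs = [
    {"status": "success", "duration_sec": 490, "mouse_actions": 14},
    {"status": "success", "duration_sec": 534, "mouse_actions": 16},
    {"status": "canceled", "duration_sec": 210, "mouse_actions": 9},
    {"status": "lost", "duration_sec": 120, "mouse_actions": 4},
]

def task_metrics(logs):
    """Success/canceled/lost rates over all games, plus average
    duration and mouse actions over successful games only."""
    n = len(logs)
    rates = {s: 100 * sum(g["status"] == s for g in logs) / n
             for s in ("success", "canceled", "lost")}
    won = [g for g in logs if g["status"] == "success"]
    return {
        **rates,
        "avg_time_sec": mean(g["duration_sec"] for g in won),
        "avg_mouse_actions": mean(g["mouse_actions"] for g in won),
    }

print(task_metrics(logs))
```

Note that duration and mouse actions are averaged over successful games only, as
in Table~\ref{table:objective}, since canceled and lost games end prematurely.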

In terms of task success, the performance of the CL system (64\%) is comparable
to that of the best systems that participated in the challenge---C, L, and
P1---and only 6 points lower than that of the best performing system. This result
is encouraging since all these systems require the development of hand-designed
strategies, making their design and implementation labor intensive. For
instance, the C system~\cite{Racca2011} implements the grounding model of
Traum~\cite{Traum1999} and took over 6 person-months to develop, while the CL system
can be built for a new virtual world in a matter of days if the necessary inputs are
given (see Section~\ref{sec:discussion} for details).

With respect to the time needed to help players finish successfully, the CL
system is considerably slower than the fastest systems, such as L and P1. This
is probably due to the strategy implemented by CL, which verbalizes paraphrases
of the instructions every 3 seconds. This strategy gives too many instructions
to the direction follower (DF), as made evident by the fact that, in the GIVE
Challenge evaluation, the DF did not react to 33\% of the utterances produced by
the system.

Finally, the CL system was among the top performers when considering how
effectively the systems helped users identify the objects that they needed to
manipulate in the virtual world. This is shown by the low number of mouse actions
required to complete the task. This correlates with the subjective evaluation of
referring expression quality (Section~\ref{sec:subjective}).

Note that the scores presented here differ from those reported
in~\cite{Striegnitz2011}. We observed that our system did not behave the same
way in all the evaluation worlds, so the scores provided here are weighted
according to the number of interactions in each world. Our hypothesis is that our
system is highly dependent on its training corpus, and we examine this
dependence more closely in the next section.
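As a sketch of this weighting, each per-world score contributes in proportion to
the number of interactions played in that world. The figures below are made up
for illustration, not the actual GIVE-2.5 counts:

```python
def weighted_score(per_world):
    """Average per-world scores, weighted by the number of
    interactions recorded in each world."""
    total = sum(n for _, n in per_world.values())
    return sum(score * n for score, n in per_world.values()) / total

# (success rate in %, number of interactions) per world -- illustrative
per_world = {"world1": (80, 30), "world2": (60, 20), "world3": (40, 10)}
print(round(weighted_score(per_world), 1))  # (80*30 + 60*20 + 40*10) / 60 = 66.7
```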


\subsection{Corpus coverage} \label{sec:coverage}

In order to examine this dependence on the corpus, we check how well
the corpus covers the actions that the player is required to perform in a given
world. Indeed, if the player reaches a state that requires actions that were not
observed while collecting the corpus, the selection process fails and returns an
empty set of candidates. We thus distinguish the \emph{training corpus}, i.e.\
the corpus of human--human interactions we collected to train our system, from
the \emph{challenge corpus}, i.e.\ the corpus resulting from all interactions
between the system and the players during the GIVE-2.5 challenge.

Figure~\ref{fig:actions-per-corpus} compares, for each world, the number of
different actions found in the training corpus and in the challenge
corpus. It shows that in world 3, 41\% of the actions observed during the
challenge were new, as opposed to 17\% in world 2 and 1\% in world 1. That is,
while performing in challenge world 3, the system faced many new situations and
new actions that had not been encountered in the training corpus. This
observation is confirmed by the percentage of ``go'' instructions, the default
instruction uttered when no appropriate instruction is found in the corpus, as
seen in Figure~\ref{fig:go-instructions}. We observe that around 13\% of all
instructions in world 3 are ``go'' instructions, demonstrating that the corpus
coverage of world 3 was indeed lower than that of the other worlds. While ``go''
instructions may be a plausible cause of game cancellations, we note that their
impact depends on the world configuration: for instance, a ``go'' instruction in
a corridor is less likely to cause problems than one given at an intersection.
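A minimal sketch of this fallback behavior, assuming a simplified corpus that
maps each observed action to the instructions recorded for it (the actual
selection algorithm is described in Section~\ref{sec:algorithm}):

```python
def select_instruction(corpus, required_action):
    """If the required action was never observed in the corpus, the
    candidate set is empty and the generic default "go" is uttered."""
    candidates = corpus.get(required_action, [])
    if not candidates:
        return "go"       # default instruction: no corpus coverage
    return candidates[0]  # otherwise pick a recorded candidate

# Toy corpus: one observed action with one recorded instruction.
corpus = {"press(button1)": ["push the red button near the door"]}
print(select_instruction(corpus, "press(button1)"))
print(select_instruction(corpus, "open(door2)"))  # unseen action -> "go"
```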

\begin{figure}[!ht]
\centering
\subfloat[Number of actions per corpus]{\label{fig:actions-per-corpus}\includegraphics[width=5cm]{images/action_coverage.png}}
\subfloat[Percentage of default instruction]{\label{fig:go-instructions}\includegraphics[width=5cm]{images/go_instructions_single_bar.png}}
\caption{Number of actions and default instructions}
\label{fig:action-go}
\end{figure}

\begin{figure}[!ht]
\centering
\subfloat[training corpus]{\label{fig:corpus-coverage-training}\includegraphics[width=5.5cm]{images/corpus_coverage.png}}
\subfloat[challenge corpus]{\label{fig:corpus-coverage-eval}\includegraphics[width=5.5cm]{images/corpus_coverage_eval.png}}
\caption{Cumulative number of different actions in the first $1,2,\ldots,n$ interactions}
\label{fig:corpus-coverage}
\end{figure}

The question that immediately arises is how many actions and how many
interactions must be recorded to cover a given world well. To estimate the
number of interactions that have to be recorded for a given world, we count the
cumulative number of actions that are covered by recorded instruction--reaction
pairs after 1, 2 or more interactions. When a plateau is reached, recording a
new interaction no longer covers new actions. This criterion does not provide the
optimal number of interactions, since the selection algorithm can also fail if new
action \emph{sequences} are met during execution, but it guarantees that the
system will fail if the recorded number of interactions is below the plateau
threshold. Figure~\ref{fig:corpus-coverage-training} shows that, in the training
corpus, a plateau is reached after 7 interactions for worlds 1 and 2. In world 3,
no plateau is reached after 11 interactions, meaning that we failed to record
enough interactions to reach a satisfactory coverage. This is likely correlated
with the lower task success that we observe in world 3. Indeed, if we compute the
same cumulative number of actions within the \emph{challenge} corpus
(Figure~\ref{fig:corpus-coverage-eval}), that is, over what happened with real
players, we observe that a plateau is reached in world 3 after 17 interactions.
In other words, had we recorded 6 more interactions, we would presumably have
raised the coverage of the corpus enough that the system would not have faced
unexpected actions in this world. This methodology thus provides a practical way
to estimate whether the corpus is large enough for a given
world.
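The plateau criterion can be sketched as follows. The toy recordings and the
flatness window are illustrative simplifications of our counting procedure:

```python
def cumulative_new_actions(interactions):
    """Cumulative number of distinct actions covered after recording
    the first 1, 2, ..., n interactions (the curves in the figures)."""
    seen, curve = set(), []
    for actions in interactions:
        seen.update(actions)
        curve.append(len(seen))
    return curve

def plateau_reached(curve, window=3):
    """Heuristic plateau test: no new action was covered over the
    last `window` recorded interactions (the window is an assumption)."""
    return len(curve) > window and curve[-1] == curve[-1 - window]

# Toy recordings: each set holds the actions seen in one interaction.
recordings = [{"a", "b"}, {"b", "c"}, {"c"}, {"a"}, {"b"}]
curve = cumulative_new_actions(recordings)
print(curve)                   # [2, 3, 3, 3, 3]
print(plateau_reached(curve))  # True
```

If the curve is still rising when the recordings run out, as in world 3's
training corpus, the criterion signals that more interactions should be
collected before deploying the instructor.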

It is interesting to note that our approach only requires a low number of
interactions to cover the variety of a given world and achieve a satisfying
result. This contrasts with the large amount of data that statistical
learning methods generally require. In particular, we can compare our approach
with another corpus-based approach that competed in GIVE-2.5, the B
system~\cite{Dethlefs2011a}. This system mixes hand-crafted behavior with decision
trees trained on the GIVE-2 corpus of 63 English
interactions~\cite{GarGarKolStr10}, manually annotated with reference and
navigation descriptions. As the author suggests, the low success score of the
system is caused by data sparsity. We note, though, that there are promising
approaches based on hierarchical reinforcement learning that may overcome this
data sparsity, although these approaches have not yet been evaluated in the GIVE
challenge~\cite{Dethlefs2011b}.


