The natural setup to evaluate our approach is the GIVE challenge, which compares
instruction-giving systems for the GIVE task with regard to their objective
performance (task success, speed, number of instructions, etc.) and the
subjective assessment of players interacting with the systems (naturalness,
friendliness, coherence, etc.). There have been three iterations of the challenge
so far: GIVE-1, in which the environment was discrete (users could only move
one step at a time and turn 90 degrees); GIVE-2, which introduced
continuous movements; and GIVE-2.5, which used the same setup as GIVE-2 with
different worlds.

We first present the evaluation results collected in our laboratory, comparing a system generated by our algorithms to the systems that participated in the GIVE-2 challenge, in Section~\ref{in_lab}. Then we describe the results of our participation in the GIVE-2.5 challenge in
Section~\ref{give2-5}, and examine these results in
Section~\ref{sec:coverage}.

\subsection{In-lab evaluation: comparison with the GIVE-2 participants} \label{in_lab}

We performed an in-lab evaluation of the selection approach. We
trained the virtual instructor on the publicly available GIVE
corpus~\cite{GarGarKolStr10} and compared it to the systems that
participated in the GIVE-2 challenge. The main results are summarized in~\cite{Benotti2011a}, and we discuss them below. They show that, despite the fact that the corpus was not collected specifically for use by our algorithms, our system performs as well as the other systems with regard
to the objective metrics and much better with regard to naturalness, especially
the quality of referring expressions.

We automatically annotated the 3417 instructions of the English corpus, as described in Section~\ref{sec:algorithm}. 22\% of the English instructions had an empty reaction. A reaction is empty if the IF did not execute any action in response to the instruction and waited for another one. This happens in two situations: either the instruction was not understood and the IF asked for a rephrasing, or the instruction explicitly indicated that the IF should stop moving. For training and evaluating our algorithms we used the instructions that do not have an empty reaction: 2665 English instructions. A fragment of the annotated corpus is shown in the Appendix.
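The filtering step above can be sketched as follows. This is a minimal illustration using a hypothetical record layout (field names like \texttt{text} and \texttt{reaction} are assumptions; the real GIVE corpus format differs):

```python
# A minimal sketch of the empty-reaction filter described above, using a
# hypothetical record layout (the real GIVE corpus format differs): an
# instruction's reaction is empty when the IF executed no action before
# receiving the next instruction.
def filter_empty_reactions(instructions):
    """Keep only instructions whose annotated reaction is non-empty."""
    return [ins for ins in instructions if ins["reaction"]]

corpus = [
    {"text": "go through the blue door", "reaction": ["move(r1, r2)"]},
    {"text": "wait, stop there",         "reaction": []},  # explicit stop
    {"text": "press the red button",     "reaction": ["press(b4)"]},
    {"text": "sorry, say that again?",   "reaction": []},  # rephrase request
]

usable = filter_empty_reactions(corpus)
print(len(usable))  # 2
```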

We collected data from 13 subjects. The participants were mostly graduate
students, 7 female and 6 male. They were not native English speakers but rated
their English skills as near-native or very good. The evaluation comprises both objective measures, which we discuss in Section~\ref{objective}, and subjective measures, which we discuss in
Section~\ref{subjective}.


\subsubsection{Objective metrics} \label{objective}

The objective metrics we extracted from the interaction logs are summarized in
Table~\ref{table:objective}.  The table compares our results with both human
instructors and the three rule-based virtual instructors that were top rated in
the GIVE-2 Challenge. Their results correspond to those published
in~\cite{KolStrGarByrCasDalMooObe10}, which were collected not in a laboratory but
by connecting the systems to users over the Internet. These hand-coded systems are
called NA, NM and Saar. We refer to our system as OUR.

\begin{table}[!h]
\begin{small}
\begin{center}
\begin{tabular}{@{}p{2.2cm}@{}ccccc@{}}
& Human & NA & Saar & NM & OUR \\
\hline
Task success & 100\% & 47\% & 40\% & 30\% & 70\% \\
\hline
Canceled & 0\% & 24\% & n/a & 35\% & 7\% \\
\hline
Lost & 0\% & 29\% & n/a & 35\% & 23\% \\
\hline
Time (sec) & 543 & 344 & 467 & 435 & 692 \\ 
\hline
Mouse actions & 12 & 17 & 17 & 18 & 14 \\
\hline 
Utterances & 53 & 224 & 244 & 244 & 194 \\
\hline  
\end{tabular}
\end{center}
\end{small}
\vspace*{-.2cm}
\caption{Results for the \emph{objective} metrics \label{table:objective}} 
\end{table}
%\footnotetext[3]{Data not available}


In the table we show the percentage of games that users completed successfully
with the different instructors. Unsuccessful games can be either
canceled or lost. We also measured the average time until task completion, and the average 
number of utterances users received from each system. To ensure comparability, we only
counted successfully completed games.

In terms of task success, our system performs better than all the hand-coded systems.
We note that,
for the GIVE Challenge in particular (and probably for human evaluations in general),
success rates in the laboratory tend
to be higher than success rates online (this is also the case for completion times)~\cite{KolStrByrCasDalDalMooObe09}.
Koller et al.\ attribute this
difference to laboratory subjects being discouraged from
canceling a frustrating task, while online users are not.
However, it is also possible that people canceled less because they found the
interaction more natural and engaging, as suggested by the results of the
subjective metrics (see next section).

In any case, our results are preliminary given the small number of subjects we
tested, but they are indeed encouraging.
In particular, our system helped users better identify the objects that they
needed to manipulate in the virtual world, as shown by the low number of mouse
actions required to complete the task (a high number indicates that the user
manipulated wrong objects). This correlates with the subjective evaluation
of referring expression quality (see next section).

%If our system ought to be a good virtual instructor, it needs to approach human
%rates in task success. 
We performed a detailed analysis of the
instructions uttered by our system that were unsuccessful, that is, all the
instructions that did not cause the intended reaction as annotated in the corpus.
Of the 2081 instructions uttered in total (adding up the utterances of the 13
interactions), 1304 (63\%) were successful and 777 (37\%) were unsuccessful.

Given the limitations of the annotation discussed in Section~\ref{annotation}
(wrong annotation of correction utterances and no representation of user
orientation), we classified the unsuccessful utterances using lexical cues into
1)~corrections such as ``no'' or ``wrong'', 2)~orientation instructions such as
``left'' or ``straight'', and 3)~other. We found that 25\% of the unsuccessful
utterances are of type 1, 40\% of type 2, and 34\% of type 3 (the remaining 1\%
corresponds to the default utterance ``go'' that our system utters when the set
of candidate utterances is empty). In Section~\ref{conclusions} we propose an
improved virtual instructor designed as a result of this error analysis.
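The lexical-cue classification above can be sketched as follows; the cue lists are illustrative assumptions, not the exact cues used in the analysis:

```python
import re

# A sketch of the lexical-cue classification of unsuccessful utterances
# (the cue lists below are illustrative; the exact cues used in the
# analysis are not reproduced here).
CORRECTION_CUES = {"no", "wrong", "not"}
ORIENTATION_CUES = {"left", "right", "straight", "turn", "back"}

def classify(utterance):
    """Classify an unsuccessful utterance by surface lexical cues."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    if words & CORRECTION_CUES:
        return "correction"
    if words & ORIENTATION_CUES:
        return "orientation"
    return "other"

print(classify("no, wrong way"))     # correction
print(classify("turn left here"))    # orientation
print(classify("press the button"))  # other
```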

\subsubsection{Subjective metrics} \label{subjective}

The subjective measures were obtained from responses to the GIVE-2 questionnaire
that was presented to users after each game. It asked users to rate different
statements about the system using a continuous slider. The slider position was
translated to a number between -100 and 100. As done in GIVE-2, for negative
statements we report the reversed scores, so that in
Tables~\ref{table:subjective-quality} and~\ref{table:subjective-engagement}
greater numbers indicate that the system is better (for example, Q14 shows that
the OUR system is less robotic than the rest). In this section we compare our results with
the systems NA, Saar and NM, as we did in Section~\ref{objective}; we cannot
compare against human instructors because these subjective metrics were not
collected in~\cite{GarGarKolStr10}.
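The score normalization just described can be sketched as follows, assuming the slider range stated above:

```python
# A sketch of the questionnaire score handling described above: slider
# positions map to [-100, 100], and the scores of negative statements are
# reversed so that greater numbers always mean a better system.
def normalized_score(raw, negative_statement=False):
    assert -100 <= raw <= 100
    return -raw if negative_statement else raw

# Q14 ("The system's instructions sounded robotic") is a negative
# statement, so a raw slider value of -28 is reported as 28.
print(normalized_score(-28, negative_statement=True))  # 28
print(normalized_score(40))  # 40
```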

The GIVE-2 Challenge questionnaire includes twenty-two subjective metrics.
Metrics Q1 to Q13 and Q22 assess the effectiveness and reliability of
instructions.  For almost all of these metrics we obtained similar or slightly
lower results than the three hand-coded systems, except for the three
metrics shown in Table~\ref{table:subjective-quality}. We suspect that
the low results for Q5 and Q22 relate to the unsuccessful utterances
identified and discussed in Section~\ref{objective} (for instance, corrections
were sometimes contradictory, causing confusion and leading subjects to ignore
them as they advanced in the interaction). The unexpectedly high result for Q6,
which indirectly assesses the quality of referring expressions, demonstrates the
effectiveness of the referring process, despite the fact that nothing in the
algorithms is dedicated to reference. This good result is probably correlated
with the low number of mouse actions mentioned in Section~\ref{objective}.

\begin{table}[!h]
\begin{small}
\begin{center}
\begin{tabular}{llll}
NA & Saar & NM & OUR \\
\hline
\multicolumn{4}{@{}p{7.7cm}@{}}{Q5: I was confused about which direction
to go in} \\ 
29 & 5 & 9 & -12 \\
\hline
\multicolumn{4}{@{}p{7.6cm}@{}}{Q6: I had no difficulty with identifying
the objects the system described for me} \\ 
18 & 20 & 13 & 40 \\ 
\hline
\multicolumn{4}{@{}p{7.6cm}@{}}{Q22: I felt I could trust the system's instructions} \\ 
37 & 21 & 23 & 0 \\ 
\hline
\end{tabular}
\end{center}
\end{small}
\caption{Results for the significantly different \emph{subjective} measures assessing the effectiveness of the instructions
(the greater the number, the better the system)
 \label{table:subjective-quality}} 
\end{table}

Metrics Q14 to Q20 are intended to assess the naturalness of the instructions, as
well as the immersion and engagement of the interaction. As
Table~\ref{table:subjective-engagement} shows, in spite of the unsuccessful
utterances, our system is rated as more natural and more engaging (in general) than
the best systems that competed in the GIVE-2 Challenge. 

\begin{table}[!h]
\begin{small}
\begin{center}
\begin{tabular}{llll}
NA & Saar & NM & OUR \\
\hline
\multicolumn{4}{@{}p{7.7cm}@{}}{Q14: The system's instructions sounded robotic} \\ 
 -4 & 5 & -1 & 28 \\
\hline
\multicolumn{4}{@{}p{7.7cm}@{}}{Q15: The system's instructions were repetitive} \\ 
 -31 & -26 & -28 & -8 \\
\hline
\multicolumn{4}{@{}p{7.7cm}@{}}{Q16: I really wanted to find that trophy} \\ 
 -11 & -7 & -8 & 7 \\
\hline
\multicolumn{4}{@{}p{7.7cm}@{}}{Q17: I lost track of time while solving the task} \\ 
 -16 & -11 & -18 & 16 \\
\hline
\multicolumn{4}{@{}p{7.7cm}@{}}{Q18: I enjoyed solving the task} \\ 
 -8 & -5 & -4 & 4 \\
\hline
\multicolumn{4}{@{}p{7.7cm}@{}}{Q19: Interacting with the system was really annoying} \\ 
 8 & -2 & -2 & 4 \\
\hline
\multicolumn{4}{@{}p{7.7cm}@{}}{Q20: I would recommend this game to a friend} \\ 
 -30 & -25 & -24 & -28 \\
\hline
\end{tabular}
\end{center}
\end{small}
\caption{Results for the \emph{subjective} measures assessing the naturalness and engagement of the instructions
(the greater the number, the better the system)
 \label{table:subjective-engagement}} 
\end{table}



\subsection{Online challenge evaluation: Comparison with the GIVE-2.5 participants} \label{give2-5}

In order to participate in the GIVE challenge, our selection approach required us to
collect a corpus on the virtual worlds used in the GIVE-2.5 challenge. To make sure that
the IG was an expert in the task, and hence able to give
effective instructions, we collected the corpus in a Wizard of Oz
fashion~\cite{dahlback-iui93}: the role of the IG was played by a
\emph{wizard}, a single person who was familiar with both the worlds and the
task. To play the IF role we recruited 14 volunteers differing in demographic
characteristics that have been shown to affect the behavior of users in
virtual worlds~\cite{KolStrGarByrCasDalMooObe10}, namely gender and video game
familiarity. Our 14 volunteers differed in gender (5 female and 9 male) and
familiarity with video games (6 expert gamers, 4 occasional players and 4
non-players). We asked each volunteer to follow the wizard's instructions in
order to complete a given task in each of the three evaluation worlds described
in~\cite{KolStrGarByrCasDalMooObe10} and illustrated in Fig.~\ref{worlds}. All
the volunteers completed the task in world 1, but only 12 completed world
2 and 11 completed world 3. The recorded data, collected over 3 days,
constitutes a corpus composed of 37 games and 2163 instructions,
spanning 6:09 hours. The corpus was then annotated following the algorithm
introduced in Section~\ref{sec:algorithm}. The three worlds aim to
evaluate different aspects of the instruction-giving process: world 1 is a
basic, general-purpose world; world 2 aims to evaluate
referring expressions in complex situations (a structured grid of buttons, or a
target button lost in a myriad of buttons); and world 3, the most complex
world, aims to evaluate navigational instructions, made necessary by a room
full of alarms and a corridor that changes its configuration after a
button is pushed. World 3 is also the world with the largest number of
discretized regions (124), whereas world 1 contains 53 regions and world 2, 90.

\begin{figure}[!h]
\begin{center}
\includegraphics[width=1\linewidth]{images/give-worlds.jpeg} 
\end{center}
%\vspace*{-5mm}
\caption{2D maps of the virtual worlds used to record the interactions. \label{worlds}}
\end{figure}
%\vspace*{-2mm}


%\subsection{Task performance measures} \label{sec:results}

Eight virtual instructors participated in the GIVE-2.5
challenge~\cite{Striegnitz2011}: five manually authored
rule-based systems (A, C, L, P2, T), two supervised machine learning systems trained
on manually annotated corpora (B, P1), and our own system (CL), described in
Section~\ref{sec:algorithm} and trained on the corpus presented in the previous
section. The evaluation data, collected via the Internet, comprises 587
interactions.

The objective metrics presented here are extracted from the interaction logs and are
summarized in Table~\ref{table:objective-give25}. The success/canceled/lost metrics
correspond to the status of each game: \emph{success} if the IF
managed to complete the task, \emph{canceled} if
the IF willingly abandoned the task, and \emph{lost} if the IF
stepped on an alarm and lost the game. For successful games, we also provide the
average duration and the average number of mouse actions. The number of mouse
actions is the number of button clicks; a higher number means that the player
tried several buttons before reaching the expected one.

\begin{table}[!ht]
\begin{small}
\begin{center}
\begin{tabular}{@{}p{2.2cm}@{}cccccccc@{}}
& A & B & C & CL & L & P1 & P2 & T \\
\hline
Task success (\%) & 40 & 31& 70& 64& 66& 66& 62& 53\\
\hline
Canceled (\%) & 34 & 44 & 17 & 23 & 20 & 20 & 15 & 22\\
\hline
Lost (\%) & 26 & 25 & 13 & 13 & 14 & 14 & 23 & 25\\
\hline
Time (sec) & 705 & 675 & 527 & 512 & 344 & 401 & 415 & 480 \\ 
\hline
Mouse actions & 18 & 35 & 15 & 15 & 14 & 14 & 16 & 16 \\
\hline  
\end{tabular}
\end{center}
\end{small}
\vspace*{-.2cm}
\caption{Results for the \emph{task performance} metrics \label{table:objective-give25}} 
\end{table}

In terms of task success, the CL system's performance (64\%) is comparable to that of
the best systems that participated in the challenge ---C, L, and P1. The CL task
success is only 6 percentage points below that of the best-performing system. This result
is encouraging since all those systems require the development of hand-designed
strategies, making their design and implementation labor intensive. For
instance, the C system~\cite{Racca2011} implements the grounding model of
Traum~\cite{Traum1999} and took over six person-months to develop, while the CL system was built in 3 days (see Section~\ref{sec:discussion} for discussion).

With respect to the time the CL system needed to help players finish
successfully, it is considerably slower than the fastest
systems, such as L and P1. We think this is probably due to the strategy
implemented by CL, which verbalizes high-level instructions first, and the IF may not react to these. In fact, the IF did not react to 33\% of the utterances produced by the
system.

Finally, the CL system was among the top performers when considering how
effectively the systems helped users identify the objects that they needed to
manipulate in the virtual world. This is shown by the low number of mouse actions
required to complete the task. This correlates with the subjective
evaluation of referring expression quality that is presented
in Section~\ref{in_lab}. 

We observed that our system did not behave the
same in all the evaluation worlds.
Table~\ref{table:results_per_world} shows the scores for the three
evaluation worlds. It is worth noting that while the performance is comparable
for the first two worlds, players lost more often in world 3. The main
reason is the large number of alarms in world 3 (see
Fig.~\ref{worlds}); stepping on one causes the IF to lose the game.
Nonetheless, the fact that world 3 contains a large number of regions (124,
as opposed to 53 and 90 for the others) may have been an additional difficulty
for our approach with regard to the training corpus. We examine this hypothesis
in the next section.

\begin{table}[!ht]
\begin{small}
\begin{center}
\begin{tabular}{@{}p{2.2cm}@{}cccc@{}}
                  & World 1 & World 2 & World 3 \\
\hline
Task success (\%) & 71      & 76      & 42    \\
\hline
Canceled (\%)     & 29      & 18      & 29    \\
\hline
Lost (\%)         & 0       & 6       & 29    \\
\hline
\end{tabular}
\end{center}
\end{small}
\vspace*{-.2cm}
\caption{Results per world for the \emph{task performance} metrics
\label{table:results_per_world}} 
\end{table}



\subsection{Corpus coverage} \label{sec:coverage}

To assess the dependence on the corpus, we propose to check how well
the corpus covers the actions that the IF is required to perform in a given world.
Indeed, if the IF reaches a state that requires actions that were not
observed while collecting the corpus, the selection process fails and returns an
empty set of candidates. We thus distinguish the \emph{training corpus}, that is,
the corpus we collected between human pairs to train our system, from the
\emph{challenge corpus}, that is, the corpus resulting from all interactions
between the system and the IFs during the GIVE-2.5 challenge.

Fig.~\ref{fig:actions-per-corpus} compares, for each world, the number of
different actions found in the training corpus and in the challenge
corpus. It shows that in world 3, 41\% of the actions observed during the
challenge were new, as opposed to 17\% in world 2 and 1\% in world 1. That is, while
performing in challenge world 3, the system faced many new situations and
actions that were not encountered in the training corpus. This observation is
confirmed by the percentage of the default ``go'' instruction, uttered when no
instruction is selected from the corpus, shown in
Fig.~\ref{fig:go-instructions}. We observe that around 13\% of all instructions
in world 3 are ``go'' instructions, demonstrating that the corpus coverage of
world 3 was indeed lower than that of the other worlds. While ``go'' instructions may be
a reasonable cause of failure, we note that their impact depends on the world
configuration. For instance, a ``go'' instruction in a corridor is not prone to
raise as many problems as one at an intersection, but it is definitely problematic
in a room full of alarms, as in world 3.
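The comparison of action sets between the two corpora can be sketched as follows; the action sets are toy examples, the real ones coming from the annotated interaction logs:

```python
# A sketch of the coverage comparison: the fraction of actions in the
# challenge corpus that never occur in the training corpus (toy action
# sets; the real ones come from the annotated interaction logs).
def new_action_ratio(training_actions, challenge_actions):
    """Fraction of challenge actions unseen during training."""
    new = challenge_actions - training_actions
    return len(new) / len(challenge_actions)

training = {"move(r1,r2)", "press(b1)", "move(r2,r3)"}
challenge = {"move(r1,r2)", "press(b1)", "press(b7)", "move(r3,r9)"}

print(new_action_ratio(training, challenge))  # 0.5
```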

\begin{figure}[!ht]
\centering
\subfloat[Number of actions per corpus]{\label{fig:actions-per-corpus}\includegraphics[width=5cm]{images/action_coverage.png}}
\subfloat[Percentage of default instruction]{\label{fig:go-instructions}\includegraphics[width=5cm]{images/go_instructions_single_bar.png}}
\caption{Number of actions and default instructions}
\label{fig:action-go}
\end{figure}

\begin{figure}[!ht]
\centering
\subfloat[training corpus]{\label{fig:corpus-coverage-training}\includegraphics[width=5.5cm]{images/corpus_coverage.png}}
\subfloat[challenge corpus]{\label{fig:corpus-coverage-eval}\includegraphics[width=5.5cm]{images/corpus_coverage_eval.png}}
\caption{Cumulative number of different actions over $1, 2, \ldots, n$ interactions}
\label{fig:corpus-coverage}
\end{figure}

The question that immediately arises is how many actions and how many
interactions we have to record to cover a given world well. To estimate the
number of interactions that have to be recorded for a given world, we propose to
count the cumulative number of actions covered by instruction reactions
as we record 1, 2, or more interactions. When a plateau is reached, recording a
new interaction does not cover new actions. This criterion does not provide the
optimal number of interactions, since the selection algorithm can also fail if new
action \emph{sequences} are met during execution, but it guarantees that the
system will fail if the recorded number of interactions is below the plateau
threshold. Fig.~\ref{fig:corpus-coverage-training} shows that, in the training
corpus, a plateau is reached after 7 interactions for worlds 1 and 2. In world 3,
no plateau is reached after 11 interactions, meaning that we failed to record
enough interactions to reach a satisfactory coverage. This is likely correlated
with the lower task success we observe in world 3. Indeed, if we compute the
same cumulative number of actions for the \emph{challenge} corpus
(Fig.~\ref{fig:corpus-coverage-eval}), that is, what happened with real IFs, we
observe that a plateau is reached in world 3 after 17 interactions. In other
words, had we recorded 6 more interactions, we would presumably have made our
system sufficiently robust. This methodology thus provides a
practical way to estimate whether the corpus is large enough for a given
world.
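The plateau criterion can be sketched as follows, on toy data rather than the real corpus:

```python
# A sketch of the plateau criterion: track the cumulative number of
# distinct actions after each recorded interaction, and find the point
# after which no new actions appear (toy data, not the real corpus).
def cumulative_coverage(interactions):
    """Cumulative count of distinct actions after each interaction."""
    seen, curve = set(), []
    for actions in interactions:
        seen |= set(actions)
        curve.append(len(seen))
    return curve

def plateau_index(curve):
    """First index after which the curve stops growing, or None."""
    for i in range(len(curve) - 1):
        if all(c == curve[i] for c in curve[i + 1:]):
            return i
    return None

interactions = [["a", "b"], ["b", "c"], ["a", "c"], ["c"], ["b"]]
curve = cumulative_coverage(interactions)
print(curve)                 # [2, 3, 3, 3, 3]
print(plateau_index(curve))  # 1
```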

It is interesting to note that our approach only requires a low number of
interactions to cover the variety of a given world and achieve a satisfactory
result. This is to be compared with the large amount of data that statistical
learning methods generally require. In particular, we can compare our approach
with another corpus-based approach that competed in GIVE-2.5, the B
system~\cite{Dethlefs2011a}. This system mixes hand-crafted behavior and
decision trees trained on the GIVE-2 corpus of 63 English
interactions~\cite{GarGarKolStr10}, which is manually annotated with reference and
navigation descriptions. As the author suggests, the low success score of the
system is caused by data sparsity. We note, though, that there are promising
approaches based on hierarchical reinforcement learning that may overcome this
data sparsity~\cite{Dethlefs2011b}; these approaches have not yet been
evaluated in the GIVE challenge.

%In situated task-oriented dialogue, we can exploit domain-specific information
%about the non-linguistic context of IF's interactions to limit the ammount
%of corpora that is needed in order to build a robust instruction giver for a new
%domain. In particular, the goal of the task and the actions that are relevant
%for this goal are the only ones that need to be covered by the corpora. In this
%paper we use a planner in order to automatically annotate instructions with 
%a semantic representation that only includes actions relevant for the task. 


