For GIVE, and more generally for any kind of task, there are basically two
different ways to build a virtual instructor: either the instructions are
\emph{assembled} using composition methods (rule-based or corpus-driven), or the
instructions are \emph{selected} from an existing corpus. We compare here these
two families of approaches with regard to their dependence on the domain of
application and on a particular context within this domain, with regard to the
linguistic expressivity of the instructions, and with regard to their relative
development cost. The selection approach that we advocate in this paper has its
own limitations, which we also present in this section.

\subsection{Portability to other virtual environments}

% summary of the evaluation and comparison to hand-coded systems
The hand-coded systems to which we compared ours do not need a corpus
collected in a particular GIVE virtual world in order to generate
instructions for any GIVE virtual world, whereas our system cannot do
without such a corpus. Because these hand-coded systems are designed to
work on different GIVE virtual worlds without training data, their
algorithms are more complex (e.g., they include domain-independent
algorithms for the generation of referring expressions) and take longer
to develop.

Our algorithm is independent of any particular virtual world. In fact, it
can be ported to any other instruction-giving task (in which the DF has to
perform a physical task) with the same effort as is required to port it to
a new GIVE world; this is not true for the hand-coded GIVE systems. The
inputs of our algorithm are an off-the-shelf planner, a formal
planning-problem representation of the task, and a human--human corpus
collected on the very task the system aims to instruct. It is important to
notice that any virtual instructor that gives instructions that are both
causally appropriate at the current point of the task and relevant for the
goal cannot do without such a planning-problem representation.
Furthermore, collecting a human--human corpus on the target task domain is
nowadays standard practice. It is therefore reasonable to assume that all
the inputs of our algorithm are already available when developing the
virtual instructor, as was indeed the case for the GIVE framework.

Another advantage of our approach is that a virtual instructor can be
built by developers without any knowledge of natural language generation
techniques. Furthermore, the actual implementation of our algorithms is
extremely simple, as shown in Figures~\ref{algo-annotation}
and~\ref{algo-selection}. This makes our approach promising for
application areas such as games and simulation training.

\subsection{Context and domain dependence}

Generation by composition aims to be \emph{context independent}: given a
domain of application, these approaches aim to cover all possible contexts in
which the virtual instructor and the user can be. In doing so, they are often
\emph{domain dependent}, that is, porting them to a new domain generally
involves rewriting many modules. In the GIVE domain, generation by
composition aims to work in the same way for any kind of world; the color of
the buttons, for instance, does not matter. However, when switching to a domain
that contains traffic lights, the color of the traffic light must be associated
with a specific action, for instance red with ``stop'' and green with
``go''. Since color information is used differently in this domain, generation
by composition needs to explicitly change the link between the color
information and the corresponding instruction.
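To make this concrete, a compositional rule can be sketched as follows (a
hypothetical toy rule, not taken from any actual GIVE system): the link between
a world property and the wording of the instruction is written by hand, so a
domain that interprets the same property differently forces the rule itself to
be rewritten.

```python
# Hypothetical compositional rules: the mapping from a world property
# (here, color) to the instruction is hand-written per domain.

def give_instruction(obj):
    # GIVE-style rule: color is purely descriptive.
    return f"press the {obj['color']} button"

def traffic_instruction(light):
    # Traffic-light rule: the same color property now selects the action,
    # so the rule itself, not just the vocabulary, must be rewritten.
    action = {"red": "stop", "green": "go"}
    return action[light["color"]]

print(give_instruction({"color": "red"}))     # press the red button
print(traffic_instruction({"color": "red"}))  # stop
```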

On the contrary, the selection approach can be considered relatively
\emph{domain independent} if we assume that the domain can be described and
discretized as a planning domain. We note that, since an instructor needs to
help the user perform a task, it requires a task-based representation and a
mechanism akin to planning; this assumption is hence not too constraining.
Given a traffic-light planning domain, the selection approach would easily
integrate the link between the red color and the ``stop'' instruction, since
this link would be captured by the automatic annotation. However, the
selection approach is entirely \emph{context dependent} in the sense that the
selection algorithm can only work in the same context that was used for
annotating. Once an interaction has been recorded for a planning problem, this
interaction can only be used when running the virtual instructor on the same
planning problem or on any problem that is accessible from the initial problem
through action application. In terms of GIVE, this means that if a button is
red when recording the interaction, it must be red when running the system
(unless there are color-change actions that could be captured by the planner).
Nevertheless, this constraint is actually the same constraint made by most
video games, which are defined in terms of \emph{levels} or \emph{areas} that
are fixed beforehand. There can be a certain degree of change within the same
context, but no radical change of the context configuration, that is, the
planning problem remains within the same boundaries. As a consequence, our
algorithms would not work in games whose levels are procedurally generated,
such as Minecraft\footnote{Developed by Mojang, 2009}; such games are however
still rare. Thus, the context-dependence constraint does not seem to be as
problematic as it may sound.
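The accessibility condition above can be illustrated with a toy sketch
(hypothetical code, assuming states are sets of facts and actions are
precondition/add/delete triples): a recorded interaction is reusable exactly
in the states reachable from its initial state through action application, so
a changed button color puts the system outside the recorded context.

```python
def successors(state, actions):
    # Apply every executable action to a state (a frozenset of facts).
    for pre, add, delete in actions:
        if pre <= state:
            yield (state - delete) | add

def reachable(initial, target, actions):
    # Breadth-first search over action application.
    frontier, seen = [initial], {initial}
    while frontier:
        state = frontier.pop(0)
        if state == target:
            return True
        for nxt in successors(state, actions):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Toy GIVE-like problem: a button can be pressed; colors never change.
press = (frozenset({"unpressed(b1)"}),   # precondition
         frozenset({"pressed(b1)"}),     # add effects
         frozenset({"unpressed(b1)"}))   # delete effects
initial = frozenset({"red(b1)", "unpressed(b1)"})

reachable(initial, frozenset({"red(b1)", "pressed(b1)"}), [press])   # True
reachable(initial, frozenset({"blue(b1)", "pressed(b1)"}), [press])  # False
```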


\subsection{Linguistic expressivity}

The expressivity of our selection approach, in terms of the linguistic
phenomena it can handle, is directly constrained by the underlying planning
domain. Since the planning domain includes actions whose arguments are objects
in the world, our approach can easily integrate a wide variety of referring
expressions to these objects (relational expressions, relative clauses,
plurals, demonstratives, complex anaphora, etc.), as demonstrated in
Section~\ref{sec:case-study}. These referring expressions are hard to generate
in the compositional approaches because they rely on a complex structure that
has to be explicitly modelled (for an example see \cite{Denis2010a}). Moreover,
our approach can generate referring expressions for objects that are not
directly represented in the planning domain but are related to objects that
are. For instance, an instruction such as ``go through the opening with yellow
wallpaper'' can be generated despite the fact that there is neither a wallpaper
object nor an opening object in the planning domain, but only a world region.
With regard to dialogue management, our selection approach can integrate
positive acknowledgments such as ``ok now ...''. Given that they are typically
uttered after performing an action, positive acknowledgments possess the
forward-looking function~\cite{Allen1997} of making the user progress in the
task, which is captured seamlessly by our approach.


\subsection{Limitations}

On the downside, since we are using selection features defined solely in terms
of the current plan, our approach does not capture phenomena that do not rely
on the current plan. This is especially problematic for backward-looking acts
\cite{Allen1997} that rely on the previous context, such as ``go back to the
room with the lamp'', which can be uttered without the user ever having been
to the room with the lamp, or negative acknowledgments such as ``not this
way'', which can be uttered while the user is going the right way. These
instructions can be selected because, at some point in the corpus, there
exists a situation that is similar with regard to the current plan but
different with regard to past actions. This problem also affects
discourse-related phenomena such as anaphora or ellipsis, as in ``this one'',
though this seems unproblematic for reference identification in the GIVE
domain. Another defect is related to the inherent limitations of the planning
domain: for instance, the GIVE planning domain does not include orientation,
so instructions like ``turn left'' and ``turn right'' are not distinguishable.
Despite all these issues, which may sound blocking for a task with strong
navigational elements, it is worth noting that the system obtains scores
comparable to those of the other, much more complex systems, as described in
Section~\ref{sec:evaluation}.

One way to capture these aspects in the selection approach is to include more
selection features, both in the automatic annotation and in the selection. For
instance, it would be possible to consider conjunctions of selection features,
such as the current plan \emph{and} the current orientation \emph{and} the
previous action. However, because the selection would then be more
restrictive, it would require collecting a larger corpus to prevent empty
selections. The corpus augmentation required by these additional selection
features has yet to be investigated. Another option would be a less naive
selection that weighs the selection features, giving more weight to the
current plan and less weight to the current orientation, that is, favoring
instructions with a similar orientation without totally disregarding those
with a different one. These different options have yet to be evaluated in
order to keep a good trade-off between development cost and quality.
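The weighted variant could be sketched as follows (a hypothetical scoring
scheme; the features and weights are purely illustrative and not taken from
our implementation): instead of requiring every feature to match, candidates
are ranked by a weighted sum of matching features, with the current plan
dominating and the orientation only breaking ties.

```python
def score(candidate, context, weights):
    # Weighted match between an annotated utterance and the current context.
    return sum(w for feat, w in weights.items()
               if candidate[feat] == context[feat])

def select(candidates, context, weights):
    # Return the utterance whose annotation best matches the context.
    return max(candidates, key=lambda c: score(c, context, weights))

# Illustrative weights: the plan match dominates, orientation breaks ties.
weights = {"plan": 10, "orientation": 1}

corpus = [
    {"text": "turn left and press the red button",
     "plan": ("press", "b1"), "orientation": "north"},
    {"text": "press the red button",
     "plan": ("press", "b1"), "orientation": "south"},
    {"text": "go through the door",
     "plan": ("move", "room2"), "orientation": "south"},
]

context = {"plan": ("press", "b1"), "orientation": "south"}
select(corpus, context, weights)["text"]  # "press the red button"
```

A side effect of this scheme is that the selection can never be empty: the
best-scoring candidate is returned even when no recorded situation matches
every feature of the current context.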


\subsection{Development cost}

The most interesting aspect of our selection approach is without doubt the
trade-off between the system's development cost and its resulting quality.
Developing a selection-based instructor primarily involves describing the
domain in terms of planning, discretizing it, and collecting a corpus, whereas
compositional approaches require complex modelling that is effort intensive.
In our case, the planning domain and its discretization were already provided,
and we have shown in Section~\ref{sec:coverage} that the size of the required
corpus can be very small. In terms of development, our approach also
outperforms supervised statistical approaches, since it does not require a
manually annotated corpus. Moreover, contrary to more traditional approaches,
the selection approach makes no assumption about the linguistic expertise of
the system developers, and the algorithms themselves are simple to implement.
Thus, within the limits of context dependence, the selection approach probably
offers the best trade-off between complexity, expressivity, and amount of
data.
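As an illustration of this simplicity, the core of plan-based annotation and
selection can be sketched in a few lines (a schematic reconstruction, not the
actual code of Figures~\ref{algo-annotation} and~\ref{algo-selection}):
annotation indexes each recorded utterance by the plan computed in the state
where it was uttered, and selection retrieves the utterances indexed by the
current plan. The toy planner below is a stand-in for the off-the-shelf
planner assumed as input.

```python
def annotate(interaction, plan_for):
    # Associate each recorded utterance with the plan computed (here by a
    # stand-in planner) in the state where the utterance was made.
    index = {}
    for state, utterance in interaction:
        index.setdefault(plan_for(state), []).append(utterance)
    return index

def select(index, current_plan):
    # Retrieve the utterances recorded in a plan-equivalent situation.
    return index.get(current_plan, [])

# Toy planner: the "plan" is just the sorted set of buttons left to press.
def plan_for(state):
    return tuple(sorted(state))

interaction = [
    (frozenset({"b1", "b2"}), "press the left button"),
    (frozenset({"b2"}), "ok now press the other one"),
]
index = annotate(interaction, plan_for)
select(index, plan_for(frozenset({"b2"})))  # ["ok now press the other one"]
```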
