All the candidates are paraphrases whose intention is to make the follower
advance towards the goal of the task. However, these paraphrases differ in
reaction length. For example, ``ok go back to the room with the lamp'' has a
reaction length of ten actions (getting to the door with the yellow wallpaper,
going through the door, getting to the intersection, going left, etc.), while
the utterance ``go through the opening with the yellow wallpaper'' has a
reaction length of two actions (the first two actions just listed). The
instructions with the longest reaction describe the actions to come at a higher
level, frequently summarizing many upcoming actions in a few words; the
instructions with a shorter reaction are directly executable, their referents
frequently being visible and close. We decided to verbalize instructions with
the longest reaction first, following empirical studies on instruction
understandability in human-robot interaction~\cite{foster09}. Foster et al.\
show that uttering a high-level description of the actions to come
significantly reduces (to less than half) the number of misunderstandings of
low-level instructions. Following these results, our system first utters ``ok
go back to the room with the lamp'' and then, if the user does not react
quickly enough (by default, within 3 seconds), continues with ``go through the
opening with the yellow wallpaper''. If the user reacts quickly and gets to the
intersection before the system can say anything else, the system may generate
something like ``turn left at the intersection'', to which the user may react
thinking ``makes sense, because we are going to the room with the lamp''.
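This longest-reaction-first strategy with a timeout fallback can be sketched as
follows. This is an illustrative sketch, not the system's implementation: the
function and field names are hypothetical, and only the 3-second default comes
from the text.

```python
import time

def select_and_utter(candidates, utter, user_reacted, timeout=3.0):
    """Utter candidates from longest to shortest reaction, falling back to a
    more directly executable instruction when the user stalls.

    candidates   -- list of dicts with hypothetical keys "text" and
                    "reaction_length"
    utter        -- callback that voices an instruction
    user_reacted -- callback polled to check whether the user started moving
    timeout      -- seconds to wait before uttering the next candidate
    """
    # Highest-level instruction (longest reaction) first.
    ordered = sorted(candidates, key=lambda c: c["reaction_length"], reverse=True)
    for candidate in ordered:
        utter(candidate["text"])
        deadline = time.time() + timeout
        while time.time() < deadline:
            if user_reacted():
                return candidate  # the user is advancing; stop refining
            time.sleep(0.1)
    return None  # no reaction even to the most concrete instruction
```

A user who reacts immediately hears only the high-level instruction; an
unresponsive user hears progressively more concrete ones.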

The discretization used for annotation and selection directly impacts the
behavior of the virtual instructor. It is therefore crucial to find an
appropriate granularity for the discretization. If the granularity is too
coarse, many instructions in the corpus will have an empty reaction. For
instance, in the absence of a representation of the user's orientation in the
planning domain (as is the case for the virtual instructor we evaluate in
Section~\ref{sec:evaluation}), instructions like ``turn left'' and ``turn
right'' will have empty reactions, making them indistinguishable during
selection. However, if the granularity is too fine, the user may get into
situations that do not occur in the corpus, causing the selection algorithm to
return an empty set of candidate utterances. Thus, the finer the granularity,
the larger the corpus needed. In other words, the key issue is that of
generalization: how can experience with a limited subset of the state space be
usefully generalized to produce a good approximation over a much larger subset?
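The granularity trade-off can be illustrated with a toy grid discretization;
the corpus representation and helper names below are hypothetical, not the
system's actual data structures.

```python
def discretize(pos, cell):
    """Snap a continuous 2-D position to a grid cell of the given size."""
    return (int(pos[0] // cell), int(pos[1] // cell))

def candidate_utterances(corpus, pos, cell):
    """Return the utterances annotated at the user's discretized state."""
    return corpus.get(discretize(pos, cell), [])

# One annotated observation at position (1.0, 1.0), discretized two ways:
coarse = {discretize((1.0, 1.0), 2.0): ["go through the opening"]}  # cell = 2.0
fine = {discretize((1.0, 1.0), 0.5): ["go through the opening"]}    # cell = 0.5

# For a nearby position never observed in the corpus, e.g. (1.6, 1.6), the
# coarse grid still yields a candidate, while the fine grid yields none.
```

With the coarse grid the unseen position falls in an annotated cell; with the
fine grid it does not, and the selection algorithm returns an empty candidate
set.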

This is a severe problem. In many tasks to which we would like to apply our
method, most states encountered will never have been experienced exactly before.
This will almost always be the case when the state or action spaces include
continuous variables or complex sensations, such as a visual image. The only way
to learn anything at all on these tasks is to generalize from previously
experienced states to ones that have never been seen.

Fortunately, generalization from examples has already been extensively studied,
and we do not need to invent totally new methods. To a large extent we need
only combine our approach with existing generalization techniques, such as
those used in reinforcement learning problems~\cite{sutton98}. In this paper we
take a \emph{tile coding} approach to discretization, based on characteristics
of the virtual world.
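As an illustration of the general technique (not the paper's specific
world-based tiling), a minimal 2-D tile coder with offset tilings might look
like this:

```python
def active_tiles(x, y, n_tilings=4, tile_size=1.0):
    """Return one active tile per tiling for a continuous 2-D position.

    Each tiling is shifted by a different fraction of a tile, so nearby
    positions share many tiles while distant ones share none.
    """
    tiles = []
    for t in range(n_tilings):
        offset = t * tile_size / n_tilings
        col = int((x + offset) // tile_size)
        row = int((y + offset) // tile_size)
        tiles.append((t, col, row))
    return tiles
```

Because nearby states activate overlapping tile sets, experience gathered at
one state generalizes to its neighbors, which is exactly the property the
granularity discussion above calls for.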

