% summary of contributions

In this paper we presented a novel algorithm for automatically prototyping
virtual instructors from human-human corpora without manual annotation. Using
our algorithms and the GIVE corpus, we generated a virtual instructor for a
game-like virtual environment. A video of our virtual instructor is available
at {\footnotesize\url{http://cs.famaf.unc.edu.ar/~luciana/give-OUR}}.
Our evaluation with human users yielded encouraging results: the system
outperforms rule-based virtual instructors hand-coded for the same task, in
terms of both objective and subjective metrics. We plan to participate in the
GIVE Challenge
2011\footnote{\scriptsize\url{http://www.give-challenge.org/research}} in
order to obtain more evaluation data from online users and to evaluate our
algorithms on multiple virtual worlds.


%extension to other features

The algorithms we presented rely solely on the plan to define what constitutes
the context of an utterance. It may nevertheless be interesting to exploit
other kinds of features. For instance, in order to integrate spatial
orientation and differentiate ``turn left'' from ``turn right'', orientation
could either be added to the planning domain or treated as a context feature.
While it may be possible to add orientation to the planning domain of GIVE, it
is not straightforward to capture the full diversity of potentially relevant
features, such as the global discourse history or corrections, within the same
formalization. We therefore plan to investigate representing the context of an
utterance as a set of features, including the plan, orientation, discourse
history and so forth, extending the presented algorithms in their context
building and feature matching operations.
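To make this idea concrete, the following is a minimal Python sketch of how
feature-based context matching might be organized. All names, the choice of
context fields, and the uniform scoring scheme are illustrative assumptions,
not the implementation described in this paper:

```python
# Sketch: utterance selection over feature-based contexts.
# A Context bundles heterogeneous features (plan, orientation, history);
# match_score compares contexts feature by feature, and select_utterance
# picks the corpus utterance whose recorded context best matches the
# current generation context.

from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    plan: tuple           # remaining plan actions, e.g. ("move-forward",)
    orientation: str      # e.g. "north"
    history: tuple = ()   # recent discourse moves (empty by default)

@dataclass
class Utterance:
    text: str
    context: Context      # the context in which it was uttered in the corpus

def match_score(c1: Context, c2: Context) -> int:
    """Count the context features on which the two contexts agree."""
    return sum([
        c1.plan == c2.plan,
        c1.orientation == c2.orientation,
        c1.history == c2.history,
    ])

def select_utterance(corpus, current: Context):
    """Return the best-matching corpus utterance, or None if no
    feature of any recorded context matches the current one."""
    best = max(corpus, key=lambda u: match_score(u.context, current))
    return best if match_score(best.context, current) > 0 else None
```

Under this decomposition, adding a new feature only touches `Context` and
`match_score`, while the overall selection procedure stays unchanged.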

%That is, an utterance $U_k$ would be selected if its
%context $C_k$ matches the current generation context $C$. The complexity would
%then be pushed into the context matching and building operations, while the
%general selection procedure would remain the same.

% future work

In the near future we plan to build a new version of the system that improves
on the current one based on our error analysis. For instance, we plan to take
orientation into account during utterance selection. These extensions, however,
may require enlarging the corpus so as not to increase the number of situations
in which the system finds nothing to say. Finally, if corrections could be
identified automatically, as suggested in~\cite{raux10}, performance should
improve further, since corrections would no longer be treated as instructions,
as they are now.

% final conclusions

In sum, this paper presents the first algorithm for fully automatically
prototyping task-oriented virtual agents from corpora. The generated agents
effectively and naturally help users complete a task in a virtual world by
giving them instructions.

