The hand-coded systems we compared against do not need a corpus from a
particular GIVE virtual world in order to generate instructions for any GIVE
virtual world, whereas our system cannot do without such a corpus. These
hand-coded systems are designed to work on different GIVE virtual worlds
without training data; hence their algorithms are more complex
(e.g., they include domain-independent algorithms for the generation of
referring expressions) and take longer to develop.

Our algorithm is independent of any particular virtual world. In fact, it can
be ported to any other instruction-giving task (in which the DF has to perform
a physical task) with the same effort as is required to port it to a new GIVE
world. This is not true for the hand-coded GIVE systems. The inputs to our
algorithm are an off-the-shelf planner, a formal planning-problem
representation of the task, and a human-human corpus collected on the very
task the system aims to instruct. Note that any virtual instructor, in order
to give instructions that are both causally appropriate at the current point
of the task and relevant to the goal, cannot do without such a
planning-problem representation. Furthermore, collecting a human-human corpus
on the target task domain is common practice nowadays. It is therefore
reasonable to assume that all the inputs to our algorithm are already
available when developing the virtual instructor, as was indeed the case for
the GIVE framework.

Another advantage of our approach is that a virtual instructor can be
generated by developers without any knowledge of natural language generation
techniques. Furthermore, the actual implementation of our algorithms is
extremely simple, as shown in Figures~\ref{algo-annotation}
and~\ref{algo-selection}. This makes our approach promising for application
areas such as games and simulation training.

