Our method for developing an instructor consists of two phases: an annotation
phase and a selection phase.  The \emph{annotation phase} is performed only once,
when the instructor is created. It consists of automatically associating each
instruction with its meaning with the help of an automated planner. The \emph{selection phase} is
performed every time the virtual instructor generates an instruction and consists
of picking out, from the annotated corpus, the most appropriate instruction at a
given point. In this phase too, an automated planner is used to maintain the global coherence of the interaction.

From now on, we will refer to the virtual instructor as the \emph{Instruction Giver (IG)} and to the trainee as the \emph{Instruction Follower (IF)}. 

Our method is based on the assumption that a reaction is a direct result of
the instruction that occurred just prior to it. In other words, we assume that the IF's reaction makes his interpretation of the instruction explicit. Therefore, if two different
instructions precede the same reaction, then they must be pragmatic paraphrases of each
other. 
We define \emph{pragmatic paraphrases} as instructions that cause the same reaction, even though their semantics may differ because they use different means to reach the same goal. For instance, Figure~\ref{paraphrases2} shows a case in which references to different landmarks (\emph{the picture} and \emph{the red tile\footnote{In the picture, the red tile is located in the visible hallway; it is an alarm that prevents the IF from using that hallway.}}) are used to make the follower press a green button. The semantics of the two instructions differ but their goal is the same. 

\begin{figure}[h!]
\begin{center}
\begin{minipage}[b]{0.46\linewidth}
\centering
\includegraphics[width=1\linewidth]{images/paraphrase-1.png}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.46\linewidth}
\centering
\includegraphics[width=1\linewidth]{images/paraphrase-2.png} 
\end{minipage}
\end{center}
%\vspace*{-5mm}
\caption{Both figures show \emph{pragmatic paraphrases} of the same instruction. They use different vocabulary but communicate the same goal.\label{paraphrases2}}
\end{figure} 

By learning from multimodal interactions that register instructions and their reactions, our algorithm can predict which instructions will make the task advance towards the goal.

\subsection{Annotation phase} \label{annotation}

%Hence, the basic idea of the annotation is straightforward: associate each \emph{utterance}
%with its corresponding \emph{reaction}. However, defining reaction formally involves two subtle
%issues, namely \emph{segmentation} and \emph{discretization}. We
%discuss these issues in turn and then give a formal definition of reaction.

The key challenge in learning from massive amounts of easily-collected data
is to automatically annotate an unannotated corpus. Our annotation method consists
of two parts: first, {\em segmenting} a low-level interaction trace into
instructions and corresponding reactions, and second, {\em discretizing}
those reactions into canonical action sequences.

Segmentation enables our algorithm to learn from traces of IFs interacting
directly with a virtual world.  Since the IF can move freely in the virtual
world, his actions are a stream of continuous behavior. Segmentation
divides these traces into reactions that follow from each
instruction of the IG.
Consider the following example starting at the situation shown in Figure~\ref{paraphrases}. 

\medskip
\begin{it}
\indent IG(1): go through the yellow opening\\
\indent IF(2): [walks out of the room]\\
\indent IF(3): [turns left at the intersection]\\
\indent IF(4): [enters the room with the sofa]\\
\indent IG(5): push the green button by the door \\
\indent IF(6): [turns to make the green button visible]\\
\indent IF(7): [pushes the green button]
\end{it}
\medskip

From the example, it is not clear whether the IF is doing $\langle 3,4 \rangle$ because he is reacting
to $1$ or because he is being proactive. While one could manually annotate 
this data to remove extraneous actions, our goal is to develop
automated solutions that enable learning from massive amounts of data.


\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.35]{images/paraphrases.jpg}
\end{center}
\caption{Situation where the IG says ``go through the yellow opening''. The 3D view of the IF and a bird's eye view of the situation are shown.}
\label{paraphrases}
\end{figure}


We approach this issue by segmenting maximally, that is, by including in the reaction all the actions that follow an instruction. As a result, the annotated meaning may be more specific than required; this can lead to an instruction not being selected when it would have been appropriate, but every instruction that is selected will be appropriate.  

We define \emph{segmentation} as follows. A reaction $r_k$ to an
instruction $i_k$ begins right after the instruction $i_k$ is uttered and ends
right before the next instruction $i_{k+1}$ is uttered. In the example, instruction $1$
corresponds to the reaction $\langle 2,3,4 \rangle$. 
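The maximal segmentation just defined can be sketched in a few lines of Python. The encoding of the trace as (speaker, event) pairs is a hypothetical illustration, not part of our system; the dialogue is the example above.

```python
def segment(trace):
    """Split a time-ordered interaction trace into (instruction, reaction)
    pairs.  Each reaction contains every IF action observed between one IG
    instruction and the next (maximal segmentation)."""
    pairs = []
    current_instruction = None
    current_reaction = []
    for speaker, event in trace:
        if speaker == "IG":            # a new instruction opens a segment
            if current_instruction is not None:
                pairs.append((current_instruction, current_reaction))
            current_instruction = event
            current_reaction = []
        else:                          # an IF action joins the open segment
            current_reaction.append(event)
    if current_instruction is not None:
        pairs.append((current_instruction, current_reaction))
    return pairs

# The example dialogue from the text:
trace = [
    ("IG", "go through the yellow opening"),
    ("IF", "walk out of the room"),
    ("IF", "turn left at the intersection"),
    ("IF", "enter the room with the sofa"),
    ("IG", "push the green button by the door"),
    ("IF", "turn to make the green button visible"),
    ("IF", "push the green button"),
]
```

On this trace, the reaction to instruction $1$ is the three actions $\langle 2,3,4 \rangle$, exactly as in the definition.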

The segmentation method defines how to segment an interaction trace into
instructions and their corresponding reactions.  However, users frequently
perform noisy behavior that is irrelevant to the goal of the task.  For
example, after hearing an instruction, an IF might step back in order to 
have a better view of the room before following the instruction. A reaction should not include such
irrelevant actions.  In addition, IFs may accomplish the same goal using
different behaviors: two different IFs may interpret ``go to the pink
room'' by following different paths to the same destination.  We want
to be able to generalize both reactions into one canonical reaction.

To accomplish this, our approach {\em discretizes} reactions into higher-level
action sequences, reducing noise and variation.  Our discretization
algorithm uses an \emph{automated planner} and a \emph{planning
representation} of the task.  This planning representation includes: (1)
the task goal, (2) the actions which can be taken in the virtual world, and
(3) the current state of the virtual world.  Using the planning
representation, the planner calculates an optimal path between the starting
and ending states of the reaction, eliminating all unnecessary actions.
While we use the classical planner FF~\cite{hoffmann01}, our technique
could also work with other~\emph{classical planners}~\cite{nau04} or other non-classical planning
techniques such as \emph{probabilistic planning}~\cite{Bonet05}. It is also
not dependent on a particular discretization of the world in terms of
actions. In Section~\ref{sec:corpus} we exemplify these elements in the task that we use to evaluate our algorithms. 

Now we are ready to define \emph{canonical reaction} $c_k$.  Let $S_k$ be the state of
the virtual world when instruction $i_k$ was uttered, $S_{k+1}$ be the state of the
world where the reaction ends, and $D$ be the planning domain
representation of the virtual world.  The \emph{canonical reaction} to $i_k$ is defined as the sequence of
actions returned by the planner with $S_k$ as initial state, $S_{k+1}$ as goal
state and $D$ as planning domain. 
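The effect of this definition can be illustrated with a toy stand-in for the planner. The breadth-first search below replaces FF only for the sake of the example; on a one-dimensional corridor it shows how the canonical reaction depends solely on the start and end states, so any detour the IF took disappears.

```python
from collections import deque

def plan(start, goal, neighbors):
    """A toy stand-in for a classical planner: breadth-first search returns a
    shortest action sequence from `start` to `goal`, which by construction
    contains no detours or irrelevant actions."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

# A corridor with positions 0..4; the IF can step left or right.
def corridor(state):
    moves = []
    if state > 0:
        moves.append(("step-left", state - 1))
    if state < 4:
        moves.append(("step-right", state + 1))
    return moves

# Even if the observed reaction contained noise (e.g. stepping back
# for a better view), the canonical reaction from state 0 to state 2
# is the optimal two-step path.
canonical = plan(0, 2, corridor)
```

In our actual system the corridor is replaced by the planning domain $D$ and the BFS by FF, but the role of the planner is the same: it maps the pair $(S_k, S_{k+1})$ to an optimal action sequence.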

The annotation of the corpus then consists of automatically associating each
instruction with its (discretized) reaction using an automated planner. The algorithm that implements this annotation is shown in
Algorithm~\ref{algo-annotation}.  

%\begin{figure}[h]
%\begin{center}
\begin{algorithm}
\caption{Annotation algorithm}\label{algo-annotation}
\KwIn{A corpus $Cp$, a planning representation $D$, a planner $Planner$}
\KwOut{An annotated corpus $AnCp$}

 $AnCp \leftarrow Cp$ \\
 $Acts \leftarrow D.Actions$ \\ 
\For{utterance $U_k$ in $AnCp$}{
    $S_k \leftarrow Cp.StateAtTime(U_k)$ \\
    $S_{k+1} \leftarrow Cp.StateAtTime(U_{k+1})$ \\
    $U_k.Reaction \leftarrow Planner.plan(S_k,S_{k+1},Acts)$
}
\end{algorithm}
%\caption{Annotation algorithm}
%\label{algo-annotation}
%\end{center}
%\end{figure}


Once the corpus has been annotated, it can be used in the selection phase to generate instructions automatically.  


\subsection{Selection phase} \label{selection}

Once the corpus is annotated, the virtual instructor is created by first obtaining
a plan that solves the task and then selecting utterances from the corpus whose reactions
would bring the user closer to the goal. This design is based on the assumption that
the IF will react in a \emph{canonically similar} way to the users 
that interacted with the human instructor during the corpus collection. 

In this section we introduce the instruction selection algorithm formally. 
The algorithm, displayed in Algorithm~\ref{algo-selection},
consists of finding in the corpus the set of candidate utterances $C$ for the
current task plan $P_i$. The task plan $P_i$ is the sequence of actions that needs to be executed
in the current state of the virtual world in order to complete the task. 
$P_{i-1}$ is the plan that was calculated when the previous instruction was uttered. If $P_{i-1} = P_{i}$, it means that the user did not react to the previous instruction, and hence a pragmatic paraphrase is uttered. If $P_{i-1} \not= P_{i}$, the user reacted to the previous instruction, and a new candidate set of instructions needs to be calculated in order to advance the task. 

We define $C(P_i) = \{ U \in \mbox{Corpus} \mid U.\mbox{\emph{Reaction}} \mbox{ is a prefix of } P_i
 \}$. In other words, an utterance $U$ belongs to $C(P_i)$ if
the first actions of the current plan $P_i$ are equal to the reaction associated
with the utterance $U$. 
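The prefix test behind $C(P_i)$ is simple to state in code. The following sketch assumes a hypothetical corpus encoding as (utterance, canonical reaction) pairs; the action names are invented for the example.

```python
def candidates(corpus, plan_i):
    """C(P_i): utterances whose annotated reaction is a prefix of the plan."""
    def is_prefix(reaction, plan):
        return len(reaction) <= len(plan) and plan[:len(reaction)] == reaction
    return [u for u, reaction in corpus if is_prefix(reaction, plan_i)]

# Hypothetical annotated corpus entries:
corpus = [
    ("go through the yellow opening", ["exit-room"]),
    ("go back to the room with the lamp",
     ["exit-room", "turn-left", "enter-lamp-room"]),
    ("push the red button", ["push-red"]),
]

# Current task plan: the first two utterances qualify, the third does not.
plan_i = ["exit-room", "turn-left", "enter-lamp-room", "push-green"]
result = candidates(corpus, plan_i)
```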

$C(P_i)$ defines the set of \emph{pragmatic paraphrases} of one another with respect to the plan $P_i$. 
Pragmatic paraphrases can be seen as utterances performing the same perlocutionary act. A perlocutionary act is a speech act viewed at the level of its intentional consequences, such as getting the hearer to do or realize something~\cite{austin62}. Pragmatic paraphrases may not be semantic paraphrases, since their semantic content may differ: for instance, ``push the red button'' and ``open the door'' could be pragmatic paraphrases in a context in which the red button opens the door, even though they have different semantic content.  
In Section~\ref{sec:case-study} we give more examples of pragmatic paraphrases.  


%\begin{figure}[h]
%\begin{center}
\begin{algorithm}
\caption{Selection algorithm}\label{algo-selection}
\KwIn{A plan $P_{i-1}$, a plan $P_i$, a queue of utterances $C$, an annotated corpus $AnCp$}
\KwOut{A queue of utterances $C$}

\If{$P_{i-1}\not =P_{i}$}{

  $C \leftarrow \emptyset$

  \For{utterance $U$ in $AnCp$}{
    \If{$U.Reaction$ is a prefix of $P_{i}$}{
        $C$.Enqueue($U$)
    }
  }
  OrderBy($C$, $Reaction$ length, descending)
}
\Else{
  $C$.Dequeue()
}
\end{algorithm}
%\end{center}
%\end{figure}

Given a set of pragmatic paraphrases, the instructor has to choose which utterance to utter. 
When confronted with several paraphrases, it is worth noting that the candidate
instructions may differ in terms of
reaction length. For instance, in a situation where the user has to leave a
room, an instruction such as ``go through the opening with the yellow wallpaper''
may have a reaction that involves going \emph{to} the opening and going \emph{through}
the opening. For the same situation, another candidate instruction such as ``go back to the
room with the lamp'' may 
imply a longer sequence of actions including going to the opening, going through the opening, 
turning left at the corridor, going through the living room, to finally arrive into the room 
with the lamp. In such
situations, we follow the empirical study on instruction understandability~\cite{foster09}, which shows that uttering a high-level
description of the actions to be performed first, and then giving low-level descriptions when necessary, leads to fewer
misunderstandings than the opposite order. Thus, our instructor would first utter the high-level
instruction ``go back to the room with the lamp'', and then, in case the user
does not react (within a time threshold that depends on the task at hand), it would utter ``go through the opening with
the yellow wallpaper''. 
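This high-level-first ordering corresponds to sorting the paraphrase queue by reaction length, as in the OrderBy step of the selection algorithm. A minimal sketch, with invented action names for the wallpaper example:

```python
def order_paraphrases(candidates):
    """Order pragmatic paraphrases so that the highest-level instruction
    (the one covering the longest reaction) is uttered first; shorter,
    lower-level instructions remain as fallbacks if the IF does not react
    within the time threshold."""
    return sorted(candidates, key=lambda pair: len(pair[1]), reverse=True)

paraphrases = [
    ("go through the opening with the yellow wallpaper",
     ["go-to-opening", "go-through-opening"]),
    ("go back to the room with the lamp",
     ["go-to-opening", "go-through-opening", "turn-left",
      "cross-living-room", "enter-lamp-room"]),
]

# The high-level instruction (longest reaction) comes first.
queue = order_paraphrases(paraphrases)
```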

As can be observed in the algorithm, a new instruction is uttered every time the plan changes. If there is a misunderstanding and the IF deviates from the initial plan, the instructor computes a new plan for solving the task and generates instructions based on this new plan. 

%LB: moved to corpora section
%\subsection{Implementation in the GIVE framework}


