\section{Methods}
\label{metodos}

% What I want to achieve
Our method will make the base assumption that a reaction captures the semantics of the instruction that caused it. Therefore, if two utterances result in the same reaction, they are paraphrases of each other, and similar utterances should generate the same reaction. This approach enables us to predict reactions for previously-unseen instructions.

In order to reach the proposed goals, we intend to split our work into two phases. The first phase will deal with the automatic annotation of the corpora, associating instructions and reactions as the two sides of a cause-consequence relation. The second phase will focus on the interpretation of new instructions, predicting an appropriate reaction to a new instruction according to the reactions previously observed in the corpora.

\subsection{Annotation phase}
The key challenge in learning from large amounts of easily-collected data is to automatically annotate an unannotated corpus. Our first approach to the annotation method will consist of two parts: first, {\em segmenting} a low-level interaction trace into utterances and corresponding reactions, and second, {\em discretizing} those reactions into canonical action sequences.

Given that users can move freely while interacting with a virtual world, their actions form a continuous stream of behaviour. The {\em segmentation} of this stream allows us to associate instructions with reactions, as we can consider any action performed by the user after an instruction to be a direct response to it. This method, however, could easily be deceived into treating superfluous movements as part of the expected reaction, which is why {\em discretization} is required: an appropriate discretization strategy should remove noisy behaviour irrelevant to the goal of the task.
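The segmentation step can be sketched as follows. This is a minimal illustration under a simplifying assumption: the interaction trace is represented as a time-ordered list of instruction and action events (the actual logs are richer, low-level traces).

```python
def segment(trace):
    """Split a time-ordered trace of ('instruction', text) and
    ('action', act) events into (instruction, reaction) pairs:
    every action after an instruction, and before the next one,
    is attributed to that instruction."""
    pairs = []
    current_instruction, current_reaction = None, []
    for kind, content in trace:
        if kind == "instruction":
            if current_instruction is not None:
                pairs.append((current_instruction, current_reaction))
            current_instruction, current_reaction = content, []
        else:  # an action performed by the user
            current_reaction.append(content)
    if current_instruction is not None:
        pairs.append((current_instruction, current_reaction))
    return pairs

trace = [
    ("instruction", "click the button to open the door"),
    ("action", "walk to the button"),
    ("action", "press the button"),
    ("action", "walk through the door"),
    ("instruction", "take the first door to your right"),
    ("action", "walk to the first door"),
    ("action", "walk through the door"),
]
# segment(trace) yields one (instruction, reaction) pair per instruction.
```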

Let's consider the following example: our agent is standing in front of a closed door. Our first instruction, {\em ``click the button to open the door''}, leads to the agent walking all around the room until he finds the switch and clicks it. He then turns around and walks through the door. Our follow-up instruction, {\em ``take the first door to your right''}, is followed by the agent walking up to the correct door and through it. In the {\em segmentation} stage we associate all actions performed between the utterance of the first instruction and that of the second with the first instruction, and the remaining actions with the second one. Then, at the {\em discretization} stage, we generate a sequence of actions (in particular, the {\em shortest} such sequence) that changes the state of the world in the same way the agent did. The result of this discretization is shown in Figure~\ref{fig:discretization}a).

\begin{figure}
\begin{minipage}{0.3\textwidth}
{\tt click the button to open the door}\\
{\em [walk to the button]}\\
{\em [press the button]}\\
{\em [walk through the door]}\\
\\
{\tt take the first door to your right}\\
{\em [walk to the first door]}\\
{\em [walk through the door]}
\begin{center}a)\end{center}
\end{minipage}
\hspace*{0.03\textwidth}
\begin{minipage}{0.3\textwidth}
{\tt click the button to open the door}\\
\textbf{{\em [walk to the button]}}\\
\textbf{{\em [press the button]}}\\
\textbf{{\em [walk through the door]}}\\
\\
{\tt take the first door to your right}\\
\textbf{{\em [walk to the first door]}}\\
\textbf{{\em [walk through the door]}}
\begin{center}b)\end{center}
\end{minipage}
\hspace*{0.03\textwidth}
\begin{minipage}{0.3\textwidth}
{\tt click the button to open the door}\\
\textbf{{\em [walk to the button]}}\\
{\em [press the button]}\\
{\em [walk through the door]}\\
\\
{\tt take the first door to your right}\\
\textbf{{\em [walk to the first door]}}\\
{\em [walk through the door]}
\begin{center}c)\end{center}
\end{minipage}


\caption{a) Discretized sequence of actions, b) Behaviour-based segmentation, c) Visibility-based segmentation}
\label{fig:discretization}
\end{figure}


For the annotation phase, it is not clear which discretized actions are a direct response to the instruction and which ones are extra: if the user reacts to the instruction {\em ``look for the green button''} by turning right, finding the button and then clicking on it, should clicking the button be considered part of the appropriate response to the utterance? Or is it correct to always react to this instruction by just turning right? Given that we don't have a simple, clear answer, we intend to experiment with two alternate approaches to defining the {\em canonical reaction} to a given instruction. Given a sequence of actions following an instruction, our first definition will consider the maximal reaction according to the user's {\em behaviour} -- that is, the entire sequence of actions will be considered the canonical reaction. Figure~\ref{fig:discretization}b) shows how the example would be annotated in this case. Our second definition will be based on the empirical observation that, in situated interaction, most instructions are constrained by the currently {\em visually} perceived affordances~\cite{Gibson79,Stoia06}, so only the first action of the sequence will be kept and the remaining ones discarded, as shown in Figure~\ref{fig:discretization}c). The first definition will be called {\em Behaviour segmentation} (Bhv), and the second one {\em Visibility segmentation} (Vis).
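Over an already-discretized action sequence, the two definitions reduce to a very simple choice, sketched below (the function names are ours, purely illustrative):

```python
def behaviour_reaction(actions):
    """Bhv: the entire discretized sequence is the canonical reaction."""
    return list(actions)

def visibility_reaction(actions):
    """Vis: keep only the first action, on the assumption that the
    instruction is constrained by currently visible affordances."""
    return actions[:1]

reaction = ["walk to the button", "press the button", "walk through the door"]
# Bhv keeps all three actions; Vis keeps only "walk to the button".
```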

Finally, regarding the discretization method, our algorithm will make use of an {\em automated planner} and a {\em planning representation} of the task, as this combination successfully removes superfluous actions: by fitting a plan between the user's initial and final states, we obtain (in a predictable and repeatable way) a minimal sequence of actions leading from one state to the other. Our representation will include: (1) the task goal, (2) the actions that can be performed on the environment, and (3) the current state of the interactive environment. Using this representation, the planner can compute an optimal path between the starting and ending states of the reaction, eliminating all unnecessary actions. While as a first approach we plan to use the classical planner FF~\cite{Hoffmann03}, our technique would also work with any other planning approach.
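To illustrate the effect of plan-based discretization without committing to a particular planner, the sketch below replaces FF with a breadth-first search over a toy state space (the state encoding and action names are our own assumptions); any planner returning a minimal action sequence between two states would play the same role. However much the user wandered, only the minimal plan survives.

```python
from collections import deque

def shortest_plan(start, goal, successors):
    """Breadth-first search for the shortest action sequence from
    `start` to `goal`; a toy stand-in for a classical planner."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # goal unreachable

def successors(state):
    """Toy domain: a position and a door flag; the door opens only by
    pressing the button from the button's position."""
    pos, door_open = state
    moves = [(f"walk to {target}", (target, door_open))
             for target in ("start", "button", "door") if target != pos]
    if pos == "button":
        moves.append(("press button", (pos, True)))
    return moves

plan = shortest_plan(("start", False), ("door", True), successors)
# → ['walk to button', 'press button', 'walk to door']
```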

\subsection{Interpretation phase}
One alternative we've previously explored for the annotation phase, and one we consider for the current work, results in a collection of {\em (instruction, reaction)} pairs. The interpretation phase uses these pairs to interpret new utterances in three steps. First, we filter the set of pairs, retaining only those whose reactions can be directly executed from the current position. Second, we group these pairs according to their reactions. Third, we {\em select} the group whose utterances are most similar to the new instruction, and output that group's reaction.
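The three steps can be sketched as follows. The `executable` and `similarity` callbacks are assumptions of this illustration, standing in for the world model and for whichever similarity metric is chosen:

```python
def interpret(instruction, pairs, executable, similarity):
    """Predict a reaction for `instruction` from (utterance, reaction) pairs."""
    # Step 1: keep only pairs whose reaction is executable from here.
    candidates = [(u, tuple(r)) for u, r in pairs if executable(r)]
    # Step 2: group the surviving utterances by their reaction.
    groups = {}
    for utterance, reaction in candidates:
        groups.setdefault(reaction, []).append(utterance)
    # Step 3: select the group containing the utterance most similar
    # to the new instruction, and output that group's reaction.
    best = max(groups, key=lambda r: max(similarity(instruction, u)
                                         for u in groups[r]))
    return list(best)
```

A toy run: with pairs mapping ``press the red button'' and ``push the red one'' to one reaction and ``press the green button'' to another, a simple word-overlap similarity sends the new utterance ``click the green button'' to the second group.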

The third step was treated as a classification problem, and we plan on approaching it with different classification methods. The first method used nearest-neighbour classification with three different similarity metrics: the Jaccard and Overlap coefficients (both measuring the degree of overlap between two sets, differing only in the normalization of the final value~\cite{Nikravesh:2005:Overlap}), and the Levenshtein distance (a string metric measuring the amount of differences between two sequences of words~\cite{levelshtein-66-binary}). The second classification method employed a strategy in which we consider each group of utterances as a set of possible machine translations of our instruction, and then use the BLEU measure~\cite{Papineni:2002:BLEU} to select the group that can be considered the best translation of our instruction. Finally, we also trained SVM classifiers~\cite{Cortes95} using the unigrams of each paraphrase and the current position of the user as features, setting their group as the output class~\cite{CC01a}.
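For concreteness, the three nearest-neighbour metrics can be sketched over bags and sequences of words (whitespace tokenization is a simplifying assumption):

```python
def jaccard(a, b):
    """|A ∩ B| / |A ∪ B| over the word sets of the two utterances."""
    A, B = set(a.split()), set(b.split())
    return len(A & B) / len(A | B)

def overlap(a, b):
    """Same intersection, normalized by the smaller set instead."""
    A, B = set(a.split()), set(b.split())
    return len(A & B) / min(len(A), len(B))

def levenshtein(a, b):
    """Edit distance over word sequences (insertions, deletions,
    substitutions), computed with the standard dynamic program."""
    xs, ys = a.split(), b.split()
    prev = list(range(len(ys) + 1))
    for i, x in enumerate(xs, 1):
        cur = [i]
        for j, y in enumerate(ys, 1):
            cur.append(min(prev[j] + 1,          # delete x
                           cur[j - 1] + 1,       # insert y
                           prev[j - 1] + (x != y)))  # substitute
        prev = cur
    return prev[-1]

# "press the red button" vs "press the green button":
# jaccard = 3/5, overlap = 3/4, levenshtein = 1 (one substitution).
```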

\subsection{User corrections}
If the system misinterprets an instruction, it is important to have a mechanism for corrections. While we don't require an advanced mechanism for detecting errors (in a collaborative task between two users, the one giving the instructions is expected to point them out), we do need a robust strategy for selecting the most appropriate correction. A simple first approach would be to analyse the new paraphrase given by the user and use it to select one of the actions still available from the current position (that is, the same ones as before, minus the interpretation we already know is wrong). However, we could obtain better results with a more robust system that analyses the new instruction in the context of both its own features and the previous classification results: we run our prediction process over the new utterance alone, and then combine those results with the ones obtained for the previous classification.

Let's take, as an example, the case in which an agent reacts to the instruction {\em ``press the green button''} by pressing the red button, but then, after receiving the second instruction {\em ``no, the other one''}, presses the correct one. A naive approach would simply make use of the latest instruction to perform the correct action (following a deduction such as ``given that there are two affordable elements, and the sentence makes reference to another one, pressing the farther button is the likely expected action''). By detecting that a correction is taking place, however, we could improve our choices, for instance by weighing our current guess with information from the previous deduction: given that we know our most likely interpretation was wrong, perhaps our second most likely interpretation will turn out to be right. This approach would be particularly useful when facing corrections such as {\em ``no''} or {\em ``wrong one''}, since instructions like these provide little or no information about the expected result and would be very hard to interpret correctly without making use of previous information.
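One possible combination scheme is sketched below. The score dictionaries and the additive weighting are illustrative assumptions only; in practice the combination scheme itself would be a subject of experimentation:

```python
def reinterpret(prev_scores, new_scores, rejected):
    """Combine classifier scores from the misunderstood instruction with
    those for the correction, ruling out the rejected interpretation.
    Scores are per-reaction confidences; higher is better."""
    combined = {}
    for reaction in set(prev_scores) | set(new_scores):
        if reaction == rejected:
            continue  # the user told us this interpretation was wrong
        combined[reaction] = (prev_scores.get(reaction, 0.0)
                              + new_scores.get(reaction, 0.0))
    return max(combined, key=combined.get)
```

With an uninformative correction like {\em ``no, the other one''}, the new utterance alone yields a near-uniform score distribution, so the previous scores break the tie: the second-best interpretation of the original instruction wins once the best one is ruled out.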



% Strategies
\subsection{Research strategies}
In order to apply and evaluate these techniques, we'll need extensive corpora. In the first stages, we'll resort to the corpora collected in the natural language generation shared task known as the GIVE Challenge~\cite{KolStrGarByrCasDalMooObe10}. These corpora can be divided in two: the first one, which we'll call $C_m$~\cite{GarGarKolStr10}, contains instructions both given and followed by multiple, random people; the second one, called $C_s$~\cite{benotti-denis:2011:ENLG}, was also gathered with multiple followers, but with a single person giving the instructions. Together, these corpora total 5580 instructions over 14:26 hours of interaction.

While these corpora are extensive, it is entirely possible that we may need extra data. In that case, we plan on collecting it on a small scale at first by recruiting volunteers, and on a larger scale (should it be necessary) over the internet, where crowdsourcing services such as Amazon's Mechanical Turk are expected to be of great help. These strategies are also expected to be used in Alexander Koller's future research project SFB 632, so data from those corpora would also be available for the current project.