\section{General Overview of Research Area and Literature}
\label{intro}

This project is framed within the area of \emph{Natural Language Processing}, a branch of \emph{Artificial Intelligence} dedicated to improving the interpretation and generation of language (either written or spoken) for general communication purposes. Some practical applications of this research area are automatic translation, speech recognition and dialogue generation systems. A concrete problem in this area is the \emph{interpretation} problem, namely, extracting the meaning of a phrase given by a human being in his/her own language.

\emph{Interpretation of Natural Language Instructions} is a process through which an automated system receives orders from a user expressed in the user's own language. Figure~\ref{fig:world} shows an example from a virtual world, in which an agent is standing in the top left room. If we could guide this agent by means of instructions such as {\em ``go to the room with the couch''} or {\em ``take the second door to your right''}, we could say that our agent correctly interprets instructions in natural language.
This, however, has proven to be a difficult problem to solve, as natural language has wide grammatical and lexical variability: even in a restricted environment, people describe the same route and the same objects in extremely different ways. Below are some example instructions from the same corpus, all given for the same route shown in Figure~\ref{fig:world}:

\medskip
\begin{it}
1) out \\
\indent 2) walk down the passage\\
\indent 3) nowgo \emph{[sic]} to the pink room\\
\indent 4) back to the room with the plant \\
\indent 5) Go through the door on the left \\
\indent 6) go through opening with yellow wall paper
\end{it}
\medskip

People describe routes using landmarks (4) or specific actions (2). They may describe the same object differently (5 vs.\ 6). Instructions also differ in their scope (3 vs.\ 1). Thus, even ignoring spelling and grammatical errors, navigation instructions contain considerable variation, which makes interpreting them a challenging problem~\cite{mipaper}.

% A paragraph I may want to recover some day
% For instance, if a user says {\em ``right''}, is that a navigational instruction, or is just positive reinforcement~\cite{Vogel:2010:LFN}? In the instruction {\em ``Fetch my phone from my desk, it is near the keyboard''}, which element is near the keyboard, the phone or the desk ~\cite{zukerman-EtAl:2009:SIGDIAL}?

% TODO: join with the previous paragraph
% FIXED: try and find citations for this
% Developing a system capable of learning how to perform such a task is the main goal of the current project.
The applications of a system capable of interpreting instructions with a high degree of accuracy would be extensive: not only would we be able to control a wide variety of systems without needing to learn a specific set of instructions and parameters (from voice-controlled systems to autonomous robots~\cite{INLAGIARD}), but it would also allow us to deliver solutions oriented towards the elderly, children and people with varying degrees of disability~\cite{Roy_2000_3390}. By combining these techniques with systems capable of generating instructions in natural language (whose range of applications comprises navigational systems and computer interfaces for the visually impaired, among others), we could also achieve much more natural interaction with our systems through dialogue.

\begin{figure}
\begin{center}
\includegraphics[scale=0.33]{paraphrases.jpg}
\caption{A screenshot of a virtual world from the GIVE Challenge. The world consists of interconnecting hallways, rooms and objects.}
\label{fig:world}
\end{center}
\end{figure}

% The section formerly known as "State of the art"
\subsection{Previous work on Instruction Interpretation}
% FIXED: Too long of a sentence
Many of the current approaches to this problem can be classified into two main branches. The first is that of {\em Symbolic approaches}~\cite{benotti-frolog,devault-stone:2009:EACL,MacMahon:2006:WTC}, in which the meaning of a sentence is inferred by analysing the role of each word in the phrase (subject, verb, direct object, etc.) by means of detailed grammars for the speaker's language. These approaches eventually hit a roadblock when it became clear that full lexical and grammatical coverage could not be achieved through such methods: given that the rules were created manually, it would be impossible to produce a set of rules comprehensive enough to cover all possible phrases in a given language. Instead, {\em Statistical approaches}~\cite{traum-non-team, swartout-iva, Vogel:2010:LFN, chen:aaai11} became the second main branch. In these approaches, a sample, or corpus, of expected phrases from the target domain is collected, annotated (usually manually) and then used to train a machine learning system.

% Requires more state-of-the-art
% FIXED: Switch the order, first symbolic then statistical
% FIXED: add this references to the general introduction in the previous paragraph. Also, why was symbolic abandoned? (not full coberture for lexic and grammar)
% FIXED: remove "in this work, a sentence is evaluated"
% FIXED: reinforcement learning requires too many interactions - ours requires slightly less 
% FIXED: "certain aspects" -> in small domains, but it does not scale well to larger domains - and then, cite Sutton and Barto (book of reinforcement learning)

Similar navigation tasks in 2D environments have been explored by~\cite{MacMahon:2006:WTC} in their MARCO architecture. This architecture focuses on the structure of the sentences, but its best results come at the expense of requiring a perfect parse tree for every given sentence. \cite{Vogel:2010:LFN} have tackled this problem by means of reinforcement learning. This approach has proven successful in small domains but, as reinforcement learning approaches in general, it does not scale well to larger domains~\cite{sutton-reinf-learn}. The creation of a probabilistic model for predicting the correct interpretation of an instruction has also been explored by~\cite{zukerman-EtAl:2009:SIGDIAL}.

Although the area of Instruction Interpretation has moved from symbolic approaches towards statistical approaches, the downside of the latter is the large amount of work required in the corpus annotation phase. Learning to interpret instructions from {\em automatically annotated} data would solve this problem, as previously explored by~\cite{chen:aaai11}. In their work they examine a similar alternative by building a semantic parser for each instruction, but the data must be manually preprocessed, which is a time-consuming task.

% NOTE: I'm not mentioning *cheap* data because we are not talking about *expensive* data.
In our project we propose a mixed method in which we use a statistical approach over automatically annotated data, exploiting existing interaction data directly, with no manual processing involved.

% FIXED: Learning to interpret instructions from A.Anot. data has been explored before by Chen and Mooney (cita). In their work, the data must be manually preprocessed. In this project, we propose a method for exploiting cheap interaction data directly, with no manual processing involved. END

% FIXED: say why symbolic approaches are bad (because it is impossible), then the area MOVED TOWARDS a stadistic approach, but this requires much annotated work.
% How do we solve this? By using a statistical approach, but over *automatically* annotated data. //x(People annotates this by reacting)

\subsection{Relation between Interpretation and Generation}
% This is the beggining of the connection between our work and Alex's
% Perhaps this should be compressed?

% FIXED: not "introduced", but "did research"

The area of Instruction Interpretation is closely related to Instruction Generation, an area previously explored by Luciana Benotti~\cite{benotti-denis:2011:ENLG} and Alexander Koller~\cite{COIN370}, particularly in the context of the GIVE Challenge~\cite{KolStrGarByrCasDalMooObe10}. The GIVE Challenge is a competition in which different Instruction Generation systems compete against each other, with their scores measured according to instruction clarity, naturalness and precision, among other criteria.

% Worded as in the paper
I have already done research in this area in a previous project~\cite{mipaper}, in which we presented a new approach to automatic instruction interpretation, obtaining promising results which, in a preliminary study, could even surpass the state of the art in the area~\cite{chen:aaai11}. In that project we introduced a statistical approach in which annotations are obtained through automated planning techniques~\cite{nau04}, yielding an unsupervised training strategy. This greatly simplifies the process of adapting the system to a particular domain, making our approach well suited for fast prototyping of conversational interfaces for instruction interpretation, as it reduces the amount of work required from the designer.
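As a toy illustration of this kind of automatic annotation (a minimal sketch with invented room names and a plain breadth-first planner, not the planner of~\cite{nau04} nor our actual system), an instruction can be paired with the action sequence a planner computes between the user's observed start and end positions, so that no human annotator is needed:

```python
from collections import deque

# Toy world graph: rooms connected by openings (names are invented for illustration).
WORLD = {
    "room_plant": ["hallway"],
    "hallway": ["room_plant", "room_pink", "room_couch"],
    "room_pink": ["hallway"],
    "room_couch": ["hallway"],
}

def plan(start, goal):
    """Breadth-first search: the shortest room-to-room path stands in for a plan."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in WORLD[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def annotate(instruction, start, observed_goal):
    """Pair the raw instruction with the planner's action sequence.
    The room the user actually ended up in (observed_goal) supplies the
    label automatically, replacing manual annotation."""
    path = plan(start, observed_goal)
    moves = [("move", a, b) for a, b in zip(path, path[1:])]
    return (instruction, moves)

pair = annotate("go to the pink room", "room_plant", "room_pink")
```

Here the pair associates the utterance with the two inferred moves (plant room to hallway, hallway to pink room); in a real system the plan would come from a full planner over the virtual environment's state.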

Both Benotti's work and mine share a common semantic representation of the world, and both use automated annotation. This representation is closer to the world than typical surface representations of tasks, which is why the model is useful for both interpretation and generation. These results essentially show that a reversible model is possible, and that our current approaches seem to be headed in the right direction.

Even though our ultimate objective is to use these technologies in the real world, our research will be designed and evaluated over a 3D virtual world, since using online video games as a medium has proven remarkably useful: it allows volunteers from all over the world to take part, through a familiar interface, in the collection of corpora of human behaviour~\cite{orkin-nleg11}, and it also enables researchers to make assumptions about the environment without requiring the implementation of complex and time-consuming autonomous agents. Given that a virtual environment is, at its core, a generic definition, we should also be able to apply our results to any environment of this kind, the most salient of which would be the World Wide Web. Thus, given a virtual environment, we would be able to define clear guidelines regarding how to implement instruction interpretation in it.

% TODO: my work goes to ex "state of the art". Then we explain what CL does, and then we link by explaining that CL does generation and the paper does interpretation, both sharing a common semantic repretsentation, and both use automated annotation.  It is also closer to the world than to the superficial representation of the task, and that's why its useful for both (as both Int and Gen share the representation of the world, not the phrase).

% The features we end up defining can help the other camp (gen vs interp)
% Gen requires as contextual features: history, location, orientation, interacted objects, visited rooms and so on depending on what I want to generate, and I need to consider them for Interp also. 

% Annotation is usually closer to the "surface form" (push button), while ours is more connected to the virtual environment, not with what was said (move from region to region instead of forward), which is what human annotators would annotate. Our semantic representation is grounded in the V.E., which can be used for interp. Main disadvantage? Very dependent of the V.E., and our annotations are not generalizable to differente V.E. - need data for the new VE (but is easy to get)


% Methods of data gathering