\section{Discussion and Conclusions} \label{sec:discussion}

We extended the algorithm of~\shortcite{arec2:2008:Areces} to generate REs similar to those produced by humans. The modifications we proposed are based on two observations: first, it has been argued that no fixed ordering of properties can generate all REs produced by humans and, second, humans frequently overspecify their REs~\cite{Engelhardt_Bailey_Ferreira_2006,Arts_Maes_Noordman_Jansen_2011,viet:gene11}. We tested the proposed algorithm on the GRE3D7 corpus and found that it generates a large proportion of the overspecified REs found in the corpus without producing trivially redundant referring expressions.
%
\shortcite{viet:gene11} trains decision trees that achieve 65\% average accuracy on the GRE3D7 corpus.
This approach is able to generate overspecified relational descriptions, but the resulting descriptions may fail to be referring expressions: since the method does not verify the extension of the generated expression over a model of the scene, the generated descriptions might not uniquely identify the target. As we have already discussed, our algorithm ensures termination and always finds a referring expression whenever one exists. Moreover, it achieves an average accuracy of 75\% over the 8 scenes used in our tests.

Different algorithms for the generation of overspecified referring expressions have recently been proposed~\cite{delucena-paraboni:2008:ENLG,ruud-emiel-mariet:2012:INLG2012}. To our knowledge, they have not been evaluated on the GRE3D7 corpus and, hence, a direct comparison is difficult. The algorithms of \shortcite{delucena-paraboni:2008:ENLG} and \shortcite{ruud-emiel-mariet:2012:INLG2012} have been evaluated on the TUNA-AR corpus~\cite{gatt-balz-kow:2008:ENLG}, where they achieved 33\% and 40\% accuracy, respectively.
Since the TUNA-AR corpus includes only propositional REs, it would be interesting future work to evaluate how these algorithms perform on corpora with relational REs such as GRE3D7.


The way we introduce overspecification is inspired by the work of~\shortcite{keysar:Curr98} on egocentrism and natural language production. Keysar et al.\ argue that, when producing language, speakers do not consider the hearer's point of view from the outset; it is rather an afterthought. Adult speakers produce REs egocentrically, just as children do, but then adjust them so that the addressee can identify the target unequivocally. The first, egocentric step is a heuristic process based on a model of the saliency of the scene that contains the target.
Our definition of \puse\ is intended to capture the salience of properties for different scenes and targets: the \puse\ of a relation changes with the scene. This is in contrast with previous work, where the saliency of a property is constant across a domain. Keysar et al.~argue that the reason for this generate-and-adjust procedure may have to do with the information-processing limitations of the mind: if the heuristic that guides the egocentric phase is well tuned, it succeeds with a suitable RE in most cases and seldom requires adjustments. Interestingly, we observe a similar behavior with our algorithm: when \puse\ values learned from the domain are used, the algorithm is not only more accurate but also much faster than when random \puse\ values are used.

Besides testing our algorithm on the rest of the scenes in the GRE3D7 corpus, as future work we plan to evaluate it on more complex domains, such as those provided by Open Domain Folksonomies~\cite{pacheco-duboue-dominguez:2012:NAACL-HLT}. We will also explore corpora obtained through interaction, such as the GIVE Corpus~\cite{GarGarKolStr10}, where multi-shot REs are common: under time pressure, subjects first produce an underspecified expression that includes salient properties of the target (e.g., ``the red button'') and then, in a following utterance, add further properties (e.g., ``to the left of the lamp'') that turn the expression into a proper RE identifying the target uniquely. The source code and documentation for the algorithm are distributed under the GNU Lesser GPL and can be obtained at \url{http://code.google.com/p/bisimulation-gre}.

\begin{small}
\paragraph{Acknowledgments.}
 This work was partially supported by grants ANPCyT-PICT-2008-306, ANPCyT-PICT-2010-688, the FP7-PEOPLE-2011-IRSES Project
``Mobility between Europe and Argentina applying Logics to Systems'' (MEALS)
and the Laboratoire Internationale Associ\'e ``INFINIS''.
\end{small}
