\section{Discussion and Conclusions} 
\label{sec:discussion}

In this article we presented an evaluation of the algorithm introduced in~\cite{arec2:2008:Areces}, extended to generate REs similar to those produced by humans. The proposed modifications are based on the observation that humans frequently overspecify their REs~\cite{Engelhardt_Bailey_Ferreira_2006,Arts_Maes_Noordman_Jansen_2011}. 
We tested the proposed algorithm on the TUNA corpus and found that it generates a large proportion of the overspecified REs found in the corpus without producing trivially redundant referring expressions. The generated expressions are preferred by (one or more) human judges 92\% of the time. 

%\cite{viet:gene11} trains decision trees that are able to achieve a 65\% average accuracy on the GRE3D7 corpus. 
%The approach based on decision trees is able to generate overspecified relational descriptions, but they might fail to be referring 
%expressions. Indeed, as the decision trees does not verify the extension of the generated expression over a model of the scene, the 
%generated descriptions might not uniquely identify the target.  As we have already discussed,
%our algorithm ensures termination and it always finds a referring expression if one exists.  Moreover, it achieves an average of 75.03\% of %accuracy over the scenes used in our tests. 

Different algorithms for the generation of overspecified and distinguishing referring expressions have been proposed in recent years 
(see, e.g.,~\cite{delucena-paraboni:2008:ENLG,ruud-emiel-mariet:2012:INLG2012}). In this paper we compare our algorithm to the Graph algorithm~\cite{KrahmerGRAPH}, which has been shown to achieve better accuracy than the algorithms described in~\cite{delucena-paraboni:2008:ENLG,ruud-emiel-mariet:2012:INLG2012} in the TUNA shared task~\cite{gatt-balz-kow:2008:ENLG}. 

An interesting outcome of our work is that it makes evident the relationship between overspecification and the saliency of properties in the context of a scene.
 
As described in Section~\ref{sec:algorithm}, the generation of overspecified REs is performed in two steps. In the first iteration, the probability of including a property in the RE depends only on its \puse. Our definition of \puse\ is intended to capture the saliency of the properties for different scenes and targets. The \puse\ of a property changes according to the scene, as we discussed in Section~\ref{sec:learning}. This is in contrast with previous work, where the saliency of a property is constant in a domain. In the first iteration, if the \puse\ is high, that is, if the property is very salient, it will probably be used whether or not it eliminates any distractor. After all properties have had a chance of being included in this way, if the resulting RE is not distinguishing, the algorithm enters a second phase in which it makes sure that the RE identifies the target uniquely.
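The two-step procedure can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the implementation evaluated in the paper: objects are modeled as plain sets of atomic properties, and the function and parameter names (\texttt{generate\_re}, \texttt{puse}) are hypothetical.

```python
import random

def generate_re(target, distractors, properties, puse, rng=random.random):
    """Illustrative two-step RE generation (hypothetical names).

    Step 1: each property of the target is included with probability
    puse[p], regardless of whether it removes distractors (this is the
    salience-driven, "egocentric" phase that models overspecification).
    Step 2: if distractors remain, discriminating properties are added
    until the expression uniquely identifies the target (if possible).
    """
    re_props = []
    remaining = set(distractors)

    # Step 1: probabilistic, salience-driven inclusion.
    for p in properties:
        if p in target and rng() < puse[p]:
            re_props.append(p)
            remaining = {d for d in remaining if p in d}

    # Step 2: ensure the expression is distinguishing.
    for p in properties:
        if remaining and p in target and p not in re_props:
            filtered = {d for d in remaining if p in d}
            if len(filtered) < len(remaining):  # p rules out a distractor
                re_props.append(p)
                remaining = filtered

    return re_props, remaining  # remaining is empty iff the RE distinguishes
```

For example, with a very salient property such as \texttt{"red"} assigned a high \puse, the sketch will often include it in step 1 even when it rules out no distractor, producing the kind of overspecified RE discussed above.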

Our two-step algorithm is inspired by the work of~\cite{keysar:Curr98} on egocentrism and natural language production. Keysar et al.\ put forward the proposal that, when producing language, considering the hearer's point of view is not done from the outset but is rather an afterthought~\cite{keysar:Curr98}. They argue that adult speakers produce REs egocentrically, just like children do, but then adjust the REs so that the addressee is able to identify the target unequivocally. The egocentric step is a heuristic process based on a model of the saliency of the scene that contains the target. As a result, REs that include salient properties are preferred by our algorithm even if such properties are not necessary to identify the target univocally. Keysar et al.\ argue that the reason for the generate-and-adjust procedure may have to do with the information processing limitations of the mind: if the heuristic that guides the egocentric phase is well tuned, it succeeds with a suitable RE in most cases and seldom requires adjustments. Interestingly, we observe a similar behavior with our algorithm: when \puse\ values learned from the domain are used, the algorithm is not only much more accurate but also much faster. 

As future work, we plan to evaluate our algorithm on the generation of referring expressions inside discourse, as required by domains like those provided by Open Domain Folksonomies~\cite{pacheco-duboue-dominguez:2012:NAACL-HLT}. We also plan to explore corpora obtained from interaction, such as the GIVE Corpus~\cite{GarGarKolStr10}, where it is common to observe multi-shot REs. Under time pressure, subjects first produce an underspecified expression that includes salient properties of the target (e.g., ``the red button''). Then, in a following utterance, they add further properties (e.g., ``to the left of the lamp'') to make the expression a proper RE that identifies the target uniquely.

