{ "paper_id": "I11-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:32:37.568316Z" }, "title": "Multi-modal Reference Resolution in Situated Dialogue by Integrating Linguistic and Extra-Linguistic Clues", "authors": [ { "first": "Ryu", "middle": [], "last": "Iida", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": { "addrLine": "2-12-1 Ohokayama Meguro Tokyo", "postCode": "W8-73, 152-8552", "country": "Japan" } }, "email": "ryu-i@cl.cs.titech.ac.jp" }, { "first": "Masaaki", "middle": [], "last": "Yasuhara", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": { "addrLine": "2-12-1 Ohokayama Meguro Tokyo", "postCode": "W8-73, 152-8552", "country": "Japan" } }, "email": "yasuhara@cl.cs.titech.ac.jp" }, { "first": "Takenobu", "middle": [], "last": "Tokunaga", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": { "addrLine": "2-12-1 Ohokayama Meguro Tokyo", "postCode": "W8-73, 152-8552", "country": "Japan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper focuses on examining the effect of extra-linguistic information, such as eye gaze, integrated with linguistic information on multi-modal reference resolution. In our evaluation, we employ eye gaze information together with other linguistic factors in machine learning, while in prior work such as Kelleher (2006) and Prasov and Chai (2008) the incorporation of eye gaze and linguistic clues was heuristically realised. Conducting our empirical evaluation using a data set extended the REX-J corpus (Spanger et al., 2010) including eye gaze information, we examine which types of clues are useful on these three data sets, which consist largely of pronouns, nonpronouns and both respectively. Our results demonstrate that a dynamically moving visible indicator within the computer display (e.g. a mouse cursor) contributes to reference resolution for pronouns, while eye gaze information is more useful for the resolution of non-pronouns.", "pdf_parse": { "paper_id": "I11-1010", "_pdf_hash": "", "abstract": [ { "text": "This paper focuses on examining the effect of extra-linguistic information, such as eye gaze, integrated with linguistic information on multi-modal reference resolution. In our evaluation, we employ eye gaze information together with other linguistic factors in machine learning, while in prior work such as Kelleher (2006) and Prasov and Chai (2008) the incorporation of eye gaze and linguistic clues was heuristically realised. Conducting our empirical evaluation using a data set extended the REX-J corpus (Spanger et al., 2010) including eye gaze information, we examine which types of clues are useful on these three data sets, which consist largely of pronouns, nonpronouns and both respectively. Our results demonstrate that a dynamically moving visible indicator within the computer display (e.g. a mouse cursor) contributes to reference resolution for pronouns, while eye gaze information is more useful for the resolution of non-pronouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The task of reference resolution has received much attention because it is important for applications that require interpreting text. 
In recent work on reference resolution within a text, several machine learning-based approaches have been proposed (McCarthy and Lehnert, 1995; Ge et al., 1998; Soon et al., 2001; Ng and Cardie, 2002; Iida et al., 2003; Yang et al., 2003; Denis and Baldridge, 2008) , each of which mainly exploits linguistic clues motivated by the Centering Theory (Grosz et al., 1995) to model the discourse salience of all candidate antecedents. For instance, Yang et al. (2003) and Iida et al. (2003) presented machine learning-based reference resolution mod-els where a pairwise comparison of candidate antecedents, in line with the basic idea of the Centering Theory, leads to the selection of the candidate with the highest salience for a given context. Denis and Baldridge (2008) extended the model by integrating the set of pairwise comparisons into ranking candidates to directly learn which clues of antecedents are useful.", "cite_spans": [ { "start": 249, "end": 277, "text": "(McCarthy and Lehnert, 1995;", "ref_id": "BIBREF20" }, { "start": 278, "end": 294, "text": "Ge et al., 1998;", "ref_id": "BIBREF6" }, { "start": 295, "end": 313, "text": "Soon et al., 2001;", "ref_id": "BIBREF29" }, { "start": 314, "end": 334, "text": "Ng and Cardie, 2002;", "ref_id": "BIBREF23" }, { "start": 335, "end": 353, "text": "Iida et al., 2003;", "ref_id": "BIBREF12" }, { "start": 354, "end": 372, "text": "Yang et al., 2003;", "ref_id": "BIBREF37" }, { "start": 373, "end": 399, "text": "Denis and Baldridge, 2008)", "ref_id": "BIBREF3" }, { "start": 483, "end": 503, "text": "(Grosz et al., 1995)", "ref_id": "BIBREF8" }, { "start": 580, "end": 598, "text": "Yang et al. (2003)", "ref_id": "BIBREF37" }, { "start": 603, "end": 621, "text": "Iida et al. (2003)", "ref_id": "BIBREF12" }, { "start": 878, "end": 904, "text": "Denis and Baldridge (2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Through the empirical evaluations using the data sets provided by the Message Understanding Conference (MUC) 1 and the Automatic Content Extraction (ACE) 2 , which consist of newspaper articles and transcripts of broadcasts, linguistically motivated approaches have achieved better performance than state-of-the-art rule-based reference resolution systems (e.g. Soon et al. (2001) and Ng and Cardie (2002) ).", "cite_spans": [ { "start": 362, "end": 380, "text": "Soon et al. (2001)", "ref_id": "BIBREF29" }, { "start": 385, "end": 405, "text": "Ng and Cardie (2002)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In contrast to this research paradigm (i.e. research focusing on only the linguistic aspect of reference), research in the area of multi-modal interfaces has focused on referring expressions used in multi-modal conversations, in other words, identifying referents of referring expressions in a static scene or a situated world (e.g. objects depicted in a computer display), taking extralinguistic clues into account (Byron, 2005; Prasov and Chai, 2008; Prasov and Chai, 2010; Sch\u00fctte et al., 2010, etc.) . For instance, Kelleher and van Genabith (2004) used the centrality and size of a object in the display to determine its visual salience. 
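As a rough illustration of this kind of heuristic, the sketch below scores an on-screen object by its area and its distance from the display centre; the equal weighting and the normalisation are assumptions made here for illustration, not the published formula of Kelleher and van Genabith (2004).

```python
import math

def visual_salience(x, y, width, height, display_w=1024, display_h=768):
    """Toy salience score: larger objects nearer the display centre score higher.

    The equal weighting of the size and centrality terms is an illustrative
    assumption, not the formula used in the cited work.
    """
    size_term = (width * height) / (display_w * display_h)
    cx, cy = display_w / 2.0, display_h / 2.0
    dist = math.hypot(x - cx, y - cy)
    centrality_term = 1.0 - dist / math.hypot(cx, cy)
    return 0.5 * (size_term + centrality_term)

# Rank two on-screen objects (centre x, centre y, width, height) by salience.
objects = {"piece_a": (500, 380, 120, 120), "piece_b": (60, 40, 200, 200)}
print(sorted(objects, key=lambda k: visual_salience(*objects[k]), reverse=True))
```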
Prasov and Chai (2008) and Prasov and Chai (2010) exploited eye fixations to detect users' focus of attention in terms of visual prominence; their research has been motivated by work in the cognitive sciences (Tanenhaus et al., 1995; Tanenhaus et al., 2000; Hanna et al., 2003; Hanna and Tanenhaus, 2004; Hanna and Brennan, 2007; Metzing and Brennan, 2003; Ferreira and Tanenhaus, 2007; Brown-Schmidt et al., 2002) .", "cite_spans": [ { "start": 416, "end": 429, "text": "(Byron, 2005;", "ref_id": "BIBREF2" }, { "start": 430, "end": 452, "text": "Prasov and Chai, 2008;", "ref_id": "BIBREF24" }, { "start": 453, "end": 475, "text": "Prasov and Chai, 2010;", "ref_id": "BIBREF25" }, { "start": 476, "end": 503, "text": "Sch\u00fctte et al., 2010, etc.)", "ref_id": null }, { "start": 520, "end": 552, "text": "Kelleher and van Genabith (2004)", "ref_id": "BIBREF16" }, { "start": 643, "end": 665, "text": "Prasov and Chai (2008)", "ref_id": "BIBREF24" }, { "start": 670, "end": 692, "text": "Prasov and Chai (2010)", "ref_id": "BIBREF25" }, { "start": 852, "end": 876, "text": "(Tanenhaus et al., 1995;", "ref_id": "BIBREF34" }, { "start": 877, "end": 900, "text": "Tanenhaus et al., 2000;", "ref_id": "BIBREF35" }, { "start": 901, "end": 920, "text": "Hanna et al., 2003;", "ref_id": "BIBREF11" }, { "start": 921, "end": 947, "text": "Hanna and Tanenhaus, 2004;", "ref_id": "BIBREF10" }, { "start": 948, "end": 972, "text": "Hanna and Brennan, 2007;", "ref_id": "BIBREF9" }, { "start": 973, "end": 999, "text": "Metzing and Brennan, 2003;", "ref_id": "BIBREF21" }, { "start": 1000, "end": 1029, "text": "Ferreira and Tanenhaus, 2007;", "ref_id": "BIBREF4" }, { "start": 1030, "end": 1057, "text": "Brown-Schmidt et al., 2002)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These previous studies have shown how promising using eye gaze information for multi-modal reference resolution can be. However, they rely on heuristic techniques for determining visual salience. Hence, there is still room for improvement by introducing eye gaze information in a more systematic and principled manner 3 . This paper, therefore, focuses on a multi-modal reference resolution model that integrates eye gaze and linguistic information by using a machine learning technique. Adapting a ranking-based anaphora resolution model, such as was proposed by Denis and Baldridge (2008) , we integrate extra-linguistic information with other linguistic factors for more accurate reference resolution. With the above as a suitable background, this paper focuses on the issue of how to effectively combine linguistic and extra-linguistic factors for multi-modal reference resolution, taking collaborative task dialogues in Japanese as our target data set.", "cite_spans": [ { "start": 564, "end": 590, "text": "Denis and Baldridge (2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organised as follows. We first explain related work and our stance on multi-modal reference resolution in Section 2; we then present which multi-modal task we chose and how we merge eye gaze information into the predefined multi-modal task in Section 3. Section 4 introduces what types of information are used in the experiments shown in Section 5. 
We finally conclude this paper and discuss future directions in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Within the field of computational linguistics, researchers have focused on developing computational models of reference resolution, taking into account various linguistic factors, such as grammatical, semantic and discourse clues mainly acquired from the relationship between an anaphor and any candidate antecedents (Mitkov, 2002; Lappin and Leass, 1994; Brennan et al., 1987; Strube and Hahn, 1996, etc.) . Research trends for reference resolution have shifted from handcrafted rule-based approaches to corpus-based approaches due to the growing success of machine learning algorithms (e.g. Support Vector Ma- 3 Frampton et al. (2009) employed the incorporation of linguistic and visual features on reference resolution of multiparty dialogues. However, their target was limited to only the expression you in dialogues, while our focus is to investigate the use of the expressions bridging between a dialogue and the real world (e.g. expressions referring to puzzle pieces on a computer display).", "cite_spans": [ { "start": 317, "end": 331, "text": "(Mitkov, 2002;", "ref_id": "BIBREF22" }, { "start": 332, "end": 355, "text": "Lappin and Leass, 1994;", "ref_id": "BIBREF19" }, { "start": 356, "end": 377, "text": "Brennan et al., 1987;", "ref_id": "BIBREF0" }, { "start": 378, "end": 406, "text": "Strube and Hahn, 1996, etc.)", "ref_id": null }, { "start": 612, "end": 613, "text": "3", "ref_id": null }, { "start": 614, "end": 636, "text": "Frampton et al. (2009)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "chines (Vapnik, 1998) ). For instance, an approach to coreference resolution proposed by Soon et al. (2001) , in which the problem of reference resolution is decomposed into a set of binary classification problems of whether a pair of markables (e.g. NP) are anaphoric or not, achieved performance comparable to the state-of-the-art rule-based system, even though they used only a limited number of simple features. Researchers' concerns in this area cover a broad range of research topics from modeling the coreferential transitivity of a set of markables, to integrating discourse salience motivated by the Centering Theory (Grosz et al., 1995) . This research area has continued to produce novel reference resolution models over the years, but the target of reference resolution is limited to only written texts or transcripts of speech.", "cite_spans": [ { "start": 7, "end": 21, "text": "(Vapnik, 1998)", "ref_id": "BIBREF36" }, { "start": 89, "end": 107, "text": "Soon et al. (2001)", "ref_id": "BIBREF29" }, { "start": 626, "end": 646, "text": "(Grosz et al., 1995)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In contrast to the above research area, researchers in the multi-modal community also have paid attention to reference resolution because it is also a crucial task for realising interaction between humans and computers. In this area, the evaluation is typically conducted in the situation where a set of objects (i.e. candidate referents) are depicted within a computer display. For instance, Stoia et al. (2008) designed an experiment where two participants controlled an avatar in a virtual world for exploring hidden treasures. 
In this case, the task of reference resolution is to identify an object shown on the computer display as referred to by a referring expression used by the participants during dialogue. The task becomes more complicated than typical coreference resolution for written texts because a referent is considered as either anaphoric (i.e. it has already appeared in the previous discourse history) or exophoric, (i.e. the reference resolution system needs to search for the referent from the set of objects shown in a computer display).", "cite_spans": [ { "start": 393, "end": 412, "text": "Stoia et al. (2008)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In order to capture the characteristics of exophoric cases, extra-linguistic information acquired from participants' eye gaze data and the visual prominence of each object are also exploited together with linguistic information. A series of research by Kelleher and his colleagues (Kelleher and van Genabith, 2004; Kelleher et al., 2005; Kelleher, 2006; Sch\u00fctte et al., 2010) tackled the problem of modeling visual salience of objects in situated dialogue. In their algorithm, the visual salience of each object is estimated based on its centrality within the scene and its size; their hy-pothesis was that the salience is higher if a object is larger and is placed nearer the centre of the computer display. In Kelleher (2006) 's approach to reference resolution, linguistic clues such as ranking rules of candidate referents based on the Centering Theory (Grosz et al., 1995) were introduced in addition to using visual salience, but the integration of both clues was done in a heuristic way.", "cite_spans": [ { "start": 299, "end": 314, "text": "Genabith, 2004;", "ref_id": "BIBREF16" }, { "start": 315, "end": 337, "text": "Kelleher et al., 2005;", "ref_id": "BIBREF17" }, { "start": 338, "end": 353, "text": "Kelleher, 2006;", "ref_id": "BIBREF18" }, { "start": 354, "end": 375, "text": "Sch\u00fctte et al., 2010)", "ref_id": "BIBREF28" }, { "start": 712, "end": 727, "text": "Kelleher (2006)", "ref_id": "BIBREF18" }, { "start": 857, "end": 877, "text": "(Grosz et al., 1995)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In addition to the visual salience assessed from the characteristics of objects in the world, eye gaze has received much attention as a clue for reference resolution. Prasov and Chai (2008) , for example, employed eye gaze on the task of identifying a referent in the situation where objects are placed in a static scene. The time span after a speaker most recently fixates on an object is incorporated into their reference resolution model as well as the information of how recently the object was referred to by a referring expression. 
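The sketch below shows one way such gaze recency and mention recency could be folded into a single preference score for a candidate object; the reciprocal decay and the weights are illustrative assumptions, not the actual model of Prasov and Chai (2008).

```python
def referent_preference(secs_since_last_fixation, secs_since_last_mention,
                        gaze_weight=0.6, mention_weight=0.4):
    """Toy preference score favouring objects fixated and mentioned recently.

    Pass float('inf') for objects never fixated or never mentioned; that term
    then contributes nothing to the score.
    """
    gaze_term = 1.0 / (1.0 + secs_since_last_fixation)
    mention_term = 1.0 / (1.0 + secs_since_last_mention)
    return gaze_weight * gaze_term + mention_weight * mention_term

# A piece fixated 0.3 s ago but last mentioned 12 s ago, and the opposite case.
print(referent_preference(0.3, 12.0), referent_preference(12.0, 0.3))
```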
Although the results of their evaluation demonstrated that eye gaze significantly contributes to increasing performance, there is still room for improvement by adapting machine learning techniques, because in their work the linguistic and visual attention information was heuristically integrated.", "cite_spans": [ { "start": 167, "end": 189, "text": "Prasov and Chai (2008)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In contrast, our previous work employed a machine learning technique to identify the most likely candidate referent, taking into account linguistic features together with cues capturing visual salience found within the situated dialogues contained in the REX-J corpus (Spanger et al., 2010) . We reported that extra-linguistic information contributes to improving performance (especially, in pronominal reference). However, in Iida et al. (2010) eye gaze information was not considered, even though in the area of cognitive science researchers have demonstrated that a speaker's eye fixations are strong clues for identifying a referent of a referring expression (Tanenhaus et al., 1995; Tanenhaus et al., 2000; Hanna et al., 2003; Hanna and Tanenhaus, 2004; Hanna and Brennan, 2007; Metzing and Brennan, 2003; Ferreira and Tanenhaus, 2007; Brown-Schmidt et al., 2002) . Against this background, we investigate the effect of linguistic and extra-linguistic information including eye gaze on multi-modal reference resolution, extending Iida et al. 2010 ", "cite_spans": [ { "start": 268, "end": 290, "text": "(Spanger et al., 2010)", "ref_id": "BIBREF30" }, { "start": 663, "end": 687, "text": "(Tanenhaus et al., 1995;", "ref_id": "BIBREF34" }, { "start": 688, "end": 711, "text": "Tanenhaus et al., 2000;", "ref_id": "BIBREF35" }, { "start": 712, "end": 731, "text": "Hanna et al., 2003;", "ref_id": "BIBREF11" }, { "start": 732, "end": 758, "text": "Hanna and Tanenhaus, 2004;", "ref_id": "BIBREF10" }, { "start": 759, "end": 783, "text": "Hanna and Brennan, 2007;", "ref_id": "BIBREF9" }, { "start": 784, "end": 810, "text": "Metzing and Brennan, 2003;", "ref_id": "BIBREF21" }, { "start": 811, "end": 840, "text": "Ferreira and Tanenhaus, 2007;", "ref_id": "BIBREF4" }, { "start": 841, "end": 868, "text": "Brown-Schmidt et al., 2002)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In our evaluation of automatic reference resolution, we focus on investigating the interaction between linguistic and extra-linguistic clues including eye fixations on multi-modal reference resolution. Therefore, corpora where participants frequently utter both anaphoric and exophoric referring expressions are preferable for our evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting eye gaze data in situated dialogues", "sec_num": "3" }, { "text": "In recent multi-modal problem settings for data collection, researchers have been concerned with more realistic situations, such as dynamically changing scenes rendered in a 3D virtual world (e.g. (Byron, 2005) ). However, if we use data collected from such a scenario, referring expressions will be relatively skewed to exophoric cases because of frequently occurring scene updates. 
On the other hand, if we adopt the data collected using a static scene, we will have a disadvantage in that the change of visual salience of objects is not observed because the centrality and size of each object is fixed through dialogues.", "cite_spans": [ { "start": 197, "end": 210, "text": "(Byron, 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Collecting eye gaze data in situated dialogues", "sec_num": "3" }, { "text": "For these reasons, we adopt the same task setting as introduced in the REX-J corpus (Spanger et al., 2010) , which consists of collaborative work (solving Tangram puzzles) by two participants; the setting of this corpus is more suitable for our purposes because of the frequent occurrence of both anaphoric and exophoric referring expressions.", "cite_spans": [ { "start": 84, "end": 106, "text": "(Spanger et al., 2010)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Collecting eye gaze data in situated dialogues", "sec_num": "3" }, { "text": "For collecting data, we recruited 18 Japanese graduate students, and split them into 9 pairs 4 . All pairs knew each other previously and were of the same gender and approximately the same age. Each pair was instructed to solve four different Tangram puzzles. The goal of the puzzle is to construct a given shape by arranging seven pieces (of different simple shapes) as shown in Figure 1 . The precise positions of every piece and every action that the participants make are recorded by the Tangram simulator in which the pieces on the computer display can be moved, rotated and flipped with simple mouse operations. The piece position and the mouse actions were recorded at intervals of 1/65 msec. The simulator displays two areas: a goal shape area (the left side of Figure 1 ) and a working area (the right side of Figure 1 ) where pieces are shown and can be manipulated.", "cite_spans": [], "ref_spans": [ { "start": 380, "end": 388, "text": "Figure 1", "ref_id": null }, { "start": 770, "end": 778, "text": "Figure 1", "ref_id": null }, { "start": 819, "end": 827, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Collecting eye gaze data in situated dialogues", "sec_num": "3" }, { "text": "A different role was assigned to each participant OP-UT (SV-UT) stands for the number of utterances of operators (solvers). The right side of OP-REX (SV-REX) is the frequency of referring expressions uttered by the operators (solvers), whereas the left side stands for the frequency of pronominal expressions uttered by the operators (solvers). ERR-OP (ERR-SV) is the error rate of measuring the operators' (solvers') eye gaze. SD means the standard derivation. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting eye gaze data in situated dialogues", "sec_num": "3" }, { "text": "\u00a1 \u00a2 \u00a3 \u00a4 \u00a5 \u00a2 \u00a6 \u00a7 \u00a2 \u00a8 \u00a7 \u00a2 \u00a9 \u00a1 \u00a8 \u00a2 \u00a8 \u00a7 \u00a2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting eye gaze data in situated dialogues", "sec_num": "3" }, { "text": "Figure 1: Screenshot of the Tangram simulator of a pair: a solver and an operator. Given a certain goal shape, the solver thinks of the necessary arrangement of the pieces and gives instructions to the operator for how to move them. The operator manipulates the pieces with the mouse according to the solver's instructions. 
During this interaction, frequent uttering of referring expressions is needed to distinguish between the different puzzle pieces. This collaboration is achieved by placing a set of participants side by side, each with their own display showing the work area and the mouse cursor begin manipulated by the operator in real time, and a shield screen set between them to prevent the operator from seeing the goal shape, which is visible only on the solver's screen, and to further restrict their interaction to only speech. We put no constraint on the contents of their dialogues. In addition to the attributes considered in the original REX-J corpus, we also collected eye gaze data synchronized with speech by using the Tobii T60 Eye Tracker, sampling at 60 Hz for recording users' eye gaze with 0.5 degrees in accuracy. Because the tracking results acquired from Tobii contain tracking errors, 5 dialogues in which the tracking results contain more than 40% errors were removed from the data set used in our evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting eye gaze data in situated dialogues", "sec_num": "3" }, { "text": "Annotating referring expressions and their referents were conducted in the same manner as Spanger et al. (2010) , i.e. annotation was conducted using a multimedia annotation tool, ELAN 5 ; an annotator manually detects a referring expression and then selects its referent out of the possible puzzle pieces shown on the computer display. Note that only Tangram pieces were tagged as referents of referring expressions, therefore the expressions referring to abstract entities such as an action and event were not annotated. In the corpus multiple pieces were annotated as a single referent, but such referents were excluded in our evaluation because of their infrequent occurrence. Table 1 summarises the statistics of our new version of the REX-J corpus, consisting of 27 dialogues.", "cite_spans": [ { "start": 90, "end": 111, "text": "Spanger et al. (2010)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 681, "end": 688, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Collecting eye gaze data in situated dialogues", "sec_num": "3" }, { "text": "4 Multi-modal reference resolution", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting eye gaze data in situated dialogues", "sec_num": "3" }, { "text": "To investigate the impact of extra-linguistic information on reference resolution, we conducted an empirical evaluation in which a reference resolution model chooses a referent (i.e. a piece) for a given referring expression from the set of pieces on the computer display. As a basis of our reference resolution model, we adopt an existing model for reference resolution. Recently, machine learning-based approaches to reference resolution (Soon et al., 2001; Ng and Cardie, 2002, etc.) focus on identifying anaphoric relations in texts, and have achieved better performance than hand-crafted rule-based approaches. These models for reference resolution take into account linguistic factors, such as relative salience of candidate antecedents, which have been discussed mainly in Centering Theory (Grosz et al., 1995) by ranking candidate antecedents appearing in the preceding discourse (Iida et al., 2003; Yang et al., 2003; Denis and Baldridge, 2008) . In order to take advantage of existing models, we adopt the ranking-based approach as a basis for our reference resolution model. 
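Schematically, ranking-based resolution amounts to mapping every candidate piece to a feature vector, scoring the candidates with a learned model and returning the top-ranked one. The minimal sketch below uses a plain linear scorer as a stand-in for the specific ranker described in the following paragraphs; the feature vectors and weights are placeholders.

```python
def resolve_referent(candidates, weights):
    """Return the piece whose feature vector receives the highest linear score.

    candidates: dict mapping a piece id to its feature vector (list of floats)
    weights:    learned weight vector of the same dimensionality
    """
    score = lambda vec: sum(w * v for w, v in zip(weights, vec))
    return max(candidates, key=lambda pid: score(candidates[pid]))

# Three candidate pieces described by three features each.
candidates = {"piece_1": [1.0, 0.0, 0.2],
              "piece_2": [0.0, 1.0, 0.9],
              "piece_3": [0.0, 0.0, 0.1]}
print(resolve_referent(candidates, weights=[0.5, 0.3, 1.2]))  # -> piece_2
```

The point of the sketch is only that all candidates are compared simultaneously under one scoring function, rather than through a chain of independent binary anaphoric/non-anaphoric decisions.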
More precisely, we em-eye gaze features GZ1: [0, 1] the frequency of fixating P in the time period [t \u2212 T, t], normalised by the frequency of the total fixations during the period. GZ2: [0, 1] the length of a fixation on P in the time period [t \u2212 T, t], nomalised by T . GZ3: [0, 1] the length of a fixation on P in the time period [t \u2212 T, t], nomalised by the total length of fixation. GZ4: [0, 1] the frequency of fixating P in the time period uttering a referring expression, normalised by the frequency of the total fixations during the period. GZ5: [0, 1] the length of a fixation on P in the time period uttering a referring expression, nominalised by T . GZ6: [0, 1] the length of a fixation on P in the time period uttering a referring expression, nominalised by the total length of fixation. GZ7: yes,no whether the frequency of fixating P in the time period [t \u2212 T, t] is most frequent. GZ8: yes,no whether the frequency of fixating P in the time period [t \u2212 T, t] is more than 1. GZ9: yes,no whether the fixation time of P in the time period [t \u2212 T, t] is longest out of all pieces. GZ10: yes,no whether there exists the fixation time of P in the time period [t \u2212 T, t]. GZ11: yes,no whether the frequency of fixating P in the time period uttering a referring expression is most frequent. GZ12: yes,no whether the frequency of fixating P in the time period uttering a referring expression is more than 1. GZ13: yes,no whether the fixation time of P in the time period uttering a referring expression is longest out of all pieces. GZ14: yes,no whether there exists the fixation time of P in the time period uttering a referring expression.", "cite_spans": [ { "start": 440, "end": 459, "text": "(Soon et al., 2001;", "ref_id": "BIBREF29" }, { "start": 460, "end": 486, "text": "Ng and Cardie, 2002, etc.)", "ref_id": null }, { "start": 797, "end": 817, "text": "(Grosz et al., 1995)", "ref_id": "BIBREF8" }, { "start": 888, "end": 907, "text": "(Iida et al., 2003;", "ref_id": "BIBREF12" }, { "start": 908, "end": 926, "text": "Yang et al., 2003;", "ref_id": "BIBREF37" }, { "start": 927, "end": 953, "text": "Denis and Baldridge, 2008)", "ref_id": "BIBREF3" }, { "start": 1131, "end": 1134, "text": "[0,", "ref_id": null }, { "start": 1135, "end": 1137, "text": "1]", "ref_id": null }, { "start": 1272, "end": 1275, "text": "[0,", "ref_id": null }, { "start": 1276, "end": 1278, "text": "1]", "ref_id": null }, { "start": 1362, "end": 1365, "text": "[0,", "ref_id": null }, { "start": 1366, "end": 1368, "text": "1]", "ref_id": null }, { "start": 1478, "end": 1481, "text": "[0,", "ref_id": null }, { "start": 1482, "end": 1484, "text": "1]", "ref_id": null }, { "start": 1640, "end": 1643, "text": "[0,", "ref_id": null }, { "start": 1644, "end": 1646, "text": "1]", "ref_id": null }, { "start": 1753, "end": 1756, "text": "[0,", "ref_id": null }, { "start": 1757, "end": 1759, "text": "1]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Base models", "sec_num": "4.1" }, { "text": "t is the onset time of a referring expression. P denotes a piece, T is a fixed time window (1500ms). Yang et al. (2003) ). In Denis and Baldridge (2008) 's ranking-based model, the most likely candidate antecedent is decided by simultaneously ranking all candidate antecedents. 
To induce a ranker used in the ranking process, we adopt the Ranking SVM algorithm (Joachims, 2002) 6 , which learns a weight vector to rank candidates for a given partial ranking of each referent, while the original work by Denis and Baldridge (2008) uses Maximum Entropy to create their ranking-based model. Each training instance is created from the set of all referents for each referring expression. To define the partial ranking of referents, we simply rank referents of a given referring expression as first place and any other referents as second place.", "cite_spans": [ { "start": 101, "end": 119, "text": "Yang et al. (2003)", "ref_id": "BIBREF37" }, { "start": 126, "end": 152, "text": "Denis and Baldridge (2008)", "ref_id": "BIBREF3" }, { "start": 361, "end": 377, "text": "(Joachims, 2002)", "ref_id": "BIBREF14" }, { "start": 503, "end": 529, "text": "Denis and Baldridge (2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Base models", "sec_num": "4.1" }, { "text": "As we mentioned in Section 2, a speaker's eye gaze contributes to disambiguating referents appearing in the speaker's utterances because the speaker tends to see the target object before it is referred to by a referring expression (Spivey et al., 2002) . Several aspects must be considered in order to integrate a speaker's eye gaze data. First, because the eye gaze data includes saccades, the inhibition factor of perceptual sensitivity, we extract only eye fixations as discussed in Richardson et al. (2007) . For separating saccades and eye fixations, we employ Dispersion-threshold identification (Salvucci and Anderson, 2001) , detecting fixations by using the concentration of eye gaze based on the fact the fixations are relatively slower than saccades. Second, because of the errors in measuring eye gaze by the eye tracker, the fixation data needs to be interpolated by the surrounding data. More specifically, if the error interval is less than 100 msec and the difference of the centers of two fixations is smaller then 16 pixels, these fixations are concatenated according to the work by Richardson et al. (2007) .", "cite_spans": [ { "start": 231, "end": 252, "text": "(Spivey et al., 2002)", "ref_id": "BIBREF31" }, { "start": 486, "end": 510, "text": "Richardson et al. (2007)", "ref_id": "BIBREF26" }, { "start": 602, "end": 631, "text": "(Salvucci and Anderson, 2001)", "ref_id": "BIBREF27" }, { "start": 1101, "end": 1125, "text": "Richardson et al. (2007)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Eye gaze features", "sec_num": "4.2" }, { "text": "The clues exploited in this paper are based on the fact that the direction of eye gaze directly reflects the focus of attention (Richardson et al., 2007; Just and Carpenter, 1976) , i.e. when one utters a referring expression, he potentially focuses on the object involved by fixating his eyes on it. Therefore, we use the eye fixations as clues for identifying the pieces focused on using the following criteria: the nearest piece to the eye fixation point is more likely a target of focus over all other pieces. To reflect this, we introduce the feature set shown in Table 2 . We henceforth call these features the eye gaze features. 
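The following sketch computes toy counterparts of features GZ1-GZ3 of Table 2 from a list of detected fixations, assigning each fixation to the piece whose centre is nearest; the data layout and the nearest-piece assignment are simplifying assumptions for illustration, not the exact feature extraction used in the experiments.

```python
import math

def nearest_piece(fx, fy, piece_centres):
    """Assign a fixation point to the piece whose centre is closest."""
    return min(piece_centres,
               key=lambda p: math.hypot(fx - piece_centres[p][0],
                                        fy - piece_centres[p][1]))

def gaze_features(piece, fixations, piece_centres, onset_ms, window_ms=1500):
    """Toy counterparts of GZ1-GZ3 for one candidate piece.

    fixations: list of (start_ms, end_ms, x, y) fixation records
    onset_ms:  onset time t of the referring expression
    """
    in_window = [f for f in fixations if onset_ms - window_ms <= f[0] <= onset_ms]
    if not in_window:
        return {"gz1": 0.0, "gz2": 0.0, "gz3": 0.0}

    on_piece = [f for f in in_window
                if nearest_piece(f[2], f[3], piece_centres) == piece]
    total_len = sum(end - start for start, end, _, _ in in_window)
    piece_len = sum(end - start for start, end, _, _ in on_piece)
    return {
        "gz1": len(on_piece) / len(in_window),             # fixation count ratio
        "gz2": piece_len / window_ms,                       # fixated time / window T
        "gz3": piece_len / total_len if total_len else 0.0  # fixated / total fixation time
    }

centres = {"piece_1": (300, 200), "piece_2": (700, 450)}
fix = [(400, 900, 310, 195), (1000, 1300, 690, 460)]
print(gaze_features("piece_1", fix, centres, onset_ms=1500))
```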
Note that the parameter T is set to 1,500 ms based on the previous work done by Prasov and Chai (2010) .", "cite_spans": [ { "start": 128, "end": 153, "text": "(Richardson et al., 2007;", "ref_id": "BIBREF26" }, { "start": 154, "end": 179, "text": "Just and Carpenter, 1976)", "ref_id": "BIBREF15" }, { "start": 716, "end": 738, "text": "Prasov and Chai (2010)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 569, "end": 576, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Eye gaze features", "sec_num": "4.2" }, { "text": "In order to investigate the effect of extra-linguistic information with or without linguistic factors, we conducted empirical evaluations using the updated version of the REX-J corpus explained in (a) Linguistic features L1 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether P is referred to by the most recent referring expression. L2 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether the time distance to the last mention of P is less than or equal to 10 sec. L3 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether the time distance to the last mention of P is more than 10 sec and less than or equal to 20 sec. L4 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether the time distance to the last mention of P is more than 20 sec. L5 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether P has never been referred to by any mentions in the preceding utterances. L6 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no, N/A whether the attributes of P are compatible with the attributes of R. L7 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether R is followed by the case marker 'o (accusative)'. L8 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether R is followed by the case marker 'ni (dative)'. L9 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether R is a pronoun and the most recent reference to P is not a pronoun. L10 : yes, no whether R is not a pronoun and was most recently referred to by a pronoun. (b) Task specific features T1 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether the mouse cursor was over P at the beginning of uttering R. T2 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether P is the last piece that the mouse cursor was over when feature T1 is 'no'. 
T3 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether the time distance is less than or equal to 10 sec after the mouse cursor was over P.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "5" }, { "text": "yes, no whether the time distance is more than 10 sec and less than or equal to 20 sec after the mouse cursor was over P. T5 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T4 :", "sec_num": null }, { "text": "yes, no whether the time distance is more than 20 sec after the mouse cursor was over P. T6 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T4 :", "sec_num": null }, { "text": "yes, no whether the mouse cursor was never over P in the preceding utterances. T7 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T4 :", "sec_num": null }, { "text": "yes, no whether P is being manipulated at the beginning of uttering R. T8 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T4 :", "sec_num": null }, { "text": "yes, no whether P is the most recently manipulated piece when feature T7 is 'no'. T9 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T4 :", "sec_num": null }, { "text": "yes, no whether the time distance is less than or equal to 10 sec after P was most recently manipulated. T10 : yes, no whether the time distance is more than 10 sec and less than or equal to 20 sec after P was most recently manipulated. T11 : yes, no whether the time distance is more than 20 sec after P was most recently manipulated. T12 : yes, no whether P has never been manipulated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T4 :", "sec_num": null }, { "text": "P stands for a piece of the Tangram puzzle (i.e. a candidate referent of a referring expression) and R stands for the target referring expression. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T4 :", "sec_num": null }, { "text": "We employed two models as baselines: a model using only discourse history features, and one using only eye gaze features. Because the task setting is the same as the evaluation conducted in , we employ the same feature set, consisting of linguistically motivated features, and also features which capture the task specific extra-linguistic information of each object. We call these two kinds of features the linguistic features and task specific features, respectively. The details of these features are summarised in Table 3. As reported in , the referential behaviour of pronouns is completely different from non-pronouns. For this reason, we separately create two reference resolution models; one called the pronoun model, which identifies a referent of a given pronoun, and another called the nonpronoun model, which is for all other expressions. During the training phase, we use only training instances whose referring expressions are pronouns for creating the pronoun model, and all other training instances for the non-pronoun model. We group these two models together, selecting which Ling, TaskSp and Gaze stand for the models using the linguistic, task specific and eye gaze features respectively. one to use based on the referring expression. In other words, the pronoun model is selected if a referring expression is a pronoun, and the nonpronoun model otherwise. 
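A minimal sketch of this selection scheme (the separated model introduced in the next sentence): two rankers are trained on disjoint subsets of the training instances, and the choice between them at resolution time depends only on whether the referring expression is a pronoun. The dummy ranker below stands in for the Ranking SVM and is an assumption made purely for illustration.

```python
def train_dummy_ranker(instances):
    """Stand-in for Ranking SVM training: returns a ranker scoring by feature sum."""
    def rank(candidates):  # candidates: dict piece_id -> feature vector
        return sorted(candidates, key=lambda pid: sum(candidates[pid]), reverse=True)
    return rank

class SeparatedModel:
    """Pick the pronoun or non-pronoun ranker depending on the referring expression."""

    def __init__(self, train_ranker=train_dummy_ranker):
        self.train_ranker = train_ranker
        self.pronoun_ranker = None
        self.nonpronoun_ranker = None

    def fit(self, instances):
        # Each instance: (is_pronoun, {piece_id: feature_vector}, gold_piece_id).
        self.pronoun_ranker = self.train_ranker([i for i in instances if i[0]])
        self.nonpronoun_ranker = self.train_ranker([i for i in instances if not i[0]])

    def resolve(self, is_pronoun, candidates):
        ranker = self.pronoun_ranker if is_pronoun else self.nonpronoun_ranker
        return ranker(candidates)[0]  # top-ranked piece id

model = SeparatedModel()
model.fit([(True, {"p1": [1.0], "p2": [0.2]}, "p1"),
           (False, {"p1": [0.1], "p2": [0.8]}, "p2")])
print(model.resolve(True, {"p1": [0.9], "p2": [0.3]}))
```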
We will hereafter refer to the selectional model which alternatively picks between the pronoun and non-pronoun models as the separated model. We also train a third model using all training instances without distinguishing between pronouns and non-pronouns. This model we will refer to as the combined model. the results show that the model using only the linguistic features (Ling) achieved performance comparable to the one using only the eye gaze features (Gaze). Moreover, the model using only the task specific features (TaskSp) obtained performance significantly better than the others. This is because a mouse cursor is the only shared visual stimulus between the operator and solver. Therefore, it becomes the most important clue for pronouns, while the eye fixations of a speaker are not necessarily shared between them. In contrast to pronouns, the non-pronoun model using only the linguistic features (Ling) outperforms the one using either eye gaze features or the task specific features (Gaze and TaskSp). This may be because one linguistic feature (L6) works more effectively than the other features. As shown later (see Table 6 ), in non-pronoun cases, the feature L6, which is the binary value indicating the compatibility of the attributes between two referring expressions, has the highest feature weight, leading to the best performance out of all three models (Ling, Gaze and TaskSp).", "cite_spans": [], "ref_spans": [ { "start": 518, "end": 526, "text": "Table 3.", "ref_id": "TABREF3" }, { "start": 2511, "end": 2518, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Experimental settings", "sec_num": "5.1" }, { "text": "In addition, combining the linguistic and eye gaze features (Ling+Gaze) on non-pronoun reference resolution contributes to increasing performance. This means that these two features work in a complementary manner when a referring expression cannot be judged on a superficial level whether it refers to a discourse referent or a visually focused referent. From these results, we can see that the clues from utterances of participants are also essential for precise reference resolution, while the previous work focusing on eye fixations tends to concentrate on modeling only eye gaze information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "The accuracy results in Table 5 show the performance of the combined and separated models for different settings of feature selection. Table 5 shows that the two models achieved almost the same performance when the linguistic, eye gaze and task specific features are individually used. However, it also shows that the separated model outperforms the combined model when more than two feature types are utilised. This indicates that separating the models with regard to the type of referring expression does make sense even when we employ eye fixations as a clue for recognising referent objects. It also shows that both the combined and separated models obtained the best performance for each model using all the features. In other words, the three types of features work in a complementary manner on multi-modal reference resolution. We next investigated the significance of each feature for the pronoun and non-pronoun models. 
We calculate the weight of a feature f shown in Table 6 according to the following formula.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 31, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 135, "end": 142, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 977, "end": 984, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "weight(f ) = \u2211 x\u2208SV s w x z x (f )", "eq_num": "(1)" } ], "section": "Results", "sec_num": "5.2" }, { "text": "where SVs is a set of the support vectors in a ranker induced by the Ranking SVM algorithm, w x is the weight of the support vector x, z x (f ) is the function that returns 1 if f occurs in x, respectively. Table 6 shows the top 10 features with the highest weights of each model. It demonstrates that in the pronoun model the task specific features have the highest weight, while in the non-pronoun model these features are less significant. As shown in Table 4 , pronouns are strongly related to the situation where the mouse cursor is over a piece, which is consistent with the results reported in .", "cite_spans": [], "ref_spans": [ { "start": 207, "end": 214, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 455, "end": 462, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "In contrast, the highest features in the nonpronoun model are occupied by the eye gaze features, except for L6. This indicates that in the situation where a speaker mentions pieces realised as non-pronouns, the eye fixations become a good clue for identifying the current focus of the speaker, while the task specific features such as the location of the mouse cursor are less significant. In addition, Table 6 also shows that the discourse feature L6 obtains the highest significance. This means that exploiting the linguistic factors together with eye fixations is essential for more accurate reference resolution.", "cite_spans": [], "ref_spans": [ { "start": 403, "end": 410, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "In this paper we focused on investigating the impact of eye fixations on reference resolution compared to using other extra-linguistic information. We conducted an empirical evaluation using referring expressions appearing in collaborative work dialogues from the extended REX-J corpus, synchronised with eye gaze information. We demonstrated that the referents of pronouns are relatively easily identified, as they rely on the visual salience such as is indicated by moving the mouse cursor, and that non-pronouns are strongly related to eye fixations on its referent. In addition, our results also show that combining linguistic, eye gaze and other extra-linguistic factors contribute to increasing the overall performance of identifying all referring expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "There are several future directions for making the multi-modal reference resolution more accurate and robust. First, we need to introduce more task dependent information reflecting the characteristics of each multi-modal task. 
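To make Equation (1) of Section 5.2 concrete, the sketch below sums the support-vector weights w_x over the support vectors in which a feature occurs; representing each support vector as a weight paired with its set of active feature names is an assumption made for illustration, not the internal representation of the Ranking SVM implementation.

```python
def feature_weight(feature, support_vectors):
    """weight(f) = sum of w_x over support vectors x in which f occurs (Equation 1).

    support_vectors: list of (w_x, set_of_active_feature_names)
    """
    return sum(w for w, active in support_vectors if feature in active)

# Hypothetical support vectors over the features named in Tables 2 and 3.
svs = [(1.0, {"T1", "GZ9"}), (-0.5, {"L6"}), (0.5, {"GZ9", "L6"})]
print(feature_weight("GZ9", svs))  # 1.0 + 0.5 = 1.5
print(feature_weight("L6", svs))   # -0.5 + 0.5 = 0.0
```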
In the Tangram puzzle task, for example, once a piece becomes part of a partially constructed shape, the piece tends to be less salient because a solver typically gives an instruction to move a scattered piece to a partially constructed shape. We expect that introducing such task specific clues into the reference resolution model as features will contribute to improving performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Second, in our evaluation we adopted collaborative work dialogues where two participants solve Tangram puzzles. Since all objects (i.e. puzzle pieces) have nearly the same size, this results in explicitly rejecting the factor that a relatively larger object occupying the computer display has higher prominence over smaller objects, which has been considered by Byron (2005) . In order to take such a factor into account, we need further data collection and then to incorporate additional factors into the current reference resolution model. A third possible direction for future work is to examine the relation between linguistic and inten-tional structures, which are discussed in Grosz and Sidner (1986) . In our problem setting, when a solver instructs an operator how to construct a goal shape, a series of utterances by the solver reflects the solver's intentions. As we already mentioned above, objects which a solver wants an operator to manipulate tend to draw a solver's attention, while the other objects (especially, the objects representing the partially constructed shape) are considered less salient. Exploiting the importance of the speaker's intentions also needs to be considered in future work.", "cite_spans": [ { "start": 362, "end": 374, "text": "Byron (2005)", "ref_id": "BIBREF2" }, { "start": 683, "end": 706, "text": "Grosz and Sidner (1986)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "www-nlpir.nist.gov/related projects/muc/ 2 www.itl.nist.gov/iad/mig/tests/ace/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that the first pair was used to adjust the settings of our data collection, so 4 dialogues collected from that pair were not included in the evaluation data set used in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "www.lat-mpi.eu/tools/elan/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "www.cs.cornell.edu/People/tj/svm light/svm rank.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A centering approach to pronouns", "authors": [ { "first": "S", "middle": [ "E" ], "last": "Brennan", "suffix": "" }, { "first": "M", "middle": [ "W" ], "last": "Friedman", "suffix": "" }, { "first": "C", "middle": [], "last": "Pollard", "suffix": "" } ], "year": 1987, "venue": "Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "155--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. E. Brennan, M. W. Friedman, and C. Pollard. 1987. A centering approach to pronouns. 
In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics (ACL), pages 155-162.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Reference resolution in the wild: On-line circumscription of referential domains in a natural, interactive, problem-solving task", "authors": [ { "first": "S", "middle": [], "last": "Brown-Schmidt", "suffix": "" }, { "first": "E", "middle": [], "last": "Campana", "suffix": "" }, { "first": "M", "middle": [ "K" ], "last": "Tanenhaus", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 24th annual meeting of the Cognitive Science Society", "volume": "", "issue": "", "pages": "148--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Brown-Schmidt, E. Campana, and M. K. Tanenhaus. 2002. Reference resolution in the wild: On-line cir- cumscription of referential domains in a natural, in- teractive, problem-solving task. In Proceedings of the 24th annual meeting of the Cognitive Science So- ciety, pages 148-153.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Utilizing visual attention for crossmodal coreference interpretation", "authors": [ { "first": "D", "middle": [ "K" ], "last": "Byron", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Fifth International and Interdisciplinary Conference on Modeling and Using Context", "volume": "", "issue": "", "pages": "83--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. K. Byron. 2005. Utilizing visual attention for cross- modal coreference interpretation. In In Proceedings of Fifth International and Interdisciplinary Confer- ence on Modeling and Using Context, pages 83-96.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Specialized models and ranking for coreference resolution", "authors": [ { "first": "P", "middle": [], "last": "Denis", "suffix": "" }, { "first": "J", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "660--669", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Denis and J. Baldridge. 2008. Specialized models and ranking for coreference resolution. In Proceed- ings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 660-669.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Introduction to the special issue on language-vision interactions", "authors": [ { "first": "F", "middle": [], "last": "Ferreira", "suffix": "" }, { "first": "M", "middle": [ "K" ], "last": "Tanenhaus", "suffix": "" } ], "year": 2007, "venue": "Journal of Memory and Language", "volume": "57", "issue": "", "pages": "455--459", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Ferreira and M. K. Tanenhaus. 2007. Introduction to the special issue on language-vision interactions. Journal of Memory and Language, 57:455-459.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Who is \"you\"? 
combining linguistic and gaze features to resolve secondperson references in dialogue", "authors": [ { "first": "M", "middle": [], "last": "Frampton", "suffix": "" }, { "first": "R", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "P", "middle": [], "last": "Ehlen", "suffix": "" }, { "first": "M", "middle": [], "last": "Christoudias", "suffix": "" }, { "first": "T", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "S", "middle": [], "last": "Peters", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th Conference of the European Chapter", "volume": "", "issue": "", "pages": "273--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Frampton, R. Fern\u00e1ndez, P. Ehlen, M. Christoudias, T. Darrell, and S. Peters. 2009. Who is \"you\"? com- bining linguistic and gaze features to resolve second- person references in dialogue. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 273-281.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A statistical approach to anaphora resolution", "authors": [ { "first": "N", "middle": [], "last": "Ge", "suffix": "" }, { "first": "J", "middle": [], "last": "Hale", "suffix": "" }, { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 6th Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "161--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Ge, J. Hale, and E. Charniak. 1998. A statistical ap- proach to anaphora resolution. In Proceedings of the 6th Workshop on Very Large Corpora, pages 161- 170.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Attention, intentions, and the structure of discourse", "authors": [ { "first": "J", "middle": [], "last": "Barbara", "suffix": "" }, { "first": "Candace", "middle": [ "L" ], "last": "Grosz", "suffix": "" }, { "first": "", "middle": [], "last": "Sidner", "suffix": "" } ], "year": 1986, "venue": "Computational Linguistics", "volume": "12", "issue": "3", "pages": "175--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara J. Grosz and Candace L. Sidner. 1986. Atten- tion, intentions, and the structure of discourse. Com- putational Linguistics, 12(3):175-204.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Centering: A framework for modeling the local coherence of discourse", "authors": [ { "first": "B", "middle": [ "J" ], "last": "Grosz", "suffix": "" }, { "first": "A", "middle": [ "K" ], "last": "Joshi", "suffix": "" }, { "first": "S", "middle": [], "last": "Weinstein", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "2", "pages": "203--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. J. Grosz, A. K. Joshi, and S. Weinstein. 1995. Centering: A framework for modeling the local co- herence of discourse. Computational Linguistics, 21(2):203-226.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Speakers' eye gaze disambiguates referring expressions early during face-to-face conversation", "authors": [ { "first": "J", "middle": [ "E" ], "last": "Hanna", "suffix": "" }, { "first": "S", "middle": [ "E" ], "last": "Brennan", "suffix": "" } ], "year": 2007, "venue": "Journal of Memory and Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. E. Hanna and S. E. Brennan. 2007. Speakers' eye gaze disambiguates referring expressions early dur- ing face-to-face conversation. 
Journal of Memory and Language, 57.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Pragmatic effects on reference resolution in a collaborative task: evidence from eye movements", "authors": [ { "first": "J", "middle": [ "E" ], "last": "Hanna", "suffix": "" }, { "first": "M", "middle": [ "K" ], "last": "Tanenhaus", "suffix": "" } ], "year": 2004, "venue": "Cognitive Science", "volume": "28", "issue": "", "pages": "105--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. E. Hanna and M. K. Tanenhaus. 2004. Pragmatic ef- fects on reference resolution in a collaborative task: evidence from eye movements. Cognitive Science, 28:105-115.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The effects of common ground and perspective on domains of referential interpretation", "authors": [ { "first": "J", "middle": [ "E" ], "last": "Hanna", "suffix": "" }, { "first": "M", "middle": [ "K" ], "last": "Tanenhaus", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Trueswell", "suffix": "" } ], "year": 2003, "venue": "Journal of Memory and Language", "volume": "49", "issue": "1", "pages": "43--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. E. Hanna, M. K. Tanenhaus, and J. C. Trueswell. 2003. The effects of common ground and perspec- tive on domains of referential interpretation. Jour- nal of Memory and Language, 49(1):43-61.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Incorporating contextual cues in trainable models for coreference resolution", "authors": [ { "first": "R", "middle": [], "last": "Iida", "suffix": "" }, { "first": "K", "middle": [], "last": "Inui", "suffix": "" }, { "first": "H", "middle": [], "last": "Takamura", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 10th EACL Workshop on The Computational Treatment of Anaphora", "volume": "", "issue": "", "pages": "23--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Iida, K. Inui, H. Takamura, and Y. Matsumoto. 2003. Incorporating contextual cues in trainable models for coreference resolution. In Proceedings of the 10th EACL Workshop on The Computational Treatment of Anaphora, pages 23-30.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Incorporating extra-linguistic information into reference resolution in collaborative task dialogue", "authors": [ { "first": "R", "middle": [], "last": "Iida", "suffix": "" }, { "first": "S", "middle": [], "last": "Kobayashi", "suffix": "" }, { "first": "T", "middle": [], "last": "Tokunaga", "suffix": "" } ], "year": 2010, "venue": "Proceeding of the 48st Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "1259--1267", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Iida, S. Kobayashi, and T. Tokunaga. 2010. In- corporating extra-linguistic information into refer- ence resolution in collaborative task dialogue. In Proceeding of the 48st Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), pages 1259-1267.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Optimizing search engines using clickthrough data", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD)", "volume": "", "issue": "", "pages": "133--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Joachims. 2002. 
Optimizing search engines using clickthrough data. In Proceedings of the ACM Con- ference on Knowledge Discovery and Data Mining (KDD), pages 133-142.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Eye fixations and cognitive processes", "authors": [ { "first": "M", "middle": [], "last": "Just", "suffix": "" }, { "first": "P", "middle": [ "A" ], "last": "Carpenter", "suffix": "" } ], "year": 1976, "venue": "Cognitive Psychology", "volume": "8", "issue": "", "pages": "441--480", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Just and P. A. Carpenter. 1976. Eye fixations and cognitive processes. Cognitive Psychology, 8:441- 480.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Visual salience and reference resolution in simulated 3-d environments", "authors": [ { "first": "J", "middle": [], "last": "Kelleher", "suffix": "" }, { "first": "J", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2004, "venue": "Artificial Intelligence Review", "volume": "21", "issue": "3", "pages": "253--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Kelleher and J. van Genabith. 2004. Visual salience and reference resolution in simulated 3-d environ- ments. Artificial Intelligence Review, 21(3):253- 267.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Dynamically structuring updating and interrelating representations of visual and linguistic discourse", "authors": [ { "first": "J", "middle": [], "last": "Kelleher", "suffix": "" }, { "first": "F", "middle": [], "last": "Costello", "suffix": "" }, { "first": "J", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2005, "venue": "Artificial Intelligence", "volume": "167", "issue": "", "pages": "62--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Kelleher, F. Costello, and J. van Genabith. 2005. Dy- namically structuring updating and interrelating rep- resentations of visual and linguistic discourse. Arti- ficial Intelligence, 167:62-102.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Attention driven reference resolution in multimodal contexts", "authors": [ { "first": "J", "middle": [ "D" ], "last": "Kelleher", "suffix": "" } ], "year": 2006, "venue": "Artificial Intelligence Review", "volume": "25", "issue": "", "pages": "21--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. D. Kelleher. 2006. Attention driven reference reso- lution in multimodal contexts. Artificial Intelligence Review, 25:21-35.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "An algorithm for pronominal anaphora resolution", "authors": [ { "first": "S", "middle": [], "last": "Lappin", "suffix": "" }, { "first": "H", "middle": [ "J" ], "last": "Leass", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "4", "pages": "535--561", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Lappin and H. J. Leass. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4):535-561.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Using decision trees for coreference resolution", "authors": [ { "first": "J", "middle": [ "F" ], "last": "Mccarthy", "suffix": "" }, { "first": "W", "middle": [ "G" ], "last": "Lehnert", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 14th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1050--1055", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. F. McCarthy and W. G. Lehnert. 
1995. Using deci- sion trees for coreference resolution. In Proceedings of the 14th International Joint Conference on Artifi- cial Intelligence, pages 1050-1055.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "When conceptual pacts are broken: Partner-specific effects on the comprehension of referring expressions", "authors": [ { "first": "C", "middle": [], "last": "Metzing", "suffix": "" }, { "first": "S", "middle": [ "E" ], "last": "Brennan", "suffix": "" } ], "year": 2003, "venue": "Journal of Memory and Language", "volume": "49", "issue": "", "pages": "201--213", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Metzing and S. E. Brennan. 2003. When concep- tual pacts are broken: Partner-specific effects on the comprehension of referring expressions. Journal of Memory and Language, 49:201-213.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Anaphora Resolution. Studies in Language and Linguistics", "authors": [ { "first": "R", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Mitkov. 2002. Anaphora Resolution. Studies in Language and Linguistics. Pearson Education.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Improving machine learning approaches to coreference resolution", "authors": [ { "first": "V", "middle": [], "last": "Ng", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Ng and C. Cardie. 2002. Improving machine learn- ing approaches to coreference resolution. In Pro- ceedings of the 40th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), pages 104-111.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "What's in a gaze? the role of eye-gaze in reference resolution in multimodal conversational interface", "authors": [ { "first": "Z", "middle": [], "last": "Prasov", "suffix": "" }, { "first": "J", "middle": [ "Y" ], "last": "Chai", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 13th international conference on Intelligent user interfaces", "volume": "", "issue": "", "pages": "20--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Prasov and J. Y. Chai. 2008. What's in a gaze? the role of eye-gaze in reference resolution in mul- timodal conversational interface. In Proceedings of the 13th international conference on Intelligent user interfaces, pages 20-29.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Fusing eye gaze with speech recognition hypotheses to resolve exophoric references in situated dialogue", "authors": [ { "first": "Z", "middle": [], "last": "Prasov", "suffix": "" }, { "first": "J", "middle": [ "Y" ], "last": "Chai", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "471--481", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Prasov and J. Y. Chai. 2010. Fusing eye gaze with speech recognition hypotheses to resolve exophoric references in situated dialogue. 
In Proceedings of the 2010 Conference on Empirical Methods in Nat- ural Language Processing, pages 471-481.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Eye movements in language and cognition: A brief introduction, methods in cognitive linguistics", "authors": [ { "first": "D", "middle": [ "C" ], "last": "Richardson", "suffix": "" }, { "first": "R", "middle": [], "last": "Dale", "suffix": "" }, { "first": "M", "middle": [ "J" ], "last": "Spivey", "suffix": "" } ], "year": 2007, "venue": "Methods in Cognitive Linguistics", "volume": "", "issue": "", "pages": "323--344", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. C. Richardson, R. Dale, and M. J. Spivey. 2007. Eye movements in language and cognition: A brief introduction, methods in cognitive linguistics. In M. Gonzalez-Marquez, I. Mittelberg, S. Coulson, and M. J. Spivey, editors, Methods in Cognitive Lin- guistics, pages 323-344. John Benjamins.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Automated eye-movement protocol analysis", "authors": [ { "first": "D", "middle": [ "D" ], "last": "Salvucci", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Anderson", "suffix": "" } ], "year": 2001, "venue": "", "volume": "16", "issue": "", "pages": "39--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. D. Salvucci and J. R. Anderson. 2001. Automated eye-movement protocol analysis. Human-Computer Interaction, 16:39-86.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Visual salience and reference resolution in situated dialogues: A corpus-based evaluation", "authors": [ { "first": "N", "middle": [], "last": "Sch\u00fctte", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Kelleher", "suffix": "" }, { "first": "B", "middle": [], "last": "Mac Namee", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the AAAI Symposium on Dialog with Robots", "volume": "", "issue": "", "pages": "11--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Sch\u00fctte, J. D. Kelleher, and B. Mac Namee. 2010. Visual salience and reference resolution in situated dialogues: A corpus-based evaluation. In Pro- ceedings of the AAAI Symposium on Dialog with Robots, Arlington, Virginia, USA. 11th -13th Nov 2010.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A machine learning approach to coreference resolution of noun phrases", "authors": [ { "first": "W", "middle": [ "M" ], "last": "Soon", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Ng", "suffix": "" }, { "first": "D", "middle": [ "C Y" ], "last": "Lim", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "4", "pages": "521--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. M. Soon, H. T. Ng, and D. C. Y. Lim. 2001. A machine learning approach to coreference resolu- tion of noun phrases. 
Computational Linguistics, 27(4):521-544.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "REX-J: Japanese referring expression corpus of situated dialogs", "authors": [ { "first": "P", "middle": [], "last": "Spanger", "suffix": "" }, { "first": "M", "middle": [], "last": "Yasuhara", "suffix": "" }, { "first": "R", "middle": [], "last": "Iida", "suffix": "" }, { "first": "T", "middle": [], "last": "Tokunaga", "suffix": "" }, { "first": "A", "middle": [], "last": "Terai", "suffix": "" }, { "first": "N", "middle": [], "last": "Kuriyama", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Spanger, M. Yasuhara, R. Iida, T. Tokunaga, A. Terai, and N. Kuriyama. 2010. REX-J: Japanese referring expression corpus of situated dialogs. Lan- guage Resources & Evaluation.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Eye movements and spoken language comprehension: Effects of visual context on syntactic ambiguity resolution", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Spivey", "suffix": "" }, { "first": "M", "middle": [ "K" ], "last": "Tanenhaus", "suffix": "" }, { "first": "K", "middle": [ "M" ], "last": "Eberhard", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Sedivy", "suffix": "" } ], "year": 2002, "venue": "Cognitive Psychology", "volume": "45", "issue": "4", "pages": "447--481", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. J. Spivey, M. K. Tanenhaus, K. M. Eberhard, and J. C. Sedivy. 2002. Eye movements and spoken lan- guage comprehension: Effects of visual context on syntactic ambiguity resolution. Cognitive Psychol- ogy, 45(4):447-481.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Scare: A situated corpus with annotated referring expressions", "authors": [ { "first": "L", "middle": [], "last": "Stoia", "suffix": "" }, { "first": "D", "middle": [ "M" ], "last": "Shockley", "suffix": "" }, { "first": "D", "middle": [ "K" ], "last": "Byron", "suffix": "" }, { "first": "E", "middle": [], "last": "Fosler-Lussier", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Stoia, D. M. Shockley, D. K. Byron, and E. Fosler- Lussier. 2008. Scare: A situated corpus with an- notated referring expressions. In Proceedings of the Sixth International Conference on Language Re- sources and Evaluation (LREC 2008).", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Functional centering", "authors": [ { "first": "M", "middle": [], "last": "Strube", "suffix": "" }, { "first": "U", "middle": [], "last": "Hahn", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "270--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Strube and U. Hahn. 1996. Functional centering. 
In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL), pages 270-277.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Integration of visual and linguistic information in spoken language comprehension", "authors": [ { "first": "M", "middle": [ "K" ], "last": "Tanenhaus", "suffix": "" }, { "first": "M", "middle": [ "J" ], "last": "Spivey-Knowlton", "suffix": "" }, { "first": "K", "middle": [ "M" ], "last": "Eberhard", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Sedivy", "suffix": "" } ], "year": 1995, "venue": "Science", "volume": "268", "issue": "5217", "pages": "1632--1634", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. K. Tanenhaus, M. J. Spivey-Knowlton, K. M. Eber- hard, and J. C. Sedivy. 1995. Integration of visual and linguistic information in spoken language com- prehension. Science, 268(5217):1632-1634.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Eye movements and lexical access in spoken-language comprehension: Evaluating a linking hypothesis between fixations and linguistic processing", "authors": [ { "first": "M", "middle": [ "K" ], "last": "Tanenhaus", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Magnuson", "suffix": "" }, { "first": "D", "middle": [], "last": "Dahan", "suffix": "" }, { "first": "C", "middle": [], "last": "Chambers", "suffix": "" } ], "year": 2000, "venue": "Journal of Psycholinguistic Research", "volume": "29", "issue": "6", "pages": "557--580", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. K. Tanenhaus, J. S. Magnuson, D. Dahan, and C. Chambers. 2000. Eye movements and lexical ac- cess in spoken-language comprehension: Evaluating a linking hypothesis between fixations and linguistic processing. Journal of Psycholinguistic Research, 29(6):557-580.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Statistical Learning Theory. Adaptive and Learning Systems for Signal Processing Communications, and control", "authors": [ { "first": "V", "middle": [ "N" ], "last": "Vapnik", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. N. Vapnik. 1998. Statistical Learning Theory. Adaptive and Learning Systems for Signal Process- ing Communications, and control. John Wiley & Sons.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Coreference resolution using competition learning approach", "authors": [ { "first": "X", "middle": [], "last": "Yang", "suffix": "" }, { "first": "G", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "J", "middle": [], "last": "Su", "suffix": "" }, { "first": "C", "middle": [ "L" ], "last": "Tan", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "176--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Yang, G. Zhou, J. Su, and C. L. Tan. 2003. Coreference resolution using competition learning approach. In Proceedings of the 41st Annual Meet- ing of the Association for Computational Linguistics (ACL), pages 176-183.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "'s reference resolution model.", "type_str": "figure" }, "TABREF1": { "text": "Referring expressions in the extended REX-J corpus", "content": "", "num": null, "type_str": "table", "html": null }, "TABREF2": { "text": "", "content": "
Eye gaze features
", "num": null, "type_str": "table", "html": null }, "TABREF3": { "text": "", "content": "
Feature set
", "num": null, "type_str": "table", "html": null }, "TABREF5": { "text": "", "content": "", "num": null, "type_str": "table", "html": null }, "TABREF6": { "text": "", "content": "
shows the accuracy results of our empirical evaluation separately evaluating pronouns and non-pronouns. In reference resolution of pronouns
", "num": null, "type_str": "table", "html": null }, "TABREF7": { "text": "Overall results (accuracy)", "content": "", "num": null, "type_str": "table", "html": null }, "TABREF9": { "text": "10 highest weights of the features in each model", "content": "
", "num": null, "type_str": "table", "html": null } } } }