{ "paper_id": "I13-1026", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:15:35.812676Z" }, "title": "Evaluation of the Scusi? Spoken Language Interpretation System -A Case Study", "authors": [ { "first": "Thomas", "middle": [], "last": "Kleinbauer", "suffix": "", "affiliation": { "laboratory": "", "institution": "Monash University Clayton", "location": { "postCode": "3800", "region": "Victoria", "country": "Australia" } }, "email": "" }, { "first": "Ingrid", "middle": [], "last": "Zukerman", "suffix": "", "affiliation": { "laboratory": "", "institution": "Monash University Clayton", "location": { "postCode": "3800", "region": "Victoria", "country": "Australia" } }, "email": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "", "affiliation": { "laboratory": "", "institution": "Monash University Clayton", "location": { "postCode": "3800", "region": "Victoria", "country": "Australia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a performance evaluation framework for Spoken Language Understanding (SLU) modules, focusing on three elements: (1) characterization of spoken utterances, (2) experimental design, and (3) quantitative evaluation metrics. We then describe the application of our framework to Scusi?our SLU system that focuses on referring expressions.", "pdf_parse": { "paper_id": "I13-1026", "_pdf_hash": "", "abstract": [ { "text": "We present a performance evaluation framework for Spoken Language Understanding (SLU) modules, focusing on three elements: (1) characterization of spoken utterances, (2) experimental design, and (3) quantitative evaluation metrics. We then describe the application of our framework to Scusi?our SLU system that focuses on referring expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We present a performance evaluation framework for Spoken Language Understanding (SLU) modules, and describe its application to the evaluation of Scusi? -an SLU system that focuses on the interpretation of descriptions of household objects (Zukerman et al., 2008) . Our contributions pertain to (1) the characterization of spoken utterances, (2) experimental design, and (3) quantitative evaluation metrics for an N-best list. Characterization of spoken utterances. According to (Jokinen and McTear, 2010) , \"in diagnostic-type evaluations, a representative test suite is used so as to produce a system's performance profile with respect to a taxonomy of possible inputs\". In addition, one of the typical aims of an evaluation is to identify components that can be improved (Paek, 2001 ). These two factors in combination motivate a characterization of input utterances along two dimensions: accuracy and knowledge (Section 4).", "cite_spans": [ { "start": 239, "end": 262, "text": "(Zukerman et al., 2008)", "ref_id": "BIBREF21" }, { "start": 478, "end": 504, "text": "(Jokinen and McTear, 2010)", "ref_id": "BIBREF10" }, { "start": 773, "end": 784, "text": "(Paek, 2001", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Accuracy indicates whether an utterance describes an intended object precisely and unambiguously. For instance, when intending a blue plate, \"the blue plate\" is an accurate description if there is only one such plate in the room, while \"the green plate\" is inaccurate. \u2022 Knowledge indicates how much the SLU module knows about different factors of the interpretation process, e.g., vocabulary or geometric relations. For instance, \"CPU\" in \"the CPU under the desk\" * 1 is Out of Vocabulary (OOV) for Scusi?, and the \"of\" in \"the picture of a face\" * is an unknown relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The frequency of different values for these dimensions influence the requirements from an SLU system, and the components that necessitate additional resources, e.g., vocabulary extension.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experimental design. It is generally accepted that an SLU system should exhibit reasonable behaviour by human standards. At present, in experiments that evaluate an SLU system's performance, people speak to the system, and the accuracy of the system's interpretation is assessed. However, this mode of evaluation, which we call Generative, does not address whether a system's interpretations are plausible (even if they are wrong). Thus, in addition to a Generative experiment, we offer an Interpretive experiment. Both experiments are briefly described below. Their implementation in our SLU system is described in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 In the Interpretive experiment, trial subjects and the SLU system are addressees, and are given utterances generated by a third party. The SLU system's confidence in its interpretations is then compared with the preferences of the participants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 In the Generative experiment, trial subjects are speakers, generating free-form utterances, and the SLU module and expert annotators are addressees. Gold standard interpretations for these descriptions are produced by annotators on the basis of their understanding of what was said, e.g., an ambiguous utterance has more than one correct interpretation. The SLU system's performance is evaluated on the basis of the rank of the correct interpretations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These two experiments, in combination with our characterization of spoken utterances, enable the comparison of system and human interpretations under different conditions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Quantitative evaluation metrics. Automatic Speech Recognizers (ASRs) and parsers often return N-best hypotheses to SLU modules, while many SLU systems return only one interpretation (DeVault et al., 2009; Jokinen and McTear, 2010; Black et al., 2011) . However, maintaining N-best interpretations at the semantic and pragmatic level enables a Dialogue Manager (DM) to examine more than one interpretation, and discover features that guide appropriate responses and support error recovery. This ranking requirement, together with our experimental design, motivates the following metrics (Section 6).", "cite_spans": [ { "start": 182, "end": 204, "text": "(DeVault et al., 2009;", "ref_id": "BIBREF4" }, { "start": 205, "end": 230, "text": "Jokinen and McTear, 2010;", "ref_id": "BIBREF10" }, { "start": 231, "end": 250, "text": "Black et al., 2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 For Interpretive experiments, we propose correlation measures, such as Spearman rank or Pearson correlation coefficient, to compare participants' ratings of candidate interpretations with the scores given by an SLU system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 For Generative experiments, we provide a broad view of an SLU system's performance by counting the utterances that it CantRepresent, and among the remaining utterances, counting those for which a correct interpretation was NotFound. We obtain a finer-grained view using fractional variants of the Information Retrieval (IR) metrics Recall (Salton and McGill, 1983) and Normalized Discounted Cumulative Gain (NDCG) (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) , which handle equiprobable interpretations in an N-best list. We also compute @K versions of these metrics to represent the relation between rank and performance.", "cite_spans": [ { "start": 341, "end": 366, "text": "(Salton and McGill, 1983)", "ref_id": "BIBREF17" }, { "start": 416, "end": 447, "text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the next section, we discuss related work, and in Section 3, we outline our system Scusi?. In Section 4, we present our characterization of descriptions, followed by our experimental design and evaluation metrics. The results obtained by applying our framework to Scusi? are described in Section 7, followed by concluding remarks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As mentioned above, our contributions pertain to the characterization of spoken utterances, experimental design, and quantitative metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Characterization of spoken utterances. Most evaluations of SLU systems characterize input utterances in terms of ASR Word Error Rate (WER), e.g., (Hirschman, 1998; Black et al., 2011) . M\u00f6ller (2008) provides a comprehensive collection of interaction parameters for evaluating telephone-based spoken dialogue services, which pertain to different aspects of an interaction, viz communication, cooperativity, task success, and spoken input. Our characterization of spoken utterances along the accuracy and knowledge dimensions is related to M\u00f6ller's task success category. However, in our case, these features pertain to the context, rather than the task. In addition, our characterization is linked to system development effort, i.e., how much effort should be invested to address utterances with certain characteristics; and to evaluation metrics, in the sense that the assessment of an interpretation depends on the accuracy of an utterance, and takes into account the capabilities of an SLU system. Experimental design. Evaluations performed to date are based on Generative experiments (Hirschman, 1998; Gandrabur et al., 2006; Thomson et al., 2008; DeVault et al., 2009; Black et al., 2011) , which focus on correct or partially correct responses. They do not consider human interpretations for utterances with diverse characteristics, as done in our Interpretive trials.", "cite_spans": [ { "start": 146, "end": 163, "text": "(Hirschman, 1998;", "ref_id": "BIBREF8" }, { "start": 164, "end": 183, "text": "Black et al., 2011)", "ref_id": "BIBREF0" }, { "start": 186, "end": 199, "text": "M\u00f6ller (2008)", "ref_id": "BIBREF15" }, { "start": 1088, "end": 1105, "text": "(Hirschman, 1998;", "ref_id": "BIBREF8" }, { "start": 1106, "end": 1129, "text": "Gandrabur et al., 2006;", "ref_id": "BIBREF6" }, { "start": 1130, "end": 1151, "text": "Thomson et al., 2008;", "ref_id": "BIBREF20" }, { "start": 1152, "end": 1173, "text": "DeVault et al., 2009;", "ref_id": "BIBREF4" }, { "start": 1174, "end": 1193, "text": "Black et al., 2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Quantitative evaluation metrics. Most SLU system evaluations use IR-based metrics, such as recall, precision and accuracy, to compare the components of one interpretation of a perfect request to the components of a reference interpretation (Hirschman, 1998; M\u00f6ller, 2008; DeVault et al., 2009; Jokinen and McTear, 2010) . In contrast, we consider the rank of completely correct interpretations of perfect requests and partially correct interpretations of imperfect requests in an N-best list. Thomson et al. (2008) analyzed metrics for N-best lists, such as Receiver Operator Characteristic, Weighted Semantic Error Rate and Normalized Cross Entropy (Gandrabur et al., 2006) ; and offered the Item Level Cross Entropy (ICE) metric, which combines the confidence score and correctness of each of N-best interpretations. In this paper, we adapt IR-based metrics to handle equiprobable interpretations in an N-best list, and offer the CantRepresent and NotFound metrics to give a broad view of system performance. In the future, we intend to incorporate confidence/accuracy metrics, such ICE. ", "cite_spans": [ { "start": 240, "end": 257, "text": "(Hirschman, 1998;", "ref_id": "BIBREF8" }, { "start": 258, "end": 271, "text": "M\u00f6ller, 2008;", "ref_id": "BIBREF15" }, { "start": 272, "end": 293, "text": "DeVault et al., 2009;", "ref_id": "BIBREF4" }, { "start": 294, "end": 319, "text": "Jokinen and McTear, 2010)", "ref_id": "BIBREF10" }, { "start": 493, "end": 514, "text": "Thomson et al. (2008)", "ref_id": "BIBREF20" }, { "start": 650, "end": 674, "text": "(Gandrabur et al., 2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Scusi? is a system that implements an anytime, probabilistic mechanism for the interpretation of spoken utterances, focusing on a household context. It has four processing stages, where each stage produces multiple outputs for a given input, early processing stages may be probabilistically revisited, and only the most promising options at each stage are explored further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Scusi? System", "sec_num": "3" }, { "text": "The system takes as input a speech signal, and uses an ASR (Microsoft Speech SDK 6.1) to produce candidate texts. Each text is assigned a probability given the speech wave. The second stage applies Charniak's probabilistic parser (bllip. cs.brown.edu/resources.shtml#software) to syntactically analyze the texts in order of their probability, yielding at most 50 different parse trees per text. The third stage applies mapping rules to the parse trees to generate Uninstantiated Concept Graphs (UCGs) that represent the semantics of the utterance (Sowa, 1984) . The final stage produces Instantiated Concept Graphs (ICGs) that match the concepts and relations in a UCG with objects and relations within the current context (e.g., a room), and estimates how well each instantiation matches its \"parent\" UCG and the context. For example, Figure 1 (a) shows one of the UCGs returned for the description \"the blue mug on the large table\", and Figure 1 (b) displays one of the ICGs generated for this UCG. Note that the concepts in the UCG have generic names, e.g., mug, while the ICG contains specific objects, e.g., mug03 or cup01, which are offered as candidate matches for lex=mug, color=blue.", "cite_spans": [ { "start": 547, "end": 559, "text": "(Sowa, 1984)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 836, "end": 844, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 939, "end": 947, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Scusi? System", "sec_num": "3" }, { "text": "Scusi? aims to understand requests for actions involving physical objects (Zukerman et al., 2008) . Focusing on object descriptions, Scusi? has a vocabulary of lexical items pertaining to objects, colours, sizes and positions. For object names, this vocabulary is expanded with synonyms and near synonyms obtained from WordNet (Fellbaum, 1998) and word similarity metrics from (Leacock and Chodorow, 1998) . However, this vocabulary is not imposed on the ASR, as we do not want Scusi? to hear only what it wants to hear. In addition, Scusi? was designed to understand the colour and size of objects; the topological positional relations on, in, near and at, optionally combined with center, corner, edge and end, e.g., \"the mug near the center of the table\"; and the projective positional relations in front of, behind, to the left/right, above and under (topological and projective relations are discussed in detail in (Coventry and Garrod, 2004; Kelleher and Costello, 2008) ). By \"understanding a description\" we mean mapping attributes and positions to values in the physical world. For instance, the CIE colour metric (CIE, 1995) is employed to understand colours, Gaussian functions are used to represent sizes of things compared to the size of an average exemplar, and spatial geometry is used to understand positional relations.", "cite_spans": [ { "start": 74, "end": 97, "text": "(Zukerman et al., 2008)", "ref_id": "BIBREF21" }, { "start": 327, "end": 343, "text": "(Fellbaum, 1998)", "ref_id": null }, { "start": 377, "end": 405, "text": "(Leacock and Chodorow, 1998)", "ref_id": "BIBREF13" }, { "start": 920, "end": 947, "text": "(Coventry and Garrod, 2004;", "ref_id": "BIBREF2" }, { "start": 948, "end": 976, "text": "Kelleher and Costello, 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Scusi?'s capabilities", "sec_num": "3.1" }, { "text": "At present, Scusi? does not understand (1) OOV words, e.g., \"the opposite wall\" * ; (2) more than one meaning of polysemous positional relations, e.g., \"to the left of the table\" * as \"to the left and on the table\" as well as \"to the left and next to the table\"; (3) positional relations that are complex, e.g., \"in the left near corner of the table\" * , or don't have a landmark, e.g., \"the ball in the center\" * ; and (4) descriptive prepositional phrases starting with \"of\" or \"with\", e.g., \"the picture of the face\" * and \"the plant with the leaves\" * . However, contextual information sometimes enables the system to overcome OOV words. For example, Scusi? may return the correct ICG for \"the round blue plate on the table\" at a good rank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scusi?'s capabilities", "sec_num": "3.1" }, { "text": "Clearly, these problems can be solved by programming additional capabilities into our system. However, people will always say things that an SLU system cannot understand. Our evaluation framework can help distinguish between situations in which it is worth investing additional development effort, and situations for which other coping mechanisms should be developed, e.g., asking a clarification question or ignoring the unknown portions of an utterance (while being aware of the impact of this action on comprehension).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scusi?'s capabilities", "sec_num": "3.1" }, { "text": "The WER of the ASR used by Scusi? is 30% when trained on an open vocabulary in combination with a small language model for our corpus. This WER is consistent with the WER obtained in the 2010 Spoken Dialogue Challenge (Black et al., 2011) . In addition to the obvious problem of misrecognized entities or actions, which yield OOV words, ASR errors often produce ungrammatical sentences that cannot be successfully parsed. For instance, one of the alternatives produced by the ASR for \"the blue plate at the front of the table\" * is \"to build played at the front door the table\". Further, disfluencies are often mis-heard by the ASR or cause it to return broken sentences.", "cite_spans": [ { "start": 218, "end": 238, "text": "(Black et al., 2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "ASR capabilities", "sec_num": "3.2" }, { "text": "When describing an object or action, speakers may employ a wrong lexical item, or use a wrong attribute. For instance, \"the green couch\" * was described when intending a green bookcase. In addition, when describing objects, speakers may under-specify them, e.g., ask for \"the pink mug\" when there are several such mugs; provide inconsistent specifications that do not match any object perfectly, yielding no candidates or several partial candidates, e.g., request \"the large blue mug\" when there is a large pink mug and a small blue mug; omit a landmark, e.g., \"the ball in the center\" * ; or employ words or constructs unknown to an SLU module, e.g., \"the exact center\" * . 2 These situations, which affect the performance of an SLU system, are characterized along the following two dimensions: accuracy and knowledge.", "cite_spans": [ { "start": 675, "end": 676, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Characterization of Spoken Utterances", "sec_num": "4" }, { "text": "\u2022 Accuracy -We distinguish between Perfect and Imperfect utterances. An utterance is perfect if it matches at least one object or action in the current context in every respect. In this case, an SLU module should produce one or more interpretations that match perfectly the utterance. If every object or action in the context mismatches an utterance at least in one aspect, the utterance is imperfect. In this case, we consider reasonable interpretations (that match the request well but not perfectly) to be the Gold standard. The number of Gold interpretations is an attribute of accuracy: an utterance may match (perfectly or imperfectly) 0, 1 or more than 1 interpretation. \u2022 Knowledge -If all the words and syntactic constructs in an utterance are understood by an SLU module (Section 3.1), the utterance is deemed known, otherwise, it is unknown.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Characterization of Spoken Utterances", "sec_num": "4" }, { "text": "To illustrate these concepts, a description that contains only known words, and matches two objects in the context in every respect, is classified as known-perfect>1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Characterization of Spoken Utterances", "sec_num": "4" }, { "text": "We devised two experiments to assess an SLU system's performance: Interpretive, where the participants and the SLU system are the addressees (Section 5.1), and Generative, where the participants are the speakers and the SLU module is the addressee (Section 5.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "5" }, { "text": "In both experiments, we evaluate the performance of an SLU system on the basis of complete interpretations of an utterance, which in Scusi?'s case is a description. For example, given \"the pink ball near the table\", all the elements of an ICG must try to match this description and the context. That is, if ball01 is pink, but it is on table02, the ICG ball01-location_near-table02 will have a good description match but a bad reality match, while the opposite happens for ICG ball01-location_on-table02.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "5" }, { "text": "This experiment tests whether Scusi?'s understanding matches the understanding of a relatively large population under different accuracy conditions. We focus on imperfect and ambiguous descriptions, as they pose a greater challenge to people than perfect descriptions. The trial consists of a Web-based survey where participants were given a picture of a room and 9 descriptions generated by the authors (Figure 2 ). For each description, participants were asked to rate each of 20 labeled objects based on how well they match the description, where a rating of 10 denotes a \"perfect match\" and a rating of 0 denotes \"no match\".", "cite_spans": [], "ref_spans": [ { "start": 404, "end": 413, "text": "(Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Interpretive trial", "sec_num": "5.1" }, { "text": "Our Web survey was done by 47 participants, resulting in 47 \u00d7 20 scores for each description. These scores were averaged across participants, yielding a single score for each labeled object for each of our 9 descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpretive trial", "sec_num": "5.1" }, { "text": "In this experiment, trial subjects generated freeform, spoken descriptions to identify three designated objects in each of four scenarios. The scenarios, which were designed to test different functionalities of Scusi?, contain between 8 and 16 objects (Figure 3 shows two scenarios). The annotators provided the Gold standard interpretations for a description on the basis of what they understood (rather than using the designated referents). Each annotator handled half of the descriptions, and the other annotator verified the annotations. Disagreements were resolved by consensus.", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 261, "text": "(Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Generative trial", "sec_num": "5.2" }, { "text": "Our study had 26 participants, who generated a total of 432 spoken descriptions (average length was 10 words, median 8, and the longest description had 21 words). We manually filtered out 32 descriptions that were broken up by the ASR due to pauses made by the speakers, and 105 descriptions that Scusi? CantRepresent (Section 6.2). Two sets of files were submitted to Scusi?: a set containing textual transcriptions of the remaining 295 descriptions, and a set containing textual alternatives produced by the ASR for each of these descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "5.2" }, { "text": "This experiment enables us to observe the frequencies of descriptions with different characteristics (Section 4), and determine their influence on performance, as well as the effect of ASR versus textual input. Table 3 displays the frequencies of the four accuracy classes of descriptions (perfect =1 and >1 and imperfect =1 and >1), and two knowledge classes (known and unknown-OOV) (Section 4). For instance, the top row shows that 197 descriptions are known-perfect=1 (Col-umn 2), and 25 descriptions are unknown-OOV (Column 3). 18 unknown-non-OOV descriptions were omitted from Table 3 . These descriptions have Gold ICGs, but contain word combinations that are not known to Scusi?, e.g., \"on top of\" and \"at the front of\". Note the low frequencies of three of the unknown-OOV categories, and of the imperfect>1 classes. The latter suggests that, unlike our Interpretive trial, people rarely generate descriptions that are both ambiguous and inaccurate. Table 3 also displays the results obtained for the performance metrics NotFound@K, FRecall@K and NDCG@K (Section 6) for each accuracy-knowledge combination and for Text and ASR input; the results are described in Section 7.", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 218, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 582, "end": 589, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 958, "end": 965, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Generative trial", "sec_num": "5.2" }, { "text": "We first consider the Interpretive trial followed by the Generative trial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "6" }, { "text": "Scusi?'s understanding of each description was compared with that of our trial subjects by calculating the Spearman rank correlation coefficient and Pearson correlation coefficient between the average of the scores of the subjects' ratings for each object, and the probability assigned by Scusi? to the top-ranked correct interpretation with the corresponding head object, e.g., plate16-near-ball09 for the first description in Figure 2(b) . The results for the Spearman rank and Pearson correlation coefficient appear in Section 7.1.", "cite_spans": [ { "start": 379, "end": 398, "text": "plate16-near-ball09", "ref_id": null } ], "ref_spans": [ { "start": 428, "end": 439, "text": "Figure 2(b)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Interpretive trial", "sec_num": "6.1" }, { "text": "We first describe our broad metrics, followed by the fine-grained metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "CantRepresent counts the number of utterances that an SLU system cannot represent, which are a subset of the unknown utterances, and are excluded from the rest of the evaluation. Table 1 displays the frequencies of such descriptions and their causes (11 descriptions had more than one problem). As shown in Table 1 , complex positional relations, e.g., \"the left front corner\" * , account for most of the problems.", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 186, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 307, "end": 314, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "NotFound@K counts the number of representable utterances for which no correct interpretation was found within rank K. NotFound@\u221e considers all the interpretations returned by an SLU system. It is worth noting that NotFound utterances are included when calculating the following metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "Precision@K and Recall@K. The @K versions of precision and recall evaluate performance for different cut-off ranks K. Precision@K is simply the number of correct interpretations at rank K or better divided by K. Recall@K is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "Recall@K(d) = |CF(d) \u2229 {I 1 , . . . , I K }| |C(d)| ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "where C(d) is the set of correct interpretations for utterance d, CF(d) is the set of correct interpretations found by an SLU module, and I j denotes an interpretation with rank j. Contrary to IR settings, where typically there are many relevant documents, in language understanding situations, there is often one correct interpretation for an utterance (Table 3 ). If this interpretation is ranked close to the top, Precision@K will be constantly reduced as K increases. Hence, we eschew this measure when evaluating the performance of an SLU system.", "cite_spans": [], "ref_spans": [ { "start": 354, "end": 362, "text": "(Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "An SLU module may return several equiprobable interpretations, some of which may be incorrect. The relative ranking of these interpretations is arbitrary, leading to non-deterministic values for Recall@K -a problem that is exacerbated when K falls within a set of such equiprobable interpretations. This motivates a variant of Recall@K, denoted FRecall@K (Fractional Recall), that allows us to represent the arbitrariness of the ranked order of equiprobable interpretations, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "FRecall@K(d) = K j=1 fc(I j ) |C(d)| ,", "eq_num": "(1)" } ], "section": "Generative trial", "sec_num": "6.2" }, { "text": "where fc is the fraction of correct interpretations among those with the same probability as I j (this is a proxy for the probability that I j is correct):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "fc(I j ) = c j h j \u2212 l j + 1 ,", "eq_num": "(2)" } ], "section": "Generative trial", "sec_num": "6.2" }, { "text": "where l j is the lowest rank of all the interpretations with the same probability as I j , h j the highest rank, and c j the number of correct interpretations between rank l j and h j inclusively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "Normalized Discounted Cumulative Gain (NDCG)@K. A shortcoming of Recall@K is that it considers the rank of an interpretation only in a coarse way (at the level of K). A finer-grained account of rank is provided by NDCG@K (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) , which discounts interpretations with higher (worse) ranks. DCG@K allows the definition of a relevance measure for a result, and divides this measure by a logarithmic penalty that reflects the rank of the result. Using fc(I j ) as a measure of the relevance of interpretation I j , we obtain", "cite_spans": [ { "start": 221, "end": 252, "text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "DCG@K(d) = fc(I 1 ) + K j=2 fc(I j ) log 2 j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "This score is normalized to the [0, 1] range by dividing it by the score of an ideal answer where |C(d)| correct interpretations are ranked in the first |C(d)| places, yielding", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "NDCG@K(d) = DCG@K(d) 1 + min{|C(d)|,N } j=2 1 log 2 j . (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "Note that FRecall@K is computed in relation to the number of correct interpretations, while NDCG@K considers the minimum of K and this number (Equations 1 and 3 respectively). plate16 -(near)\u2192 ball9 plate15 -(near)\u2192 ball9 plate28 -(near)\u2192 ball20 2. box17 box17 box19 carpet23 3. plate16 mug26 plate16 mug12 4. bookcase14 bookcase10 -(under)\u2192 portrait18 bookcase10 -(under)\u2192 portrait8 bookcase14 -(instr_r)\u2192 portrait8 5. mug26 mug26 -(near)\u2192 vase6 mug12 -(near)\u2192 vase6 mug13 -(near)\u2192 vase6 6. plate28 plate16/plate28 plate28/plate16 plate15 7. bookcase10 bookcase14 -(near)\u2192 chest7 bookcase10 -(near)\u2192 chest7 bookcase14 -(recipient_r)\u2192 chest7 8. ball9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "ball9 -(on)\u2192 table1 ball20 -(agent_r)\u2192 table1 ball20 -(action_r)\u2192 table1 9. portrait18 portrait18 -(above)\u2192 bookcase10 portrait8 -(above)\u2192 bookcase10 portrait27 -(instr_r)\u2192 bookcase14", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative trial", "sec_num": "6.2" }, { "text": "We first discuss the results of our Interpretive trials followed by those of our Generative trials. Table 2 compares the results of the Web survey with Scusi?'s performance for the Interpretive trials. Column 2 indicates the object preferred by the trial subjects, and Columns 3-5 show the topthree interpretations preferred by Scusi? (I 1 -I 3 ). Matches between the system's output and the averaged participants' ratings are boldfaced.", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 107, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "As seen in Table 2 , Scusi?'s ratings generally match those of our participants, achieving a strong Pearson correlation of 0.77, and a weaker Spearman correlation of 0.63. This is due to the fact that implausible interpretations get a score of 0 from Scusi?, while some people still choose them, thus yielding different ranks for them.", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 18, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Interpretive Trials", "sec_num": "7.1" }, { "text": "Scusi?'s top-ranked interpretation matches our participants' preferences in 5.5 cases, and its second-ranked interpretation in 2.5 cases (the fractions are for equiprobable interpretations). The discrepancies between Scusi?'s choices and those of our trial subjects are explained as follows: (desc. 3) \"the red dish\" -according to Leacock and Chodorow's similarity metric (Section 3.1), a mug is more similar to a dish than a dinner plate, while our trial subjects thought otherwise; (desc. 4) \"the brown bookcase under the portrait\" -Scusi? penalizes heavily attributes that do not match reality (Zukerman et al., 2008) , hence bookcase14 is penalized, as it is not under any portrait; (desc. 6) \"the large plate\" -our participants perceived plate28 to be larger than plate16 although they are the same size, and hence equiprobable; (desc. 7) \"the large green bookcase near the chest\" -like description 4, bookcase10 (which is green) is ranked second due to its low probability of being considered large.", "cite_spans": [ { "start": 597, "end": 620, "text": "(Zukerman et al., 2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Interpretive Trials", "sec_num": "7.1" }, { "text": "Thus, according to this trial, Scusi?'s performance satisfies our original requirement for rea-sonable behaviour and plausible mistakes, but perhaps it should be more forgiving with respect to mis-matched attributes. Table 3 displays the results for NotFound@K, FRecall@K and NDCG@K for K = 1, 3, 10, \u221e for Text and ASR input, the four accuracy classes, and the known and unknown-OOV knowledge categories. There are 277 descriptions in total (instead of 295), as 18 unknown-non-OOV descriptions were omitted from Table 3 (Section 5.2). As mentioned in Section 5.2, the vast majority of the utterances belong to the perfect=1 class (with known or unknown-OOV words), and to the known perfect>1 and imperfect=1 categories.", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 224, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Interpretive Trials", "sec_num": "7.1" }, { "text": "ASR versus Text. The NotFound@1,3, FRecall@1,3 and NDCG@1,3 metrics show that Scusi? yields at least one correct interpretation at the lowest (best) ranks for the vast majority of Text inputs (the discrepancy between FRecall and NDCG at low ranks is due to the way these measures are calculated, Section 6.2). This suggests that in the absence of ASR errors, if correct interpretations are found, the system's confidence in its output is justified. As expected, the NotFound values are substantially higher, and the FRecall and NDCG values lower, for inputs obtained from the ASR (23% of the descriptions had one wrong word in the best ASR alternative, 21% had two wrong words, 12.5% had three, and 8.5% more than three). There is a substantial improvement in FRecall and NDCG as ranks increase, which shows that contextual information can alleviate some ASR errors. The improvement in these metrics for the perfect>1 class, without affecting NotFound, indicates that Scusi? finds more correct interpretations for the same descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Trials", "sec_num": "7.2" }, { "text": "The ASR results compared to those of Text indicate that, unsurprisingly, speech recognition quality must be improved. This may be achieved through advances in ASR technology, or by pre- 1, 3, 10, \u221e 9, 4, 2, 1 73, 60, 49, 31 8, 8, 8, 3 16, 13, 11, 9 FRecall@1, 3, 10, \u221e 0.95, 0.98, 0.99, 0.99 0.61, 0.69, 0.75, 0.84 0.47, 0.68, 0.68, 0.88 0.24, 0.45, 0.54, 0.64 NDCG@1, 3, 10, \u221e 0.95, 0.98, 0.98, 0.98 0.61, 0.69, 0.71, 0.73 0.47, 0.64, 0.64, 0.68 0.24, 0.40, 0.44, 0 .46 perfect>1 30 1 NotFound@1,3,10,\u221e 2,2,1,1 13,12,10,9 0,0,0,0 0,0,0,0 FRecall@1, 3,10,\u221e 0.40,0.82,0.88,0.97 0.22,0.48,0.62,0 .70 0.50,1.00,1.00,1.00 0.50,1.00,1.00,1.00 NDCG@1, 3,10,\u221e 0.84,0.84,0.85,0.87 0.47,0.48,0.53,0 .55 1.00,1.00,1.00,1.00 1.00,1.00,1.00,1.00 imperfect=1 18 2 NotFound@1,3,10,\u221e 1,1,1,0 8,7,7,5 0,0,0,0 1,1,1,1 FRecall@1, 3, 10, \u221e 0.91, 0.94, 0.94, 1.00 0.56, 0.59, 0.61, 0.72 0.51, 0.54, 0.64, 1.00 0.03, 0.08, 0.26, 0.50 NDCG@1, 3, 10, \u221e 0.91, 0.94, 0.94, 0.95 0.56, 0.59, 0.60, 0.61 0.51, 0.54, 0.58, 0.66 0.03, 0.07, 0.14, 0 .20 imperfect>1 3 1 NotFound@1,3,10,\u221e 1,0,0,0 3,2,1,1 0,0,0,0 1,1,1,1 FRecall@1,3,10,\u221e 0.18,0.53,0.61,1.00 0.00,0.33,0.51,0.67 0.03,0.09,0.29,1.00 0.00,0.00,0.00,0.00 NDCG@1, 3,10,\u221e 0.36,0.53,0.56,0.64 0.00,0.27,0.35,0.38 0.06,0.08,0.15,0 .31 0.00,0.00,0.00,0.00 venting ASR errors (Gorniak and Roy, 2005; Sugiura et al., 2009) or correcting them (L\u00f3pez-C\u00f3zar and Callejas, 2008; Kim et al., 2013) .", "cite_spans": [ { "start": 186, "end": 188, "text": "1,", "ref_id": null }, { "start": 189, "end": 191, "text": "3,", "ref_id": null }, { "start": 192, "end": 195, "text": "10,", "ref_id": null }, { "start": 196, "end": 200, "text": "\u221e 9,", "ref_id": null }, { "start": 201, "end": 203, "text": "4,", "ref_id": null }, { "start": 204, "end": 206, "text": "2,", "ref_id": null }, { "start": 207, "end": 212, "text": "1 73,", "ref_id": null }, { "start": 213, "end": 216, "text": "60,", "ref_id": null }, { "start": 217, "end": 220, "text": "49,", "ref_id": null }, { "start": 221, "end": 226, "text": "31 8,", "ref_id": null }, { "start": 227, "end": 229, "text": "8,", "ref_id": null }, { "start": 230, "end": 232, "text": "8,", "ref_id": null }, { "start": 233, "end": 238, "text": "3 16,", "ref_id": null }, { "start": 239, "end": 242, "text": "13,", "ref_id": null }, { "start": 243, "end": 246, "text": "11,", "ref_id": null }, { "start": 247, "end": 259, "text": "9 FRecall@1,", "ref_id": null }, { "start": 260, "end": 262, "text": "3,", "ref_id": null }, { "start": 263, "end": 266, "text": "10,", "ref_id": null }, { "start": 267, "end": 274, "text": "\u221e 0.95,", "ref_id": null }, { "start": 275, "end": 280, "text": "0.98,", "ref_id": null }, { "start": 281, "end": 286, "text": "0.99,", "ref_id": null }, { "start": 287, "end": 297, "text": "0.99 0.61,", "ref_id": null }, { "start": 298, "end": 303, "text": "0.69,", "ref_id": null }, { "start": 304, "end": 309, "text": "0.75,", "ref_id": null }, { "start": 310, "end": 320, "text": "0.84 0.47,", "ref_id": null }, { "start": 321, "end": 326, "text": "0.68,", "ref_id": null }, { "start": 327, "end": 332, "text": "0.68,", "ref_id": null }, { "start": 333, "end": 343, "text": "0.88 0.24,", "ref_id": null }, { "start": 344, "end": 349, "text": "0.45,", "ref_id": null }, { "start": 350, "end": 355, "text": "0.54,", "ref_id": null }, { "start": 356, "end": 368, "text": "0.64 NDCG@1,", "ref_id": null }, { "start": 369, "end": 371, "text": "3,", "ref_id": null }, { "start": 372, "end": 375, "text": "10,", "ref_id": null }, { "start": 376, "end": 383, "text": "\u221e 0.95,", "ref_id": null }, { "start": 384, "end": 389, "text": "0.98,", "ref_id": null }, { "start": 390, "end": 395, "text": "0.98,", "ref_id": null }, { "start": 396, "end": 406, "text": "0.98 0.61,", "ref_id": null }, { "start": 407, "end": 412, "text": "0.69,", "ref_id": null }, { "start": 413, "end": 418, "text": "0.71,", "ref_id": null }, { "start": 419, "end": 429, "text": "0.73 0.47,", "ref_id": null }, { "start": 430, "end": 435, "text": "0.64,", "ref_id": null }, { "start": 436, "end": 441, "text": "0.64,", "ref_id": null }, { "start": 442, "end": 452, "text": "0.68 0.24,", "ref_id": null }, { "start": 453, "end": 458, "text": "0.40,", "ref_id": null }, { "start": 459, "end": 464, "text": "0.44,", "ref_id": null }, { "start": 465, "end": 466, "text": "0", "ref_id": null }, { "start": 550, "end": 593, "text": "3,10,\u221e 0.40,0.82,0.88,0.97 0.22,0.48,0.62,0", "ref_id": null }, { "start": 646, "end": 689, "text": "3,10,\u221e 0.84,0.84,0.85,0.87 0.47,0.48,0.53,0", "ref_id": null }, { "start": 812, "end": 814, "text": "3,", "ref_id": null }, { "start": 815, "end": 818, "text": "10,", "ref_id": null }, { "start": 819, "end": 826, "text": "\u221e 0.91,", "ref_id": null }, { "start": 827, "end": 832, "text": "0.94,", "ref_id": null }, { "start": 833, "end": 838, "text": "0.94,", "ref_id": null }, { "start": 839, "end": 849, "text": "1.00 0.56,", "ref_id": null }, { "start": 850, "end": 855, "text": "0.59,", "ref_id": null }, { "start": 856, "end": 861, "text": "0.61,", "ref_id": null }, { "start": 862, "end": 872, "text": "0.72 0.51,", "ref_id": null }, { "start": 873, "end": 878, "text": "0.54,", "ref_id": null }, { "start": 879, "end": 884, "text": "0.64,", "ref_id": null }, { "start": 885, "end": 895, "text": "1.00 0.03,", "ref_id": null }, { "start": 896, "end": 901, "text": "0.08,", "ref_id": null }, { "start": 902, "end": 907, "text": "0.26,", "ref_id": null }, { "start": 908, "end": 920, "text": "0.50 NDCG@1,", "ref_id": null }, { "start": 921, "end": 923, "text": "3,", "ref_id": null }, { "start": 924, "end": 927, "text": "10,", "ref_id": null }, { "start": 928, "end": 935, "text": "\u221e 0.91,", "ref_id": null }, { "start": 936, "end": 941, "text": "0.94,", "ref_id": null }, { "start": 942, "end": 947, "text": "0.94,", "ref_id": null }, { "start": 948, "end": 958, "text": "0.95 0.56,", "ref_id": null }, { "start": 959, "end": 964, "text": "0.59,", "ref_id": null }, { "start": 965, "end": 970, "text": "0.60,", "ref_id": null }, { "start": 971, "end": 981, "text": "0.61 0.51,", "ref_id": null }, { "start": 982, "end": 987, "text": "0.54,", "ref_id": null }, { "start": 988, "end": 993, "text": "0.58,", "ref_id": null }, { "start": 994, "end": 1004, "text": "0.66 0.03,", "ref_id": null }, { "start": 1005, "end": 1010, "text": "0.07,", "ref_id": null }, { "start": 1011, "end": 1016, "text": "0.14,", "ref_id": null }, { "start": 1017, "end": 1018, "text": "0", "ref_id": null }, { "start": 1194, "end": 1257, "text": "3,10,\u221e 0.36,0.53,0.56,0.64 0.00,0.27,0.35,0.38 0.06,0.08,0.15,0", "ref_id": null }, { "start": 1301, "end": 1324, "text": "(Gorniak and Roy, 2005;", "ref_id": "BIBREF7" }, { "start": 1325, "end": 1346, "text": "Sugiura et al., 2009)", "ref_id": "BIBREF19" }, { "start": 1366, "end": 1398, "text": "(L\u00f3pez-C\u00f3zar and Callejas, 2008;", "ref_id": "BIBREF14" }, { "start": 1399, "end": 1416, "text": "Kim et al., 2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Generative Trials", "sec_num": "7.2" }, { "text": "Known versus Unknown-OOV. Perfect=1 is the only class with a substantial number of OOV words (25). Note the increase in FRecall up to rank @\u221e for known ASR and unknown-OOV Text and ASR, which indicates that correct interpretations are returned at very high ranks when input words are not identified (NDCG increases only modestly, as it penalizes high ranks). The difference in performance between known-perfect=1 and unknown-OOV-perfect=1 suggests that it is worth improving Scusi?'s vocabulary coverage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Trials", "sec_num": "7.2" }, { "text": "We offered a framework for the evaluation of SLU systems that comprises a characterization of spoken utterances, experimental design and evaluation metrics. We described its application to the evaluation of Scusi?our SLU module for the interpretation of descriptions in a household context. Our characterization of descriptions identifies frequently occurring cases, such as perfect=1, and rare cases, such as imperfect>1; and highlights the influence of vocabulary coverage on performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Our two types of experiments enable the evaluation of an SLU system's performance from two viewpoints: Interpretive trials support the comparison of an SLU module's performance with that of people as addressees, and Generative trials assess the performance of an SLU system when interpreting descriptions commonly spoken by users. The results of the Interpretive trial were encouraging, but they indicate that Scusi?'s \"punitive\" at-titude to attributes that do not match reality, such as a bookcase not being under any portrait, may need to be moderated. However, as stated above, imperfect>1 descriptions were rare in our Generative trials. The results of these trials show that development effort should be invested in (1) ASR accuracy (Kim et al., 2013) ; (2) vocabulary coverage; and (3) ability to represent complex, polysemous and no-landmark positional relations. In contrast, descriptive prepositional phrases starting with \"with\" or \"of\" may be judiciously ignored, or the referent may be disambiguated by asking a clarification question.", "cite_spans": [ { "start": 739, "end": 757, "text": "(Kim et al., 2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Our CantRepresent and NotFound evaluation metrics provide an overall view of an SLU system's performance. IR-based metrics have been used in the evaluation of SLU systems to compare an interpretation returned by an SLU module with a reference interpretation. In contrast, we employ FRecall and NDCG in the traditional IR manner, i.e., to assess the rank of correct interpretations in an N-best list. The relevance measure fc (Equation 2), which is applied to both metrics, enables us to handle equiprobable interpretations. However, rank-based evaluation metrics do not consider the absolute quality of an interpretation, i.e., the top-ranked interpretation might be quite bad. In the future, we propose to investigate confidence/accuracy metrics, such ICE (Thomson et al., 2008) , to address this problem.", "cite_spans": [ { "start": 757, "end": 779, "text": "(Thomson et al., 2008)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Examples from our trials are marked with asterisks ( * ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "People often over-specify their descriptions, e.g., \"the large red mug\" when there is only one red mug(Dale and Reiter, 1995). Such over-specifications are not problematic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by grants DP110100500 and DP120100103 from the Australian Research Council.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Spoken dialog challenge 2010: Comparison of live and control test results", "authors": [ { "first": "A", "middle": [], "last": "Black", "suffix": "" }, { "first": "S", "middle": [], "last": "Burger", "suffix": "" }, { "first": "A", "middle": [], "last": "Conkie", "suffix": "" }, { "first": "H", "middle": [], "last": "Hastie", "suffix": "" }, { "first": "S", "middle": [], "last": "Keizer", "suffix": "" }, { "first": "O", "middle": [], "last": "Lemon", "suffix": "" }, { "first": "N", "middle": [], "last": "Merigaud", "suffix": "" }, { "first": "G", "middle": [], "last": "Parent", "suffix": "" }, { "first": "G", "middle": [], "last": "Schubiner", "suffix": "" }, { "first": "B", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Williams", "suffix": "" }, { "first": "K", "middle": [], "last": "Yu", "suffix": "" }, { "first": "S", "middle": [], "last": "Young", "suffix": "" }, { "first": "M", "middle": [], "last": "Eskenazi", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 11th SIGdial Conference on Discourse and Dialogue", "volume": "", "issue": "", "pages": "2--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Black, S. Burger, A. Conkie, H. Hastie, S. Keizer, O. Lemon, N. Merigaud, G. Parent, G. Schubiner, B. Thomson, J.D. Williams, K. Yu, S. Young, and M. Eskenazi. 2011. Spoken dialog challenge 2010: Comparison of live and control test results. In Pro- ceedings of the 11th SIGdial Conference on Dis- course and Dialogue, pages 2-7, Portland, Oregon.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Industrial colour difference evaluation", "authors": [], "year": 1995, "venue": "CIE", "volume": "", "issue": "", "pages": "115--1995", "other_ids": {}, "num": null, "urls": [], "raw_text": "CIE, 1995. Industrial colour difference evaluation. CIE 115-1995.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Saying, Seeing, and Acting: the psychological semantics of spatial prepositions", "authors": [ { "first": "K", "middle": [ "R" ], "last": "Coventry", "suffix": "" }, { "first": "S", "middle": [ "C" ], "last": "Garrod", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K.R. Coventry and S.C. Garrod. 2004. Saying, Seeing, and Acting: the psychological semantics of spatial prepositions. Psychology Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Computational interpretations of the Gricean maxims in the generation of referring expressions", "authors": [ { "first": "R", "middle": [], "last": "Dale", "suffix": "" }, { "first": "E", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 1995, "venue": "Cognitive Science", "volume": "18", "issue": "2", "pages": "233--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Dale and E. Reiter. 1995. Computational interpreta- tions of the Gricean maxims in the generation of re- ferring expressions. Cognitive Science, 18(2):233- 263.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Can I finish? Learning when to respond to incremental interpretation results in interactive dialogue", "authors": [ { "first": "D", "middle": [], "last": "Devault", "suffix": "" }, { "first": "K", "middle": [], "last": "Sagae", "suffix": "" }, { "first": "D", "middle": [], "last": "Traum", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 10th SIGdial Conference on Discourse and Dialogue", "volume": "", "issue": "", "pages": "11--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. DeVault, K. Sagae, and D. Traum. 2009. Can I fin- ish? Learning when to respond to incremental inter- pretation results in interactive dialogue. In Proceed- ings of the 10th SIGdial Conference on Discourse and Dialogue, pages 11-20, London, United King- dom.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "WordNet: An Electronic Lexical Database (Language, Speech, and Communication)", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.D. Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database (Language, Speech, and Commu- nication). MIT Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Confidence estimation for NLP applications", "authors": [ { "first": "S", "middle": [], "last": "Gandrabur", "suffix": "" }, { "first": "G", "middle": [], "last": "Foster", "suffix": "" }, { "first": "G", "middle": [], "last": "Lapalme", "suffix": "" } ], "year": 2006, "venue": "ACM Transactions on Speech and Language Processing", "volume": "3", "issue": "3", "pages": "1--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Gandrabur, G. Foster, and G. Lapalme. 2006. Confidence estimation for NLP applications. ACM Transactions on Speech and Language Processing, 3(3):1-29.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Probabilistic grounding of situated speech using plan recognition and reference resolution", "authors": [ { "first": "P", "middle": [], "last": "Gorniak", "suffix": "" }, { "first": "D", "middle": [], "last": "Roy", "suffix": "" } ], "year": 2005, "venue": "ICMI'05: Proceedings of the 7th International Conference on Multimodal Interfaces", "volume": "", "issue": "", "pages": "138--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Gorniak and D. Roy. 2005. Probabilistic grounding of situated speech using plan recognition and refer- ence resolution. In ICMI'05: Proceedings of the 7th International Conference on Multimodal Interfaces, pages 138-143, Trento, Italy.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The evolution of evaluation: Lessons from the Message Understanding Conferences", "authors": [ { "first": "L", "middle": [], "last": "Hirschman", "suffix": "" } ], "year": 1998, "venue": "Computer Speech and Language", "volume": "12", "issue": "", "pages": "281--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Hirschman. 1998. The evolution of evaluation: Lessons from the Message Understanding Confer- ences. Computer Speech and Language, 12:281- 305.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Cumulated gainbased evaluation of IR techniques", "authors": [ { "first": "K", "middle": [], "last": "J\u00e4rvelin", "suffix": "" }, { "first": "J", "middle": [], "last": "Kek\u00e4l\u00e4inen", "suffix": "" } ], "year": 2002, "venue": "ACM Transactions on Information Systems (TOIS)", "volume": "20", "issue": "4", "pages": "422--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. J\u00e4rvelin and J. Kek\u00e4l\u00e4inen. 2002. Cumulated gain- based evaluation of IR techniques. ACM Trans- actions on Information Systems (TOIS), 20(4):422- 446.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Spoken Dialogue Systems", "authors": [ { "first": "K", "middle": [], "last": "Jokinen", "suffix": "" }, { "first": "M", "middle": [], "last": "Mctear", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Jokinen and M. McTear. 2010. Spoken Dialogue Systems. Morgan and Claypool.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Applying computational models of spatial prepositions to visually situated dialog", "authors": [ { "first": "J", "middle": [ "D" ], "last": "Kelleher", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Costello", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "35", "issue": "2", "pages": "271--306", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.D. Kelleher and F.J. Costello. 2008. Applying computational models of spatial prepositions to vi- sually situated dialog. Computational Linguistics, 35(2):271-306.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A noisy channel approach to error correction in spoken referring expressions", "authors": [ { "first": "S", "middle": [ "N" ], "last": "Kim", "suffix": "" }, { "first": "I", "middle": [], "last": "Zukerman", "suffix": "" }, { "first": "", "middle": [], "last": "Th", "suffix": "" }, { "first": "F", "middle": [], "last": "Kleinbauer", "suffix": "" }, { "first": "", "middle": [], "last": "Zavareh", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 6th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S.N. Kim, I. Zukerman, Th. Kleinbauer, and F. Zavareh. 2013. A noisy channel approach to error correction in spoken referring expressions. In Pro- ceedings of the 6th International Joint Conference on Natural Language Processing, Nagoya, Japan.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Combining local context and WordNet similarity for word sense identification", "authors": [ { "first": "C", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "M", "middle": [], "last": "Chodorow", "suffix": "" } ], "year": 1998, "venue": "WordNet: An Electronic Lexical Database", "volume": "", "issue": "", "pages": "265--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Leacock and M. Chodorow. 1998. Combining lo- cal context and WordNet similarity for word sense identification. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database, pages 265-285. MIT Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "ASR postcorrection for spoken dialogue systems based on semantic, syntactic, lexical and contextual information", "authors": [ { "first": "R", "middle": [], "last": "L\u00f3pez-C\u00f3zar", "suffix": "" }, { "first": "Z", "middle": [], "last": "Callejas", "suffix": "" } ], "year": 2008, "venue": "Journal of Speech Communication", "volume": "50", "issue": "8-9", "pages": "745--766", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. L\u00f3pez-C\u00f3zar and Z. Callejas. 2008. ASR post- correction for spoken dialogue systems based on semantic, syntactic, lexical and contextual infor- mation. Journal of Speech Communication, 50(8- 9):745-766.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Evaluating interactions with spoken dialogue telephone services", "authors": [ { "first": "S", "middle": [], "last": "M\u00f6ller", "suffix": "" } ], "year": 2008, "venue": "Recent Trends in Discourse and Dialogue", "volume": "", "issue": "", "pages": "69--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. M\u00f6ller. 2008. Evaluating interactions with spo- ken dialogue telephone services. In L. Dybkjaer and W. Minker, editors, Recent Trends in Discourse and Dialogue, pages 69-100. Springer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Empirical methods for evaluating dialog systems", "authors": [ { "first": "T", "middle": [], "last": "Paek", "suffix": "" } ], "year": 2001, "venue": "SIGDIAL'01 -Proceedings of the Second SIGdial Workshop on Discourse and Dialogue", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Paek. 2001. Empirical methods for evaluating dia- log systems. In SIGDIAL'01 -Proceedings of the Second SIGdial Workshop on Discourse and Dia- logue, pages 1-9, Aalborg, Denmark.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "An Introduction to Modern Information Retrieval", "authors": [ { "first": "G", "middle": [], "last": "Salton", "suffix": "" }, { "first": "M", "middle": [ "J" ], "last": "Mcgill", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Salton and M.J. McGill. 1983. An Introduction to Modern Information Retrieval. McGraw Hill, New York, New York.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Conceptual Structures: Information Processing in Mind and Machine", "authors": [ { "first": "J", "middle": [ "F" ], "last": "Sowa", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.F. Sowa. 1984. Conceptual Structures: Information Processing in Mind and Machine. Addison-Wesley, Reading, MA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bayesian learning of confidence measure function for generation of utterances and motions in object manipulation dialogue task", "authors": [ { "first": "K", "middle": [], "last": "Sugiura", "suffix": "" }, { "first": "N", "middle": [], "last": "Iwahashi", "suffix": "" }, { "first": "H", "middle": [], "last": "Kashioka", "suffix": "" }, { "first": "S", "middle": [], "last": "Nakamura", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Interspeech", "volume": "", "issue": "", "pages": "2483--2486", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Sugiura, N. Iwahashi, H. Kashioka, and S. Naka- mura. 2009. Bayesian learning of confidence mea- sure function for generation of utterances and mo- tions in object manipulation dialogue task. In Pro- ceedings of Interspeech 2009, pages 2483-2486, Brighton, United Kingdom.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Evaluating semantic-level confidence scores with multiple hypotheses", "authors": [ { "first": "B", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "K", "middle": [], "last": "Yu", "suffix": "" }, { "first": "M", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "S", "middle": [], "last": "Keizer", "suffix": "" }, { "first": "F", "middle": [], "last": "Mairesse", "suffix": "" }, { "first": "J", "middle": [], "last": "Schatzmann", "suffix": "" }, { "first": "S", "middle": [], "last": "Young", "suffix": "" } ], "year": 2008, "venue": "Proceedings of Interspeech", "volume": "", "issue": "", "pages": "1153--1156", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Thomson, K. Yu, M. Ga\u0161i\u0107, S. Keizer, F. Mairesse, J. Schatzmann, and S. Young. 2008. Evaluating semantic-level confidence scores with multiple hy- potheses. In Proceedings of Interspeech 2008, pages 1153-1156, Brisbane, Australia.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A probabilistic approach to the interpretation of spoken utterances", "authors": [ { "first": "I", "middle": [], "last": "Zukerman", "suffix": "" }, { "first": "E", "middle": [], "last": "Makalic", "suffix": "" }, { "first": "M", "middle": [], "last": "Niemann", "suffix": "" }, { "first": "S", "middle": [], "last": "George", "suffix": "" } ], "year": 2008, "venue": "PRICAI 2008 -Proceedings of the 10th Pacific Rim International Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "581--592", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Zukerman, E. Makalic, M. Niemann, and S. George. 2008. A probabilistic approach to the interpreta- tion of spoken utterances. In PRICAI 2008 -Pro- ceedings of the 10th Pacific Rim International Con- ference on Artificial Intelligence, pages 581-592, Hanoi, Vietnam.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Sample UCG and ICG for \"the blue mug on the large table\".", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "large ball on the table imperfect>1 9. the portrait above the bookcase perfect=1 (b) Descriptions with their characterization", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "Context visualization and object descriptions used in the Interpretive experiment. (a) Projective relations and \"end, edge, corner\" and \"center\" of a table (b) Colour, size, positional relation and intervening object in a room", "uris": null }, "FIGREF3": { "num": null, "type_str": "figure", "text": "Two of the scenarios used in the Generative experiments.", "uris": null }, "TABREF0": { "num": null, "type_str": "table", "text": "Descriptions that cannot be represented.", "html": null, "content": "
Positional relationOthers and
Poly-ComplexNoPrep. Phrase
semousLandm. \"with\"/\"of\"
perfect=192909
perfect>151504
imperfect=1613183
imperfect>12210
TOTAL22591916
" }, "TABREF1": { "num": null, "type_str": "table", "text": "Results of the Interpretive trials.", "html": null, "content": "
# SurveyScusi? I1Scusi? I2Scusi? I3
1. plate16
" }, "TABREF2": { "num": null, "type_str": "table", "text": "Description breakdown in terms of accuracy and knowledge, performance metrics and results.", "html": null, "content": "
KnownUnknown-OOV
TextASRTextASR
perfect=119725
NotFound@
" } } } }