{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:28:11.922086Z"
},
"title": "Disentangling the Properties of Human Evaluation Methods: A Classification System to Support Comparability, Meta-Evaluation and Reproducibility Testing",
"authors": [
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Brighton",
"location": {
"country": "UK"
}
},
"email": "a.s.belz@brighton.ac.uk"
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UPF",
"location": {
"settlement": "Barcelona",
"country": "Spain"
}
},
"email": "simon.mille@upf.edu"
},
{
"first": "David",
"middle": [
"M"
],
"last": "Howcroft",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heriot-Watt University",
"location": {
"country": "UK"
}
},
"email": "d.howcroft@hw.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Current standards for designing and reporting human evaluations in NLP mean it is generally unclear which evaluations are comparable and can be expected to yield similar results when applied to the same system outputs. This has serious implications for reproducibility testing and meta-evaluation, in particular given that human evaluation is considered the gold standard against which the trustworthiness of automatic metrics is gauged. Using examples from NLG, we propose a classification system for evaluations based on disentangling (i) what is being evaluated (which aspect of quality), and (ii) how it is evaluated in specific (a) evaluation modes and (b) experimental designs. We show that this approach provides a basis for determining comparability, hence for comparison of evaluations across papers, meta-evaluation experiments, reproducibility testing.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Current standards for designing and reporting human evaluations in NLP mean it is generally unclear which evaluations are comparable and can be expected to yield similar results when applied to the same system outputs. This has serious implications for reproducibility testing and meta-evaluation, in particular given that human evaluation is considered the gold standard against which the trustworthiness of automatic metrics is gauged. Using examples from NLG, we propose a classification system for evaluations based on disentangling (i) what is being evaluated (which aspect of quality), and (ii) how it is evaluated in specific (a) evaluation modes and (b) experimental designs. We show that this approach provides a basis for determining comparability, hence for comparison of evaluations across papers, meta-evaluation experiments, reproducibility testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Human evaluations play a central role in Natural Language Generation (NLG), a field which has always been wary of automatic evaluation metrics and their limitations (Reiter and Belz, 2009; Novikova et al., 2017; Reiter, 2018) . NLG has trusted human evaluations perhaps more than any other NLP subfield, and has always gauged the trustworthiness of automatic evaluation metrics in terms of how well, and how consistently, they correlate with human evaluation scores (Over et al., 2007; Gatt and Belz, 2008; Bojar et al., 2016; Shimorina et al., 2018; Ma et al., 2019; Mille et al., 2019; . If they do not, even in isolated cases, the reliability of the metric is seen as doubtful, regardless of the quality of the human evaluation, or whether the metric and human evaluation involved aimed to assess the same thing.",
"cite_spans": [
{
"start": 165,
"end": 188,
"text": "(Reiter and Belz, 2009;",
"ref_id": "BIBREF38"
},
{
"start": 189,
"end": 211,
"text": "Novikova et al., 2017;",
"ref_id": "BIBREF33"
},
{
"start": 212,
"end": 225,
"text": "Reiter, 2018)",
"ref_id": "BIBREF37"
},
{
"start": 466,
"end": 485,
"text": "(Over et al., 2007;",
"ref_id": "BIBREF34"
},
{
"start": 486,
"end": 506,
"text": "Gatt and Belz, 2008;",
"ref_id": "BIBREF17"
},
{
"start": 507,
"end": 526,
"text": "Bojar et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 527,
"end": 550,
"text": "Shimorina et al., 2018;",
"ref_id": "BIBREF42"
},
{
"start": 551,
"end": 567,
"text": "Ma et al., 2019;",
"ref_id": "BIBREF23"
},
{
"start": 568,
"end": 587,
"text": "Mille et al., 2019;",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More generalised conclusions are sometimes drawn, for example that BLEU scores do not correlate well with human judgements of specific quality criteria 1 such as 'Fluency,' 'Naturalness,' 'Readability' or 'Overall Quality' 2 in the general case (Novikova et al., 2017; May and Priyadarshi, 2017; Reiter, 2018; Shimorina et al., 2018; Sellam et al., 2020; Mathur et al., 2020) . However, such comments make the assumption that, and only really make sense if, multiple evaluations of, say, 'Fluency' do in fact assess the same aspect of quality in the output texts. We argue that we do not currently have a way of establishing whether any two evaluations, metric or human, do or do not assess the same thing.",
"cite_spans": [
{
"start": 245,
"end": 268,
"text": "(Novikova et al., 2017;",
"ref_id": "BIBREF33"
},
{
"start": 269,
"end": 295,
"text": "May and Priyadarshi, 2017;",
"ref_id": "BIBREF26"
},
{
"start": 296,
"end": 309,
"text": "Reiter, 2018;",
"ref_id": "BIBREF37"
},
{
"start": 310,
"end": 333,
"text": "Shimorina et al., 2018;",
"ref_id": "BIBREF42"
},
{
"start": 334,
"end": 354,
"text": "Sellam et al., 2020;",
"ref_id": "BIBREF41"
},
{
"start": 355,
"end": 375,
"text": "Mathur et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In fact, we have plenty of evidence (Section 2) that in many cases, when two evaluations use the same name for a quality criterion, they do in fact assess different aspects of quality, even for seemingly straightforward criteria like 'Fluency' and 'Readability.' And conversely, evaluations that do use different terms often assess identical aspects of quality. In this situation, not only are we on shaky ground when drawing conclusions from meta-evaluations of metrics via correlations with human evaluations, but not knowing when two different evaluations should produce the same results also has clear implications for reproducibility assessments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a classification system that disentangles the properties of evaluation methods, providing a basis for establishing comparability. We start with issues in how human evaluations are currently designed and reported in NLG (Section 2). We then discuss the difficulties of disentangling the properties of evaluation methods (Section 3), and present the proposed classification system consisting of three quality-criterion properties, three evaluation modes, and 12 experimental design properties (Section 4). Next we demonstrate how these combine to form a classification system that supports comparability (Section 5), and show how the system can be used in the context of de-signing and reporting evaluations, meta-evaluations and reproducibility testing (Section 6). We finish with some discussion and conclusions (Section 7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Notational conventions: We use boldface for defined terms where they are being defined (e.g. quality criterion), italics where we want to emphasise that we are using a term in its defined meaning (e.g. quality criterion), and normal font otherwise; a combination of italics, boldface and capitalised initials for names of quality criteria with definitions (e.g. Fluency); and italics and double quotes for verbatim definitions of quality criteria from papers (e.g. \"ease of reading\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Human evaluations in NLG currently paint a confused picture 3 with very poor standards for designing and reporting evaluations (van der Lee et al., 2019) . In this section we focus on those aspects that make it hard to compare different evaluations.",
"cite_spans": [
{
"start": 127,
"end": 153,
"text": "(van der Lee et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Issues in Comparing Human Evaluations in NLG",
"sec_num": "2"
},
{
"text": "Different papers use the same quality criterion name with different definitions, and the same definitions with different names. Even for less problematic criteria names such as Readability, 4 substantial variation exists. Some definitions are about reading ease: \"Ease of reading\" (Forrest et al., 2018) ; \"a summary is readable if it is easy to read and understand\" (Di Fabbrizio et al., 2014) . Others veer towards fluency: \"how fluent and readable [the text is]\" (Belz and Kow, 2010) ; \"readability concerns fluency of the textual data\" (Mahapatra et al., 2016 ). Yet others combine multiple aspects of quality: \"measures the linguistic quality of text and helps quantify the difficulty of understanding the text for a reader\" (Santhanam and Shaikh, 2019) ; \"[r]eadability is [...] concerned with the fluency and coherence of the texts.\" (Zang and Wan, 2017) . A far messier criterion name is Coherence, some definitions referring to structure (underlined text below) and theme/topic (dotted underline), some just to one of the two, and others to neither (last three examples): \"[whether] the poem [is] thematically structured\" (Van de Cruys, 2020); \"measures if a question is coherent with previous ones\" (Chai and Wan, 2020) ; \"measures ability of the dialogue system to produce responses consistent with the topic of conversation\" (Santhanam and Shaikh, 2019) ; \"measures how much the response is comprehensible and relevant to a user's request\" (Yi et al., 2019) ; \"refers to the meaning of the generated sentence, so that a sentence with no meaning would be rated with a 1 and a sentence with a full meaning would be rated with a 5\" (Barros et al., 2017) ; \"measures [a conversation's] grammaticality and fluency\" (Juraska et al., 2019) ; \"concerns coherence and readability\" (Murray et al., 2010) . The inverse is also common, where the same definition is used with different criterion names. E.g. define Language Naturalness as \"whether the generated text is grammatically correct and fluent, regardless of factual correctness\", while Juraska et al. (2019) give essentially the same definition (see preceding paragraph) for Coherence. Wubben et al. (2016) define Fluency as \"the extent to which a sentence is in proper, grammatical English\", while Harrison and Walker (2018) use a very similar definition for Grammaticality: \"adherence to rules of syntax, use of the wrong wh-word, verb tense consistency, and overall legitimacy as an English sentence.\"",
"cite_spans": [
{
"start": 281,
"end": 303,
"text": "(Forrest et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 367,
"end": 394,
"text": "(Di Fabbrizio et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 466,
"end": 486,
"text": "(Belz and Kow, 2010)",
"ref_id": "BIBREF3"
},
{
"start": 540,
"end": 563,
"text": "(Mahapatra et al., 2016",
"ref_id": "BIBREF24"
},
{
"start": 730,
"end": 758,
"text": "(Santhanam and Shaikh, 2019)",
"ref_id": "BIBREF39"
},
{
"start": 779,
"end": 784,
"text": "[...]",
"ref_id": null
},
{
"start": 841,
"end": 861,
"text": "(Zang and Wan, 2017)",
"ref_id": "BIBREF50"
},
{
"start": 1209,
"end": 1229,
"text": "(Chai and Wan, 2020)",
"ref_id": "BIBREF8"
},
{
"start": 1337,
"end": 1365,
"text": "(Santhanam and Shaikh, 2019)",
"ref_id": "BIBREF39"
},
{
"start": 1452,
"end": 1469,
"text": "(Yi et al., 2019)",
"ref_id": "BIBREF48"
},
{
"start": 1641,
"end": 1662,
"text": "(Barros et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 1722,
"end": 1744,
"text": "(Juraska et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 1784,
"end": 1805,
"text": "(Murray et al., 2010)",
"ref_id": "BIBREF31"
},
{
"start": 2045,
"end": 2066,
"text": "Juraska et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality criterion names",
"sec_num": "2.1"
},
{
"text": "In some cases where criterion names are different, it is slightly more evident that criteria are in fact closely related, as with Wang et al. (2020)'s Faithfulness, Cao et al. (2020)'s Content similarity, and Zhou et al. (2020)'s Content preservation, all of which measure the extent to which the content of an output overlaps with that of the input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality criterion names",
"sec_num": "2.1"
},
{
"text": "However, in many cases similarities are unguessably obscured behind criteria names, as is the case for the following names, all defined as the usefulness of the output text for completing a particular task: Dialogue efficiency (Qu and Green, 2002) , Usefulness (Miliaev et al., 2003) , Task completion (Varges, 2006) , Productivity (Allman et al., 2012).",
"cite_spans": [
{
"start": 227,
"end": 247,
"text": "(Qu and Green, 2002)",
"ref_id": "BIBREF36"
},
{
"start": 261,
"end": 283,
"text": "(Miliaev et al., 2003)",
"ref_id": "BIBREF27"
},
{
"start": 302,
"end": 316,
"text": "(Varges, 2006)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality criterion names",
"sec_num": "2.1"
},
{
"text": "A vanishingly small number of papers provide full details of human evaluation experiments. It is common for papers not to report how many system outputs or evaluators were used, what information was given to them, what questions asked, etc. Our survey of 468 individual human evaluations in NLG (Howcroft et al., 2020) indicates that in about 2/3 of cases reports do not provide the question/prompt evaluators were shown, over half do not define the quality criterion assessed, and around 1/5 do not name the quality criterion. Missing information about experimental design is particularly problematic for reproducibility testing (Section 6.3).",
"cite_spans": [
{
"start": 295,
"end": 318,
"text": "(Howcroft et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other aspects of evaluations",
"sec_num": "2.2"
},
{
"text": "While some aspects of evaluations such as type and size of rating scale, evaluation mode (Section 4.2) etc., are relatively easy to determine from papers, the confusion over which evaluations assess which aspect of quality, and the paucity of detail about experimental design in the great majority of papers, at present mean we do not have a basis for establishing comparability, calling into question the validity of results from reproducibility and meta-evaluation tests that assume comparability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other aspects of evaluations",
"sec_num": "2.2"
},
{
"text": "3 Disentangling Properties of Evaluations 3.1 Similarity of evaluations When different papers report human evaluations of Readability, we are likely to expect them to report similar system rankings when applied to the same set of system outputs, and similar correlations in meta-evaluations of metrics. But would that expectation change if we then learn that one evaluation measured reading time (on the assumption that more readable texts are faster to read), and in the other, participants were asked to explicitly rate the readability of outputs on a 5-point scale? And what if we are then told that definitions of Readability and questions put to evaluators differed in each case? The point is that we need to know how similar evaluations are, and in what respects, to inform expectations of similarity between their results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other aspects of evaluations",
"sec_num": "2.2"
},
{
"text": "Conversely, when results are reported for different criteria (names), we may expect metaevaluation and correlation analysis to yield distinguishable results. This can be the case, e.g. Belz and Reiter (2006) report high Pearson correlation with all metrics for Fluency (of weather forecasts), but no correlations with any metrics for Accuracy (of the meteorological information). However, extreme positive correlations (r = 0.93..0.99) are often reported (Belz and Kow, 2009; Gardent et al., 2017; for pairs of apparently very different quality criteria (e.g. Readability/Meaning Similarity), even when assessed separately for the express purpose of avoiding conflation (Mille et al., 2018 (Mille et al., , 2019 .",
"cite_spans": [
{
"start": 185,
"end": 207,
"text": "Belz and Reiter (2006)",
"ref_id": "BIBREF5"
},
{
"start": 455,
"end": 475,
"text": "(Belz and Kow, 2009;",
"ref_id": "BIBREF2"
},
{
"start": 476,
"end": 497,
"text": "Gardent et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 670,
"end": 689,
"text": "(Mille et al., 2018",
"ref_id": "BIBREF28"
},
{
"start": 690,
"end": 711,
"text": "(Mille et al., , 2019",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other aspects of evaluations",
"sec_num": "2.2"
},
{
"text": "What is clear, if nothing else, is that some evaluations are less similar, and others more, than meets the eye, and that we do not currently have a systematic way of telling in what respects (in terms of which properties) evaluations are the same and in what respects they are different. In order to be able to do this, we need a system that specifies what those properties are, and provides definitions that make it possible to determine whether evaluations are the same or different in terms of each property.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other aspects of evaluations",
"sec_num": "2.2"
},
{
"text": "Identifying such properties is a major challenge, with currently little to no consensus about which ones usefully to distinguish. One of the most basic distinctions is between what is being evaluated and how it is being evaluated. The former refers to the specific aspect of quality (the quality criterion) that an evaluation aims to assess, while the latter refers to how it is mapped to a specific measure that can be implemented in an evaluation experiment. It is worth distinguishing the how from the what, because in principle there can be many different specific measures and experimental designs that can be used to assess the same quality criterion. Yet the distinction is rarely made in papers, contributing to obscuring similarities between evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other aspects of evaluations",
"sec_num": "2.2"
},
{
"text": "Definitions of what is being evaluated often refer to evaluator perception, task success, or preference judgements, all to do with how outputs are evaluated. E.g. Allman et al. 2012define Productivity as \"the quantity of text an experienced translator could translate in a given period of time [compared] with the quantity of text generated by [the system] that the same person could edit in the given time.\" The aspect of quality that is being assessed is the overall quality of a translation given the source text (the better the translation the faster the post-editing) which is measured as the increase in translation speed afforded by use of the system. This is comparable to other assessments of overall translation quality (such as the Would you use this system evaluations from dialogue Walker et al. 2001) , and results can be expected to be similar, but it is hard to tell this is so, because the required information is not provided in papers.",
"cite_spans": [
{
"start": 795,
"end": 814,
"text": "Walker et al. 2001)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other aspects of evaluations",
"sec_num": "2.2"
},
{
"text": "Properties relating to how a quality criterion is evaluated further fall into those that are more 'implementational' in character, such as what type of rating scale is used, with how many possible values, how many evaluators, system outputs, etc., and those can be implemented in different ways such as whether multiple outputs are ranked or single outputs are evaluated separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other aspects of evaluations",
"sec_num": "2.2"
},
{
"text": "The proposed system disentangles characteristics of evaluations into 18 properties, each with a set of possible values, that fall into three groups as indicated above (quality criteria, evaluation mode, and experimental design), and in combination fully specify an evaluation experiment. A quality criterion is a criterion in terms of which the quality of system outputs is assessed, and is in itself entirely agnostic about how it is evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disentangled evaluation properties",
"sec_num": "3.2"
},
{
"text": "Evaluation modes are properties that need to be specified to turn a quality criterion into an evaluation measure that can be implemented, and are orthogonal to quality criteria, i.e. any given quality criterion can be combined with any mode. We distinguish three modes (see Section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disentangled evaluation properties",
"sec_num": "3.2"
},
{
"text": "Experimental design is the full specification of how to obtain a quantitative or qualitative response value for a given evaluation measure, yielding a fully specified evaluation method. In sum:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disentangled evaluation properties",
"sec_num": "3.2"
},
{
"text": "\u2022 Quality criterion + evaluation mode = evaluation measure;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disentangled evaluation properties",
"sec_num": "3.2"
},
{
"text": "\u2022 Evaluation measure + experimental design = evaluation method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disentangled evaluation properties",
"sec_num": "3.2"
},
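The compositional definitions above lend themselves to a typed representation. The following Python sketch is purely illustrative and not part of the paper; the class and field names are our own assumptions, with property values anticipating Sections 4.1-4.3.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass(frozen=True)
class QualityCriterion:
    """WHAT is evaluated (Section 4.1); agnostic about how it is evaluated."""
    quality_type: str        # "correctness" | "goodness" | "feature"
    output_aspect: str       # "form" | "content" | "both"
    frame_of_reference: str  # "none" | "input" | "external"

@dataclass(frozen=True)
class EvaluationMode:
    """HOW the criterion is turned into something measurable (Section 4.2)."""
    subjective: bool  # subjective vs. objective
    relative: bool    # relative vs. absolute
    extrinsic: bool   # extrinsic vs. intrinsic

@dataclass(frozen=True)
class EvaluationMeasure:
    """Quality criterion + evaluation mode = evaluation measure."""
    criterion: QualityCriterion
    mode: EvaluationMode

@dataclass
class EvaluationMethod:
    """Evaluation measure + experimental design = evaluation method."""
    measure: EvaluationMeasure
    design: Dict[str, Any] = field(default_factory=dict)  # the 12 design properties

# Hypothetical classification of a 5-point absolute Fluency rating:
fluency_method = EvaluationMethod(
    measure=EvaluationMeasure(
        criterion=QualityCriterion("goodness", "form", "none"),
        mode=EvaluationMode(subjective=True, relative=False, extrinsic=False),
    ),
    design={"num_outputs": 100, "num_evaluators": 3, "scale": "5-point"},
)
```

Under this encoding, two evaluations are comparable at the measure level exactly when their EvaluationMeasure records are equal, which is the kind of check the classification system is intended to support.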
{
"text": "This three-way separation of properties, and its details in the next section, are motivated by the need to establish comparability in two main contexts: (i) meta-evaluation: comparability assessments of evaluation methods are needed to inform design of meta-evaluation studies and conclusions drawn from them; and (ii) reproducibility testing: similarity in terms of the quality criterion properties indicates which evaluations should reproduce each other's results, while similarity in evaluation mode and experimental design can be used to define degrees of reproducibility (Section 6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disentangled evaluation properties",
"sec_num": "3.2"
},
{
"text": "The three quality criterion properties are intended to help determine whether or not the same aspect of quality is being evaluated. To this end, we use three properties to characterise quality criteria reflecting (i) what type of quality is being assessed (Section 4.1.1); (ii) what aspect of the system output is being assessed (Section 4.1.2); and (iii) whether system outputs are assessed in their own right or with reference to some system-internal or systemexternal frame of reference (Section 4.1.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality Criterion properties",
"sec_num": "4.1"
},
{
"text": "The primary distinction we draw is between criteria assessing correctness, goodness and features. For the former two, it is normally clear which end of the scale is preferred regardless of evaluation context. E.g. one would normally want output texts to be more fluent, more grammatical, more clear. 5 For feature-type criteria this does not hold; in one evaluation context, one end of the scale might be preferable, in another, the other, and in a third, the criterion may not apply. E.g. when evaluating a conversational agent, Conversationality is desirable, but it may not be relevant in a flight booking system. Similarly, Funny and Entertaining might be desirable properties for a narrative generator, but are inappropriate in a nursing report generator.",
"cite_spans": [
{
"start": 300,
"end": 301,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Type of quality being assessed",
"sec_num": "4.1.1"
},
{
"text": "We define the three classes as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of quality being assessed",
"sec_num": "4.1.1"
},
{
"text": "1. Correctness: For correctness criteria it is possible to state, generally for all outputs, the conditions under which outputs are maximally correct (hence of maximal quality). E.g. for Grammaticality, outputs are (maximally) correct if they contain no grammatical errors; for Semantic Completeness, outputs are correct if they express all the content in the input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of quality being assessed",
"sec_num": "4.1.1"
},
{
"text": "2. Goodness: For goodness criteria, in contrast to correctness criteria, there is no single, general mechanism for deciding when outputs are maximally good, only for deciding for two outputs which is better and which is worse. E.g. for Fluency, even if outputs contain no disfluencies, there may be other ways in which any given output could be more fluent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of quality being assessed",
"sec_num": "4.1.1"
},
{
"text": "3. Features: For criteria X in this class, outputs are not generally better if they are more X.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of quality being assessed",
"sec_num": "4.1.1"
},
{
"text": "Depending on evaluation context, more X may be better or less X may be better. E.g. outputs can be more specific or less specific, but it's not the case that outputs are, in the general case, better when they are more specific.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of quality being assessed",
"sec_num": "4.1.1"
},
{
"text": "Properties in this group capture which aspect of an output is being assessed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect of system output being assessed",
"sec_num": "4.1.2"
},
{
"text": "1. Form of output: Evaluations of this type aim to assess the form of outputs alone, e.g. Grammaticality is only about the form, a sentence can be grammatical yet be wrong or nonsensical in terms of content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect of system output being assessed",
"sec_num": "4.1.2"
},
{
"text": "2. Content of output: Evaluations aim to assess the content/meaning of the output alone, e.g. content; two sentences can be considered to have the same meaning, but differ in form. 3. Both form and content of output: Here, evaluations assess outputs as a whole, not distinguishing form from content. E.g. Coherence is a property of outputs as a whole, either form or meaning can detract from it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect of system output being assessed",
"sec_num": "4.1.2"
},
{
"text": "Properties in this group describe whether assessment of output quality involves a frame of reference in addition to the outputs themselves, i.e. whether the evaluation process also consults (refers to) anything else. We distinguish three cases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality with/without frame of reference",
"sec_num": "4.1.3"
},
{
"text": "1. Quality of output in its own right: assessing output quality without referring to anything other than the output itself, i.e. no systeminternal or external frame of reference. E.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality with/without frame of reference",
"sec_num": "4.1.3"
},
{
"text": "Poeticness is assessed by considering (just) the output and how poetic it is.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality with/without frame of reference",
"sec_num": "4.1.3"
},
{
"text": "2. Quality of output relative to the input: the quality of an output is assessed relative to the input. E.g. Answerability is the degree to which the output question can be answered from information in the input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality with/without frame of reference",
"sec_num": "4.1.3"
},
{
"text": "3. Quality of output relative to a systemexternal frame of reference: output quality is assessed with reference to system-external information, e.g. a knowledge base, a person's individual writing style, or an embedding system. E.g. Factual Accuracy assesses outputs relative to a source of real-world knowledge. Figure 1 shows how the quality-criterion properties combine to give 27 groups of quality criteria, numbered for ease of reference in subsequent sections. 6",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 321,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Quality with/without frame of reference",
"sec_num": "4.1.3"
},
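As a quick check of the combinatorics, three values for each of the three quality-criterion properties give 3 x 3 x 3 = 27 groups. A minimal sketch (ours, not the paper's; the ordering, and hence the numbering, is an assumption and need not match Figure 1 exactly):

```python
from itertools import product

quality_types = ["correctness", "goodness", "feature"]
output_aspects = ["form", "content", "form and content"]
frames_of_reference = [
    "output in its own right",
    "output relative to input",
    "output relative to external frame of reference",
]

groups = list(product(quality_types, output_aspects, frames_of_reference))
assert len(groups) == 27  # the 27 groups visualised in Figure 1

for number, (qtype, aspect, frame) in enumerate(groups, start=1):
    print(f"group {number:2d}: {qtype} of {aspect}, {frame}")
```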
{
"text": "Evaluation modes are orthogonal to quality criteria, i.e. any given quality criterion can in principle be combined with any of the modes (although some combinations are decidedly more frequent than others). We distinguish three evaluation modes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Mode Properties",
"sec_num": "4.2"
},
{
"text": "1. Objective vs. subjective: whether the evaluation involves an objective or a subjective assessment. Examples of objective assessment include any automatically counted or otherwise quantified measurements such as mouse-clicks, occurrences in text, etc. Subjective assessments involve ratings, opinions and preferences by evaluators. Some criteria lend themselves more readily to subjective assessments, e.g. Friendliness of a conversational agent, but an objective measure e.g. based on lexical markers is also conceivable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Mode Properties",
"sec_num": "4.2"
},
{
"text": "2. Absolute vs. relative: whether evaluators are shown outputs from a single system during evaluation (absolute), or from multiple systems in parallel (relative), in the latter case typically ranking or preference-judging them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Mode Properties",
"sec_num": "4.2"
},
{
"text": "3. Extrinsic vs. intrinsic: whether evaluation assesses quality of outputs in terms of theieffect on something external to the system, e.g. performance of an embedding system or of a user at a task (extrinisic), or not (intrinsic).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Mode Properties",
"sec_num": "4.2"
},
{
"text": "The properties in this section characterise how response values are obtained for a given evaluation measure defined by quality criterion and evaluation modes. We distinguish 12 properties:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Design properties",
"sec_num": "4.3"
},
{
"text": "1. System outputs: (1.1) number and (1.2) how selected for inclusion in evaluation. It is not possible to classify a sample of papers in terms of experimental design properties, because very few provide much of the information. The main relevance of the experimental design properties to the present context is that reproducibility in the narrowest sense (Section 6) assumes that experimental design is the same in the above sense. Table 1 gives example classifications using the proposed system for evaluation measures 7 from 19 different papers, alongside the criterion name used in the paper. Table 2 shows the corresponding definitions given in the paper (or other evidence if none provided), and maps each evaluation measure to one of the groups from Figure 1 . The two tables are divided into five groups, as indicated by the grey text inserts in Table 1 . Group 1 contains evaluation measures where the criterion name is the same (Fluency and Coherence, respectively), but quality-criterion properties differ. In conjunction with the definitions in Table 2 , this demonstrates that the three evaluation measures called Fluency in the papers in actual fact assess distinct aspects of quality, as do the four criteria called Coherence.",
"cite_spans": [],
"ref_spans": [
{
"start": 432,
"end": 439,
"text": "Table 1",
"ref_id": null
},
{
"start": 596,
"end": 603,
"text": "Table 2",
"ref_id": null
},
{
"start": 756,
"end": 764,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 853,
"end": 860,
"text": "Table 1",
"ref_id": null
},
{
"start": 1056,
"end": 1063,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Design properties",
"sec_num": "4.3"
},
{
"text": "The third example of Fluency in this group in fact assesses three distinct aspects of quality, which is likely to place a high cognitive load on evaluators. The three evaluation measures in Group 2 have identical classifications but different names. Based on our classifications and information in the original paper, these criteria are not actually distinct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Classifications",
"sec_num": "5"
},
{
"text": "The four evaluation measures in Group 3 present a similar case, with Reading time and Ease of reading on the one hand, and Task success and Usefulness on the other. However, here the evaluation modes are different within each pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Classifications",
"sec_num": "5"
},
{
"text": "Group 4 has two examples of feature-type criteria with different names but the same qualitycriterion classification; evaluation modes are different, with one involving system rankings (relative mode), and the other direct ratings (absolute mode). The names used (Text complexity and Simplicity) indicate two ends of the same scale, either one of which may be preferable depending on context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Classifications",
"sec_num": "5"
},
{
"text": "The evaluation measures in Group 5 involve quality criteria that appear at first glance closely related (see Table 2 ). What they have in common is that they assess aspects of the quality of referring expressions. However, none of the classifications are exactly the same, and we would argue that the criteria assess distinct aspects of quality: correct pronoun usage, identifiability of referents, and fast referent identification, the former two being correctness criteria, the latter a goodness criterion.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example Classifications",
"sec_num": "5"
},
{
"text": "The proposed classification system provides a basis for systematically comparing evaluation methods. We can see at least three contexts in which this is either a prerequisite or at least useful, as outlined in the next three subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Use Cases",
"sec_num": "6"
},
{
"text": "At present, in the majority of cases it is generally not clear enough from papers what quality criterion was evaluated in a human evaluation, one of the main conclusions we drew from our attempt to map quality criteria reported in papers to normalised terms and definitions in our extensive survey of human evaluation in NLG (Howcroft et al., 2020) . The example classifications we give in Tables 1 and 2 Miliaev et al. (2003) Usefulness goodness both external FoR subj. abs. intr. Qu and Green (2002) Task success goodness both external FoR obj. abs. extr. Group 4 -Equivalent names, same quality-criterion properties, different evaluation modes: Moraes et al. (2016) Text Complexity feature both none subj. rel. intr. Narayan and Gardent (2016) Simplicity feature both none subj. abs.",
"cite_spans": [
{
"start": 325,
"end": 348,
"text": "(Howcroft et al., 2020)",
"ref_id": null
},
{
"start": 406,
"end": 427,
"text": "Miliaev et al. (2003)",
"ref_id": "BIBREF27"
},
{
"start": 483,
"end": 502,
"text": "Qu and Green (2002)",
"ref_id": "BIBREF36"
},
{
"start": 649,
"end": 669,
"text": "Moraes et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 721,
"end": 747,
"text": "Narayan and Gardent (2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 390,
"end": 405,
"text": "Tables 1 and 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Design and reporting of evaluations",
"sec_num": "6.1"
},
{
"text": "Group 5 -Different names, different quality-criterion properties, different evaluation modes, related definitions: Chai and Wan (2020) Coreference correctness both none subj. abs. intr. Funakoshi et al. (2004) Accuracy correctness both external FoR obj. abs. extr. Gatt and Belz (2008) Identification Time goodness both external FoR obj abs extr Table 1 : Examples of human evaluations described according to the proposed classification system.",
"cite_spans": [
{
"start": 186,
"end": 209,
"text": "Funakoshi et al. (2004)",
"ref_id": "BIBREF15"
},
{
"start": 265,
"end": 285,
"text": "Gatt and Belz (2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 346,
"end": 353,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Design and reporting of evaluations",
"sec_num": "6.1"
},
{
"text": "Fluency some authors might take that to relate to both form and content. As things stand, it is often impossible to tell, because (a) there is not enough information provided in papers, and (b) even if there is, it is not described in shared terms. A related question is how well evaluators understand what they are being asked to evaluate. It is often assumed that aspects of quality like Fluency and Clarity, and the differences between them, are intuitively clear to evaluators, but how certain is this when good intra and inter-evaluator agreement is so hard to achieve (Belz and Kow, 2011) , and correlations between apparently very different criteria are so often in the high nineties (Section 3)? That researchers struggle to explain what to evaluate is also clear from definitions and prompts reported in papers which often define one quality criterion in terms of others (e.g. Rows 2, 3, 5 in Tables 1 and 2) , and use inconsistent language in quality criterion name, definition, and prompts. A shared classification system helps address both the above, (a) making clear what needs to be included in reports to convey what was evaluated, and (b) providing a basis for conveying to evaluators what aspect of quality they are expected to assess in such a way as to ensure multiple evaluators end up with the same interpretation as each other and as the designers of the experiment.",
"cite_spans": [
{
"start": 574,
"end": 594,
"text": "(Belz and Kow, 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 902,
"end": 917,
"text": "Tables 1 and 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Design and reporting of evaluations",
"sec_num": "6.1"
},
{
"text": "The standard way of validating a new automatic evaluation metric is to obtain system-level correlations with human assessments of the same set of system outputs, usually termed meta-evaluation. The expectation that a given metric should correlate with the human evaluation it is meta-evaluated against is not normally justified, but the implicit assumption is that they are measuring the same thing, for why else should they correlate?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meta-evaluation",
"sec_num": "6.2"
},
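To make the procedure concrete: system-level meta-evaluation reduces to correlating one metric score per system with one aggregated human score per system. Below is a minimal sketch with invented numbers and a hand-rolled Pearson r; neither the data nor the code comes from the paper.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# One score per system: hypothetical metric scores and mean human ratings.
metric_per_system = [0.31, 0.28, 0.35, 0.22, 0.30]
human_per_system  = [3.9, 3.5, 4.2, 3.1, 3.8]

print(f"system-level Pearson r = {pearson_r(metric_per_system, human_per_system):.3f}")
```

The point the paper makes is that before such an r is interpreted, it should be explicit which human scores (which quality criterion, evaluation mode and experimental design) the metric is being correlated against.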
{
"text": "For example, years of mixed results from metaevaluating BLEU against a wide variety of different human evaluations have resulted in conclusions that BLEU is not a good metric, or is not reliable enough, because it does not correlate consistently well with human evaluations. But why should a single metric be expected to correlate equally well with human assessments of quality criteria as distinct as Fluency and Accuracy (of content)? 2019fluency\" 10 (b) goodness of form of output iioR Chai and Wan (2020) Coherence \"measures if a question is coherent with previous ones\" 15 goodness of content relative to ext. FoR Barros et al. (2017) Coherence \"meaning of the generated sentence, [...] sentence with no meaning would be rated with 1 and a sentence with a full meaning would be rated with 5\" 4 correctness of content of output iioR Gatt and Belz (2008) Reading Time \"[time] from the point at which the description was presented, to the point at which a participant called up the next screen via mouse click\" 16 goodness of form/content of output iioR Forrest et al. (2018) Ease of Reading \"self-reported ease of reading of the explanation and interpretation\" 16 goodness of form/content of output iioR Miliaev et al. (2003) Usefulness 'how useful was the manual to cope with the task\" 18 goodness of form/content relative to ext. FoR Qu and Green (2002) Task success \"the degree of task success with respect to the user's original information need\" 18 goodness of form/content relative to ext. FoR Moraes et al. (2016) Text Complexity \"ability of the system on varying the text complexity as perceived by human readers.\" 25 complexity of form/content of output iioR Narayan and Gardent (2016) Simplicity \"How much does the generated sentence(s) simplify the complex input?\" 25 complexity of form/content of output iioR Chai and Wan (2020) Coreference \"measures if a question uses correct pronouns\" 7 correctness of form/content of output iioR Funakoshi et al. (2004) Accuracy \"rates at which subjects could identify the correct target objects from the given expressions\" 9 correctness of form/content relative to ext. FoR Gatt and Belz (2008) Identification Time \"[time] from the point at which pictures [...] were presented on the screen to the point where a participant identified a referent by clicking on it\" 18 goodness of form/content relative to ext. FoR Table 2: Companion table to Table 1 . Definitions/other evidence from each paper, suggested mapping to groups from Figure 1 , and gloss for each group (FoR = frame of reference; iioR = in its own right).",
"cite_spans": [
{
"start": 489,
"end": 508,
"text": "Chai and Wan (2020)",
"ref_id": "BIBREF8"
},
{
"start": 619,
"end": 639,
"text": "Barros et al. (2017)",
"ref_id": "BIBREF1"
},
{
"start": 686,
"end": 691,
"text": "[...]",
"ref_id": null
},
{
"start": 837,
"end": 857,
"text": "Gatt and Belz (2008)",
"ref_id": "BIBREF17"
},
{
"start": 1056,
"end": 1077,
"text": "Forrest et al. (2018)",
"ref_id": "BIBREF14"
},
{
"start": 1207,
"end": 1228,
"text": "Miliaev et al. (2003)",
"ref_id": "BIBREF27"
},
{
"start": 1339,
"end": 1358,
"text": "Qu and Green (2002)",
"ref_id": "BIBREF36"
},
{
"start": 1503,
"end": 1523,
"text": "Moraes et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 1824,
"end": 1843,
"text": "Chai and Wan (2020)",
"ref_id": "BIBREF8"
},
{
"start": 1948,
"end": 1971,
"text": "Funakoshi et al. (2004)",
"ref_id": "BIBREF15"
},
{
"start": 2127,
"end": 2147,
"text": "Gatt and Belz (2008)",
"ref_id": "BIBREF17"
},
{
"start": 2209,
"end": 2214,
"text": "[...]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 2367,
"end": 2402,
"text": "Table 2: Companion table to Table 1",
"ref_id": null
},
{
"start": 2482,
"end": 2490,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Meta-evaluation",
"sec_num": "6.2"
},
{
"text": "Even for conclusions about correlation with human assessments of individual quality criteria, such as that BLEU does not correlate consistently well with Fluency, the implicit assumption is that all evaluations assessing something called 'Fluency' in fact succeed in measuring the same thing. Looking at the first three rows of Tables 1 and 2 it is doubtful that we currently know whether or not BLEU does correlate consistently with Fluency. A shared classification system for human evaluation methods provides firmer ground for conclusions by helping establish which groups of human evaluations are similar enough to be expected to correlate similarly with a given metric, and even whether a given metric is similar enough to a given type of human evaluation to be expected to corre-late well with it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meta-evaluation",
"sec_num": "6.2"
},
{
"text": "In simple terms, reproducibility tests re-run existing evaluations in either the same way or with controlled differences to see if the results are the same. Beyond this, there is little agreement in NLP/ML, despite growing levels of interest in the subject of reproducibility over recent years. Not wishing to wade into the general debate, we use the definitions of the International Vocabulary of Metrology (VIM) (JCGM, 2012), where repeatability is the precision of measurements of the same or similar object obtained under the same conditions, as captured by a specified set of repeatability conditions, whereas reproducibility is the precision of measurements of the same or similar object obtained under different conditions, as captured by a specified set of reproducibility conditions. 8 The properties defined by the classification system proposed here can be straightforwardly used to serve as the set of repeatability/reproducibility conditions, for repeatability specifying the respects in which original and repeat measurements are controlled to be the same, and for reproducibility additionally specifying in which respects original and reproduction measurements differ.",
"cite_spans": [
{
"start": 793,
"end": 794,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reproducibility tests",
"sec_num": "6.3"
},
{
"text": "One step further would be to select nested subsets of properties to define different degrees of reproducibility, for example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reproducibility tests",
"sec_num": "6.3"
},
{
"text": "1. Reproducibility in the first degree: all 18 properties are the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reproducibility tests",
"sec_num": "6.3"
},
{
"text": "2. Reproducibility in the second degree: quality criteria properties and evaluation mode properties are the same, but some or all of the experimental design properties differ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reproducibility tests",
"sec_num": "6.3"
},
{
"text": "3. Reproducibility in the third degree: quality criteria properties are the same, but some or all of the evaluation mode properties and experimental design properties differ. Such degrees of reproducibility are similar in spirit to the four-way 'quadrants of reproducibility' proposed recently by Whitaker (2017) and adopted by Schloss (2018), but unlike them, the above approach (a) is not inherently limited to just two dimensions (data and code), and (b) does not attach 8 The ACM definitions are described as being based on VIM but it's not clear how exactly: https: //www.acm.org/publications/policies/ artifact-review-and-badging-current disputed labels (replicability, robustness, generalisability) to the different degrees.",
"cite_spans": [
{
"start": 474,
"end": 475,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reproducibility tests",
"sec_num": "6.3"
},
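These degrees can be read directly off the property classification by comparing which of the three property groups match between the original and the repeat evaluation. The sketch below is our own illustration, not the paper's; the property names and the dict-based encoding are assumptions.

```python
CRITERION_PROPS = {"quality_type", "output_aspect", "frame_of_reference"}
MODE_PROPS = {"subjective_vs_objective", "absolute_vs_relative", "intrinsic_vs_extrinsic"}
# Placeholder keys standing in for the 12 experimental design properties.
DESIGN_PROPS = {"num_outputs", "output_selection", "num_evaluators", "scale_type"}

def _same(a: dict, b: dict, props: set) -> bool:
    return all(a.get(p) == b.get(p) for p in props)

def reproducibility_degree(original: dict, repeat: dict) -> str:
    """Degree of reproducibility implied by which property groups two evaluations share."""
    if not _same(original, repeat, CRITERION_PROPS):
        return "not a reproduction (different quality criterion)"
    if not _same(original, repeat, MODE_PROPS):
        return "third degree (criterion same; mode and possibly design differ)"
    if not _same(original, repeat, DESIGN_PROPS):
        return "second degree (criterion and mode same; design differs)"
    return "first degree (all properties the same)"

# Example: identical criterion and modes, but a different rating scale.
original = {"quality_type": "goodness", "output_aspect": "form", "frame_of_reference": "none",
            "subjective_vs_objective": "subjective", "absolute_vs_relative": "absolute",
            "intrinsic_vs_extrinsic": "intrinsic",
            "num_outputs": 100, "num_evaluators": 3, "scale_type": "5-point"}
repeat = dict(original, scale_type="continuous slider")
print(reproducibility_degree(original, repeat))  # -> second degree
```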
{
"text": "The present paper is intended as a step towards full comparability of human evaluation methods in NLG. There are clear directions for further development. E.g. we have remained agnostic about what happens within the 27 groups of quality criteria defined by the proposed system (visualised in Figure 1 ). Do the groups map to 27 quality criteria that are enough for all evaluation contexts, merely needing to be 'localised' to a specific task and domain? This might work for correctness criteria and goodness criteria, but new criteria can be almost arbitrarily added to the feature-type groups.",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 300,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Future Work and Conclusions",
"sec_num": "7"
},
{
"text": "Another question is how to ensure that experimental design matches a chosen quality criterion and does not end up evaluating something else entirely. We have pointed to using the terms and definitions of the proposed classification properties in experimental design, but not given details of how this can be done. We can see relevance also to recent machine-learned evaluation metrics (to clarify what it is they are emulating). We plan to address the above lines of inquiry in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work and Conclusions",
"sec_num": "7"
},
{
"text": "While this paper proposes a standard way of classifying evaluation methods, we do not propose a standardised nomenclature of quality criterion names and definitions. If such a standard did become widely adopted in the field, it would go a long way towards addressing the issue of comparability. However, given the deeply ingrained habit in NLG of using ad-hoc, tailored evaluation methods that differ widely even within small NLG subfields, this seems unrealistic for now.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work and Conclusions",
"sec_num": "7"
},
{
"text": "Our aim in this paper has instead been to find a way of teasing apart the similarities and dissimilarities of evaluation methods used in the current, highly diverse context, to yield a set of clearly defined properties that provides a firm basis for designing and reporting evaluation methods, establishing comparability for meta-evaluation, and specifying repeatability/reproducibility conditions for reproducibility tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work and Conclusions",
"sec_num": "7"
},
{
"text": "Term initially used informally, defined in Section 3. 2 Quotes to indicate no specific meaning intended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See our survey of 20 years of human evaluations in NLG(Howcroft et al., 2020).4 Note that the examples in this section were chosen at random, not because they vary most widely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Exceptionally, a goodness/correctness criterion can become a feature, e.g. in expressionist poetry generation where less fluency might be better, as pointed out by a reviewer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The tree structure is just a way of showing how the groups relate to each other, we could have used a table instead.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We're not including experimental design properties for reasons explained in the preceding section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank our reviewers for their valuable input. Mille's contribution was supported by the European Commission under H2020 contracts 870930-RIA, 779962-RIA, 825079-RIA, 786731-RIA; Howcroft's under EPSRC project MaDrIgAL (EP/N017536/1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Linguist's assistant: A multi-lingual natural language generator based on linguistic universals, typologies, and primitives",
"authors": [
{
"first": "Tod",
"middle": [],
"last": "Allman",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Beale",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Denton",
"suffix": ""
}
],
"year": 2012,
"venue": "INLG 2012 Proceedings of the Seventh International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "59--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tod Allman, Stephen Beale, and Richard Denton. 2012. Linguist's assistant: A multi-lingual natural lan- guage generator based on linguistic universals, ty- pologies, and primitives. In INLG 2012 Proceedings of the Seventh International Natural Language Gen- eration Conference, pages 59-66, Utica, IL. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving the naturalness and expressivity of language generation for Spanish",
"authors": [
{
"first": "Cristina",
"middle": [],
"last": "Barros",
"suffix": ""
},
{
"first": "Dimitra",
"middle": [],
"last": "Gkatzia",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Lloret",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 10th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "41--50",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3505"
]
},
"num": null,
"urls": [],
"raw_text": "Cristina Barros, Dimitra Gkatzia, and Elena Lloret. 2017. Improving the naturalness and expressivity of language generation for Spanish. In Proceedings of the 10th International Conference on Natural Lan- guage Generation, pages 41-50, Santiago de Com- postela, Spain. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "System building cost vs. output quality in data-to-text generation",
"authors": [
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Kow",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "16--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anya Belz and Eric Kow. 2009. System building cost vs. output quality in data-to-text generation. In Pro- ceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009), pages 16-24.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Comparing rating scales and preference judgements in language evaluation",
"authors": [
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Kow",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 6th International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anya Belz and Eric Kow. 2010. Comparing rating scales and preference judgements in language eval- uation. In Proceedings of the 6th International Nat- ural Language Generation Conference.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Discrete vs. continuous rating scales for language evaluation in NLP",
"authors": [
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Kow",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "230--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anya Belz and Eric Kow. 2011. Discrete vs. continu- ous rating scales for language evaluation in NLP. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 230-235, Portland, Ore- gon, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Comparing automatic and human evaluation of nlg systems",
"authors": [
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2006,
"venue": "11th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anya Belz and Ehud Reiter. 2006. Comparing auto- matic and human evaluation of nlg systems. In 11th Conference of the European Chapter of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Findings of the 2016 conference on machine translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Jimeno"
],
"last": "Yepes",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "N\u00e9v\u00e9ol",
"suffix": ""
},
{
"first": "Mariana",
"middle": [],
"last": "Neves",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "131--198",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2301"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, An- tonio Jimeno Yepes, Philipp Koehn, Varvara Lo- gacheva, Christof Monz, Matteo Negri, Aur\u00e9lie N\u00e9v\u00e9ol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Spe- cia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131-198, Berlin, Ger- many. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Expertise style transfer: A new task towards better communication between experts and laymen",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Ruihao",
"middle": [],
"last": "Shui",
"suffix": ""
},
{
"first": "Liangming",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1061--1071",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.100"
]
},
"num": null,
"urls": [],
"raw_text": "Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan, Zhiyuan Liu, and Tat-Seng Chua. 2020. Expertise style transfer: A new task towards better communi- cation between experts and laymen. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1061-1071, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning to ask more: Semi-autoregressive sequential question generation under dual-graph interaction",
"authors": [
{
"first": "Zi",
"middle": [],
"last": "Chai",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "225--237",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.21"
]
},
"num": null,
"urls": [],
"raw_text": "Zi Chai and Xiaojun Wan. 2020. Learning to ask more: Semi-autoregressive sequential question generation under dual-graph interaction. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 225-237, Online. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Few-shot NLG with pre-trained language model",
"authors": [
{
"first": "Zhiyu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Harini",
"middle": [],
"last": "Eavani",
"suffix": ""
},
{
"first": "Wenhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yinyin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "183--190",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.18"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2020. Few-shot NLG with pre-trained language model. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 183-190, Online. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic poetry generation from prosaic text",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Van De Cruys",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2471--2480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Van de Cruys. 2020. Automatic poetry generation from prosaic text. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 2471-2480.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A hybrid approach to multidocument summarization of opinions in reviews",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Di Fabbrizio",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Natural Language Generation Conference (INLG)",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {
"DOI": [
"10.3115/v1/W14-4408"
]
},
"num": null,
"urls": [],
"raw_text": "Giuseppe Di Fabbrizio, Amanda Stent, and Robert Gaizauskas. 2014. A hybrid approach to multi- document summarization of opinions in reviews. In Proceedings of the 8th International Natural Lan- guage Generation Conference (INLG), pages 54-63, Philadelphia, Pennsylvania, U.S.A. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Evaluating the state-of-the-art of end-to-end natural language generation: The e2e nlg challenge",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2020,
"venue": "Computer Speech & Language",
"volume": "59",
"issue": "",
"pages": "123--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the state-of-the-art of end-to-end natural language generation: The e2e nlg challenge. Computer Speech & Language, 59:123-156.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG challenge",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2020,
"venue": "Computer Speech Language",
"volume": "59",
"issue": "",
"pages": "123--156",
"other_ids": {
"DOI": [
"10.1016/j.csl.2019.06.009"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG chal- lenge. Computer Speech Language, 59:123 -156.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Towards making NLG a voice for interpretable machine learning",
"authors": [
{
"first": "James",
"middle": [],
"last": "Forrest",
"suffix": ""
},
{
"first": "Somayajulu",
"middle": [],
"last": "Sripada",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Coghill",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "177--182",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6522"
]
},
"num": null,
"urls": [],
"raw_text": "James Forrest, Somayajulu Sripada, Wei Pang, and George Coghill. 2018. Towards making NLG a voice for interpretable machine learning. In Proceedings of the 11th International Conference on Natural Language Generation, pages 177-182, Tilburg University, The Netherlands. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Generating referring expressions using perceptual groups",
"authors": [
{
"first": "Kotaro",
"middle": [],
"last": "Funakoshi",
"suffix": ""
},
{
"first": "Satoru",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Naoko",
"middle": [],
"last": "Kuriyama",
"suffix": ""
},
{
"first": "Takenobu",
"middle": [],
"last": "Tokunaga",
"suffix": ""
}
],
"year": 2004,
"venue": "International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "51--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kotaro Funakoshi, Satoru Watanabe, Naoko Kuriyama, and Takenobu Tokunaga. 2004. Generating refer- ring expressions using perceptual groups. In Inter- national Conference on Natural Language Genera- tion, pages 51-60, Brockenhurst, UK. Springer.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The webnlg challenge: Generating text from rdf data",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 10th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "124--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from rdf data. In Pro- ceedings of the 10th International Conference on Natural Language Generation, pages 124-133.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Attribute selection for referring expression generation: New algorithms and evaluation methods",
"authors": [
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Fifth International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "50--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Albert Gatt and Anya Belz. 2008. Attribute selection for referring expression generation: New algorithms and evaluation methods. In Proceedings of the Fifth International Natural Language Generation Confer- ence, pages 50-58, Salt Fork, Ohio, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural generation of diverse questions using answer focus, contextual and linguistic features",
"authors": [
{
"first": "Vrindavan",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "296--306",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6536"
]
},
"num": null,
"urls": [],
"raw_text": "Vrindavan Harrison and Marilyn Walker. 2018. Neu- ral generation of diverse questions using answer fo- cus, contextual and linguistic features. In Proceed- ings of the 11th International Conference on Natural Language Generation, pages 296-306, Tilburg Uni- versity, The Netherlands. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Emiel van Miltenburg, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions",
"authors": [
{
"first": "David",
"middle": [],
"last": "Howcroft",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Miruna",
"middle": [],
"last": "Clinciu",
"suffix": ""
},
{
"first": "Dimitra",
"middle": [],
"last": "Gkatzia",
"suffix": ""
},
{
"first": "Sadid",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Saad",
"middle": [],
"last": "Mahamood",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Sashank",
"middle": [],
"last": "Santhanam",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "van Miltenburg",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 13th International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Howcroft, Anya Belz, Miruna Clinciu, Dimi- tra Gkatzia, Sadid Hasan, Saad Mahamood, Simon Mille, Sashank Santhanam, Emiel van Miltenburg, and Verena Rieser. 2020. Twenty years of confu- sion in human evaluation: NLG needs evaluation sheets and standardised definitions. In Proceedings of the 13th International Natural Language Genera- tion Conference.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Joint Committee for Guides in Metrology",
"authors": [],
"year": null,
"venue": "JCGM. 2012. International vocabulary of metrology: Basic and general concepts and associated terms (VIM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joint Committee for Guides in Metrology, JCGM. 2012. International vocabulary of metrology: Basic and general concepts and associated terms (VIM).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "ViGGO: A video game corpus for data-totext generation in open-domain conversation",
"authors": [
{
"first": "Juraj",
"middle": [],
"last": "Juraska",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Bowden",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "164--172",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8623"
]
},
"num": null,
"urls": [],
"raw_text": "Juraj Juraska, Kevin Bowden, and Marilyn Walker. 2019. ViGGO: A video game corpus for data-to- text generation in open-domain conversation. In Proceedings of the 12th International Conference on Natural Language Generation, pages 164-172, Tokyo, Japan. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Best practices for the human evaluation of automatically generated text",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Van Der Lee",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Van Miltenburg",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Wubben",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "355--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris van der Lee, Albert Gatt, Emiel Van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th Interna- tional Conference on Natural Language Generation, pages 355-368.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Results of the wmt19 metrics shared task: Segment-level and strong mt systems pose big challenges",
"authors": [
{
"first": "Qingsong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Johnny",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "62--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingsong Ma, Johnny Wei, Ond\u0159ej Bojar, and Yvette Graham. 2019. Results of the wmt19 metrics shared task: Segment-level and strong mt systems pose big challenges. In Proceedings of the Fourth Confer- ence on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62-90.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Statistical natural language generation from tabular non-textual data",
"authors": [
{
"first": "Joy",
"middle": [],
"last": "Mahapatra",
"suffix": ""
},
{
"first": "Sudip",
"middle": [],
"last": "Kumar Naskar",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 9th International Natural Language Generation conference",
"volume": "",
"issue": "",
"pages": "143--152",
"other_ids": {
"DOI": [
"10.18653/v1/W16-6624"
]
},
"num": null,
"urls": [],
"raw_text": "Joy Mahapatra, Sudip Kumar Naskar, and Sivaji Bandy- opadhyay. 2016. Statistical natural language genera- tion from tabular non-textual data. In Proceedings of the 9th International Natural Language Generation conference, pages 143-152, Edinburgh, UK. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Tangled up in bleu: Reevaluating the evaluation of automatic machine translation evaluation metrics",
"authors": [
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.06264"
]
},
"num": null,
"urls": [],
"raw_text": "Nitika Mathur, Tim Baldwin, and Trevor Cohn. 2020. Tangled up in bleu: Reevaluating the evaluation of automatic machine translation evaluation metrics. arXiv preprint arXiv:2006.06264.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Abstract Meaning Representation parsing and generation",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Priyadarshi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "9",
"issue": "",
"pages": "536--545",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2090"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan May and Jay Priyadarshi. 2017. SemEval- 2017 task 9: Abstract Meaning Representation parsing and generation. In Proceedings of the 11th International Workshop on Semantic Evalua- tion (SemEval-2017), pages 536-545, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Applied NLG system evaluation: Flexy-CAT",
"authors": [
{
"first": "Nestor",
"middle": [],
"last": "Miliaev",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Cawsey",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Michaelson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 9th European Workshop on Natural Language Generation (ENLG-2003) at EACL 2003",
"volume": "",
"issue": "",
"pages": "55--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nestor Miliaev, Alison Cawsey, and Greg Michael- son. 2003. Applied NLG system evaluation: Flexy- CAT. In Proceedings of the 9th European Workshop on Natural Language Generation (ENLG-2003) at EACL 2003, pages 55-62, Sofia, Bulgaria.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The First Multilingual Surface Realisation Shared Task (SR'18): Overview and Evaluation Results",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 1st Workshop on Multilingual Surface Realisation (MSR), 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Anya Belz, Bernd Bohnet, Yvette Gra- ham, Emily Pitler, and Leo Wanner. 2018. The First Multilingual Surface Realisation Shared Task (SR'18): Overview and Evaluation Results. In Pro- ceedings of the 1st Workshop on Multilingual Sur- face Realisation (MSR), 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1-12, Melbourne, Australia.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The second multilingual surface realisation shared task (SR'19): Overview and evaluation results",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019)",
"volume": "",
"issue": "",
"pages": "1--17",
"other_ids": {
"DOI": [
"10.18653/v1/D19-6301"
]
},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Anya Belz, Bernd Bohnet, Yvette Gra- ham, and Leo Wanner. 2019. The second mul- tilingual surface realisation shared task (SR'19): Overview and evaluation results. In Proceedings of the 2nd Workshop on Multilingual Surface Realisa- tion (MSR 2019), pages 1-17, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Enabling text readability awareness during the micro planning phase of NLG applications",
"authors": [
{
"first": "Priscilla",
"middle": [],
"last": "Moraes",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Carberry",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 9th International Natural Language Generation conference",
"volume": "",
"issue": "",
"pages": "121--131",
"other_ids": {
"DOI": [
"10.18653/v1/W16-6621"
]
},
"num": null,
"urls": [],
"raw_text": "Priscilla Moraes, Kathleen Mccoy, and Sandra Car- berry. 2016. Enabling text readability awareness dur- ing the micro planning phase of NLG applications. In Proceedings of the 9th International Natural Lan- guage Generation conference, pages 121-131, Edin- burgh, UK. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Generating and validating abstracts of meeting conversations: a user study",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 6th International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "105--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Murray, Giuseppe Carenini, and Raymond Ng. 2010. Generating and validating abstracts of meet- ing conversations: a user study. In Proceedings of the 6th International Natural Language Generation Conference, pages 105-113, Dublin, Ireland.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Unsupervised sentence simplification using deep semantics",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 9th International Natural Language Generation conference",
"volume": "",
"issue": "",
"pages": "111--120",
"other_ids": {
"DOI": [
"10.18653/v1/W16-6620"
]
},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan and Claire Gardent. 2016. Unsuper- vised sentence simplification using deep semantics. In Proceedings of the 9th International Natural Lan- guage Generation conference, pages 111-120, Edin- burgh, UK. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Why we need new evaluation metrics for NLG",
"authors": [
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Amanda",
"middle": [
"Cercas"
],
"last": "Curry",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2241--2252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Duc in context. Information Processing Management",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Over",
"suffix": ""
},
{
"first": "Hoa",
"middle": [],
"last": "Dang",
"suffix": ""
},
{
"first": "Donna",
"middle": [],
"last": "Harman",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "43",
"issue": "",
"pages": "1506--1520",
"other_ids": {
"DOI": [
"10.1016/j.ipm.2007.01.019"
]
},
"num": null,
"urls": [],
"raw_text": "Paul Over, Hoa Dang, and Donna Harman. 2007. Duc in context. Information Processing Management, 43(6):1506 -1520. Text Summarization.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Semantic graphs for generating deep questions",
"authors": [
{
"first": "Liangming",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Yuxi",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1463--1475",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.135"
]
},
"num": null,
"urls": [],
"raw_text": "Liangming Pan, Yuxi Xie, Yansong Feng, Tat-Seng Chua, and Min-Yen Kan. 2020. Semantic graphs for generating deep questions. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 1463-1475, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A constraint-based approach for cooperative information-seeking dialogue",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Green",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "136--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Qu and Nancy Green. 2002. A constraint-based approach for cooperative information-seeking dia- logue. In Proceedings of the International Natural Language Generation Conference, pages 136-143, Harriman, New York, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A structured review of the validity of BLEU",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "3",
"pages": "393--401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter. 2018. A structured review of the validity of BLEU. Computational Linguistics, 44(3):393- 401.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "An investigation into the validity of some metrics for automatically evaluating natural language generation systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "4",
"pages": "529--558",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Anya Belz. 2009. An investiga- tion into the validity of some metrics for automat- ically evaluating natural language generation sys- tems. Computational Linguistics, 35(4):529-558.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Towards best experiment design for evaluating dialogue system output",
"authors": [
{
"first": "Sashank",
"middle": [],
"last": "Santhanam",
"suffix": ""
},
{
"first": "Samira",
"middle": [],
"last": "Shaikh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "88--94",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8610"
]
},
"num": null,
"urls": [],
"raw_text": "Sashank Santhanam and Samira Shaikh. 2019. To- wards best experiment design for evaluating dia- logue system output. In Proceedings of the 12th International Conference on Natural Language Gen- eration, pages 88-94, Tokyo, Japan. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Identifying and overcoming threats to reproducibility, replicability, robustness, and generalizability in microbiome research",
"authors": [
{
"first": "Patrick",
"middle": [
"D"
],
"last": "Schloss",
"suffix": ""
}
],
"year": 2018,
"venue": "MBio",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick D Schloss. 2018. Identifying and overcoming threats to reproducibility, replicability, robustness, and generalizability in microbiome research. MBio, 9(3).",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "BLEURT: Learning robust metrics for text generation",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ankur P",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.04696"
]
},
"num": null,
"urls": [],
"raw_text": "Thibault Sellam, Dipanjan Das, and Ankur P Parikh. 2020. BLEURT: Learning robust metrics for text generation. arXiv preprint arXiv:2004.04696.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "WebNLG challenge: Human evaluation results",
"authors": [
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anastasia Shimorina, Claire Gardent, Shashi Narayan, and Laura Perez-Beltrachini. 2018. WebNLG chal- lenge: Human evaluation results. Technical report, Technical report.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Overgeneration and ranking for spoken dialogue systems",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Varges",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fourth International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "20--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Varges. 2006. Overgeneration and ranking for spoken dialogue systems. In Proceedings of the Fourth International Natural Language Generation Conference, pages 20-22, Sydney, Australia. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Darpa communicator dialog travel planning systems: The june 2000 data collection",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Boland",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bratt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Garofolo",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Sungbok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
}
],
"year": 2001,
"venue": "Seventh European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn Walker, J Aberdeen, J Boland, E Bratt, J Garo- folo, Lynette Hirschman, A Le, Sungbok Lee, Shrikanth Narayanan, Kishore Papineni, et al. 2001. Darpa communicator dialog travel planning systems: The june 2000 data collection. In Seventh European Conference on Speech Communication and Technol- ogy.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Towards faithful neural table-to-text generation with content-matching constraints",
"authors": [
{
"first": "Zhenyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaoyang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bang",
"middle": [],
"last": "An",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Changyou",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1072--1086",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.101"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenyi Wang, Xiaoyang Wang, Bang An, Dong Yu, and Changyou Chen. 2020. Towards faithful neural table-to-text generation with content-matching con- straints. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 1072-1086, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Publishing a reproducible paper. Presentation at Open Science in Practice Summer School",
"authors": [
{
"first": "Kirstie",
"middle": [],
"last": "Whitaker",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kirstie Whitaker. 2017. Publishing a reproducible paper. Presentation at Open Science in Practice Summer School, https://www.cs.mcgill.ca/ jpineau/ReproducibilityChecklist.pdf.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Abstractive compression of captions with attentive recurrent neural networks",
"authors": [
{
"first": "Sander",
"middle": [],
"last": "Wubben",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Krahmer",
"suffix": ""
},
{
"first": "Antal",
"middle": [],
"last": "van den Bosch",
"suffix": ""
},
{
"first": "Suzan",
"middle": [],
"last": "Verberne",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 9th International Natural Language Generation conference",
"volume": "",
"issue": "",
"pages": "41--50",
"other_ids": {
"DOI": [
"10.18653/v1/W16-6608"
]
},
"num": null,
"urls": [],
"raw_text": "Sander Wubben, Emiel Krahmer, Antal van den Bosch, and Suzan Verberne. 2016. Abstractive compression of captions with attentive recurrent neural networks. In Proceedings of the 9th International Natural Lan- guage Generation conference, pages 41-50, Edin- burgh, UK. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Towards coherent and engaging spoken dialog response generation using automatic conversation evaluators",
"authors": [
{
"first": "Sanghyun",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Khatri",
"suffix": ""
},
{
"first": "Alessandra",
"middle": [],
"last": "Cervone",
"suffix": ""
},
{
"first": "Tagyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Behnam",
"middle": [],
"last": "Hedayatnia",
"suffix": ""
},
{
"first": "Anu",
"middle": [],
"last": "Venkatesh",
"suffix": ""
},
{
"first": "Raefer",
"middle": [],
"last": "Gabriel",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "65--75",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8608"
]
},
"num": null,
"urls": [],
"raw_text": "Sanghyun Yi, Rahul Goel, Chandra Khatri, Alessan- dra Cervone, Tagyoung Chung, Behnam Hedayatnia, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani- Tur. 2019. Towards coherent and engaging spoken dialog response generation using automatic conver- sation evaluators. In Proceedings of the 12th In- ternational Conference on Natural Language Gen- eration, pages 65-75, Tokyo, Japan. Association for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Review-based question generation with adaptive instance transfer and augmentation",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lidong",
"middle": [],
"last": "Bing",
"suffix": ""
},
{
"first": "Qiong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "Luo",
"middle": [],
"last": "Si",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "280--290",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.26"
]
},
"num": null,
"urls": [],
"raw_text": "Qian Yu, Lidong Bing, Qiong Zhang, Wai Lam, and Luo Si. 2020. Review-based question generation with adaptive instance transfer and augmentation. In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, pages 280- 290, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Towards automatic generation of product reviews from aspectsentiment scores",
"authors": [
{
"first": "Hongyu",
"middle": [],
"last": "Zang",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 10th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3526"
]
},
"num": null,
"urls": [],
"raw_text": "Hongyu Zang and Xiaojun Wan. 2017. Towards au- tomatic generation of product reviews from aspect- sentiment scores. In Proceedings of the 10th In- ternational Conference on Natural Language Gen- eration, pages 168-177, Santiago de Compostela, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Exploring contextual word-level style relevance for unsupervised style transfer",
"authors": [
{
"first": "Chulun",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Liangyu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jiachen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xinyan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7135--7144",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.639"
]
},
"num": null,
"urls": [],
"raw_text": "Chulun Zhou, Liangyu Chen, Jiachen Liu, Xinyan Xiao, Jinsong Su, Sheng Guo, and Hua Wu. 2020. Exploring contextual word-level style relevance for unsupervised style transfer. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 7135-7144, Online. As- sociation for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Quality-criterion properties and the 27 different groupings they define (FoR = frame of reference).",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Wang et al. (2020) Faithfulness\"A sentence is faithful if it contains only information supported by the table.[...] Also, the generated sentence should cover as much information in the given table as possible.",
"uris": null,
"type_str": "figure"
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Paper</td><td>Criterion Name in Paper</td><td colspan=\"2\">quality-criterion properties Type of Qual-Form/ Frame of Ref-ity Content erence (FoR)</td><td colspan=\"3\">Evaluation Mode obj. / abs. extr. subj. / rel. / intr.</td></tr><tr><td>Group 1 -Yu et al. (2020) Van de Cruys (2020) Pan et al. (2020)</td><td>Fluency Fluency Fluency</td><td>goodness correctness correctness</td><td>form form (a) form (b) content (b) none none none (a) none (c) content (c) external FoR</td><td>subj. subj. subj.</td><td>abs. abs. abs.</td><td>intr. intr. intr.</td></tr><tr><td colspan=\"5\">Van de Cruys (2020) Juraska et al. (2019) Chai and Wan (2020) Barros et al. (2017) Group 2 -Different names, same quality-criterion properties, same evaluation modes: Coherence goodness content none Coherence (a) correctness form none (b) goodness Coherence goodness content external FoR Coherence correctness content none Wang et al. (2020) Faithfulness correctness content FoR = input Cao et al. (2020) Content Similarity correctness content FoR = input Zhou et al. (2020) Content correctness content FoR = input Preservation Group 3 -Different names, same quality-criterion properties, different evaluation modes (2 example sets): subj. subj. subj. subj. obj. obj. obj. Gatt and Belz (2008) Reading Time goodness both none obj Forrest et al. (2018) Ease of Reading goodness both none subj.</td><td>abs. abs. abs. abs. abs. abs. abs. abs abs.</td><td>intr. intr. intr. intr. intr. intr. intr. extr intr.</td></tr></table>",
"text": "represent our interpretation of the information provided in each paper, but the authors may have intended slightly different meanings, e.g. for Same name, different quality-criterion properties, same evaluation modes (2 example sets):",
"html": null
}
}
}
}