{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:29:03.946118Z" }, "title": "Evaluating Semantic Accuracy of Data-to-Text Generation with Natural Language Inference", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "country": "Czechia" } }, "email": "odusek@ufal.mff.cuni.cz" }, { "first": "Zden\u011bk", "middle": [], "last": "Kasner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "country": "Czechia" } }, "email": "kasner@ufal.mff.cuni.cz" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A major challenge in evaluating data-to-text (D2T) generation is measuring the semantic accuracy of the generated text, i.e. checking if the output text contains all and only facts supported by the input data. We propose a new metric for evaluating the semantic accuracy of D2T generation based on a neural model pretrained for natural language inference (NLI). We use the NLI model to check textual entailment between the input data and the output text in both directions, allowing us to reveal omissions or hallucinations. Input data are converted to text for NLI using trivial templates. Our experiments on two recent D2T datasets show that our metric can achieve high accuracy in identifying erroneous system outputs.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "A major challenge in evaluating data-to-text (D2T) generation is measuring the semantic accuracy of the generated text, i.e. checking if the output text contains all and only facts supported by the input data. We propose a new metric for evaluating the semantic accuracy of D2T generation based on a neural model pretrained for natural language inference (NLI). We use the NLI model to check textual entailment between the input data and the output text in both directions, allowing us to reveal omissions or hallucinations. Input data are converted to text for NLI using trivial templates. Our experiments on two recent D2T datasets show that our metric can achieve high accuracy in identifying erroneous system outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural models may reduce the effort for building natural language generation (NLG) systems and produce very natural outputs, at the cost of limited control over the model outputs. State-of-the-art neural D2T models are prone to omitting or hallucinating facts (Gehrmann et al., 2018; Castro Ferreira et al., 2019; Du\u0161ek et al., 2020) , which restricts their real-world deployment. Recognizing these errors is thus essential for proper system evaluation and further research in D2T generation.", "cite_spans": [ { "start": 260, "end": 283, "text": "(Gehrmann et al., 2018;", "ref_id": "BIBREF5" }, { "start": 284, "end": 313, "text": "Castro Ferreira et al., 2019;", "ref_id": "BIBREF0" }, { "start": 314, "end": 333, "text": "Du\u0161ek et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In general, evaluating the semantic accuracy of D2T generation outputs requires full natural language understanding. Minor changes in wording may cause major differences in the meaning of the text, making it difficult for handcrafted heuristics to cover all edge cases. 
Human evaluation, on the other hand, is expensive and difficult to scale. We note that the task of checking if a generated sentence includes/entails a particular fact is very close to the task of natural language inference (NLI). NLI is a sequence classification task which takes two inputs, a hypothesis and a premise, and produces one of three possible outputs: the hypothesis is entailed by (follows from) the premise, contradicts the premise, or their relation is neutral. Recently, neural models for NLI (Zhang et al., 2020b; Liu et al., 2019a,b) reached near-human levels of performance and NLI was used for evaluating the output of abstractive summarization systems (Maynez et al., 2020) .", "cite_spans": [ { "start": 774, "end": 795, "text": "(Zhang et al., 2020b;", "ref_id": "BIBREF30" }, { "start": 796, "end": 816, "text": "Liu et al., 2019a,b)", "ref_id": null }, { "start": 938, "end": 959, "text": "(Maynez et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This raises a question: Can we use an NLI model for evaluating the semantic accuracy of D2T outputs? The main idea of our method is to check with a general pretrained NLI model if the semantic information implied by the input data and the generated text is equal. We achieve this by using the NLI model to check for entailment in two directions: By inferring input facts from the generated text we can check for omissions, while the other direction allows us to check for hallucinations. 1 For instance, consider the two input facts from Figure 1: (Blue Spice | eat_type | pub), (Blue Spice | area | riverside) and the generated text: \"You can bring your kids to Blue Spice in the riverside area.\" An NLI system should detect that the first fact is not entailed by the text (there is no mention of Blue Spice being a pub), but the text is also not entailed by the facts (the information about kids is hallucinated).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Applying NLI to the D2T task poses a problem: The hypothesis for the standard NLI task is a natural language text, but the input for D2T generation is structured. However, we show that we can easily sidestep this issue by transforming the data into text using a trivial template for each fact.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We demonstrate that even without any human references or in-domain training and with minimal handcrafting, our approach achieves high accuracy (>90%) on the E2E Challenge data (Du\u0161ek et al., 2020) , competitive with scripts specifically handcrafted for the domain, and produces useful results (>75% accuracy) on the more challenging WebNLG dataset (Gardent et al., 2017) . A manual error analysis shows that some instances marked as errors were in fact assessed correctly by our metric; we also identified a few major sources of errors that can be mitigated by in-domain tuning. The experimental code for our metric is now available on GitHub. 
2", "cite_spans": [ { "start": 176, "end": 196, "text": "(Du\u0161ek et al., 2020)", "ref_id": "BIBREF1" }, { "start": 348, "end": 370, "text": "(Gardent et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Automatic Evaluation of NLG NLG outputs were traditionally evaluated by reference-based metrics measuring n-gram overlap with a reference, such as BLEU (Papineni et al., 2002) , ROUGE (Lin, 2004) and METEOR (Lavie and Agarwal, 2007) . Alternative, referenceless quality estimation metrics based on language model scores (Kann et al., 2018) or linguistic features (Tian et al., 2018) focus on fluency and do not consider semantic accuracy. Recent works try to estimate NLG output quality with finetuned pretrained models (Zhou and Xu, 2020; Zhang et al., 2020a; Sellam et al., 2020) . The score from these models can capture some aspects of semantic accuracy, but only implicitly.", "cite_spans": [ { "start": 152, "end": 175, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF19" }, { "start": 184, "end": 195, "text": "(Lin, 2004)", "ref_id": "BIBREF12" }, { "start": 207, "end": 232, "text": "(Lavie and Agarwal, 2007)", "ref_id": "BIBREF11" }, { "start": 320, "end": 339, "text": "(Kann et al., 2018)", "ref_id": "BIBREF9" }, { "start": 363, "end": 382, "text": "(Tian et al., 2018)", "ref_id": "BIBREF25" }, { "start": 520, "end": 539, "text": "(Zhou and Xu, 2020;", "ref_id": "BIBREF31" }, { "start": 540, "end": 560, "text": "Zhang et al., 2020a;", "ref_id": "BIBREF29" }, { "start": 561, "end": 581, "text": "Sellam et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Semantic Accuracy To our knowledge, there is no generally accepted automatic metric for explicitly measuring semantic accuracy of NLG outputs. The closest commonly used metric is the slot error rate, which is typically based on pattern matching tailored for a given dataset (Reed et al., 2018; Mi et al., 2019; Du\u0161ek et al., 2020) . Recently, Goodrich et al. (2019) introduced a metric based on training a neural model on named-entity recognition and fact extraction.", "cite_spans": [ { "start": 274, "end": 293, "text": "(Reed et al., 2018;", "ref_id": "BIBREF20" }, { "start": 294, "end": 310, "text": "Mi et al., 2019;", "ref_id": "BIBREF16" }, { "start": 311, "end": 330, "text": "Du\u0161ek et al., 2020)", "ref_id": "BIBREF1" }, { "start": 343, "end": 365, "text": "Goodrich et al. (2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Faithful NLG Some recent neural NLG systems train specifically for semantic accuracy (Nie et al., 2019; Tian et al., 2019; Kedzie and McKeown, 2019) . Similarly to us, Harkous et al. (2020) use a pretrained neural model as a classifier to detect inaccurate output, finetuning the classifier on manually augmented domain-specific data.", "cite_spans": [ { "start": 85, "end": 103, "text": "(Nie et al., 2019;", "ref_id": "BIBREF17" }, { "start": 104, "end": 122, "text": "Tian et al., 2019;", "ref_id": "BIBREF24" }, { "start": 123, "end": 148, "text": "Kedzie and McKeown, 2019)", "ref_id": "BIBREF10" }, { "start": 168, "end": 189, "text": "Harkous et al. 
(2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Unlike previous works, we use a pretrained neural model finetuned for NLI which we do not fur-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We use pretrained RoBERTa (Liu et al., 2019b) as implemented in the Transformers library (Wolf et al., 2020 ) for our NLI model. Specifically, we use the roberta-large-mnli 3 checkpoint, which was finetuned on the MultiNLI dataset (Williams et al., 2018) . We use the model as is, without any further training. Given a premise text and a hypothesis text, the NLI model produces a probability distribution over three results: contradiction, neutral and entailment (cf. Section 1). We consider a NLI check as passed if the probability for entailment is the highest of the three.", "cite_spans": [ { "start": 26, "end": 45, "text": "(Liu et al., 2019b)", "ref_id": "BIBREF14" }, { "start": 89, "end": 107, "text": "(Wolf et al., 2020", "ref_id": null }, { "start": 231, "end": 254, "text": "(Williams et al., 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "NLI Model", "sec_num": "3.1" }, { "text": "The input to our metric is a set of facts (the input for a D2T system) and the corresponding verbalization of these facts (the output from a D2T system). In our setup, the facts are RDF-like triples in the subject-predicate-object form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preparation", "sec_num": "3.2" }, { "text": "We convert each triple to natural language using a trivial template. We consider two cases:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preparation", "sec_num": "3.2" }, { "text": "(1) Default: The templates can be handcrafted or extracted from the NLG systems' training data for each predicate. (2) Backoff: We use only a single, universal \"backoff\" template for all the facts, in the form: The of is . Hereinafter, a fact refers to a template filled with the values from the triple.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preparation", "sec_num": "3.2" }, { "text": "The generated text is said to be correct if it mentions all and only the input facts. We check if the text contains any omissions or hallucinations in two steps (see Figure 1 for an example):", "cite_spans": [], "ref_spans": [ { "start": 166, "end": 174, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Process", "sec_num": "3.3" }, { "text": "(1) To check for omissions, we use the whole generated text as a premise and sequentially feed each fact as a hypothesis to the NLI model. Any failed NLI check is considered an omission. While we could use all concatenated facts in a single NLI check, our approach gives us further information about which facts are omitted. (2) To check for hallucinations, we use a concatenation of all facts as a premise and feed the generated text as a hypothesis to the NLI model. If this NLI check fails, the text is considered to Figure 1 : An example of evaluating the output from a D2T system with our metric. The generated text is used as a premise (P) to check for omissions and as a hypothesis (H) to check for hallucinations. 
The NLI model generates probabilities for contradiction (C), neutral (N) and entailment (E).", "cite_spans": [], "ref_spans": [ { "start": 520, "end": 528, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Process", "sec_num": "3.3" }, { "text": "The final output of our metric is either 4-way (denoted as FINE): OK (i.e., all NLI checks passed), omission, hallucination or omission+hallucination (based on the failed checks), or 2-way (denoted as ROUGH), where the latter three results are collapsed into not_OK. The FINE 4-way output is more useful for system evaluation (we can distinguish whether the system tends to hallucinate or omit information). The ROUGH 2-way output corresponds more to a usage inside an NLG system for output reranking or filtering: any output that is not_OK should be penalized/filtered out. Additionally, we compute a confidence score of the model as the minimum of all the entailment probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Process", "sec_num": "3.3" }, { "text": "We experiment with two recent English data-to-text datasets with a triple-like format: WebNLG (Gardent et al., 2017) and E2E (Novikova et al., 2017) . 4 Since both of them were used in shared tasks, sets of system outputs and measures of semantic accuracy are available (see Supplementary for details).", "cite_spans": [ { "start": 93, "end": 115, "text": "(Gardent et al., 2017)", "ref_id": "BIBREF4" }, { "start": 124, "end": 147, "text": "(Novikova et al., 2017)", "ref_id": "BIBREF18" }, { "start": 150, "end": 151, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "For WebNLG, we compare our metric with crowdsourced human ratings of semantic adequacy (Shimorina et al., 2019) . Human annotators used a three-point Likert scale (1 = Incorrect, 2 = Medium, 3 = Correct) and answers are averaged over multiple annotators. In our experiments discussed in Section 5.1, we consider a sentence correct if it achieved a human rating of 2.5 or higher (we also tried a threshold of 2.0, with slightly worse results).", "cite_spans": [ { "start": 87, "end": 111, "text": "(Shimorina et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "For the E2E dataset, the challenge results were checked for semantic accuracy using a handcrafted automatic script (Du\u0161ek et al., 2020 ), 5 we therefore use this automatic script as the ground truth for evaluating our metric in Section 5.2. We further use small sets of system outputs and human-written texts with expert annotation (provided by Du\u0161ek et al., 2019) to evaluate our approach against gold-standard annotation and to compare to existing semantic accuracy classifiers for E2E data in Section 5.3. We evaluate the Default and Backoff approaches to acquiring templates as described in Section 3.2. The Default setup works with one custom template per predicate type. For WebNLG, we obtained templates by delexicalizing human references for single-triple examples from WebNLG training data. 6 For E2E, we handcrafted 8 templates. 
The templates are filled with values from individual input triples and concatenated for multi-triple inputs as described in Section 3.3.", "cite_spans": [ { "start": 115, "end": 134, "text": "(Du\u0161ek et al., 2020", "ref_id": "BIBREF1" }, { "start": 344, "end": 363, "text": "Du\u0161ek et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "We evaluate our metric in terms of accuracy, precision, recall, and F1-measure (where not_OK samples are treated as positive since we focus on detecting errors). We additionally perform a manual error analysis on a random sample of 100 error examples for each dataset, i.e. examples where our metric gave a different assessment from the ground truth (provided by crowdsourced annotation for WebNLG and by a handcrafted classification script for E2E as described in Section 4). Table 1: WebNLG dataset results, compared to crowdsourced human ratings (A = accuracy, R = recall, P = precision, F1 = F-measure, \u03c1 = Spearman correlation of confidence scores with human scores). Table 2: E2E dataset results, compared to the automatic evaluation script (Af = FINE accuracy, Ar = ROUGH accuracy, R = recall, P = precision, F1 = F-measure).", "cite_spans": [], "ref_spans": [ { "start": 441, "end": 448, "text": "Table 1", "ref_id": null }, { "start": 638, "end": 645, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results Analysis", "sec_num": "5" }, { "text": "In general, the results are well above the random baseline (0.5 for the ROUGH metric and 0.25 for the FINE metric) but differ between the datasets, which we discuss below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results Analysis", "sec_num": "5" }, { "text": "The overall scores for the WebNLG dataset are summarized in Table 1. To further check whether the size of the input affects performance, we computed Spearman correlation of the number of input triples with metric errors. The resulting very low value of -0.05 (p = 0.02, Default setting) shows that the metric holds its performance even for more complex WebNLG examples.", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 67, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "WebNLG Analysis", "sec_num": "5.1" }, { "text": "On the other hand, the overall scores show that our metric deviates quite a lot from the human judgments. Our manual error analysis indicates several reasons for that (see Supplementary for examples):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WebNLG Analysis", "sec_num": "5.1" }, { "text": "(1) The annotation is somewhat noisy and using a threshold is not ideal; many correctly rendered outputs do not reach the 2.5 threshold (while some incorrect ones do). (2) Imprecise templates can confuse the NLI (e.g., for the predicate nationality, our extracted template is \"<subject> was <object>\", which works well with values such as French, but not with United States). This is currently a weak point of our metric, as illustrated by the very small performance difference between the Default and Backoff setups; however, the issue can be mitigated by a better selection of the templates from training data, e.g. using language-model scoring. (3) The human annotators tend to give lower scores to accurate but ungrammatical or poorly organized texts. Our metric tends to rate these texts as OK. 
Overall, our re-examination shows that almost half of the error examples (42 out of 100) were in fact correctly classified by our metric (i.e. their crowdsourced human annotation was incorrect), so the true performance is most likely higher than the reported numbers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WebNLG Analysis", "sec_num": "5.1" }, { "text": "The Spearman correlation of our model's confidence scores with the average human scores is around 0.63 (p <1e-10). This is similar to n-gram-based metrics on this data (Shimorina, 2018 reports 0.59 for BLEU and 0.73 for METEOR), but unlike these metrics, our approach does not require human-written reference texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WebNLG Analysis", "sec_num": "5.1" }, { "text": "The results for the E2E dataset (shown in Table 2) are very good compared to the WebNLG dataset, with over 90% agreement with the handcrafted script. This can be attributed to lower lexical variability and less noisy texts, as well as to the better quality of the handcrafted templates (the difference between the Default and Backoff setups is much more pronounced here). Again, we observe only a very slight drop in performance for more complex E2E inputs (Spearman correlation of metric errors with the number of input triples is -0.08, p <1e-10 for the Default setting).", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 49, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "E2E Analysis", "sec_num": "5.2" }, { "text": "The main issues identified by our error analysis are: (1) Problems in the interpretation of some values, e.g., price range=less than \u00a320 is verbalized as \"cheap\" or family-friendly=no as \"adult-only\". These cases are classified as not_OK by the NLI model. (2) Missing or over-greedy patterns in the slot error script, causing annotation errors. (3) Edge cases: some expressions cannot be interpreted in a straightforward way, e.g. \"high restaurant\" for pricerange=high is deemed OK by the NLI but not by the slot error script. (4) Expressions in the outputs that do not correspond to input facts, such as \"with full service\", are considered hallucinations by the NLI, but ignored by the slot error script. Again, we consider about half of the error examples (45 out of 100) as correctly classified by our metric (see Supplementary for details), and thus our metric's performance is probably higher than the reported values due to erroneous annotation from the handcrafted script.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "E2E Analysis", "sec_num": "5.2" }, { "text": "We used expert-annotated E2E data samples (cf. Section 4) to compare our approach to other accuracy classifiers in the E2E domain: Table 3: Semantic classifiers evaluated on expert human annotation on E2E data (see Table 2 for metrics legend).", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 3", "ref_id": null }, { "start": 216, "end": 223, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "E2E MR Classifier Comparison", "sec_num": "5.3" }, { "text": "\u2022 Slug2Slug slot aligner (Juraska et al., 2018) is based on keyword matches. It is carefully tuned but not designed to detect hallucination; it only checks for presence of facts from the input MR. \u2022 E2E slot error script (used in Section 5.2) is based on regular expressions; it is also able to detect irrelevant facts. 
\u2022 TGen reranker is an LSTM-based model trained on the E2E training data to rerank outputs of the TGen system (Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016) based on their semantic accuracy.", "cite_spans": [ { "start": 25, "end": 47, "text": "(Juraska et al., 2018)", "ref_id": "BIBREF8" }, { "start": 429, "end": 455, "text": "(Du\u0161ek and Jur\u010d\u00ed\u010dek, 2016)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "E2E MR Classifier Comparison", "sec_num": "5.3" }, { "text": "The results for all classifiers (in Table 3) are much weaker on human-written data, which exhibit much more variability than system outputs. The TGen reranker is very weak when required to detect all facts properly. Our approach is slightly less precise than both handcrafted scripts, but the difference is small on system outputs (97.8% vs. 99.5% accuracy). If we disregard the value eatType=restaurant, which is generally noisy, we get 76.5% accuracy and 97.6% recall on the human-written data. Moreover, our approach requires much less handcrafting and is more general.", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 43, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "E2E MR Classifier Comparison", "sec_num": "5.3" }, { "text": "We described an automatic metric for evaluating semantic accuracy of D2T generation. With just a basic setup, without human references or training and with minimal handcrafting, our metric is able to detect omissions or hallucinations in generated texts, with results competitive with crowdsourced human ratings or handcrafted scripts customized for particular domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "While our metric seems to scale well to more complex inputs in our experiments on the WebNLG and E2E data, we note that these datasets are still relatively limited. Further experiments are needed to evaluate this approach on long text generation and tasks where content selection is required, which we reserve for future work. We also plan to integrate our metric as a reranker into an NLG system and apply small-scale in-domain finetuning in order to further improve results. Following our findings from the error analysis on WebNLG, which showed that human ratings of semantic correctness are influenced by grammaticality, we would like to investigate the possibilities for combining our metric with a fluency/grammaticality checker (Kann et al., 2018; Tian et al., 2018) , as well as ways to better separate these two criteria in human evaluation.", "cite_spans": [ { "start": 735, "end": 754, "text": "(Kann et al., 2018;", "ref_id": "BIBREF9" }, { "start": 755, "end": 773, "text": "Tian et al., 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "This check in both directions is appropriate for D2T tasks that do not include content selection, which are the focus of our experiments in this paper. If the generator is supposed to select just some of the input facts to verbalize (cf. 
e.g., Wiseman et al., 2017), we can either only check for hallucinations or, if the content selection is explicit, perform a two-way check with the selected facts provided.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/ufal/nlgi_eval", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/roberta-large-mnli", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "E2E data use attribute-value pairs relating to a restaurant; we convert them to triples where the restaurant is the subject.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "While the E2E challenge did include crowdsourced evaluation of semantic accuracy, the results were unreliable, overestimating the number of errors (Du\u0161ek et al., 2020). Note that unlike our metric, such a handcrafted approach to evaluating semantic accuracy is only viable for limited domains such as E2E. 6 For each predicate, we choose randomly if multiple templates are found and use the backoff if no template is found.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their helpful comments. This work was supported by the Charles University GAUK grant No. 140320, the SVV project No. 260575, and the Charles University project PRIMUS/19/SCI/10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural datato-text generation: A comparison between pipeline and end-to-end architectures", "authors": [ { "first": "Chris", "middle": [], "last": "Thiago Castro Ferreira", "suffix": "" }, { "first": "", "middle": [], "last": "Van Der Lee", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Emiel Van Miltenburg", "suffix": "" }, { "first": "", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "552--562", "other_ids": { "DOI": [ "10.18653/v1/D19-1052" ] }, "num": null, "urls": [], "raw_text": "Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, and Emiel Krahmer. 2019. Neural data- to-text generation: A comparison between pipeline and end-to-end architectures. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 552-562, Hong Kong.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG challenge", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2020, "venue": "Computer Speech & Language", "volume": "59", "issue": "", "pages": "123--156", "other_ids": { "DOI": [ "10.1016/j.csl.2019.06.009" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2020. 
Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG chal- lenge. Computer Speech & Language, 59:123-156.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Semantic Noise Matters for Neural Natural Language Generation", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "M", "middle": [], "last": "David", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Howcroft", "suffix": "" }, { "first": "", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation (INLG 2019)", "volume": "", "issue": "", "pages": "421--426", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek, David M Howcroft, and Verena Rieser. 2019. Semantic Noise Matters for Neural Natural Language Generation. In Proceedings of the 12th International Conference on Natural Language Gen- eration (INLG 2019), pages 421-426, Tokyo, Japan.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Sequence-to-Sequence Generation for Spoken Dialogue via Deep Syntax Trees and Strings", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Jur\u010d\u00ed\u010dek", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "45--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek and Filip Jur\u010d\u00ed\u010dek. 2016. Sequence-to- Sequence Generation for Spoken Dialogue via Deep Syntax Trees and Strings. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 45-51, Berlin.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The WebNLG challenge: Generating text from RDF data", "authors": [ { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 10th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "124--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The WebNLG challenge: Generating text from RDF data. In Pro- ceedings of the 10th International Conference on Natural Language Generation, pages 124-133.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "End-to-End Content and Plan Selection for Data-to-Text Generation", "authors": [ { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" }, { "first": "Z", "middle": [], "last": "Falcon", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Elder", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Gehrmann, Falcon Z. Dai, Henry Elder, and Alexander M. Rush. 2018. 
End-to-End Content and Plan Selection for Data-to-Text Generation. In Pro- ceedings of the 11th International Conference on Natural Language Generation, Tilburg, The Nether- lands.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Assessing The Factual Accuracy of Generated Text", "authors": [ { "first": "Ben", "middle": [], "last": "Goodrich", "suffix": "" }, { "first": "Vinay", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "KDD", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3292500.3330955" ] }, "num": null, "urls": [], "raw_text": "Ben Goodrich, Vinay Rao, Mohammad Saleh, and Pe- ter J. Liu. 2019. Assessing The Factual Accuracy of Generated Text. In KDD, Anchorage, AK, USA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Have your text and use it too! end-to-end neural data-to-text generation with semantic fidelity", "authors": [ { "first": "Hamza", "middle": [], "last": "Harkous", "suffix": "" }, { "first": "Isabel", "middle": [], "last": "Groves", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Saffari", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.06577" ] }, "num": null, "urls": [], "raw_text": "Hamza Harkous, Isabel Groves, and Amir Saffari. 2020. Have your text and use it too! end-to-end neural data-to-text generation with semantic fidelity. arXiv preprint arXiv:2004.06577.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Deep Ensemble Model with Slot Alignment for Sequenceto-Sequence Natural Language Generation", "authors": [ { "first": "Juraj", "middle": [], "last": "Juraska", "suffix": "" }, { "first": "Panagiotis", "middle": [], "last": "Karagiannis", "suffix": "" }, { "first": "Kevin", "middle": [ "K" ], "last": "Bowden", "suffix": "" }, { "first": "Marilyn", "middle": [ "A" ], "last": "Walker", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "152--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juraj Juraska, Panagiotis Karagiannis, Kevin K. Bow- den, and Marilyn A. Walker. 2018. A Deep En- semble Model with Slot Alignment for Sequence- to-Sequence Natural Language Generation. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers), pages 152-162, New Orleans, LA, USA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Sentence-Level Fluency Evaluation: References Help, But Can Be Spared! In Proceedings of the 22nd Conference on Computational Natural Language Learning", "authors": [ { "first": "Katharina", "middle": [], "last": "Kann", "suffix": "" }, { "first": "Sascha", "middle": [], "last": "Rothe", "suffix": "" }, { "first": "Katja", "middle": [], "last": "Filippova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "313--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katharina Kann, Sascha Rothe, and Katja Filippova. 2018. Sentence-Level Fluency Evaluation: Refer- ences Help, But Can Be Spared! 
In Proceedings of the 22nd Conference on Computational Natural Lan- guage Learning, pages 313-323, Brussels, Belgium.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A good sample is hard to find: Noise injection sampling and self-training for neural language generation models", "authors": [ { "first": "Chris", "middle": [], "last": "Kedzie", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "584--593", "other_ids": { "DOI": [ "10.18653/v1/W19-8672" ] }, "num": null, "urls": [], "raw_text": "Chris Kedzie and Kathleen McKeown. 2019. A good sample is hard to find: Noise injection sampling and self-training for neural language generation models. In Proceedings of the 12th International Conference on Natural Language Generation, pages 584-593, Tokyo, Japan.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Meteor: An Automatic Metric for MT Evaluation with High Levels of Correlation with Human Judgments", "authors": [ { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Abhaya", "middle": [], "last": "Agarwal", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "228--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Lavie and Abhaya Agarwal. 2007. Meteor: An Automatic Metric for MT Evaluation with High Lev- els of Correlation with Human Judgments. In Pro- ceedings of the Second Workshop on Statistical Ma- chine Translation, pages 228-231, Prague, Czech Republic. Association for Computational Linguis- tics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Multi-task deep neural networks for natural language understanding", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4487--4496", "other_ids": { "DOI": [ "10.18653/v1/P19-1441" ] }, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019a. Multi-task deep neural networks for natural language understanding. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Flo- rence, Italy.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A Robustly Optimized BERT Pretrain- ing Approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "On faithfulness and factuality in abstractive summarization", "authors": [ { "first": "Joshua", "middle": [], "last": "Maynez", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1906--1919", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.173" ] }, "num": null, "urls": [], "raw_text": "Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factual- ity in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, pages 1906-1919, Online.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Meta-learning for low-resource natural language generation in task-oriented dialogue systems", "authors": [ { "first": "Fei", "middle": [], "last": "Mi", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jiyong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Boi", "middle": [], "last": "Faltings", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19", "volume": "", "issue": "", "pages": "3151--3157", "other_ids": { "DOI": [ "10.24963/ijcai.2019/437" ] }, "num": null, "urls": [], "raw_text": "Fei Mi, Minlie Huang, Jiyong Zhang, and Boi Faltings. 2019. Meta-learning for low-resource natural lan- guage generation in task-oriented dialogue systems. 
In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 3151-3157.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A simple recipe towards reducing hallucination in neural surface realisation", "authors": [ { "first": "Feng", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jin-Ge", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Jinpeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Rong", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2673--2679", "other_ids": { "DOI": [ "10.18653/v1/P19-1256" ] }, "num": null, "urls": [], "raw_text": "Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards re- ducing hallucination in neural surface realisation. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2673- 2679, Florence, Italy.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The E2E dataset: New challenges for end-toend generation", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "201--206", "other_ids": { "DOI": [ "10.18653/v1/W17-5525" ] }, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-to- end generation. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 201-206, Saarbr\u00fccken, Germany.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. 
In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Can neural generators for dialogue learn sentence planning and discourse structuring?", "authors": [ { "first": "Lena", "middle": [], "last": "Reed", "suffix": "" }, { "first": "Shereen", "middle": [], "last": "Oraby", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "284--295", "other_ids": { "DOI": [ "10.18653/v1/W18-6535" ] }, "num": null, "urls": [], "raw_text": "Lena Reed, Shereen Oraby, and Marilyn Walker. 2018. Can neural generators for dialogue learn sentence planning and discourse structuring? In Proceedings of the 11th International Conference on Natural Lan- guage Generation, pages 284-295, Tilburg Univer- sity, The Netherlands.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "BLEURT: Learning Robust Metrics for Text Generation", "authors": [ { "first": "Thibault", "middle": [], "last": "Sellam", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ankur", "middle": [ "P" ], "last": "Parikh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7881--7892", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. BLEURT: Learning Robust Metrics for Text Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 7881-7892, Online.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Human vs Automatic Metrics: on the Importance of Correlation Design", "authors": [ { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" } ], "year": 2018, "venue": "WiNLP Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anastasia Shimorina. 2018. Human vs Automatic Met- rics: on the Importance of Correlation Design. In WiNLP Workshop, New Orleans, LA, USA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "WebNLG challenge: Human evaluation results", "authors": [ { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anastasia Shimorina, Claire Gardent, Shashi Narayan, and Laura Perez-Beltrachini. 2019. WebNLG chal- lenge: Human evaluation results. 
Technical report, LORIA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Sticking to the facts: Confident decoding for faithful data-to-text generation", "authors": [ { "first": "Ran", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "Sellam", "suffix": "" }, { "first": "Ankur P", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.08684" ] }, "num": null, "urls": [], "raw_text": "Ran Tian, Shashi Narayan, Thibault Sellam, and Ankur P Parikh. 2019. Sticking to the facts: Con- fident decoding for faithful data-to-text generation. arXiv preprint arXiv:1910.08684.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Treat the system like a human student: Automatic naturalness evaluation of generated text without reference texts", "authors": [ { "first": "Ye", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Douratsos", "suffix": "" }, { "first": "Isabel", "middle": [], "last": "Groves", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "109--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ye Tian, Ioannis Douratsos, and Isabel Groves. 2018. Treat the system like a human student: Automatic naturalness evaluation of generated text without ref- erence texts. In Proceedings of the 11th Inter- national Conference on Natural Language Genera- tion, pages 109-118, Tilburg University, The Nether- lands.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": { "DOI": [ "10.18653/v1/N18-1101" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Challenges in Data-to-Document Generation", "authors": [ { "first": "Sam", "middle": [], "last": "Wiseman", "suffix": "" }, { "first": "Stuart", "middle": [ "M" ], "last": "Shieber", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2243--2253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2017. Challenges in Data-to-Document Gen- eration. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2243-2253, Copenhagen, Denmark.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-theart Natural Language Processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "", "middle": [], "last": "Gugger", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.03771" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Fun- towicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Can- wen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the- art Natural Language Processing. arXiv preprint arXiv:1910.03771.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "BERTScore: Evaluating Text Generation with BERT", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020a. BERTScore: Evaluating Text Generation with BERT. 
In ICLR, Online.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Semantics-aware BERT for language understanding", "authors": [ { "first": "Zhuosheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuwei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Zuchao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shuailiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. 2020b. Semantics-aware BERT for language understanding. In Thirty-Fourth AAAI Conference on Artificial Intel- ligence (AAAI-2020).", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models", "authors": [ { "first": "Wangchunshu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wangchunshu Zhou and Ke Xu. 2020. Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models. In AAAI, New York, NY, USA.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
Input data: (Blue Spice | eat_type | pub), (Blue Spice | area | riverside). Templates: eat_type: <subj> is a <obj>. / area: <subj> is located in the <obj>. Generated text: \"You can bring your kids to Blue Spice in the riverside area.\" NLI checks: P = generated text, H = \"Blue Spice is a pub.\" -> C: 0.87, N: 0.09, E: 0.04 (omission); P = generated text, H = \"Blue Spice is located in the riverside.\" -> C: 0.01, N: 0.02, E: 0.97 (OK); P = \"Blue Spice is a pub. Blue Spice is located in the riverside.\", H = generated text -> C: 0.72, N: 0.17, E: 0.11 (hallucination). Result: omission + hallucination, confidence 0.04. Omitted facts: (Blue Spice | eat_type | pub).", "type_str": "table", "num": null, "text": "An example of evaluating the output from a D2T system with our metric (Figure 1). The generated text is used as a premise (P) to check for omissions and as a hypothesis (H) to check for hallucinations. The NLI model generates probabilities for contradiction (C), neutral (N) and entailment (E).", "html": null } } } }