{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:23.798663Z" }, "title": "ReproGen: Proposal for a Shared Task on Reproducibility of Human Evaluations in NLG", "authors": [ { "first": "Anya", "middle": [], "last": "Belz", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Brighton", "location": { "country": "UK" } }, "email": "a.s.belz@brighton.ac.uk" }, { "first": "Shubham", "middle": [], "last": "Agarwal", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heriot Watt University", "location": { "country": "UK" } }, "email": "" }, { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Aberdeen", "location": { "country": "UK" } }, "email": "e.reiter@abdn.ac.uk" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 de Lorraine / LORIA", "location": { "country": "France" } }, "email": "anastasia.shimorina@loria.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Across NLP, a growing body of work is looking at the issue of reproducibility. However, replicability of human evaluation experiments and reproducibility of their results is currently under-addressed, and this is of particular concern for NLG where human evaluations are the norm. This paper outlines our ideas for a shared task on reproducibility of human evaluations in NLG which aims (i) to shed light on the extent to which past NLG evaluations have been replicable and reproducible, and (ii) to draw conclusions regarding how evaluations can be designed and reported to increase replicability and reproducibility. If the task is run over several years, we hope to be able to document an overall increase in levels of replicability and reproducibility over time.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Across NLP, a growing body of work is looking at the issue of reproducibility. However, replicability of human evaluation experiments and reproducibility of their results is currently under-addressed, and this is of particular concern for NLG where human evaluations are the norm. This paper outlines our ideas for a shared task on reproducibility of human evaluations in NLG which aims (i) to shed light on the extent to which past NLG evaluations have been replicable and reproducible, and (ii) to draw conclusions regarding how evaluations can be designed and reported to increase replicability and reproducibility. If the task is run over several years, we hope to be able to document an overall increase in levels of replicability and reproducibility over time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Human evaluations play a central role in Natural Language Generation (NLG) (Reiter, 2018; Novikova et al., 2017) , so it is of concern that we do not currently know to what extent their results are reproducible, hence whether they are reliable or not. Reproducibility is on many NLP researchers' minds at present. There have been workshops on replicability and reproducibility in NLP most years since 2015. 1 The Reproducibility Challenge has been running since 2018, initially in conjunction with ICLR'18 and ICLR'19 , then at NeurIPS'19 and NeurIPS'20 (to appear). COLING'18 had a Reproduction Paper special category, for which it reported 35 submissions. 
NeurIPS'19 had a reproducibility programme comprising a code submission policy, a reproducibility challenge for machine learning (ML) results, and the ML Reproducibility checklist for submitted papers,", "cite_spans": [ { "start": 75, "end": 89, "text": "(Reiter, 2018;", "ref_id": "BIBREF15" }, { "start": 90, "end": 112, "text": "Novikova et al., 2017)", "ref_id": "BIBREF9" }, { "start": 407, "end": 408, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "which has also been adopted by EMNLP'20 and AAAI'21. LREC'20 ran a reproducibility track (Branco et al., 2020). Other conferences have foregrounded reproducibility via calls, chairs' blogs, special themes and social media posts.", "cite_spans": [ { "start": 89, "end": 110, "text": "(Branco et al., 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "All this is against the wider background of what has been called a 'reproducibility crisis' (Baker, 2016) in science, where 70% of scientists report failing to reproduce someone else's results on at least one occasion, and over half report failing to reproduce their own results. In NLP, 24.9% of attempts to reproduce a team's own results, and 56.7% of attempts to reproduce another team's results, fail to reach the same conclusions (Mieskes et al., 2019).", "cite_spans": [ { "start": 92, "end": 105, "text": "(Baker, 2016)", "ref_id": "BIBREF1" }, { "start": 420, "end": 442, "text": "(Mieskes et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Progress is being made in NLP regarding reproducibility, as can be seen from the long list of events and initiatives above. The habit of sharing code, data and supplementary material providing details about data, systems and training regimes is firmly established in the field, with virtually all main events now encouraging and making space for it. Moreover, reproducibility is beginning to be addressed formally in the reviewing process, e.g. EMNLP'20 followed NeurIPS'19 in adding the ML Reproducibility Checklist (Pineau, 2020) to submission forms, 2 where authors had to indicate compliance with reproducibility criteria (although this was not used in selection decisions).", "cite_spans": [ { "start": 512, "end": 526, "text": "(Pineau, 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While progress is being made on many fronts, there is one big gap in efforts to achieve greater reproducibility, and that concerns human evaluation. If a paper complies with all of the NeurIPS'19/EMNLP'20 reproducibility criteria, it should be possible to closely reproduce the metric results reported in it. However, any human evaluation results reported for the same system(s) in the same paper may or may not be reproducible, because the criteria say nothing at all about those. This shared task proposal is part of a wider effort to address reproducibility of human evaluations in NLG, a field in which they play a central role and which has always been wary of automatic evaluation metrics and their limitations (Reiter and Belz, 2009; Novikova et al., 2017; Reiter, 2018).
Below we start by briefly diagnosing the reproducibility problem in NLG (Section 2), and looking at the conditions for reproducibility testing of results from human evaluation of NLG systems (Section 3). We then outline our ideas for a shared task that could help to understand and potentially address the problem (Section 4). We describe related research that has provided inspiration (Section 5), and conclude with next steps (Section 6).", "cite_spans": [ { "start": 713, "end": 736, "text": "(Reiter and Belz, 2009;", "ref_id": "BIBREF16" }, { "start": 737, "end": 759, "text": "Novikova et al., 2017;", "ref_id": "BIBREF9" }, { "start": 760, "end": 772, "text": "Reiter, 2018", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In an era dominated by metrics and leaderboards, human evaluation can be an afterthought, often carried out and reported in a slapdash way with tiny numbers of evaluators and included to add a veneer of credibility. Data collected for a recent survey of 20 years of human evaluation in NLG (Howcroft et al., 2020) indicates that 33% of evaluations use fewer than 10 evaluators, and 22% use between 1 and 4, numbers so small that experiments are unlikely to yield meaningful results, or to be reproducible, in many contexts. The survey also revealed that the roughly 170 papers reviewed often provide woefully inadequate information about human evaluations: numbers of evaluators, experimental design, the quality criterion assessed, even system language, are often unclear. Researchers moreover use a wide variety of different quality criteria, with a startling 200-odd different quality criterion names found in the survey. Even when researchers do use the same criterion name they often don't use it with the same meaning, and vice versa (see also Van Der Lee et al. 2019).", "cite_spans": [ { "start": 290, "end": 313, "text": "(Howcroft et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Issues with Human Evaluation in NLG", "sec_num": "2" }, { "text": "Inter-rater and intra-rater agreement, two indicators of human evaluation reliability, are rarely reported. Amidei et al. (2019) surveyed 135 NLG papers from 2008 to 2018 and found that just 18% reported any annotator agreement, and where agreement scores were reported they were low, casting doubt on the reliability of results.", "cite_spans": [ { "start": 108, "end": 128, "text": "Amidei et al. (2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Issues with Human Evaluation in NLG", "sec_num": "2" }, { "text": "We're aware of one published attempt to reproduce someone else's human evaluation results in NLG: Cooper and Shardlow (2020), as part of REPROLANG, successfully reproduced system rankings in text simplification (however, with lower means). Another paper tested the ability of evaluators to reproduce their own evaluations in exactly the same experimental set-up, finding that for some tasks and some evaluation instruments, evaluators struggled to do so (Belz and Kow, 2010).
NLG has valued and trusted human evaluation perhaps more than any other NLP subfield, so it's concerning that we currently don't know whether any given set of human evaluation results is reproducible, meaning we don't know whether we can, in fact, trust them.", "cite_spans": [ { "start": 98, "end": 124, "text": "Cooper and Shardlow (2020)", "ref_id": "BIBREF5" }, { "start": 485, "end": 505, "text": "(Belz and Kow, 2010)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Issues with Human Evaluation in NLG", "sec_num": "2" }, { "text": "For results from human evaluations to be deemed reproducible, the first condition is that the system outputs assessed in the human evaluation must be reproducible, and the NeurIPS'19/EMNLP'20 criteria provide a pretty comprehensive set of conditions for that to be achievable. We take this as our starting point, i.e. we assume we have been successful in reproducing system outputs. In the shared task we will not try to reproduce system outputs, but start from existing sets of outputs (and inputs, where applicable), and try to reproduce the results of human evaluations performed on them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing Human Evaluations for reproducibility", "sec_num": "3" }, { "text": "The second condition is that the experiment that produced the human evaluation results must be replicable, which means having access to detailed information about how the experiment was designed and run, but also that it is repeatable in principle. Belz et al. (2020) identify a set of 18 properties with associated value sets for characterising evaluations that are needed for replicability: in addition to defining quality criteria and evaluation mode, papers need to include, or give access to, full details of system outputs (number, how selected), evaluators (number, type, how selected), method for determining effect size and significance of findings, scale or other rating instrument (size, list or range of possible response values), how presented to evaluators, form of response elicitation, information given to evaluators, and experimental conditions. This level of detail is currently extremely rare in NLG papers (Van Der Lee et al., 2019; Howcroft et al., 2020). However much detail is provided, it is bound to be an approximation rather than a complete specification. Aspects such as the lab environment, the interface design and the exact training or instructions given are not normally reported, but may have a considerable impact on results. Even details that are more commonly reported may not be replicable, either in general or for a particular team of researchers. For example, expert evaluators or proprietary software may not be accessible. An experiment may not be repeatable even in principle, e.g. if data protection laws or ethical regulations have changed, or if it was conducted in a one-off real-life context.", "cite_spans": [ { "start": 248, "end": 266, "text": "Belz et al. (2020)", "ref_id": "BIBREF3" }, { "start": 926, "end": 952, "text": "(Van Der Lee et al., 2019;", "ref_id": "BIBREF18" }, { "start": 953, "end": 975, "text": "Howcroft et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Replicating experiments", "sec_num": "3.1" }, { "text": "The third condition is that the replication must produce results that are the same as those produced by the original human evaluation, under the terms of a given frame of reference.
The latter needs to set out under what conditions, in terms of which aspects, two evaluations can be considered to have produced the same results. We know we can't demand that they be identical, given the limits to replicability discussed above, but we need some principled way of determining 'sameness'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reproducing results", "sec_num": "3.2" }, { "text": "A related question is how to factor in variation in experimental design. Presumably one would want results to be very similar if the experiment is repeated in an identical manner in terms of the details listed in the previous section, with the same experimental software, and the same evaluators. But what about differences in user interface design? Or in the method for allocating test items to evaluators?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reproducing results", "sec_num": "3.2" }, { "text": "Variation in design, like similarity of results, is likely to be most usefully construed as a matter of degree, with reproducibility results reported in terms of (i) the level of variation in the experiment, and (ii) the level at which it has been possible to reproduce results. This would make it possible to characterise outcomes of different reproducibility tests in the same terms and make them comparable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reproducing results", "sec_num": "3.2" }, { "text": "With the ReproGen Challenge we aim to start finding answers to the above questions, as a community of researchers, in a public process. Initially (Section 4), the task for participants will be to take available information about a human evaluation and try, with support from the original authors, to replicate experiments as closely as possible, then report the outcome. With multiple teams trying to reproduce results from the same papers, we will have a more complete view of the reproducibility of individual sets of results (in contrast to other studies which often make just one attempt, e.g. Open Science Collaboration 2015).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reproducing results", "sec_num": "3.2" }, { "text": "Following the above approach, the outcome of a reproduction attempt is not simply success or failure; rather, it's a matter of how similar the two experiments were and how many aspects the attempt reproduced successfully. To facilitate these assessments, we will select, where possible, papers for reproduction that report system rankings, mean system-level scores, significant differences, p-values, and effect sizes, and ask participants to report corresponding results. In addition, we will ask participants to document their reproduction attempts as precisely as possible, including any gaps that had to be filled in.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreting reproducibility results", "sec_num": "3.3" }, { "text": "Low reproducibility can have diverse causes: lack of detail about the original experiment, a flaw in the experimental design, or a problem with the evaluation task at the core of it (e.g.
if participants find it too hard to score a given quality criterion).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreting reproducibility results", "sec_num": "3.3" }, { "text": "Reproducibility outcomes from the proposed shared task will allow conclusions to be drawn about what kinds of experiments are easier to reproduce, what level of information about experiments is required to make them replicable, which types of experimental design yield more, or less, reproducible results, and how different aspects of experimental design and implementation affect reproducibility.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreting reproducibility results", "sec_num": "3.3" }, { "text": "We envisage ReproGen having two tracks: one an 'unshared task' in which teams attempt to reproduce their own prior human evaluation results, the other a shared task in which teams try to reproduce the same prior human evaluation results. We envisage a fairly simple challenge first, where we nominate about five papers as replication targets for participants to choose from. Participants can then either (A) replicate one of these experiments, or (B) replicate one of their own previous experiments:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Organisation of Shared Task", "sec_num": "4" }, { "text": "A Main Reproducibility Track: For a shared set of selected human evaluations, participants attempt to reproduce their results, using published information plus extra detail provided by the authors (discussion with, and support from, the original authors played a big role in the Reproducibility Challenge @NeurIPS'19 for ML results, for example), and making common-sense assumptions where information is still incomplete.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Organisation of Shared Task", "sec_num": "4" }, { "text": "B RYO Track: Reproduce Your Own previous human evaluation results, and report what happened. Unshared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Organisation of Shared Task", "sec_num": "4" }, { "text": "For the main track (A above), the plan is to ask authors to volunteer their papers for inclusion via an open call for expressions of interest. Authors will be invited to provide additional details and/or software to help teams reproduce the results. It's not clear how much this will help: while some studies indicate that help from the original authors yields large increases in reproduction success (Raff, 2019), other large-scale studies show no improvement at all (Klein et al., 2019). Neither of the tracks would have winners or leaderboards in the normal shared-task sense.
However, 'winners' in the main track would provide toplines of reproducibility for the included papers, and, taken together, ReproGen Challenge contributions would help shed light on how to improve the reproducibility of human evaluations in NLG.", "cite_spans": [ { "start": 351, "end": 363, "text": "(Raff, 2019)", "ref_id": "BIBREF14" }, { "start": 419, "end": 439, "text": "(Klein et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Organisation of Shared Task", "sec_num": "4" }, { "text": "We expect teams to participate for a variety of reasons, ranging from researchers new to human evaluation (the Reproducibility Challenge @NeurIPS for ML results encouraged computer science courses to get their students to participate), to researchers experienced in human evaluation who are specifically interested in reproducibility.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Organisation of Shared Task", "sec_num": "4" }, { "text": "Participation will have financial implications in the case of some papers. We will endeavour to keep such costs as low as possible by selecting mostly papers that were not expert- or crowd-evaluated. We are also applying for funding to support crowd-based evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Organisation of Shared Task", "sec_num": "4" }, { "text": "Reproducibility investigations are commonly conducted as closed projects. For example, the Open Science Collaboration (2015) conducted 100 attempts to reproduce studies in psychology, mainly evaluating reproducibility using significance, p-values, effect sizes, and meta-analysis of effect sizes. They found substantial decreases from original to replication study across these indicators, while just 39% of effects were subjectively rated to have reproduced the original result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In contrast, the shared task framework ensures openness to all members of a research community. The Reproducibility Challenge @NeurIPS'19, focusing on ML results and metric scores, was organised as a 'live' challenge, where participants picked one of the accepted NeurIPS papers and tried to reproduce its ML results. NeurIPS'19 authors were strongly encouraged to submit code and data, which 73% did, resulting in a 'codebase' that Reproducibility Challenge participants could choose from to participate in one of three tracks: (i) a baselines track (rigorous analysis of baseline results, re-implementing them if necessary), (ii) an ablation track (rigorous ablation experiments, modifying model and hyperparameters using the authors' code), and (iii) a replications track (replication of the experiments in a paper from scratch, without using code from the codebase).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "The challenge was a big success, attracting 83 submissions after participants initially claimed 173 NeurIPS papers.
The submissions were peer-reviewed as part of the NeurIPS reviewing process, which relied heavily on the OpenReview platform, and 10 papers were selected for publication in ReScience C, an open-access journal intended as a forum for replication work in computing science.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "The one paper on reproducing human evaluation results in NLG mentioned above (Cooper and Shardlow, 2020) was part of the REPROLANG'20 initiative, which followed on from two earlier, smaller-scale LREC workshops 3 on reproducibility and citation, and offered a shared task (Branco et al., 2020), which asked participants to reproduce results from one of 11 papers from different areas of NLP. While in the case of ten papers the results up for reproduction were automatic scores, in one case they included human evaluation scores. 4", "cite_spans": [ { "start": 77, "end": 104, "text": "(Cooper and Shardlow, 2020)", "ref_id": "BIBREF5" }, { "start": 271, "end": 292, "text": "(Branco et al., 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "With this shared task proposal we hope to engage the NLG community in a discussion about how best to design and organise the ReproGen Challenge. Following feedback and input, we will finalise the task specification and organisational aspects, expecting to be able to launch the task in 2021 for a pilot run with around five sets of human evaluation results up for reproduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Next Steps", "sec_num": "6" }, { "text": "We would hope that the ReproGen Challenge will both shed light on the reproducibility of current human evaluations in NLG, and allow conclusions about how evaluations can be designed and reported to increase reproducibility. Over repeated instances of the Shared Task, we hope to be able to document an overall increase in the reproducibility of new human evaluations in NLG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Next Steps", "sec_num": "6" }, { "text": "IJCAI'15 Workshop on Replicability and Reproducibility in NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://2020.emnlp.org/call-for-papers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "4REAL2016 and 4REAL2018 had four papers each and one actual reproduction attempt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Task D.1: Text simplification: http://wordpress.let.vupr.nl/lrec-reproduction/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Agreement is overrated: A plea for correlation to assess human evaluation reliability", "authors": [ { "first": "Jacopo", "middle": [], "last": "Amidei", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Piwek", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Willis", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "344--354", "other_ids": { "DOI": [ "10.18653/v1/W19-8642" ] }, "num": null, "urls": [], "raw_text": "Jacopo Amidei, Paul Piwek, and Alistair Willis. 2019. Agreement is overrated: A plea for correlation to as- sess human evaluation reliability. In Proceedings of the 12th International Conference on Natural Lan- guage Generation, pages 344-354, Tokyo, Japan.
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Reproducibility crisis", "authors": [ { "first": "Monya", "middle": [], "last": "Baker", "suffix": "" } ], "year": 2016, "venue": "Nature", "volume": "533", "issue": "26", "pages": "353--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Monya Baker. 2016. Reproducibility crisis. Nature, 533(26):353-66.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Comparing rating scales and preference judgements in language evaluation", "authors": [ { "first": "Anya", "middle": [], "last": "Belz", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Kow", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 6th International Natural Language Generation Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anya Belz and Eric Kow. 2010. Comparing rating scales and preference judgements in language eval- uation. In Proceedings of the 6th International Nat- ural Language Generation Conference.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Disentangling the properties of human evaluation methods: A classification system to support comparability, meta-evaluation and reproducibility testing", "authors": [ { "first": "Anya", "middle": [], "last": "Belz", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Mille", "suffix": "" }, { "first": "David", "middle": [], "last": "Howcroft", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 13th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anya Belz, Simon Mille, and David Howcroft. 2020. Disentangling the properties of human evaluation methods: A classification system to support compa- rability, meta-evaluation and reproducibility testing. In Proceedings of the 13th International Conference on Natural Language Generation.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Andr\u00e9 Moreira, and Willem Elbers. 2020. A shared task of a new, collaborative type to foster reproducibility: A first exercise in the area of language science and technology with RE-PROLANG2020", "authors": [ { "first": "Ant\u00f3nio", "middle": [], "last": "Branco", "suffix": "" }, { "first": "Nicoletta", "middle": [], "last": "Calzolari", "suffix": "" }, { "first": "Piek", "middle": [], "last": "Vossen", "suffix": "" }, { "first": "Gertjan", "middle": [], "last": "Van Noord", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [], "last": "Dieter Van Uytvanck", "suffix": "" }, { "first": "Lu\u00eds", "middle": [], "last": "Silva", "suffix": "" }, { "first": "", "middle": [], "last": "Gomes", "suffix": "" } ], "year": null, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "5539--5545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ant\u00f3nio Branco, Nicoletta Calzolari, Piek Vossen, Gertjan Van Noord, Dieter van Uytvanck, Jo\u00e3o Silva, Lu\u00eds Gomes, Andr\u00e9 Moreira, and Willem Elbers. 2020. A shared task of a new, collaborative type to foster reproducibility: A first exercise in the area of language science and technology with RE- PROLANG2020. In Proceedings of The 12th Lan- guage Resources and Evaluation Conference, pages 5539-5545, Marseille, France. 
European Language Resources Association.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Com-biNMT: An exploration into neural text simplification models", "authors": [ { "first": "Michael", "middle": [], "last": "Cooper", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Shardlow", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Cooper and Matthew Shardlow. 2020. Com- biNMT: An exploration into neural text simplifi- cation models. In Proceedings of The 12th Lan- guage Resources and Evaluation Conference, Mar- seille, France. European Language Resources Asso- ciation.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Emiel van Miltenburg, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions", "authors": [ { "first": "David", "middle": [], "last": "Howcroft", "suffix": "" }, { "first": "Anya", "middle": [], "last": "Belz", "suffix": "" }, { "first": "Miruna", "middle": [], "last": "Clinciu", "suffix": "" }, { "first": "Dimitra", "middle": [], "last": "Gkatzia", "suffix": "" }, { "first": "Sadid", "middle": [], "last": "Hasan", "suffix": "" }, { "first": "Saad", "middle": [], "last": "Mahamood", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Mille", "suffix": "" }, { "first": "Sashank", "middle": [], "last": "Santhanam", "suffix": "" } ], "year": null, "venue": "Proceedings of the 13th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Howcroft, Anya Belz, Miruna Clinciu, Dimi- tra Gkatzia, Sadid Hasan, Saad Mahamood, Simon Mille, Sashank Santhanam, Emiel van Miltenburg, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions. In Proceedings of the 13th International Conference on Natural Language Generation.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Many labs 4: Failure to replicate mortality salience effect with and without original author involvement", "authors": [ { "first": "Corey", "middle": [ "L" ], "last": "Richard A Klein", "suffix": "" }, { "first": "", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Charles R Ebersole", "suffix": "" }, { "first": "", "middle": [], "last": "Vitiello", "suffix": "" }, { "first": "A", "middle": [], "last": "Brian", "suffix": "" }, { "first": "", "middle": [], "last": "Nosek", "suffix": "" }, { "first": "R", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Cody", "middle": [ "D" ], "last": "Chartier", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Christopherson", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Clay", "suffix": "" }, { "first": "Jarret", "middle": [], "last": "Collisson", "suffix": "" }, { "first": "", "middle": [], "last": "Crawford", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard A Klein, Corey L Cook, Charles R Eber- sole, Christine Vitiello, Brian A Nosek, Christo- pher R Chartier, Cody D Christopherson, Samuel Clay, Brian Collisson, Jarret Crawford, et al. 2019. 
Many labs 4: Failure to replicate mortality salience effect with and without original author involvement.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Community perspective on replicability in natural language processing", "authors": [ { "first": "Margot", "middle": [], "last": "Mieskes", "suffix": "" }, { "first": "Kar\u00ebn", "middle": [], "last": "Fort", "suffix": "" }, { "first": "Aur\u00e9lie", "middle": [], "last": "N\u00e9v\u00e9ol", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Grouin", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)", "volume": "", "issue": "", "pages": "768--775", "other_ids": { "DOI": [ "10.26615/978-954-452-056-4_089" ] }, "num": null, "urls": [], "raw_text": "Margot Mieskes, Kar\u00ebn Fort, Aur\u00e9lie N\u00e9v\u00e9ol, Cyril Grouin, and Kevin Cohen. 2019. Community per- spective on replicability in natural language process- ing. In Proceedings of the International Conference on Recent Advances in Natural Language Process- ing (RANLP 2019), pages 768-775, Varna, Bulgaria. INCOMA Ltd.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Why we need new evaluation metrics for NLG", "authors": [ { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Amanda", "middle": [ "Cercas" ], "last": "Curry", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2241--2252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Estimating the reproducibility of psychological science", "authors": [], "year": 2015, "venue": "Science", "volume": "", "issue": "6251", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Open Science Collaboration. 2015. Estimating the reproducibility of psychological science. Science, 349(6251).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The machine learning reproducibility checklist", "authors": [ { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joelle Pineau. 2020. The machine learning repro- ducibility checklist v2.0.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A step toward quantifying independently reproducible machine learning research", "authors": [ { "first": "Edward", "middle": [], "last": "Raff", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5485--5495", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Raff. 2019. A step toward quantifying indepen- dently reproducible machine learning research. 
In Advances in Neural Information Processing Systems, pages 5485-5495.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A structured review of the validity of BLEU", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 2018, "venue": "Computational Linguistics", "volume": "44", "issue": "3", "pages": "393--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter. 2018. A structured review of the validity of BLEU. Computational Linguistics, 44(3):393- 401.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An investigation into the validity of some metrics for automatically evaluating natural language generation systems", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "Anya", "middle": [], "last": "Belz", "suffix": "" } ], "year": 2009, "venue": "Computational Linguistics", "volume": "35", "issue": "4", "pages": "529--558", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter and Anya Belz. 2009. An investiga- tion into the validity of some metrics for automat- ically evaluating natural language generation sys- tems. Computational Linguistics, 35(4):529-558.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Best practices for the human evaluation of automatically generated text", "authors": [ { "first": "Chris", "middle": [], "last": "Van Der Lee", "suffix": "" }, { "first": "Albert", "middle": [], "last": "Gatt", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Van Miltenburg", "suffix": "" }, { "first": "Sander", "middle": [], "last": "Wubben", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 12th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "355--368", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Van Der Lee, Albert Gatt, Emiel Van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th Interna- tional Conference on Natural Language Generation, pages 355-368.", "links": null } }, "ref_entries": {} } }