{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:43:35.808684Z"
},
"title": "Collecting high-quality adversarial data for machine reading comprehension tasks with humans and models in the loop",
"authors": [
{
"first": "Damian",
"middle": [
"Y"
],
"last": "Romero Diaz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona",
"location": {}
},
"email": "damian@explosion.ai*"
},
{
"first": "Magdalena",
"middle": [],
"last": "Anio\u0142",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona",
"location": {}
},
"email": ""
},
{
"first": "John",
"middle": [],
"last": "Culnan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona",
"location": {}
},
"email": "jmculnan@arizona.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present our experience as annotators in the creation of high-quality, adversarial machinereading-comprehension data for extractive QA for Task 1 of the First Workshop on Dynamic Adversarial Data Collection (DADC). DADC is an emergent data collection paradigm with both models and humans in the loop. We set up a quasi-experimental annotation design and perform quantitative analyses across groups with different numbers of annotators focusing on successful adversarial attacks, cost analysis, and annotator confidence correlation. We further perform a qualitative analysis of our perceived difficulty of the task given the different topics of the passages in our dataset and conclude with recommendations and suggestions that might be of value to people working on future DADC tasks and related annotation interfaces.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "We present our experience as annotators in the creation of high-quality, adversarial machinereading-comprehension data for extractive QA for Task 1 of the First Workshop on Dynamic Adversarial Data Collection (DADC). DADC is an emergent data collection paradigm with both models and humans in the loop. We set up a quasi-experimental annotation design and perform quantitative analyses across groups with different numbers of annotators focusing on successful adversarial attacks, cost analysis, and annotator confidence correlation. We further perform a qualitative analysis of our perceived difficulty of the task given the different topics of the passages in our dataset and conclude with recommendations and suggestions that might be of value to people working on future DADC tasks and related annotation interfaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We present quantitative and qualitative analyses of our experience as annotators in the machine reading comprehension shared task for the First Workshop on Dynamic Adversarial Data Collection. 1 . The shared task was a collection of three sub-tasks focused on the selection of excerpts from unstructured texts that best answer a given question (extractive question-answering). The sub-tasks included: (A) the manual creation of question-answer pairs by human annotators, (B) the submission of novel training data (10,000 training examples), and (C) the creation of better extractive question-answering models. In this paper, we focus on our participation in the the manual creation of question-answer pairs task dubbed as \"Track 1: Better Annotators\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Machine reading comprehension (MRC) is a type of natural language processing task that relies in the understanding of natural language and knowledge about the world to answer questions about a given text (Rajpurkar et al., 2016) . In some cases, state-of-the-art MRC systems are close to or have already started outperforming standard human benchmarks (Dzendzik et al., 2021) . However, models trained on standard datasets (i.e., collected in non-adversarial conditions) do not perform as well when evaluated on adversarially-chosen inputs (Jia and Liang, 2017) .",
"cite_spans": [
{
"start": 204,
"end": 228,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 352,
"end": 375,
"text": "(Dzendzik et al., 2021)",
"ref_id": "BIBREF3"
},
{
"start": 540,
"end": 561,
"text": "(Jia and Liang, 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To further challenge models and make them robust against adversarial attacks, researchers have started creating adversarial datasets which continuously change models as they grow stronger. Dynamic Adversarial Data Collection (DADC) is an emergent data collection paradigm explicitly created for the collection of such adversarial datasets. In DADC, human annotators interact with an adversary model or ensemble of models in real-time during the annotation process (Bartolo et al., 2020) to create examples that elicit incorrect predictions from the model . DADC allows for the creation of increasingly more challenging data as well as improved models and benchmarks for adversarial attacks (Dua et al., 2019; Nie et al., 2020) .",
"cite_spans": [
{
"start": 464,
"end": 486,
"text": "(Bartolo et al., 2020)",
"ref_id": null
},
{
"start": 690,
"end": 708,
"text": "(Dua et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 709,
"end": 726,
"text": "Nie et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is evidence that data collected through adversarial means is distributionally different from standard data. From a lexical point of view, note that \"what-\" and \"how-\" questions dominate in adversarial data collection (ADC) as opposed to \"who-\" and \"when-\" questions in the standard datasets. In the context of reading comprehension, DADC has been championed by Bartolo et al. (2020) , who observe that DADC QA datasets are generally syntactically and lexically more diverse, contain more paraphrases and comparisons, and often require multi-hop inference, especially implicit inference.",
"cite_spans": [
{
"start": 367,
"end": 388,
"text": "Bartolo et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Single Annotator Two-Annotator Three-Annotator Sessions Sessions Sessions Model fooled 45 21 19 5 Model not fooled 43 22 13 8 False negative 10 5 4 1 False positive 2 1 1 0 Total 100 49 37 14 Table 1 : Overall annotation results before verification.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 226,
"text": "Sessions Model fooled 45 21 19 5 Model not fooled 43 22 13 8 False negative 10 5 4 1 False positive 2 1 1 0 Total 100 49 37 14 Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Annotation Result Total",
"sec_num": null
},
{
"text": "Apart from corpus analyses, researchers have also noted certain limitations of the DADC paradigm. For instance, note that annotators overfitting on models might lead to cyclical progress and that the dynamically collected data might rely too heavily on the model used, which can potentially be mitigated by mixing in standard data. Similarly, find that DADC models do not respond well to distribution shifts and have problems generalizing to non-DADC tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Result Total",
"sec_num": null
},
{
"text": "In this paper, we present our experience as annotators in the reading comprehension shared task for the First Workshop on Dynamic Adversarial Data Collection. Through quantitative and qualitative analyses of a quasi-experimental annotation design, we discuss issues such as cost analysis, annotator confidence, perceived difficulty of the task in relation to the topics of the passages in our dataset, and the issues we encountered while interacting with the system, specifically in relationship with the commonly-used F1 word-overlap metric. We conclude with recommendations and suggestions that might be of value to people working on future DADC tasks and related annotation interfaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions",
"sec_num": null
},
{
"text": "Track 1 of the First Workshop on Dynamic Adversarial Data Collection consisted in generating 100 reading comprehension questions from a novel set of annotation passages while competing against the current state-of-the-art QA model , which would remain static throughout the task. Through Dynabench , 2 an annotation platform specialized in DADC, annotators would create model-fooling questions that could be answered with a continuous span of text. Successful attacks required the annotators to pro-2 https://dynabench.org/ vide explanations of the question and a hypothesis for the model's failure. These were then subject to a post hoc human validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "During our participation, we discovered two issues with the implementation of the metric used in Dynabench to decide whether the model had been fooled or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F1 metric and false negatives",
"sec_num": "2.1"
},
{
"text": "Dynabench uses a word-overlap metric to calculate the success of the model(s)' responses against those selected by the annotators . This metric is calculated as the F1 score of the overlapping words between the answer selected by the annotators and the answer predicted by the model, where model responses with a score above 40% are labeled as a successful answer for the model. For example, the answer \"New York\" would be considered equivalent to the answer \"New York City\" (Bartolo et al., 2020) .",
"cite_spans": [
{
"start": 475,
"end": 497,
"text": "(Bartolo et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "F1 metric and false negatives",
"sec_num": "2.1"
},
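{
"text": "To make the scoring rule above concrete, the following is a minimal sketch of a SQuAD-style token-overlap F1 in Python, assuming simple lowercasing and whitespace tokenization; Dynabench's exact implementation (e.g., its answer normalization) may differ.\n\nfrom collections import Counter\n\ndef word_overlap_f1(prediction, reference):\n    # Token-level F1 between the model's answer and the annotators' answer.\n    pred = prediction.lower().split()\n    ref = reference.lower().split()\n    common = Counter(pred) & Counter(ref)\n    num_same = sum(common.values())\n    if num_same == 0:\n        return 0.0\n    precision = num_same / len(pred)\n    recall = num_same / len(ref)\n    return 2 * precision * recall / (precision + recall)\n\n# With the 40% threshold described above, 'New York' vs. 'New York City'\n# (F1 = 0.8) counts as a successful answer for the model.\nassert word_overlap_f1('New York', 'New York City') > 0.4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F1 metric and false negatives",
"sec_num": "2.1"
},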
{
"text": "In practice, we observed that the F1 metric led to many false negatives, 3 or, in other words, to answers that were considered unsuccessful attacks from the annotators when, in reality, the model was wrong. This happened in two different circumstances. First, in the form of incomplete answers where critical information was missing from the model's answer, and the answer was still considered equivalent due to a sufficient word overlap, as in example A from Table 2 .",
"cite_spans": [
{
"start": 73,
"end": 74,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 460,
"end": 467,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "F1 metric and false negatives",
"sec_num": "2.1"
},
{
"text": "In this case, since \"Tinker Tailor Soldier Spy\" is a movie, it cannot be said that the first movie and the sequel are equivalent. This behavior was so common that we decided to turn it into an adversarial-attack strategy by forcing the model to provide full answers, which it could not do because of its strong bias towards short answers. For example, we asked questions such as \"What is the full location of the plot of the TV show?\", for which or 9:00 pm during DST the model tended to answer with the bare minimum of information due to being trained using the F1 word-overlap metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F1 metric and false negatives",
"sec_num": "2.1"
},
{
"text": "In other cases, the model selected a different text span than the one selected by the annotators, as in example B from Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "F1 metric and false negatives",
"sec_num": "2.1"
},
{
"text": "In this case, not only is the model's answer incomplete but 3:30 pm and 6:00 pm have entirely different meanings. Cases such as the one above occurred in passages that had two very similar strings in the text. In these cases, the F1 metric lead Dynabench to score in favor of the model even when the answer was incorrect. We believe that the answer provided by the annotators, in cases where annotators are hired as experts in a given domain, should be considered a gold standard subject to the validation process. In other cases, when annotations come from crowdsourcing platforms, the F1 metric could be more adequate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F1 metric and false negatives",
"sec_num": "2.1"
},
{
"text": "Our annotator roster consisted of three annotators with postgraduate degrees in linguistics and natural language processing. One of the annotators spoke English as a first language, while the other two were proficient speakers of English as a second language who completed their graduate degrees in English-speaking universities. For the annotation process, we set up a quasi-experimental design using convenience sampling where approximately half of the annotations would be performed by a single annotator (n=49), and the other half would be performed synchronously by a group of two or more annotators (n=51). Because the annotators live in different time zones, annotator groups did not remain consistent across group sessions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "During the annotation task, the platform randomly picked a passage, usually of the length of a short paragraph (of about 160 words on average) from different topics. Annotators could then choose to create questions for that passage or skip it entirely. Annotators skipped passages when we agreed that it would be difficult to create even a single question to fool the model. 4 Table 1 contains our overall annotation results by the number of annotators. We report our results using the following typology:",
"cite_spans": [
{
"start": 375,
"end": 376,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 377,
"end": 384,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Model fooled: Items marked by Dynabench as successful annotations. Model not fooled: Items marked by Dynabench as unsuccessful annotations. False negatives: Instances where the model was fooled, but Dynabench marked them as not fooled. 5 False positives: Items marked by Dynabench as successful annotations but deemed unsuccessful by the annotators. 6 Even though the limited number of examples does not allow us to draw any strong conclusions about the annotation task, we find our analyses worth presenting as a preliminary step for other annotators to further reflect on the annotation process during the planning stages of any DADC task. 7",
"cite_spans": [
{
"start": 236,
"end": 237,
"text": "5",
"ref_id": null
},
{
"start": 350,
"end": 351,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "In order to capture if we as annotators are increasingly improving our model-fooling skills, we investigate the progression of the \"model fooled / model not fooled\" ratio throughout the annotation sessions. Figure 1 summarizes the results.",
"cite_spans": [],
"ref_spans": [
{
"start": 207,
"end": 215,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model fooled ratio by annotator group",
"sec_num": "3.1"
},
{
"text": "For the single-annotator group, the progression seems apparent with a progressive fool ratio of 0, .44, .50, and .61. Sessions with two annotators do not have a clear progression (0.57, 0.60, 0.58, and 0.62), which may be because annotators did not remain the same in each session. The worst performance happened with the three-annotator sessions (0.50 and 0.33), which indicates a possible high degree of disagreement across annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model fooled ratio by annotator group",
"sec_num": "3.1"
},
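{
"text": "As a minimal sketch (not the code used for this paper), the per-session fool ratio discussed above can be computed from a session log along the following lines; the column names and values below are hypothetical.\n\nimport pandas as pd\n\n# Hypothetical log: one row per annotated question.\nlog = pd.DataFrame({\n    'group_size': [1, 1, 1, 2, 2, 2],\n    'session': [1, 1, 2, 1, 1, 2],\n    'outcome': ['fooled', 'not_fooled', 'fooled', 'fooled', 'not_fooled', 'fooled'],\n})\n\n# Fool ratio per group size and session (false negatives/positives excluded upstream).\nfool_ratio = (\n    log.assign(fooled=log['outcome'].eq('fooled'))\n       .groupby(['group_size', 'session'])['fooled']\n       .mean()\n)\nprint(fool_ratio)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model fooled ratio by annotator group",
"sec_num": "3.1"
},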
{
"text": "We investigate the efficiency of the different annotator groups by calculating the mean time per successful adversarial attack. Formally, we define annotation group efficiency E(g) as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation costs",
"sec_num": "3.2"
},
{
"text": "E(g) = k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation costs",
"sec_num": "3.2"
},
{
"text": "n t n \u00d7 a n N Where t n is the total time in seconds spent in annotation session n, k is the total number of sessions for annotator group g, a n is the number of annotators in the session, and N is the total number of successful adversarial attacks across all the annotation sessions for group g. Table 3 shows annotation efficiency in seconds. Total mean time 8738.42 Table 3 : Mean time in seconds spent per annotator for every successful adversarial attack across groups with different annotators.",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 3",
"ref_id": null
},
{
"start": 369,
"end": 376,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation costs",
"sec_num": "3.2"
},
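{
"text": "A minimal sketch of this computation in Python (our own illustration, not code from the shared task); the single-annotator figures reported in the next paragraph are used only as a rough sanity check.\n\ndef annotation_efficiency(sessions, n_successes):\n    # sessions: list of (t_n, a_n) pairs, i.e. session duration in seconds\n    # and number of annotators in that session; n_successes: total number\n    # of successful adversarial attacks N for the group.\n    return sum(t_n * a_n for t_n, a_n in sessions) / n_successes\n\n# Single-annotator group: 8h 58m 58s = 32,338 s over 21 successful attacks,\n# i.e. roughly 1,540 s (about 25-26 minutes) per attack.\nprint(annotation_efficiency([(32338, 1)], 21))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation costs",
"sec_num": "3.2"
},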
{
"text": "The single-annotator group took 8h 58' 58\" to create 21 model-fooling examples, rendering efficiency of 25' 37\" per successful attack. For the group annotations, the two-annotator sessions took 14h 39' 34\" to create 19 model-fooling examples, with an efficiency of 46' 17\", while the three-annotator sessions took 6h 8' 39\" to create five successful examples with an efficiency of 1h 13'. The total time spent on the task was 29h 46\" 11'. Figure 2 shows (in seconds) how the time increment is almost linear.",
"cite_spans": [],
"ref_spans": [
{
"start": 439,
"end": 447,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Annotation costs",
"sec_num": "3.2"
},
{
"text": "Lastly, to better understand why annotation times took longer when working in groups, we investigate the level of confidence agreement between annotators via correlation. To measure confidence agreement, annotators individually logged in confidence scores for all of the 100 questions in our dataset. The scores range between 0 and 3 points, with three being entirely confident that they would fool the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence scores",
"sec_num": "3.3"
},
{
"text": "We first test our data for normality using the \"normaltest\" function of the Python SciPy library (Virtanen et al., 2020) . After ensuring that normality tests came out negative across all annotators' ratings (p < 0.001), we used the Spearman rank correlation test (Figure 3) as implemented in the Python Pandas library (McKinney, 2010; Reback et al., 2022) . The fact that correlation coefficients range from weak to moderate supports our view that the lower efficiency in annotation costs might be due to differences in how annotators perceive how the model will evaluate their questions. This could lead to more debate during the synchronous annotation sessions. The lack of exponential time increase when more annotators are present, as was the case of the sessions with three annotators, may be due to the fact that annotators were often tired of the feeling of low-productivity of the sessions and were, at times, willing to risk questions without fully debating them.",
"cite_spans": [
{
"start": 97,
"end": 120,
"text": "(Virtanen et al., 2020)",
"ref_id": null
},
{
"start": 319,
"end": 335,
"text": "(McKinney, 2010;",
"ref_id": "BIBREF7"
},
{
"start": 336,
"end": 356,
"text": "Reback et al., 2022)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 264,
"end": 274,
"text": "(Figure 3)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Confidence scores",
"sec_num": "3.3"
},
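{
"text": "A minimal sketch of this analysis, assuming the per-annotator confidence scores are held in a data frame; the values below are hypothetical placeholders, not our actual ratings.\n\nimport pandas as pd\nfrom scipy import stats\n\nscores = pd.DataFrame({\n    'annotator_1': [3, 2, 1, 3, 0, 2, 3, 1, 2, 2],\n    'annotator_2': [2, 2, 0, 3, 1, 1, 3, 0, 2, 3],\n    'annotator_3': [3, 1, 1, 2, 0, 2, 2, 1, 3, 2],\n})\n\n# D'Agostino-Pearson normality test per annotator (scipy.stats.normaltest);\n# it requires at least 8 observations and warns below 20 (the real data had 100).\nfor col in scores:\n    stat, p = stats.normaltest(scores[col])\n    print(col, p)\n\n# Spearman rank correlation matrix across annotators, as implemented in pandas.\nprint(scores.corr(method='spearman'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence scores",
"sec_num": "3.3"
},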
{
"text": "The relative difficulty of a dynamic adversarial dataset creation task may vary partly as a function of the genre and specific topic of the text passages from which question-answer pairs are drawn. During the shared task, Dynabench randomly assigned passages for the creation of question-answer pairs, revealing several important aspects of this challenge. Topics of the passages used in our data vary as shown by the success by topic scores in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 445,
"end": 452,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "4"
},
{
"text": "Our more successful questions came from music, science, and technology topics. On the one hand, we are more familiar with these topics than comics, sports, and video games. Furthermore, the paragraphs in literature and music tended to be more narrative in nature which, we believe, also made it easier for us to process them and create better questions. Data-heavy, enumeration-based paragraphs typical of sports, history, and TV and movies topics proved more challenging for the creation of model-fooling questions. Still, further examination is necessary to understand each of these possibilities separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "4"
},
{
"text": "A closer examination of the DADC task included evaluating the success of different strategies for creating questions. Overall, the model successfully answered questions about dates and names, as well as questions that could be answered with a single short phrase, especially if that phrase was produced as an appositive. For example, asking \"Which political associate of Abraham Lincoln was aware of his illness while traveling from Washington DC to Gettysburg?\" allowed the model to select a name as the answer, which it did with a high degree of success, even when multiple distractor names appeared in the same paragraph. On the other hand, formulating questions that required longer answers, especially questions that asked for both \"what\" and \"why\", frequently fooled the model. Furthermore, requiring references to multiple non-contiguous portions of the passage to make predictions also often fooled the model. Still, using synonymous words or phrases or similar sentence structures to the critical portions of the passage allowed the model to make correct predictions, even when these other strategies may have fooled it under different circumstances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "4"
},
{
"text": "Based on the experience with DADC shared task Track 1, we recommend several strategies to improve the efficiency of data collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We found that allowing annotators to run \"dry\" trials before starting data collection, as done by the organizers of the DADC Shared Task, might help them form initial hypotheses about the potential weaknesses of the model and what strategies could be helpful to fool it, e.g., targeting different capabilities such as NER or coreference resolution. Additionally, it could be possible that once annotators are familiarized with the task and understand what examples have a better chance of fooling the model, productivity between multiple annotators might increase as their confidence starts to align.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimenting with the task",
"sec_num": "5.1"
},
{
"text": "We believe it may be significantly easier to come up with good-quality questions if the annotators are familiar with the domain of the contexts. Not only can they read and understand the paragraphs faster, but it is easier to abstract from the immediate context and, thus, ask more challenging questions. Annotation managers of campaigns with heterogeneous datasets might want to consider recruiting experts for technical or specific sub-domains and crowdsourced workers for those texts consisting of general knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Familiarity with the domain",
"sec_num": "5.2"
},
{
"text": "Keeping a rough track of what annotation strategies worked best proved useful to us during annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Having a list of strategies",
"sec_num": "5.3"
},
{
"text": "As an example of the types of strategies that annotators can keep track of and implement, below we list the strategies we favored for creating modelfooling questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Having a list of strategies",
"sec_num": "5.3"
},
{
"text": "1. Play with the pragmatics of the question, for instance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Having a list of strategies",
"sec_num": "5.3"
},
{
"text": "Question: What is the full location of the plot of this TV show?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Having a list of strategies",
"sec_num": "5.3"
},
{
"text": "Annotators' answer: A mysterious island somewhere in the South Pacific Ocean",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Having a list of strategies",
"sec_num": "5.3"
},
{
"text": "Model's answer: South Pacific Ocean Explanation: The model is biased towards the shortest answer, which does not always cover the information human need as an answer (Grice's principle of quantity)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Having a list of strategies",
"sec_num": "5.3"
},
{
"text": "2. Change the register, e.g., ask a question as a five-year-old would.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Having a list of strategies",
"sec_num": "5.3"
},
{
"text": "3. Whenever possible, ask a question that requires a holistic understanding of the whole paragraph (not just a particular sentence).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Having a list of strategies",
"sec_num": "5.3"
},
{
"text": "4. Ask questions that require common sense reasoning, e.g., about the causes and effects of events. 5. Ask questions about entities that appear multiple times or have multiple instances in the paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Having a list of strategies",
"sec_num": "5.3"
},
{
"text": "Another practice that can help is to work in teams whereby annotators would come up with questions in isolation and then rank and further modify them in a brainstorming session. In our experience, having two annotators in one session was almost as efficient as having only one annotator and made the task more engaging,ludic and, consequently, less tedious, potentially reducing the risk of burnout syndrome.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussing created prompts with other annotators",
"sec_num": "5.4"
},
{
"text": "Because DADC annotation applies to NLI and QA datasets , we believe that specific considerations would be necessary for future projects that make use of a dedicated DADC interface, including the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Suggestions for future DADC annotation interfaces",
"sec_num": "5.5"
},
{
"text": "\u2022 Given that one of the issues we observed was that many of the successful questions were unnatural and, thus, probably, not helpful for real-life scenarios, annotation platforms could include a naturality score to encourage annotators to create data that will be used in real-world scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Suggestions for future DADC annotation interfaces",
"sec_num": "5.5"
},
{
"text": "\u2022 Because the word-overlap F1 threshold seems to vary depending on what is enough information and the appropriate information needed to answer specific questions, we believe that a language model could be trained to replace or aid the F1 metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Suggestions for future DADC annotation interfaces",
"sec_num": "5.5"
},
{
"text": "\u2022 Annotation interfaces could also help annotators by displaying relevant visualizations of the training data so that annotators could try to fool the model in those cases where the model contains little or no data. For example, Bartolo et al. (2020, pp. 17-19) provide bar plots and sunburst plots 8 of question types and answer types for each of their modified datasets. We believe that displaying such visualizations to the annotators in a targeted way could potentially increase their performance while also helping balance the creation of datasets.",
"cite_spans": [
{
"start": 229,
"end": 261,
"text": "Bartolo et al. (2020, pp. 17-19)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Suggestions for future DADC annotation interfaces",
"sec_num": "5.5"
},
{
"text": "\u2022 Finally, we believe that augmenting the interface with functionality for storing and managing annotation strategies such as the ones mentioned above, together with their rate of effectiveness, could make the annotation process more efficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Suggestions for future DADC annotation interfaces",
"sec_num": "5.5"
},
{
"text": "Beyond any of the suggestions above, we believe that the DADC has certain limitations that annotation campaigns should be aware of. In our experience in the context of this extractive QA task, we found it extremely difficult to fool the model, primarily because of its powerful lexical and syntactic reasoning capabilities. This was partly because we were constrained to create questions that a continuous string of text could answer. In many cases, we relied on very complex lexical and syntactic inferences (e.g., violating syntactic islands), which often led to unnatural questions that were unlikely to appear in the real world.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final considerations",
"sec_num": "5.6"
},
{
"text": "The problem of creating model-fooling examples has already been acknowledged in previous research (Bartolo et al., 2020; and is generally addressed by either providing question templates to edit or mixing questions from other \"more naturally-distributed\" datasets. We want to draw the attention of anyone wishing to apply DADC to their problem of this risk. note that applying DADC for generative QA is not a straightforward task. However, it is perhaps in generative tasks where DADC could offer more value. Given how powerful the SOTA models are, the DADC extractive datasets seem doomed to be eventually skewed towards long and unnatural examples. This is one of ours: \"Despite knowledge of which fact does Buffy still allow herself to pass at the hands of an",
"cite_spans": [
{
"start": 98,
"end": 120,
"text": "(Bartolo et al., 2020;",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Final considerations",
"sec_num": "5.6"
},
{
"text": "Notice that, from a model-evaluation perspective, these would be considered false positives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We did not keep track of the passages we skipped.5 Mainly due to F1 score problems.6 There are two of these in the dataset and they were products of mistakes the annotators made when selecting the answers on Dynabench that lead to a mismatch between the question asked and the answer given to the model.7 The code for our analyses can be found at https://gi thub.com/fireworks-ai/conference-papers/ tree/master/naacl-dadc-2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dataset statistics are only available in the pre-print version of their paper, available at: https://arxiv.org/abs/2002.00293 enemy, protecting the one to whom the fact relates by doing so?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the organizers and sponsors of the first DADC shared task, especially Max Bartolo, who was in direct contact with us and provided us with the data we needed for our analyses. We would also like to thank Dr. Anders S\u00f8gaard for his valuable insights during the revision of this article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sebastian Riedel, and Pontus Stenetorp. 2020. Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Bartolo",
"suffix": ""
},
{
"first": "Alastair",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "8",
"issue": "",
"pages": "662--678",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00338"
]
},
"num": null,
"urls": [],
"raw_text": "Max Bartolo, Alastair Roberts, Johannes Welbl, Sebas- tian Riedel, and Pontus Stenetorp. 2020. Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension. Transactions of the Associ- ation for Computational Linguistics, 8:662-678.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving question answering model robustness with synthetic adversarial data generation",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Bartolo",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Thrush",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "8830--8848",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.696"
]
},
"num": null,
"urls": [],
"raw_text": "Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela. 2021. Improving question answering model robustness with synthetic adversarial data generation. In Proceedings of the 2021 Conference on Empirical Methods in Nat- ural Language Processing, pages 8830-8848, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs",
"authors": [
{
"first": "Dheeru",
"middle": [],
"last": "Dua",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2368--2378",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1246"
]
},
"num": null,
"urls": [],
"raw_text": "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requir- ing discrete reasoning over paragraphs. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368-2378, Min- neapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "English machine reading comprehension datasets: A survey",
"authors": [
{
"first": "Daria",
"middle": [],
"last": "Dzendzik",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "8784--8804",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.693"
]
},
"num": null,
"urls": [],
"raw_text": "Daria Dzendzik, Jennifer Foster, and Carl Vogel. 2021. English machine reading comprehension datasets: A survey. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8784-8804, Online and Punta Cana, Domini- can Republic. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2021--2031",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1215"
]
},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "On the efficacy of adversarial data collection for question answering: Results from a large-scale randomized study",
"authors": [
{
"first": "Divyansh",
"middle": [],
"last": "Kaushik",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "6618--6633",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.517"
]
},
"num": null,
"urls": [],
"raw_text": "Divyansh Kaushik, Douwe Kiela, Zachary C. Lipton, and Wen-tau Yih. 2021. On the efficacy of adversar- ial data collection for question answering: Results from a large-scale randomized study. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6618-6633, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dynabench: Rethinking benchmarking in NLP",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Bartolo",
"suffix": ""
},
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Divyansh",
"middle": [],
"last": "Kaushik",
"suffix": ""
},
{
"first": "Atticus",
"middle": [],
"last": "Geiger",
"suffix": ""
},
{
"first": "Zhengxuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Grusha",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Pratik",
"middle": [],
"last": "Ringshia",
"suffix": ""
},
{
"first": "Zhiyi",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Thrush",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4110--4124",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.324"
]
},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vid- gen, Grusha Prasad, Amanpreet Singh, Pratik Ring- shia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 4110-4124, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Data Structures for Statistical Computing in Python",
"authors": [
{
"first": "Wes",
"middle": [],
"last": "Mckinney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 9th Python in Science Conference",
"volume": "",
"issue": "",
"pages": "56--61",
"other_ids": {
"DOI": [
"10.25080/Majora-92bf1922-00a"
]
},
"num": null,
"urls": [],
"raw_text": "Wes McKinney. 2010. Data Structures for Statistical Computing in Python. In Proceedings of the 9th Python in Science Conference, pages 56 -61.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adversarial NLI: A new benchmark for natural language understanding",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4885--4901",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.441"
]
},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language under- standing. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Model fooled ratio by annotator group by session. False negatives and false positives are excluded. Missing sessions had a score of zero.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Mean time in seconds spent per annotator for every successful adversarial attack across groups with different annotators.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Correlation heatmap of annotators' confidence metrics through the full dataset.",
"uris": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": "Examples of questions, model answers, and annotators' answers in the data creation procedure. All question-answer examples are adapted from the Dynabench dataset."
},
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": "Number of times our questions fooled the model out of the total number of questions we generated for each passage topic in our dataset. False negatives and false positives are included in the total number of items."
}
}
}
}