{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:13:08.050328Z"
},
"title": "Q. Can Knowledge Graphs be used to Answer Boolean Questions? A. It's complicated!",
"authors": [
{
"first": "Daria",
"middle": [],
"last": "Dzendzik",
"suffix": "",
"affiliation": {},
"email": "daria.dzendzik@adaptcentre.ie"
},
{
"first": "Carl",
"middle": [],
"last": "Vogel",
"suffix": "",
"affiliation": {},
"email": "vogel@tcd.ie"
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": "",
"affiliation": {},
"email": "jennifer.foster@dcu.ie"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we explore the problem of machine reading comprehension, focusing on the BoolQ dataset of Yes/No questions. We carry out an error analysis of a BERT-based machine reading comprehension model on this dataset, revealing issues such as unstable model behaviour and some noise within the dataset itself. We then experiment with two approaches for integrating information from knowledge graphs: (i) concatenating knowledge graph triples to text passages and (ii) encoding knowledge with a Graph Neural Network. Neither of these approaches show a clear improvement and we hypothesize that this may be due to a combination of inaccuracies in the knowledge graph, imprecision in entity linking, and the models' inability to capture additional information from knowledge graphs.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we explore the problem of machine reading comprehension, focusing on the BoolQ dataset of Yes/No questions. We carry out an error analysis of a BERT-based machine reading comprehension model on this dataset, revealing issues such as unstable model behaviour and some noise within the dataset itself. We then experiment with two approaches for integrating information from knowledge graphs: (i) concatenating knowledge graph triples to text passages and (ii) encoding knowledge with a Graph Neural Network. Neither of these approaches show a clear improvement and we hypothesize that this may be due to a combination of inaccuracies in the knowledge graph, imprecision in entity linking, and the models' inability to capture additional information from knowledge graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "1 Introduction Clark et al. (2019) explore the difficulty of Yes/No questions and introduce the BoolQ dataset which contains 16k questions based on real Google user queries, paired by crowdworkers with passages from Wikipedia. They establish a strong baseline using BERT large and transfer learning from the Multi-Genre Natural Language Inference (MNLI) task (Williams et al., 2018) .",
"cite_spans": [
{
"start": 15,
"end": 34,
"text": "Clark et al. (2019)",
"ref_id": "BIBREF2"
},
{
"start": 359,
"end": 382,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this work, we carry out an error analysis of 200 samples from the BERT large + M N LI baseline model and find out that 77% constitute genuine model errors, almost 6% of samples contain an incorrect answer tag, and 8% do not contain enough evidence to answer the question. The remaining 9% we classified as difficult questions as they involve deep understanding, reasoning, specific knowledge, and sometimes depend on opinion. Due to the unstable behaviour of the model, error samples vary from run to run, where a run refers to the pipeline of MNLI pre-training, BoolQ fine-tuning, and evaluation of the model. We introduce a stable accuracy metric to evaluate a system across multiple runs with the same hyperparameters. Stable accuracy over n runs refers to the proportion of questions that are always correctly answered. We observed a 3.3% and an 11% drop of stable accuracy over 2 and 10 runs respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Next we turn our attention to improving machine reading comprehension (MRC) system performance. We hypothesize the system might benefit from additional information about entities and/or relations between the entities, in the question and passage. Consider, for example, (1) where pei is an abbreviation of Prince Edward Island.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "(1) Question: is anne with an e filmed on pei Passage: The series is filmed partially in Prince Edward Island as well as ... We propose and evaluate two approaches for augmenting questions and answers with KG information: (1) concatenating the model input with sentences constructed from ConceptNet triples 1 (Speer et al., 2017) ; and (2) encoding KG entities and relations with the Graph Neural Network (GNN) proposed by Shaw et al. (2019) , a model suited to graph-based input. Neither approach shows a significant improvement over the baseline. 2 A Closer Look at the BoolQ Baseline",
"cite_spans": [
{
"start": 309,
"end": 329,
"text": "(Speer et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 423,
"end": 441,
"text": "Shaw et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We manually analyse 200 errors made by one run of the baseline system (33% of one-run errors) and discover that 6% of them involve an incorrect answer tag and another 8% involve confusing passages which do not give enough support for the answer (see Appendix B for examples). Table 1 shows a categorization of the errors according to the reasoning types provided by Clark et al. (2019) . The majority of errors belongs to the Paraphrasing type (48.5%). In these cases, the answer is in the passage and only a minimum amount of extra knowledge and reasoning is required to answer the question. The Implicit and Missing Mention types account for 19.5% and 14% of errors respectively. Only about 3.5% of incorrectly answered questions require an understanding of examples given in the passage, 6% requrie factual reasoning, and 8% require other inference.",
"cite_spans": [
{
"start": 366,
"end": 385,
"text": "Clark et al. (2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 276,
"end": 283,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "2.1"
},
{
"text": "We reproduce the results of the baseline BERT large + M N LI model released by Clark et al. (2019) . 2 Its accuracy is between 80% and 82% ( Fig. 1 (a) G) with an average 81.41% accuracy over 10 runs (vs. 82.2% reported in Clark et al. (2019) ). Our error analysis shows that a significant portion of the correctly answered questions varies from run to run together with around 40% of errors.",
"cite_spans": [
{
"start": 79,
"end": 98,
"text": "Clark et al. (2019)",
"ref_id": "BIBREF2"
},
{
"start": 200,
"end": 242,
"text": "(vs. 82.2% reported in Clark et al. (2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 141,
"end": 147,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Stable Accuracy",
"sec_num": "2.2"
},
{
"text": "We define the ratio of the number of correctly answered questions across n runs to the total number of questions as stable accuracy. Formally, if Q is the set of all questions and Q i correct is the set of correctly answered questions at the i th run, the stable accuracy after n runs is defined as (2):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stable Accuracy",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "StableAccuracy n = | \u2229 n i=0 Q i correct | |Q|",
"eq_num": "(2)"
}
],
"section": "Stable Accuracy",
"sec_num": "2.2"
},
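{
"text": "To make Eq. (2) concrete, the following is a minimal Python sketch of the stable accuracy computation (variable names are hypothetical; it assumes one boolean correctness vector per run):\n\nimport numpy as np\n\ndef stable_accuracy(correct_per_run):\n    # correct_per_run: list of boolean arrays, one per run; entry j is\n    # True iff question j was answered correctly in that run.\n    runs = np.array(correct_per_run)   # shape: (n_runs, n_questions)\n    always_correct = runs.all(axis=0)  # intersection over the runs\n    return always_correct.mean()\n\n# Two runs over four questions: only the questions answered correctly in\n# both runs count, so stable accuracy is 2/4 = 0.5.\n# stable_accuracy([[True, True, False, True], [True, False, False, True]])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stable Accuracy",
"sec_num": "2.2"
},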
{
"text": "The stable accuracy over 10 runs drops to 71% (see Fig 1 ( up to 10 runs ( Fig. 1 (a) , L) does not outperform the baseline: the values are within the range of 78.09% and 81.77%. 3 We repeat the experiment using the robustly optimized RoBERT a large model implemented by Wolf et al. (2019) and fine tuned on the MNLI task. This model has a better average accuracy (83.7)% but it is also more unstable: the stable accuracy drops to 64.0% (see Fig. 1 (b) ). As with the BERT model, ensembling over 10 runs does not give a performance boost.",
"cite_spans": [
{
"start": 179,
"end": 180,
"text": "3",
"ref_id": null
},
{
"start": 271,
"end": 289,
"text": "Wolf et al. (2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Fig 1 (",
"ref_id": "FIGREF1"
},
{
"start": 75,
"end": 85,
"text": "Fig. 1 (a)",
"ref_id": "FIGREF1"
},
{
"start": 442,
"end": 452,
"text": "Fig. 1 (b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Stable Accuracy",
"sec_num": "2.2"
},
{
"text": "This observed behavior means that the system performs well on each run but every time it performs well on a different set of questions. This might be related to the notion of \"forgettable\" examples described by Toneva et al. (2019) . The difference is that they discovered the ability of models to forget the learned examples during the training phase, while we examine stable and unstable examples when the training is finished.",
"cite_spans": [
{
"start": 211,
"end": 231,
"text": "Toneva et al. (2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stable Accuracy",
"sec_num": "2.2"
},
{
"text": "Our manual inspection of the results of one baseline system run reveals that approximately 20% of erroneous cases are questions involving some property of an entity or concept, or some hierarchical relationship between entities. An example of the former is (3) and the latter is (4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Knowledge Graph Data",
"sec_num": "3"
},
{
"text": "(3) is i 80 in indiana a toll road (4) is college of william and mary an ivy league school?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Knowledge Graph Data",
"sec_num": "3"
},
{
"text": "We hypothesize that adding knowledge graph data could help in answering such questions, as well as examples such as (1) and (5) below where the entity in the question is referred to using a different name in the passage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Knowledge Graph Data",
"sec_num": "3"
},
{
"text": "(5) Question: does smeagol die in lord of the rings Passage: ... Gollum finally ... but he fell into the fires of the volcano, where both he and the Ring were destroyed. Answer: Yes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Knowledge Graph Data",
"sec_num": "3"
},
{
"text": "We use the CloudAPI 4 to annotate text with tokens, part of speech tags, named entities with Freebase 5 KG identifiers (MIDs), numbers, dates and VerbNet 6 roles which can be used for establishing relations between entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Knowledge Graph Data",
"sec_num": "3"
},
{
"text": "ConceptNet (Liu and Singh, 2004; Speer et al., 2017) is an open semantic network based on DB-Pedia, Wiktionary, WordNet, and other resources. It captures common-sense knowledge and was created for computers to understand words and concepts in the same way people do. It was particularly designed to be used by NLP applications and widely used in MRC (Weissenborn et al., 2017; Bauer et al., 2018; Mihaylov and Frank, 2018; Lin et al., 2019; Qiu et al., 2019) . Partly inspired by Weissenborn et al. 2017, we convert ConceptNet relations into sentences but instead of embedding them independently, we concatenate them to the baseline model input.",
"cite_spans": [
{
"start": 11,
"end": 32,
"text": "(Liu and Singh, 2004;",
"ref_id": "BIBREF6"
},
{
"start": 33,
"end": 52,
"text": "Speer et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 350,
"end": 376,
"text": "(Weissenborn et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 377,
"end": 396,
"text": "Bauer et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 397,
"end": 422,
"text": "Mihaylov and Frank, 2018;",
"ref_id": "BIBREF8"
},
{
"start": 423,
"end": 440,
"text": "Lin et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 441,
"end": 458,
"text": "Qiu et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extending Passages with ConceptNet",
"sec_num": "3.1"
},
{
"text": "ConceptNet has 34 relation types. 7 Each relation has start and end entities and a strength of relation (relevance weight). We look up every annotated entity from questions and passages in ConceptNet. We extract the top 100 relations according to the relevance weight, and select those where both the start and end entities are in English. We remove relations that are not useful, such as \"External URLs\", or too broad such as \"FormOf\". Then we transform ConceptNet relations into simple sentences based on the relation description or, if there is no description, we create a string:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Extraction and Filtering",
"sec_num": "3.1.1"
},
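{
"text": "A minimal sketch (with a hypothetical record layout) of this filtering step: keep the top 100 relations by relevance weight, require English start and end entities, and drop unhelpful or overly broad relation types:\n\nDROP = {\"ExternalURL\", \"FormOf\"}\n\ndef filter_relations(relations):\n    # relations: dicts like {\"rel\": \"IsA\", \"start_lang\": \"en\",\n    #                        \"end_lang\": \"en\", \"weight\": 2.0}\n    top = sorted(relations, key=lambda r: r[\"weight\"], reverse=True)[:100]\n    return [r for r in top\n            if r[\"start_lang\"] == \"en\" and r[\"end_lang\"] == \"en\"\n            and r[\"rel\"] not in DROP]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Extraction and Filtering",
"sec_num": "3.1.1"
},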
{
"text": "[entity1] [relation] [entity2]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Extraction and Filtering",
"sec_num": "3.1.1"
},
{
"text": ", e.g. the \"panda is near a bamboo forest\" string is created from entites: \"panda\", \"bamboo forest\" and the relation \"LocatedNear\". Fig. 2 shows a ConceptNet entity from example (1). The verbalized triples such as \"pei is a synonym of Prince Edward Island\" are prepended to the text passage.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 138,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Sentence Extraction and Filtering",
"sec_num": "3.1.1"
},
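{
"text": "A minimal sketch of the triple verbalization (the templates are hypothetical stand-ins for the relation descriptions):\n\nTEMPLATES = {\n    \"LocatedNear\": \"{0} is near a {1}\",\n    \"Synonym\": \"{0} is a synonym of {1}\",\n    \"IsA\": \"{0} is a type of {1}\",\n}\n\ndef verbalize(start, relation, end):\n    # fall back to the plain \"[entity1] [relation] [entity2]\" string when\n    # no description-based template is available\n    template = TEMPLATES.get(relation, \"{0} \" + relation + \" {1}\")\n    return template.format(start, end)\n\n# verbalize(\"panda\", \"LocatedNear\", \"bamboo forest\")\n# -> \"panda is near a bamboo forest\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Extraction and Filtering",
"sec_num": "3.1.1"
},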
{
"text": "Since such new sentences can add noise (see polyetherimide examples in Fig. 2 ) and a long input might confuse the model (Thayaparan et al., 2019) , we aim to add extra sentences to the passages only if it is relevant and can better \"explain\" the nature of entities. To select those, we rank all extracted sentences S according to the sum of their similarities with the question q and passage p as shown in (6):",
"cite_spans": [
{
"start": 121,
"end": 146,
"text": "(Thayaparan et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 71,
"end": 77,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Sentence Extraction and Filtering",
"sec_num": "3.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2200s \u2208 S : score(s) = g(k(s), k(q))+g(k(s), k(p))",
"eq_num": "(6)"
}
],
"section": "Sentence Extraction and Filtering",
"sec_num": "3.1.1"
},
{
"text": "where g \u2208 {correlation, cosine} are similarity measures, k is a semantic embedding function. We use the semantic textual similarity model 8 proposed by . To filter more examples, we add an empirically tuned threshold for similarities 9 and select only those sentences which were ranked as the most similar to the question and passage by both correlation (inner product) and cosine similarity, and each score is higher than the established thresholds. Another method of selecting relevant sentences is to consider only the relations which connect an entity in the question to an entity in the passage. We then combine these two strategies: we add sentences only to the examples which meet both criteria (Intersection) or all that meet at least one of the criteria (Union). Table 2 shows the results averaged over 5 runs. With threshold filtering we add sentences to 21.84% 9 of passages, obtaining an average accuracy of 81.23% (see Table 2 : SentEmb). Using entity relations from questions and answers, 22.58% of QA pairs are affected but the performance is slightly worse (see Table 2 : Q&P Match).",
"cite_spans": [],
"ref_spans": [
{
"start": 772,
"end": 779,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 932,
"end": 939,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 1078,
"end": 1085,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Sentence Extraction and Filtering",
"sec_num": "3.1.1"
},
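{
"text": "A minimal sketch of the Eq. (6) scoring and the threshold filter (the vectors stand in for the embeddings k(.); the thresholds are the empirically tuned values from footnote 9):\n\nimport numpy as np\n\ndef score(s_vec, q_vec, p_vec, g):\n    # Eq. (6): similarity to the question plus similarity to the passage\n    return g(s_vec, q_vec) + g(s_vec, p_vec)\n\ninner = np.dot\n\ndef cosine(a, b):\n    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))\n\ndef keep(s_vec, q_vec, p_vec, t_inner=220.0, t_cos=1.38):\n    # a sentence is kept only if both scores clear their thresholds\n    return (score(s_vec, q_vec, p_vec, inner) > t_inner and\n            score(s_vec, q_vec, p_vec, cosine) > t_cos)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Extraction and Filtering",
"sec_num": "3.1.1"
},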
{
"text": "The intersection gives the best performance. By affecting only 1.23% of the data, we obtain 81.46% average accuracy and 82.05% accuracy for the ensemble majority voting scenario. The Union criterion does not show any improvement on accuracy. The Intersection improvement, as well as the disimprovement of SentEmb, Q&PMatch, and Union, are not statistically significant with respect to the baseline. 10 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.1.2"
},
{
"text": "Facing instability of the BERT-based baseline and low coverage of ConceptNet (see Section 4) we experiment with a new architecture and knowledge graph. To better model graph-based input, such as entities and their relations, we tried a transformerbased seq2seq GNN (Shaw et al., 2019) . Entities, relations and input tokens are embedded and fed to a GNN sub-layer that incorporates edge representations extending the self-attention mechanism. The encoder-decoder attention layer considers both encoder output token and entity representations, jointly normalizing attention weights over tokens and entities. In our case, the GNN decoder simply outputs our expected answers: \"Yes\" or \"No\" (see Fig. 3 ). In this case, we initialize the GNN with a pre-trained BERT large model and only fine tune on BoolQ.",
"cite_spans": [
{
"start": 265,
"end": 284,
"text": "(Shaw et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 692,
"end": 698,
"text": "Fig. 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Modeling Knowledge Graphs with GraphNNs",
"sec_num": "3.2"
},
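{
"text": "A minimal single-head NumPy sketch, heavily simplified from the model described above, of self-attention extended with edge representations:\n\nimport numpy as np\n\ndef relation_aware_attention(X, E):\n    # X: (n, d) token/entity representations; E: (n, n, d) edge\n    # representations, zero vectors where no relation holds.\n    n, d = X.shape\n    scores = np.empty((n, n))\n    for i in range(n):\n        for j in range(n):\n            # the key for the pair (i, j) is shifted by the edge representation\n            scores[i, j] = X[i] @ (X[j] + E[i, j]) / np.sqrt(d)\n    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))\n    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax\n    return weights @ X",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Knowledge Graphs with GraphNNs",
"sec_num": "3.2"
},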
{
"text": "As an alternative to ConceptNet we also tried the Google Knowledge Graph. It has more than 500 billion facts about 5 billion entities. 11 The entities describe real-world objects and concepts like 10 According to the two sample proportion Z-Test the maximum difference: z = \u22121.3674, p = 0.17068 11 https://blog.google/products/search/ about-knowledge-graph-and-knowledge-panels/ -l.v. 07/2020 people, places, events, and things. Entities are represented as nodes and connected by relations. The latter can simply indicate that a relation is present, or they may encode the type of relation. We try the first three of the following possible experiments:",
"cite_spans": [
{
"start": 197,
"end": 199,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Knowledge Graphs with GraphNNs",
"sec_num": "3.2"
},
{
"text": "1. adding a relation between different entities which have the same MID; 2. only adding connections between entities across the QA pair, as in the ConceptNet Q&P Match experiment; 3. distinguishing different types of relations; 4. adding a relation between different mentions of the same entity; 5. adding entities not mentioned in the text but linked to the mentioned entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Knowledge Graphs with GraphNNs",
"sec_num": "3.2"
},
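{
"text": "A minimal sketch of the edge construction for experiments 1 and 2 (the mention records are hypothetical): connect mentions that share a Freebase MID, optionally only across the question/passage boundary:\n\ndef build_edges(mentions, cross_only=False):\n    # mentions: dicts like {\"id\": 0, \"mid\": \"/m/0f8l9c\", \"source\": \"question\"}\n    edges = []\n    for a in mentions:\n        for b in mentions:\n            if a[\"id\"] >= b[\"id\"] or a[\"mid\"] != b[\"mid\"]:\n                continue\n            if cross_only and a[\"source\"] == b[\"source\"]:\n                continue\n            edges.append((a[\"id\"], b[\"id\"], \"same_mid\"))\n    return edges",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Knowledge Graphs with GraphNNs",
"sec_num": "3.2"
},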
{
"text": "The results are presented in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2.1"
},
{
"text": "ConceptNet Even after the filtering described in Section 3.1.1, we observe that often the relations from ConceptNet are too general and do not add new information, e.g. \"cookie jar is a type of jar\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "Such relations are already part of the language model. Petroni et al. (2019) show that BERT contains relational knowledge and has a strong ability to recall factual knowledge without fine-tuning. Furthermore, some entities are missing, e.g. there is a \"Tom Hanks\" entity but no \"Meg Ryan\" entity, or the entity \"dragon ball\" contains only non-English connections, confirming the general coverage issue of KGs. 12 Sensitivity We observe that the GNN is sensitive to the learning rate and hyper-parameters. Better tuning may compensate for the difference in performance wrt to the BERT baseline.",
"cite_spans": [
{
"start": 55,
"end": 76,
"text": "Petroni et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 405,
"end": 412,
"text": "KGs. 12",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "We found issues with the entity linker. Named entities are often not covered or the MID is missing. In some cases, the entity has a wrong MID, e.g. in (7) the entity \"northern ireland\" is not recognised but the entity \"ireland\" (Republic of Ireland) is mentioned instead, while the entity \"great britain\" is recognised with the MID of \"United Kingdom\". ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity recognition and linker",
"sec_num": null
},
{
"text": "We observe a positive tendency towards stable correct answers in the ConceptNet experiments (Table 4 ). The number of new stable correct answers is higher than the number of new stable errors for all settings except Q&AMatch. Also, for all scenarios except Intersection, the number of questions where the predicted answer fluctuates from incorrect to correct is higher than the number of questions where the predicted answer fluctuates from correct.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 100,
"text": "(Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Do KGs affect stable accuracy?",
"sec_num": null
},
{
"text": "Is a KG necessary? The BoolQ dataset was not originally created to be used with a KG, and the passages were selected such that they contain the information required to answer a question. For some questions, such as (1) the additional information provided by a KG is helpful, and for questions like (7), even though the passage has all the required information, a KG could highlight the relation between entities and help answer the question. However, there are also cases where a KG is not needed or cannot be applied, e.g. (8) and (9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Do KGs affect stable accuracy?",
"sec_num": null
},
{
"text": "(8) Question: do all ni numbers have a letter at the end Passage: The format of the number is two prefix letters, six digits, and one suffix letter. The example used is typically QQ123456C. ... Answer: Yes (9) Question: was the movie insomnia based on a book Passage: Robert Westbrook adapted the screenplay to novel form, which was published by Alex in May 2002. Answer: No",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Do KGs affect stable accuracy?",
"sec_num": null
},
{
"text": "In (8) a question is asked about a number format and the information about the specific last symbol is unlikely to be a part of a KG. (9) contains a very short passage explicitly saying there is a book but it was adapted from the screenplay. In this case, a KG could provide potentially confusing information simply stating that there is a book.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Do KGs affect stable accuracy?",
"sec_num": null
},
{
"text": "In this work, we take a closer look at a BERT baseline system on the BoolQ dataset, which reveals some inconsistencies in the data and some instability in the model. We try two approaches to integrating knowledge graph information, one based on augmenting the passage text and another using a Graph Neural Network. Neither are successful. One culprit is the lack of coverage of Con-ceptNet and another is related to accuracy of the entity recognition. We also suggest that the number of questions where suitable KG data is needed and could be found might just not be enough for the models to learn from.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The BoolQ dataset (Clark et al., 2019 ) is a part of the SuperGLUE benchmark 13 (Wang et al., 2019) . About 3000 question and passages come from Nat-uralQuestion . The main statistics about the dataset is collected in Table 5 . Clark et al. (2019) showed the BERT large model outperforming recurrent models with attention , both in their vanilla version and in combination with deep contextualized word representation (Peters et al., 2018) .",
"cite_spans": [
{
"start": 18,
"end": 37,
"text": "(Clark et al., 2019",
"ref_id": "BIBREF2"
},
{
"start": 80,
"end": 99,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 228,
"end": 247,
"text": "Clark et al. (2019)",
"ref_id": "BIBREF2"
},
{
"start": 418,
"end": 439,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 218,
"end": 225,
"text": "Table 5",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "A BoolQ Dataset Details",
"sec_num": null
},
{
"text": "Some questions in BoolQ are formulated in a certain context which might change given time. For example (10) which is asking about a movie released this year. As the dataset was released in 2019 the data could be collected in 2018 so then the answer is yes but if this question would be asked in 2015 or today (2020) the answer should be no. Another example (11) where a passage provides the information about United States citizens border crossing requirements but the question does not specify what kind of citizenship the person asking the question holds. In contrast with example (12) where the question and passage provide an unconditional outcome as a holder of the Schengen visa (information from question) can enter Montenegro for 30 days (information from the passage). So, in such cases like examples (10) and (11), the passage information is not enough to answer the questions unconditionally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Erroneous and Confusing Examples",
"sec_num": null
},
{
"text": "(10) Question: is there a star wars movie this year Passage: The first film was followed by two successful sequels, The Empire Strikes Back (1980) (13 -14) . The passages are related to the questions but specific information is missing the answer \"Yes\" cannot be confirmed by the passages. We observe, around 8% of questions we confusing or have certain assumptions. Passage: A cordon bleu or schnitzel cordon bleu is a dish of meat wrapped around cheese (or with cheese filling), then breaded and pan-fried or deep-fried. Veal or pork cordon bleu is made of veal or pork pounded thin and wrapped around a slice of ham and a slice of cheese, breaded, and then pan fried or baked. For chicken cordon bleu chicken breast is used instead of veal. Ham cordon bleu is ham stuffed with mushrooms and cheese. Answer: Yes",
"cite_spans": [
{
"start": 140,
"end": 146,
"text": "(1980)",
"ref_id": null
},
{
"start": 147,
"end": 155,
"text": "(13 -14)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Erroneous and Confusing Examples",
"sec_num": null
},
{
"text": "There are a few examples of errors (15 -17) from the dataset. The first error example is asking if shower gel can be used instead of shampoo in a negative form (\"is it bad to ...\") and the passage says that they are perfectly substitutable so the answer should be No (it is not bad). In the second example (16) the passage explicitly says India does not have a national language so the answer should be No. And in the third example (17) there is nothing that should make the reader believe there were any games outside of Russia, so the answer should be Yes. According to our analysis 6% of samples have the wrong answer tag. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Erroneous and Confusing Examples",
"sec_num": null
},
{
"text": "Note that the ensemble performs slightly better with an odd numbers of runs as only the samples with strictly more votes for the correct answer are considered to be answered correctly. This is a very strict evaluation. Alternatively, in the case of a tie, the majority answer (Yes) can be selected, but we aim to provide the evaluation with the maximum certainty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://cloud.google.com/apis/docs/ overview -l.v. 07/2020 5 https://en.wikipedia.org/wiki/ Freebase_(database) -l.v. 07/2020 6 http://verbs.colorado.edu/\u02dcmpalmer/ projects/verbnet.html -l.v. 07/2020 7 Based on https://github.com/commonsense/ conceptnet5/wiki/Relations -l.v. 07/2020. We found a few more like \"language\" or \"occupations\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available via TensorFlowHub (Cer et al., 2018): https: //www.tensorflow.org/hub/ -l.v. 07/20209 We used: correlation > 220; cosine similarity > 1.38.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://conceptnet.io/c/en/jar, https: //conceptnet.io/c/en/tom_hanks -An English term in ConceptNet 5.8, https://conceptnet.io/ c/en/meg_ryan -'meg ryan' is not a node in Con-ceptNet, https://conceptnet.io/c/en/dragon_ ball,-l.v. 07/2020",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are extremely gratefully to Massimo Nicosia from Google Research Switzerland without whom this work would not be possible. We thank the anonymous reviewers for their constructive and helpful feedback. Finally, a big thank you to Andrew Dunne, Lauren Cassidy, and Meghan Dowling.This research is partly supported by Science Foundation Ireland in the ADAPT Centre for Digital Content Technology, funded under the SFI Research Centres Programme (Grant 13/RC/2106) and the European Regional Development Fund.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Commonsense for generative multi-hop question answering tasks",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Yicheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4220--4230",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1454"
]
},
"num": null,
"urls": [],
"raw_text": "Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question an- swering tasks. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing, pages 4220-4230, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Universal sentence encoder for English",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [
"St."
],
"last": "John",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Strope",
"suffix": ""
},
{
"first": "Ray",
"middle": [],
"last": "Kurzweil",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "169--174",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2029"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing: System Demonstrations, pages 169-174, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BoolQ: Exploring the surprising difficulty of natural yes/no questions",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2924--2936",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1300"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924-2936, Min- neapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Natural questions: A benchmark for question answering research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "0",
"pages": "452--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Al- berti, Danielle Epstein, Illia Polosukhin, Jacob De- vlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question an- swering research. Transactions of the Association for Computational Linguistics, 7(0):452-466.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "KagNet: Knowledge-aware graph networks for commonsense reasoning",
"authors": [
{
"first": "Bill",
"middle": [
"Yuchen"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Xinyue",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jamin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2829--2839",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1282"
]
},
"num": null,
"urls": [],
"raw_text": "Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xi- ang Ren. 2019. KagNet: Knowledge-aware graph networks for commonsense reasoning. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2829-2839, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Conceptnet -a practical commonsense reasoning tool-kit",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Push",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2004,
"venue": "BT Technology Journal",
"volume": "22",
"issue": "4",
"pages": "211--226",
"other_ids": {
"DOI": [
"10.1023/B:BTTJ.0000047600.45421.6d"
]
},
"num": null,
"urls": [],
"raw_text": "Hugo Liu and Push Singh. 2004. Conceptnet -a prac- tical commonsense reasoning tool-kit. BT Technol- ogy Journal, 22(4):211-226.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. arXiv:1907.11692.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge",
"authors": [
{
"first": "Todor",
"middle": [],
"last": "Mihaylov",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "821--832",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1076"
]
},
"num": null,
"urls": [],
"raw_text": "Todor Mihaylov and Anette Frank. 2018. Knowledge- able reader: Enhancing cloze-style reading compre- hension with external commonsense knowledge. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 821-832, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2463--2473",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1250"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Machine reading comprehension using structural knowledge graph-aware network",
"authors": [
{
"first": "Delai",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yuanzhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xinwei",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xiangwen",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Wenbin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5896--5901",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1602"
]
},
"num": null,
"urls": [],
"raw_text": "Delai Qiu, Yuanzhe Zhang, Xinwei Feng, Xiangwen Liao, Wenbin Jiang, Yajuan Lyu, Kang Liu, and Jun Zhao. 2019. Machine reading comprehension us- ing structural knowledge graph-aware network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5896- 5901, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Generating logical forms from graph representations of text and entities",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Massey",
"suffix": ""
},
{
"first": "Angelica",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Piccinno",
"suffix": ""
},
{
"first": "Yasemin",
"middle": [],
"last": "Altun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "95--106",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1010"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, and Yasemin Altun. 2019. Generating log- ical forms from graph representations of text and entities. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 95-106, Florence, Italy. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Conceptnet 5.5: An open multilingual graph of general knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17",
"volume": "",
"issue": "",
"pages": "4444--4451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty- First AAAI Conference on Artificial Intelligence, AAAI'17, page 4444-4451. AAAI Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4149--4158",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1421"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A ques- tion answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Identifying supporting facts for multi-hop question answering with document graph networks",
"authors": [
{
"first": "Mokanarangan",
"middle": [],
"last": "Thayaparan",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Valentino",
"suffix": ""
},
{
"first": "Viktor",
"middle": [],
"last": "Schlegel",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)",
"volume": "",
"issue": "",
"pages": "42--51",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5306"
]
},
"num": null,
"urls": [],
"raw_text": "Mokanarangan Thayaparan, Marco Valentino, Viktor Schlegel, and Andr\u00e9 Freitas. 2019. Identifying supporting facts for multi-hop question answering with document graph networks. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 42-51, Hong Kong. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An empirical study of example forgetting during deep neural network learning",
"authors": [
{
"first": "Mariya",
"middle": [],
"last": "Toneva",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Tachet des Combes",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"J"
],
"last": "Gordon",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geof- frey J. Gordon. 2019. An empirical study of exam- ple forgetting during deep neural network learning. In International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "SuperGLUE: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "3266--3280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language un- derstanding systems. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 3266-3280. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dynamic integration of background knowl",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Weissenborn",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2017,
"venue": "neural NLU systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.02596"
]
},
"num": null,
"urls": [],
"raw_text": "Dirk Weissenborn, Tom\u00e1\u0161 Ko\u010disk\u00fd, and Chris Dyer. 2017. Dynamic integration of background knowl- edge in neural NLU systems. arXiv:1706.02596.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. arXiv:1910.03771.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning semantic textual similarity from conversations",
"authors": [
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Pilar",
"suffix": ""
},
{
"first": "Heming",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Yun-Hsuan",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Strope",
"suffix": ""
},
{
"first": "Ray",
"middle": [],
"last": "Kurzweil",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The Third Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "164--174",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3022"
]
},
"num": null,
"urls": [],
"raw_text": "Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learn- ing semantic textual similarity from conversations. In Proceedings of The Third Workshop on Repre- sentation Learning for NLP, pages 164-174, Mel- bourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Complex factoid question answering with a free-text knowledge graph",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2020,
"venue": "The Web Conference 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Zhao, Chenyan Xiong, Xin Qian, and Jordan Boyd-Graber. 2020. Complex factoid question an- swering with a free-text knowledge graph. In The Web Conference 2020 (formerly WWW conference).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Gold Answer: Yes Predicted Answer: No A number of works including Mihaylov and Frank (2018); Bauer et al. (2018); Lin et al. (2019); Qiu et al. (2019); Thayaparan et al. (2019); Talmor et al. (2019); Zhao et al. (2020) show successful usage of knowledge graphs (KGs) in several MRC settings.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Accuracy (G), stable accuracy (#), and majority voting accuracy (L) over up to 10 runs of (a) BERT and (b) RoBERTa baselines.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "An example of usage ConceptNet entities for answering a Boolean question.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "The GNN architecture based onShaw et al. (2019) without action selection and copy mechanism.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "Question: is northern ireland part of the great britain Passage: ... Great Britain is part of the United Kingdom of Great Britain and Northern Ireland ... Answer: No The questions in the BoolQ dataset are lowercased, and this may have affected the entity recognition.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF5": {
"text": "Question: is daisy the director of shield in the comics Passage: Daisy Johnson, ... The daughter of the supervillain Mister Hyde, she is a secret agent of the intelligence organization S.H.I.E.L.D. with the power to generate earthquakes. Answer: Yes (14) Question: is chicken cordon bleu made with blue cheese",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"text": "BoolQ errors anlysis by reasoning type.",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table><tr><td/><td>Base</td><td>Sent</td><td>Q&amp;P</td><td colspan=\"2\">Intersection Union</td></tr><tr><td/><td>line</td><td>Emb</td><td>Match</td><td/></tr><tr><td>Data Cov-</td><td>-</td><td colspan=\"2\">21.84 22.58</td><td>1.23</td><td>38.57</td></tr><tr><td>erage (%)</td><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"text": "AVG 81.26 81.23 80.86 81.23 81.46 80.72 Stable 73.84 73.19 72.61 73.25 73.74 72.40 Ensemble 81.62 81.89 81.37 81.92 82.05 81.10",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"text": "",
"html": null,
"num": null
},
"TABREF5": {
"content": "<table><tr><td/><td colspan=\"3\">No KG +ConceptNet +GKG</td></tr><tr><td>BERT large</td><td>78.09</td><td>-</td><td>-</td></tr><tr><td>GNN + BERT</td><td>77.37</td><td>77.4</td><td>76.80</td></tr><tr><td>+ Same MID</td><td>-</td><td>-</td><td>77.60</td></tr><tr><td colspan=\"2\">+ Relation Type -</td><td>-</td><td>77.75</td></tr><tr><td>+ Q&amp;AMatch</td><td>-</td><td>-</td><td>76.95</td></tr></table>",
"type_str": "table",
"text": "The first row shows the baseline BERT model with no KG data and the remaining rows show the BERT + GN N system with no KG data, with ConceptNet or with the Google Knowledge Graph. Adding KG information does not outperform the baseline result. None of the differences between the baseline are statistically significant.",
"html": null,
"num": null
},
"TABREF6": {
"content": "<table/>",
"type_str": "table",
"text": "GNN accuracy results on a development set using ConceptNet or Google KG (GKG).",
"html": null,
"num": null
},
"TABREF8": {
"content": "<table><tr><td>ber of new stable (wrt to baseline) correct (incorrect)</td></tr><tr><td>predictions, New Fluct. is the number of new questions</td></tr><tr><td>where answer fluctuates: Err\u00a1Corr (Corr\u00a1Err) is</td></tr><tr><td>the number of questions where answer was a stable er-</td></tr><tr><td>ror (correct), becoming correct (error) sometimes.</td></tr></table>",
"type_str": "table",
"text": "New Correct (Error) corresponds to the num-",
"html": null,
"num": null
},
"TABREF10": {
"content": "<table/>",
"type_str": "table",
"text": "The basic statistics for the BoolQ dataset.",
"html": null,
"num": null
},
"TABREF11": {
"content": "<table><tr><td colspan=\"2\">in 2015 with Answer: Yes (true)</td></tr><tr><td colspan=\"2\">(11) Question: Can I get into Canada with a</td></tr><tr><td>military ID?</td><td/></tr><tr><td>Passage:</td><td>(Title: American entry into</td></tr><tr><td colspan=\"2\">Canada by land) Canadian law requires</td></tr><tr><td colspan=\"2\">that all persons entering Canada must carry</td></tr><tr><td colspan=\"2\">proof of both citizenship and identity. A valid</td></tr><tr><td colspan=\"2\">U.S. passport or passport card is preferred,</td></tr><tr><td colspan=\"2\">although a birth certificate, naturalization</td></tr><tr><td colspan=\"2\">certificate, citizenship certificate, or another</td></tr><tr><td colspan=\"2\">document proving U.S. nationality, together</td></tr><tr><td colspan=\"2\">with a government-issued photo ID (such as a</td></tr><tr><td colspan=\"2\">driver's license) are acceptable to establish</td></tr><tr><td colspan=\"2\">identity and nationality.</td></tr><tr><td>Answer: Yes</td><td/></tr><tr><td colspan=\"2\">(12) Question: Can I go to Montenegro with a</td></tr><tr><td colspan=\"2\">Schengen visa?</td></tr><tr><td colspan=\"2\">Passage: Nationals of any country may visit</td></tr><tr><td colspan=\"2\">Montenegro without a visa for up to 30 days</td></tr><tr><td colspan=\"2\">if they hold a passport with visas issued by</td></tr><tr><td colspan=\"2\">Ireland, a Schengen Area member state, ...</td></tr><tr><td>Answer: Yes</td><td/></tr><tr><td colspan=\"2\">Some passages looked unrelated or do not con-</td></tr><tr><td colspan=\"2\">tain enough information to obtain the answer, e.g.</td></tr><tr><td>and Return of the Jedi (1983); ... A</td><td/></tr><tr><td>prequel trilogy was released between 1999</td><td/></tr><tr><td>and 2005, albeit to mixed reactions from</td><td/></tr><tr><td>critics and fans. A sequel trilogy concluding</td><td/></tr><tr><td>the main story of the nine-episode saga began</td><td/></tr><tr><td>13 https://super.gluebenchmark.com/ -l.v.</td><td/></tr><tr><td>07/2020</td><td/></tr></table>",
"type_str": "table",
"text": "The Force Awakens. ... Together with the theatrical spin-off films The Clone Wars (2008), Rogue One (2016) and Solo: A Star Wars Story (2018), Star Wars is the second highest-grossing film series ever.",
"html": null,
"num": null
},
"TABREF12": {
"content": "<table/>",
"type_str": "table",
"text": "Question: Is it bad to wash your hair with shower gel? Passage: ... This means that shower gels can also double as an effective and perfectly acceptable substitute to shampoo, even if they are not labelled as a hair and body wash. Answer: Yes Should be No (16) Question: Is Hindi is our national language of India? Passage: The Constitution of India designates the official language of the Government of India as Hindi written in the Devanagari script, as well as English. There is no national language as declared by the Constitution of India. Hindi is used for official purposes ... Answer: Yes Should be No (17) Question: are all world cup matches played in russia Passage: The 2018 FIFA World Cup was the 21st FIFA World Cup, an international football tournament contested by the men's national teams of the member associations of FIFA once every four years. It took place in Russia from 14 June to 15 July 2018. ... Answer: No Should be Yes",
"html": null,
"num": null
}
}
}
}