{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:12:57.481771Z" }, "title": "Do Transformers Dream of Inference, or Can Pretrained Generative Models Learn Implicit Inferential Rules?", "authors": [ { "first": "Zhengzhong", "middle": [], "last": "Liang", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Arizona", "location": { "addrLine": "1040 4th St", "postCode": "85721", "settlement": "Tucson", "region": "AZ", "country": "USA" } }, "email": "zhengzhongliang@email.arizona.edu" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Arizona", "location": { "addrLine": "1040 4th St", "postCode": "85721", "settlement": "Tucson", "region": "AZ", "country": "USA" } }, "email": "msurdeanu@email.arizona.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Large pretrained language models (LM) have been used successfully for multi-hop question answering. However, most of these directions are not interpretable, as they do not make the inference hops necessary to explain a candidate answer explicitly. In this work, we investigate the capability of a state-of-the-art transformer LM to generate explicit inference hops, i.e., to infer a new statement necessary to answer a question given some premise input statements. Our analysis shows that such LMs can generate new statements for some simple inference types, but performance remains poor for complex, real-world inference types such as those that require monotonicity, composition, and commonsense knowledge.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Large pretrained language models (LM) have been used successfully for multi-hop question answering. However, most of these directions are not interpretable, as they do not make the inference hops necessary to explain a candidate answer explicitly. 
In this work, we investigate the capability of a state-of-the-art transformer LM to generate explicit inference hops, i.e., to infer a new statement necessary to answer a question given some premise input statements. Our analysis shows that such LMs can generate new statements for some simple inference types, but performance remains poor for complex, real-world inference types such as those that require monotonicity, composition, and commonsense knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The emergence of large pretrained language models (LM) (Devlin et al., 2019; Liu et al., 2019 ) yielded significant progress in question answering (QA), including complex QA tasks that require multihop reasoning (Banerjee et al., 2019; Asai et al., 2019; Yadav et al., 2019) . Most of these stateof-the-art (SOTA) approaches address multi-hop reasoning tasks in a discriminative manner: they take the question, the candidate answer, and all the context available as the input, and produce a single score indicating the likelihood of the answer as justified by the provided context (an example is shown in Figure 1 ). 
However, why that context actually justifies the answer remains unclear to the human end user of the QA system.", "cite_spans": [ { "start": 55, "end": 76, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF3" }, { "start": 77, "end": 93, "text": "Liu et al., 2019", "ref_id": "BIBREF5" }, { "start": 212, "end": 235, "text": "(Banerjee et al., 2019;", "ref_id": "BIBREF1" }, { "start": 236, "end": 254, "text": "Asai et al., 2019;", "ref_id": "BIBREF0" }, { "start": 255, "end": 274, "text": "Yadav et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 605, "end": 613, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In contrast, most of us are likely to answer the question in Figure 1 by building a reasoning chain from the given facts (Mihaylov et al., 2018) . (The correct answer is option B; the science fact and the commonsense knowledge facts are needed to explain it. Usually, large LMs solve this problem by taking the question, the science fact, the commonsense knowledge facts, and each candidate answer as the input, and producing a single score indicating the probability of the candidate answer being justified by all of the inputs; why the facts explain the answer is normally not covered.) For example, such a chain starts by first combining \"metal is a thermal conductor\" and \"steel is made of metal\" to yield \"steel is a thermal conductor\". Next, combining \"steel is a thermal conductor\" and \"heat travels through a thermal conductor\" yields \"heat travels through steel\".
And, finally, \"heat travels through steel\" supports the correct answer that \"a steel spoon in a cafeteria would let the most heat travel through.\" Generating such reasoning chains can be crucial for the adoption of natural language processing applications such as QA in critical domains such as medicine or law.", "cite_spans": [ { "start": 349, "end": 372, "text": "(Mihaylov et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 61, "end": 69, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Motivated by this, in this work we investigate whether a state-of-the-art (SOTA) transformer-based language model is able to generate a valid intermediate statement given two premise statements on a natural language QA dataset, which is fundamental to generating such reasoning chains. Our results show that although the SOTA model investigated can handle some types of inferences well, there remain multiple types of inferences where the LM fails. 1 w/o hint w/ hint Perfect 31/87 43/87 Acceptable 11/87 13/87 Unacceptable 45/87 31/87 Table 1 : Statistics of the quality of the generated T5 statements on the dev set of QASC. The same randomly sampled 87 examples are manually evaluated for their quality, in both the \"without hint\" and \"with hint\" configurations.", "cite_spans": [], "ref_spans": [ { "start": 449, "end": 534, "text": "Perfect 31/87 43/87 Acceptable 11/87 13/87 Unacceptable 45/87 31/87 Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, several works have investigated whether deep learning (DL) language models (LM) are able to learn and use explicit and implicit rules expressed in natural language. (Sinha et al., 2019) build a synthetic dataset describing relationships between people; the language model must then predict the unstated relationships.
The problem can be summarized as follows: given that \"Mike is the child of Kate and Kate is the child of Tom\", the model needs to predict \"Tom is the grandparent of Mike\" by learning the implicit rule \"If X is the child of Y and Y is the child of Z, then Z is the grandparent of X\". Transformer networks have been shown to perform well on this task.", "cite_spans": [ { "start": 168, "end": 187, "text": "(Sinha et al., 2019", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Other works have analyzed whether DL language models are able to leverage explicit rules. Clark et al. (2020) generate a synthetic dataset consisting of facts and rules. The problem can be summarized as follows: given facts such as \"X is red\" and \"X is big\", as well as rules such as \"If X is red and big, then X is strong\", the LM trained on this data must be able to judge whether \"X is strong\" is true. They demonstrate that transformers can perform this task well, and are able to generalize to unseen lexical items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "However, all existing works investigate this problem in a discriminative manner: either a single score, a single token, or a single choice is produced as the output. In contrast, we conduct our work in a generative manner: the LM needs to generate a whole natural language statement as the output. We believe this task will eventually give the LM the ability to generate clear and complete explanations, which are necessary in multi-hop reasoning problems.
Further, we investigate the capability of transformers to generate inferential statements on a complex, real-world task in the science domain, which relies on much sparser data than other tasks previously investigated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this paper, we concentrate on a single-hop inference problem. That is, given the statements S 1 (A, B) and S 2 (B, C), the model needs to generate a valid, reasonable statement S 3 (A, C). Unlike reasoning tasks on structured knowledge bases or ConceptNet, where A, B, and C are entities, here A, B, and C can be any text in natural language: words, phrases, or clauses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3.1" }, { "text": "We used the QASC dataset (Khot et al., 2020) for this task. QASC contains approximately 10,000 questions in the science domain, where each answer is associated with two supporting facts (fact 1 and fact 2). These two supporting facts have tokens in common, which is necessary for our inference task that requires overlap between facts (through B). Importantly, for each answer QASC provides a combined fact that explains the answer, and which is directly inferred from the two supporting facts. The first two columns in Tables 2, 3, and 4 show a few examples of the supporting facts and the resulting combined fact. The forms of the combined facts can be very diverse due to the annotation process of QASC, where each annotator is first given fact 1, then finds an arbitrary fact 2 that overlaps with fact 1, and composes the combined fact, without other restrictions (Khot et al., 2020).
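The requirement that the two supporting facts share tokens (the bridge B of the single-hop inference) can be checked with a simple sketch; the helper name, the stopword list, and the word tokenization below are our illustrative assumptions, not part of QASC:

```python
import re

def shares_terms(fact1: str, fact2: str) -> bool:
    """True iff the two supporting facts have at least one content token in
    common, i.e., the bridge B needed for S1(A, B) + S2(B, C) -> S3(A, C)."""
    stop = {"a", "an", "the", "is", "are", "of", "to", "in", "and", "or"}

    def terms(text: str) -> set:
        # Lowercase word tokenization minus stopwords (a simplification).
        return set(re.findall(r"[a-z0-9]+", text.lower())) - stop

    return bool(terms(fact1) & terms(fact2))

# The two facts overlap through the bridge term "metal":
shares_terms("metal is a thermal conductor", "steel is made of metal")
```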
2 The task we investigate here is whether transformer-based LMs can infer the combined fact when provided with the two initial facts.", "cite_spans": [ { "start": 25, "end": 44, "text": "(Khot et al., 2020)", "ref_id": "BIBREF4" }, { "start": 900, "end": 919, "text": "(Khot et al., 2020)", "ref_id": "BIBREF4" }, { "start": 922, "end": 923, "text": "2", "ref_id": null } ], "ref_spans": [ { "start": 521, "end": 532, "text": "Tables 2, 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3.1" }, { "text": "We use the pre-trained Google T5 small model (Raffel et al., 2020) published by huggingface (Wolf et al., 2019), and fine-tune it on the QASC dataset. 3 We explore two input formats: fact 1 + fact 2 \u2192 combined fact: In this setting, T5 takes the two facts as input to generate the combined fact. The T5 input format is \"substitution statement 1: [fact 1] statement 2: [fact 2]\", where \"substitution\", \"statement 1:\", and \"statement 2:\" are user-defined keywords for the task.", "cite_spans": [ { "start": 92, "end": 111, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF10" }, { "start": 152, "end": 153, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.2" }, { "text": "Target Prediction Evaluation substitution statement 1: if weather is stormy then there is a greater chance of rain. statement 2: rain is also known as precipitation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "if weather is stormy then there is a greater chance of precipitation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "if weather is stormy then there is greater chance of precipitation. Table 3 : Comparison of T5 output in the \"without hint\" and \"with hint\" configurations on QASC.
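The input formats described above can be sketched as follows; this is a minimal illustration, where the helper name build_t5_input is ours and the exact whitespace between segments is an assumption based on the examples shown:

```python
from typing import Optional

def build_t5_input(fact1: str, fact2: str, hint: Optional[str] = None) -> str:
    """Assemble the seq2seq input string for the combined-fact task.

    "substitution", "statement 1:" and "statement 2:" are the user-defined
    task keywords; the optional "hint:" segment carries the lexical hints
    used in the second input format.
    """
    text = f"substitution statement 1: {fact1} statement 2: {fact2}"
    if hint is not None:
        text += f" hint: {hint}"
    return text

# Example drawn from Table 3 (without hint):
build_t5_input(
    "if weather is stormy then there is a greater chance of rain.",
    "rain is also known as precipitation.",
)
```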
fact 1 + fact 2 + lexical hints \u2192 combined fact: During our experiments, we noticed that sometimes multiple valid statements could be inferred from fact 1 and fact 2, which tended to confuse the LM. 4 To mitigate this issue, we added lexical hints to the model input, indicating which tokens would best be included in the generated statement. The terms in the hint are generated as (Q\u222aA)\u2229(F 1 \u222aF 2 ), where Q is the set of unique terms in the question, A is the set of unique terms in the answer, and F 1 and F 2 are the sets of unique terms in fact 1 and fact 2. 5 This is inspired by the fact that each question in QASC is derived from the gold combined fact, so that even when multiple valid statements may be generated from fact 1 and fact 2, paying extra attention to the terms in the question and the correct answer is likely to force the model to make predictions related to the gold combined fact.", "cite_spans": [ { "start": 363, "end": 364, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 68, "end": 75, "text": "Table 3", "ref_id": null }, { "start": 920, "end": 927, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "4 E.g., for the first and second row in Table 3 , \"one-celled animals make humans sick\" is a valid generation, but not perfect w.r.t. the target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "5 Thus, the text containing the lexical hints is simply a bag of words, rather than grammatically correct text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "For each configuration, we manually evaluated 100 generated statements against the corresponding gold combined fact on the dev set.
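The hint-term computation (Q ∪ A) ∩ (F1 ∪ F2) described above can be sketched as follows; the paper does not specify how terms are extracted, so the lowercase word tokenization here is an assumption, and the helper name is ours:

```python
import re

def lexical_hints(question: str, answer: str, fact1: str, fact2: str) -> set:
    """Hint terms = (Q ∪ A) ∩ (F1 ∪ F2): the unique terms shared between the
    question/answer and the two supporting facts, returned as an unordered
    bag of words (not grammatical text)."""
    def terms(text: str) -> set:
        # Simple lowercase word tokenization -- an assumption, since the
        # paper does not specify the tokenizer.
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    return (terms(question) | terms(answer)) & (terms(fact1) | terms(fact2))
```

For example, with an invented question paired with the Table 3 facts, the hint contains "stormy" and "precipitation" but not "rain", since "rain" appears only in the facts, not in the question or answer.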
6 All generations are categorized into three classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "3.3" }, { "text": "Perfect: The generated statement is (1) exactly the same as the gold combined fact, or (2) semantically the same as the gold combined fact but uses a different expression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "3.3" }, { "text": "Target Prediction Question Type substitution statement 1: skin color is a polygenic trait. statement 2: polygenic traits are the result of the interaction of several genes. hint: is genes of the result several skin color interaction. skin color is the result of the interaction of several genes. skin color is the result of the interaction of several genes. Instantiation substitution statement 1: if weather is stormy then there is a greater chance of rain. statement 2: rain is also known as precipitation. hint: stormy is greater weather there of a chance precipitation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "if weather is stormy then there is a greater chance of precipitation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "if weather is stormy then there is a greater chance of precipitation. 
Table 4 : Output of T5 categorized by inference type (w/ hint).", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "The generated statement is semantically valid, but its meaning is slightly different from the gold combined fact.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acceptable:", "sec_num": null }, { "text": "Unacceptable: The generated statement (1) contains contradictory information, (2) has severe grammatical issues, or (3) is missing essential content from the gold combined fact (e.g., contains information from only fact 1 or only fact 2). Table 1 shows the overall statistics gathered by our analysis. All in all, our analysis shows that this inferential task is far from solved, with the majority of the inferred statements not being perfect. In particular, in the w/o hints configuration, less than half of the generated statements are perfect. Adding lexical hints to the input generally boosts generation quality, but still leaves 51% of the inferences imperfect. A detailed analysis of the generated statements highlights that T5 performs well in certain situations, but not in others.
We categorize these situations below, discuss some possible solutions, and leave a more systematic analysis of why the model fails on some problems to future work.", "cite_spans": [], "ref_spans": [ { "start": 244, "end": 251, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Acceptable:", "sec_num": null }, { "text": "Below, \"well learned\" means that most predictions for that type of generation are evaluated as \"perfect\", and \"not well learned\" means that most predictions are evaluated as \"unacceptable\", by the criteria described in Section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Instantiation Here the input statements are S 1 (A, B) and IsA(B, C), i.e., C is an instantiation of a more general concept B. The target output is S 1 (A, C) (Table 4).", "cite_spans": [], "ref_spans": [ { "start": 159, "end": 168, "text": "(Table 4)", "ref_id": null } ], "eq_spans": [], "section": "Inference types well learned:", "sec_num": null }, { "text": "Equivalence Here the input statements are S 1 (A, B) and Equ(B, C), i.e., B is equivalent to C. The target output is S 1 (A, C) (Table 4).", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 137, "text": "(Table 4)", "ref_id": null } ], "eq_spans": [], "section": "Inference types well learned:", "sec_num": null }, { "text": "Inference types not well learned:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference types well learned:", "sec_num": null }, { "text": "Multiple possible statements to generate When the input statements are long and complex, there might be multiple valid statements that could be generated from the input (discussed in Section 3.2). In such cases, T5 tends to be confused. Adding lexical hints can alleviate this problem to some extent by forcing the model to pay extra attention to certain areas of the input, but problems remain.
First, even when adding the lexical hints, some generations are still not reasonable (Table 3). Second, accurately identifying the important fragments to pay attention to is itself a non-trivial problem. We believe this is an exciting area for future research. For example, some specialized architectures such as the pointer-generator network (See et al., 2017) might be capable of learning which parts should be copied or ignored.", "cite_spans": [ { "start": 728, "end": 745, "text": "(See et al., 2017", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 469, "end": 478, "text": "(Table 3)", "ref_id": null } ], "eq_spans": [], "section": "Inference types well learned:", "sec_num": null }, { "text": "Composition and summarization As shown in the third-to-last row of Table 4 , the new statement requires composing statements 1 and 2, and some summarization is also needed (i.e., \"absorption of nutrients\" \u2192 \"function\").", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 74, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Inference types well learned:", "sec_num": null }, { "text": "Dealing with quantifiers in natural language As shown in the second-to-last row of Table 4 , the new statement requires complex monotonicity reasoning and an understanding of quantifiers.", "cite_spans": [], "ref_spans": [ { "start": 83, "end": 90, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Inference types well learned:", "sec_num": null }, { "text": "Generating statements that comply with commonsense knowledge In several examples, the model generates statements that are grammatically correct but unreasonable with respect to commonsense knowledge. In particular, many of these inferences require commonsense knowledge to generate new text, and rephrasing to make the new statement reasonable.
For example, in the last row of Table 4 , \"death can be treated with dialysis\" is grammatically correct but unreasonable.", "cite_spans": [], "ref_spans": [ { "start": 370, "end": 377, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Inference types well learned:", "sec_num": null }, { "text": "There might be multiple reasons why some types of generations are not well learned. For instance, it could be that the biases learned by T5 during pre-training prevent it from learning meaningful patterns when fine-tuning on a downstream task with relatively few training samples (e.g., the QASC dataset used in this paper has only about 8,000 training examples). Alternatively, it is possible that the patterns to be learned in this downstream task are too complex to be learned from the small amount of training data available. We leave a more systematic analysis in this direction to future studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference types well learned:", "sec_num": null }, { "text": "In this work we investigate how well a state-of-the-art transformer language model can generate a valid statement inferred from two given statements. We manually evaluated two fine-tuned T5 models (Raffel et al., 2020) with slightly different inputs (i.e., with and without contextual information) on the Question Answering via Sentence Composition dataset (Khot et al., 2020). Our analysis indicates that the two models can generate good-quality statements when the inference relies solely on instantiation or equivalence.
However, the models perform poorly on more complex inferences, such as: (a) cases where multiple valid statements can be generated from the premises; (b) inferences that require non-trivial monotonicity reasoning (especially with quantifiers in natural language); (c) inferences that need composition and summarization; and (d) statements that require rephrasing based on background commonsense knowledge.", "cite_spans": [ { "start": 196, "end": 217, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF7" }, { "start": 356, "end": 375, "text": "(Khot et al., 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The code and data for our analysis can be found at https://github.com/clulab/releases/tree/master/emnlp2020-generative-nli.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that the two supporting facts and the gold combined fact of each question in QASC are annotated by the creators of the QASC dataset, not by the authors of this paper. 3 We used the Adam optimizer with a learning rate of 1e-4, as recommended in the tutorial.
The training stops when the evaluation loss starts to increase; we allowed a maximum of 10 epochs of training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "13 data points had issues in the raw data and were removed, leaving 87 data points for analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning to retrieve reasoning paths over wikipedia graph for question answering", "authors": [ { "first": "Akari", "middle": [], "last": "Asai", "suffix": "" }, { "first": "Kazuma", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2019. Learning to retrieve reasoning paths over wikipedia graph for question answering.
In International Conference on Learning Representations.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Careful selection of knowledge to solve open book question answering", "authors": [ { "first": "Pratyay", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Kuntal", "middle": [], "last": "Kumar Pal", "suffix": "" }, { "first": "Arindam", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Chitta", "middle": [], "last": "Baral", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6120--6129", "other_ids": { "DOI": [ "10.18653/v1/P19-1615" ] }, "num": null, "urls": [], "raw_text": "Pratyay Banerjee, Kuntal Kumar Pal, Arindam Mitra, and Chitta Baral. 2019. Careful selection of knowledge to solve open book question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6120-6129, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Transformers as soft reasoners over language", "authors": [ { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.05867" ] }, "num": null, "urls": [], "raw_text": "Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2020. Transformers as soft reasoners over language.
arXiv preprint arXiv:2002.05867.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Qasc: A dataset for question answering via sentence composition", "authors": [ { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Guerquin", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "8082--8090", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. Qasc: A dataset for question answering via sentence composition.
In AAAI, pages 8082-8090.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Can a suit of armor conduct electricity? a new dataset for open book question answering", "authors": [ { "first": "Todor", "middle": [], "last": "Mihaylov", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2381--2391", "other_ids": { "DOI": [ "10.18653/v1/D18-1260" ] }, "num": null, "urls": [], "raw_text": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018.
Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381-2391, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "140", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer.
Journal of Machine Learning Research, 21(140):1-67.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Get to the point: Summarization with pointer-generator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1073--1083", "other_ids": { "DOI": [ "10.18653/v1/P17-1099" ] }, "num": null, "urls": [], "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "CLUTRR: A diagnostic benchmark for inductive reasoning from text", "authors": [ { "first": "Koustuv", "middle": [], "last": "Sinha", "suffix": "" }, { "first": "Shagun", "middle": [], "last": "Sodhani", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" }, { "first": "William", "middle": [ "L" ], "last": "Hamilton", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4506--4515", "other_ids": { "DOI": [ "10.18653/v1/D19-1458" ] }, "num": null, "urls": [], "raw_text": "Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L. Hamilton. 2019.
CLUTRR: A diagnostic benchmark for inductive reasoning from text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4506-4515, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Rémi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing.
ArXiv, abs/1910.03771.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Quick and (not so) dirty: Unsupervised selection of justification sentences for multi-hop question answering", "authors": [ { "first": "Vikas", "middle": [], "last": "Yadav", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2578--2589", "other_ids": { "DOI": [ "10.18653/v1/D19-1260" ] }, "num": null, "urls": [], "raw_text": "Vikas Yadav, Steven Bethard, and Mihai Surdeanu. 2019. Quick and (not so) dirty: Unsupervised selection of justification sentences for multi-hop question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2578-2589, Hong Kong, China. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "An example of question and candidate answers from OpenbookQA", "type_str": "figure" }, "TABREF0": { "text": "Perfect, exactly the same substitution statement 1: lightning can cause a forest fire. statement 2: forest fires are examples of wildfires.", "content": "
Input | Target | Prediction | Evaluation
substitution statement 1: lightning can cause a forest fire. statement 2: forest fires are examples of wildfires. | wildfires are caused by lightning. | lightning can cause wildfires. | Perfect, semantically the same
substitution statement 1: whiskers are used for feeling by animals. statement 2: whiskers are the stiff hairs on the dog's face. | stiff hairs on the face are used for feeling by animals. | stiff hairs on the dog's face are used for feeling by animals. | Acceptable
substitution statement 1: lymph is a fluid that leaks out of capillaries into spaces between cells. statement 2: bacteria invade lymph nodes, which swell, creating the buboes. | swelling nodes leak fluid out of capillaries into spaces between cells. | bacteria can swell. | Unacceptable, unreasonable or contradicting information
substitution statement 1: absorption of nutrients occurs mainly in the small intestine. statement 2: diarrhea reduces the absorption of all nutrients. | diarrhea reduces the function that occurs mainly in the small intestine. | diarrhea reduces the absorption of nutrients. | Unacceptable, missing essential elements
", "html": null, "num": null, "type_str": "table" }, "TABREF1": { "text": "Output of T5 on QASC in the \"without hint\" configuration.", "content": "
Input | Target | Prediction | Evaluation
substitution statement 1: protozoa make humans sick when they become human parasites. statement 2: exhibit one-celled animals are called protozoa. | one-celled animals make humans sick when they become parasites. | one-celled animals make humans sick. | w/o hint
substitution statement 1: protozoa make humans sick when they become human parasites. statement 2: exhibit one-celled animals are called protozoa. hint: animals when become one-celled humans make parasites they sick. | one-celled animals make humans sick when they become parasites. | one-celled animals make humans sick when they become human parasites. | w/ hint, statement improved
substitution statement 1: mutualism is a symbiotic relationship in which both species benefit. statement 2: domestication of animals is an example of a symbiotic relationship. | domestication of animals is an example of mutualism. | domestication of animals is an example of mutualism. | w/o hint
substitution statement 1: mutualism is a symbiotic relationship in which both species benefit. statement 2: domestication of animals is an example of a symbiotic relationship. hint: is animals mutualism of domestication example an. | domestication of animals is an example of mutualism. | mutualism is an example of domestication of animals. | w/ hint, statement harmed
", "html": null, "num": null, "type_str": "table" } } } }