{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:10:59.808753Z" }, "title": "MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models", "authors": [ { "first": "Mandy", "middle": [], "last": "Guo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Research Mountain View", "location": { "region": "CA", "country": "USA" } }, "email": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Research Mountain View", "location": { "region": "CA", "country": "USA" } }, "email": "yinfeiy@google.com" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Research Mountain View", "location": { "region": "CA", "country": "USA" } }, "email": "" }, { "first": "Qinlan", "middle": [], "last": "Shen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Research Mountain View", "location": { "region": "CA", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Retrieval question answering (ReQA) is the task of retrieving a sentence-level answer to a question from an open corpus (Ahmad et al., 2019). This dataset paper presents Multi-ReQA, a new multi-domain ReQA evaluation suite composed of eight retrieval QA tasks drawn from publicly available QA datasets 1. We explore systematic retrieval based evaluation and transfer learning across domains over these datasets using a number of strong baselines including two supervised neural models, based on fine-tuning BERT and USE-QA models respectively, as well as a surprisingly effective information retrieval baseline, BM25. Five of these tasks contain both training and test data, while three contain test data only. Performing cross training on the five tasks with training data shows that while a general model covering all domains is achievable, the best performance is often obtained by training exclusively on in-domain data.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Retrieval question answering (ReQA) is the task of retrieving a sentence-level answer to a question from an open corpus (Ahmad et al., 2019). This dataset paper presents Multi-ReQA, a new multi-domain ReQA evaluation suite composed of eight retrieval QA tasks drawn from publicly available QA datasets 1. We explore systematic retrieval based evaluation and transfer learning across domains over these datasets using a number of strong baselines including two supervised neural models, based on fine-tuning BERT and USE-QA models respectively, as well as a surprisingly effective information retrieval baseline, BM25. Five of these tasks contain both training and test data, while three contain test data only. 
Performing cross training on the five tasks with training data shows that while a general model covering all domains is achievable, the best performance is often obtained by training exclusively on in-domain data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Retrieval-based question answering (QA) investigates the problem of finding answers to questions from an open corpus (Surdeanu et al., 2008; Yang et al., 2015; Chen et al., 2017; Ahmad et al., 2019; Chang et al., 2020; Ma et al., 2020) . There is a growing interest in building scalable end-to-end question answering systems for large scale retrieval (Ahmad et al., 2019; Roy et al., 2020) . Retrieval question answering (ReQA) (Ahmad et al., 2019) , illustrated in Table 1 , defines the task as directly retrieving an answer sentence from a corpus. 2 Motivated by real applications such as Google's Talk to Books 3 , where sentencelevel answers from books are retrieved to answer users' queries, ReQA is different from traditional machine reading for question answering or \"reading comprehension\" which aims to extract a short answer span from a given passage. Rather than just identifying answers within a short preselected passage that is provided to the model effectively by an oracle, retrieving sentence-level answers from a large pool of candidates directly addresses the realworld problem of searching for answers within a corpus. Sentences retrieved as answers in this manner can be used directly to answer questions. Alternatively, retrieved sentences, as well as possibly the passages that contains them, can be provided to a traditional Open Domain QA model (Chen et al., 2017; Karpukhin et al., 2020) .", "cite_spans": [ { "start": 117, "end": 140, "text": "(Surdeanu et al., 2008;", "ref_id": "BIBREF30" }, { "start": 141, "end": 159, "text": "Yang et al., 2015;", "ref_id": "BIBREF37" }, { "start": 160, "end": 178, "text": "Chen et al., 2017;", "ref_id": "BIBREF3" }, { "start": 179, "end": 198, "text": "Ahmad et al., 2019;", "ref_id": "BIBREF0" }, { "start": 199, "end": 218, "text": "Chang et al., 2020;", "ref_id": "BIBREF2" }, { "start": 219, "end": 235, "text": "Ma et al., 2020)", "ref_id": "BIBREF21" }, { "start": 351, "end": 371, "text": "(Ahmad et al., 2019;", "ref_id": "BIBREF0" }, { "start": 372, "end": 389, "text": "Roy et al., 2020)", "ref_id": "BIBREF27" }, { "start": 428, "end": 448, "text": "(Ahmad et al., 2019)", "ref_id": "BIBREF0" }, { "start": 1370, "end": 1389, "text": "(Chen et al., 2017;", "ref_id": "BIBREF3" }, { "start": 1390, "end": 1413, "text": "Karpukhin et al., 2020)", "ref_id": null } ], "ref_spans": [ { "start": 466, "end": 473, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent research has shown promising results on developing neural models for retrieval tasks including ReQA, MS MARCO, and the retrieval part of open domain question QA (Roy et al., 2020; Karpukhin et al., 2020; Xiong et al., 2020; Luan et al., 2020) . One challenge of employing neural models is that it usually requires a large amount of training data. 
While it is possible to get such data from a general domain, it may hard to get similar data for specialized domains, which is a common span (Chen et al., 2017; Lee et al., Chicken Run is a 2000 stop-motion animated comedy film produced by the British studio Aardman Animations.", "cite_spans": [ { "start": 168, "end": 186, "text": "(Roy et al., 2020;", "ref_id": "BIBREF27" }, { "start": 187, "end": 210, "text": "Karpukhin et al., 2020;", "ref_id": null }, { "start": 211, "end": 230, "text": "Xiong et al., 2020;", "ref_id": "BIBREF36" }, { "start": 231, "end": 249, "text": "Luan et al., 2020)", "ref_id": "BIBREF20" }, { "start": 495, "end": 514, "text": "(Chen et al., 2017;", "ref_id": "BIBREF3" }, { "start": 515, "end": 526, "text": "Lee et al.,", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "when was the last episode of vampire diaries aired", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NQ", "sec_num": null }, { "text": "The series ran from September 10, 2009 to March 10, 2017 on The CW.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NQ", "sec_num": null }, { "text": "what decade did house music hit the mainstream in the us?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SQuAD", "sec_num": null }, { "text": "The early 1990s additionally saw the rise in mainstream US popularity for house music.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SQuAD", "sec_num": null }, { "text": "What chromosome is affected in Turner's syndrome?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BioASQ", "sec_num": null }, { "text": "The origin of sSMC of Turner syndrome with 45, X/46, X, + mar karyotype was almost all from sex chromosomes, and rarely from autosomes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BioASQ", "sec_num": null }, { "text": "Which year is Bird Girl and the Man Who Followed the Sun released?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "Bird Girl and the Man Who Followed the Sun is a 1996 novel by Velma Wallis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "TextbookQA which nervous system disease causes seizures?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "Epilepsy is a disease that causes seizures. situation with examples including personal search over private repositories and search within enterprise environments (Hawking, 2004; Chirita et al., 2005) . It is unknown how well a general domain model can perform on domain specific QA tasks or even the extent of transfer possible across different specialized domains. In order to further investigate these questions within the context of the ReQA task, we propose a new common evaluation suite consisting of eight new datasets extracted from existing QA datasets. Five in-domain tasks include training and test data, while three out-of-domain tasks contain only test data. We provide cross domain baselines for neural and non-neural retrieval methods. Our baseline experiments use two competitive neural models, based on BERT and USE-QA (Yang et al., 2019) , respectively, and BM25, a strong information retrieval baseline. BM25 performs surprisingly well on many retrieval question answering tasks, achieving the best performance on two of five in-domain tasks and all three out-ofdomain tasks. 
Neural models achieve the highest performance on three of five in-domain tasks, outperforming BM25 by a wide margin on tasks with less token overlap between question and answer. Comparing general models trained on a mixture of QA training sets to specialized in-domain models trained on a single QA task reveals that models trained jointly on multiple datasets rarely outperform those trained on only in-domain data.", "cite_spans": [ { "start": 162, "end": 177, "text": "(Hawking, 2004;", "ref_id": "BIBREF12" }, { "start": 178, "end": 199, "text": "Chirita et al., 2005)", "ref_id": "BIBREF4" }, { "start": 835, "end": 854, "text": "(Yang et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "The main contribution of this paper is summa-rized as follows: 1) A new evaluation suit derived from existing QA tasks for measuring retrieval question answering performance in multiple domains; 2) Establish the strong baselines with key word based retrieval approach and neural retrieval models. 3) Exploring the domain transferability and limitation for existing retrieval models. Extensive experiments show that BM25 remains a strong baseline for all domains. While a general neural model covering all domains is achievable, the best performing neural model is often obtained by training exclusively on in-domain data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "2 Retrieval QA (ReQA)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "ReQA formalizes the retrieval-based QA task as the identification of a sentence in-context that answers a provided question (Ahmad et al., 2019) . Retrieval QA models are evaluated using Precision at 1 (P@1) and Mean Reciprocal Rank (MRR). The P@1 score tests whether the true answer sentence appears as the top-ranked candidate 4 . MRR, introduced for the evaluation of retrieval based QA systems (Voorhees, 2001; Radev et al., 2002) , is", "cite_spans": [ { "start": 124, "end": 144, "text": "(Ahmad et al., 2019)", "ref_id": "BIBREF0" }, { "start": 398, "end": 414, "text": "(Voorhees, 2001;", "ref_id": "BIBREF33" }, { "start": 415, "end": 434, "text": "Radev et al., 2002)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "calculated as MRR = 1 N N i=1 1 rank i ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "where N is the total number of questions, and rank i is the rank of the first correct answer for the ith question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "3 Multi-domain ReQA (MultiReQA)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "The multi-domain ReQA (MultiReQA) test suite is composed of select datasets drawn from the MRQA shared task (Fisch et al., 2019a) . 5 We follow the training, in-domain test, out-of-domain test splits defined in MRQA. 
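As a purely illustrative aside (not code from this paper), the two metrics defined in Section 2 can be computed from the rank of the first correct answer for each question; the helper names below are hypothetical:

def precision_at_1(ranks):
    # P@1: fraction of questions whose first correct answer is ranked 1 (ranks are 1-indexed).
    return sum(1 for r in ranks if r == 1) / len(ranks)

def mean_reciprocal_rank(ranks):
    # MRR = (1/N) * sum_i 1/rank_i over the N questions.
    return sum(1.0 / r for r in ranks) / len(ranks)

# Example: ranks of the first correct answer for four questions.
ranks = [1, 3, 2, 1]
precision_at_1(ranks)        # 0.50
mean_reciprocal_rank(ranks)  # (1 + 1/3 + 1/2 + 1) / 4 = 0.708...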
The individual datasets are described below:", "cite_spans": [ { "start": 108, "end": 129, "text": "(Fisch et al., 2019a)", "ref_id": "BIBREF8" }, { "start": 132, "end": 133, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "SearchQA Jeopardy question-answer pairs augmented with text snippets retrieved by Google (Dunn et al., 2017) .", "cite_spans": [ { "start": 89, "end": 108, "text": "(Dunn et al., 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "TriviaQA Trivia enthusiasts authored questionanswer pairs. Answers are drawn from Wikipedia and Bing web search results, excluding trivia websites (Joshi et al., 2017b) .", "cite_spans": [ { "start": 147, "end": 168, "text": "(Joshi et al., 2017b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "HotpotQA Wikipedia question-answer pairs. This dataset differs from the others in that the questions require reasoning over multiple supporting documents . The annotators generate the questions knowing the answers and the supporting contexts. SQuAD 1.1 Wikipedia question-answer pairs (Rajpurkar et al., 2016a) . Given the supporting contexts from Wikipedia, the annotators were asked to write questions such that the answers could be found in the contexts. Moreover, many of the questions are directly formed from parts of the supporting contexts.", "cite_spans": [ { "start": 285, "end": 310, "text": "(Rajpurkar et al., 2016a)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "NaturalQuestions (NQ) Questions are real queries issued by multiple users to Google search that retrieve a Wikipedia page in the top five search results. Answer text is drawn from the search results . We removed the duplicate question-answer pairs in the in-domain test split, since during the original dataset construction, multiple raters were asked to select answers from the paragraphs. Unlike ReQA (Ahmad et al., 2019), we did not limit the questions and candidates to be only within the HTML paragraph block, and the candidates could contain lists and tables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "BioASQ Bio-medical question-answer pairs with answers annotated by domain experts and drawn from research articles (Tsatsaronis et al., 2015) .", "cite_spans": [ { "start": 115, "end": 141, "text": "(Tsatsaronis et al., 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "RelationExtraction (R.E.) Entity relation question-answer pairs, created by slot filling using the WikiReading dataset (Ahmad et al., 2019) .", "cite_spans": [ { "start": 119, "end": 139, "text": "(Ahmad et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "TextbookQA Multi-modal question-answer pairs taken from middle school science curricula (Kembhavi et al., 2017) . In this paper, we only consider the text aspect of this task as defined by the original MRQA shared task. Table 2 provides example question-answer sentence pairs. Datasets are converted from a span identification task to sentence-level retrieval. The questions from the original data are used without modification. Supporting documents are split into sentences using NLTK. 6 All resulting sentences become retrieval candidates. 
Answer spans identify sentences containing the correct answers. Spans covering multiple sentence are excluded. 7 Table 3 provides statistics on the number of training set pairs and the number of questions, candidates and average answers per question in the evaluation data. Table 4 shows the average length of word tokens and degree of token overlap. SearchQA and HotpotQA have supporting documents split by [DOC]/[PAR] tags, so they have comparatively shorter context length. TriviaQA has much longer context length because all supporting documents were tokenized as one due to lack of clear division among special tags in the dataset. NaturalQuestions contain lists and tables that bring up the average answer length. SearchQA and SQuAD have high degree of question/answer overlap because the supporting documents in SearchQA are retrieved by search engine, and SQuAD questions are written with advance knowledge of the answers and supporting contexts. However, even though HotpotQA questions are also written with the knowledge of the answers and contexts, the degree of overlap is quite low likely due to the inclusion of multi-document inference. 6 As the datasets SearchQA, TriviaQA and HotpotQA contain special tags [DOC], [PAR] , [SEP] , and [TLE], we perform dataset-specific pre-processing to handle context splitting and tag removal. TriviaQA has [DOC] [TLE] [PAR] tags, but with no clear divisions to mark where the span of each kind of tags ends. We remove all the tags, and tokenize the article as if it does not have special tags. SearchQA uses [DOC] to separate the supporting snippets, [TLE] to mark the start of title, and [PAR] to mark start of the snippet content. We treat contents between two [DOC] tags as individual context. We then use NLTK to split the sentences within each context. The contents between [TLE] and [PAR] are used as a title feature. If the answer appears in the title feature, we do not add it as a positive answer. There are about 500 examples where the answer span is only in the title span, and we remove the corresponding questions. We follow the same procedure for HotpotQA, which uses [PAR] to separate supporting documents, and [SEP] to separate title and document content.", "cite_spans": [ { "start": 88, "end": 111, "text": "(Kembhavi et al., 2017)", "ref_id": "BIBREF17" }, { "start": 1694, "end": 1695, "text": "6", "ref_id": null }, { "start": 1772, "end": 1777, "text": "[PAR]", "ref_id": null }, { "start": 1780, "end": 1785, "text": "[SEP]", "ref_id": null } ], "ref_spans": [ { "start": 220, "end": 227, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 655, "end": 662, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 816, "end": 823, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "7 This is typically due to sentence splitting errors by NLTK. 
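To make the span-to-sentence conversion concrete, the following is a minimal sketch under simplifying assumptions (not the released preprocessing code) of turning one supporting document and a character-level answer span into sentence-level retrieval candidates with NLTK. It assumes the punkt sentence tokenizer is installed and that the answer span does not cross a sentence boundary, since such spans are excluded:

import nltk

def build_candidates(context, answer_start, answer_end):
    # Split the supporting document into candidate sentences and return
    # the sentence containing the answer span as the positive answer.
    candidates, positive, offset = [], None, 0
    for sent in nltk.sent_tokenize(context):
        start = context.find(sent, offset)
        end = start + len(sent)
        offset = end
        candidates.append(sent)
        if start <= answer_start and answer_end <= end:
            positive = sent
    return candidates, positive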
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "R.E.", "sec_num": null }, { "text": "To establish strong baselines for the MultiReQA test suite, we use two neural models, based on BERT and USE-QA (Yang et al., 2019) , respectively, as well as an well established term-based information retrieval baseline, BM25.", "cite_spans": [ { "start": 111, "end": 130, "text": "(Yang et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4" }, { "text": "BERT dual encoders are used for retrieval tasks like translation retrieval (Feng et al., 2020) and QA passage retrieval (Roy et al., 2020; Karpukhin et al., 2020) . We explore a BERT dual encoder as our first neural baseline, using the BERT BASE model, 8 due to memory constraints. 9", "cite_spans": [ { "start": 75, "end": 94, "text": "(Feng et al., 2020)", "ref_id": "BIBREF7" }, { "start": 120, "end": 138, "text": "(Roy et al., 2020;", "ref_id": "BIBREF27" }, { "start": 139, "end": 162, "text": "Karpukhin et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "BERT", "sec_num": "4.1" }, { "text": "Questions and answers are encoded using two separate towers with tied model weights. The question is fed into one tower and we take the embedding output of the CLS token as the question encoding. The answer text and context are concatenated as a long sequence, using segment IDs to separate them. The concatenated input is fed into the other tower. As with the question encoder, we take the CLS embedding as the answer encoding. To distinguish questions and answers, we add an additional input type embedding to each input token. 10 The final embeddings are l 2 normalized.", "cite_spans": [ { "start": 530, "end": 532, "text": "10", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "BERT", "sec_num": "4.1" }, { "text": "Following Ahmad et al. 2019, we employ the Universal Sentence Encoder QA (USE-QA) (Yang et al., 2019) 11 as another neural baseline. USE-QA is a multilingual QA retrieval model pretrained on billions of examples from web-crawled question answering corpora. 12 USE-QA encodes the question and answer separately using a transformer (Vaswani et al., 2017) based dual encoder architecture. The question embedding is obtained by average pooling over all token positions in the final transformer block follwed by fully-connected network. Answers and their context are encoded using a transformer for the answer text and a deep averaging network (DAN) (Iyyer et al., 2015) for context. Preliminary answer vectors are computed using average pooling over positions. The answer vector is then concatenated with the DAN based context vector and fed to a fully-connected network to compute the final joint representation.", "cite_spans": [ { "start": 82, "end": 101, "text": "(Yang et al., 2019)", "ref_id": "BIBREF0" }, { "start": 330, "end": 352, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF32" }, { "start": 645, "end": 665, "text": "(Iyyer et al., 2015)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Universal Sentence Encoder QA", "sec_num": "4.2" }, { "text": "Term frequency inverse document frequency (TF-IDF) based methods remain the dominant method for document retrieval, with the \"Best Matching 25\" (BM25) family of ranking functions providing a well established baseline (Robertson and Zaragoza, 2009) . 
In previous work on open domain question answering, BM25 has been used to retrieve evidetails of dual encoder training with negative sampling, see Gillick et al. (2018) and Guo et al. (2018) .", "cite_spans": [ { "start": 217, "end": 247, "text": "(Robertson and Zaragoza, 2009)", "ref_id": "BIBREF26" }, { "start": 397, "end": 418, "text": "Gillick et al. (2018)", "ref_id": "BIBREF10" }, { "start": 423, "end": 440, "text": "Guo et al. (2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "BM25", "sec_num": "4.3" }, { "text": "10 Note that we switch the final activation layer of the BERT CLS token from tanh to gelu.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BM25", "sec_num": "4.3" }, { "text": "11 https://tfhub.dev/google/universal-sentence-encodermultilingual-qa/1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BM25", "sec_num": "4.3" }, { "text": "12 USE-QA uses a 6 layer transformer with 8 attention heads, a hidden size of 512 and a filter size of 2048. The context DAN encoder uses hidden sizes [320, 320, 512, 512] with residual connections. The feed-forward networks for question and answer both use hidden sizes [320, 512] , so the final dimension of the encodings is 512. dence text, and has been shown to be a particularly strong baseline on tasks where the question is written with advance knowledge of the answer .", "cite_spans": [ { "start": 151, "end": 156, "text": "[320,", "ref_id": null }, { "start": 157, "end": 161, "text": "320,", "ref_id": null }, { "start": 162, "end": 166, "text": "512,", "ref_id": null }, { "start": 167, "end": 171, "text": "512]", "ref_id": null }, { "start": 271, "end": 276, "text": "[320,", "ref_id": null }, { "start": 277, "end": 281, "text": "512]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "BM25", "sec_num": "4.3" }, { "text": "The BM25 score of document D given query Q which contains words q 1 , ..., q n is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BM25", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "n i=1 IDF(qi) \u2022 f (qi, D) \u2022 (k1 + 1) f (qi, D) + k1 \u2022 (1 \u2212 b + b \u2022 |D| avgdl )", "eq_num": "(1)" } ], "section": "BM25", "sec_num": "4.3" }, { "text": "where f (q i , D) is q i 's term frequency in the document, |D| is the length of the document in words, and avgdl is the average document length across all documents. Scalars k 1 and b are free parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BM25", "sec_num": "4.3" }, { "text": "We concatenate the answer sentence and context as the document when applying BM25 for answer retrieval. 13", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BM25", "sec_num": "4.3" }, { "text": "5.1 Fine-tuning and Configurations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We use the BM25 implementation in the Gensim library (\u0158eh\u016f\u0159ek and Sojka, 2010) with default k 1 and b settings. Inverse document frequency is calculated for each constructed dataset independently. We deploy two different tokenization methods for BM25: NLTK (Bird et al., 2009 ) and a WordPiece model (wpm) (Wu et al., 2016) following the BERT implementation. 14 The NLTK tokenizer does not normalize text, while the Word-Piece model does by default. 
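As a concrete reference point, Equation (1) can be implemented directly as in the sketch below. This is an illustrative, self-contained implementation rather than the Gensim code referenced above; the IDF smoothing follows one common convention, and k1 = 1.5, b = 0.75 are typical default values that may differ from Gensim's. The tokenizer is left to the caller, so either NLTK word tokens or WordPiece tokens can be supplied:

import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    # Score every tokenized candidate document against a tokenized query
    # with Okapi BM25 (Equation 1). docs_tokens is a list of token lists.
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    df = Counter(t for d in docs_tokens for t in set(d))  # document frequency per term
    idf = {t: math.log(1 + (N - n + 0.5) / (n + 0.5)) for t, n in df.items()}
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for q in query_tokens:
            if q in tf:
                s += idf[q] * tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores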
Our results in Table 5 for BM25 word use NLTK without normalization, while BM25 wpm uses wpm with normalization. 15", "cite_spans": [ { "start": 257, "end": 275, "text": "(Bird et al., 2009", "ref_id": "BIBREF1" }, { "start": 306, "end": 323, "text": "(Wu et al., 2016)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 465, "end": 472, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "BM25", "sec_num": null }, { "text": "The USE-QA model is already pretrained specifically for retrieval QA tasks. We first evaluate the default model without any dataset specific fine-tuning. We further fine-tune USE-QA model with a discriminative ranking objective (Yang et al., 2019) on our training sets: 16 q,a) \u0101\u2208A e \u03c6(q,\u0101)", "cite_spans": [ { "start": 228, "end": 247, "text": "(Yang et al., 2019)", "ref_id": "BIBREF0" }, { "start": 273, "end": 277, "text": "q,a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "USE-QA", "sec_num": null }, { "text": "P (a | q) = e \u03c6(", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "USE-QA", "sec_num": null }, { "text": "Above, q is the question, a is the correct answer, A is all answers in the same batch, which serve as 13 The answer sentence is included in the context, so it appears twice in the constructed documents. This allows multiple answers that share the same context to still receive unique scores.", "cite_spans": [ { "start": 102, "end": 104, "text": "13", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "USE-QA", "sec_num": null }, { "text": "14 The wpm vocab is from BERTBASE. 15 We also experimented on SQuAD with removing normalization from wpm, and found that wpm still outperforms NLTK.", "cite_spans": [ { "start": 35, "end": 37, "text": "15", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "USE-QA", "sec_num": null }, { "text": "16 Notably, this is the same discriminative objective used for the original USE-QA model sampled negatives, and \u03c6(q, a) is the dot product of question and answer representations. We fine-tune USE-QA models on in-domain data for 10 epochs using batch size 64, and SGD with learning rate decaying exponentially from 0.01 to 0.001.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "USE-QA", "sec_num": null }, { "text": "BERT Our BERT dual encoder is fine-tuned for retrieval with the same discriminative objective used for the USE-QA models. 17 We fine-tune for 10 epochs using batch size 128 and the default AdamW optimizer with learning rate 0.0001. 18 Table 5 shows baseline model performance on the MultiReQA evaluation suite for both precision at 1 (P@1) and Mean Reciprocal Rank (MRR). The highest score for each task is bolded. For P@1, the first two rows shows the results for BM25 word and BM25 wpm . Notably, BM25 wpm performs better on 7 of 8 tasks, indicating that a careful selection of tokenization and normalization can improve the term-based model considerably. The advantage of BM25 wpm is particularly noticeable on datasets where the question is constructed without seeing the answer: SearchQA, TriviaQA, NQ, BioASQ and Relation Extraction. 
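For illustration, the discriminative ranking objective with in-batch negatives described in Section 5.1 can be sketched in PyTorch as a softmax over dot-product scores; this is a sketch under the stated assumptions (each question paired with its correct answer, other answers in the batch acting as sampled negatives), not the training code behind the reported numbers:

import torch
import torch.nn.functional as F

def in_batch_ranking_loss(q_emb, a_emb):
    # q_emb, a_emb: [batch, dim] question and answer encodings; phi(q, a)
    # is their dot product. Answer j is a negative for question i when i != j.
    logits = torch.matmul(q_emb, a_emb.t())                      # [batch, batch] scores
    targets = torch.arange(q_emb.size(0), device=q_emb.device)   # correct answers lie on the diagonal
    return F.cross_entropy(logits, targets)                      # mean of -log P(a_i | q_i)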
BM25 wpm also achieves the highest P@1 on 2 of 5 in-domain datasets and on all out-of-domain dataests.", "cite_spans": [ { "start": 122, "end": 124, "text": "17", "ref_id": null } ], "ref_spans": [ { "start": 235, "end": 242, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "USE-QA", "sec_num": null }, { "text": "The remaining rows show the results of the neural models: off-the-shelf USE-QA, fine-tuned versions of USE-QA and fine-tuned BERT dual encoders. We fine-tune on each in-domain dataset separately. The off-the-self USE-QA baseline is overall not competitive with BM25 wpm . However, when fine-tuned on in-domain data, USE-QA outperforms BM25 wpm on 3 of 5 in-domain datasets. Fine-tuned BERT often performs almost as well as fine-tuned USE-QA, suggesting there is only minimal benefit to QA specific pre-training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "The best neural models outperform BM25 wpm on Hotpot and NQ by +11.68% and +12.68% on P@1, respectively. This aligns with the statistics from Table 3 , where token overlap between question and answer/context is low for these sets. BM25 wpm outperforms neural models, on datasets with higher token overlap between question and answer/context (e.g., SearchQA, R.E. and SQuAD w.r.t. all neural Table 6 : P@1(%) and MRR(%) of USE-QA models fine-tuned on either one or all in-domain datasets, evaluated across all datasets. Joint: Fine-tune on all in-domain datasets together. Joint No TriviaQA : Same as \"Joint\", but removing TriviaQA from the fine-tuning data pool. models except USE-QA finetune ) and paradoxically the particularly difficult TriviaQA task. A very similar pattern of results is seen for MRR, with the exception that BERT finetune performs best on BioASQ and TextbookQA. We observe that the vocabulary of BioASQ and TextbookQA are different from the other datasets, including more specialized technical terms. Superior MRR performance could be due to better representations of novel words, computed from the composition of sub-word tokens. 19 However, it's not clear why BM25 wpm , also using sub-word tokenization, performs best on these datasets for P@1.", "cite_spans": [], "ref_spans": [ { "start": 142, "end": 149, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 391, "end": 398, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.2" }, { "text": "The previous section shows that the strongest baselines are the USE-QA and BERT models, finetuned on in-domain data, with USE-QA slightly outperforming BERT. In order to better understand generalization across QA tasks, we experiment with training and evaluating on different dataset pairings, focusing on the USE-QA model. Table 6 shows the performance of models trained on each individual dataset, as well as a model trained jointly on all available in-domain datasets.", "cite_spans": [], "ref_spans": [ { "start": 324, "end": 331, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Transfer Learning across Domains", "sec_num": "5.3" }, { "text": "Each column compares performance of different models on a specific test set. The best numbers for each test set are bolded. In general, models trained on an individual dataset achieve the best (or nearbest) performance on their associated evaluation set. TriviaQA is an exception, performing poorly on its own evaluation data and nearly all other datasets. 
This suggests training on the TriviaQA sentence-level retrieval task is more difficult than other datasets. Critically, TriviaQA requires reasoning across multiple sources of evidence (Joshi et al., 2017a) , with the meaning of complete sentences annotated with answer spans often not directly answering their associate questions.", "cite_spans": [ { "start": 541, "end": 562, "text": "(Joshi et al., 2017a)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Transfer Learning across Domains", "sec_num": "5.3" }, { "text": "Joint models use combined training sets. The model denoted as Joint trains on all the datasets. Joint No TriviaQA trains on all datasets except Triv-ialQA, motivated by the poor performance of models trained on only TriviaQA data. The model trained over all available data is competitive, but the performance on some datasets, e.g. NQ and SQuAD, is significantly lower than the individuallytrained models. By removing TriviaQA, the combined model gets close to the individual model performance on NQ and SQuAD, and achieves the best P@1 performance on TriviaQA and TextbookQA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Learning across Domains", "sec_num": "5.3" }, { "text": "Candidate answers may be not fully interpretable when taken out of their surrounding context (Ahmad et al., 2019) . In this section we investigate how model performance changes when removing context. We experiment with one BM25 model and one neural model, by picking the best performing models from previous experiments: BM25 wpm and USE-QA finetune . Recall, USE-QA finetune models are fine-tuned on each individual dataset.", "cite_spans": [ { "start": 93, "end": 113, "text": "(Ahmad et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Analysis 6.1 Does Context Help?", "sec_num": "6" }, { "text": "Figure 1: Performance change on P@1(%) for BM25 wpm and USE-QA finetune when candidate selection is blind to answer context. Figure 1 illustrates the change in performance when models are restricted to evaluating candidate answers without context. 20 For the USE-QA model, the performance drop by excluding answer context is less than 5% on all datasets. The drop in BM25 performance is larger, supporting the hypothesis that BM25's token overlap heuristic is effective over large spans of text, while the neural model obtains a \"deeper\" semantic understanding and thus extracts more signal out of a single sentence. 20 We report P@1 here, but observed similar trends in MRR.", "cite_spans": [ { "start": 248, "end": 250, "text": "20", "ref_id": null }, { "start": 617, "end": 619, "text": "20", "ref_id": null } ], "ref_spans": [ { "start": 125, "end": 133, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Analysis 6.1 Does Context Help?", "sec_num": "6" }, { "text": "In this section we examine some typical failure cases of the BM25 wpm and USE-QA finetune models. As a first observation, the two models retrieve very different answers. For example, we find that on Natural Questions, the two models' top-ranked answers disagree on 64.75% questions. 21 The other datasets have similar levels of disagreement. This suggests that the models have different strengths, and that a combination of these modeling techniques could leads to a significant improvement. Table 7 shows examples where the models retrieve different answers, and both are incorrect. 
In the first example, the BM25 wpm retrieves the correct context by matching the keyword \"Salton Sea\". But it fails to retrieve the correct sentence, as none of the keywords in the question appear in the target answer. On the other hand, the USE-QA finetune model understands the question is asking about some sort of animal living in the sea, but fails to connect to the Salton Sea specifically. Similarly, in the second example, both models retrieve sentences that match some keywords from the question. The BM25 wpm matches keywords \"Spencer\" and \"Maine\", but misses that the question is looking for an invention. The USE-QA finetune matches \"Spencer\", and is able to connect \"invent\" with \"discover\", but surfaces the wrong discovery. Overall, we observe term based models very often retrieve the correct context, but then fail to identify the correct sentence as the answer. Conversely, neural models seems to better understand the question, but sometimes fails to recognize important keywords.", "cite_spans": [ { "start": 283, "end": 285, "text": "21", "ref_id": null } ], "ref_spans": [ { "start": 492, "end": 499, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.2" }, { "text": "Open domain QA involves finding answers to questions within large document collections (Voorhees and Tice, 2000) . The ground-truth answer for many evaluations is a span often containing a word or a short phrase (i.a., ; Chen et al. (2017) ; Rajpurkar et al. (2016b) ). Karpukhin et al. (2020) and Xiong et al. (2020) explored passage level retrieval for QA. Seo et al. (2018) constructs a phrase-indexed QA challenge benchmark retrieving phrases, allowing for a direct F 1 and exact-match evaluation on SQuAD. (Seo et al., 2019) demonstrates phrase-indexed QA systems can be built using a combination of dense (neural) and sparse (term-frequency based)", "cite_spans": [ { "start": 87, "end": 112, "text": "(Voorhees and Tice, 2000)", "ref_id": "BIBREF34" }, { "start": 221, "end": 239, "text": "Chen et al. (2017)", "ref_id": "BIBREF3" }, { "start": 242, "end": 266, "text": "Rajpurkar et al. (2016b)", "ref_id": "BIBREF24" }, { "start": 270, "end": 293, "text": "Karpukhin et al. (2020)", "ref_id": null }, { "start": 359, "end": 376, "text": "Seo et al. (2018)", "ref_id": "BIBREF28" }, { "start": 511, "end": 529, "text": "(Seo et al., 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Example 1 (from NQ): what kind of fish live in the salton sea Correct Answer: [...] Due to the high salinity , very few fish species can tolerate living in the Salton Sea . Introduced tilapia are the main fish that can tolerate the high salinity levels and pollution . Other freshwater fish species live in the rivers and canals that feed the Salton Sea , including threadfin shad . [...] USE-QA finetune : [...] It may also drift in to the south -western part of the Baltic Sea ( where it can not breed due to the low salinity ) . Similar jellyfish -which may be the same species -are known to inhabit seas near Australia and New Zealand .", "cite_spans": [ { "start": 383, "end": 388, "text": "[...]", "ref_id": null }, { "start": 407, "end": 412, "text": "[...]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "The largest recorded specimen found washed up on the shore of Massachusetts Bay in 1870 . [...] BM25wpm: [...] 
Introduced tilapia are the main fish that can tolerate the high salinity levels and pollution . Other freshwater fish species live in the rivers and canals that feed the Salton Sea , including threadfin shad , carp, red shiner , channel catfish , white catfish , largemouth bass , mosquitofish , sailfin molly , and the vulnerable desert pupfish . [...] Example 2 (from TriviaQA): What was invented in the 1940s by Percy Spencer, an American self-taught engineer from Howland, Maine, who was building magnetrons for radar sets?", "cite_spans": [ { "start": 90, "end": 95, "text": "[...]", "ref_id": null }, { "start": 105, "end": 110, "text": "[...]", "ref_id": null }, { "start": 459, "end": 464, "text": "[...]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Correct Answer: [...] After experimenting, he realized that microwaves would cook foods quickly -even faster than conventional ovens that cook with heat. The Raytheon Corporation produced the first commercial microwave oven in 1954; it was called the 1161 Radarange. It was large, expensive, and had a power of 1600 watts. [...] USE-QA finetune : [...] Because of his accomplishments, Spencer was awarded the Distinguished Service Medal by the U.S. Navy and has a building named after him at Raytheon. Percy Spencer, while working for the Raytheon Company, discovered a more efficient way to manufacture magnetrons. In 1941, magnetrons were being produced at a rate of 17 per day. [...] BM25wpm: [...] By the end of 1971, the price of countertop units began to decrease and their capabilities were expanded. Spencer, born in Howland, Maine, was orphaned at a young age. Although he never graduated from grammar school, he became Senior Vice President and a member of the Board of Directors at Raytheon, receiving 150 patents during his career [...] Table 7 : Examples where both the BM25 wpm and USE-QA finetune models get wrong. Italics indicate the answer sentence. At most one sentence before/after the answer is shown, although the original context may be longer.", "cite_spans": [ { "start": 323, "end": 328, "text": "[...]", "ref_id": null }, { "start": 681, "end": 686, "text": "[...]", "ref_id": null } ], "ref_spans": [ { "start": 1049, "end": 1056, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "indices. Roy et al. (2020) investigates the retrieval of sentence-level answers from a language agnostic candidate pool. Chang et al. (2020) investigates the pre-training tasks for retrieving answers from a large scale candidate pool. Surdeanu et al. (2008) provides a dataset consisting of 142,627 question-answer pairs from Yahoo! Answers \"how to\" questions, with the goal of retrieving the correct answer to a given question from the set of all answers. WikiQA (Yang et al., 2015) is another sentence-level answer selection dataset consisting of 3,047 questions and 29,258 candidate answers, split into train, dev, and test. These datasets, however, are either limited to a specific type of question, or limited to a small set of candidates. we propose a more comprehensive eval covering multiple domains and include tasks at a much larger scale. Additionally, folding the various MRQA in-domain and out-of-domain datasets into a single eval allows us to directly investigate cross-domain generalization.", "cite_spans": [ { "start": 9, "end": 26, "text": "Roy et al. (2020)", "ref_id": "BIBREF27" }, { "start": 121, "end": 140, "text": "Chang et al. 
(2020)", "ref_id": "BIBREF2" }, { "start": 235, "end": 257, "text": "Surdeanu et al. (2008)", "ref_id": "BIBREF30" }, { "start": 464, "end": 483, "text": "(Yang et al., 2015)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "In this paper, we convert eight existing QA tasks from the MRQA shared task (Fisch et al., 2019b) into sentence-level retrieval tasks, by treating the sentence containing the ground-truth span as the target sentence-level answer. In additional to a new evaluation suite for sentence level retrieval, we provide strong baselines using unsupervised term-based information retrieval methods (BM25), and three neural models, off-the-self USE-QA, finetuned USE-QA, and BERT dual encoders.", "cite_spans": [ { "start": 76, "end": 97, "text": "(Fisch et al., 2019b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Overall, BM25's classical term-based retrieval approach is a surprisingly strong baseline, and one that could likely be improved further using additional information retrieval techniques such as normalization and synonym matching. The neural models, however, can be trained end-to-end without feature engineering, and perform particularly well on tasks with a low degree of question/answer token overlap, or in situations where context is limited. The neural model performance can also be improved through the addition of in-domain training data. However, we find that QA tasks are not all alike and having training data in the precise target domain is important.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "https://books.google.com/talktobooks/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Retrieval models are often measured by P@N (N=1,3,5,10). However, as our main concern is whether the question is correctly answered, we focus on P@1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We exclude NewsQA, RACE, DROP, and DuoRC, as the majority of their questions are underspecified when taken out of their original context, making them inappropriate for a large-scale retrieval evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The BERTBASE model uses 12 transformer layers with 12 attention heads, a hidden size of 768 and a filter size of 3072. The final embedding size is 768.9 We use in-batch negative sampling in the dual encoder training, which requires relatively large batch size. 
For more", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Since BERT is originally pre-trained on masked language modeling and next-sentence prediction, fine-tuning is necessary to use it to perform retrieval tasks.18 For both USE-QA and BERT, hyper-parameters are tuned on a validation set (10%) split out from the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "BM23wpm benefits from subword tokens but lacks the ability to understand how adjacent sub-word tokens compose a larger meaningful unit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that even if the models retrieve different answers, both answers could still be correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "ReQA: An evaluation for end-to-end answer retrieval models", "authors": [ { "first": "Amin", "middle": [], "last": "Ahmad", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Machine Reading for Question Answering", "volume": "", "issue": "", "pages": "137--146", "other_ids": { "DOI": [ "10.18653/v1/D19-5819" ] }, "num": null, "urls": [], "raw_text": "Amin Ahmad, Noah Constant, Yinfei Yang, and Daniel Cer. 2019. ReQA: An evaluation for end-to-end an- swer retrieval models. In Proceedings of the 2nd Workshop on Machine Reading for Question Answer- ing, pages 137-146, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Natural language processing with Python: analyzing text with the natural language toolkit", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Ewan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyz- ing text with the natural language toolkit. \" O'Reilly Media, Inc.\".", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Pre-training tasks for embedding-based large-scale retrieval", "authors": [ { "first": "Wei-Cheng", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Felix", "middle": [ "X" ], "last": "Yu", "suffix": "" }, { "first": "Yin-Wen", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sanjiv", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yim- ing Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. 
In International Conference on Learning Representa- tions.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Reading Wikipedia to answer opendomain questions", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1870--1879", "other_ids": { "DOI": [ "10.18653/v1/P17-1171" ] }, "num": null, "urls": [], "raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870- 1879, Vancouver, Canada. Association for Computa- tional Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Using odp metadata to personalize search", "authors": [ { "first": "Wolfgang", "middle": [], "last": "Paul Alexandru Chirita", "suffix": "" }, { "first": "Raluca", "middle": [], "last": "Nejdl", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Paiu", "suffix": "" }, { "first": "", "middle": [], "last": "Kohlsch\u00fctter", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "178--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Alexandru Chirita, Wolfgang Nejdl, Raluca Paiu, and Christian Kohlsch\u00fctter. 2005. Using odp meta- data to personalize search. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 178-185.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Searchqa: A new q&a dataset augmented with context from a search engine", "authors": [ { "first": "Matthew", "middle": [], "last": "Dunn", "suffix": "" }, { "first": "Levent", "middle": [], "last": "Sagun", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "V", "middle": [ "Ugur" ], "last": "G\u00fcney", "suffix": "" }, { "first": "Volkan", "middle": [], "last": "Cirik", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur G\u00fcney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with con- text from a search engine. CoRR, abs/1704.05179.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Languageagnostic bert sentence embedding", "authors": [ { "first": "Fangxiaoyu", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Naveen", "middle": [], "last": "Arivazhagan", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language- agnostic bert sentence embedding.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "MRQA 2019 shared task: Evaluating generalization in reading comprehension", "authors": [ { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Machine Reading for Question Answering", "volume": "", "issue": "", "pages": "1--13", "other_ids": { "DOI": [ "10.18653/v1/D19-5801" ] }, "num": null, "urls": [], "raw_text": "Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eu- nsol Choi, and Danqi Chen. 2019a. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of the 2nd Work- shop on Machine Reading for Question Answering, pages 1-13, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "MRQA 2019 shared task: Evaluating generalization in reading comprehension", "authors": [ { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eu- nsol Choi, and Danqi Chen. 2019b. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Work- shop at EMNLP.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "End-to-end retrieval in continuous space", "authors": [ { "first": "Daniel", "middle": [], "last": "Gillick", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Presta", "suffix": "" }, { "first": "Gaurav Singh", "middle": [], "last": "Tomar", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.08008" ] }, "num": null, "urls": [], "raw_text": "Daniel Gillick, Alessandro Presta, and Gaurav Singh Tomar. 2018. End-to-end retrieval in continuous space. arXiv preprint arXiv:1811.08008.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Effective parallel corpus mining using bilingual sentence embeddings", "authors": [ { "first": "Mandy", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Qinlan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Heming", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Gustavo", "middle": [ "Hernandez" ], "last": "Abrego", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Stevens", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Yun-Hsuan", "middle": [], "last": "Sung", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Strope", "suffix": "" }, { "first": "Ray", "middle": [], "last": "Kurzweil", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "165--176", "other_ids": { "DOI": [ "10.18653/v1/W18-6317" ] }, "num": null, "urls": [], "raw_text": "Mandy Guo, Qinlan Shen, Yinfei Yang, Heming Ge, Daniel Cer, Gustavo Hernandez Abrego, Keith Stevens, Noah Constant, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Effective parallel corpus mining using bilingual sentence embeddings. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 165-176, Bel- gium, Brussels. Association for Computational Lin- guistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Challenges in enterprise search", "authors": [ { "first": "David", "middle": [], "last": "Hawking", "suffix": "" } ], "year": 2004, "venue": "ADC", "volume": "4", "issue": "", "pages": "15--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Hawking. 
2004. Challenges in enterprise search. In ADC, volume 4, pages 15-24. Citeseer.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Deep unordered composition rivals syntactic methods for text classification", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Manjunatha", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1681--1691", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered compo- sition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1681-1691, Beijing, China. Association for Compu- tational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1601--1611", "other_ids": { "DOI": [ "10.18653/v1/P17-1147" ] }, "num": null, "urls": [], "raw_text": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017a. TriviaQA: A large scale dis- tantly supervised challenge dataset for reading com- prehension. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Van- couver, Canada. Association for Computational Lin- guistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017b. Triviaqa: A large scale dis- tantly supervised challenge dataset for reading com- prehension. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Are you smarter than a sixth grader? 
textbook question answering for multimodal machine comprehension", "authors": [ { "first": "Aniruddha", "middle": [], "last": "Kembhavi", "suffix": "" }, { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Dustin", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Jonghyun", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2017, "venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Ha- jishirzi. 2017. Are you smarter than a sixth grader? textbook question answering for multimodal ma- chine comprehension. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics", "authors": [ { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Jennimaria", "middle": [], "last": "Palomaki", "suffix": "" }, { "first": "Olivia", "middle": [], "last": "Redfield", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Danielle", "middle": [], "last": "Epstein", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Kelcey", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natu- ral questions: a benchmark for question answering research. Transactions of the Association of Compu- tational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Latent retrieval for weakly supervised open domain question answering", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6086--6096", "other_ids": { "DOI": [ "10.18653/v1/P19-1612" ] }, "num": null, "urls": [], "raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 6086-6096, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Sparse, dense, and attentional representations for text retrieval", "authors": [ { "first": "Yi", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2020. Sparse, dense, and atten- tional representations for text retrieval.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Zero-shot neural retrieval via domain-targeted synthetic query generation", "authors": [ { "first": "Ji", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Korotkov", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, and Ryan McDonald. 2020. Zero-shot neural retrieval via domain-targeted synthetic query generation.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Evaluating web-based question answering systems", "authors": [ { "first": "R", "middle": [], "last": "Dragomir", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Radev", "suffix": "" }, { "first": "Harris", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Weiguo", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Fan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Third International Conference on Language Resources and Evaluation (LREC'02)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragomir R. Radev, Hong Qi, Harris Wu, and Weiguo Fan. 2002. Evaluating web-based question answer- ing systems. In Proceedings of the Third Interna- tional Conference on Language Resources and Eval- uation (LREC'02), Las Palmas, Canary Islands - Spain. European Language Resources Association (ELRA).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": { "DOI": [ "10.18653/v1/D16-1264" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016a. SQuAD: 100,000+ questions for machine comprehension of text. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": { "DOI": [ "10.18653/v1/D16-1264" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016b. SQuAD: 100,000+ questions for machine comprehension of text. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Lin- guistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Software Framework for Topic Modelling with Large Corpora", "authors": [ { "first": "Petr", "middle": [], "last": "Radim\u0159eh\u016f\u0159ek", "suffix": "" }, { "first": "", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Cor- pora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45- 50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The probabilistic relevance framework: Bm25 and beyond", "authors": [ { "first": "Stephen", "middle": [], "last": "Robertson", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Zaragoza", "suffix": "" } ], "year": 2009, "venue": "Found. Trends Inf. Retr", "volume": "3", "issue": "4", "pages": "333--389", "other_ids": { "DOI": [ "10.1561/1500000019" ] }, "num": null, "urls": [], "raw_text": "Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and be- yond. Found. Trends Inf. Retr., 3(4):333-389.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Lareqa: Language-agnostic answer retrieval from a multilingual pool", "authors": [ { "first": "Uma", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Barua", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Phillips", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2020, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. 2020. Lareqa: Language-agnostic answer retrieval from a multilingual pool. 
In Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Phraseindexed question answering: A new challenge for scalable document comprehension", "authors": [ { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "559--564", "other_ids": { "DOI": [ "10.18653/v1/D18-1052" ] }, "num": null, "urls": [], "raw_text": "Minjoon Seo, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2018. Phrase- indexed question answering: A new challenge for scalable document comprehension. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 559-564, Brus- sels, Belgium. Association for Computational Lin- guistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Real-time open-domain question answering with dense-sparse phrase index", "authors": [ { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4430--4441", "other_ids": { "DOI": [ "10.18653/v1/P19-1436" ] }, "num": null, "urls": [], "raw_text": "Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4430-4441, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Learning to rank answers on large online QA collections", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Massimiliano", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Zaragoza", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "719--727", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu, Massimiliano Ciaramita, and Hugo Zaragoza. 2008. Learning to rank answers on large online QA collections. In Proceedings of ACL-08: HLT, pages 719-727, Columbus, Ohio. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "\u00c9ric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. 
An overview of the bioasq large-scale biomedical semantic indexing and question answering competition", "authors": [ { "first": "George", "middle": [], "last": "Tsatsaronis", "suffix": "" }, { "first": "Georgios", "middle": [], "last": "Balikas", "suffix": "" }, { "first": "Prodromos", "middle": [], "last": "Malakasiotis", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Partalas", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Zschunke", "suffix": "" }, { "first": "Michael", "middle": [ "R" ], "last": "Alvers", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Weissenborn", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Krithara", "suffix": "" } ], "year": null, "venue": "Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Arti\u00e8res, Axel-Cyrille Ngonga Ngomo", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R. Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopou- los, Yannis Almirantis, John Pavlopoulos, Nico- las Baskiotis, Patrick Gallinari, Thierry Arti\u00e8res, Axel-Cyrille Ngonga Ngomo, Norman Heino,\u00c9ric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the bioasq large-scale biomedical semantic indexing and question answering competi- tion. In BMC Bioinformatics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 6000-6010. Curran Asso- ciates, Inc.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "The trec question answering track", "authors": [ { "first": "Ellen", "middle": [ "M" ], "last": "Voorhees", "suffix": "" } ], "year": 2001, "venue": "Nat. Lang. Eng", "volume": "7", "issue": "4", "pages": "361--378", "other_ids": { "DOI": [ "10.1017/S1351324901002789" ] }, "num": null, "urls": [], "raw_text": "Ellen M. Voorhees. 2001. The trec question answering track. Nat. Lang. 
Eng., 7(4):361-378.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Building a question answering test collection", "authors": [ { "first": "Ellen", "middle": [ "M" ], "last": "Voorhees", "suffix": "" }, { "first": "Dawn", "middle": [ "M" ], "last": "Tice", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '00", "volume": "", "issue": "", "pages": "200--207", "other_ids": { "DOI": [ "10.1145/345508.345577" ] }, "num": null, "urls": [], "raw_text": "Ellen M. Voorhees and Dawn M. Tice. 2000. Build- ing a question answering test collection. In Proceed- ings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, SIGIR '00, pages 200-207, New York, NY, USA. ACM.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Google's neural machine translation system", "authors": [ { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Le", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Gao", "suffix": "" }, { "first": "", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2016, "venue": "Bridging the gap between human and machine translation", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.08144" ] }, "num": null, "urls": [], "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between hu- man and machine translation. arXiv preprint arXiv:1609.08144.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Approximate nearest neighbor negative contrastive learning for dense text retrieval", "authors": [ { "first": "Lee", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Chenyan", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Ye", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kwok-Fung", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Jialin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Bennett", "suffix": "" }, { "first": "Junaid", "middle": [], "last": "Ahmed", "suffix": "" }, { "first": "Arnold", "middle": [], "last": "Overwijk", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. 
Approximate nearest neighbor neg- ative contrastive learning for dense text retrieval.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "WikiQA: A challenge dataset for open-domain question answering", "authors": [ { "first": "Yi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Meek", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2013--2018", "other_ids": { "DOI": [ "10.18653/v1/D15-1237" ] }, "num": null, "urls": [], "raw_text": "Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain ques- tion answering. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing, pages 2013-2018, Lisbon, Portugal. As- sociation for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Multilingual universal sentence encoder for semantic retrieval", "authors": [ { "first": "", "middle": [], "last": "Sung", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.04307" ] }, "num": null, "urls": [], "raw_text": "Sung, et al. 2019. Multilingual universal sen- tence encoder for semantic retrieval. arXiv preprint arXiv:1907.04307.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Ben- gio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "content": "", "html": null, "num": null, "text": "ReQA example drawn from SQuAD. The goal is to retrieve the answer sentence (bolded) from an open corpus based on the meaning of the sentence and the surrounding context." }, "TABREF1": { "type_str": "table", "content": "
", "html": null, "num": null, "text": "Dataset Question Answer SearchQA At age 33 in 1804, he started a new symphony, his 5th, with a Da-Da-Da-Duhg This is the first movement of Beethoven's 5th symphony. TriviaQA From the Greek for color, what element, with an atomic number of 24, uses the symbol Cr? Rubies and emeralds also owe their colors to chromium compounds. HotpotQA Lenny Young is a collaborator on the stop motion film released in what year?" }, "TABREF2": { "type_str": "table", "content": "", "html": null, "num": null, "text": "" }, "TABREF4": { "type_str": "table", "content": "
Dataset | Question | Answer | Context
Average Length (Tokens)
SearchQA | 17.25 | 31.51 | 55.50
TriviaQA | 15.56 | 33.88 | 747.75
HotpotQA | 18.52 | 28.31 | 91.57
SQuAD | 11.45 | 29.70 | 140.64
NQ | 9.24 | 107.10 | 220.02
BioASQ | 11.18 | 29.01 | 241.52
R.E. | 9.15 | 27.51 | 29.14
TextbookQA | 10.20 | 16.37 | 648.23
Question/Answer Token Overlap (%)
SearchQA | - | 37.83 | 55.23
TriviaQA | - | 25.53 | 74.23
HotpotQA | - | 29.08 | 49.16
SQuAD | - | 43.03 | 56.36
NQ | - | 23.50 | 36.87
BioASQ | - | 23.08 | 53.40
R.E. | - | 39.21 | 40.98
TextbookQA | - | 25.64 | 82.54
", "html": null, "num": null, "text": "Statistics for each constructed dataset: # of training pairs, # of questions, # of candidates, and average # of answers per question." }, "TABREF5": { "type_str": "table", "content": "", "html": null, "num": null, "text": "Average length (# of word tokens) and degree of question/answer token overlap of each constructed dataset." }, "TABREF7": { "type_str": "table", "content": "
Test columns: In-domain Datasets (SearchQA, TriviaQA, HotpotQA, NQ, SQuAD); Out-of-domain Datasets (BioASQ, R.E., TextbookQA)
Metric | Train \ Test | SearchQA | TriviaQA | HotpotQA | NQ | SQuAD | BioASQ | R.E. | TextbookQA
P@1 | SearchQA | 31.45 | 35.48 | 16.04 | 24.69 | 46.60 | 6.52 | 60.03 | 6.66
P@1 | TriviaQA | 28.44 | 32.58 | 14.91 | 22.58 | 38.87 | 4.45 | 60.84 | 4.06
P@1 | HotpotQA | 30.79 | 32.70 | 31.71 | 26.45 | 56.17 | 5.65 | 57.21 | 6.52
P@1 | NQ | 28.80 | 31.77 | 17.64 | 38.00 | 52.23 | 6.52 | 55.48 | 7.66
P@1 | SQuAD | 31.44 | 35.21 | 20.25 | 28.32 | 66.83 | 7.65 | 63.73 | 8.32
P@1 | Joint | 32.24 | 37.40 | 26.54 | 36.35 | 60.81 | 7.58 | 62.71 | 7.52
P@1 | Joint (no TriviaQA) | 31.92 | 37.71 | 29.68 | 36.23 | 64.00 | 6.78 | 61.69 | 8.72
MRR | SearchQA | 50.70 | 47.88 | 25.88 | 36.31 | 57.83 | 13.34 | 75.51 | 15.19
MRR | TriviaQA | 44.57 | 42.39 | 23.40 | 32.77 | 47.50 | 9.26 | 75.88 | 10.49
MRR | HotpotQA | 47.17 | 44.41 | 43.77 | 36.99 | 66.25 | 32.15 | 72.54 | 15.08
MRR | NQ | 45.08 | 44.39 | 26.57 | 52.27 | 62.88 | 13.77 | 70.07 | 17.71
MRR | SQuAD | 48.70 | 48.16 | 30.12 | 38.79 | 75.86 | 15.75 | 78.50 | 18.71
MRR | Joint | 51.04 | 50.88 | 38.95 | 50.11 | 71.02 | 14.86 | 78.05 | 16.61
MRR | Joint (no TriviaQA) | 50.80 | 50.77 | 41.62 | 49.93 | 73.71 | 14.69 | 77.04 | 18.64
", "html": null, "num": null, "text": "Precision at 1(P@1)(%) and Mean Reciprocal Rank (MRR)(%) on the constructed question answer retrieval datasets. USE-QA finetune and BERT finetune are fine-tuned on each in-domain dataset individually. The performance of fine-tuned models on out-of-domain datasets are the average score across all five fine-tuned models." } } } }